[SOURCE: https://en.wikipedia.org/wiki/Leonardo_Torres_Quevedo#Analogue_calculating_machines]
Leonardo Torres Quevedo (Spanish: [leoˈnaɾðo ˈtores keˈβeðo]; 28 December 1852 – 18 December 1936) was a Spanish civil engineer, mathematician and inventor, known for his numerous engineering innovations, including aerial trams, airships, catamarans, and remote control. He was also a pioneer in the fields of computing and robotics. Torres was a member of several scientific and cultural institutions and held important positions such as Seat N of the Real Academia Española (1920–1936) and the presidency of the Spanish Royal Academy of Sciences (1928–1934). In 1927 he became a foreign associate of the French Academy of Sciences. His first groundbreaking invention was a cable car system patented in 1887 for the safe transportation of people, work that culminated in 1916 with the opening of the Whirlpool Aero Car at Niagara Falls. In the 1890s, Torres focused his efforts on analog computation. He published Sur les machines algébriques (1895) and Machines à calculer (1901), technical studies that earned him recognition in France for his machines for finding the real and complex roots of polynomials. He made significant aeronautical contributions at the beginning of the 20th century as the inventor of the non-rigid Astra-Torres airships, whose trilobed structure helped the British and French forces counter Germany's submarine warfare during World War I. His work in dirigible engineering also made him a key figure in the development of radio control: between 1901 and 1905 he built the Telekino, with which he laid down the principles of modern wireless remote-control operation. 
From his Laboratory of Automation, created in 1907, Torres produced one of his greatest technological achievements, El Ajedrecista (The Chess Player) of 1912, an electromagnetic device capable of playing a limited form of chess. It demonstrated that machines could be programmed to follow specified rules (heuristics) and marked the beginning of research into artificial intelligence. He advanced beyond the work of Charles Babbage in his 1914 paper Essays on Automatics, where he speculated about thinking machines and included the design of a special-purpose electromechanical calculator, introducing concepts still relevant today such as floating-point arithmetic. British historian Brian Randell called it "a fascinating work which well repays reading even today". Subsequently, Torres demonstrated the feasibility of an electromechanical analytical engine by successfully producing a typewriter-controlled calculating machine in 1920. He conceived other original designs before his retirement in 1930, some of the most notable being naval architecture projects such as the Buque campamento (Camp-Vessel, 1913), a balloon carrier for transporting airships attached to a mooring mast of his creation, and the Binave (Twin Ship, 1916), a multihull steel vessel driven by two propellers powered by marine engines. In addition to his interests in engineering, Torres also stood out in the field of letters and was a prominent speaker and supporter of Esperanto. Early life and education Torres was born on 28 December 1852, on the Feast of the Holy Innocents, in Santa Cruz de Iguña, Cantabria, Spain. His father, Luis Torres Vildósola y Urquijo (1818–1891), was a civil engineer in Bilbao, where he worked as a railway engineer. His mother was Valentina Quevedo de la Maza (1825–1891). He had two siblings, Joaquina (b. 1851) and Luis (b. 1855). 
The family resided for the most part in Bilbao, although they also spent long periods in his mother's family home in Cantabria's mountain region. As a child, he spent considerable time apart from his parents due to their work travels. He was therefore cared for by the Barrenechea sisters, relatives on his father's side, who named him their heir, enabling his future independence. He attended high school in Bilbao and then spent two years (1868 and 1869) completing his studies at the college of the Christian Brothers in Paris, where he became acquainted with French culture, customs, and language, which in later years helped him in his scientific and technical relationships with French personalities and institutions. In 1870, his father was transferred, bringing his family to Madrid. The following year, Torres began his higher studies in the Official School of the Road Engineers' Corps. He temporarily suspended his studies in 1873 to volunteer along with his brother Luis for the defense of Bilbao, which had been surrounded by Carlist troops during the Third Carlist War. Once the siege of Bilbao was lifted in 1874, he returned to Madrid and completed his studies in 1876, graduating fourth in his class. Career Torres began work as a civil engineer, spending a few months on railway projects as his father had, but his curiosity and desire to learn led him to give up joining the Corps in order to dedicate himself, in his words, to "thinking about my things". As a young entrepreneur who had inherited a considerable family fortune, he set out on a long trip through Europe in 1877, visiting Italy, France and Switzerland to learn about the scientific and technical advances of the day, especially in the incipient field of electricity. Returning to Spain, he settled in Santander, where he continued his self-funded research. Torres' experimentation with cableways and cable cars began very early, during his residence in the town of his birth, Molledo. 
There, in 1885, he constructed his first cableway, spanning a depression some 40 metres (130 ft) deep. The cableway was about 200 metres (660 ft) across and transported a single person sitting in a chair hanging from a cable, with a second cable providing traction. The motive power for the human load was a pair of cows. Later, in 1887, he built a cableway over the Río León in Valle de Iguña, much bigger and motorized, but intended only for transporting materials. These experiments were the basis for his first patent application, filed in Spain on 17 September 1887: "Un sistema de camino funicular aéreo de alambres múltiples" ("A multi-wire suspended aerial system"), for a cable car offering a level of safety suitable for the transport of people, not only cargo. The patent was extended to other countries: the United States, Austria, Germany, France, the United Kingdom, and Italy. His cable car used a novel multi-cable support system, in which one end of each cable is anchored to fixed counterweights and the other (through a system of pulleys) to mobile counterweights. With this system the axial tension in the track (via) cables is constant and equal to the weight of the counterweight, regardless of the load in the shuttle. What varies with the load is the deflection of the track cables, which increases as the counterweight is raised. Thus, the safety coefficient of these cables is known exactly and is independent of the shuttle load. The resulting design is very strong and remains safe even if a support cable fails. In April 1889 Torres presented his cableway design in Switzerland, a country very interested in this means of transport because of its geography: an aerial funicular between Pilatus-Kulm and Pilatus-Klimsenhorn (Mount Pilatus), with a length of 2 km and a gradient of 300 m. In 1890 he traveled there to persuade the various authorities to build it. 
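The constant-tension principle behind Torres' counterweight anchoring can be illustrated numerically. In the sketch below the numbers are illustrative, and the midspan-sag expression is the standard taut-cable point-load approximation, not a formula taken from Torres' patent:

```python
# Torres' counterweight anchoring keeps the track-cable tension equal
# to the counterweight, regardless of the shuttle load; only the cable
# sag changes with load. The safety coefficient is therefore known
# exactly in advance.

def cable_tension(counterweight_t):
    # Constant by design: the pulley-and-counterweight anchoring fixes
    # the axial tension at the counterweight's weight (in tonnes-force).
    return counterweight_t

def safety_factor(breaking_strength_t, counterweight_t):
    # Ratio of cable breaking strength to working tension; independent
    # of how heavily the shuttle is loaded.
    return breaking_strength_t / cable_tension(counterweight_t)

def midspan_sag(load_t, span_m, tension_t):
    # Taut-cable approximation for a point load at midspan:
    # sag = P * L / (4 * T). The sag grows with load; the tension does not.
    return load_t * span_m / (4 * tension_t)
```

For example, a cable with a breaking strength of 41.4 tonnes held at a constant 9-tonne tension has a safety factor of 4.6, whatever the shuttle carries.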
He failed to convince the Swiss, who placed no trust in the work of a Spanish engineer; the newspapers Nebelspalter and Eulm Spiegel even published satirical articles and drawings about the project. This disappointment, known as the "Swiss failure", led him to focus on other fields for several years. On 30 September 1907, Torres put into operation a pioneering cableway suitable for public transportation, the Mount Ulia aerial ropeway in San Sebastián. The journey, 280 meters long with a drop of 28 meters, lasted just over three minutes, and the gondola could carry up to 18 people on each trip. The execution of the project was the responsibility of the Society of Engineering Studies and Works of Bilbao, established in 1906 by Valentín Gorbeña Ayarragaray, one of his closest friends, with the sole purpose of developing and marketing Torres' patents. The Ulia cable car transported passengers until its closure in 1917. The success of this type of cable car gave him the opportunity to design, based on J. Enoch Thompson's idea, the Spanish Aerocar at Niagara Falls in Canada. This 550-metre aerial cable car spans the whirlpool in the Niagara Gorge on the Canadian side. It travels at about 7.2 kilometres per hour (4.5 mph). The load per cable via is 9 tonnes (9.9 short tons), with a safety coefficient of 4.6 for the cables, and it carries 35 standing passengers over a one-kilometre round trip. It was constructed between 1914 and 1916. For its construction and assembly, the Niagara Spanish Aerocar Company Limited was set up from the Society of Engineering Studies and Works, with a capital of $110,000 (roughly $3.5 million in 2025) and a planned concession of 20 years. The construction was directed by Torres' son, Gonzalo Torres Polanco. It completed its first tests on 15 February 1916 and was officially inaugurated on 8 August, opening to the public the following day. 
The cableway, with small modifications, runs to this day with no accidents worthy of mention, and remains a popular tourist and cinematic attraction. The Aero Car is believed to be the sole remaining example of Torres' design for an aerial ferry. Although constructed and operated in Canada, it was a Spanish project from beginning to end: designed by a Spaniard and constructed by a Spanish company with Spanish capital. In 1991, the Niagara Parks Commission received the Leonardo Torres Quevedo Award on the 75th anniversary of the Aero Car, in recognition of its commitment to preserving Torres' design. A plaque mounted on a boulder in front of the Aero Car Gift Shop recalls this fact: International Historic Civil Engineering Site. The Niagara Spanish Aerocar. A tribute to the distinguished Spanish Engineer who designed the Niagara Spanish Aerocar. This was only one of his many outstanding contributions to the engineering profession. Engineer Leonardo Torres Quevedo (1852–1936). Constructed 1914–1916. CSCE. The Canadian Society for Civil Engineering. 2010. Asociación de Ingenieros de Caminos, Canales y Puertos de España. Spanish aerial ferry of the Niagara. Analogue calculating machines Since the middle of the 19th century, various mechanical analog computing devices had been known, including integrators and multipliers. Torres' work in this area falls within that tradition, and began in 1893 with the presentation of the "Memoria sobre las máquinas algébricas" ("Report on algebraic machines") at the Spanish Royal Academy of Sciences in Madrid. The paper was reviewed in an 1894 report by Eduardo Saavedra and published in the Revista de Obras Públicas. Saavedra, who considered Torres' calculating machine "an extraordinary event in the course of Spanish scientific production", recommended that the final construction of the device be financed. 
In 1895 Torres presented "Sur les machines algébriques", accompanied by a demonstration model, at the Bordeaux Congress of the Association pour l'Avancement des Sciences, and in Paris in the Comptes rendus de l'Académie des Sciences. Later, in 1900, he presented a more detailed work, "Machines à calculer" ("Calculating machines"), to the Paris Academy of Sciences. The commission formed by Marcel Deprez, Henri Poincaré and Paul Appell reported favorably and asked the academy to publish it: "In Mécanique analytique, Joseph-Louis Lagrange considers material systems whose connections are expressed by relationships between the coordinates or parameters used to define the position of the system. We can, and this is what Mr. Torres does, take the opposite point of view." They concluded: "In short, Mr. Torres has given a theoretical, general and complete solution to the problem of the construction of algebraic and transcendental relations by means of machines; moreover, he has effectively constructed machines that are easy to use for the solution of certain types of algebraic equations that are frequently encountered in applications." These studies explored the mathematical and physical parallels behind analog computation using continuous quantities, and how such relationships, expressed through mathematical formulas, can be implemented mechanically. The study covered complex variables and used the logarithmic scale. From a practical standpoint, it showed that mechanisms such as turning disks could be used endlessly with precision, so that changes in the variables were unlimited in both directions. Torres developed a whole series of analogue mechanical calculating machines built from elements he called arithmophores, each consisting of a moving part and an index that made it possible to read off the quantity according to the position shown. The moving part was a graduated disk or drum turning on an axis. 
The angular movements were proportional to the logarithms of the magnitudes to be represented. Between 1910 and 1920, using a number of such elements, Torres built a machine able to compute the roots of arbitrary polynomials of degree eight, including complex roots, with a precision down to thousandths. The machine evaluated the expression

$$\alpha = \frac{A_1 X^a + A_2 X^b + A_3 X^c + A_4 X^d + A_5 X^e}{A_6 X^f + A_7 X^g + A_8 X^h}$$

where X is the variable and A1 ... A8 are the coefficients of the terms. Setting α = 1 yields

$$A_1 X^a + A_2 X^b + A_3 X^c + A_4 X^d + A_5 X^e - A_6 X^f - A_7 X^g - A_8 X^h = 0,$$

from which the roots of the algebraic equation can be obtained. By representing each term on a logarithmic scale, every term reduces to sums and products of the form log(A1) + a × log(X), which can handle a very wide range of values with a relative error that stays constant regardless of the size of the value. However, to add the terms it is necessary to obtain log(u + v) accurately from the computed values log(u) and log(v), still on the logarithmic scale. For this calculation Torres invented a unique mechanism called the "endless spindle" ("fusée sans fin"), a complex differential gear using a helical gear shaped like a wine bottle, which mechanically realized the relation y = log(10^x + 1). 
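The endless-spindle relation y = log(10^x + 1) is exactly what is needed to add two quantities held on logarithmic scales. A minimal sketch of the idea, in modern floating-point rather than gears:

```python
import math

def spindle(x):
    # The relation Torres realized mechanically with the "endless
    # spindle": y = log10(10**x + 1).
    return math.log10(10**x + 1)

def log_add(log_u, log_v):
    # Given log(u) and log(v), return log(u + v) without ever forming
    # u or v directly, just as the machine had to:
    #   log(u + v) = log(v) + log(10**V + 1), with V = log(u) - log(v).
    lo, hi = sorted((log_u, log_v))
    return lo + spindle(hi - lo)
```

For instance, log_add(log10(3), log10(5)) reproduces log10(8) to machine precision, the same technique that underlies the modern logarithmic number system.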
Putting log(u) − log(v) = log(u/v) = V, so that u/v = 10^V, the following identity is used to calculate log(u + v):

$$\log(u+v) = \log\big(v(u/v+1)\big) = \log(v) + \log(u/v+1) = \log(v) + \log(10^V + 1),$$

the same technique that underlies the modern electronic logarithmic number system. Around 1900 Torres devised another, smaller machine, using gears and linkages, to obtain the complex roots of the quadratic equation X² − pX + q = 0. Today all these machines are kept in the Torres Quevedo Museum at the School of Civil Engineering of the Technical University of Madrid. In 1902, Torres started the project of a new type of dirigible that would solve the serious problem of suspending the gondola. He applied for a patent in France, wrote "Note sur le calcul d'un ballon dirigeable à quille et suspentes intérieures" ("Note on the calculus of a dirigible balloon with interior suspension and keel"), and presented both to the Academies of Science in Madrid and Paris. By the end of that year the report to the Paris Academy of Sciences was included in the French journal L'Aérophile, and an English-language summary was published in the British The Aeronautical Journal. In 1904, Torres was appointed director of the Centre for Aeronautical Research in Madrid, a civil institution created by the government of Spain "for the technical and experimental study of the air navigation problem and the management of remote engine maneuvers." From March 1905, with Army Engineer Captain Alfredo Kindelán as technical assistant, he supervised the construction of the first Spanish dirigible at the Army Military Aerostatics Service in Guadalajara, which was completed in June 1908. The new airship, named Torres Quevedo in his honour, made successful test flights with passengers in the gondola. 
Despite this success, in 1907 and 1909 he applied for improved patents for his airship in France. He moved all the material to a rented hangar in Sartrouville, near Paris, beginning a collaboration with the Société Astra, an aeronautical company belonging to the conglomerate of the French petroleum businessman Henri Deutsch de la Meurthe and directed by Édouard Surcouf, who had been familiar with Torres' work since 1901. Astra bought the patent, with a cession of rights for all countries except Spain, where the system remained free to use. In 1911, construction of the dirigibles known as the Astra-Torres airships began, with Torres receiving royalties of 3 francs for every cubic metre of each airship sold. In 1910, Torres also drew up designs for a 'docking station' to solve the slew of problems airship engineers faced in docking dirigibles. He proposed attaching an airship's nose to a mooring mast and allowing the airship to weathervane with changes of wind direction. A metal column erected on the ground, to the top of which the bow or stem would be directly attached by a cable, would allow a dirigible to be moored at any time, in the open, regardless of wind speeds. Torres' design also called for improving the accessibility of temporary landing sites, where airships were to be moored for the disembarkation of passengers. The patent was presented in February 1911 in Belgium, and in 1912 in France and the United Kingdom, where he named it "Improvements in Mooring Arrangements for Airships". Mooring masts following his design became widely used, as they allowed unprecedented accessibility to dirigibles and eliminated the manhandling required to place an airship in its hangar. In Issy-les-Moulineaux (south-west of Paris) in February 1911, the trials of the 'Astra-Torres no. 1' were successful, with a volume of 1,590 m³ and a speed of up to 53 km/h. 
Other Astra-Torres dirigibles followed, including the Astra-Torres XIV (HMA No. 3 in the Royal Naval Air Service), which broke the then world speed record for airships in September 1913 by reaching 83.2 km/h, and the Pilâtre de Rozier (Astra-Torres XV), named after the aeronaut Jean-François Pilâtre de Rozier, which at 24,300 m³ was the same size as the German Zeppelins and could reach speeds of around 85 km/h. The distinctive trilobed design was also employed in the United Kingdom in the Coastal, C Star, and North Sea airships. The Entente powers used these dirigibles during the First World War (1914–1918) for diverse tasks, principally escorting convoys, continuously surveilling coasts and searching, from bases in Marseille, Tunisia and Algeria, for German submarines in the Bay of Biscay, the English Channel and the Mediterranean Sea. In 1919, Torres designed, based on a proposal from engineer Emilio Herrera Linares, a transatlantic dirigible named Hispania, aiming to claim the honour of the first transatlantic flight for Spain. Owing to financial problems, the project was never carried out. The success of the trilobed blimps during the war even drew the attention of the Imperial Japanese Navy, which in 1922 acquired the Nieuport AT-2, almost 263 ft long, with a maximum diameter of 54 ft and a hydrogen capacity of 363,950 cu ft. This type of non-rigid airship continued to be manufactured in various countries during the postwar era, especially by the French Zodiac Company, whose versions influenced the design of most later dirigibles. Torres was also a pioneer in remote control technology. He began to develop a radio control system around 1901 or 1902, as a way of testing his airships without risking human lives. 
Between 1902 and 1903, he applied for patents in France, Spain, and Great Britain under the name "Système dit Télékine pour commander à distance un mouvement mécanique" ("Means or method for directing mechanical movements at or from a distance"). On 3 August 1903, he presented the Telekino at the French Academy of Sciences, together with a detailed memoir, and gave a practical demonstration to its members. For the construction of this first model, Torres received help from Gabriel Koenigs, director of the Mechanics Laboratory of the Sorbonne, and from Octave Rochefort, who collaborated by providing wireless telegraphy equipment. In 1904 Torres chose to conduct the initial Telekino tests in the Beti Jai fronton in Madrid, which became the temporary headquarters of the Centre for Aeronautical Research, first on an electric three-wheeled land vehicle with an effective range of just 20 to 30 meters, which has been considered the first known example of a radio-controlled unmanned ground vehicle (UGV). In 1905, Torres tested a second model of the Telekino by remotely controlling the maneuvers of an electric boat in the pond of the Casa de Campo in Madrid, achieving distances of up to about 250 m, and later by steering a dinghy on the Bilbao Abra from the terrace of the Club Marítimo in the presence of the president of the Provincial Council and other authorities. Having witnessed the success of these tests, José Echegaray highlighted how "no one moves" the Telekino: "it moves automatically." It was an automaton of "a certain intelligence, not conscious, but disciplined"; "a material device, without intelligence, interpreting, as if it were intelligent, the instructions communicated to it in a succession of Hertzian waves." These feats were also echoed in the international press. 
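Echegaray's description of instructions "communicated to it in a succession of Hertzian waves" suggests one simple way to model such a command scheme: a selector that steps once per received pulse and acts on a pause. This is only an illustrative model, not Torres' actual circuit, and the action names are hypothetical placeholders:

```python
# Illustrative model of a Telekino-style command selector: each
# received wireless pulse advances a rotary selector one position,
# and a pause in the pulse train executes the selected action.
# The action list is a hypothetical example, not from Torres' patent.
ACTIONS = ["rudder left", "rudder centre", "rudder right",
           "engine slow", "engine full", "engine stop",
           "light on", "light off", "flag up", "flag down"]

class PulseSelector:
    def __init__(self, actions):
        self.actions = actions
        self.position = 0

    def pulse(self):
        # Advance one step per received pulse, wrapping around.
        self.position = (self.position + 1) % len(self.actions)

    def execute(self):
        # On a pause in the pulse train, perform the current action.
        return self.actions[self.position]
```

Under this model, three pulses followed by a pause would select the fourth action in the list; distinct pulse counts give access to each of the available commands independently.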
On 25 September 1906, in the presence of King Alfonso XIII and a great crowd, Torres successfully demonstrated the invention in the port of Bilbao, guiding the boat Vizcaya, with people on board, from the shore at a standoff range of 2 km. By applying the Telekino to electrically powered vessels, he was able to select different positions for the steering engine and different velocities for the propelling engine independently. He could also act on other mechanisms at the same time, such as a light (switching it on or off) or a flag (raising or lowering it). In all, Torres could perform up to 19 different actions with his prototypes. The positive results of these experiments encouraged Torres to apply to the Spanish government for the financial aid required to use his Telekino to steer submarine torpedoes, a technological field then just starting out. His application was denied, which led him to abandon further improvement of the Telekino. On 15 March 2007, the Institute of Electrical and Electronics Engineers (IEEE) dedicated a Milestone in Electrical Engineering and Computing to the Telekino, based on the research developed at the Technical University of Madrid by Prof. Antonio Pérez Yuste, who was the driving force behind the Milestone nomination. In 1907, Torres introduced in Vienna a formal language for the description of mechanical drawings, and thus of mechanical devices. He had previously published "Sobre un sistema de notaciones y símbolos destinados a facilitar la descripción de las máquinas" ("On a system of notations and symbols intended to facilitate the description of machines") in the Revista de Obras Públicas. According to the Austrian computer scientist Heinz Zemanek, this was equivalent to a programming language for the numerical control of machine tools. He defined a table of symbols and a collection of rules and, as usual in his works, applied them to an example. 
This symbolic language reveals Torres' main capacities: his ability to identify a problem, in this case one social in origin with technical consequences, and his capacity for creation, for invention, in giving it a rational, properly technical response. In the words of Torres: "Charles Babbage and Franz Reuleaux – and I suppose others as well, although I have no news of them – have tried, without any success, to remedy this inconvenience; but the fact that these eminent authors have failed should not be a sufficient reason to abandon such an important effort". Babbage, Reuleaux and Torres failed: the world of machines continues without any symbolic language other than descriptive geometry. As a member of the steering committee of the Junta para Ampliación de Estudios (JAE), established in 1907 in Madrid to promote research and scientific education in Spain, Torres played a leading role in the creation of three key state agencies that became the models for the JAE's support of research, regardless of discipline: the Laboratory of Automation (1907), of which he was named director, devoted to the construction of instruments; the Laboratories Association (1910), a union of state laboratories and workshops; and the Institute of Science Materials (1911), responsible for budget allocation. The Laboratory of Automation produced the most varied instruments; it not only built Torres' own inventions but also provided services and support to universities and to researchers of the JAE. Torres, the physicist Blas Cabrera, and Juan Costa, the head of the workshop, jointly designed several scientific instruments (a Weiss-type electromagnet, an X-ray spectrometer, a mechanism for handling a Bunge scale by remote control, a reservoir of variable height with micrometer movements for magneto-chemical measurements, and so on). Ángel del Campo, head of the Spectroscopy Section of the Laboratory of Physical Research and Miguel A. 
Catalán's teacher, commissioned spectrographic equipment from Torres' workshop; Manuel Martínez Risco requested a Michelson-type variable-distance interferometer; Juan Negrín requested a stalagmometer; and Santiago Ramón y Cajal commissioned a microtome, a panmicrotome, and a projector for film screenings. The development of the Laboratory of Automation reached its peak with the reform of the Palace of the Arts and Industry, which came to house the School of Industrial Engineers, the JAE, and the National Museum of Natural Sciences, while also expanding the Laboratory itself. In 1939 the Laboratory of Automation gave rise to the Torres Quevedo Institute of the Spanish National Research Council (Consejo Superior de Investigaciones Científicas, CSIC). By the beginning of 1910, Torres had commenced work on a chess-playing automaton, which he dubbed El Ajedrecista (The Chess Player). As opposed to The Turk and Ajeeb, El Ajedrecista was an electromechanical machine with true integrated automation: it could automatically play a king-and-rook endgame against a lone king from any position, without any human intervention. The pieces had a metallic mesh at their base, which closed an electric circuit encoding their position on the board. When the black king was moved by hand, an algorithm calculated and performed the next best move for the white player. If the opposing player made an illegal move, the automaton signaled it by turning on a light; after three illegal moves, the automaton stopped playing. Because of the simplicity of the algorithm that calculates the moves, the automaton does not deliver checkmate in the minimum number of moves, nor always within the 50 moves allotted by the fifty-move rule. It did, however, checkmate the opponent every time. Claude Shannon noted in his work Programming a Computer for Playing Chess (1950) that Torres' machine was quite advanced for its period. 
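Torres' mating procedure was a small fixed rule set evaluated on the positions of the three pieces. The sketch below is a simplified, zone-style rule set in that spirit; it is not a reconstruction of his actual six subrules, and the coordinates and thresholds are illustrative:

```python
# Simplified rule-based mover for a king-and-rook vs. king endgame,
# in the spirit of (but not identical to) Torres' automaton. Squares
# are (file, rank) pairs, 0-7. White drives the black king upward by
# confining it behind the rook's rank.

def choose_move(wk, wr, bk):
    kf, kr = wk   # white king
    rf, rr = wr   # white rook
    bf, br = bk   # black king
    # Rule 1: if the black king is in the rook's file zone (and so
    # threatens it), shift the rook to the far side of the board.
    if abs(rf - bf) <= 1:
        new_rf = 0 if bf >= 4 else 7
        return ('R', (new_rf, rr))
    # Rule 2: if the kings are far apart vertically, advance the
    # white king one rank toward the black king.
    if abs(kr - br) > 2:
        step = 1 if br > kr else -1
        return ('K', (kf, kr + step))
    # Rule 3: otherwise push the rook one rank toward the black king,
    # shrinking the zone the black king can occupy.
    step = 1 if br > rr else -1
    return ('R', (rf, rr + step))
```

Like Torres' machine, this mover is purely reactive: each black move triggers one pass through a fixed rule list, with no search and no memory of previous positions.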
The device has been considered the first computer game in history. An example recorded in Portable Game Notation shows how White checkmates the black king following Torres' algorithm. The machine created great excitement when it made its public debut at the University of Paris in 1914. Its internal construction was published by Henri Vigneron in the French magazine La Nature. On 6 November 1915 Scientific American, in Supplement 2079, pp. 296–298, published an illustrated article entitled "Torres and His Remarkable Automatic Devices. He Would Substitute Machinery for the Human Mind", summarized as follows: "The inventor claims that the limits within which thought is really necessary need to be better defined, and that the automaton can do many things that are popularly classed with thought". In November 1922, about to turn 70, Torres finished the construction designs of a second chess player in which, under his direction, his son Gonzalo had introduced various improvements. The mechanical arm that moved the pieces was replaced by electromagnets located under the board, which slid the pieces from one square to another. This version included a gramophone with a voice recording that announced checkmate when the machine won the game. Torres first presented it in Paris in 1923. His son later exhibited the improved machine at several international meetings, introducing it to a wider audience at the 1951 Paris conference on computers and human thinking, where Norbert Wiener played against it on 12 or 13 January. El Ajedrecista also defeated Savielly Tartakower at the conference, making him the first grandmaster to lose to a machine. It was demonstrated again at the 1958 Brussels World's Fair. Heinz Zemanek, who played against the device, described it as "a historical witness of automaton artistry that was far ahead of its time. 
Torres created a perfect algorithm with six subrules, which he realized with the technological means of that time, essentially levers, gearwheels, and relays." It has been commonly assumed (see Metropolis and Worlton 1980) that Charles Babbage's work on a mechanical digital program-controlled computer, which he started in 1835 and pursued off and on until his death in 1871, had been completely forgotten and was only belatedly recognized as a forerunner to the modern digital computer. Ludgate, Torres y Quevedo, and Bush give the lie to this belief, and all made fascinating contributions that deserve to be better known. — Brian Randell, presentation at MIT (1980), printed in Annals of the History of Computing, IEEE (October 1982) On 19 November 1914, Torres published "Ensayos sobre Automática. Su definición. Extensión teórica de sus aplicaciones" ("Essays on Automatics. Its Definition. Theoretical Extent of Its Applications") in the Revista de Obras Públicas. It was translated into French as "Essais sur l'Automatique" in the Revue Générale des Sciences Pures et Appliquées, 1915, vol. 2, pp. 601–611. This paper is Torres' major written work on the subject he called Automatics, concerning "another type of automaton of great interest: those that imitate, not the simple gestures, but the thoughtful actions of a man, and which can sometimes replace him". He drew a distinction between the simpler sort of automaton, which has invariable mechanical relationships, and the more complicated, interesting kind, whose relationships between operating parts alter "suddenly when necessary circumstances arise". Such an automaton must have sense organs, that is, "thermometers, magnetic compasses, dynamometers, manometers", and limbs, as Torres called them: mechanisms capable of executing the instructions that would come from the sense organs. The automaton postulated by Torres would be able to make decisions so long as "the rules the automaton must follow are known precisely". 
The paper provides the main link between Torres and Babbage. It gives a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. Torres described the Analytical Engine as exemplifying his theories about the potential power of machines, and took the problem of designing such an engine as a challenge to his skills as an inventor of electromechanical devices. The paper contains a complete design (albeit one that Torres regarded as theoretical rather than practical) for a machine capable of calculating completely automatically the value of the formula a^x (y − z)^2, for a sequence of sets of values of the variables involved. It demonstrates cunning electromechanical gadgets for storing decimal digits, for performing arithmetic operations using built-in function tables, and for comparing the values of two quantities. The whole machine was to be controlled from a read-only program (complete with provisions for conditional branching), represented by a pattern of conducting areas mounted around the surface of a rotating cylinder. It also introduced the idea of floating-point arithmetic, which historian Randell says was described "almost casually", apparently without recognizing the significance of the discovery. Torres proposed a format that showed he understood the need for a fixed-size significand, as is presently used for floating-point data. He did it in the following way: "Very large numbers are as embarrassing in mechanical calculations as in usual calculations (Babbage planned 50 wheels to represent each variable, and even then they would not be sufficient if one does not have recourse to the means that I will indicate later, or to another analogue). 
In these, they are usually avoided by representing each quantity by a small number of significant figures (six to eight at the most, except in exceptional cases) and by indicating by a comma or zeros, if necessary, the order of magnitude of the units represented by each digit. Sometimes also, so as not to have to write a lot of zeros, we write the quantities in the form n × 10^m. We could greatly simplify this writing by arbitrarily establishing these three simple rules: 1. n will always have the same number of digits (six for example). 2. The first digit of n will be of the order of tenths, the second of hundredths, etc. 3. One will write each quantity in the form: n; m. Thus, instead of 2435.27 and 0.00000341682, they will be, respectively, 243527; 4 and 341682; −5. I have not indicated a limit for the value of the exponent, but it is obvious that, in all the usual calculations, it will be less than one hundred, so that, in this system, one will write all the quantities which intervene in calculations with eight or ten digits only." The paper ends with a comparison of the advantages of electromechanical techniques over the purely mechanical devices that were all that were available to Babbage. It establishes that Torres would have been quite capable of building a general-purpose electromechanical computer more than 20 years ahead of its time, had the practical need, motivation, and financing been present. "The achievements of George Stibitz, Howard Aiken and IBM, and Konrad Zuse crown the transitory but capital period of relays and theoreticians. This stage of the march towards automatic calculation was built on a summary and proven technology, that of electromagnetic relays. The very modesty of this technological level contributes to giving a brilliant relief to the quality of the intellectual contributions of Torres y Quevedo, Alan Turing, and Claude Shannon." 
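Torres' three rules amount to a normalized floating-point format with a fixed-size significand. The minimal Python sketch below is illustrative (the function name and the renormalization step for rounding overflow are additions, not part of Torres' text); it reproduces his two worked examples:

```python
import math

def torres_format(x, digits=6):
    """Represent x as (n, m) with x ~= 0.n * 10**m, per Torres' rules:
    n has a fixed number of digits, its first digit in the tenths place."""
    m = math.floor(math.log10(abs(x))) + 1   # exponent so 0.1 <= |x| / 10**m < 1
    n = round(abs(x) / 10**m * 10**digits)   # fixed-size integer significand
    if n == 10**digits:                      # rounding overflowed (e.g. 0.9999997)
        n //= 10
        m += 1
    return n, m

# Torres' own examples: 2435.27 -> "243527; 4", 0.00000341682 -> "341682; -5"
print(torres_format(2435.27))        # (243527, 4)
print(torres_format(0.00000341682))  # (341682, -5)
```

The same normalization idea (fixed-width significand plus a separate exponent) is what modern floating-point hardware uses, in binary rather than decimal.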
— Robert Ligonnière, Préhistoire et Histoire des ordinateurs (1987) Torres went on to prove his theories with a series of working prototypes. He demonstrated twice, in 1914 and in 1920, that all of the cogwheel mechanisms of a calculating machine like that of Babbage could be implemented using electromechanical parts. His 1914 analytical machine used a small memory built with electromagnets, capable of evaluating p × q − b. In 1920, during a conference in Paris commemorating the centenary of the invention of the mechanical arithmometer, Torres surprised attendees with the demonstration of the "Arithmomètre Electroméchanique" (Electromechanical Arithmometer). It consisted of an arithmetic unit connected to a (possibly remote) typewriter, on which commands could be typed (e.g. "532 × 257" followed by "=") and the results printed automatically. This calculator was not programmable, but was able to print the numerical value of the answer. From the user interface point of view, this machine can be regarded as the predecessor of current computers that use a keyboard as an input interface. In terms of usage, it was also assumed that calculations could be performed remotely by extending electric wires, and it is considered a rudimentary version of today's online systems that use communication lines. Torres had no thought of making such a machine commercially, viewing it instead as a means of demonstrating his ideas and techniques. Furthermore, in his paper about this device, he pointed out the need for various automatic machines to represent continuous numerical values as finite, discrete values for processing and evaluation, which corresponds to current digital data. From 26 April to 23 September 1990, an exposition called De la Machine à Calculer de Pascal à l'Ordinateur. 
350 ans d'Informatique was held at the Musée des Arts et Métiers in Paris, where Torres' invention was recognized as one of the first digital calculation systems: "In 1920, the Spaniard Leonardo Torres Quevedo built a fully automatic electromagnetic arithmometer. To do this, he used relay technology, developed for the needs of the telephone." In the days when the outbreak of the Great War was anticipated, Torres designed a transport ship intended to accompany fleets. On 30 July 1913, he patented the "Buque campamento" ("Camp-Vessel"), an airship carrier with a mooring mast and a hold large enough to house up to two inflated units, together with hydrogen cylinders. He had thought of the possibility of combining aeronautics with the navy in this way, offering his patent to Vickers Limited, although the conglomerate did not show interest in the project. Negotiations continued, and Torres reached Admiral Reginald Bacon, who, on 17 March 1914, wrote from the Coventry Ordnance Works that "the experience of the Navy has invariably been that any auxiliary craft carried on board ship are of very little real service". A few years later, in 1922, the Spanish Navy would construct a real airship carrier, the Dédalo, to be used in the war against Morocco. In 1916 Torres patented in Spain a new kind of ship, a multihull steel vessel which received the name of "Binave" ("Twin Ship"). He applied for the patent of the Binave in the United Kingdom under the name "Improvements in Ships" in 1917, and it was built by the Euskalduna company in Bilbao in 1918, with several test voyages, such as the successful round trip to Santoña on 28 September. The tests were resumed in 1919, obtaining the certificate of implementation of the patent on 12 November of that year. 
The design introduced new features, including two 30 HP Hispano-Suiza marine engines and the ability to modify its configuration while sailing, positioning two rudders at the stern of each float and placing the propellers aft as well. As a result of the experience acquired in the tests, to improve stability it was considered appropriate in 1920 to add a lower keel to each of the floats proposed in the patent, making it similar to modern catamarans, whose development would become widespread from the 1990s onwards. Apart from the aforementioned inventions, Torres patented the "Indicadores coordinados" ("Coordinate Indicator", 1901), a guidance system for vehicles and pedestrians using markers installed on streetlights throughout an entire city, which he proposed for Madrid and Paris under the name of "Guide Torres"; the "Dianemologo" (1907), an apparatus for copying a speech as it is delivered without the need for shorthand; "Globos fusiformes deformables" ("Deformable Fusiform Balloons", 1914), a fusiform envelope with a variable section depending on the volume of the hydrogen contained; and "Enclavamientos T.Q." ("Interlocks T.Q.", 1918), a railway interlock of his own design to protect the movement of trains within a certain area. In the last years of his life, Torres turned his attention to the field of educational disciplines, investigating those elements or machines that could help educators in their task. His last patents related to subjects such as typewriters and their improvement (1922–23), the marginal pagination of books (1926), and, especially, the "Puntero Proyectable" (Projectable Pointer, 1930) and the "Proyector Didáctico" (Didactic Projector, 1930). The Projectable Pointer was based on the shadow produced on a plate or screen by an opaque body in motion. The presenter could move the pointer to any place on the plate (today a slide) by operating an articulated system. 
The Didactic Projector improved the way slides were placed on glass plates for projection. In the early 1900s, Torres learned the international language Esperanto, and was an advocate of the language throughout his life. From 1922 to 1926 he participated in the work of the International Committee on Intellectual Cooperation of the League of Nations, attended by such figures as Albert Einstein, Marie Curie, Gilbert Murray and Henri Bergson, its first president. Torres proposed that the Committee study the role of an artificial auxiliary language in facilitating scientific relations between peoples. Although almost half of the Committee members were in favor of Esperanto, his motion was strongly opposed by President Bergson, and he received a clear notice from French diplomats to put the influence of French culture first, among them the French ambassador in Bern, who considered Torres a "farouchement espérantiste" ("fierce Esperantist"). In 1925 he participated as the official representative of the Spanish government in the "Conference on the Use of Esperanto in Pure and Applied Sciences" held in Paris, together with Vicente Inglada Ors [es] and Emilio Herrera Linares. That same year, he joined the Honorary Committee of the Spanish Association of Esperanto [es] (HEA) founded by Julio Mangada, and continued defending the language in other forums until his death in 1936. In 1910 Torres traveled to Argentina with the Infanta Isabel to attend the International Scientific Congress held in Buenos Aires, one of the events organized to mark the centenary of the independence of Argentina. At the congress, he proposed, along with the Argentinean engineer Santiago Barabino, the constitution of a Spanish-American board of scientific technology, which would eventually become the "Unión Internacional Hispano–Americana de Bibliografía y Terminología Científicas". 
The first task was the publication of a technological dictionary of the Spanish language to tackle the problems caused by the increasing use of scientific and technological neologisms, as well as the adaptation of words from other languages, in the face of an avalanche of foreign terms. As a result of the work of this board, the Diccionario Tecnológico Hispanoamericano (Hispanic American Technological Dictionary) began to be published in fascicles between 1926 and 1930, although it did not see a complete edition until 1983, with a second, expanded edition in 1990. Distinctions Over the years, Torres received an increasing number of decorations, prizes, and society memberships, both Spanish and from other countries. In 1901, he entered the Spanish Royal Academy of Sciences in Madrid for his work in those years on algebraic machines, an institution of which he was president between 1928 and 1934. In 1916 King Alfonso XIII of Spain bestowed the Echegaray Medal upon him; and in 1918, he declined the offer of the position of Minister of Development. In 1920, he was admitted to the Real Academia Española, to fill the seat N vacated by the death of Benito Pérez Galdós. In his acceptance speech he said, with humility and humor: "You were wrong in choosing me as I do not have that minimum culture required of an academic. I will always be a stranger in your wise and learned society. I come from very remote lands. I have not cultivated literature, nor art, nor philosophy, nor even science, at least in its higher degrees… My work is much more modest. I spend my busy life solving practical mechanics problems. 
My laboratory is a locksmith shop, more complete, better assembled than those usually known by that name; but destined, like all, to project and build mechanisms…" That same year Torres was elected President of the Spanish Royal Physics Society and the Royal Spanish Mathematical Society, the latter a position he held until 1924, and became a member of the Mechanics Section of the Paris Academy. In 1921 he was appointed President of the International Spanish-American Union of Scientific Bibliography and Technology. From 1921 to 1928 he assumed the presidency of the Spanish section of the International Committee for Weights and Measures, where, thanks to his experience in the development of instruments, he contributed to the improvement of measurements made in the laboratories of the International Bureau of Weights and Measures (BIPM). In 1923 he became an Honorary Academician of the Geneva Society of Physics and Natural History [fr]. In 1925 he was promoted to Corresponding Member of the Hispanic Society of America. In 1926 he became Honorary Inspector General of the Corps of Civil Engineers. On 27 June 1927 he was named one of the twelve foreign associate academicians of the French Academy of Sciences, with 34 votes in favor of his entry, surpassing Ernest Rutherford (4 votes) and Santiago Ramón y Cajal (2 votes). Personal life, religious beliefs and death On 16 April 1885 Torres married Luz Polanco y Navarro (1856–1954) in Portolín (Molledo). The marriage lasted 51 years and produced eight children (3 sons and 5 daughters): Leonardo (born 1887, who died at the age of two in 1889), Gonzalo (born 1893, died in 1965, who also became an engineer and used to work as an assistant to his father), Luz, Valentina, Luisa, Julia (who also died young), Joaquina, and Fernando. After the death of his first son, in 1889, Torres moved with his family to Madrid with the firm intention of putting into practice the projects he had devised in previous years. 
During this time he attended the Athenæum in the Spanish capital and the literary gatherings at the Café Suizo [es], but generally without participating in debates and discussions of a political nature. He lived for many years in Calle de Válgame Dios [es] nº 3. Torres was a devout Catholic who regularly read the catechism and took communion every First Friday of the month. He read the catechism as if intimately preparing himself for the peaceful end that awaited him. His daughter Valentina told him on one occasion: "Dad, maybe you don't fully understand the mysteries that faith offers us, just as I don't understand your inventions either", and he responded affectionately: "Oh daughter, it's just that from God to me there is an infinite distance!". Once the Spanish Civil War began, his daughter Luz was arrested by the militia, and the family had to invoke the fact that Torres was a Commander of the Legion of Honour to save her life, which included the intervention of the French Embassy. In his last moments, his family managed to have the sacraments administered to him despite the difficulties caused by religious persecution. At the moment of receiving extreme unction, he pronounced his last words: "Memento homo, quia pulvis es et in pulverem reverteris" ("Remember, man, you are dust and to dust you will return"). On 18 December 1936, after a progressive illness, Torres died at his son Gonzalo's home in Madrid, in the middle of the Civil War, ten days before his eighty-fourth birthday. He was initially buried in the Cementerio de la Almudena, and later moved in 1957 to the monumental Saint Isidore Cemetery. Legacy "The learned Spanish engineer Torres Quevedo – today a foreign associate of our Academy of Sciences – who is perhaps the most prodigious inventor of our time, at least in terms of mechanisms, has not been afraid to address Babbage's problem in turn..." 
"What perspectives do not open such marvels about the possibilities of the future regarding the reduction to a purely mechanical process of any operation that obeys mathematical rules! In this area, the way was opened, almost three centuries ago, by the genius of Pascal; in recent times, the genius of Torres Quevedo has managed to make it penetrate into regions where we would never have dared to think a priori that it could have access." — Philbert Maurice d'Ocagne, Hommes et choses de science, 1930 The distressing circumstances that Spain was going through during its Civil War meant that Torres' death in 1936 went somewhat unnoticed. However, newspapers such as The New York Times and the French mathematician Maurice d'Ocagne reported on his demise by publishing obituaries and articles in 1937–38, with d'Ocagne giving some lectures about his research work in Paris and Brussels. In the years following his death, Torres was not forgotten. Created the Spanish National Research Council (CSIC) in 1939, the architect Ricardo Fernández Vallespín [es] was commissioned with the project and construction of a large building in Madrid to house the new Institute «Leonardo Torres Quevedo» of Applied Physics, which was completed in 1943. Its dedicated to "designing and manufacturing instruments and investigating mechanical, electrical and electronic problems", and was the germ of the current Institute of Physical and Information Technologies "Leonardo Torres Quevedo" (ITEFI). In 1940 his name was among those selected by American philanthropist Archer Milton Huntington to inscribe on the building of the Hispanic Society of America. In 1953, the commemorative events for the centenary of his birth began, which took place at the Spanish Royal Academy of Sciences with the participation of high academic, scientific and university figures from the country and abroad, among them Louis Couffignal, Charles Lambert Manneback, and Aldo Ghizzetti [it]. 
Two postage stamps were issued in Spain to honour him, in 1955 and 1983, the latter alongside an image of the Niagara cable car, regarded as a work of genius. In 1965, the City Council of Madrid dedicated a commemorative plaque to him at his residence building at Válgame Dios, 3, informing the people of Madrid that "the scientist who brought so much glory to Spain lived in that place." In 1978 his work was honoured in Madrid at the Palacio de Cristal del Retiro, in an exposition organized by the College of Civil Engineers led by José Antonio Fernández Ordóñez [es]. The Leonardo Torres Quevedo National Research Award [es] was established in 1982 in Spain by the Ministry of Science in recognition of the merits of Spanish scientists or researchers in the field of engineering. The same year the Leonardo Torres Quevedo Foundation [es] (FLTQ) was created under his name as a non-profit organization to promote scientific research within the framework of the University of Cantabria and to train professionals in this area. The Foundation had its headquarters at the University of Cantabria School of Civil Engineering. A bronze statue on a stone pedestal was erected in 1986 on the occasion of the fiftieth anniversary of his death. The work was commissioned from the sculptor Ramón Muriedas [es] and is located in Santa Cruz de Iguña, Torres' birth town. Between the end of the 1980s and the mid-1990s, three symposia on his figure, titled Leonardo Torres Quevedo, su vida, su tiempo, su obra, were held in Spain, in Molledo (1987), Camargo (1991) and Pozuelo de Alarcón (1995). On 19 July 2008, Spain's National Lottery [es] commemorated the centenary of the Torres Quevedo airship built in Guadalajara, which marked the beginnings of the Spanish Air Force. In November, the Leonardo Torres Quevedo Centre was established in Santa Cruz, Molledo, dedicated to his life and work. On 28 December 2012, Google celebrated his 160th birthday with a Google Doodle. 
The company had also commemorated the 100th anniversary of El Ajedrecista, highlighting that it was a marvel of its time and could be considered the "grandfather" of current video games. A conference was organized on 7 November in cooperation with the School of Telecommunication Engineering of the Technical University of Madrid to exhibit Torres' devices. Since 2015, an image of his Mount Ulia aerial ropeway [es], a pioneering cable car built in San Sebastián in 1907 to transport people, can be seen on the 'visas' page of Spanish passports. On 8 August 2016, the 100th anniversary of the Whirlpool Aero Car was celebrated, marking its uninterrupted operation without a single accident. The ceremony also included members of the Torres Quevedo family, who made a special trip from Spain to attend the anniversary celebrations, and Carlos Gómez-Múgica [es], the Spanish Ambassador to Canada. According to Niagara Parks Commission Chair Janice Thomson, "this morning's celebrations have allowed us to properly mark an important milestone in the history of the Niagara Parks Commission, all while recognizing the accomplishments and paying tribute to Leonardo Torres Quevedo, who through his work made a lasting impression on both the engineering profession and the tourism industry here in Niagara." In February 2022, the new turbosail of La Fura dels Baus' ship La Naumon was presented in Santander: a large white structure at whose base stands the figure of Leonardo Torres Quevedo, after whom the device was named. A museum called El Valle de los Inventos was opened in La Serna de Iguña, which offers a permanent exhibition about him and his inventions, with guided tours, scientific workshops and an escape room. On 4 July, the flag carrier Iberia received the fifth of the six Airbus A320neo aircraft planned for that year. This A320neo, with registration EC-NTQ, bears the name "Leonardo Torres Quevedo" in his honour. 
On 5 May 2023, the Instituto Cervantes opened the Caja de las Letras to house the "in memoriam" legacy of Leonardo Torres Quevedo. Among the deposited objects were letters and manuscripts; a dozen publications, including books, monographs and catalogues; postcards and a timetable of the Niagara Falls cable car designed by him; and the Milestone plaque awarded by the Institute of Electrical and Electronics Engineers recognizing the engineer's pioneering achievement in the development of remote control in 1901 with the Telekino. Torres' granddaughter Mercedes Torres Quevedo expressed her gratitude to the institution on behalf of all his descendants for welcoming her grandfather's legacy, and the "pride" of all of them for the scientific and humanistic work he carried out throughout his life. His legacy has been deposited in box number 1275, with the keys in the hands of his descendants and the institution itself. In fiction Leonardo Torres Quevedo is a main character in the novel Los horrores del escalpelo (The Horrors of the Scalpel, 2011), written by Daniel Mares. The plot tells how the Spanish engineer travels to London in 1888 to find Maelzel's Chess Player, a mechanical automaton believed to have been lost for decades. Together with Raimundo Aguirre, a thief and murderer who claims to have the clue to the lost automaton, he begins the search through the London underworld and Victorian high society. The search is interrupted when the streets of the Whitechapel neighborhood begin to turn up the corpses of prostitutes, drawing Torres and his partner Aguirre into the hunt for Jack the Ripper. 
======================================== |
[SOURCE: https://github.com/features/code-review] | [TOKENS: 497] |
Write better code On GitHub, lightweight code review tools are built into every pull request. Your team can create review processes that improve the quality of your code and fit neatly into your workflow. Every change starts with a pull request. See every update and act on it, in-situ Preview changes in context with your code to see what is being proposed. Side-by-side Diffs highlight added, edited, and deleted code right next to the original file, so you can easily spot changes. Browse commits, comments, and references related to your pull request in a timeline-style interface. Your pull request will also highlight what’s changed since you last checked. See what a file looked like before a particular change. With blame view, you can see how any portion of your file has evolved over time without viewing the file’s full history. Discuss code within your code On GitHub, conversations happen alongside your code. Leave detailed comments on code syntax and ask questions about structure inline. If you’re on the other side of the code, requesting peer reviews is easy. Add users to your pull request, and they’ll receive a notification letting them know you need their feedback. Save your teammates a few notifications. Bundle your comments into one cohesive review, then specify whether comments are required changes or just suggestions. Merge the highest quality code Reviews can improve your code, but mistakes happen. Limit human error and ensure only high quality code gets merged with detailed permissions and status checks. Give collaborators as much access as they need through your repository settings. 
You can extend access to a few teams and select which ones can read or write to your files. The options you have for permissions depend on your plan. Protected Branches help you maintain the integrity of your code. Limit who can push to a branch, and disable force pushes to specific branches. Then scale your policies with the Protected Branches API. Create required status checks to add an extra layer of error prevention on branches. Use the Status API to enforce checks and disable the merge button until they pass. To err is human; to automate, divine! |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Kelvin] | [TOKENS: 3822] |
Kelvin The kelvin (symbol: K) is the base unit for temperature in the International System of Units (SI). The Kelvin scale is an absolute temperature scale that starts at the lowest possible temperature (absolute zero), taken to be 0 K. By definition, the Celsius scale (symbol °C) and the Kelvin scale have the same magnitude; that is, a rise of 1 K is equal to a rise of 1 °C and vice versa, and any temperature in degrees Celsius can be converted to kelvin by adding 273.15. The 19th-century British scientist Lord Kelvin first developed and proposed the scale. It was often called the "absolute Celsius" scale in the early 20th century. The kelvin was formally added to the International System of Units in 1954, defining 273.16 K to be the temperature of the triple point of water (0.01 °C). The Celsius, Fahrenheit, and Rankine scales were redefined in terms of the Kelvin scale using this definition. The 2019 revision of the SI now defines the kelvin in terms of energy by setting the Boltzmann constant; every 1 K change of thermodynamic temperature corresponds to a change in the thermal energy, kBT, of exactly 1.380649×10−23 joules. History During the 18th century, multiple temperature scales were developed, notably Fahrenheit and Celsius. These scales predated much of the modern science of thermodynamics, including atomic theory and the kinetic theory of gases which underpin the concept of absolute zero. Instead, their creators chose defining points within the range of human experience that could be reproduced easily and with reasonable accuracy, but lacked any deep significance in thermal physics. In the case of the Celsius scale (and the long-defunct Newton and Réaumur scales) the melting point of ice served as such a starting point, with Celsius being defined (from the 1740s to the 1940s) by calibrating a thermometer such that the melting point of ice corresponds to 0 °C and the boiling point of water to 100 °C. This definition assumes pure water at a specific pressure chosen to approximate the natural air pressure at sea level. 
Thus, an increment of 1 °C equals 1/100 of the temperature difference between the melting and boiling points. The same temperature interval was later used for the Kelvin scale. From 1787 to 1802, it was determined by Jacques Charles (unpublished), John Dalton, and Joseph Louis Gay-Lussac that, at constant pressure, ideal gases expanded or contracted their volume linearly (Charles's law) by about 1/273 parts per degree Celsius of temperature's change up or down, between 0 °C and 100 °C. Extrapolation of this law suggested that a gas cooled to about −273 °C would occupy zero volume. In 1848, William Thomson, who was later ennobled as Lord Kelvin, published a paper On an Absolute Thermometric Scale. The scale proposed in the paper turned out to be unsatisfactory, but the principles and formulas upon which the scale was based were correct. For example, in a footnote, Thomson derived the value of −273 °C for absolute zero by calculating the negative reciprocal of 0.00366, the coefficient of thermal expansion of an ideal gas per degree Celsius relative to the ice point. This derived value agrees with the currently accepted value of −273.15 °C, allowing for the precision and uncertainty involved in the calculation. The scale was designed on the principle that "a unit of heat descending from a body A at the temperature T° of this scale, to a body B at the temperature (T − 1)°, would give out the same mechanical effect, whatever be the number T." Specifically, Thomson expressed the amount of work necessary to produce a unit of heat (the thermal efficiency) as μ(t)(1 + Et)/E, where t is the temperature in Celsius, E is the coefficient of thermal expansion, and μ(t) was "Carnot's function", a substance-independent quantity depending on temperature, motivated by an obsolete version of Carnot's theorem. 
The scale is derived by finding a change of variables T_1848 = f(T) of temperature T such that dT_1848/dT is proportional to μ. When Thomson published his paper in 1848, he only considered Regnault's experimental measurements of μ(t). That same year, James Prescott Joule suggested to Thomson that the true formula for Carnot's function was μ(t) = JE/(1 + Et), where J is "the mechanical equivalent of a unit of heat", now referred to as the specific heat capacity of water, approximately 771.8 foot-pounds force per degree Fahrenheit per pound (4,153 J/K/kg). Thomson was initially skeptical of the deviations of Joule's formula from experiment, stating "I think it will be generally admitted that there can be no such inaccuracy in Regnault's part of the data, and there remains only the uncertainty regarding the density of saturated steam". Thomson referred to the correctness of Joule's formula as "Mayer's hypothesis", on account of it having been first assumed by Mayer. Thomson arranged numerous experiments in coordination with Joule, eventually concluding by 1854 that Joule's formula was correct and that the effect of temperature on the density of saturated steam accounted for all discrepancies with Regnault's data. Therefore, in terms of the modern Kelvin scale T, the first scale could be expressed as follows: T_1848 = 100 · log(T / 273 K) / log(373 K / 273 K). The parameters of the scale were arbitrarily chosen to coincide with the Celsius scale at 0 °C and 100 °C, or 273 K and 373 K (the melting and boiling points of water). 
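The logarithmic character of the 1848 scale is easy to verify in code: equal ratios of Kelvin temperature map to equal intervals on the 1848 scale. A short Python check (the function name is illustrative):

```python
import math

def t1848(T):
    """Thomson's 1848 scale expressed via the modern Kelvin scale T:
    T_1848 = 100 * log(T / 273) / log(373 / 273)."""
    return 100 * math.log(T / 273) / math.log(373 / 273)

# The parameters were chosen to coincide with Celsius at the
# melting and boiling points of water:
print(round(t1848(273), 6))  # 0.0
print(round(t1848(373), 6))  # 100.0

# Doubling the Kelvin temperature adds the same increment (about 222
# degrees) on the 1848 scale, regardless of the starting temperature:
for T in (100, 273, 1000):
    print(round(t1848(2 * T) - t1848(T), 3))
```

The constant increment is 100 · log 2 / log(373/273), which is why absolute zero sits at negative infinity on this scale.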
On this scale, an increase of approximately 222 degrees corresponds to a doubling of Kelvin temperature, regardless of the starting temperature, and "infinite cold" (absolute zero) has a numerical value of negative infinity. Thomson understood that with Joule's proposed formula for μ, the relationship between work and heat for a perfect thermodynamic engine was simply the constant J. In 1854, Thomson and Joule thus formulated a second absolute scale that was more practical and convenient, agreeing with air thermometers for most purposes. Specifically, "the numerical measure of temperature shall be simply the mechanical equivalent of the thermal unit divided by Carnot's function." To explain this definition, consider a reversible Carnot cycle engine, where Q_H is the amount of heat energy transferred into the system, Q_C is the heat leaving the system, W is the work done by the system (Q_H − Q_C), t_H is the temperature of the hot reservoir in degrees Celsius, and t_C is the temperature of the cold reservoir in Celsius. The Carnot function is defined as μ = W / [Q_H(t_H − t_C)], and the absolute temperature as T_H = J/μ. One finds the relationship T_H = J·Q_H(t_H − t_C)/W. By supposing T_H − T_C = J(t_H − t_C), one obtains the general principle of an absolute thermodynamic temperature scale for the Carnot engine, Q_H/T_H = Q_C/T_C.
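The relation Q_H/T_H = Q_C/T_C above fixes how much of the heat drawn from the hot reservoir a reversible engine can convert to work. A minimal Python sketch (illustrative only, not from the source) of that bookkeeping:

```python
def carnot_work(q_hot, t_hot_k, t_cold_k):
    """Work output of a reversible Carnot engine, using the absolute-scale
    principle Q_H / T_H = Q_C / T_C to find the rejected heat Q_C."""
    q_cold = q_hot * t_cold_k / t_hot_k
    return q_hot - q_cold

# Engine running between boiling (373 K) and melting (273 K) water,
# drawing 1000 J from the hot reservoir:
w = carnot_work(1000.0, 373.0, 273.0)
print(round(w, 1))   # -> 268.1 (joules of work)

# The efficiency W / Q_H equals 1 - T_C / T_H, as expected:
assert abs(w / 1000 - (1 - 273 / 373)) < 1e-12
```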
The definition can be shown to correspond to the thermometric temperature of the ideal gas laws. This definition by itself is not sufficient. Thomson specified that the scale should have two properties: its null point would be absolute zero, and its unit increment would equal one degree Celsius. These two properties would be featured in all future versions of the Kelvin scale, although it was not yet known by that name. In the early decades of the 20th century, the Kelvin scale was often called the "absolute Celsius" scale, indicating Celsius degrees counted from absolute zero rather than the freezing point of water, and using the same symbol for regular Celsius degrees, °C. In 1873, William Thomson's older brother James coined the term triple point to describe the combination of temperature and pressure at which the solid, liquid, and gas phases of a substance were capable of coexisting in thermodynamic equilibrium. While any two phases could coexist along a range of temperature-pressure combinations (e.g. the boiling point of water can be affected quite dramatically by raising or lowering the pressure), the triple point condition for a given substance can occur only at a single pressure and only at a single temperature. By the 1940s, the triple point of water had been experimentally measured to be about 0.6% of standard atmospheric pressure and very close to 0.01 °C per the historical definition of Celsius then in use. In 1948, the Celsius scale was recalibrated by assigning the triple point temperature of water the value of 0.01 °C exactly and allowing the melting point at standard atmospheric pressure to have an empirically determined value (and the actual melting point at ambient pressure to have a fluctuating value) close to 0 °C. This was justified on the grounds that the triple point was judged to give a more accurately reproducible reference temperature than the melting point. The triple point could be measured with ±0.0001 °C accuracy, while the melting point only to ±0.001 °C.
In 1954, with absolute zero having been experimentally determined to be about −273.15 °C per the definition of °C then in use, Resolution 3 of the 10th General Conference on Weights and Measures (CGPM) introduced a new internationally standardized Kelvin scale which defined the triple point as exactly 273.15 + 0.01 = 273.16 degrees Kelvin. In 1967/1968, Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature "kelvin", symbol K, replacing "degree Kelvin", symbol °K. The 13th CGPM also held in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is equal to the fraction 1/273.16 of the thermodynamic temperature of the triple point of water." After the 1983 redefinition of the metre, this left the kelvin, the second, and the kilogram as the only SI units not defined with reference to any other unit. In 2005, noting that the triple point could be influenced by the isotopic ratio of the hydrogen and oxygen making up a water sample and that this was "now one of the major sources of the observed variability between different realizations of the water triple point", the International Committee for Weights and Measures (CIPM), a committee of the CGPM, affirmed that for the purposes of delineating the temperature of the triple point of water, the definition of the kelvin would refer to water having the isotopic composition specified for Vienna Standard Mean Ocean Water. The Boltzmann constant kB serves as the bridge in the relation E = kBT, linking characteristic microscopic energies to the macroscopic temperature scale. In the International System of Units (SI), the kelvin has traditionally been treated as an independent base unit with its own dimension. By contrast, in fundamental physics it is common to adopt natural units by setting the Boltzmann constant equal to unity, so that temperature and energy share the same units. 
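The bridging relation E = kBT mentioned above is easy to make concrete. A short Python sketch (the numeric examples, e.g. "room temperature" at 300 K, are our illustration, not from the source):

```python
k_B = 1.380649e-23  # Boltzmann constant in J/K, exact since the 2019 SI revision

def thermal_energy(temp_k):
    """Characteristic microscopic energy E = k_B * T at temperature T (in K)."""
    return k_B * temp_k

# Thermal energy scale at the triple point of water, in joules:
print(thermal_energy(273.16))        # ~3.77e-21 J

# The same quantity at ~room temperature (300 K), in electronvolts:
eV = 1.602176634e-19  # joules per electronvolt, exact
print(thermal_energy(300) / eV)      # ~0.026 eV, the familiar "25 meV" scale
```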
In 2005, the CIPM began a programme to redefine the kelvin in terms of the Boltzmann constant, alongside exploring new definitions for several other SI base units in terms of fundamental constants. The motivation was to allow more accurate measurements at temperatures far away from the triple point of water, and to be independent from any particular substance or measurement. Originally slated for adoption in 2011 with the Boltzmann constant being 1.38065X×10⁻²³ J/K, X to be determined, concerns arose about maintaining the precision of the triple point, and the redefinition was postponed until such time as more accurate measurements could be made, with these experiments taking several years in some cases. Ultimately, the kelvin redefinition became part of the larger 2019 revision of the SI. In late 2018, the 26th General Conference on Weights and Measures (CGPM) adopted the value of kB = 1.380649×10⁻²³ J⋅K⁻¹ and the new definition officially came into force on 20 May 2019, the 144th anniversary of the Metre Convention. With this new definition, the kelvin now only depends on the Boltzmann constant and universal constants (see 2019 SI unit dependencies diagram), allowing the kelvin to be expressed as 1 K = (1.380649×10⁻²³/kB) J. In practical terms, as was the goal, the change went largely unnoticed: the chosen value has enough accuracy and significant figures for continuity, ensuring that water still freezes at 0 °C to high precision. The difference lies in the status of reference points. Before the redefinition, the triple point of water was taken as exact, while the Boltzmann constant had a measured value of 1.38064903(51)×10⁻²³ J/K, with a relative standard uncertainty of 3.7×10⁻⁷. Afterward, the Boltzmann constant was exact and the uncertainty is transferred to the triple point of water, which is now 273.1600(1) K.[a] On a deeper level, the kelvin is now defined in terms of the joule, making the separate existence of a temperature dimension theoretically unnecessary.
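The transfer of uncertainty described above is a one-line calculation: the relative uncertainty formerly attached to kB now attaches to the triple-point temperature. A quick arithmetic check in Python:

```python
# Relative standard uncertainty of k_B before the 2019 redefinition:
u_rel = 3.7e-7

# After the redefinition the same relative uncertainty attaches to the
# triple point of water, which is no longer exact:
t_tp = 273.16  # K
u_tp = t_tp * u_rel
print(u_tp)    # ~1.0e-4 K, i.e. the quoted 273.1600(1) K
```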
The kelvin could have been redefined as a non-coherent derived SI unit, with 1 K = 1.380649×10−23 J. Yet, "for historical and especially practical reasons, the kelvin will continue to be a base unit of the SI". Practical uses The kelvin is often used as a measure of the colour temperature of light sources. Colour temperature is based upon the principle that a black body radiator emits light with a frequency distribution characteristic of its temperature. Black bodies at temperatures below about 4000 K appear reddish, whereas those above about 7500 K appear bluish. Colour temperature is important in the fields of image projection and photography, where a colour temperature of approximately 5600 K is required to match "daylight" film emulsions. In astronomy, the stellar classification of stars and their place on the Hertzsprung–Russell diagram are based, in part, upon their surface temperature, known as effective temperature. The photosphere of the Sun, for instance, has an effective temperature of 5772 K as adopted by IAU 2015 Resolution B3. Digital cameras and photographic software often use colour temperature in K in edit and setup menus. The simple guide is that higher colour temperature produces an image with enhanced white and blue hues. The reduction in colour temperature produces an image more dominated by reddish, "warmer" colours. For electronics, the kelvin is used as an indicator of how noisy a circuit is in relation to an ultimate noise floor, i.e. the noise temperature. The Johnson–Nyquist noise of resistors (which produces an associated kTC noise when combined with capacitors) is a type of thermal noise derived from the Boltzmann constant and can be used to determine the noise temperature of a circuit using the Friis formulas for noise. Derived units and SI multiples The only SI derived unit with a special name derived from the kelvin is the degree Celsius. 
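The noise-temperature use of the kelvin mentioned above rests on the Johnson–Nyquist relation v_rms = sqrt(4·kB·T·R·Δf). A minimal Python sketch; the example component values (1 kΩ, 300 K, 10 kHz bandwidth) are our illustration:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(r_ohms, temp_k, bandwidth_hz):
    """RMS thermal (Johnson-Nyquist) noise voltage across a resistor:
    v = sqrt(4 * k_B * T * R * bandwidth)."""
    return math.sqrt(4 * k_B * temp_k * r_ohms * bandwidth_hz)

# A 1 kOhm resistor at room temperature, measured over 10 kHz:
v = johnson_noise_vrms(1e3, 300.0, 10e3)
print(f"{v * 1e6:.2f} uV")   # ~0.41 microvolts
```

The linear dependence on T is why a circuit's noise floor can be quoted directly as a temperature in kelvins.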
Like other SI units, the kelvin can also be modified by adding a metric prefix that multiplies it by a power of 10. Orthography According to SI convention, the kelvin is never referred to nor written as a degree. The word "kelvin" is not capitalized when used as a unit. It may be in plural form as appropriate (for example, "it is 283 kelvins outside", as for "it is 50 degrees Fahrenheit" and "10 degrees Celsius"). The unit's symbol K is a capital letter, per the SI convention to capitalize symbols of units derived from the name of a person. It is common convention to capitalize Kelvin when referring to Lord Kelvin or the Kelvin scale. The unit symbol K is encoded in Unicode at code point U+212A K KELVIN SIGN. However, this is a compatibility character provided for compatibility with legacy encodings. The Unicode standard recommends using U+004B K LATIN CAPITAL LETTER K instead; that is, a normal capital K. "Three letterlike symbols have been given canonical equivalence to regular letters: U+2126 Ω OHM SIGN, U+212A K KELVIN SIGN, and U+212B Å ANGSTROM SIGN. In all three instances, the regular letter should be used."
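The canonical equivalence quoted above can be verified directly with Python's standard `unicodedata` module: Unicode normalization replaces all three letterlike symbols with their regular letters.

```python
import unicodedata

kelvin_sign = "\u212A"   # KELVIN SIGN, a compatibility character
assert kelvin_sign != "K"  # distinct code point from LATIN CAPITAL LETTER K

# Canonical normalization (NFC) maps it to the regular capital K:
assert unicodedata.normalize("NFC", kelvin_sign) == "K"

# The same holds for the other two letterlike symbols:
assert unicodedata.normalize("NFC", "\u2126") == "\u03A9"  # OHM SIGN -> Greek Omega
assert unicodedata.normalize("NFC", "\u212B") == "\u00C5"  # ANGSTROM SIGN -> A with ring
print("all three normalize to regular letters")
```

This is why software that compares or searches text should normalize input first: a user pasting "30 K" with the Kelvin sign would otherwise not match "30 K" typed with a regular K.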
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_note-72] | [TOKENS: 1856] |
xAI (company) X.AI Corp., doing business as xAI, is an American company working in the area of artificial intelligence (AI), social media and technology that is a wholly owned subsidiary of American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot named Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. For Chief Engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital and Tribe Capital. As of August 2024, Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI.
On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI acquired sister company X Corp., the developer of social media platform X (formerly known as Twitter), which was previously acquired by Musk in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that they had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans. Morgan Stanley took no stake in the debt. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government" and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot designed to advance artificial intelligence capabilities. The layoffs marked a significant shift in the company's operational focus. On November 26, 2025, Elon Musk announced his plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, which is 10% of the data center's estimated power use. 
In June 2024, the Greater Memphis Chamber announced xAI was planning on building Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer went fully operational in December 2024. Local government in Memphis has voiced concerns regarding the increased usage of electricity, 150 megawatts of power at peak, and while the agreement with the city is being worked out, the company has deployed 14 VoltaGrid portable methane-gas-powered generators to temporarily enhance the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of gases causing air pollution, and that xAI has been operating the turbines illegally without the necessary permits. The Southern Environmental Law Center has stated the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators generate about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. xAI has continually expanded its infrastructure, with the purchase of a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power. The expansion reflects xAI's commitment to competing with OpenAI's ChatGPT and Anthropic's Claude models. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that structured xAI as a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion.
On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring reorganized xAI into four primary development teams, one for the Grok app and others for its other features such as Grok Imagine, while Grokipedia, X, and API features fell under smaller teams. Products According to Musk in July 2023, a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot that is integrated with X. xAI stated that when the bot is out of beta, it will only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced. On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities. On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a web-search function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4.
A high-performance version of the model called Grok Heavy was also unveiled, with access at the time costing $300/month. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a "great AI-generated game" before the end of 2026.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Star_formation] | [TOKENS: 3898] |
Star formation Star formation is the process by which dense regions within molecular clouds in interstellar space—sometimes referred to as "stellar nurseries" or "star-forming regions"—collapse and form stars. As a branch of astronomy, star formation includes the study of the interstellar medium (ISM) and giant molecular clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar objects as its immediate products. It is closely related to planet formation, another branch of astronomy. Star formation theory, as well as accounting for the formation of a single star, must also account for the statistics of binary stars and the initial mass function. Most stars do not form in isolation but as part of a group of stars referred to as star clusters or stellar associations. First stars Stars are divided into three groups called "populations". Population III stars formed from primordial hydrogen after the Big Bang. These stars are poorly understood but should contain only hydrogen and helium. Population II stars formed from the debris of the first stars and in turn created more chemical elements of higher atomic number. Population I stars are young metal-rich (containing elements other than hydrogen and helium) stars like the Sun. The initial star formation was driven by gravitational attraction of hydrogen within local areas of higher gravity called dark matter halos. As the hydrogen lost energy through atomic or molecular energy transitions, the temperature of local clumps fell, allowing more gravitational condensation. Eventually the process leads to collapse into a star. The dynamics of Population III star formation are now believed to be as complex as star formation today. Stellar nurseries Spiral galaxies like the Milky Way contain stars, stellar remnants, and a diffuse interstellar medium (ISM) of gas and dust.
The interstellar medium consists of 10⁻⁴ to 10⁶ particles per cm³, and is typically composed of roughly 70% hydrogen, 28% helium, and 1.5% heavier elements by mass. The trace amounts of heavier elements were and are produced within stars via stellar nucleosynthesis and ejected as the stars pass beyond the end of their main sequence lifetime. Higher density regions of the interstellar medium form clouds, or diffuse nebulae, where star formation takes place. In contrast to spiral galaxies, elliptical galaxies lose the cold component of their interstellar medium within roughly a billion years, which hinders the galaxy from forming diffuse nebulae except through mergers with other galaxies. In the dense nebulae where stars are produced, much of the hydrogen is in the molecular (H2) form, so these nebulae are called molecular clouds. The Herschel Space Observatory has revealed that filaments, or elongated dense gas structures, are truly ubiquitous in molecular clouds and central to the star formation process. They fragment into gravitationally bound cores, most of which will evolve into stars. Continuous accretion of gas, geometrical bending, and magnetic fields may control the detailed manner in which the filaments are fragmented. Observations of supercritical filaments have revealed quasi-periodic chains of dense cores with spacing comparable to the filament inner width, and embedded protostars with outflows. Observations indicate that the coldest clouds tend to form low-mass stars, which are first observed via the infrared light they emit inside the clouds, and then as visible light when the clouds dissipate. Giant molecular clouds, which are generally warmer, produce stars of all masses. These giant molecular clouds have typical densities of 100 particles per cm³, diameters of 100 light-years (9.5×10¹⁴ km), and masses of up to 6 million solar masses (M☉), or six million times the mass of the Sun.
The average interior temperature is 10 K (−441.7 °F). About half the total mass of the Milky Way's galactic ISM is found in molecular clouds and the galaxy includes an estimated 6,000 molecular clouds, each with more than 100,000 M☉. The nebula nearest to the Sun where massive stars are being formed is the Orion Nebula, 1,300 light-years (1.2×10¹⁶ km) away. However, lower mass star formation is occurring about 400–450 light-years distant in the ρ Ophiuchi cloud complex. A more compact site of star formation is the opaque clouds of dense gas and dust known as Bok globules, so named after the astronomer Bart Bok. These can form in association with collapsing molecular clouds or possibly independently. The Bok globules are typically up to a light-year across and contain a few solar masses. They can be observed as dark clouds silhouetted against bright emission nebulae or background stars. Over half the known Bok globules have been found to contain newly forming stars. An interstellar cloud of gas will remain in hydrostatic equilibrium as long as the kinetic energy of the gas pressure is in balance with the potential energy of the internal gravitational force. Mathematically this is expressed using the virial theorem, which states that, to maintain equilibrium, the gravitational potential energy must equal twice the internal thermal energy. If a cloud is massive enough that the gas pressure is insufficient to support it, the cloud will undergo gravitational collapse. The mass above which a cloud will undergo such collapse is called the Jeans mass. The Jeans mass depends on the temperature and density of the cloud, but is typically thousands to tens of thousands of solar masses. During cloud collapse dozens to tens of thousands of stars form more or less simultaneously which is observable in so-called embedded clusters. The end product of a core collapse is an open cluster of stars.
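The Jeans-mass dependence on temperature and density described above can be sketched numerically. This is a minimal estimate using one common form of the Jeans formula, M_J = (5kT/(GμmH))^(3/2) · (3/(4πρ))^(1/2); the exact prefactor varies between conventions, and the example conditions (T ≈ 100 K, n ≈ 100 cm⁻³, roughly a warm giant molecular cloud) are our illustration:

```python
import math

# Physical constants (SI)
G = 6.6743e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23    # Boltzmann constant, J/K
m_H = 1.6735e-27      # hydrogen atom mass, kg
M_sun = 1.989e30      # solar mass, kg

def jeans_mass(temp_k, n_per_cm3, mu=2.33):
    """Jeans mass in solar masses for a cloud of temperature T and number
    density n, with mean molecular weight mu (~2.33 for molecular gas)."""
    rho = n_per_cm3 * 1e6 * mu * m_H           # mass density, kg/m^3
    thermal = (5 * k_B * temp_k / (G * mu * m_H)) ** 1.5
    geometric = (3 / (4 * math.pi * rho)) ** 0.5
    return thermal * geometric / M_sun

# A warm giant molecular cloud: T ~ 100 K, n ~ 100 particles per cm^3
print(round(jeans_mass(100, 100)))   # ~1700 solar masses
```

Lowering the temperature or raising the density shrinks the Jeans mass, which is why cold dense cores fragment into individual stellar-mass objects.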
In triggered star formation, one of several events might occur to compress a molecular cloud and initiate its gravitational collapse. Molecular clouds may collide with each other, or a nearby supernova explosion can be a trigger, sending shocked matter into the cloud at very high speeds. (The resulting new stars may themselves soon produce supernovae, producing self-propagating star formation.) Alternatively, galactic collisions can trigger massive starbursts of star formation as the gas clouds in each galaxy are compressed and agitated by tidal forces. The latter mechanism may be responsible for the formation of globular clusters. A supermassive black hole at the core of a galaxy may serve to regulate the rate of star formation in a galactic nucleus. A black hole that is accreting infalling matter can become active, emitting a strong wind through a collimated relativistic jet. This can limit further star formation. Massive black holes ejecting radio-frequency-emitting particles at near-light speed can also block the formation of new stars in aging galaxies. However, the radio emissions around the jets may also trigger star formation. Likewise, a weaker jet may trigger star formation when it collides with a cloud. As it collapses, a molecular cloud breaks into smaller and smaller pieces in a hierarchical manner, until the fragments reach stellar mass. In each of these fragments, the collapsing gas radiates away the energy gained by the release of gravitational potential energy. As the density increases, the fragments become opaque and are thus less efficient at radiating away their energy. This raises the temperature of the cloud and inhibits further fragmentation. The fragments now condense into rotating spheres of gas that serve as stellar embryos. Complicating this picture of a collapsing cloud are the effects of turbulence, macroscopic flows, rotation, magnetic fields and the cloud geometry. Both rotation and magnetic fields can hinder the collapse of a cloud. 
Turbulence is instrumental in causing fragmentation of the cloud, and on the smallest scales it promotes collapse. Protostar A protostellar cloud will continue to collapse as long as the gravitational binding energy can be eliminated. This excess energy is primarily lost through radiation. However, the collapsing cloud will eventually become opaque to its own radiation, and the energy must be removed through some other means. The dust within the cloud becomes heated to temperatures of 60–100 K, and these particles radiate at wavelengths in the far infrared where the cloud is transparent. Thus the dust mediates the further collapse of the cloud. During the collapse, the density of the cloud increases towards the center and thus the middle region becomes optically opaque first. This occurs when the density is about 10⁻¹³ g/cm³. A core region, called the first hydrostatic core, forms where the collapse is essentially halted. It continues to increase in temperature as determined by the virial theorem. The gas falling toward this opaque region collides with it and creates shock waves that further heat the core. When the core temperature reaches about 2000 K, the thermal energy dissociates the H2 molecules. This is followed by the ionization of the hydrogen and helium atoms. These processes absorb the energy of the contraction, allowing it to continue on timescales comparable to the period of collapse at free fall velocities. After the density of infalling material has reached about 10⁻⁸ g/cm³, that material is sufficiently transparent to allow energy radiated by the protostar to escape. The combination of convection within the protostar and radiation from its exterior allow the star to contract further. This continues until the gas is hot enough for the internal pressure to support the protostar against further gravitational collapse—a state called hydrostatic equilibrium. When this accretion phase is nearly complete, the resulting object is known as a protostar.
Accretion of material onto the protostar continues partially from the newly formed circumstellar disc. When the density and temperature are high enough, deuterium fusion begins, and the outward pressure of the resultant radiation slows (but does not stop) the collapse. Material comprising the cloud continues to "rain" onto the protostar. In this stage bipolar jets are produced called Herbig–Haro objects. This is probably the means by which excess angular momentum of the infalling material is expelled, allowing the star to continue to form. When the surrounding gas and dust envelope disperses and the accretion process stops, the star is considered a pre-main-sequence star (PMS star). The energy source of these objects is the Kelvin–Helmholtz mechanism (gravitational contraction), as opposed to hydrogen burning in main sequence stars. The PMS star follows a Hayashi track on the Hertzsprung–Russell (H–R) diagram. The contraction will proceed until the Hayashi limit is reached, and thereafter contraction will continue on a Kelvin–Helmholtz timescale with the temperature remaining stable. Stars with less than 0.5 M☉ thereafter join the main sequence. For more massive PMS stars, at the end of the Hayashi track they will slowly collapse in near hydrostatic equilibrium, following the Henyey track. Finally, hydrogen begins to fuse in the core of the star, and the rest of the enveloping material is cleared away. This ends the protostellar phase and begins the star's main sequence phase on the H–R diagram. The stages of the process are well defined in stars with masses around 1 M☉ or less. In high-mass stars, the star formation process is comparable in length to the other, much shorter, timescales of their evolution, and the process is not as well defined. The later evolution of stars is studied in stellar evolution. Observations Key elements of star formation are only available by observing in wavelengths other than the optical.
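The Kelvin–Helmholtz timescale governing the pre-main-sequence contraction described above can be estimated as t_KH ≈ GM²/(RL): the gravitational energy reservoir divided by the rate it is radiated away. A rough Python sketch (order-of-magnitude only; the order-unity prefactor is omitted, as is conventional for this estimate):

```python
# Physical constants and solar values (SI)
G = 6.6743e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
L_sun = 3.828e26     # solar luminosity, W
YEAR = 3.156e7       # seconds per year

def kh_timescale_years(mass, radius, luminosity):
    """Kelvin-Helmholtz timescale: gravitational binding energy ~ G M^2 / R
    divided by the luminosity L at which it is radiated away."""
    return G * mass**2 / (radius * luminosity) / YEAR

# For the Sun this gives a few tens of millions of years:
print(f"{kh_timescale_years(M_sun, R_sun, L_sun):.2e}")   # ~3.1e7 yr
```

The result, roughly 30 million years, is far shorter than the Sun's ~10-billion-year main-sequence lifetime, which is why gravitational contraction alone was ruled out as the Sun's long-term energy source.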
The protostellar stage of stellar existence is almost invariably hidden away deep inside dense clouds of gas and dust left over from the GMC. Often, these star-forming cocoons, known as Bok globules, can be seen in silhouette against bright emission from surrounding gas. Early stages of a star's life can be seen in infrared light, which penetrates the dust more easily than visible light. Observations from the Wide-field Infrared Survey Explorer (WISE) have thus been especially important for unveiling numerous galactic protostars and their parent star clusters. Examples of such embedded star clusters are FSR 1184, FSR 1190, Camargo 14, Camargo 74, Majaess 64, and Majaess 98. The structure of the molecular cloud and the effects of the protostar can be observed in near-IR extinction maps (where the number of stars is counted per unit area and compared to a nearby zero-extinction area of sky), continuum dust emission, and rotational transitions of CO and other molecules; these last two are observed in the millimeter and submillimeter range. The radiation from the protostar and early star has to be observed in infrared astronomy wavelengths, as the extinction caused by the rest of the cloud in which the star is forming is usually too great to allow us to observe it in the visual part of the spectrum. This presents considerable difficulties as the Earth's atmosphere is almost entirely opaque from 20 μm to 850 μm, with narrow windows at 200 μm and 450 μm. Even outside this range, atmospheric subtraction techniques must be used. X-ray observations have proven useful for studying young stars, since X-ray emission from these objects is about 100–100,000 times stronger than X-ray emission from main-sequence stars. The earliest detections of X-rays from T Tauri stars were made by the Einstein X-ray Observatory. 
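The star-count extinction mapping described above (comparing counts per unit area in a field to a zero-extinction reference area) can be sketched with the classical Wolf-diagram relation. The slope value used here is an assumed, illustrative number for the logarithmic slope of the cumulative luminosity function, not a figure from the text.

```python
import math

def extinction_from_counts(n_field, n_ref, slope=0.34):
    """Classical star-count estimate: extinction A dims stars, reducing the
    surface density of detected stars by a factor 10**(-slope * A).
    Inverting gives A = log10(n_ref / n_field) / slope magnitudes.
    `slope` (the luminosity-function slope) is an assumed value."""
    return math.log10(n_ref / n_field) / slope

# A field patch showing a quarter of the reference star density:
a_mag = extinction_from_counts(n_field=25, n_ref=100)
print(f"Estimated extinction: {a_mag:.2f} mag")
```

Real extinction maps refine this idea with adaptive cell sizes and per-star color excesses, but the count-ratio logic is the same.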
For low-mass stars X-rays are generated by the heating of the stellar corona through magnetic reconnection, while for high-mass O and early B-type stars X-rays are generated through supersonic shocks in the stellar winds. Photons in the soft X-ray energy range covered by the Chandra X-ray Observatory and XMM-Newton may penetrate the interstellar medium with only moderate absorption due to gas, making X-rays a useful band for observing the stellar populations within molecular clouds. X-ray emission as evidence of stellar youth makes this band particularly useful for performing censuses of stars in star-forming regions, given that not all young stars have infrared excesses. X-ray observations have provided near-complete censuses of all stellar-mass objects in the Orion Nebula Cluster and Taurus Molecular Cloud. The formation of individual stars can only be directly observed in the Milky Way Galaxy, but in distant galaxies star formation has been detected through its unique spectral signature. Initial research indicates star-forming clumps start as giant, dense areas in turbulent gas-rich matter in young galaxies, live about 500 million years, and may migrate to the center of a galaxy, creating the central bulge of a galaxy. On February 21, 2014, NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. In February 2018, astronomers reported, for the first time, a signal of the reionization epoch, an indirect detection of light from the earliest stars formed – about 180 million years after the Big Bang. 
An article published on October 22, 2019, reported on the detection of 3MM-1, a massive star-forming galaxy about 12.5 billion light-years away that is obscured by clouds of dust. At a mass of about 10^10.8 solar masses, it showed a star formation rate about 100 times as high as in the Milky Way. Low mass and high mass star formation Stars of different masses are thought to form by slightly different mechanisms. The theory of low-mass star formation, which is well-supported by observation, suggests that low-mass stars form by the gravitational collapse of rotating density enhancements within molecular clouds. As described above, the collapse of a rotating cloud of gas and dust leads to the formation of an accretion disk through which matter is channeled onto a central protostar. For stars with masses higher than about 8 M☉, however, the mechanism of star formation is not well understood. Massive stars emit copious quantities of radiation which pushes against infalling material. In the past, it was thought that this radiation pressure might be substantial enough to halt accretion onto the massive protostar and prevent the formation of stars with masses more than a few tens of solar masses. Recent theoretical work has shown that the production of a jet and outflow clears a cavity through which much of the radiation from a massive protostar can escape without hindering accretion through the disk and onto the protostar. Present thinking is that massive stars may therefore be able to form by a mechanism similar to that by which low mass stars form. There is mounting evidence that at least some massive protostars are indeed surrounded by accretion disks. Disk accretion in high-mass protostars, similar to their low-mass counterparts, is expected to exhibit bursts of episodic accretion as a result of a gravitational instability leading to clumpy and discontinuous accretion rates. 
Accretion bursts in high-mass protostars have indeed been confirmed observationally in recent years. Several other theories of massive star formation remain to be tested observationally. Of these, perhaps the most prominent is the theory of competitive accretion, which suggests that massive protostars are "seeded" by low-mass protostars which compete with other protostars to draw in matter from the entire parent molecular cloud, instead of simply from a small local region. Another theory of massive star formation suggests that massive stars may form by the coalescence of two or more stars of lower mass. Filamentary nature of star formation Recent studies have emphasized the role of filamentary structures in molecular clouds as the initial conditions for star formation. Findings from the Herschel Space Observatory highlight the ubiquitous nature of these filaments in the cold interstellar medium (ISM). The spatial relationship between cores and filaments indicates that the majority of prestellar cores are located within 0.1 pc of supercritical filaments. This supports the hypothesis that filamentary structures act as pathways for the accumulation of gas and dust, leading to core formation. Both the core mass function (CMF) and filament line mass function (FLMF) observed in the California GMC follow power-law distributions at the high-mass end, consistent with the Salpeter initial mass function (IMF). Current results strongly support the existence of a connection between the FLMF and the CMF/IMF, demonstrating that this connection holds at the level of an individual cloud, specifically the California GMC. The FLMF presented is a distribution of local line masses for a complete, homogeneous sample of filaments within the same cloud. It is the local line mass of a filament that defines its ability to fragment at a particular location along its spine, not the average line mass of the filament. 
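The Salpeter IMF referenced above is a power law, dN/dM ∝ M^−2.35, at the high-mass end. A minimal sketch of what that slope implies follows; the mass limits are assumed purely for illustration, since the Salpeter form does not hold down to arbitrarily low masses.

```python
def salpeter_count(m_lo, m_hi, alpha=2.35):
    """Unnormalized number of stars with masses in [m_lo, m_hi] (solar masses)
    for a power-law IMF dN/dM = M**-alpha, integrated analytically."""
    a = 1.0 - alpha
    return (m_hi**a - m_lo**a) / a

# Fraction of stars above 8 M_sun (the massive-star threshold used in the
# text), assuming the power law holds from 0.5 to 100 M_sun:
frac_massive = salpeter_count(8, 100) / salpeter_count(0.5, 100)
print(f"Fraction of stars with M > 8 M_sun: {frac_massive:.3f}")
```

The steepness of the slope is the point: only a few percent of stars formed under these assumptions exceed 8 M☉, which is why massive star formation is comparatively rare and hard to observe.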
This connection is more direct and provides tighter constraints on the origin of the CMF/IMF. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy_(novel)] | [TOKENS: 2595] |
Contents The Hitchhiker's Guide to the Galaxy (novel) The Hitchhiker's Guide to the Galaxy is the first book in the Hitchhiker's Guide to the Galaxy comedy science fiction "trilogy of five books" by Douglas Adams with a sixth book written by Eoin Colfer. The novel is an adaptation of the first four parts of Adams's radio series of the same name, centring on the adventures of the only man to survive the destruction of Earth. While roaming outer space, he comes to learn the truth behind Earth's existence. The novel was first published in London on 12 October 1979. It sold 250,000 copies in the first three months. The namesake of the novel is The Hitchhiker's Guide to the Galaxy, a fictional guide book for hitchhikers (inspired by the Hitch-hiker's Guide to Europe) written in the form of an encyclopaedia. Plot summary The novel opens with an introduction describing the human race as a primitive and deeply unhappy species, while also introducing an electronic encyclopedia called the Hitchhiker's Guide to the Galaxy which provides information on every planet in the galaxy. Earthman and Englishman Arthur Dent awakens in his home in the West Country to discover that the local planning council is trying to demolish his house to build a bypass, and lies down in front of the bulldozer to stop it. His friend Ford Prefect convinces the lead bureaucrat to lie down in Arthur's stead so that he can take Arthur to the local pub. The construction crew begin demolishing the house anyway, but are interrupted by the sudden arrival of a fleet of spaceships. The Vogons, the callous race of civil servants running the fleet, announce that they have come to demolish Earth to make way for a hyperspace expressway, and promptly destroy the planet. Ford and Arthur survive by hitching a ride on the spaceship, much to Arthur's amazement. 
Ford reveals to Arthur that he is an alien researcher for the Hitchhiker's Guide to the Galaxy from a small planet in the vicinity of Betelgeuse, and that he has been posing as an out-of-work actor from Guildford for 15 years; this is why they were able to hitch a ride on the alien ship. They are quickly discovered by the Vogons, who torture them by forcing them to listen to their poetry and then toss them out of an airlock. Meanwhile, Zaphod Beeblebrox, Ford's "semi-cousin" and the President of the Galaxy, steals the spaceship Heart of Gold at its unveiling with his human companion, Trillian. The Heart of Gold is equipped with an "Infinite Improbability Drive" that allows it to travel instantaneously to any point in space by simultaneously passing through every point in the universe. However, the Infinite Improbability Drive has a side effect of causing impossible coincidences to occur in the physical universe. One of these improbable events occurs when Arthur and Ford are rescued by the Heart of Gold as it travels using the Infinite Improbability Drive. Zaphod takes his passengers—Arthur, Ford, a depressed robot named Marvin, and Trillian—to a legendary planet named Magrathea. Its inhabitants were said to have specialized in custom-building planets for others and to have vanished after becoming so rich that the rest of the galaxy became poor. Although Ford initially doubts that the planet is Magrathea, the planet's computers send them warning messages to leave before firing two nuclear missiles at the Heart of Gold. Arthur inadvertently saves them by activating the Infinite Improbability Drive improperly, which also opens an underground passage. As the ship lands, Trillian's pet mice Frankie and Benjy escape. On Magrathea, Zaphod, Ford, and Trillian venture down to the planet's interior while leaving Arthur and Marvin outside. 
In the tunnels, Zaphod reveals that his actions are not a result of his own decisions, but instead motivated by neural programming that he was seemingly involved in but has no memory of. As Zaphod explains how he discovered this, the trio are trapped and knocked out with sleeping gas. On the surface, Arthur is met by a resident of Magrathea, a man named Slartibartfast, who explains that the Magratheans have been in stasis to wait out an economic recession. They have temporarily reawakened to reconstruct a second version of Earth commissioned by mice, who were in fact the most intelligent species on Earth. Slartibartfast brings Arthur to Magrathea's planet construction facility, and shows Arthur that in the distant past, a race of "hyperintelligent, pan-dimensional beings" created a supercomputer named Deep Thought to determine the answer to the "Ultimate Question to Life, the Universe, and Everything." Deep Thought eventually found the answer to be 42, an answer that made no sense because the Ultimate Question itself was not known. Because determining the Ultimate Question was too difficult even for Deep Thought, an even more advanced supercomputer was constructed for this purpose. This computer was the planet Earth, which was constructed by the Magratheans, and was five minutes away from finishing its task and figuring out the Ultimate Question when the Vogons destroyed it. The hyperintelligent superbeings participated in the program as mice, performing experiments on humans while pretending to be experimented on. Slartibartfast takes Arthur to see his friends, who are at a feast hosted by Trillian's pet mice. The mice reject as unnecessary the idea of building a new Earth to start the process over, deciding that Arthur's brain likely contains the Ultimate Question. They offer to buy Arthur's brain, leading to a fight when he declines. 
The group manages to escape when the planet's security system goes off unexpectedly, but immediately run into the culprits: police in pursuit of Zaphod. The police corner Zaphod, Arthur, Ford and Trillian, and the situation seems desperate as they are trapped behind a computer bank that is about to explode from the officers' weapons firing. However, the police officers suddenly die when their life-support systems short-circuit. Suspicious, Ford discovers on the surface that Marvin became bored and explained his view of the universe to the police officers' spaceship, causing it to commit suicide. The five leave Magrathea and decide to go to The Restaurant at the End of the Universe. Illustrated edition The Illustrated Hitchhiker's Guide to the Galaxy is a specially designed book made in 1994. It was first printed in the United Kingdom by Weidenfeld & Nicolson and in the United States by Harmony Books (who sold it for $42.00). It is an oversized book, and came in silver-foil "holographic" covers in both the UK and US markets. It features the first appearance of the 42 Puzzle, designed by Adams himself, a photograph of Adams and his literary agent Ed Victor as the two space cops, and many other designs by Kevin Davies, who has participated in many Hitchhiker's related projects since the stage productions in the late 1970s. Davies himself appears as Prosser. This edition is out of print—Adams bought up many remainder copies and sold them, autographed, on his website. In other media There have been three audiobook recordings of the novel. The first was an abridged edition (ISBN 0-671-62964-6), recorded in the mid-1980s by Stephen Moore, best known for playing the voice of Marvin the Paranoid Android in the radio series, LP adaptations and in the TV series. 
In 1990, Adams himself recorded an unabridged edition for Dove Audiobooks (ISBN 1-55800-273-1), later re-released by New Millennium Audio (ISBN 1-59007-257-X) in the United States and available from BBC Audiobooks in the United Kingdom. Also by arrangement with Dove, ISIS Publishing Ltd produced a numbered exclusive edition signed by Douglas Adams (ISBN 1-85695-028-X) in 1994. To tie in with the 2005 film, actor Stephen Fry, the film's voice of the Guide, recorded a second unabridged edition (ISBN 0-7393-2220-6). The popularity of the radio series gave rise to a six-episode television series, directed and produced by Alan J. W. Bell, which first aired on BBC 2 in January and February 1981. It employed many of the actors from the radio series and was based mainly on the radio versions of Fits the First through Sixth. A second series was planned at one point with a storyline, according to Alan Bell and Mark Wing-Davey, that would have come from Adams’s abandoned Doctor Who and the Krikkitmen project (instead of simply making a TV version of the second radio series). However, Adams got into disputes with the BBC (accounts differ: problems with budget, scripts and having Alan Bell involved are all offered as causes) and the second series was never made. Elements of Doctor Who and the Krikkitmen were instead used in the third novel, Life, the Universe and Everything. The main cast was the same as the original radio series, except for David Dixon as Ford Prefect instead of McGivern and Sandra Dickinson as Trillian instead of Sheridan. The Hitchhiker's Guide to the Galaxy was adapted into a science fiction comedy film directed by Garth Jennings and released on 28 April 2005 in the UK, Australia and New Zealand and on the following day in the United States and Canada. It was rolled out to cinemas worldwide during May, June, July, August and September. 
Reception Greg Costikyan reviewed The Hitchhiker's Guide to the Galaxy in Ares Magazine #6 and commented that "The Hitchhiker's Guide is written with superb English wit, far more humorous than any American sitcom." The Pequod rated the book a 9.5 (out of 10.0) and called it "an ingeniously silly sci-fi satire... It may not add up to much but the jokes keep coming fast and furiously and its enormous cultural influence ("Don’t Panic," etc.) proves to be well-earned." C. J. Henderson reviewed The Hitchhiker's Guide to the Galaxy for Pegasus magazine and stated that "It is silly, happy, absurd stuff. It is the wildest funniest, sci fi novel ever. The only disappointing thing about it is that the reader is forced to wait for the next volume to come out to get more of the same." Other books The deliberately misnamed Hitchhiker's Guide to the Galaxy "Trilogy" consists of six books (a hexalogy). The word "trilogy" does not appear on the covers of the first three books and was not used until the publication of the fourth novel. The first five books of the series were written by Adams: Irish author Eoin Colfer continued the series with the sixth novel And Another Thing... published in October 2009, the 30th anniversary of the first novel's publication. Legacy The "Babel fish", a creature used in the novel that feeds on brainwaves and can instantly translate alien languages, inspired the name of Babel Fish, the first free online language translator, which launched in 1997. Radiohead's song "Paranoid Android" (1997) was named after Marvin's nickname, and the album it appears on, OK Computer, is part of a line spoken by Zaphod to Eddie, the Heart of Gold's computer. The Trillian instant messaging app (2000–) was named after the character. Towel Day is celebrated every year on 25 May as a tribute to Douglas Adams by his fans. On this day, fans openly carry a towel with them to demonstrate their appreciation for Adams and the book series. 
The commemoration was first held 25 May 2001, two weeks after Adams' death on 11 May. When Elon Musk's Tesla Roadster was launched into space on the maiden flight of the Falcon Heavy rocket in February 2018, it had the words DON'T PANIC on the dashboard display and carried amongst other items a copy of the novel and a towel. In 2022, the novel was included on the "Big Jubilee Read" list of 70 books by Commonwealth authors, selected to celebrate the Platinum Jubilee of Elizabeth II. The popularity of the novel (and of the entire Hitchhiker's franchise) has contributed to many homages and references involving the number 42, ranging from numerous mentions in pop culture to frequent mentions of the novel whenever the number 42 is addressed in Western media. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Inbreeding] | [TOKENS: 5408] |
Contents Inbreeding Inbreeding is the production of offspring from the mating or breeding of individuals or organisms that are closely related genetically. By analogy, the term is used in human reproduction, but more commonly refers to the genetic disorders and other consequences that may arise from expression of deleterious recessive traits resulting from incestuous sexual relationships and consanguinity. Inbreeding results in homozygosity, which can increase the chances of offspring being affected by recessive traits. In extreme cases, this usually leads to at least temporarily decreased biological fitness of a population (called inbreeding depression), that is, its ability to survive and reproduce. An individual who inherits such deleterious traits is colloquially referred to as inbred. The avoidance of expression of such deleterious recessive alleles caused by inbreeding, via inbreeding avoidance mechanisms, is the main selective reason for outcrossing. Crossbreeding between populations sometimes has positive effects on fitness-related traits, but also sometimes leads to negative effects known as outbreeding depression. However, increased homozygosity increases the probability of fixing beneficial alleles and also slightly decreases the probability of fixing deleterious alleles in a population. Inbreeding can result in purging of deleterious alleles from a population through purifying selection. Inbreeding is a technique used in selective breeding. For example, in livestock breeding, breeders may use inbreeding when trying to establish a new and desirable trait in the stock and for producing distinct families within a breed, but will need to watch for undesirable characteristics in offspring, which can then be eliminated through further selective breeding or culling. Inbreeding also helps to ascertain the type of gene action affecting a trait. 
Inbreeding is also used to reveal deleterious recessive alleles, which can then be eliminated through assortative breeding or through culling. In plant breeding, inbred lines are used as stocks for the creation of hybrid lines to make use of the effects of heterosis. Inbreeding in plants also occurs naturally in the form of self-pollination. Inbreeding can significantly influence gene expression, which can prevent inbreeding depression. Overview Offspring of biologically related persons are subject to the possible effects of inbreeding, such as congenital birth defects. The chances of such disorders are increased when the biological parents are more closely related. This is because such pairings have a 25% probability of producing homozygous zygotes, resulting in offspring with two recessive alleles, which can produce disorders when these alleles are deleterious. Because most recessive alleles are rare in populations, it is unlikely that two unrelated partners will both be carriers of the same deleterious allele; however, because close relatives share a large fraction of their alleles, the probability that any such deleterious allele is inherited from the common ancestor through both parents is increased dramatically. For each homozygous recessive individual formed, there is an equal chance of producing a homozygous dominant individual, one completely devoid of the harmful allele. Contrary to common belief, inbreeding does not in itself alter allele frequencies, but rather increases the relative proportion of homozygotes to heterozygotes; however, because the increased proportion of deleterious homozygotes exposes the allele to natural selection, in the long run its frequency decreases more rapidly in inbred populations. In the short term, incestuous reproduction is expected to increase the number of spontaneous abortions of zygotes, perinatal deaths, and postnatal offspring with birth defects. 
The advantages of inbreeding may be the result of a tendency to preserve the structures of alleles interacting at different loci that have been adapted together by a common selective history. Malformations or harmful traits can stay within a population due to a high homozygosity rate, and this will cause a population to become fixed for certain traits, such as having too many bones in an area, as in the vertebral column of wolves on Isle Royale, or cranial abnormalities, such as in Northern elephant seals, where the cranial bone length in the lower mandibular tooth row has changed. Having a high homozygosity rate is problematic for a population because it will unmask recessive deleterious alleles generated by mutations, reduce heterozygote advantage, and is detrimental to the survival of small, endangered animal populations. When deleterious recessive alleles are unmasked due to the increased homozygosity generated by inbreeding, this can cause inbreeding depression. There may also be other deleterious effects besides those caused by recessive diseases: for example, individuals with similar immune systems may be more vulnerable to infectious diseases (see Major histocompatibility complex and sexual selection). Inbreeding history of the population should also be considered when discussing the variation in the severity of inbreeding depression between and within species. With persistent inbreeding, there is evidence that inbreeding depression becomes less severe. This is associated with the unmasking and elimination of severely deleterious recessive alleles. However, inbreeding depression is not a temporary phenomenon, because this elimination of deleterious recessive alleles will never be complete. Eliminating slightly deleterious mutations through inbreeding under moderate selection is not as effective. Fixation of alleles most likely occurs through Muller's ratchet, when an asexual population's genome accumulates deleterious mutations that are irreversible. 
Despite all its disadvantages, inbreeding can also have a variety of advantages, such as ensuring that a child produced from the mating contains, and will pass on, a higher percentage of its mother's/father's genetics, reducing the recombination load, and allowing the expression of recessive advantageous phenotypes. Some species with a haplodiploid mating system depend on the ability to produce sons to mate with as a means of ensuring that a mate can be found if no other male is available. It has been proposed that under circumstances when the advantages of inbreeding outweigh the disadvantages, preferential breeding within small groups could be promoted, potentially leading to speciation. Genetic disorders Autosomal recessive disorders occur in individuals who have two copies of an allele for a particular recessive genetic mutation. Except in certain rare circumstances, such as new mutations or uniparental disomy, both parents of an individual with such a disorder will be carriers of the gene. These carriers do not display any signs of the mutation and may be unaware that they carry the mutated gene. Since relatives share a higher proportion of their genes than do unrelated people, it is more likely that related parents will both be carriers of the same recessive allele, and therefore their children are at a higher risk of inheriting an autosomal recessive genetic disorder. The extent to which the risk increases depends on the degree of genetic relationship between the parents; the risk is greater when the parents are close relatives and lower for relationships between more distant relatives, such as second cousins, though still greater than for the general population. Children of parent-child or sibling-sibling unions are at an increased risk compared to cousin-cousin unions. Inbreeding may result in a greater than expected phenotypic expression of deleterious recessive alleles within a population. 
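The increased risk described above can be quantified with a standard population-genetics identity: with inbreeding coefficient F, the frequency of recessive homozygotes rises from q² to q² + Fq(1−q). The allele frequency below is an assumed value for illustration; the formula itself is standard, not taken from this article.

```python
def recessive_risk(q, F):
    """Probability that an offspring is homozygous for a recessive allele
    of population frequency q, given inbreeding coefficient F:
    q**2 + F*q*(1-q). F = 0 recovers the Hardy-Weinberg value q**2."""
    return q * q + F * q * (1.0 - q)

q = 0.01                              # assumed allele frequency of 1%
baseline = recessive_risk(q, 0.0)     # unrelated parents
cousins = recessive_risk(q, 1 / 16)   # offspring of first cousins (F = 1/16)
print(f"risk ratio (first cousins vs unrelated): {cousins / baseline:.1f}x")
```

The rarer the allele, the larger this relative increase, which matches the text's point that relatedness matters most for rare deleterious recessives.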
As a result, first-generation inbred individuals are more likely to show physical and health defects. The isolation of a small population for a period of time can lead to inbreeding within that population, resulting in increased genetic relatedness between breeding individuals. Inbreeding depression can also occur in a large population if individuals tend to mate with their relatives, instead of mating randomly. Due to higher prenatal and postnatal mortality rates, some individuals in the first generation of inbreeding will not live on to reproduce. Over time, with isolation, such as a population bottleneck caused by purposeful (assortative) breeding or natural environmental factors, the deleterious inherited traits are culled. Island species are often very inbred, as their isolation from the larger group on a mainland allows natural selection to work on their population. This type of isolation may result in the formation of race or even speciation, as the inbreeding first removes many deleterious genes, and permits the expression of genes that allow a population to adapt to an ecosystem. As the adaptation becomes more pronounced, the new species or race radiates from its entrance into the new space, or dies out if it cannot adapt and, most importantly, reproduce. The reduced genetic diversity, for example due to a bottleneck, will unavoidably increase inbreeding for the entire population. This may mean that a species may not be able to adapt to changes in environmental conditions. Each individual will have a similar immune system, as immune systems are genetically based. When a species becomes endangered, the population may fall below a minimum whereby the forced interbreeding between the remaining animals will result in extinction. Natural breedings include inbreeding by necessity, and most animals only migrate when necessary. 
In many cases, the closest available mate is a mother, sister, grandmother, father, brother, or grandfather. In all cases, the environment presents stresses to remove from the population those individuals who cannot survive because of illness. There was once an assumption that wild populations do not inbreed; this is not what is observed in some cases in the wild. However, in species such as horses, animals in wild or feral conditions often drive off the young of both sexes, thought to be a mechanism by which the species instinctively avoids some of the genetic consequences of inbreeding. In general, many mammal species, including humanity's closest primate relatives, avoid close inbreeding, possibly due to its deleterious effects. Although there are several examples of inbred populations of wild animals, the negative consequences of this inbreeding are poorly documented. In the South American sea lion, there was concern that recent population crashes would reduce genetic diversity. Historical analysis indicated that a population expansion from just two matrilineal lines was responsible for most of the individuals within the population. Even so, the diversity within the lines allowed great variation in the gene pool that may help to protect the South American sea lion from extinction. In lions, prides are often followed by related males in bachelor groups. When the dominant male is killed or driven off by one of these bachelors, a father may be replaced by his son. There is no mechanism for preventing inbreeding or ensuring outcrossing. In the prides, most lionesses are related to one another. If there is more than one dominant male, the group of alpha males is usually related. Two lines are then being "line bred". Also, in some populations, such as the Crater lions, it is known that a population bottleneck has occurred. Researchers found far greater genetic heterozygosity than expected. 
In fact, predators are known for low genetic variance, along with most of the top portion of the trophic levels of an ecosystem. Additionally, the alpha males of two neighboring prides can be from the same litter; one brother may come to acquire leadership over another's pride, and subsequently mate with his 'nieces' or cousins. However, killing another male's cubs upon the takeover allows the newly selected gene complement of the incoming alpha male to prevail over that of the previous male. There are genetic assays being scheduled for lions to determine their genetic diversity. The preliminary studies show results inconsistent with the outcrossing paradigm based on individual environments of the studied groups. In Central California, sea otters were thought to have been driven to extinction due to over-hunting, until a small colony was discovered in the Point Sur region in the 1930s. Since then, the population has grown and spread along the central Californian coast to around 2,000 individuals, a level that has remained stable for over a decade. Population growth is limited by the fact that all Californian sea otters are descended from the isolated colony, resulting in inbreeding. Cheetahs are another example of an inbred species. Thousands of years ago, the cheetah went through a population bottleneck that reduced its population dramatically, so the animals that are alive today are all related to one another. A consequence of inbreeding for this species has been high juvenile mortality, low fecundity, and poor breeding success. In a study on an island population of song sparrows, individuals that were inbred showed significantly lower survival rates than outbred individuals during a severe winter-weather-related population crash. These studies show that inbreeding depression and ecological factors have an influence on survival. The Florida panther population was reduced to about 30 animals, so inbreeding became a problem. 
Several females were imported from Texas and now the population is better off genetically. Measures A measure of inbreeding of an individual A is the probability F(A) that both alleles in one locus are derived from the same allele in an ancestor. These two identical alleles that are both derived from a common ancestor are said to be identical by descent. This probability F(A) is called the "coefficient of inbreeding". Another useful measure that describes the extent to which two individuals are related (say individuals A and B) is their coancestry coefficient f(A,B), which gives the probability that one randomly selected allele from A and another randomly selected allele from B are identical by descent. This is also denoted as the kinship coefficient between A and B. A particular case is the self-coancestry of individual A with itself, f(A,A), which is the probability that taking one random allele from A and then, independently and with replacement, another random allele also from A, both are identical by descent. Since they can be identical by descent by sampling the same allele or by sampling both alleles that happen to be identical by descent, we have f(A,A) = 1/2 + F(A)/2. Both the inbreeding and the coancestry coefficients can be defined for specific individuals or as average population values. They can be computed from genealogies or estimated from the population size and its breeding properties, but all methods assume no selection and are limited to neutral alleles.[citation needed] There are several methods to compute these coefficients; the two main ones are the path method and the tabular method. Typical coancestries between relatives (assuming no prior inbreeding) are as follows: parent and offspring, 1/4; full siblings, 1/4; half siblings, 1/8; uncle or aunt and nephew or niece, 1/8; first cousins, 1/16. Animals Breeding in domestic animals is primarily assortative breeding (see selective breeding). Without the sorting of individuals by trait, a breed could not be established, nor could poor genetic material be removed.
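The coancestry definitions above lend themselves to a short recursive computation over a pedigree. The following sketch is illustrative only (the pedigree, individual names, and function names are hypothetical): it assumes founders are unrelated and non-inbred, uses the identity f(A,A) = 1/2 + F(A)/2, and averages the kinships of the younger individual's parents with the other individual, which is the core of the tabular method.

```python
from functools import lru_cache

# Hypothetical example pedigree (names are illustrative only).
# Each individual maps to (sire, dam); founders map to (None, None).
PEDIGREE = {
    "F1": (None, None), "F2": (None, None),
    "S":  ("F1", "F2"),          # S and D are full siblings
    "D":  ("F1", "F2"),
    "X":  ("S", "D"),            # offspring of a full-sib mating
}
ORDER = {name: i for i, name in enumerate(PEDIGREE)}  # parents listed first

@lru_cache(maxsize=None)
def kinship(a, b):
    """Coancestry coefficient f(a, b); founders assumed unrelated and non-inbred."""
    if a is None or b is None:
        return 0.0
    if a == b:
        sire, dam = PEDIGREE[a]
        return 0.5 + 0.5 * kinship(sire, dam)   # f(A,A) = 1/2 + F(A)/2
    if ORDER[a] < ORDER[b]:                      # recurse on the younger individual
        a, b = b, a
    sire, dam = PEDIGREE[a]
    return 0.5 * (kinship(sire, b) + kinship(dam, b))

def inbreeding(x):
    """F(X) equals the coancestry of X's parents."""
    sire, dam = PEDIGREE[x]
    return kinship(sire, dam)

print(kinship("S", "D"))   # full siblings: 0.25
print(inbreeding("X"))     # offspring of full sibs: F = 0.25
```

Run on this toy pedigree, the function reproduces the standard values: full siblings have coancestry 1/4, so their offspring has an inbreeding coefficient of 1/4.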
Homozygosity is the case where identical alleles combine to express a trait that is not otherwise expressed (recessiveness). Inbreeding exposes recessive alleles through increasing homozygosity. Breeders must avoid breeding from individuals that demonstrate either homozygosity or heterozygosity for disease-causing alleles. The goal of preventing the transfer of deleterious alleles may be achieved by reproductive isolation, sterilization, or, in the extreme case, culling. Culling is not strictly necessary if genetics is the only issue in small or domestic animals: sterilization and fertility control are effective and often preferable. For large agricultural animals, like cattle, culling is routinely used as the primary economic method to remove less desirable animals rather than sterilizing them. The issue of casual breeders who inbreed irresponsibly is discussed in the following quotation on cattle: Meanwhile, milk production per cow per lactation increased from 17,444 lbs to 25,013 lbs from 1978 to 1998 for the Holstein breed. Mean breeding values for milk of Holstein cows increased by 4,829 lbs during this period. High producing cows are increasingly difficult to breed and are subject to higher health costs than cows of lower genetic merit for production (Cassell, 2001). Intensive selection for higher yield has increased relationships among animals within breed and increased the rate of casual inbreeding. Many of the traits that affect profitability in crosses of modern dairy breeds have not been studied in designed experiments. Indeed, all crossbreeding research involving North American breeds and strains is very dated (McAllister, 2001) if it exists at all. As a result of long-term cooperation between USDA and dairy farmers which led to a revolution in dairy cattle productivity, the United States has since 1992 been the world's largest supplier of dairy bull semen.
However, US genomic technology has resulted in the US dairy cattle population becoming "the most inbred it's ever been" and the rate of increase in US national milk yield has tapered off. Efforts are now being made to identify desirable genes in cattle breeds not yet optimized by US dairy breeders in order to apply hybrid vigor to the US dairy cattle population and thus propel US dairy technology to even higher levels of productivity.[citation needed] The BBC produced two documentaries on dog inbreeding titled Pedigree Dogs Exposed and Pedigree Dogs Exposed: Three Years On that document the negative health consequences of excessive inbreeding.[citation needed] Linebreeding is a form of inbreeding. There is no clear distinction between the two terms[dubious – discuss], but linebreeding may encompass crosses between individuals and their descendants or between two cousins. This method can be used to increase a particular animal's contribution to the population. While linebreeding is less likely to cause problems in the first generation than inbreeding is, over time it can reduce the genetic diversity of a population and cause problems related to a too-small gene pool, including an increased prevalence of genetic disorders and inbreeding depression. Outcrossing is where two unrelated individuals are crossed to produce progeny. In outcrossing, unless there is verifiable genetic information, one may find that all individuals are distantly related to an ancient progenitor. If the trait carries throughout a population, all individuals can have this trait. This is called the founder effect. In well-established breeds that are commonly bred, a large gene pool is present. For example, in 2004, over 18,000 Persian cats were registered. A complete outcross is possible if no barriers to breeding exist between the individuals. However, this is not always the case, and a form of distant linebreeding occurs.
Again, it is up to the assortative breeder to know what sorts of traits, both positive and negative, exist within the diversity of one breeding. This diversity of genetic expression, within even close relatives, increases the variability and diversity of viable stock.[citation needed] Systematic inbreeding and maintenance of inbred strains of laboratory mice and rats is of great importance for biomedical research. The inbreeding guarantees a consistent and uniform animal model for experimental purposes and enables genetic studies in congenic and knock-out animals. In order to achieve a mouse strain that is considered inbred, a minimum of 20 sequential generations of sibling matings must occur. With each successive generation of breeding, homozygosity across the entire genome increases, eliminating heterozygous loci. After 20 generations of sibling matings, homozygosity occurs at roughly 98.7% of all loci in the genome, allowing these offspring to serve as animal models for genetic studies. The use of inbred strains is also important for genetic studies in animal models, for example to distinguish genetic from environmental effects. Inbred mice typically show considerably lower survival rates.[citation needed] Humans Inbreeding increases homozygosity, which can increase the chances of the expression of deleterious or beneficial recessive alleles and therefore has the potential to either decrease or increase the fitness of the offspring. Depending on the rate of inbreeding, natural selection may still be able to eliminate deleterious alleles. With continuous inbreeding, genetic variation is lost and homozygosity is increased, enabling the expression of recessive deleterious alleles in homozygotes. The coefficient of inbreeding, or the degree of inbreeding in an individual, is an estimate of the percentage of homozygous alleles in the overall genome.
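The roughly 98.7% figure cited above for 20 generations of sibling matings can be checked against Wright's classic recurrence for repeated full-sib mating, F_t = (1 + 2F_{t-1} + F_{t-2}) / 4, with F_0 = 0 and F_1 = 1/4. A minimal sketch (the function name is illustrative):

```python
def fullsib_inbreeding(generations: int) -> float:
    """Inbreeding coefficient after t generations of brother-sister mating,
    via the recurrence F_t = (1 + 2*F_{t-1} + F_{t-2}) / 4."""
    f_prev2, f_prev1 = 0.0, 0.25      # F_0 = 0, F_1 = 1/4
    if generations == 0:
        return f_prev2
    for _ in range(generations - 1):
        f_prev2, f_prev1 = f_prev1, (1 + 2 * f_prev1 + f_prev2) / 4
    return f_prev1

print(round(fullsib_inbreeding(20), 3))  # 0.986
```

The recurrence gives F ≈ 0.986 at generation 20, in line with the ~98.7% homozygosity cited above (the exact percentage also depends on the starting heterozygosity of the founding pair).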
The more biologically related the parents are, the greater the coefficient of inbreeding, since their genomes already share many similarities. This overall homozygosity becomes an issue when there are deleterious recessive alleles in the gene pool of the family. By pairing chromosomes of similar genomes, the chance for these recessive alleles to pair and become homozygous greatly increases, leading to offspring with autosomal recessive disorders. However, these deleterious effects are common in the offspring of very close relatives but not in those related at the third-cousin level or beyond, who exhibit increased fitness. Inbreeding is especially problematic in small populations, where genetic variation is already limited. By inbreeding, individuals further decrease genetic variation by increasing homozygosity in the genomes of their offspring. Thus, the likelihood that deleterious recessive alleles will pair is significantly higher in a small inbreeding population than in a larger one. The fitness consequences of consanguineous mating have been studied since their scientific recognition by Charles Darwin in 1839. Some of the most harmful effects known from such breeding include its effects on the mortality rate as well as on the general health of the offspring. Since the 1960s, there have been many studies supporting such debilitating effects on the human organism. Specifically, inbreeding has been found to decrease fertility as a direct result of increasing homozygosity of deleterious recessive alleles. Fetuses produced by inbreeding also face a greater risk of spontaneous abortion due to inherent complications in development. Among mothers who experience stillbirths and early infant deaths, those in consanguineous unions have a significantly higher chance of the same outcomes with future offspring. Additionally, consanguineous parents possess a high risk of premature birth and of producing underweight and undersized infants.
Viable inbred offspring are also likely to be afflicted with physical deformities and genetically inherited diseases. Studies have confirmed an increase in several genetic disorders due to inbreeding, such as blindness, hearing loss, neonatal diabetes, limb malformations, disorders of sex development, schizophrenia and several others. Moreover, there is an increased risk for congenital heart disease depending on the inbreeding coefficient of the offspring, with risk rising significantly at higher coefficients. The general negative outlook on and eschewal of inbreeding that is prevalent in the Western world today has roots going back more than 2,000 years. Specifically, written documents such as the Bible illustrate that there have been laws and social customs calling for abstention from inbreeding. Along with cultural taboos, parental education and awareness of inbreeding consequences have played large roles in minimizing inbreeding frequencies in areas like Europe. Even so, some less urbanized and less populated regions across the world have shown continuity in the practice of inbreeding.[citation needed] The practice continues either by choice or unavoidably, due to the limitations of the geographical area. When by choice, the rate of consanguinity is highly dependent on religion and culture. In the Western world, some Anabaptist groups are highly inbred because they originate from small founder populations that have bred as a closed population. Of the practicing regions, Middle Eastern and northern African nations show the greatest frequencies of consanguinity. Among these populations with high levels of inbreeding, researchers have found several disorders prevalent among inbred offspring. In Lebanon, Saudi Arabia, Egypt, and Israel, the offspring of consanguineous relationships have an increased risk of congenital malformations, congenital heart defects, congenital hydrocephalus and neural tube defects.
Furthermore, among inbred children in Palestine and Lebanon, there is a positive association between consanguinity and reported cleft lip/palate cases. Historically, populations of Qatar have engaged in consanguineous relationships of all kinds, leading to a high risk of inheriting genetic diseases. As of 2014, around 5% of the Qatari population suffered from hereditary hearing loss; most were descendants of a consanguineous relationship. In 2017–2019, congenital anomalies due to inbreeding were the most common cause of death of babies belonging to the Pakistani and Bangladeshi ethnic groups in England and Wales. Inter-nobility marriage was used as a method of forming political alliances among elites. These ties were often sealed only upon the birth of progeny within the arranged marriage. Thus marriage was seen as a union of lines of nobility and not as a contract between individuals.[citation needed] Royal intermarriage was often practiced among European royal families, usually for interests of state. Over time, due to the relatively limited number of potential consorts, the gene pool of many ruling families grew progressively smaller, until all European royalty was related. This also resulted in many being descended from a certain person through many lines of descent, such as the numerous European royalty and nobility descended from the British Queen Victoria or King Christian IX of Denmark. The House of Habsburg was known for its intermarriages; the Habsburg lip is often cited as an ill effect. The closely related houses of Habsburg, Bourbon, Braganza and Wittelsbach also frequently engaged in first-cousin unions, as well as occasional double-cousin and uncle–niece marriages.[citation needed] In ancient Egypt, royal women were believed to carry the bloodlines, and so it was advantageous for a pharaoh to marry his sister or half-sister; in such cases a special combination of endogamy and polygamy is found.
Normally, the old ruler's eldest son and daughter (who could be either siblings or half-siblings) became the new rulers. All rulers of the Ptolemaic dynasty from Ptolemy IV onward (Ptolemy II had married his sister but had no issue by her) were married to their brothers and sisters, so as to keep the Ptolemaic blood "pure" and to strengthen the line of succession. King Tutankhamun's mother is reported to have been the half-sister of his father. Cleopatra VII (also called Cleopatra VI) and Ptolemy XIII, who married and became co-rulers of ancient Egypt following their father's death, are the most widely known example.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Telescope] | [TOKENS: 2324] |
Telescope A telescope is a device used to observe distant objects by their emission, absorption, or reflection of electromagnetic radiation. Originally, it was an optical instrument using lenses, curved mirrors, or a combination of both to observe distant objects – an optical telescope. Nowadays, the word "telescope" refers to a wide range of instruments capable of detecting different regions of the electromagnetic spectrum, and in some cases signals of other types. The first known practical telescopes were refracting telescopes with glass lenses and were invented in the Netherlands at the beginning of the 17th century. They were used for both terrestrial applications and astronomy. The reflecting telescope, which uses mirrors to collect and focus light, was invented within a few decades of the first refracting telescope. In the 20th century, many new types of telescopes were invented, including radio telescopes in the 1930s and infrared telescopes in the 1960s. Etymology The word telescope was coined in 1611 by the Greek mathematician Giovanni Demisiani for one of Galileo Galilei's instruments presented at a banquet at the Accademia dei Lincei. In the Starry Messenger, Galileo had used the Latin term perspicillum. The root of the word is from the Ancient Greek τῆλε, tele 'far' and σκοπεῖν, skopein 'to look or see'; τηλεσκόπος, teleskopos 'far-seeing'. History The earliest existing record of a telescope is a 1608 patent submitted to the government of the Netherlands by Middelburg spectacle maker Hans Lipperhey for a refracting telescope. The actual inventor is unknown, but word of the device spread through Europe. Galileo heard about it and, in 1609, built his own version and made his telescopic observations of celestial objects. The idea that the objective, or light-gathering element, could be a mirror instead of a lens was being investigated soon after the invention of the refracting telescope.
The potential advantages of using parabolic mirrors—reduction of spherical aberration and no chromatic aberration—led to many proposed designs and several attempts to build reflecting telescopes. In 1668, Isaac Newton built the first practical reflecting telescope, of a design which now bears his name, the Newtonian reflector. The invention of the achromatic lens in 1733 partially corrected the color aberrations present in the simple lens and enabled the construction of shorter, more functional refracting telescopes. Reflecting telescopes, though not limited by the color problems seen in refractors, were hampered by the fast-tarnishing speculum metal mirrors employed during the 18th and early 19th century—a problem alleviated by the introduction of silver-coated glass mirrors in 1857 and aluminized mirrors in 1932. The maximum physical size limit for refracting telescopes is about 1 meter (39 inches), dictating that the vast majority of large optical research telescopes built since the turn of the 20th century have been reflectors. The largest reflecting telescopes currently have objectives larger than 10 meters (33 feet), and work is underway on several 30–40 m designs. The 20th century also saw the development of telescopes that work in a wide range of wavelengths from radio to gamma rays. The first purpose-built radio telescope went into operation in 1937. Since then, a large variety of complex astronomical instruments have been developed. In space Since the atmosphere is opaque to most of the electromagnetic spectrum, only a few bands can be observed from the Earth's surface. These bands are the visible to near-infrared and a portion of the radio-wave part of the spectrum. For this reason there are no X-ray or far-infrared ground-based telescopes, as these wavelengths must be observed from orbit.
Even if a wavelength is observable from the ground, it might still be advantageous to place a telescope on a satellite due to issues such as clouds, astronomical seeing and light pollution. The disadvantages of launching a space telescope include cost, size, maintainability and upgradability. Some examples of space telescopes from NASA are the Hubble Space Telescope, which detects visible light, ultraviolet, and near-infrared wavelengths; the Spitzer Space Telescope, which detects infrared radiation; and the Kepler Space Telescope, which discovered thousands of exoplanets. The most recently launched is the James Webb Space Telescope, launched on 25 December 2021 from Kourou, French Guiana. The Webb telescope detects infrared light. By electromagnetic spectrum The name "telescope" covers a wide range of instruments. Most detect electromagnetic radiation, but there are major differences in how astronomers must go about collecting light (electromagnetic radiation) in different frequency bands. As wavelengths become longer, it becomes easier to use antenna technology to interact with electromagnetic radiation (although it is possible to make very tiny antennas). The near-infrared can be collected much like visible light; however, in the far-infrared and submillimetre range, telescopes can operate more like a radio telescope. For example, the James Clerk Maxwell Telescope observes at wavelengths from 3 μm (0.003 mm) to 2000 μm (2 mm), but uses a parabolic aluminum antenna. On the other hand, the Spitzer Space Telescope, observing from about 3 μm (0.003 mm) to 180 μm (0.18 mm), uses a mirror (reflecting optics). Also using reflecting optics, the Hubble Space Telescope with Wide Field Camera 3 can observe in the wavelength range from about 0.2 μm (0.0002 mm) to 1.7 μm (0.0017 mm) (from ultraviolet to infrared light). For photons of shorter wavelengths and higher frequencies, glancing-incidence optics are used rather than fully reflecting optics.
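As a rough aid to the band comparisons above, wavelength and frequency are related by ν = c/λ; a minimal sketch (the function name is illustrative):

```python
# Convert the wavelengths quoted above to frequencies via nu = c / lambda.
C = 299_792_458.0  # speed of light, m/s

def frequency_thz(wavelength_m: float) -> float:
    """Frequency in terahertz for a given wavelength in metres."""
    return C / wavelength_m / 1e12

print(frequency_thz(3e-6))     # ~100 THz (near/mid-infrared end of the JCMT range)
print(frequency_thz(2000e-6))  # ~0.15 THz (submillimetre/radio end)
```

This makes concrete why the two ends of a range like 3 μm to 2000 μm call for such different collecting technology: the frequencies involved differ by nearly three orders of magnitude.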
Telescopes such as TRACE and SOHO use special mirrors to reflect extreme ultraviolet, producing higher resolution and brighter images than are otherwise possible. A larger aperture does not just mean that more light is collected, it also enables a finer angular resolution. Telescopes may also be classified by location: ground telescope, space telescope, or flying telescope. They may also be classified by whether they are operated by professional astronomers or amateur astronomers. A vehicle or permanent campus containing one or more telescopes or other instruments is called an observatory. Radio telescopes are directional radio antennas that typically employ a large dish to collect radio waves. The dishes are sometimes constructed of a conductive wire mesh whose openings are smaller than the wavelength being observed. Unlike an optical telescope, which produces a magnified image of the patch of sky being observed, a traditional radio telescope dish contains a single receiver and records a single time-varying signal characteristic of the observed region; this signal may be sampled at various frequencies. In some newer radio telescope designs, a single dish contains an array of several receivers; this is known as a focal-plane array. By collecting and correlating signals simultaneously received by several dishes, high-resolution images can be computed. Such multi-dish arrays are known as astronomical interferometers and the technique is called aperture synthesis. The 'virtual' apertures of these arrays are similar in size to the distance between the telescopes. As of 2005, the record array size is many times the diameter of the Earth – using space-based very-long-baseline interferometry (VLBI) telescopes such as the Japanese HALCA (Highly Advanced Laboratory for Communications and Astronomy) VSOP (VLBI Space Observatory Program) satellite. 
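The link between aperture and angular resolution noted above is usually quantified with the Rayleigh criterion, θ ≈ 1.22 λ/D. A minimal sketch (the function name is illustrative):

```python
import math

def rayleigh_resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution (Rayleigh criterion),
    theta = 1.22 * lambda / D, converted from radians to arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

# Visible light (550 nm) through a 10 m reflector:
print(rayleigh_resolution_arcsec(550e-9, 10.0))  # ~0.014 arcsec
# The same wavelength through a 0.1 m refractor:
print(rayleigh_resolution_arcsec(550e-9, 0.1))   # ~1.4 arcsec
```

The same formula explains why interferometers help so much: substituting the baseline between dishes for D shrinks θ in proportion, which is why the 'virtual' apertures described above are taken to be the distance between the telescopes.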
Aperture synthesis is now also being applied to optical telescopes using optical interferometers (arrays of optical telescopes) and aperture-masking interferometry at single reflecting telescopes. Radio telescopes are also used to collect microwave radiation, which has the advantage of being able to pass through the atmosphere and interstellar gas and dust clouds. Some radio telescopes such as the Allen Telescope Array are used by programs such as SETI and the Arecibo Observatory to search for extraterrestrial life. An optical telescope gathers and focuses light mainly from the visible part of the electromagnetic spectrum. Optical telescopes increase the apparent angular size of distant objects as well as their apparent brightness. To allow the image to be observed, photographed, studied, and sent to a computer, telescopes employ one or more curved optical elements, usually made from glass lenses and/or mirrors, to gather light and other electromagnetic radiation and bring that light or radiation to a focal point. Optical telescopes are used for astronomy and in many non-astronomical instruments, including: theodolites (including transits), spotting scopes, monoculars, binoculars, camera lenses, and spyglasses. There are three main optical types: the refractor, which uses lenses; the reflector, which uses mirrors; and the catadioptric, which combines both. A Fresnel imager is a proposed ultra-lightweight design for a space telescope that uses a Fresnel lens to focus light. Beyond these basic optical types there are many sub-types of varying optical design classified by the task they perform, such as astrographs, comet seekers and solar telescopes. Most ultraviolet light is absorbed by the Earth's atmosphere, so observations at these wavelengths must be performed from the upper atmosphere or from space. X-rays are much harder to collect and focus than electromagnetic radiation of longer wavelengths.
X-ray telescopes can use X-ray optics, such as Wolter telescopes composed of ring-shaped 'glancing' mirrors made of heavy metals that reflect the rays at grazing angles of just a few degrees. The mirrors are usually a section of a rotated parabola and a hyperbola, or ellipse. In 1952, Hans Wolter outlined three ways a telescope could be built using only this kind of mirror. Examples of space observatories using this type of telescope are the Einstein Observatory, ROSAT, and the Chandra X-ray Observatory. In 2012, the NuSTAR X-ray telescope was launched; it uses Wolter-design optics at the end of a long deployable mast to observe photon energies up to 79 keV. Higher-energy X-ray and gamma-ray telescopes refrain from focusing completely and use coded aperture masks: the patterns of the shadow the mask creates can be reconstructed to form an image. X-ray and gamma-ray telescopes are usually installed on high-flying balloons or Earth-orbiting satellites, since the Earth's atmosphere is opaque to this part of the electromagnetic spectrum. An example of this type of telescope is the Fermi Gamma-ray Space Telescope, which was launched in June 2008. The detection of very high energy gamma rays, with shorter wavelength and higher frequency than regular gamma rays, requires further specialization. Such detections can be made either with Imaging Atmospheric Cherenkov Telescopes (IACTs) or with Water Cherenkov Detectors (WCDs). Examples of IACTs are H.E.S.S. and VERITAS, with the next-generation gamma-ray telescope, the Cherenkov Telescope Array (CTA), currently under construction. HAWC and LHAASO are examples of gamma-ray detectors based on Water Cherenkov Detectors. A discovery in 2012 may allow focusing gamma-ray telescopes: at photon energies greater than 700 keV, the index of refraction starts to increase again.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Grey_alien#cite_note-25] | [TOKENS: 2835] |
Grey alien Grey aliens, also referred to as Zeta Reticulans, Roswell Greys, or simply Greys, are purported extraterrestrial beings. They are frequently featured in claims of close encounters and alien abduction. Greys are typically described as having small, humanoid bodies; smooth, grey skin; disproportionately large, hairless heads; and large, black, almond-shaped eyes. The 1961 Barney and Betty Hill abduction claim was key to the popularization of Grey aliens. Precursor figures have been described in science fiction, and similar descriptions appeared in later accounts of the 1947 Roswell UFO incident and early accounts of the 1948 Aztec UFO hoax. The Grey alien is cited as an archetypal image of an intelligent non-human creature and of extraterrestrial life in general, as well as an iconic trope of popular culture in the age of space exploration. Description Greys are typically depicted as grey-skinned, diminutive humanoid beings that possess reduced forms of, or completely lack, external human body parts such as noses, ears, or sex organs. Their bodies are usually depicted as being elongated, having a small chest, and lacking in muscular definition and visible skeletal structure. Their legs are depicted as being shorter and jointed differently from those of humans, with proportionally different limbs. Greys are depicted as having unusually large heads in proportion to their bodies, with no hair, no noticeable outer ears or noses, and small orifices for ears, nostrils, and mouths. In drawings, Greys are almost always shown with very large, opaque, black eyes, without eye whites. They are frequently described as shorter than average adult humans. The association between Grey aliens and Zeta Reticuli originated with the interpretation of a map drawn by Betty Hill by a schoolteacher named Marjorie Fish sometime in 1969. Betty Hill, under hypnosis, had claimed to have been shown a map that displayed the aliens' home system and nearby stars.
Upon learning of this, Fish attempted to create a model from a drawing produced by Hill, eventually determining that the stars marked as the aliens' home were Zeta Reticuli, a binary star system. History In literature, descriptions of beings similar to Grey aliens predate claims of supposed encounters with them. In 1893, H. G. Wells presented a description of humanity's future appearance in the article "The Man of the Year Million", describing humans as having no mouths, noses, or hair, and with large heads. In 1895, Wells also depicted the Eloi, a successor species to humanity, in similar terms in the novel The Time Machine. Both share many characteristics with future perceptions of Greys. As early as 1917, the occultist Aleister Crowley described a meeting with a "preternatural entity" named Lam that was similar in appearance to a modern Grey. Crowley claimed to have contacted Lam through a process called the "Amalantrah Workings," which he believed allowed humans to contact beings from outer space and across dimensions. Other occultists and ufologists, many of whom have retroactively linked Lam to later Grey encounters, have since described their own visitations from him, with one describing the being as a "cold, computer-like intelligence," and utterly beyond human comprehension. ...the creatures did not resemble any race of humans. They were short, shorter than the average Japanese, and their heads were big and bald, with strong, square foreheads, and very small noses and mouths, and weak chins. What was most extraordinary about them were the eyes—large, dark, gleaming, with a sharp gaze. They wore clothes made of soft grey fabric, and their limbs seemed to be similar to those of humans. 
In 1933, the Swedish novelist Gustav Sandgren, using the pen name Gabriel Linde, published a science fiction novel called Den okända faran (The Unknown Danger), in which he describes a race of extraterrestrials who wore clothes made of soft grey fabric and were short, with big bald heads, and large, dark, gleaming eyes. The novel, aimed at young readers, included illustrations of the imagined aliens. This description would become the template upon which the popular image of grey aliens is based. The conception remained a niche one until 1965, when newspaper reports of the Betty and Barney Hill abduction made the archetype famous. The alleged abductees, Betty and Barney Hill, claimed that in 1961, humanoid alien beings with greyish skin had abducted them and taken them to a flying saucer. In his 1990 article "Entirely Unpredisposed", Martin Kottmeyer suggested that Barney's memories revealed under hypnosis might have been influenced by an episode of the science-fiction television show The Outer Limits titled "The Bellero Shield", which was broadcast 12 days before Barney's first hypnotic session. The episode featured an extraterrestrial with large eyes, who says, "In all the universes, in all the unities beyond the universes, all who have eyes have eyes that speak." The report from the regression featured a scenario that was in some respects similar to the television show. In part, Kottmeyer wrote: Wraparound eyes are an extreme rarity in science fiction films. I know of only one instance. They appeared on the alien of an episode of an old TV series The Outer Limits entitled "The Bellero Shield." A person familiar with Barney's sketch in "The Interrupted Journey" and the sketch done in collaboration with the artist David Baker will find a "frisson" of "déjà vu" creeping up his spine when seeing this episode. The resemblance is much abetted by an absence of ears, hair, and nose on both aliens. Could it be by chance? 
Consider this: Barney first described and drew the wraparound eyes during the hypnosis session dated 22 February 1964. "The Bellero Shield" was first broadcast on 10 February 1964. Only twelve days separate the two instances. If the identification is admitted, the commonness of wraparound eyes in the abduction literature falls to cultural forces. — Martin Kottmeyer, Entirely Unpredisposed: The Cultural Background of UFO Reports Carl Sagan echoed Kottmeyer's suspicions in his 1997 book, The Demon-Haunted World: Science as a Candle in the Dark, where Invaders from Mars was cited as another potential inspiration. After the Hills' encounter, Greys would go on to become an integral part of ufology and other extraterrestrial-related folklore. This is particularly true in the case of the United States: according to journalist C. D. B. Bryan, 73% of all reported alien encounters in the United States describe Grey aliens, a significantly higher proportion than in other countries. During the early 1980s, Greys were linked to the alleged crash-landing of a flying saucer in Roswell, New Mexico, in 1947. A number of publications contained statements from individuals who claimed to have seen the U.S. military handling a number of unusually proportioned, bald, child-sized beings. These individuals claimed, during and after the incident, that the beings had oversized heads and slanted eyes, but scant other distinguishable facial features. In 1987, novelist Whitley Strieber published the book Communion, which, unlike his previous works, was categorized as non-fiction, and in which he describes a number of close encounters he alleges to have experienced with Greys and other extraterrestrial beings. The book became a New York Times bestseller, and New Line Cinema released a 1989 film adaptation that starred Christopher Walken as Strieber. In 1988, Christophe Dechavanne interviewed the French science-fiction writer and ufologist Jimmy Guieu on TF1's Ciel, mon mardi !.
Besides mentioning Majestic 12, Guieu described the existence of what he called "the little greys", which later on became better known in French under the name: les Petits-Gris. Guieu later wrote two docudramas, using as a plot the Grey aliens / Majestic-12 conspiracy theory as described by John Lear and Milton William Cooper: the series "E.B.E." (for "Extraterrestrial Biological Entity"): E.B.E.: Alerte rouge (first part) (1990) and E.B.E.: L'entité noire d'Andamooka (second part) (1991).[citation needed] Greys have since become the subject of many conspiracy theories. Many conspiracy theorists believe that Greys represent part of a government-led disinformation or plausible deniability campaign, or that they are a product of government mind-control experiments. During the 1990s, popular culture also began to increasingly link Greys to a number of military-industrial complex and New World Order conspiracy theories. In 1995, filmmaker Ray Santilli claimed to have obtained 22 reels of 16 mm film that depicted the autopsy of a "real" Grey supposedly recovered from the site of the 1947 incident in Roswell. In 2006, though, Santilli announced that the film was not original, but was instead a "reconstruction" created after the original film was found to have degraded. He maintained that a real Grey had been found and autopsied on camera in 1947, and that the footage released to the public contained a percentage of that original footage. Analysis Greys are often involved in alien abduction claims. Among reports of alien encounters, Greys make up about 50% in Australia, 73% in the United States, 48% in continental Europe, and around 12% in the United Kingdom. These reports include two distinct groups of Greys that differ in height. Abduction claims are often described as extremely traumatic, similar to an abduction by humans or even a sexual assault in the level of trauma and distress.
The emotional impact of perceived abductions can be as great as that of combat, sexual abuse, and other traumatic events. The eyes are often a focus of abduction claims, which often describe a Grey staring into the eyes of an abductee when conducting mental procedures. This staring is claimed to induce hallucinogenic states or directly provoke different emotions. Neurologist Steven Novella proposes that Grey aliens are a byproduct of the human imagination, with the Greys' most distinctive features representing everything that modern humans traditionally link with intelligence. "The aliens, however, do not just appear as humans, they appear like humans with those traits we psychologically associate with intelligence." In 2005, Frederick V. Malmstrom, writing in Skeptic magazine, Volume 11, issue 4, presented his idea that Greys are actually residual memories of early childhood development. Malmstrom reconstructs the face of a Grey through transformation of a mother's face based on our best understanding of early-childhood sensation and perception. Malmstrom's study offers an alternative explanation for the existence of Greys, for the intense instinctive response many people experience when presented with an image of a Grey, and for the role of regression hypnosis and recovered-memory therapy in "recovering" memories of alien abduction experiences, along with their common themes. According to biologist Jack Cohen, the typical image of a Grey, assuming that it would have evolved on a world with different environmental and ecological conditions from Earth, is too physiologically similar to a human to be credible as a representation of an alien. The interdimensional hypothesis, the cryptoterrestrial hypothesis, and the time-traveller hypothesis attempt to provide an alternative explanation for the humanoid anatomy and behavior of these alleged beings.
In popular culture Depictions of Grey aliens have gone on to appear in a number of films and television shows, supplanting the previously popular little green men. As early as 1966, for example, the superhero character Ultraman was explicitly based on them, and in 1977 they were featured in Close Encounters of the Third Kind. Greys have also been worked into space opera and other interstellar settings: in Babylon 5, the Greys are referred to as the "Vree", and are depicted as being allies and trade partners of 23rd-century Earth, while in the Stargate franchise they are called the "Asgard" and depicted as ancient astronauts allied with modern-day Earth.[citation needed] South Park refers to them as "visitors". During the 1990s, plotlines wherein Greys were linked to conspiracy theories became common. A well-known example is the Fox television series The X-Files, which first aired in 1993. It combined the quest to find proof of the existence of Grey-like extraterrestrials with a number of UFO conspiracy theory subplots, to form its primary story arc. Other notable examples include the XCOM video game franchise (where they are called "Sectoids"); Dark Skies, first broadcast in 1996, which expanded upon the MJ-12 conspiracy;[citation needed] and American Dad!, which features a Grey-like alien named Roger, whose backstory draws from both the Roswell incident and Area 51 conspiracy theories. The 2011 film Paul tells the story of a Grey named Paul who attributes the Greys' frequent presence in science fiction pop culture to the US government deliberately inserting the stereotypical Grey alien image into mainstream media; this is done so that if humanity came into contact with Paul's species, no immediate shock would occur as to their appearance. Child abduction by Greys is a key plot point in the 2013 film, Dark Skies. Greys appear in Syfy's 2021 science fiction dramedy series Resident Alien. 
The Greys appear as the main antagonistic faction in the 2023 independent game Greyhill Incident.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Epigram_(programming_language)] | [TOKENS: 622] |
Contents Epigram (programming language) Epigram is a functional programming language with dependent types, and the integrated development environment (IDE) usually packaged with the language. Epigram's type system is strong enough to express program specifications. The goal is to support a smooth transition from ordinary programming to integrated programs and proofs whose correctness can be checked and certified by the compiler. Epigram exploits the Curry–Howard correspondence, also termed the propositions as types principle, and is based on intuitionistic type theory. The Epigram prototype was implemented by Conor McBride based on joint work with James McKinna. Its development is continued by the Epigram group in Nottingham, Durham, St Andrews, and Royal Holloway, University of London in the United Kingdom (UK). The current experimental implementation of the Epigram system is freely available together with a user manual, a tutorial and some background material. The system has been used under Linux, Windows, and macOS. It is currently unmaintained, and version 2, which was intended to implement Observational Type Theory, was never officially released but exists on GitHub. Syntax Epigram uses a two-dimensional, natural deduction style syntax, with versions in LaTeX and ASCII. Here are some examples from The Epigram Tutorial: The following declaration defines the natural numbers: The declaration says that Nat is a type with kind * (i.e., it is a simple type) and two constructors: zero and suc. The constructor suc takes a single Nat argument and returns a Nat. This is equivalent to the Haskell declaration "data Nat = Zero | Suc Nat". In LaTeX, the code is displayed as: The horizontal-line notation can be read as "assuming (what is on the top) is true, we can infer that (what is on the bottom) is true." For example, "assuming n is of type Nat, then suc n is of type Nat."
If nothing is on the top, then the bottom statement is always true: "zero is of type Nat (in all cases)." In ASCII: Dependent types Epigram is essentially a typed lambda calculus with generalized algebraic data type extensions, plus two further extensions. First, types are first-class entities, of type ⋆; types are arbitrary expressions of type ⋆, and type equivalence is defined in terms of the types' normal forms. Second, it has a dependent function type: instead of P → Q, one writes ∀x : P ⇒ Q, where x is bound in Q to the value that the function's argument (of type P) eventually takes. Full dependent types, as implemented in Epigram, are a powerful abstraction. (Unlike in Dependent ML, the value(s) depended upon may be of any valid type.) A sample of the new formal specification capabilities dependent types bring may be found in The Epigram Tutorial.
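The typeset declarations from the tutorial did not survive extraction above, so only the prose descriptions remain. As a rough illustration (in Python, since the original Epigram/ASCII code is unavailable, and with names of our own choosing), the following sketch mirrors the inductive definition equivalent to Haskell's "data Nat = Zero | Suc Nat", and emulates with a runtime check the kind of length invariant that dependent types such as Vec let Epigram's compiler certify statically:

```python
from dataclasses import dataclass
from typing import Tuple

# Peano naturals, mirroring Epigram's Nat (Haskell: data Nat = Zero | Suc Nat).
class Nat:
    pass

@dataclass(frozen=True)
class Zero(Nat):
    pass

@dataclass(frozen=True)
class Suc(Nat):
    pred: Nat  # suc takes a single Nat argument and returns a Nat

def to_int(n: Nat) -> int:
    """Interpret a Peano numeral as an ordinary integer."""
    k = 0
    while isinstance(n, Suc):
        k, n = k + 1, n.pred
    return k

# A dependent type such as Vec n (a list of exactly n elements) lets the
# compiler check length invariants; here the invariant can only be checked
# at construction time, which is the best a simply-typed language can do.
@dataclass(frozen=True)
class Vec:
    n: int
    elems: Tuple[int, ...]

    def __post_init__(self):
        if len(self.elems) != self.n:
            raise TypeError("length of elems does not match the index n")

def vappend(xs: Vec, ys: Vec) -> Vec:
    # In Epigram the result type Vec (m + n) would be certified by the compiler.
    return Vec(xs.n + ys.n, xs.elems + ys.elems)
```

The point of the comparison is that what `__post_init__` can only verify when a value is built, a dependently typed language proves once and for all at compile time.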
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Steven_Novella] | [TOKENS: 2882] |
Contents Steven Novella Steven Paul Novella (born July 29, 1964) is an American clinical neurologist and associate professor at Yale University School of Medicine. Novella is best known for his involvement in the skeptical movement as a host of The Skeptics' Guide to the Universe podcast and as the president of the New England Skeptical Society. He is a fellow of the Committee for Skeptical Inquiry (CSI). Early life and education Novella was born July 29, 1964[citation needed] to Joseph Novella and Patricia Novella née Anderson. He was raised in New Fairfield, Connecticut, and has four siblings. Novella considered becoming a lawyer prior to attending college but decided to go into medicine as a teenager. As an undergraduate, he pursued premed and science. In 1991, Novella earned a medical degree from Georgetown University School of Medicine. He spent the first year of residency at Georgetown University Hospital/Washington Hospital Center in internal medicine. He completed his residency in neurology at Yale–New Haven Hospital in 1995. Novella was board certified in neurology in 1998. Novella's academic specialization is in neurology, including more specifically, amyotrophic lateral sclerosis (ALS), myasthenia gravis and neuromuscular disorders, neurophysiology, and the treatment of hyperactive neurological disorders. Career There is no skepticism without science and the scientific method. It's about how we know what we know. Novella is a proponent of scientific skepticism. In 1996 Novella, his brother Bob, and Perry DeAngelis founded The Connecticut Skeptical Society. The group began to organize in late 1995, when DeAngelis and Novella noticed a lack of listings for their area in Skeptical Inquirer magazine. The group later joined with the Skeptical Inquirers of New England (SINE) and the New Hampshire Skeptical Resource to form the New England Skeptical Society (NESS). Novella has served as the president of the NESS since inception. 
Novella defines a skeptic as: ... one who prefers beliefs and conclusions that are reliable and valid to ones that are comforting or convenient, and therefore rigorously and openly applies the methods of science and reason to all empirical claims, especially their own. A skeptic provisionally proportions acceptance of any claim to valid logic and a fair and thorough assessment of available evidence, and studies the pitfalls of human reason and the mechanisms of deception so as to avoid being deceived by others or themselves. Skepticism values method over any particular conclusion. In response to a 2007 editorial in The New York Times in which Paul Davies concluded "until science comes up with a testable theory of the laws of the universe, its claim to be free of faith is manifestly bogus," Novella said, It's not actually true because science is not dependent upon faith in a naturalistic world. It just follows the methods as if it is naturalistic... it is not a system of beliefs. People often ask me and they will ask you as skeptics what do you believe? Well, it's not about belief. Do you believe in ESP? It doesn't matter if I believe in ESP. The only thing that matters is what is the evidence for ESP? ...It's very important I think to present skepticism as a method of inquiry not a set of conclusions, not a set of beliefs. Novella is a fellow of the Committee for Skeptical Inquiry and has also been active in the organized skeptical community as a member of the executive committee of Northeast Conference on Science and Skepticism (NECSS). In the early days of the New England Skeptical Society, Novella participated in investigations of paranormal claims, some of which were part of the screening process for the One Million Dollar Paranormal Challenge offered by the James Randi Educational Foundation. 
Novella investigated such claims as Ouija boards (when the couple claiming they could operate one were properly blindfolded, their powers vanished), the ability to control the flipping of a coin (the claimant turned out to be making some common logical errors in thinking), a mind reader who got zero out of 20 correct, and many dowsers (typically found to be experiencing the Ideomotor phenomenon). Novella and the NESS also examined some phenomena described by people who were not competing for the One Million Dollar prize, such as haunted houses, the ability to communicate with the dead, and recording the voices of ghosts, known as electronic voice phenomenon, or EVP. In May 2005, Novella started The Skeptics' Guide to the Universe (SGU) podcast with Perry DeAngelis, Evan Bernstein, and his brothers Bob and Jay Novella. DeAngelis remained with the show until his death in August 2007. In July 2006, Rebecca Watson joined the podcast as a regular, staying through December 2014. Cara Santa Maria joined the cast in July 2015. Novella hosts the show and handles editing and post-production. In an interview for the Books and Ideas podcast he described his work for the podcast as being a labor of love, and similar to a second job. Novella said the SGU show primarily addresses controversial topics and topics on fringe science, with common content on paranormal or conspiracy theories, health fraud, and issues of consumer protection. In 2007, Novella started a blog, Neurologica, for which he writes on a weekly basis covering subjects generally related to science or skepticism. He is the executive editor of the blog Science-Based Medicine for which he is also a regular contributor, and he is a medical advisor to Quackwatch, an alternative medicine watchdog website. In 2008, Novella signed the Project Steve petition,[non-primary source needed] a tongue-in-cheek parody of the list of "scientists that doubt evolution" produced by creationists. 
Novella is an associate editor of the Scientific Review of Alternative Medicine, and writes the monthly Weird Science column for the New Haven Advocate newspaper.[citation needed] He created several Dungeons & Dragons campaign and expansion packs. Writing for Skeptical Inquirer, Rob Palmer stated in a review of Novella's book, The Skeptics' Guide to the Universe, that it could serve as a kind of "operations manual" for critical thinking and skepticism. Novella has appeared on several television programs, including Penn & Teller: Bullshit!, The Dr. Oz Show, and Inside Edition. In 2008, he filmed a pilot for a television series called The Skeptologists along with Brian Dunning, Yau-Man Chan, Mark Edward, Michael Shermer, Phil Plait, and Kirsten Sanford. The series has not been picked up by any network.[citation needed] Novella appeared on The Dr. Oz Show segment, "Controversial Medicine: Why your doctor is afraid of alternative health", where he was introduced as "an outspoken critic of alternative medicine." Novella noted that the term "alternative" creates a double standard. "There should be one science-based common-sense standard to figure out what therapies work and are safe." Novella made the point that herbs are medicinals and have been used that way for thousands of years, but the problem is in re-branding them as alternative, marketing them as natural, and therefore arguing that they don't need evidence that they are safe and effective. "At the end of the day, the public was sold products that the evidence shows doesn't work."[independent source needed] On the subject of acupuncture, Novella stated, "I've spent a lot of time reviewing the acupuncture literature ... and the evidence overwhelmingly shows that acupuncture, in fact, doesn't work." In response to Dr. Oz's complaint that Novella was dismissive of the idea that the "way we think [about acupuncture] in the west is that it can't possibly be effective,"
Novella replied, "I didn't say it couldn't possibly work, I said when you look at it, it doesn't work."[independent source needed] Novella led two courses for The Great Courses, "Medical Myths, Lies, and Half-Truths: What We Think We Know May Be Hurting Us" and "Your Deceptive Mind: A Scientific Guide to Critical Thinking Skills". In 2009, Novella was the board chairman when the Institute for Science in Medicine was founded. In January 2010, Novella was elected as a Fellow of the Committee for Skeptical Inquiry. In 2011, Novella was appointed Senior Fellow of the James Randi Educational Foundation, and Director of their Science-Based medicine project. Novella co-owned a local live action role-playing (LARP) game for about 5 years, during which time the owners wrote seven D20 System books. Novella coauthored the adventure gaming book Twin Crowns, a naval and travel expansion for Dungeons & Dragons and Broadsides!, a role-playing game (RPG) based on the D20 System, and Spellbound: A Codex of Ritual Magic, which features "a complete system of magic suitable for any campaign setting" using that system. Novella published a reflective evaluation of the autonomous sensory meridian response, a low grade euphoria characterized by 'a combination of positive feelings, relaxation, and a distinct static-like tingling sensation on the skin', which begins on the scalp before moving down the spine to the base of the neck, sometimes spreading to the back, arms and legs, often prompted by specific acoustic and visual stimuli including the content of some digital videos, and less commonly by intentional attentional control. In a post on Neurologica, Novella said that he investigates such phenomena by asking 'Is it real'? Regarding ASMR, he said: 'I don't think there is a definitive answer, but I am inclined to believe that it is. There are a number of people who seem to have independently [...] experienced and described' it with 'fairly specific details. 
In this way' ASMR is 'similar to migraine headaches – we know they exist as a syndrome primarily because many different people report the same constellation of symptoms and natural history.' He suggested that ASMR might be a type of pleasurable seizure or another way to activate the 'pleasure response' and advised that functional magnetic resonance imaging and transcranial magnetic stimulation technologies should be used to study the brains of people who experience ASMR in comparison to people who do not, as a way of seeking better scientific understanding of the phenomenon. Tobinick lawsuit On June 9, 2014, Edward Tobinick filed a civil action in Florida Southern District Court naming Steven Novella, Yale University, the Society for Science-Based Medicine, Inc. and SGU Productions, LLC as defendants. The action alleged that in violation of the Lanham Act, Novella "has and continues to publish a false advertisement disparaging Plaintiffs entitled 'Enbrel for Stroke and Alzheimer's', ('the 'Advertisement') and implying that the INR plaintiffs' use of Etanercept is ineffective and useless;" and "The Advertisement is extremely inflammatory and defamatory in nature as it contains multiple false and misleading statements of fact regarding Plaintiffs." "The Advertisement" referred to in the action is an entry for the Science-Based Medicine blog that Novella wrote and posted on May 8, 2013.[independent source needed] On July 14, 2014, Novella's attorney, Marc Randazza, filed an "Opposition to Plaintiff's Motion for Temporary and Preliminary Injunctive Relief". The filing stated that Tobinick was "highly unlikely to prevail in this matter ... as Defendant's statements range from provably true to opinion," that a preliminary injunction "would impose an unlawful prior restraint of speech," and that "an injunction would result in far more harm to Defendants and the public than Plaintiffs' claimed injury." 
Novella posted a response to the lawsuit on Science-Based Medicine in which he said, "In my opinion he [Tobinick] is using legal thuggery in an attempt to intimidate me and silence my free speech because he finds its content inconvenient".[independent source needed] United States District Judge Robin Rosenberg ordered the case closed on September 30, 2015, and entered judgment for the defendants. Tobinick was unable to show that Novella had profited from his blog post or that it was an advertisement. In 2017, a final appeal affirmed the district court's opinion. Awards Topics of interest Novella often writes and speaks about a variety of topics in areas of alternative medicine, the new age movement, parapsychology, and pseudoscience. As a proponent of scientific skepticism, his writings generally address supporting evidence and scientific consensus. Topics addressed in his writings include: Bibliography A book written by Steve Novella and his Skeptics' Guide co-hosts about scientific skepticism was published in October 2018. The Skeptics' Guide to the Universe: How to Know What's Really Real in a World Increasingly Full of Fake was reviewed by Publishers Weekly, which said: "In plain English and cogent prose, Novella makes skepticism seem mighty, necessary, and accessible all at once... Empowering and illuminating, this thinker's paradise is an antidote to spreading anti-scientific sentiments. Readers will return to its ideas again and again." The subsections of the book ("Neuropsychological Humility", "Metacognition", "Science and Pseudoscience", and "Iconic Cautionary Tales from History") break the topic into conceptual chunks that are easy for readers with a wide range of backgrounds to digest. Neil deGrasse Tyson's review says: "Thorough, informative, and enlightening, The Skeptics' Guide to the Universe inoculates you against the frailties and shortcomings of human cognition.
If this book does not become required reading for us all, we may well see modern civilization unravel before our eyes." Steven, Bob, and Jay Novella published The Skeptics' Guide to the Future in 2022.
======================================== |
[SOURCE: https://github.com/security/advanced-security/code-security] | [TOKENS: 689] |
Application security where found means fixed Secure your code as you build with GitHub Code Security. Detect vulnerabilities early and fix them with Copilot Autofix. 28 min from vulnerability detection to remediation; 3X faster remediation on average with Copilot Autofix; 90% of alert types include AI-powered code suggestions. Detect and remediate vulnerabilities early with AI-powered fixes. Find security issues in real time with CodeQL’s powerful analysis that traces data flows throughout your application. Get contextual explanations and AI-powered fixes for CodeQL-detected alerts with Copilot Autofix. GitHub Code Security continuously scans your code as you build, helping detect vulnerabilities early, fix them fast with Copilot Autofix, and ship securely. Identify new dependencies and check for vulnerabilities or license issues with the Dependency Review Action. Security should be built in, not bolted on. With Code Security, you can find, fix, and prevent vulnerabilities seamlessly—keeping your software resilient from development to deployment. Best practices for more secure software Take an in-depth look at the current state of application security. Learn how to write more secure code from the start with DevSecOps. Explore common application security pitfalls and how to avoid them. GitHub Code Security empowers developers to secure their code without sacrificing speed.
With built-in static analysis, AI-powered remediation, advanced dependency scanning, and proactive vulnerability management, teams can automatically detect, prioritize, and remediate security issues, all within their existing GitHub workflow—allowing them to deliver secure software faster and with greater confidence. Copilot Autofix uses AI-powered code suggestions to automatically fix security vulnerabilities identified by CodeQL. When a security vulnerability is detected, Copilot Autofix analyzes the code context, understands the underlying security issue, and generates a precise, contextually appropriate fix. This feature bridges the gap between vulnerability detection and remediation, enabling developers to review and apply AI-suggested fixes directly within their workflow. Security campaigns provide a structured framework for planning, tracking, and implementing security fixes across multiple repositories and teams, allowing you to systematically burn down security debt. With security campaigns, security teams can group related vulnerabilities, prioritize remediation efforts, assign ownership, and monitor progress through a unified dashboard. Security campaigns can be organized by vulnerability type, security initiative, compliance requirement, or any other logical grouping to coordinate security improvements at scale. Dependency review scans pull requests for vulnerable dependencies before they're introduced into your codebase. It evaluates the security impact of dependency changes, identifying vulnerable packages and their severity levels to prevent security issues from being merged. The tool shows detailed dependency changes by comparing the base and head branches, highlighting added, removed, and updated dependencies along with their known vulnerabilities. Dependabot alerts now feature the Exploit Prediction Scoring System (EPSS) from the global Forum of Incident Response and Security Teams (FIRST), helping better assess vulnerability risks.
EPSS helps organizations prioritize vulnerability remediation by predicting the likelihood of a vulnerability being exploited in the next 30 days. It provides a score ranging from 0 to 1 (0-100%), alongside a percentile ranking to indicate how the vulnerability compares to others.
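As a rough illustration of how an EPSS score might drive triage, the following Python sketch sorts a list of alerts by exploitation likelihood. The alert records, field names, and threshold are invented for the example; they are not GitHub's or FIRST's API format.

```python
# Hypothetical alert records carrying EPSS data: a score in [0, 1] (the
# predicted probability of exploitation within 30 days) and a percentile
# ranking the vulnerability against all others.
alerts = [
    {"id": "CVE-2024-0001", "epss_score": 0.02, "epss_percentile": 0.55},
    {"id": "CVE-2024-0002", "epss_score": 0.91, "epss_percentile": 0.99},
    {"id": "CVE-2024-0003", "epss_score": 0.10, "epss_percentile": 0.82},
]

def prioritize(alerts, threshold=0.10):
    """Keep alerts at or above the EPSS threshold, most likely exploited first."""
    urgent = [a for a in alerts if a["epss_score"] >= threshold]
    return sorted(urgent, key=lambda a: a["epss_score"], reverse=True)

for a in prioritize(alerts):
    print(a["id"], f"{a['epss_score']:.0%}")
```

A real triage policy would also weigh severity (CVSS) and reachability, but the sketch shows the core idea: the score, not the raw alert count, orders the remediation queue.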
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-Thomas2008-20] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. 
The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. 
The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. 
Many mechanical aids to calculation and measurement were constructed for astronomical and navigational use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD; an astrolabe incorporating a mechanical calendar computer and gear-wheels was later invented by Abi Bakr of Isfahan, Persia in 1235. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft.
In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, James Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences.
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, designed to aid in navigational calculations, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". In 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.
In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x (y − z)^2, for a sequence of sets of values. The whole machine was to be controlled by a read-only program, complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT.
By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time.
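Shannon's insight, that switching circuits obey the laws of Boolean algebra, can be sketched in a few lines of Python (an illustrative example, not historical code): two one-bit inputs pass through AND and XOR gates to form a half adder, an elementary building block of binary arithmetic in digital computers.

```python
# Switches modelled as bits (0 = open, 1 = closed); gates as Boolean functions.

def AND(a, b):
    # Output is 1 only when both switches are closed.
    return a & b

def XOR(a, b):
    # Output is 1 when exactly one switch is closed.
    return a ^ b

def half_adder(a, b):
    """Add two bits; returns (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> carry {c}, sum {s}")
```

Chaining such adders bit by bit yields full binary addition, which is how the arithmetic circuits described later in this article are built up from simple gates.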
The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, founded in Berlin in 1941 as the first company with the sole purpose of developing computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus.
After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945.
The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and containing over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing their function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer.
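The stored-program idea that Turing formalized can be illustrated with a minimal Turing-machine simulator (a hypothetical sketch; the rule format below is invented for illustration and is not Turing's original notation). The machine's entire behaviour is determined by a table of transitions held in storage; here the stored "program" increments a binary number.

```python
def run_turing_machine(rules, tape, state="start"):
    """rules maps (state, symbol) -> (new state, symbol to write, move L/R).
    '_' stands for a blank cell. Runs until the state 'halt' is reached."""
    cells = dict(enumerate(tape))   # tape as a sparse mapping of positions
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A stored program: increment a binary number (head starts at the leftmost bit).
rules = {
    ("start", "0"): ("start", "0", "R"),   # scan right to the end of the number
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),   # fell off the right edge; go back
    ("carry", "1"): ("carry", "0", "L"),   # 1 + carry = 0, carry propagates left
    ("carry", "0"): ("halt", "1", "L"),    # 0 + carry = 1, done
    ("carry", "_"): ("halt", "1", "L"),    # overflow: write a new leading 1
}

print(run_turing_machine(rules, "1011"))  # 1011 + 1 = 1100
```

Replacing the rule table changes what the machine computes without touching the simulator, which is precisely the stored-program property the surrounding text describes.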
It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life.
Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC).
The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman.
Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example.
Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin.

Types

Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer.

Hardware

The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.
Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information, so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices, such as keyboards and mice, are the means by which the operations of a computer are controlled and it is provided with data. Output devices, such as displays and printers, are the means by which a computer provides the results of its calculations in a human-accessible form. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] In outline, the control system repeatedly fetches the instruction at the address held in the program counter, decodes it, executes it, and updates the program counter; this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
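The role of the program counter and of "jump" instructions described above can be sketched with a toy interpreter (the two instruction names below are invented for illustration): a conditional jump that moves the program counter backwards produces a loop.

```python
def run(program):
    """Execute a list of (opcode, argument) pairs on a one-register machine."""
    pc = 0    # program counter: index of the next instruction to execute
    acc = 0   # a single accumulator register
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":
            acc += arg
            pc += 1                          # fall through to next instruction
        elif op == "JUMP_IF_LT":             # arg = (limit, target address)
            limit, target = arg
            pc = target if acc < limit else pc + 1
    return acc

# Add 1 repeatedly: jump back to instruction 0 while the accumulator is < 5.
program = [("ADD", 1), ("JUMP_IF_LT", (5, 0))]
print(run(program))  # 5
```

Because the jump simply overwrites the program counter, the same mechanism gives both loops (a backward jump) and conditional execution (a forward jump past some instructions).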
The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.
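As the paragraph above notes, a machine limited to the simplest operations can still perform complex ones by breaking them into steps. A minimal illustration (a sketch, not any real ALU's circuitry): multiplication carried out by a machine that can only add.

```python
def multiply_by_addition(a, b):
    """Multiply two non-negative integers using addition alone,
    the way a machine without a hardware multiplier could."""
    result = 0
    for _ in range(b):   # perform b repeated additions of a
        result += a
    return result

print(multiply_by_addition(6, 7))  # 42
```

The program runs slower than a hardware multiply, exactly the trade-off the text describes: any operation is reachable, but unsupported ones cost more time.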
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
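The two's complement convention mentioned above can be demonstrated directly (a small illustrative sketch, not any particular machine's hardware): the same 8-bit pattern is read as a value from 0 to 255 when treated as unsigned, and from −128 to +127 when treated as signed.

```python
def to_twos_complement_byte(n):
    """Encode an integer in the range -128..127 as an unsigned byte 0..255."""
    assert -128 <= n <= 127
    return n & 0xFF          # keep only the low 8 bits

def from_twos_complement_byte(b):
    """Decode an unsigned byte value back to a signed integer."""
    return b - 256 if b >= 128 else b

print(to_twos_complement_byte(-1))            # 255: all eight bits set
print(from_twos_complement_byte(0b10000000))  # -128: top bit carries weight -128
```

The appeal of the scheme, as the text implies, is that the same addition circuitry works unchanged for signed and unsigned numbers.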
Computer main memory comes in two principal varieties: RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally, computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory.
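The automatic movement of frequently needed data into a cache can be sketched in software with a least-recently-used eviction policy, one common choice (an illustrative model; real CPU caches implement such policies in hardware, with fixed line sizes and associativity).

```python
from collections import OrderedDict

class LRUCache:
    """A tiny cache in front of a slow backing store (stand-in for main memory)."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store
        self.cache = OrderedDict()     # insertion order doubles as recency order
        self.hits = self.misses = 0

    def read(self, address):
        if address in self.cache:
            self.hits += 1
            self.cache.move_to_end(address)       # mark as most recently used
        else:
            self.misses += 1
            self.cache[address] = self.backing[address]   # fetch from slow memory
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict least recently used item
        return self.cache[address]

memory = {addr: addr * 10 for addr in range(100)}   # pretend main memory
cache = LRUCache(capacity=2, backing_store=memory)
for addr in (1, 2, 1, 1, 3):
    cache.read(addr)
print(cache.hits, cache.misses)  # 2 3
```

Repeated reads of address 1 hit the cache without touching the backing store, which is the speed-up the paragraph describes.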
A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. 
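The time-slicing scheme described above can be sketched with Python generators standing in for programs; the scheduler plays the role of the interrupt-driven switch. All names here are illustrative, not drawn from the article:

```python
def program(name, steps):
    """A 'program' that yields control after each unit of work."""
    for i in range(steps):
        yield f"{name} step {i}"

def time_share(programs):
    """Round-robin scheduler: each program gets one slice in turn."""
    log = []
    while programs:
        prog = programs.pop(0)
        try:
            log.append(next(prog))   # run one "slice" of this program
            programs.append(prog)    # not finished: back of the queue
        except StopIteration:
            pass                     # program finished: drop it
    return log

log = time_share([program("A", 2), program("B", 2)])
# Slices interleave even though only one program runs at any instant.
assert log == ["A step 0", "B step 0", "A step 1", "B step 1"]
```

Only one generator ever executes at a given instant, yet both make progress, mirroring the "many programs appear to run at once" effect described above.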
If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers. They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks. Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. 
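The point about a waiting program taking no time slice can be sketched with Python threads: a thread blocked on an event does no work until the event fires, leaving the processor free for the other thread. The Event here stands in for a keypress; all names are illustrative:

```python
import threading

results = []
key_pressed = threading.Event()   # stands in for "user pressed a key"

def waiting_program():
    key_pressed.wait()            # blocks; consumes no slices meanwhile
    results.append("handled keypress")

def busy_program():
    for i in range(3):            # runs freely while the other waits
        results.append(f"work {i}")
    key_pressed.set()             # simulate the awaited input event

t1 = threading.Thread(target=waiting_program)
t2 = threading.Thread(target=busy_program)
t1.start(); t2.start()
t1.join(); t2.join()

# The waiting program only ran after its event occurred.
assert results[-1] == "handled keypress"
```

Because the waiting thread cannot proceed until `set()` is called, all of the busy thread's work lands in `results` first, regardless of scheduling.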
When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. 
Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions, for example in the MIPS assembly language. Once told to run such a program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. 
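The 1-to-1,000 summation task described above can be sketched in a few lines of Python; the loop test plays the role of the conditional jump that decides whether to repeat the body or fall through:

```python
# Sum the numbers 1..1000: initialize, test, add, "jump back".
total = 0          # running sum (a register, conceptually)
n = 1              # next number to add
while n <= 1000:   # conditional branch: exit when n passes 1000
    total += n     # add the current number to the sum
    n += 1         # advance to the next number
assert total == 500500   # matches the closed form 1000 * 1001 / 2
```

Five short statements replace the thousands of button presses a calculator would need, which is the contrast the paragraph draws.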
This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers, it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. 
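The two ideas in this passage, instructions stored as numbers in the same memory as data and mnemonics translated into those numbers by an assembler, can be combined in a toy sketch. The opcodes and mnemonics below are invented for illustration and do not come from any real instruction set:

```python
# Toy stored-program machine: instructions and data share one memory.
# Invented opcodes: 1 = LOAD addr, 2 = ADD addr, 3 = STORE addr, 0 = HALT.
MNEMONICS = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 0}

def assemble(source):
    """A minimal 'assembler': mnemonic text -> list of numbers."""
    code = []
    for line in source:
        parts = line.split()
        code.append(MNEMONICS[parts[0]])
        code.extend(int(p) for p in parts[1:])
    return code

def run(memory):
    """Fetch-execute loop: instructions are just numbers in memory."""
    acc, pc = 0, 0
    while True:
        op = memory[pc]
        if op == 0:                          # HALT
            return acc
        addr = memory[pc + 1]
        if op == 1:                          # LOAD
            acc = memory[addr]
        elif op == 2:                        # ADD
            acc += memory[addr]
        elif op == 3:                        # STORE
            memory[addr] = acc
        pc += 2                              # advance to next instruction

# Program (cells 0..6) and data (cells 7..9) live in the same memory.
program = assemble(["LOAD 7", "ADD 8", "STORE 9"]) + [0]
memory = program + [40, 2, 0]                # data: 40, 2, result slot
result = run(memory)
assert memory[9] == 42                       # 40 + 2 stored at cell 9
```

Because the program is itself a list of numbers, it could in principle be read or rewritten by another program, which is the von Neumann point the paragraph makes.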
Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages, some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC. Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler. High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. 
Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design. Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. 
Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. 
Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mamluk_Sultanate_(Cairo)] | [TOKENS: 21391] |
Mamluk Sultanate The Mamluk Sultanate (Arabic: سلطنة المماليك, romanized: Salṭanat al-Mamālīk), also known as Mamluk Egypt or the Mamluk Empire, was a state that ruled Egypt, the Syrian region and the Hejaz from the mid-13th to early 16th centuries, with Cairo as its capital. It was ruled by a military caste of mamluks (freed slave soldiers) headed by a sultan. The sultanate was established with the overthrow of the Ayyubid dynasty in Egypt in 1250 and was conquered by the Ottoman Empire in 1517. Mamluk history is generally divided into the Turkic or Bahri period (1250–1382) and the Circassian or Burji period (1382–1517), named after the predominant ethnicity or corps of the ruling Mamluks during these respective eras. The first rulers of the sultanate hailed from the mamluk regiments of the Ayyubid sultan al-Salih Ayyub (r. 1240–1249), usurping power from his successor in 1250. The Mamluks under Sultan Qutuz and Baybars routed the Mongols in 1260, halting their southward expansion. They then conquered or gained suzerainty over the Ayyubids' Syrian principalities. Baybars also reestablished the Abbasid dynasty of caliphs in Cairo, though their role was ceremonial. By the end of the 13th century, through the efforts of sultans Baybars, Qalawun (r. 1279–1290) and al-Ashraf Khalil (r. 1290–1293), the Mamluks had conquered the Crusader states, expanded into Makuria (Nubia), Cyrenaica, the Hejaz, and southern Anatolia. The sultanate experienced a long period of stability and prosperity during the third reign of al-Nasir Muhammad (r. 1293–1294, 1299–1309, 1310–1341), before giving way to the internal strife characterizing the succession of his sons, when real power was held by senior emirs. One such emir, Barquq, overthrew the sultan in 1382 and again in 1390, inaugurating Burji rule. 
Mamluk authority across the empire eroded under his successors due to foreign invasions, tribal rebellions, and natural disasters, and the state entered into a long period of financial distress. Under Sultan Barsbay, major efforts were taken to replenish the treasury, particularly monopolization of trade with Europe and tax expeditions into the countryside. He also managed to impose Mamluk authority abroad, forcing Cyprus to submit in 1426. The sultanate stagnated after this. Sultan Qaitbay's long and competent reign (r. 1468–1496) ensured some stability, though it was marked by conflicts with the Ottomans. The last effective sultan was Qansuh al-Ghuri (r. 1501–1516), whose reign was known for heavy-handed fiscal policies, attempted military reforms, and confrontations with the Portuguese in the Indian Ocean. In 1516, he was killed in battle against Ottoman sultan Selim I, who subsequently conquered Egypt in 1517 and ended Mamluk rule. Under Mamluk rule, Cairo reached the peak of its size and wealth before the modern period, becoming one of the largest cities in the world at the time. The sultanate's economy was primarily agrarian, but its geographic position also placed it at the center of trade between Europe and the Indian Ocean. The Mamluks themselves relied on the iqta' system to provide revenues. They were also major patrons of art and architecture: inlaid metalwork, enameled glass, and illuminated Qur'an manuscripts were among the high points of art, while Mamluk architecture still makes up much of the fabric of historic Cairo today and is found throughout their former domains. Name The 'Mamluk Sultanate' is a modern historiographical term. Arabic sources for the period of the Bahri Mamluks refer to the dynasty as the 'State of the Turks' (Dawlat al-Atrak or Dawlat al-Turk) or 'State of Turkey' (al-Dawla al-Turkiyya). During Burji rule, it was also referred to as the 'State of the Circassians' (Dawlat al-Jarakisa). 
These names emphasized the ethnic origin of the rulers and Mamluk writers did not explicitly highlight their status as slaves, except on rare occasions during the Circassian period. History The mamluk was a manumitted slave, distinguished from the ghulam, or household slave. After thorough training in martial arts, court etiquette and Islamic sciences, these slaves were freed but expected to remain loyal to their master and serve his household. Mamluks had formed part of the military apparatus in Syria and Egypt since at least the 9th century, rising to become governing dynasties in Egypt and Syria as the Tulunid and Ikhshidid dynasties. Mamluk regiments constituted the backbone of Egypt's military under Ayyubid rule in the late 12th and early 13th centuries, beginning under the first Ayyubid sultan Saladin (r. 1174–1193), who replaced the Fatimid Caliphate's black African infantry with mamluks. Each Ayyubid sultan and high-ranking emir had a private mamluk corps. Most of the mamluks in the Ayyubids' service were ethnic Kipchak Turks from Central Asia, who, upon entering service, were converted to Sunni Islam and taught Arabic. Mamluks were highly committed to their master, whom they often referred to as 'father', and were in turn treated more as kinsmen than as slaves. After their manumission, mamluks were given a position in either the courtly administration or the army. Mamluks were preferred to freeborn soldiers because they were raised to view the army and their sultan-ruler as their family and were thus considered more loyal than freeborn soldiers, whose first loyalty was to their biological families. The Ayyubid emir and future sultan as-Salih Ayyub acquired about one thousand mamluks (some of them free-born) from Syria, Egypt and Arabia by 1229, while serving as na'ib (viceroy) of Egypt during the absence of his father, Sultan al-Kamil (r. 1218–1238). These mamluks were called the 'Salihiyya' (singular 'Salihi') after their master. 
Al-Salih became sultan of Egypt in 1240, and, upon his accession, he manumitted and promoted large numbers of his mamluks, provisioning them through confiscated iqtaʿat (akin to fiefs; singular iqtaʿ) from his predecessors' emirs. He created a loyal paramilitary apparatus in Egypt so dominant that contemporaries viewed Egypt as "Salihi-ridden", according to historian Winslow William Clifford. While historian Stephen Humphreys asserts the Salihiyya's increasing dominance of the state did not personally threaten al-Salih due to their fidelity to him, Clifford believes the Salihiyya's autonomy fell short of such loyalty. Tensions between as-Salih and his mamluks culminated in 1249 when Louis IX of France's forces captured Damietta in their bid to conquer Egypt during the Seventh Crusade. Al-Salih opposed the evacuation of Damietta and threatened to punish the city's garrison. This provoked a mutiny by his garrison in al-Mansura, which only dissipated with the intervention of the atabeg al-askar (commander of the military), Fakhr ad-Din ibn Shaykh al-Shuyukh. As the Crusaders advanced, al-Salih died and was succeeded by his Jazira (Upper Mesopotamia)-based son al-Mu'azzam Turanshah. Although the Salihiyya welcomed his succession, Turanshah challenged their dominance in the paramilitary apparatus by promoting his Kurdish retinue from the Jazira and Syria as a counterweight. On 11 February 1250, the Bahriyya, a junior regiment of the Salihiyya commanded by Baybars, defeated the Crusaders at the Battle of al-Mansura. On 27 February, Turanshah arrived in al-Mansura to lead the Egyptian army. On 5 April 1250, the Crusaders evacuated their camp opposite al-Mansura. The Egyptians followed them to the Battle of Fariskur, where they destroyed the Crusaders on 6 April. King Louis IX and a few of his surviving nobles were taken as prisoners, effectively ending the Seventh Crusade. 
Turanshah proceeded to place his own entourage and mamluks, known as the 'Mu'azzamiya', in positions of authority at the expense of the Salihiyya. On 2 May 1250, disgruntled Salihi emirs assassinated Turanshah at Fariskur. An electoral college dominated by the Salihiyya then convened to choose a successor to Turanshah among the Ayyubid emirs, with opinion largely split between an-Nasir Yusuf of Damascus and al-Mughith Umar of al-Karak. Consensus settled on al-Salih's widow, Shajar al-Durr. She ensured the Salihiyya's dominance of the paramilitary elite, and inaugurated patronage and kinship ties with the Salihiyya. In particular, she cultivated close ties with the Jamdari (pl. Jamdariyya) and Bahri (pl. Bahriyya) corps, distributing to them iqtaʿ and other privileges. Her efforts and Egyptian military's preference to preserve the Ayyubid state were evident when the Salihi mamluk and atabeg al-askar, Aybak, was rebuffed from monopolizing power by the army and the Bahriyya and Jamdariyya, who all asserted that sultanic authority was exclusive to the Ayyubids. The Bahriyya compelled Aybak to share power with al-Ashraf Musa, a grandson of Sultan al-Kamil. Aybak was the main bulwark against the Bahri and Jamdari emirs, and his promotion as atabeg al-askar led to Bahri rioting in Cairo, the first of many intra-Salihi clashes about his ascendancy. The Bahriyya and Jamdariyya were represented by their patron, Faris al-Din Aktay, a principal organizer of Turanshah's assassination and the recipient of Fakhr ad-Din's large estate by Shajar al-Durr; the latter viewed Aktay as a counterweight to Aybak. Aybak moved against the Bahriyya by shutting their Roda headquarters in 1251 and assassinating Aktay in 1254. Afterward, Aybak purged his retinue and the Salihiyya of perceived dissidents, causing a temporary exodus of Bahri mamluks, most of whom settled in Gaza. 
The purge caused a shortage of officers, which led Aybak to recruit new supporters from among the army in Egypt and the Turkic Nasiri and Azizi mamluks from Syria, who had defected from an-Nasir Yusuf and moved to Egypt in 1250. Aybak felt threatened by the growing ambitions of the Syrian mamluks' empowered patron Jamal ad-Din Aydughdi. Upon learning of Aydughdi's plot to install an-Nasir Yusuf as sultan, which would leave Aydughdi as practical ruler of Egypt, Aybak imprisoned Aydughdi in Alexandria in 1254 or 1255. Aybak was assassinated on 10 April 1257, possibly on orders from Shajar al-Durr, who was assassinated a week later. Their deaths left a relative power vacuum in Egypt, with Aybak's teenage son, al-Mansur Ali, as heir to the sultanate and Aybak's close aide, Sayf al-Din Qutuz, as strongman. The Bahriyya and al-Mughith Umar made two attempts to conquer Egypt in November 1257 and 1258 but were defeated. They then turned on an-Nasir Yusuf in Damascus, who defeated them at Jericho. An-Nasir Yusuf followed up with a siege of al-Mughith and the Bahriyya at al-Karak, but the growing threat of a Mongol invasion of Syria led the Ayyubid emirs to reconcile, and Baybars to defect to an-Nasir Yusuf. Qutuz deposed Ali in 1259 and purged or arrested the Mu'izziya and any remaining Bahri mamluks in Egypt to eliminate potential opposition. The surviving Mu'izzi and Bahri mamluks went to Gaza, where Baybars had established a shadow state opposed to Qutuz. While mamluk factions fought for control of Egypt and Syria, the Mongols under Hulagu Khan had sacked Baghdad, the intellectual and spiritual center of the Islamic world, in 1258, and proceeded westward, capturing Aleppo and Damascus. Qutuz sent military reinforcements to his erstwhile enemy an-Nasir Yusuf in Syria, and reconciled with the Bahriyya, including Baybars, who was allowed to return to Egypt, to face the common Mongol threat. 
Hulagu sent emissaries to Qutuz in Cairo, demanding submission to Mongol rule but Qutuz had them killed, an act which historian Joseph Cummins called the "worst possible insult to the Mongol throne". After hearing that Hulagu withdrew from Syria to claim the Mongol throne, Qutuz and Baybars mobilized a 120,000-strong force to conquer Syria. The Mamluks entered Palestine and confronted the Mongol army Hulagu left behind under Kitbuqa in the plains south of Nazareth at the Battle of Ain Jalut in September 1260. The battle ended in a Mongol rout and Kitbuqa's capture and execution. Afterward, the Mamluks recaptured Damascus and the other Syrian cities taken by the Mongols. Upon Qutuz's triumphant return to Cairo, he was assassinated in a Bahri plot. Baybars then assumed power in October 1260, inaugurating Bahri rule. In 1263, Baybars deposed al-Mughith based on allegations of collaboration with the Mongol Ilkhanate of Persia, and thereby consolidated his authority over Islamic Syria. During his early reign, Baybars expanded the Mamluk army from 10,000 cavalry to 40,000, with a 4,000-strong royal guard at its core. The new force was rigidly disciplined and highly trained in horsemanship, swordsmanship and archery. To improve communication across his realm, Baybars instituted a barid (postal network) extending across Egypt and Syria, which led to large scale building of roads and bridges along the postal route. His military and administrative reforms cemented the power of the Mamluk state. He opened diplomatic channels with the Mongols to stifle their potential alliance with the Christian powers of Europe, while also sowing divisions between the Mongol Ilkhanate and the Mongol Golden Horde. His diplomacy was additionally intended to maintain the flow of Turkic mamluks from Mongol-held Central Asia. 
With his power in Egypt and Islamic Syria consolidated by 1265, Baybars launched expeditions against the Crusader fortresses throughout Syria, capturing Arsuf in 1265, and Halba and Arqa in 1266. Baybars captured and destroyed fortresses along the Syrian coast to prevent their potential future use by new waves of Crusaders. He often established new cities further inland, irreversibly changing settlement patterns in the region, as was the case, for example, with Ascalon whose inhabitants were moved to Al-Majdal Asqalan. In August 1266, the Mamluks launched a punitive expedition against the Armenian Cilician Kingdom for its alliance with the Mongols, laying waste to numerous Armenian villages and significantly weakening the kingdom. At around the same time, Baybars captured Safed from the Knights Templar, and shortly after, Ramla, both cities in interior Palestine. Unlike the coastal fortresses, the Mamluks strengthened and utilized the interior cities as major garrisons and administrative centers. In 1268, the Mamluks captured Jaffa before conquering the Crusader stronghold of Antioch on 18 May. In 1271, Baybars captured the major Krak des Chevaliers fortress from the Crusader County of Tripoli. Despite an alliance with the Isma'ili Shia Assassins in 1272, in July 1273, the Mamluks, who by then considered the Assassins' independence as problematic, wrested control of their fortresses in the Jabal Ansariya range, including Masyaf. In 1277, Baybars launched an expedition against the Ilkhanids, routing them in Elbistan in Anatolia, but withdrew to avoid overstretching his forces and risk being cut off from Syria by a larger incoming Ilkhanid army. To Egypt's south, Baybars had initiated an aggressive policy toward the Christian Nubian kingdom of Makuria. In 1265, the Mamluks invaded northern Makuria, forcing the Nubian king to become their vassal. 
Around that time, the Mamluks had conquered the Red Sea areas of Suakin and the Dahlak Archipelago, while attempting to extend their control to the Hejaz (western Arabia), the desert regions west of the Nile, and Barqa (Cyrenaica). In 1268, the Makurian king, David I, overthrew the Mamluks' vassal and in 1272, raided the Mamluk Red Sea port of Aydhab. In 1276, the Mamluks defeated King David of Makuria in the Battle of Dongola and installed their ally Shakanda as king. This brought the fortress of Qasr Ibrim under Mamluk suzerainty. The conquest of Nubia was not permanent and the process of invading the region and installing vassal kings was repeated by Baybars's successors. Nonetheless, Baybars' initial conquest led to the annual expectation of tribute from the Nubians by the Mamluks until the Makurian kingdom's demise in the mid-14th century. Furthermore, the Mamluks received the submission of King Adur of al-Abwab further south. Baybars attempted to establish his Zahirid house as the state's ruling dynasty by appointing his four-year-old son al-Sa'id Baraka as co-sultan in 1264. This represented a break from the Mamluk tradition of choosing the sultan by merit rather than lineage. In July 1277, Baybars died en route to Damascus, and was succeeded by Baraka. Baraka was ousted in a power struggle ending with Qalawun, a top deputy of Baybars, as sultan in November 1279. The Ilkhanids launched a massive offensive against Syria in 1281. The Mamluks were outnumbered by the 80,000-strong Ilkhanid-Armenian-Georgian-Seljuk coalition, but routed the coalition at the battle of Homs, confirming Mamluk dominance in Syria. The Ilkhanids' rout enabled Qalawun to proceed against Crusader holdouts in Syria and in May 1285, he captured and garrisoned the Marqab fortress. Qalawun's early reign was marked by policies intended to garner support from the merchant class, the Muslim bureaucracy and the religious establishment. 
He eliminated the illegal taxes that burdened the merchants and commissioned extensive building and renovation projects for Islam's holiest sites, such as the Prophet's Mosque in Medina, the al-Aqsa Mosque in Jerusalem and the Ibrahimi Mosque in Hebron. His building activities later shifted to more secular and personal purposes, including his large, multi-division hospital complex in Cairo. After the détente with the Ilkhanids, Qalawun suppressed internal dissent by imprisoning dozens of high-ranking emirs in Egypt and Syria. He diversified the hitherto mostly Turkic mamluk ranks by purchasing numerous non-Turks, particularly Circassians, forming out of them the Burjiyya regiment. Qalawun was the last Salihi sultan and after his death in 1290, his son, al-Ashraf Khalil, drew legitimacy by emphasizing his lineage from Qalawun. Like his predecessors, Khalil's main priorities were organizing the state apparatus, defeating the Crusaders and Mongols, integrating Syria, and preserving the flow of new mamluks and weaponry into the empire. Baybars had purchased 4,000 mamluks, Qalawun 6,000–7,000 and by the end of Khalil's reign, there was an estimated total of 10,000 mamluks in the sultanate. In 1291, Khalil captured Acre, the last major Crusader stronghold in Palestine, and Mamluk rule consequently extended across all of Syria. Khalil's death in 1293 led to a period of factional struggle, with Khalil's prepubescent brother, al-Nasir Muhammad, being overthrown the following year by an ethnic Mongol mamluk of Qalawun, al-Adil Kitbugha, who in turn was succeeded by a Greek mamluk of Qalawun, Husam al-Din Lajin. To consolidate control, Lajin redistributed iqtaʿat to his supporters. He was unable to keep power and al-Nasir Muhammad was restored as sultan in 1298, ruling over a fractious realm until being toppled by Baybars II, a Circassian mamluk of Qalawun, who was wealthier, more pious, and more cultured than his immediate predecessors. 
Early into al-Nasir Muhammad's second reign, the Ilkhanids, whose leader Mahmud Ghazan was a Muslim convert, had invaded Syria and routed a Mamluk army near Homs in the Battle of Wadi al-Khaznadar in 1299. Ghazan largely withdrew from Syria shortly after due to a lack of fodder for his army's numerous horses, and the residual Ilkhanid force retreated in 1300 at the approach of the rebuilt Mamluk army. Another Ilkhanid invasion in 1303 was repelled after a Mamluk victory at the Battle of Marj al-Suffar in the plains south of Damascus. Baybars II ruled for roughly one year before al-Nasir Muhammad became sultan again in 1310, this time ruling for over three decades in a period often considered by historians to be the zenith of the Mamluk empire. To avoid the experiences of his previous two reigns, when the mamluks of Qalawun and Khalil held sway and periodically assumed power, al-Nasir Muhammad established a centralized autocracy. In 1310, he imprisoned, exiled or killed any Mamluk emirs who had supported those who toppled him in the past, including the Burji mamluks. He assigned iqta'at to over thirty of his own mamluks. Initially, he left most of his father's mamluks undisturbed, but in 1311 and 1316, he imprisoned and executed most of them, and again redistributed iqta'at to his own mamluks. By 1316, the number of mamluks had decreased to 2,000. Al-Nasir Muhammad further consolidated power by replacing Caliph al-Mustakfi (r. 1302–1340) with his own appointee, al-Wathiq, as well as compelling the qadi (head judge) to issue legal rulings advancing his interests. Under al-Nasir Muhammad, the Mamluks repulsed an Ilkhanid invasion of Syria in 1313 and concluded a peace treaty with the Ilkhanate in 1322, bringing a long-lasting end to the Mamluk–Mongol wars. 
Afterward, al-Nasir Muhammad ushered in a period of stability and prosperity through the enactment of major political, economic and military reforms ultimately intended to ensure his continued rule and consolidate the Qalawuni–Bahri regime. Concurrent with his reign was the disintegration of the Ilkhanate into several smaller dynastic states and the consequent Mamluk effort to establish diplomatic and commercial relationships with the new states. Amid conditions reducing the flow of mamluks from the Mongol territories to the sultanate, al-Nasir Muhammad compensated by adopting new methods of training, and military and financial advancement that introduced a great level of permissiveness. This led to relaxed conditions for new mamluks and encouraged the pursuit of military careers in Egypt by aspiring mamluks outside of the empire. Al-Nasir Muhammad died in 1341 and his rule was followed by a succession of descendants in a period marked by political instability. Most of his successors, except for al-Nasir Hasan (r. 1347–1351, 1354–1361) and al-Ashraf Sha'ban (r. 1363–1377), were sultans in name only, with the patrons of the leading mamluk factions holding actual power. The first of al-Nasir Muhammad's sons to accede was al-Mansur Abu Bakr, whom al-Nasir Muhammad had designated as successor. Al-Nasir Muhammad's senior aide, Qawsun, held real power and imprisoned and executed Abu Bakr and had al-Nasir Muhammad's infant son, al-Ashraf Kujuk, appointed instead. By January 1342, Qawsun and Kujuk were toppled, and the latter's half-brother, al-Nasir Ahmad of al-Karak, was declared sultan. Ahmad relocated to al-Karak and left a deputy to govern in Cairo. This unorthodox arrangement, together with his reclusive and frivolous behavior and his execution of loyal partisans, ended with Ahmad's deposition and replacement by his half-brother al-Salih Isma'il in June 1342. Isma'il ruled until his death in August 1345, and was succeeded by his brother al-Kamil Sha'ban. 
The latter was killed in a mamluk revolt and was succeeded by his brother al-Muzaffar Hajji, who was also killed in a mamluk revolt in late 1347. After Hajji's death, the senior emirs hastily appointed another son of al-Nasir Muhammad, the twelve-year-old al-Nasir Hasan. Coinciding with Hasan's first reign, in 1347–1348, the Bubonic Plague arrived in Egypt and other plagues followed, causing mass death in the country, which led to major social and economic changes in the region. In 1351, the senior emirs, led by Emir Taz, ousted and replaced Hasan with his brother, al-Salih Salih. The emirs Shaykhu and Sirghitmish deposed Salih and restored Hasan in 1355, after which Hasan gradually purged Taz, Shaykhu and Sirghitmish and their mamluks from his administration. Hasan recruited and promoted the awlad al-nas (descendants of mamluks who did not undergo the enslavement/manumission process) in the military and administration, a process that lasted for the remainder of the Bahri period. This caused resentment among Hasan's own mamluks, led by Emir Yalbugha al-Umari, who killed Hasan in 1361. Yalbugha became regent to Hasan's successor, al-Mansur Muhammad, the young son of the late sultan Hajji. By then, mamluk solidarity and loyalty to the emirs had dissipated. To restore discipline and unity within the Mamluk state and military, Yalbugha revived the rigorous training of mamluks used under Baybars and Qalawun. In 1365, a Mamluk attempt to annex Armenia, which had since replaced Crusader Acre as the Christian commercial foothold of Asia, was stifled by an invasion of Alexandria by Peter I of Cyprus. The Mamluks concurrently experienced a deterioration of their lucrative position in international trade, and the economy declined, further weakening the Bahri regime. Meanwhile, the harshness of Yalbugha's educational methods and his refusal to rescind his disciplinary reforms provoked a mamluk backlash. Yalbugha was killed by his mamluks in an uprising in 1366. 
The rebels were supported by Sultan al-Ashraf Sha'ban, whom Yalbugha had installed in 1363. Sha'ban ruled as the real power in the sultanate until 1377, when he was killed by mamluk dissidents on his way to Mecca to perform the Hajj. Sha'ban was succeeded by his seven-year-old son al-Mansur Ali, though the oligarchy of the senior emirs held the reins of power. Among the senior emirs who rose to prominence under Ali were Barquq and Baraka, both Circassian mamluks of Yalbugha. Barquq was made atabeg al-asakir in 1378, giving him command of the Mamluk army, which he used to oust Baraka in 1380. Ali died in May 1381 and was succeeded by his nine-year-old brother, al-Salih Hajji, with real power held by Barquq as regent. The next year, Barquq toppled al-Salih Hajji and assumed the throne. His accession was enabled by Yalbugha's mamluks, whose corresponding rise to power left Barquq vulnerable. His rule was challenged by a revolt in Syria in 1389 by the Mamluk governors of Malatya and Aleppo, Mintash and Yalbugha al-Nasiri, the latter a mamluk of Yalbugha. The rebels took over Syria and headed for Egypt, prompting Barquq to abdicate in favor of al-Salih Hajji. The alliance between Yalbugha al-Nasiri and Mintash soon fell apart and factional fighting ensued in Cairo, with Mintash ousting Yalbugha. Barquq was arrested and exiled to al-Karak, where he rallied support. In Cairo, Barquq's loyalists took the citadel and arrested al-Salih Hajji. This paved the way for Barquq's usurpation of the sultanate once more in February 1390, firmly establishing the Burji regime. The ruling Mamluks of this period were mostly Circassians drawn from the Christian population of the northern Caucasus. Barquq solidified power in 1393, when his forces killed the major opponent to his rule, Mintash, in Syria. 
Barquq oversaw the mass recruitment of Circassians (estimated at 5,000 recruits) into the mamluk ranks and the restoration of the state's authority throughout its realm in the tradition of Baybars and Qalawun. A major innovation to this system was the division of Egypt into three niyabat (sing. niyaba; provinces), similar to the administrative divisions in Syria. The new Egyptian niyabat were Alexandria, Damanhur and Asyut. Barquq instituted this to better control the Egyptian countryside from the rising strength of the Bedouin tribes. He further dispatched the Berber Hawwara tribesmen of the Nile Delta to Upper Egypt to check the Arab Bedouins. During Barquq's reign, in 1387, the Mamluks had forced the Anatolian entity in Sivas to become a Mamluk vassal. Towards the end of the 14th century, challengers to the Mamluks emerged in Anatolia, including the Ottoman dynasty and the Turkmen allies of Timur, the Aq Qoyunlu and Qara Qoyunlu tribes of southern and eastern Anatolia. Barquq died in 1399 and was succeeded by his eleven-year-old son, an-Nasir Faraj. That year, Timur invaded Syria, sacking Aleppo and Damascus. Timur ended his occupation of Syria in 1402 to fight the Ottomans in Anatolia, whom he deemed a more dangerous threat. Faraj held onto power during this turbulent period, which, in addition to Timur's devastating raids, the rise of Turkmen tribes in the Jazira, and attempts by Barquq's emirs to topple Faraj, also saw a famine in Egypt in 1403, a severe plague in 1405 and a Bedouin revolt that practically ended Mamluk control of Upper Egypt between 1401 and 1413. Mamluk authority throughout the sultanate significantly eroded, while the capital Cairo underwent an economic crisis. Faraj was toppled in 1412 by the Syria-based emirs, Tanam, Jakam, Nawruz and al-Mu'ayyad Shaykh, against whom Faraj had sent seven military expeditions. The emirs could not usurp the throne themselves, and had Caliph al-Musta'in (r. 
1406–1413) installed as a puppet sultan; the caliph had the support of the non-Circassian mamluks and legitimacy with the local population. Six months later, Shaykh ousted al-Musta'in after neutralizing his main rival, Nawruz, and assumed the sultanate. Shaykh's main policy was restoring state authority within the empire, which experienced further plagues in 1415–1417 and 1420. Shaykh replenished the treasury through tax collection expeditions akin to raids across the empire to compensate for the tax arrears that accumulated under Faraj. Shaykh also commissioned and led military campaigns against the Mamluks' enemies in Anatolia, reasserting the state's influence there. Before Shaykh died in 1421, he attempted to offset the power of the Circassians by importing Turkish mamluks and installing a Turk as atabeg al-asakir to serve as regent for his infant son Ahmad. After his death, a Circassian emir, Tatar, married Shaykh's widow, ousted the atabeg al-asakir and assumed power. Tatar died three months into his reign and was succeeded by Barsbay, another Circassian emir of Barquq, in 1422. Under Barsbay, the Mamluk Sultanate reached its greatest territorial extent and was militarily dominant throughout the region, but his legacy was mixed in the eyes of contemporary commentators, who criticized his fiscal methods and economic policies. Barsbay pursued an economic policy of establishing state monopolies over the lucrative trade with Europe, particularly spices, at the expense of local merchants. European merchants were forced to buy spices from state agents who set prices that maximized revenue rather than promoting competition. This monopoly set a precedent for his successors, some of whom established monopolies over other goods such as sugar and textiles. Barsbay compelled Red Sea traders to offload their goods at the Mamluk-held Hejazi port of Jeddah rather than the Yemeni port of Aden to derive the greatest financial gain from the Red Sea transit route to Europe. 
Barsbay's efforts at monopolization and trade protection were meant to offset the severe financial losses of the agricultural sector due to the frequent recurring plagues that took a heavy toll on the farmers. In the long term, the monopoly over the spice trade had a negative effect on Egyptian commerce and became a motivation for European merchants to seek alternative routes to the east around Africa and across the Atlantic. Barsbay undertook efforts to protect the caravan routes to the Hejaz from Bedouin raids. He reduced the independence of the Sharifs of Mecca to a minimum, sent troops to occupy the Hejaz and rein in the Bedouin, and took direct control of much of the region's administration. He aimed to secure the Egyptian Mediterranean coast from Catalan and Genoese piracy. Related to this, he launched campaigns against Cyprus in 1425–1426, during which the island's Lusignan king, Janus, was taken captive because of his alleged assistance to the pirates; the large ransoms paid to the Mamluks by the Cypriots allowed them to mint new gold coinage for the first time since the 14th century. Janus became Barsbay's vassal, an arrangement enforced on his successors for several decades after. In response to Aq Qoyunlu raids against the Jazira, the Mamluks launched expeditions against them, sacking Edessa and massacring its Muslim inhabitants in 1429 and attacking their capital Amid in 1433. The Aq Qoyunlu consequently recognized Mamluk suzerainty. While the Mamluks succeeded in forcing the Anatolian beyliks to largely submit to their suzerainty, Mamluk authority in Upper Egypt was mostly relegated to the emirs of the Hawwara tribe. The latter had grown wealthy from their burgeoning trade with central Africa and achieved a degree of local popularity due to their piety, education and generally benign treatment of the inhabitants. 
Barsbay died on 7 June 1438 and, per his wishes, was succeeded by his fourteen-year-old son, al-Aziz Yusuf, with a leading emir of Barsbay, Sayf al-Din Jaqmaq, appointed regent. The usual disputes over succession ensued and after three months Jaqmaq won and became sultan, exiling Yusuf to Alexandria. Jaqmaq maintained friendly relations with the Ottomans. His most important foreign military effort was an abortive campaign to conquer Rhodes from the Knights of St. John, involving three expeditions between 1440 and 1444. Domestically, Jaqmaq largely continued Barsbay's monopolies, though he promised to enact reforms and formally rescinded some tariffs. Jaqmaq died in February 1453. His eighteen-year-old son, al-Mansur Uthman, was installed on the throne but soon lost all support when he tried to buy the loyalty of other mamluks with debased coins. Sayf al-Din Inal, whom Barsbay had made his atabeg al-asakir, won enough support to be declared sultan two months after Jaqmaq's death. When Mehmed II, the Ottoman sultan, conquered Constantinople in 1453, Inal ordered public celebrations to commemorate the event, much like the celebrations of a Mamluk victory. It is unclear whether Inal and the Mamluks understood the implications of this event. It marked the rise of the Ottomans as a superpower, a status that brought them into increasing conflict with the evermore stagnant Mamluk Sultanate. By then, the state was under severe financial stress and was selling off iqta'at properties, depriving the treasury of their tax revenues. Coins based on precious metals nearly disappeared from circulation. Inal died on 26 February 1461. His son, al-Mu'ayyad Ahmad, ruled for a short stint under challenges from the governors of Damascus and Jeddah. A compromise candidate, the Greek Khushqadam al-Mu'ayyadi, was then chosen and eventually neutralized his opposition. His reign was marked by further political difficulties abroad and domestically. 
Cyprus remained a vassal, but Khushqadam's representative was killed in battle after insulting James II (who had been installed by Inal). At home, Bedouin tribes caused unrest, and the sultan's attempts to suppress the Labid tribe in the Nile Delta and the Hawwara in Upper Egypt had little effect. Khushqadam died on 9 October 1467 and the mamluk emirs initially installed Yalbay al-Mu'ayyadi as his successor. After two months he was replaced by Timurbugha al-Zahiri. Timurbugha was deposed in turn on 31 January 1468, but voluntarily consented to the accession of his second in command, Qaitbay. Qaitbay's 28-year-long reign, the second longest in Mamluk history after al-Nasir Muhammad's, was marked by relative stability and prosperity. Historical sources present a sultan whose character was markedly different from other Mamluk rulers. Notably, he disliked engaging in conspiracy, even though this had been a hallmark of Mamluk politics. He had a reputation for being even-handed and treating his colleagues and subordinates fairly, exemplified by his magnanimous treatment of the deposed Timurbugha. These traits seem to have kept internal tensions and conspiracies at bay throughout his reign. While the Mamluk practices of confiscation, extortion, and bribery continued in fiscal matters, under Qaitbay they were practiced in a more systematic way that allowed individuals and institutions to function within a more predictable environment. His engagement with the civil bureaucracy and the ulema (Islamic jurists and scholars) appeared to reflect a genuine commitment to Sunni Islamic law. He was one of the most prolific Mamluk patrons of architecture, second only to al-Nasir Muhammad, and his patronage of religious and civic buildings extended to the provinces beyond Cairo. Nonetheless, Qaitbay operated in an environment of recurring plague epidemics that underpinned a general population decline. 
Agriculture suffered, the treasury was often stretched thin, and by the end of his reign the economy was still weak. The challenges to Mamluk dominance abroad were also mounting, particularly to the north. Shah Suwar, the leader of the Dulkadirid principality in Anatolia, benefited from Ottoman support and was an excellent military tactician. Meanwhile, Qaitbay supported the ruler of the Karamanid principality, Ahmad. Initially, the Mamluks failed in a series of campaigns against Shah Suwar. The tide turned in 1470–1471 when an agreement was reached between Qaitbay and Mehmed II, by which Qaitbay stopped supporting the Karamanids and the Ottomans stopped supporting the Dulkadirids. Now without Ottoman support, Shah Suwar was defeated in 1471 by a Mamluk expedition led by Qaitbay's senior field commander, Yashbak min Mahdi. Shah Suwar held out in his fortress near Zamantı before agreeing to surrender himself if his life was spared and he was allowed to remain as a vassal. In the end, Qaitbay was unwilling to let him live; Shah Suwar was betrayed, brought to Cairo, and executed. Shah Budaq was installed as his replacement and as a Mamluk vassal, though the Ottoman-Mamluk rivalry over the Dulkadirid throne continued. The next challenge to Qaitbay was the rise of the Aq Qoyunlu leader Uzun Hasan. The latter led an expedition into Mamluk territory around Aleppo in 1472, but was routed by Yashbak. The next year, Uzun Hasan was more resoundingly defeated in battle against Mehmed II near Erzurum. His son and successor, Ya'qub, resorted to inviting Yashbak min Mahdi to participate in a campaign against Edessa. As this avoided any challenge against Qaitbay's authority, Yashbak accepted. Although initially successful, he was killed during the siege of the city, thus depriving Qaitbay of his most important field commander. In 1489, the Republic of Venice annexed Cyprus. 
The Venetians promised Qaitbay their occupation would benefit him as well, as their large fleet could keep the peace in the eastern Mediterranean better than the Cypriots could. Venice also agreed to continue the Cypriots' yearly tribute of 8,000 ducats to Cairo. A treaty signed between the two powers in 1490 formalized this arrangement. It was a sign that the Mamluks were now depending partly on the Venetians for naval security. With the death of Mehmed II in 1481 and the accession of his son, Bayezid II, to the Ottoman throne, Ottoman-Mamluk tensions escalated. Bayezid's claim to the throne was challenged by his brother, Jem. The latter fled into exile and Qaitbay granted him sanctuary in Cairo in September 1481. Qaitbay eventually allowed him to return to Anatolia to lead a new attempt against Bayezid. This venture failed and Jem fled into exile again, this time into Christian hands to the west. Bayezid interpreted Qaitbay's welcome to Jem as direct support for the latter's cause and was furious. Qaitbay also supported the Dulkadirid leader, Ala al-Dawla (who had replaced Shah Budaq), against the Ottomans, but Ala al-Dawla was compelled to shift his loyalty to Bayezid c. 1483 or 1484, which soon triggered an Ottoman–Mamluk war fought over the next six years. By 1491, both sides were exhausted and an Ottoman embassy arrived in Cairo in the spring. An agreement was concluded and the status quo ante bellum was reaffirmed. During the rest of Qaitbay's reign, no further external conflicts took place. Qaitbay's death on 8 August 1496 inaugurated several years of instability. Eventually, following several brief reigns by other candidates, Qansuh al-Ghuri (or al-Ghawri) was placed on the throne in 1501. Al-Ghuri secured his position over several months and appointed new figures to key posts. His nephew, Tuman Bay, was appointed dawadar and his second in command. 
In Syria, al-Ghuri appointed Sibay, a former rival who had opposed him in 1504–1505, as governor of Damascus in 1506. Sibay remained a major figure during al-Ghuri's reign, but he acknowledged Cairo's suzerainty and helped to keep the peace. Al-Ghuri is often viewed negatively by historical commentators, particularly Ibn Iyas, for his draconian fiscal policies. He inherited a state beset by financial problems. In addition to the demographic and economic changes under his predecessors, changes in the organisation of the Mamluk military over time had also resulted in large numbers of soldiers feeling alienated and repeatedly threatening to revolt unless given extra payments, which drained the state's finances. To address the shortfalls, al-Ghuri resorted to heavy-handed and far-reaching taxation and extortion to refill the treasury, which elicited protests that were sometimes violent. He used the raised funds to repair fortresses throughout the region, to commission his own construction projects in Cairo, and to purchase a large number of new mamluks to fill his military ranks. Al-Ghuri also attempted reforms of the Mamluk military. He recognized the impact of the gunpowder technology used by the Ottomans and Europeans but eschewed by the Mamluks. In 1507, he established a foundry to produce cannons and created a new regiment trained to use them, known as the 'Fifth Corps' (al-Ṭabaqa al-Khamisa). The latter's ranks were filled with recruits from outside the traditional mamluk system, including Turkmens, Persians, awlad al-nas, and craftsmen. The traditional mamluk army, however, regarded firearms with contempt and vigorously resisted their incorporation into Mamluk warfare, which prevented al-Ghuri from making effective use of them until the end of his reign. In the meantime, Shah Ismail I had emerged in 1501 and forged the Safavid Empire in Iran. The Safavids styled themselves as champions of Twelver Shi'ism, in direct opposition to the Sunnism of the Mamluks and Ottomans. 
Tensions along this frontier encouraged al-Ghuri to rely more on the Ottomans for aid, a policy that the Venetians ultimately also urged him to follow in order to counter their common foe, the Portuguese. The latter's expansion into the Indian Ocean was one of the major concerns of al-Ghuri's time. In 1498, the Portuguese navigator Vasco da Gama had circumnavigated Africa and reached India, thus opening a new route for European trade with the east which bypassed the Middle East. This posed a serious threat to Muslim commerce, which was dominant in the area, as well as to the prosperity of Venice, which relied on trade passing from the Indian Ocean to the Mediterranean through Mamluk lands. For more than a decade, a series of confrontations took place between Portuguese forces in the Indian Ocean and Muslim expeditions sent against them. A Mamluk fleet of fifty ships left from Jeddah in 1506, with the assistance of forces from the Gujarat Sultanate. It defeated the Portuguese in 1507 but lost at the Battle of Diu in 1509. In 1515, a joint Ottoman-Mamluk fleet set out under the leadership of Salman Ra'is, but ultimately it did not accomplish much. Selim I, the new Ottoman sultan, defeated the Safavids decisively at the Battle of Chaldiran in 1514. Soon after, he attacked and defeated the Dulkadirids, a Mamluk vassal, for refusing to aid him against the Safavids. Now secure against Ismail I, Selim assembled a great army in 1516 with the aim of conquering Egypt, though he presented the mobilisation as part of the war against Ismail I to obscure his intent. The war that began in 1516 led to the incorporation of Egypt and its dependencies into the Ottoman Empire, with Mamluk cavalry proving no match for the Ottoman artillery and the janissaries. On 24 August 1516, at the Battle of Marj Dabiq, the Ottomans were victorious against an army led by al-Ghuri himself. 
Khayr Bak, the governor of Aleppo, had secretly conspired with Selim and betrayed al-Ghuri, leaving with his troops part-way during the battle. In the subsequent chaos, al-Ghuri was killed. The surviving Mamluk forces returned to Aleppo but were denied entry to the city and marched back to Egypt, harassed along the way. Syria passed into Ottoman possession, and the Ottomans were welcomed in many places as deliverance from the Mamluks. The Mamluk Sultanate survived a little longer until 1517. Tuman Bay, whom al-Ghuri had left as deputy in Cairo, was hastily and unanimously proclaimed sultan on 10 October 1516. The emirs rejected his plan to confront the next Ottoman advance at Gaza, so instead he prepared a final defense at al-Raydaniyya to the north of Cairo. In the early days of 1517, Tuman Bay received news that a Mamluk army was defeated at Gaza. The Ottoman attack at al-Raydaniyya overwhelmed the defenders on 22 January 1517 and reached Cairo. Over the following days, furious fighting continued between Mamluks, locals, and Ottomans, resulting in much damage to the city and three days of pillaging. Selim proclaimed an amnesty on 31 January, at which point many of the remaining Mamluks surrendered. Tuman Bay fled to Bahnasa in Middle Egypt with some of his remaining forces. Selim initially offered the Mamluk sultan peace as an Ottoman vassal, but his messengers were intercepted and killed by mamluks. Tuman Bay, with 4,000 cavalry and some 8,000 infantry, confronted the Ottomans in a final bloody battle near Giza on 2 April 1517, where he was defeated and captured. Selim intended to spare him, but Khayr Bak and Janbirdi al-Ghazali, another former Mamluk commander, persuaded the Ottoman sultan that Tuman Bay was too dangerous to keep alive. Accordingly, the last Mamluk sultan was executed by hanging at Bab Zuwayla, one of Cairo's gates, on 13 April 1517. In reward for his betrayal at Marj Dabiq, Selim installed Khayr Bak as Ottoman governor of Egypt. 
Janbirdi was appointed governor of Damascus. While the Mamluk Sultanate ceased to exist with the Ottoman conquest and the recruitment of Royal Mamluks ended, the mamluks as a military-social class continued to exist. They constituted a "self-perpetuating, largely Turkish-speaking warrior class" that continued to influence politics under Ottoman rule. They existed as military units in parallel with the more strictly Ottoman regiments like the janissaries and the azabs. The difference between these Ottoman regiments and the Egyptian mamluk regiments became blurred over time as intermarriage became common, resulting in a more mixed social class. During this period, a number of mamluk 'households' formed, with a complex composition including both true mamluks and awlad al-nas, who could also rise to high ranks. Each household was headed by an ustadh, who could be an Ottoman officer or a local civilian. Their patronage extended to include retainers recruited from other Ottoman provinces as well as allies among the local urban population and tribes. Up to the early 17th century, the vast majority of Egyptian mamluks were still of Caucasian or Circassian origin. In the later 17th and 18th centuries, mamluks from other parts of the Ottoman Empire or its frontiers, such as Bosnia and Georgia, began to appear in Egypt. Throughout the Ottoman period, powerful mamluk households and factions struggled for control of important political offices and of Egypt's revenues. Between 1688 and 1755, mamluk beys, allied with Bedouin and factions within the Ottoman garrison, deposed at least thirty-four governors. The mamluks remained a dominating force in Egyptian politics until their final elimination at the hands of Muhammad Ali in 1811.
Society
By the time the Mamluks took power, Arabic had already been established as the language of religion, culture and the bureaucracy in Egypt, and was widespread among non-Muslim communities there as well. 
Arabic's wide usage among Muslim and non-Muslim commoners had likely been motivated by their aspiration to learn the language of the ruling and scholarly elite. Another contributing factor was the wave of Arab tribal migration to Egypt and subsequent intermarriage between Arabs and the indigenous population. The Mamluks contributed to the expansion of Arabic in Egypt through their victory over the Mongols and the Crusaders and the subsequent creation of a Muslim haven in Egypt and Syria for Arabic-speaking immigrants from other conquered Muslim lands. The continuing invasions of Syria by Mongol armies led to further waves of Syrian immigrants, including scholars and artisans, to Egypt. Although Arabic was used as the administrative language of the sultanate, a variety of Kipchak Turkic, namely the Mamluk-Kipchak language, was the spoken language of the Mamluk ruling elite. According to Petry, "the Mamluks regarded Turkish as their caste's vehicle of communication, even though they themselves spoke Central Asian dialects such as Qipjak, or Circassian, a Caucasic language." According to historian Michael Winter, Turkishness was the distinctive aspect of the Mamluk ruling elite, for only they knew how to speak Turkish and had Turkish names. While the Mamluk elite was ethnically diverse, those who were not Turkic in origin were Turkicized nonetheless. As such, the ethnically Circassian mamluks who gained prominence with the rise of the Burji regime and became the dominant ethnic element of the government were educated in the Turkish language and were considered to be Turks by the Arabic-speaking population. Kipchak Turkish was also used in writing, but to a lesser extent than Arabic and mainly for a mamluk audience. Over time, it was replaced in this role by Oghuz Turkish due to the growing influence of Turkish Anatolia. The ruling military elite of the sultanate was exclusive to those of mamluk background, with rare exceptions. 
Ethnicity served as a major factor separating the mostly Turkic or Turkicized Mamluk elite from their Arabic-speaking subjects. Ethnic origin was a key component of an individual mamluk's identity, and ethnic identity manifested itself through given names, dress, access to administrative positions, and a sultan's nisba. The sons of mamluks, known as the awlad al-nas, did not typically hold positions in the military elite; instead, they were often part of the civilian administration or the Muslim religious establishment. Among the Bahri sultans and emirs, there existed a degree of pride in their Kipchak Turkish roots, and their non-Kipchak usurpers, such as sultans Kitbuqa, Baybars II and Lajin, were often de-legitimized in the Bahri-era sources for their non-Kipchak origins. The Mamluk elites of the Burji period were also apparently proud of their Circassian origins. A wide range of Islamic religious expression existed in Egypt during the early Mamluk era, namely Sunni Islam and its major madhabs (schools of jurisprudence) and different Sufi orders, but also small communities of Isma'ili Shia Muslims, particularly in Upper Egypt. There remained a significant minority of Coptic Christians. Under Saladin, the Ayyubids embarked on a program of reviving and strengthening Sunni Islam in Egypt to counter Christianity, which had been reviving under the religiously benign rule of the Fatimids, and Isma'ilism, the branch of Islam of the Fatimid state. Under the Bahri sultans, the promotion of Sunni Islam was pursued more vigorously than under the Ayyubids. The Mamluks were motivated by personal piety or political expediency, for Islam was both an assimilating and unifying factor between the Mamluks and the majority of their subjects; the early mamluks had been brought up as Sunni Muslims and the Islamic faith was the only aspect of life shared between the Mamluk ruling elite and its subjects. 
While the precedent set by the Ayyubids highly influenced the Mamluk state's embrace of Sunni Islam, the circumstances in the Muslim Middle East in the aftermath of the Crusader and Mongol invasions also left Mamluk Egypt as the last major Islamic power able to confront the Crusaders and the Mongols. Thus, the early Mamluk embrace of Sunni Islam also stemmed from the pursuit of a moral unity within their realm based on the majority views of its subjects. The Mamluks cultivated and utilized Muslim leaders to channel the religious feelings of their Muslim subjects in a manner that did not disrupt the sultanate's authority. Like their Ayyubid predecessors, the Bahri sultans favored the Shafi'i madhab, while additionally promoting the other major Sunni madhabs, namely the Maliki, Hanbali and Hanafi. Baybars ended the Ayyubid and early Mamluk tradition of selecting a Shafi'i scholar as qadi al-qudah (chief judge) and instead appointed a qadi al-qudah from each of the four madhabs. This policy was partly motivated by the need to accommodate an increasingly diverse Muslim population whose components had immigrated to Egypt from regions where other madhabs prevailed. The diffusion of the post of qadi al-qudah enabled Mamluk sultans to patronize each madhab and gain more influence over them. Nevertheless, the Shafi'i scholars kept a number of privileges over their counterparts. The Mamluks embraced the Sufi orders in the empire. Sufism was widespread in Egypt by the 13th century, and the Shadhiliyya was the most popular order. The Shadhiliyya lacked an institutional structure and was flexible in its religious thought, allowing it to easily adapt to its local environment. It incorporated Sunni Islamic piety with its basis in the Qur'an and hadith, Sufi mysticism, and elements of popular religion such as sainthood, ziyarat (visitation) to the tombs of saintly or religious individuals, and dhikr (invocation of God). 
Other Sufi orders with large numbers of adherents were the Rifa'iyya and Badawiyya. While the Mamluks patronized the Sunni ulema through appointments to government office, they patronized the Sufis by funding zawiyas (Sufi lodges). On the other end of the spectrum of Sunni religious expression were the teachings of the Hanbali scholar Ibn Taymiyya, which emphasized stringent moral rigor based on literal interpretations of the Qur'an and the Sunna, and a deep hostility to the aspects of mysticism and popular religious innovations promoted by the Sufis. While Ibn Taymiyya was not a typical representative of Sunni orthodoxy in the sultanate, he was the most prominent Muslim scholar of the Mamluk era and was arrested several times by the Mamluks for his religious teachings, which are still influential in the modern Muslim world. Ibn Taymiyya's doctrines were regarded as heretical by the Sunni establishment patronized by the Mamluks. Christians and Jews in the empire were governed by the dual authority of their respective religious institutions and the sultan. The authority of the former extended to many of the everyday aspects of Christian and Jewish life and was not restricted to the religious practices of the two communities. The Mamluk government, often under the official banner of the Pact of Umar, which gave Christians and Jews dhimmi (protected peoples) status, determined the taxes paid by Christians and Jews, including the jizya (poll tax on non-Muslims), permission to construct houses of worship, and the public appearance of Christians and Jews. Jews generally fared better than Christians, and the latter experienced more difficulties under the Mamluks than under their Muslim predecessors. 
The association of Christians with the Mongols, due to the latter's use of Armenian and Georgian Christian auxiliaries, the attempted alliance between the Mongols and the Crusader powers, and the massacres of Muslim communities and the sparing of Christians in cities captured by the Mongols, contributed to rising anti-Christian sentiments in the Mamluk era. Manifestations of anti-Christian hostility were mostly driven at the popular level rather than by the Mamluk sultans. The main source of popular hostility was resentment at the privileged positions many Christians held in the Mamluk bureaucracy. The Coptic decline in Egypt occurred under the Bahri sultans and accelerated further under the Burji regime. There were several instances of Egyptian Muslim protests against the wealth of Copts and their employment with the state, and both Muslim and Christian rioters burned down each other's houses of worship during intercommunal clashes. As a result of popular pressure, Copts had their employment in the bureaucracy terminated at least nine times between the late 13th and mid-15th centuries, and on one occasion, in 1301, the government ordered the closure of all churches. Coptic bureaucrats were often restored to their positions after tensions passed. Many Copts were forced to convert to Islam, or at least adopted outward expressions of the Muslim faith, to protect their employment and to avoid the jizya and official measures against them. A large wave of Coptic conversions to Islam occurred in the 14th century, a result of persecution and the destruction of churches, as well as of the desire to retain employment. By the end of the Mamluk period, the ratio of Muslims to Christians in Egypt may have risen to 10:1. In Syria, the Mamluks uprooted the local Maronite and Greek Orthodox Christians from the coastal areas to prevent their contact with European powers. 
The Maronite Church was especially suspected by the Mamluks of collaboration with the Europeans due to the close relations between the Maronite Church and the papacy in Rome and the Christian European powers, particularly Cyprus. The Greek Orthodox Church declined after the Mamluk destruction of its spiritual center, Antioch, and the Timurid destruction of Aleppo and Damascus in 1400. The Syriac Christians also declined significantly in Syria due to intra-communal disputes over patriarchal succession and the destruction of churches by the Timurids or local Kurdish tribes. The Mamluks inaugurated a similar decline of the Armenian Orthodox Church after their conquest of Cilicia in 1374, compounded by the raids of the Timurids in 1386 and the conflict between the Timurids and the Aq Qoyunlu and Kara Qoyunlu tribal confederations in Cilicia. Bedouins were a reserve force in the Mamluk military. During the third reign of al-Nasir Muhammad, the Bedouin tribes, particularly those of Syria, such as the Al Fadl, were strengthened and integrated into the economy. Bedouin tribes were also a major supplier of the Mamluk cavalry's Arabian horses. Qalawun purchased horses from the Bedouin of Barqa, which were inexpensive but of high quality, while al-Nasir Muhammad spent extravagantly for horses from Bedouins in Barqa, Syria, Iraq and Bahrayn (eastern Arabia). Baybars and Qalawun, and the Syrian viceroys of al-Nasir Muhammad during his first two reigns, emirs Salar and Baybars II, were averse to granting Bedouin sheikhs iqtaʿat, and when they did, the iqtaʿat were of low quality. During al-Nasir Muhammad's third reign, the Al Fadl were granted high-quality iqtaʿat in abundance, strengthening the tribe to become the most powerful among the Bedouin of the Syrian Desert. 
Beyond his personal admiration of the Bedouin, al-Nasir Muhammad distributed iqtaʿat to the Al Fadl to prevent their defection to the Ilkhanate, which the Al Fadl had frequently done during the early 14th century. Competition over iqtaʿat and the post of amir al-ʿarab (chief commander of the Bedouin) in Syria led to conflict and rebellion among the tribes, culminating in mass bloodshed in Syria in the aftermath of al-Nasir Muhammad's death. The Mamluk leadership in Syria, weakened by the losses of the Black Plague, was unable to quell the Bedouin through military expeditions, so they resolved to assassinate the chiefs of the tribes. The Al Fadl eventually lost favor, to the advantage of the Bedouin tribes around al-Karak, under later Bahri sultans. In Egypt, during al-Nasir Muhammad's third reign, the Mamluks had a similar relationship with the Bedouin. The Isa Ibn Hasan al-Hajjan tribe became powerful there after being assigned massive iqtaʿat. The tribe remained strong after al-Nasir Muhammad's death, but frequently rebelled against the succeeding Bahri sultans. They were restored after each rebellion, before the tribe's sheikh was finally executed in 1353. In Sharqiya in Lower Egypt, the Tha'laba tribes were entrusted to supervise the postal routes, but were often unreliable and joined the Al A'id tribe during their raids. Bedouin tribal wars frequently disrupted trade and travel in Upper Egypt, and destroyed cultivated lands and sugar processing plants. In the mid-14th century, the rival Arak and Banu Hilal tribes of Upper Egypt became de facto rulers of the region, forcing the Mamluks to rely on them for tax collection. The Bedouin were purged from Upper and Lower Egypt by the campaigns of Shaykhu in 1353. Government The Mamluks did not significantly alter the administrative, legal and economic systems they inherited from the Ayyubid state. The Mamluks ruled over essentially the same territory as the Ayyubid state, i.e. Egypt, Syria and the Hejaz. 
Unlike the collective sovereignty of the Ayyubids, where territory was divided among members of the royal family, the Mamluk state was unitary. Under many Ayyubid sultans, Egypt had paramountcy over the Syrian provinces, but under the Mamluks this paramountcy was consistent and absolute. Cairo remained the capital of the empire and its social, economic and administrative center, with the Citadel of Cairo serving as the sultan's headquarters. The Mamluk sultan was the supreme government authority, while he delegated power to provincial governors known as nuwwab al-saltana (deputy sultans, sing. na'ib al-saltana). The vice-regent of Egypt was the top na'ib, followed by the na'ib of Damascus, then Aleppo, then the nuwwab of al-Karak, Safed, Tripoli, Homs and Hama. In Hama, the Mamluks permitted the Ayyubids to continue governing until 1341 (its popular governor in 1320, Abu'l Fida, was granted the honorary title of sultan by al-Nasir Muhammad), but otherwise the nuwwab of the provinces were mamluk emirs. A consistent accession process occurred with every new sultan. It mostly involved an election by a council of emirs and mamluks (who would proffer an oath of loyalty), the sultan's assumption of the regal title al-malik, a state-organized procession through Cairo led by the sultan, and the reading of the sultan's name in the khutba (Friday prayer sermon). The process was not formalized and the electoral body was never defined, but it typically consisted of the emirs and mamluks of whichever Mamluk faction held sway; usurpations of the throne by rival factions were relatively common. Despite the electoral nature of accession, dynastic succession was nonetheless a reality at times, especially during the Bahri period, when Baybars' sons Baraka and Solamish succeeded him, before Qalawun usurped the throne and was thereafter succeeded by four generations of direct descendants, with occasional interruptions. Hereditary rule was much less frequent under the Burji regime. 
Nonetheless, with rare exception, the Burji sultans were all linked to the regime's founder Barquq through blood or mamluk affiliation. The accession of blood relatives to the sultanate was often the result of the decision or indecision of leading Mamluk emirs or the will of the preceding sultan. The latter situation applied to the sultans Baybars, Qalawun, the latter's son, al-Nasir Muhammad and Barquq, who formally arranged for one or more of their sons to succeed them. More often than not, the sons of sultans were elected by the senior emirs with the intention that they serve as convenient figureheads presiding over an oligarchy of the emirs. Lesser-ranked emirs viewed the sultan as a peer whom they entrusted with ultimate authority and as a benefactor whom they expected to guarantee their salaries and monopoly on the military. When emirs felt the sultan was not ensuring their benefits, disruptive riots, coup plots or delays to calls for service were all likely scenarios. Often, the practical restrictions on a sultan's power came from his own khushdashiyya, defined by historian Amalia Levanoni as "the fostering of a common bond between mamluks who belonged to the household of a single master and their loyalty towards him." The foundation of Mamluk organization and factional unity was based on the principles of khushdashiyya, which was a crucial component of a sultan's authority and power. The sultan also derived power from other emirs, with whom there was constant tension, particularly in peacetime. According to Holt, the factious nature of emirs who were not the sultan's khushdashiyya stemmed from their primary loyalty being to their ustadh. Emirs who were part of the sultan's khushdashiyya also rebelled at times, particularly the nuwwab of Syria who had power bases in their provinces. 
Typically, the faction most loyal to the sultan were the Royal Mamluks, particularly those whom the sultan had personally recruited and manumitted, as opposed to the qaranis, who were recruited by his predecessors. The qaranis occasionally constituted a hostile faction, such as with as-Salih Ayyub and the Qalawuni successors of al-Nasir Muhammad. Among the sultan's responsibilities were issuing and enforcing specific legal orders and general rules, making the decision to go to war, levying taxes for military campaigns, ensuring the proportionate distribution of food supplies throughout the empire and, in some cases, overseeing the investigation and punishment of alleged criminals. The sultan or his appointees led the Hajj caravans from Cairo and Damascus to Mecca in the capacity of amir al-hajj (commander of the Hajj caravan). Starting with Qalawun, the sultans monopolized the provision of the Kiswa (mantle) that was annually draped over the Kaaba, in addition to patronizing Jerusalem's Dome of the Rock. Another prerogative, at least of the early Bahri sultans, was to import as many mamluks as possible, preferably those from the territories of the Mongols. The Mamluks' enemies, namely the Mongol states and their Muslim vassals, the Armenians, and the Crusaders, disrupted the flow of mamluks to the sultanate. Unable to meet the military's need for new mamluks, the sultans often resorted to recruiting wafidiyya (Ilkhanid deserters or prisoners of war). To legitimize their rule, the Mamluks presented themselves as the defenders of Islam, and, beginning with Baybars, sought confirmation of their executive authority from a caliph. The Ayyubids had owed their allegiance to the Abbasid Caliphate, but the latter was destroyed when the Mongols sacked the Abbasid capital Baghdad in 1258 and killed Caliph al-Musta'sim. 
Three years later, Baybars reestablished the institution of the caliphate by making a member of the Abbasid dynasty, al-Mustansir, caliph, who in turn confirmed Baybars as sultan. The caliph recognized the sultan's authority over Egypt, Syria, the Jazira, Diyar Bakr, the Hejaz and Yemen and any territory conquered from the Crusaders or Mongols. Al-Mustansir's Abbasid successors continued in their official capacity as caliphs, but held no real power. The less than year-long reign of Caliph al-Musta'in as sultan in 1412 was an anomaly. In an anecdotal testament to the caliph's lack of real authority, a group of rebellious mamluks responded to Lajin's presentation of the Caliph al-Hakim's decree asserting Lajin's authority with the following comment, recorded by Ibn Taghribirdi: "Stupid fellow. For God's sake—who pays any heed to the caliph now?" The Abbasid presence was nonetheless an important political asset for the legitimacy of the Mamluk rulers and conferred significant prestige on them. The caliphs themselves also continued to be relevant figureheads even to other Muslim rulers until the end of the 14th century; for example, the sultans of Delhi, the Muzaffarid sultan Muhammad, the Jalayirid sultan Ahmad, and the Ottoman sultan Bayezid I all sought diplomas of investiture from the Abbasid caliphs or declared nominal allegiance to them. During the 15th century, however, the institution of the caliphate declined in importance and the caliphs became little more than religious dignitaries who visited the sultan on special occasions. Among other changes, this was exemplified by a shift in the caliph's ceremonial role during the accession of a Mamluk sultan to power: whereas Baybars formally pledged an oath of allegiance (bay'ah) to the Abbasid caliph al-Mustansir in 1261, some or most later caliphs formally performed a pledge of allegiance to the Mamluk sultan instead. 
The sultans were products of the military hierarchy, entry into which was essentially restricted to mamluks. Awlad al-nas could enter and rise high within the hierarchy, but typically did not enter military service. Instead, many entered into mercantile, scholastic or other civilian careers. The army Baybars inherited consisted of Kurdish and Turkic tribesmen, refugees from the Ayyubid armies of Syria, and other troops from armies dispersed by the Mongols. After the Battle of Ain Jalut, Baybars restructured the army into three components: the Royal Mamluk regiment, the soldiers of the emirs, and the halqa (non-mamluk soldiers). The Royal Mamluks were under the direct command of the sultan and formed the highest-ranking body within the army, entry into which was exclusive. The lower-ranking emirs had their own corps, akin to private armies, which were also mobilized by the sultan as needed. As emirs were promoted, the number of soldiers in their corps increased, and when rival emirs challenged each other's authority, they often utilized their forces, leading to major disruptions of civilian life. The halqa had inferior status to the mamluk regiments. It had its own administrative structure and was under the direct command of the sultan. The halqa regiments declined in the 14th century when professional non-mamluk soldiers generally stopped joining the force. One of Baybars's early reforms was creating a clear and permanent hierarchy, a system which the Ayyubids had lacked. To that end, he established a ranking system for emirs of ten, forty and one hundred, each rank indicating the number of mamluks assigned to an emir's command. An emir of one hundred could further be assigned one thousand mounted soldiers during battle. Baybars instituted uniformity within the army and ended the improvised nature of the Ayyubid forces in Egypt and Syria. Baybars and Qalawun standardized the undefined Ayyubid policies of distributing iqtaʿat to emirs. 
This reform created a clear link between an emir's rank and the size of his iqtaʿ. Baybars started biweekly inspections of the troops to verify sultanic orders were implemented, in addition to the periodic inspections where he distributed new arms to the troops. Beginning under Qalawun, the sultan and the military administration recorded all emirs in the empire and defined their roles as part of the right or left flanks of the army during wartime. Gradually, as mamluks filled administrative and courtier posts within the state, Mamluk innovations to the Ayyubid hierarchy were developed. The offices of ustadar (majordomo), hajib (chamberlain), amir jandar (commander of the arsenal) and khazindar (treasurer), which existed during the Ayyubid period, were preserved, but Baybars added the offices of dawadar (secretary or adviser), amir akhur (commander of the royal stables), ru'us al-nawab (chief of the mamluk corps) and amir majlis (commander of the audience). These additional offices were largely ceremonial posts and were closely connected to the military hierarchy. The ustadar (from the Arabic ustadh al-dar, lit. 'master of the house') was the sultan's chief of staff, responsible for organizing the royal court's daily activities, managing the sultan's personal budget, and supervising all of the buildings of the Citadel of Cairo and its staff. The ustadar was often referred to as the ustadar al-aliya (grand master of the house) to distinguish from his subordinate ustadar saghirs (lesser majordomos) who oversaw specific aspects of the court and citadel, such as the sultan's treasury, private property, and the kitchens of the citadel. Emirs had their own ustadars. The ustadar al-aliya became a powerful office in the late 14th century, particularly under Barquq and al-Nasir Faraj, who transferred the responsibilities of the special bureau for their mamluks to the authority of the ustadar, turning the latter into the state's chief financial official. 
Economy The Mamluk economy essentially consisted of two spheres: the state economy, which was organized like an elite household and controlled by the caste government headed by the sultan, and the free market economy, which was the domain of society and associated with the local subjects, in contrast to the ethnic outsiders of the ruling elite. The Mamluks introduced greater centralization of the economy by organizing the state bureaucracy in Cairo (Damascus and Aleppo already had organized bureaucracies), and the military hierarchy and its associated iqtaʿ system. In Egypt, the centrality of the Nile River facilitated Mamluk centralization of the region. The Mamluks used the same currency system as the Ayyubids, consisting of gold dinars, silver dirhams and copper fulus. The monetary system during the Mamluk period was highly unstable due to frequent monetary changes enacted by the sultans. Increased circulation of copper coins and the increased use of copper in dirhams often led to inflation. The Mamluks created an administrative body called the hisba to supervise the market, with a muhtasib (inspector-general) in charge. There were four muhtasibs, based in Cairo, Alexandria, al-Fustat and Lower Egypt. The muhtasib in Cairo was the most important, and his position was akin to that of a finance minister. The muhtasib inspected weights and measures and the quality of goods, maintained legal trade, and detected price gouging. A qadi or Muslim scholar occupied the post, but in the 15th century, mamluk emirs began to be appointed as muhtasibs to recompense them during cash shortages or as a result of the gradual shift of the muhtasib's role from the legal realm to enforcement. The iqtaʿ system was inherited from the Ayyubids and further organized under the Mamluks to fit their military needs. Iqtaʿat were a central component of the Mamluk power structure. 
The iqtaʿ of the Muslims differed from the European concept of the fief in that the iqtaʿ represented a right to collect revenue from a fixed territory and was accorded to an officer (an emir) as income and as a financial source to provision his soldiers. Before the Mamluks' rise, there was a growing tendency of iqtaʿ holders to treat their iqtaʿ as personal, heritable property. The Mamluks effectively ended this practice, with the exception of some areas, mainly in Mount Lebanon, where longtime Druze iqtaʿ holders (see Buhturids), who became part of the halqa, successfully resisted the abolition of their hereditary iqtaʿat. In the Mamluk era, the iqtaʿ was an emir's main income source, and, starting in 1337, iqtaʿ holders sometimes leased or sold rights to their iqtaʿat to non-mamluks to extract more profits. By 1343 the practice was commonplace, and by 1347 the sale of iqtaʿat was taxed. The iqtaʿ was a more stable revenue source than other methods the Mamluks employed, such as tax hikes, the sale of administrative offices, and extortion of the population. According to historian Jo van Steenbergen, "The iqtaʿ system was fundamental in assuring a legitimized, controlled and guaranteed access to the resources of the Syro-Egyptian realm to an upper level of Mamluk society that was primarily military in form and organization. As such it was a fundamental feature of Mamluk society, on the one hand giving way to a military hierarchy that crystallized into an even more developed economic hierarchy and that had substantial economic interests in society at large; on the other hand, it deeply characterized the realm's economic and social development, its agriculture, grain trade, and rural demography in particular." The system consisted of land assignments from the state in return for military services. 
Land was assessed by the periodic rawk (cadastral survey), which comprised a survey of land parcels (measured in feddan units), an assessment of land quality and of the parcels' estimated annual tax revenue, and the classification of each parcel's legal status as waqf (endowment) or iqtaʿ. The rawk organized the iqtaʿ system; the first was carried out in 1298 under Lajin. A second and final rawk was completed in 1315 under al-Nasir Muhammad and influenced the political and economic developments of the Mamluk Sultanate until its fall in the early 16th century. Gradually, the iqtaʿ system was expanded, and increasingly larger areas of kharaj (taxable lands) were appropriated as iqtaʿ lands to meet the fiscal needs of the military, namely the payment of emirs and their subordinates. The state resolved to increase allotments by dispersing an emir's iqtaʿat across several provinces and granting them for short terms. This led to iqtaʿ holders neglecting the administrative oversight, maintenance, and infrastructure of their iqtaʿat and concentrating solely on collecting taxes, resulting in less productivity. Agriculture was the primary source of revenue in the Mamluk economy. Agricultural products were the main exports of Mamluk Egypt, Syria and Palestine. Moreover, the major industries of sugar and textile production depended on crops (sugar cane and cotton). Every agricultural commodity was taxed by the state, with the sultan's treasury taking the largest share of the revenues; emirs and major private brokers followed. An emir's main source of income was the agricultural produce of his iqtaʿ. In Egypt, Mamluk centralization of agricultural production was more thorough than in Syria and Palestine. 
All agriculture in Egypt depended on a single source of irrigation, the Nile, and the measures and rights to irrigation were determined by the river's flooding, whereas in Syria and Palestine, there were multiple sources of mostly rain-fed irrigation, and measures and rights were determined at the local level. Centralization in Syria and Palestine was also more complicated than in Egypt due to the diversity of those regions' geography and their frequent invasions. The state's role in Syro-Palestinian agriculture was restricted to fiscal administration and to the irrigation networks and other rural infrastructure. Although the degree of centralization was not as high as in Egypt, the Mamluks imposed sufficient control over the Syrian economy to derive significant revenues. The maintenance of the Mamluk army in Syria relied on the state's control over Syrian agricultural revenues. Among the responsibilities of a Mamluk provincial or district governor were repopulating abandoned areas to foster agricultural production, protecting the lands from Bedouin raids, increasing productivity in barren lands (likely through the upkeep and expansion of existing irrigation networks), and devoting special attention to the cultivation of the more arable low-lying regions. To ensure rural life was undisturbed by Bedouin raiding, which disrupted agricultural work, damaged crops and agrarian infrastructure and thus decreased revenues, the Mamluks attempted to prevent Bedouin armament and to confiscate existing weapons from them. Egypt and Syria played a central transit role in international trade in the Middle Ages. Early in their rule, the Mamluks expanded the empire's role in foreign trade, with Baybars signing a commercial treaty with Genoa and Qalawun signing a similar agreement with Ceylon. 
By the 15th century, internal upheaval from Mamluk power struggles, diminishing iqtaʿ revenue from plagues, and the encroachment of Bedouin tribes on abandoned farmlands had led to a financial crisis in the sultanate. To compensate for these losses, the Mamluks applied a three-pronged approach: taxing the urban middle classes, boosting the production and sale of cotton and sugar to Europe, and profiting from their transit position in the trade between Europe and the Far East. The last was the Mamluks' most lucrative policy and was accomplished by cultivating trade ties with Venice, Genoa and Barcelona, and by increasing tariffs on commodities. At this time, the long-established trade between Europe and the Islamic world began to make up a significant part of state revenues as the Mamluks taxed the merchants operating in or passing through the empire's ports. Mamluk Egypt was a major producer of textiles and a supplier of raw materials for Western Europe. The frequent outbreaks of the Black Plague led to a decline in the production of textiles, silk products, sugar, glass, soaps, and paper, which coincided with the Europeans' increasing production of these goods. Trade nonetheless continued, despite papal restrictions on trade with the Muslims during the Crusades. Mediterranean trade was dominated by spices, such as pepper, muscat nuts and flowers, cloves and cinnamon, as well as medicinal drugs and indigo. These goods originated in Persia, India, and Southeast Asia and made their way to Europe via the Mamluk ports of Syria and Egypt. These ports were frequented by European merchants, who in turn sold gold and silver ducats and bullion, silk, wool and linen fabrics, furs, wax, honey, and cheeses. Under Barsbay, a state monopoly was established on luxury goods, namely spices, with the state setting prices and collecting a percentage of the profits. 
In 1387, Barsbay established direct control over Alexandria, the principal Egyptian commercial port, transferring its tax revenues to his personal treasury (diwan al-khass) instead of the imperial treasury, which was linked with the military's iqtaʿ system. In 1429, he ordered that the spice trade to Europe be conducted through Cairo before goods reached Alexandria, ending the direct transportation of spices from the Red Sea to Alexandria. In the late 15th and early 16th centuries, the Portuguese expansion into Africa and Asia significantly decreased the revenues of the Mamluk–Venetian monopoly on trans-Mediterranean trade. This contributed to and coincided with the fall of the sultanate. Culture Mamluk decorative arts—especially enameled and gilded glass, inlaid metalwork, woodwork, and textiles—were prized around the Mediterranean as well as in Europe, where they had a profound impact on local production. Mamluk glassware influenced the Venetian glass industry. Trade with Iran, India, and China was even more extensive, turning Mamluk cities into centers of both trade and consumption. Imported luxury goods from the East sometimes influenced local artistic vocabularies, as exemplified by the incorporation of Chinese motifs into both objects and architecture. The Mamluks themselves, as former slaves who rose through the ranks by their own efforts, were status-conscious patrons who commissioned luxury objects marked with emblems of their ownership. Architecture was the most significant form of Mamluk patronage, and numerous artistic objects were commissioned to furnish Mamluk religious buildings, such as glass lamps, Qur'an manuscripts, brass candlesticks, and wooden minbars. Decorative motifs in one art form were often applied in other art forms, including architecture. Patronage varied over time, but the two high points of the arts were the reigns of al-Nasir Muhammad and Qaitbay. Some art forms also varied in importance over time. 
For example, enameled glassware was a prominent industry during the first half of the Mamluk period but declined significantly in the 15th century. Most of the surviving examples of carpets, by contrast, date from the end of the Mamluk period. Ceramic production was relatively less important overall, in part because Chinese porcelains were widely available. In the art of manuscript decoration, the Qur'an was the book most commonly produced with a high degree of artistic elaboration. Cairo, Damascus, and Aleppo were among the main centers of manuscript production. Mamluk-period Qur'ans were richly illuminated and exhibit stylistic similarities with those produced under the contemporary Ilkhanids in Iran. The production of high-quality paper at this time also allowed for pages to be larger, which encouraged artists to elaborate new motifs and designs to fill these larger formats. Some manuscripts could be monumental in size; for example, one Qur'an manuscript produced for al-Ashraf Sha'ban measured 75 by 105 centimetres. One of the stylistic features that distinguished Mamluk manuscript decoration was the presence of gilded foliate scrollwork over pastel-coloured backgrounds set within wide margins. Frontispieces were often decorated with star-shaped or hexagonal geometric motifs. Metalware, whether in the form of ewers, basins, or candlesticks, was widely used in various contexts and many examples have survived today. They were made of brass or bronze with inlaid decoration, though in the later periods decoration was often engraved rather than inlaid. The quality and quantity of metalwork was also generally higher in the early period. One of the best examples of this period is the so-called Baptistère of Saint-Louis (kept at the Louvre today), a large brass basin inlaid with arabesques and horizontal scenes of animals, hunters, and riders playing polo. 
An example of the later period is a series of candlesticks commissioned by Qaitbay for Muhammad's tomb in the Prophet's Mosque in Medina. They are made of engraved brass, with black bitumen filling parts of the surfaces in order to create contrast with the motifs in polished brass. Their decoration consists almost entirely of Arabic calligraphy, with the thuluth script prominently used. Glass lamps were another high point of Mamluk art, particularly those commissioned for mosques. Egypt and Syria already possessed a rich tradition of glassmaking before this period and Damascus was the most important production center during the Mamluk period. Coloured glass had been common in the preceding Ayyubid period, but during the Mamluk period enamel and gilding became the most important techniques of decorating glass. Mosque lamps had a bulbous body with a wide flaring neck at the top. They were produced in the thousands and suspended from the ceiling by chains. Mamluk architecture is distinguished in part by the construction of multi-functional buildings whose floor plans became increasingly creative and complex due to the limited available space in the city and the desire to make monuments visually dominant in their urban surroundings. While Cairo was the main center of patronage, Mamluk architecture also appears in Damascus, Jerusalem, Aleppo, and Medina. Patrons, including sultans and high-ranking emirs, typically set out to build mausoleums for themselves but attached to them charitable structures such as madrasas, zawiyas, sabils (public fountains), or mosques. The revenues and expenses of these charitable complexes were governed by inalienable waqf agreements that also served the secondary purpose of ensuring some form of income or property for the patrons' descendants. 
The cruciform or four-iwan floor plan was adopted for madrasas and became more common for new monumental complexes than the traditional hypostyle mosque, though the vaulted iwans of the early period were replaced with flat-roofed iwans in the later period. The decoration of monuments also became more elaborate over time, with stone-carving and colored marble paneling and mosaics (including ablaq) replacing stucco as the most dominant architectural decoration. Monumental decorated entrance portals became common compared to earlier periods, often carved with muqarnas. Influences from Syria, Ilkhanid Iran, and possibly even Venice were evident in these trends. Minarets, which were also elaborate, usually consisted of three tiers separated by balconies, with each tier having a different design than the others. Late Mamluk minarets, for example, most typically had an octagonal shaft for the first tier, a round shaft on the second, and a lantern structure with finial on the third level. Domes also transitioned from wooden or brick structures, sometimes of bulbous shape, to pointed stone domes with complex geometric or arabesque motifs carved into their outer surfaces. The peak of this stone dome architecture was achieved under Qaitbay in the late 15th century. After the Ottoman conquest of 1517, new Ottoman-style buildings were introduced; however, the Mamluk style continued to be repeated or combined with Ottoman elements in many subsequent monuments. Some building types which first appeared in the late Mamluk period, such as sabil-kuttabs (a combination of sabil and kuttab) and multi-storied caravanserais (wikalas or khans), actually grew in number during the Ottoman period. In modern times, from the late 19th century onwards, a neo-Mamluk style also appeared, partly as a nationalist response against Ottoman and European styles, in an effort to promote local 'Egyptian' styles. 
Mamluk sultans and emirs had personal blazons, which were important symbols of their status and a distinctive cultural feature of the Mamluk ruling class. With the possible exception of the earliest years of the regime, Mamluks chose their own blazons. This was done while they were emirs, and the blazon usually symbolized the office or position they held at this time. The blazon appeared on their banners and was retained even after they became sultans. Such blazons were an important feature of Mamluk visual culture and are found on all kinds of objects manufactured for Mamluk patrons. They were also featured in Mamluk architecture, though less consistently. This heraldic practice was unique in the medieval Muslim world. Unlike European heraldry, Mamluk blazons used a much more limited set of images and symbols for their charges: only about forty-five symbols were used. Early Mamluk blazons were simple, usually featuring a single symbol such as a cup, sword, or an animal. Some banners were merely distinguished by patterned fabrics and plain geometric divisions. The blazon of Baybars was a panther, lion, or leopard, while that of Qalawun, according to one author, was a fleur-de-lis. From the late 13th century to the mid-14th century, the crescent moon appears on Mamluk ceramics and some Mamluk coins, either alone or in conjunction with other symbols, though it was rarely used for personal blazons. Starting with al-Nasir Muhammad, epigraphic blazons (with Arabic calligraphy) became part of the heraldic repertoire. From the late 14th to the mid-15th centuries, blazons became more complex and their shields were usually divided into three parts, with the main symbol placed within each division, sometimes in pairs. After this, late Mamluk blazons became even more elaborate but were more homogeneous in style. They were filled with details, including up to five or six different symbols. 
By this point, they were possibly no longer used as individualized personal blazons but perhaps more as general marks of their social class. The Mamluk sultans also followed the Ayyubids in using yellow as the official colour associated with the sultan and used on sultanic banners. Baybars is said to have noted the yellow colour of his banners in opposition to the red banners of Bohemund VI. After Selim I conquered Damascus in 1516, a contemporary writer, Ibn Tulun, noted that the rich yellow silk banner of the Mamluks was replaced by the plain red banner of the Ottomans. Red banners are also known to have been used by the Mamluks, as the historian Ibn Taghribirdi (d. 1470) recorded that Sultan al-Mu'ayyad gifted a red banner to one of his vassals in Anatolia. Various symbols were also used to represent the Mamluk realm in European sources. The Book of Knowledge of All Kingdoms, written by an anonymous European author after 1360, attributes to Cairo a white flag with a blue crescent moon. In the Catalan Atlas of 1375, created by a Majorcan cartographer (likely Abraham Cresques), the Mamluk empire is symbolized by the drawing of a Muslim ruler shown with a green parrot on his arm, the latter possibly a symbol of nobility. Next to this, an icon symbolizing Babylon (alongside Cairo) is marked by a yellow flag with a crescent moon, with the crescent representing Muslim rule. Nearby, the city of Alexandria is marked with a flag containing the panther symbol of the former Sultan Baybars, whose reputation was known from the Crusades. 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-FOOTNOTEAsakura200041-50]
PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after its release and the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. 
The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration for the Famicom and his conviction that video game consoles would become the main home entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. 
Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. 
At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to build on the work done with Nintendo and Sega to develop a console based on the SNES. Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. 
To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992 attended by Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the proposal gained Ohga's enthusiasm, a majority of those present at the meeting remained opposed, including older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded Ohga of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. 
According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games alongside high-quality visuals and gameplay. Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under the Sony name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. 
Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation since Namco rivalled Sega in the arcade market. Signing these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors that it would be the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of its own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. 
Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as the studio played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems', thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. 
Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour its own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising its own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C also proved useful, as it safeguarded the future compatibility of software should further hardware revisions be made. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. One American retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. 
At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the stage; Race simply said "$299" and left to a round of applause. Attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. 
Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was test-marketed during 1999–2000 through Sony showrooms, selling 100 units. Sony launched the console (the PS one model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available at launch. The PlayStation also performed well in markets where it was never officially released. In Brazil, a third company had registered the trademark, preventing an official release, so the officially distributed Sega Saturn initially dominated the market; as the Saturn withdrew, PlayStation imports and widespread piracy increased. In China, the Sega Saturn was the most popular 32-bit console, but after it left the market the PlayStation's user base grew to 300,000 by January 2000, even though Sony China had no plans to release it. 
The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's button symbols replaced letters, stylised as "LIVE IN Y△UR W□RLD. PL✕Y IN ○URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with a red "E", the whole read as "you are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity.
Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated and played. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry.
Although its launch was successful, the technically superior 128-bit console was unable to overcome Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers a sampling rate of up to 44.1 kHz, and provides music sequencing.
It features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate up to 4,000 sprites and 180,000 polygons per second, or 360,000 polygons per second when flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version only retaining one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries.
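The headline graphics figures above lend themselves to a quick sanity check. The snippet below is purely illustrative arithmetic, not Sony documentation; the 60 Hz refresh rate assumed for the per-frame budget is an NTSC convention, not a figure from the text.

```python
# Sanity-checking the graphics figures quoted above (illustrative only).

# "16.7 million true colours" is 24-bit RGB: 8 bits per channel.
true_colours = 2 ** 24
print(true_colours)  # 16777216

# Per-frame polygon budget at an assumed 60 Hz (NTSC) refresh:
# 180,000 polygons/s, or 360,000/s when flat-shaded.
textured_per_frame = 180_000 // 60
flat_shaded_per_frame = 360_000 // 60
print(textured_per_frame, flat_shaded_per_frame)  # 3000 6000

# Framebuffer footprint of the largest mode at 2 bytes per pixel:
# 640 x 480 x 2 fits comfortably within the 1 MB of video RAM.
framebuffer_bytes = 640 * 480 * 2
print(framebuffer_bytes)  # 614400
```

The last figure also hints at why higher-colour modes were reserved for static imagery: a 24-bit framebuffer at 640×480 would not leave room in VRAM for double buffering.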
The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service and came with the documentation and software needed to program PlayStation games and applications using C compilers. On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square (△, ○, ✕, □). Rather than depicting traditionally used letters or numbers on its buttons, the PlayStation controller established a trademark look which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper to be used to access menus.
The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo had taken legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, it has analogue sticks with textured rubber grips, longer handles, and slightly different shoulder buttons, and includes rumble feedback as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. These peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite its having received promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio.
The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, thereby reaching a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PlayStation and PS One differs depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSes on a Sega console. Bleem! was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-Rs and optical disc drives with burning capability.
To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and therefore duplicated discs without it, since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it during reading. Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects for the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction.
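The boot check described above can be reduced to a toy model. The class, field and region names below are illustrative inventions; the real mechanism lived in the physical wobble of the pregap track, which a burner cannot reproduce, rather than in ordinary readable data.

```python
# Toy model of the PlayStation disc-authentication scheme described
# above. A pressed disc carries a code modulated onto the tracking
# wobble of its pregap; a CD burner treats that wobble as surface
# oscillation and compensates for it, so copies are written without
# the code. All names and region strings here are illustrative.

class Disc:
    def __init__(self, pressed, region=None):
        # Only a factory-pressed disc retains the wobble-encoded code.
        self.wobble_code = region if pressed else None

def console_boots(disc, console_region):
    """Boot only if the wobble code is present and matches the region."""
    return disc.wobble_code == console_region

original = Disc(pressed=True, region="NTSC-U")
burned_copy = Disc(pressed=False)            # wobble lost in duplication
import_disc = Disc(pressed=True, region="PAL")

print(console_boots(original, "NTSC-U"))     # True
print(console_boots(burned_copy, "NTSC-U"))  # False: no wobble code
print(console_boots(import_disc, "NTSC-U"))  # False: regional lockout
```

The same single check covers both cases the text mentions: a burned copy fails because the code is absent, and an import fails because the code does not match.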
The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser tilts and no longer points directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash!
heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release.
Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo. Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for all five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily because third-party developers almost unanimously favoured it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega.
Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025[update], with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with profits from their video game division contributing 23% of the company's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs.
Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success, and lauding it as a "game-changer in every sense possible". In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing as well as documentation and design files in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-reliant Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets.
Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still earning the same net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get them onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
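The pricing relationship can be made concrete with a rough back-of-envelope model. Every absolute dollar figure below is a hypothetical placeholder; only the "about 40% lower price at the same net revenue" relationship comes from the text above.

```python
# Back-of-envelope model of cartridge vs CD-ROM publishing economics.
# All dollar amounts are hypothetical placeholders; only the ~40%
# retail gap at equal net revenue reflects the relationship above.

cart_price = 70.0                    # hypothetical cartridge retail price
cd_price = cart_price * (1 - 0.40)   # "about 40% lower cost to the user"

cart_unit_cost = 35.0                # hypothetical per-unit cartridge cost
# For net revenue per unit to stay equal despite the lower price, the
# cheaper CD manufacturing must absorb the whole retail difference:
cd_unit_cost = cart_unit_cost - (cart_price - cd_price)

net_cart = cart_price - cart_unit_cost
net_cd = cd_price - cd_unit_cost
print(round(cd_price, 2), round(cd_unit_cost, 2))
print(abs(net_cart - net_cd) < 1e-9)  # True: same net revenue per unit
```

The model ignores retailer margins and licensing fees; its point is only that a large manufacturing-cost gap lets the retail price fall without touching the publisher's take.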
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo themselves or by second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. It received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_note-34] | [TOKENS: 6011] |
Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated that there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion.
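The clade structure described above can be written down as a small nested mapping. This sketch is for illustration only, simplified to the groups named in the text; the helper function is an invented example, not part of any taxonomic library.

```python
# The major animal clades named above, as a nested dict (simplified to
# the groups mentioned in the text; Xenacoelomorpha's position within
# Bilateria is uncertain, so it is listed alongside the two big clades).
ANIMALIA = {
    "Porifera": {},
    "Ctenophora": {},
    "Placozoa": {},
    "Cnidaria": {},
    "Bilateria": {
        "Xenacoelomorpha": {},   # basal, placement uncertain
        "Protostomia": {
            "Arthropoda": {}, "Mollusca": {},
            "Platyhelminthes": {}, "Annelida": {}, "Nematoda": {},
        },
        "Deuterostomia": {
            "Echinodermata": {}, "Hemichordata": {},
            "Chordata": {"Vertebrata": {}},
        },
    },
}

def contains(tree, name):
    """Depth-first search for a taxon anywhere in the hierarchy."""
    return name in tree or any(contains(sub, name) for sub in tree.values())

print(contains(ANIMALIA, "Vertebrata"))  # True
print(contains(ANIMALIA, "Protozoa"))    # False: no longer an animal
```

Representing a clade as a nesting of its subclades mirrors the definition in the text: every group sits inside the single ancestor it shares with its siblings.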
Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports. Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. 
In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. 
In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated mating with close relatives during sexual reproduction generally leads to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites. Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. 
Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles which mainly eat sponges. Most animals rely on biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land environments are Mollusca, Platyhelmintha, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda. 
Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres. Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. 
Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011.[a] Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges, based on molecular clock estimates for the origin of 24-ipc production in both groups: analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia confirms that they were animals. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. 
That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms. However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing the external phylogeny shown in the cladogram. Uncertainty of relationships is indicated with dashed lines. 
The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. (The outgroups in the cladogram are Holomycota (inc. fungi), Ichthyosporea, Pluriformea, and Filasterea.) The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. In addition to sponges, Placozoa has no symmetry and was often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, supporting a sponge-sister cladogram in which Porifera branches first, followed by Ctenophora, Placozoa, Cnidaria, and Bilateria (their ctenophore-sister tree simply interchanging the places of ctenophores and sponges). Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to construct a ctenophore-sister phylogeny, in which Ctenophora branches first, followed by Porifera, Placozoa, Cnidaria, and Bilateria. Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. 
They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research. The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogenetic tree for the Bilateria comprises the major clades Xenacoelomorpha, Ambulacraria, Chordata, Ecdysozoa, and Spiralia. Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles, that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. 
However, over evolutionary time, descendant species have evolved which have lost one or more of each of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. 
The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs. History of classification In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. 
In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world. Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. 
Working animals including cattle and horses have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccines were first developed in the 18th century. Some medicines such as the cancer drug trabectedin are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, ranging from invertebrates such as tarantulas, octopuses, and praying mantises, to reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots. However, the most kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans, and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros, and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. 
Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Flow_network] | [TOKENS: 2898] |
Flow network In graph theory, a flow network (also known as a transportation network) is a directed graph where each edge has a capacity and each edge receives a flow. The amount of flow on an edge cannot exceed the capacity of the edge. Often in operations research, a directed graph is called a network, the vertices are called nodes and the edges are called arcs. A flow must satisfy the restriction that the amount of flow into a node equals the amount of flow out of it, unless it is a source, which has only outgoing flow, or a sink, which has only incoming flow. A flow network can be used to model traffic in a computer network, circulation with demands, fluids in pipes, currents in an electrical circuit, or anything similar in which something travels through a network of nodes. As such, efficient algorithms for solving network flows can also be applied to solve problems that can be reduced to a flow network, including survey design, airline scheduling, image segmentation, and the matching problem. Definition A network is a directed graph G = (V, E) with a non-negative capacity function c for each edge, and without multiple arcs (i.e. edges with the same source and target nodes). Without loss of generality, we may assume that if (u, v) ∈ E, then (v, u) is also a member of E: if (v, u) ∉ E, we may add (v, u) to E and set c(v, u) = 0. If two nodes in G are distinguished – one as the source s and the other as the sink t – then (G, c, s, t) is called a flow network. Flows Flow functions model the net flow of units between pairs of nodes, and are useful when asking questions such as: what is the maximum number of units that can be transferred from the source node s to the sink node t? The amount of flow between two nodes is used to represent the net amount of units being transferred from one node to the other. The excess function xf : V → ℝ represents the net flow entering a given node u (i.e. 
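The definition above maps directly onto a small data structure. A minimal sketch in Python, assuming hypothetical names (`FlowNetwork`, `add_arc`) rather than any standard library API; it follows the convention that every arc (u, v) is stored together with a reverse arc (v, u) of capacity 0:

```python
from collections import defaultdict

class FlowNetwork:
    """Directed graph with capacities, distinguished source and sink."""
    def __init__(self, source, sink):
        self.source = source
        self.sink = sink
        self.capacity = defaultdict(int)   # c(u, v); missing arcs default to 0
        self.adjacent = defaultdict(set)   # neighbours of each node

    def add_arc(self, u, v, cap):
        """Add arc (u, v) with capacity cap; ensure (v, u) exists with c = 0."""
        self.capacity[(u, v)] += cap       # multiple arcs combine by summing
        self.adjacent[u].add(v)
        self.adjacent[v].add(u)            # reverse arc; its capacity stays 0

net = FlowNetwork('s', 't')
net.add_arc('s', 'a', 5)
net.add_arc('a', 't', 3)
```

Summing capacities in `add_arc` is what lets a network without multiple arcs stand in for one with them, as the definition requires.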
the sum of the flows entering u) and is defined by xf (u) = ∑w∈V f (w, u) − ∑w∈V f (u, w). A node u is said to be active if xf (u) > 0 (i.e. the node u consumes flow), deficient if xf (u) < 0 (i.e. the node u produces flow), or conserving if xf (u) = 0. In flow networks, the source s is deficient, and the sink t is active. Pseudo-flows, feasible flows, and pre-flows are all examples of flow functions. The value |f| of a feasible flow f for a network is the net flow into the sink t of the flow network, that is: |f| = xf (t). Note that the flow value in a network is also equal to the total outgoing flow of the source s, that is: |f| = −xf (s). Also, if we define A as a set of nodes in G such that s ∈ A and t ∉ A, the flow value is equal to the total net flow going out of A (i.e. |f| = f out(A) − f in(A)). The flow value in a network is the total amount of flow from s to t. Concepts useful to flow problems Flow decomposition is a process of breaking down a given flow into a collection of path flows and cycle flows. Every flow through a network can be decomposed into one or more paths and corresponding quantities, such that the flow on each edge equals the sum of the quantities of all paths that pass through it. Flow decomposition is a powerful tool in optimization problems to maximize or minimize specific flow parameters. We do not use multiple arcs within a network because we can combine those arcs into a single arc. To combine two arcs into a single arc, we add their capacities and their flow values, and assign those to the new arc. Along with the other constraints, the skew symmetry constraint must be remembered during this step to maintain the direction of the original pseudo-flow arc. 
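The excess function defined above transcribes directly into code. A sketch assuming the flow is stored as a dict mapping arcs (u, v) to f(u, v); the node and flow values are illustrative:

```python
def excess(flow, nodes, u):
    """x_f(u): net flow entering u, i.e. inflow minus outflow."""
    inflow = sum(flow.get((w, u), 0) for w in nodes)
    outflow = sum(flow.get((u, w), 0) for w in nodes)
    return inflow - outflow

nodes = {'s', 'a', 't'}
flow = {('s', 'a'): 2, ('a', 't'): 2}
excess(flow, nodes, 'a')   # 0: 'a' is conserving
excess(flow, nodes, 's')   # -2: the source is deficient
excess(flow, nodes, 't')   # 2: the sink is active, and |f| = x_f(t) = 2
```

Note that `excess(flow, nodes, 't') == -excess(flow, nodes, 's')`, matching the identity |f| = xf (t) = −xf (s) in the text.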
Adding flow to an arc is the same as adding an arc with the capacity of zero.[citation needed] The residual capacity of an arc e with respect to a pseudo-flow f is denoted cf, and it is the difference between the arc's capacity and its flow. That is, cf (e) = c(e) − f(e). From this we can construct a residual network, denoted Gf = (V, Ef), with a capacity function cf which models the amount of available capacity on the set of arcs in G = (V, E). More specifically, the capacity function cf of each arc (u, v) in the residual network represents the amount of flow which can be transferred from u to v given the current state of the flow within the network. This concept is used in the Ford–Fulkerson algorithm, which computes the maximum flow in a flow network. Note that there can be an unsaturated path (a path with available capacity) from u to v in the residual network, even though there is no such path from u to v in the original network.[citation needed] Since flows in opposite directions cancel out, decreasing the flow from v to u is the same as increasing the flow from u to v. An augmenting path is a path (u1, u2, ..., uk) in the residual network, where u1 = s, uk = t, and cf (ui, ui+1) > 0 for all 1 ≤ i < k. More simply, an augmenting path is an available flow path from the source to the sink. A network is at maximum flow if and only if there is no augmenting path in the residual network Gf. The bottleneck is the minimum residual capacity of all the edges in a given augmenting path. See the example explained in the "Example" section of this article. The flow network is at maximum flow if and only if it has a bottleneck with a value equal to zero. If any augmenting path exists, its bottleneck weight will be greater than 0. In other words, if there is a bottleneck value greater than 0, then there is an augmenting path from the source to the sink. 
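The residual capacity cf (e) = c(e) − f(e) is a one-liner, and a tiny example shows why residual paths can exist where original paths do not. A sketch with an illustrative dict representation (not a standard API):

```python
def residual_capacity(cap, flow):
    """Capacity function c_f of the residual network G_f: c_f(e) = c(e) - f(e)."""
    return {e: cap[e] - flow.get(e, 0) for e in cap}

cap = {('u', 'v'): 4, ('v', 'u'): 0}     # reverse arc stored with capacity 0
flow = {('u', 'v'): 3, ('v', 'u'): -3}   # skew symmetry: f(v, u) = -f(u, v)
cf = residual_capacity(cap, flow)
# cf[('u', 'v')] == 1, and cf[('v', 'u')] == 3: the zero-capacity reverse
# arc gains residual capacity, so pushing flow from v to u "undoes" flow
# already sent from u to v, exactly as the cancellation remark describes.
```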
However, we know that if there is any augmenting path, then the network is not at maximum flow, which in turn means that, if there is a bottleneck value greater than 0, then the network is not at maximum flow. The term "augmenting the flow" for an augmenting path means increasing the flow f of each arc in the augmenting path by the bottleneck value. Augmenting the flow corresponds to pushing additional flow along the augmenting path until there is no remaining available residual capacity in the bottleneck. Sometimes, when modeling a network with more than one source, a supersource is introduced to the graph. This consists of a vertex connected to each of the sources with edges of infinite capacity, so as to act as a global source. A similar construct for sinks is called a supersink. Example In Figure 1 you see a flow network with source labeled s, sink t, and four additional nodes. The flow and capacity are denoted f / c. Notice how the network upholds the capacity constraint and flow conservation constraint. The total amount of flow from s to t is 5, which can be easily seen from the fact that the total outgoing flow from s is 5, which is also the incoming flow to t. By the skew symmetry constraint, the flow from c to a is −2 because the flow from a to c is 2. In Figure 2 you see the residual network for the same given flow. Notice how there is positive residual capacity on some edges where the original capacity is zero in Figure 1, for example for the edge (d, c). This network is not at maximum flow. There is available capacity along the paths (s, a, c, t), (s, a, b, d, t) and (s, a, b, d, c, t), which are then the augmenting paths. 
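The loop "find an augmenting path, push its bottleneck, repeat until none remains" is the Ford–Fulkerson method; searching for the path with BFS gives the Edmonds–Karp variant. A compact sketch (function names and the small test network are illustrative, not Figure 1, whose full edge set is not reproduced here):

```python
from collections import defaultdict, deque

def max_flow(cap, adj, s, t):
    """Edmonds-Karp: BFS augmenting paths on the residual network."""
    flow = {e: 0 for e in cap}
    total = 0
    while True:
        # BFS for an augmenting path: residual capacity c_f = cap - flow > 0.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] - flow[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:              # no augmenting path left:
            return total, flow           # the flow is maximum
        # Walk back from t to recover the path, then take its bottleneck.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[e] - flow[e] for e in path)
        for u, v in path:                # augment: push b along the path,
            flow[(u, v)] += b            # cancelling on the reverse arcs
            flow[(v, u)] -= b
        total += b

# Illustrative network: arcs (u, v) with capacity c.
cap, adj = defaultdict(int), defaultdict(set)
def arc(u, v, c):
    cap[(u, v)] += c
    cap[(v, u)] += 0                     # reverse arc with capacity 0
    adj[u].add(v)
    adj[v].add(u)
for u, v, c in [('s', 'a', 4), ('s', 'b', 2), ('a', 'b', 1),
                ('a', 't', 3), ('b', 't', 3)]:
    arc(u, v, c)
total, _ = max_flow(cap, adj, 's', 't')  # total == 6
```

Updating the reverse arc with `-b` is the skew-symmetry bookkeeping: it is what gives later iterations residual capacity to undo earlier routing decisions.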
The bottleneck of the (s, a, c, t) path is equal to min(c(s, a) − f(s, a), c(a, c) − f(a, c), c(c, t) − f(c, t)) = min(cf(s, a), cf(a, c), cf(c, t)) = min(5 − 3, 3 − 2, 2 − 1) = min(2, 1, 1) = 1. Applications Picture a series of water pipes, fitting into a network. Each pipe is of a certain diameter, so it can only maintain a flow of a certain amount of water. Anywhere that pipes meet, the total amount of water coming into that junction must be equal to the amount going out, otherwise we would quickly run out of water, or we would have a buildup of water. We have a water inlet, which is the source, and an outlet, the sink. A flow would then be one possible way for water to get from source to sink so that the total amount of water coming out of the outlet is consistent. Intuitively, the total flow of a network is the rate at which water comes out of the outlet. Flows can pertain to people or material over transportation networks, or to electricity over electrical distribution systems. For any such physical network, the flow coming into any intermediate node needs to equal the flow going out of that node. This conservation constraint is equivalent to Kirchhoff's current law. Flow networks also find applications in ecology: flow networks arise naturally when considering the flow of nutrients and energy between different organisms in a food web. The mathematical problems associated with such networks are quite different from those that arise in networks of fluid or traffic flow. The field of ecosystem network analysis, developed by Robert Ulanowicz and others, involves using concepts from information theory and thermodynamics to study the evolution of these networks over time.
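The bottleneck arithmetic for the (s, a, c, t) path in the example can be checked directly; the capacities and flows below are the ones quoted in the computation above.

```python
# Check the example's bottleneck: min over the path arcs of c(u, v) - f(u, v).
capacity = {("s", "a"): 5, ("a", "c"): 3, ("c", "t"): 2}
flow     = {("s", "a"): 3, ("a", "c"): 2, ("c", "t"): 1}
path = ["s", "a", "c", "t"]
bottleneck = min(capacity[(u, v)] - flow[(u, v)] for u, v in zip(path, path[1:]))
print(bottleneck)  # min(2, 1, 1) = 1
```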
Classifying flow problems The simplest and most common problem using flow networks is to find what is called the maximum flow, which provides the largest possible total flow from the source to the sink in a given graph. There are many other problems which can be solved using max-flow algorithms, if they are appropriately modeled as flow networks, such as bipartite matching, the assignment problem and the transportation problem. Maximum flow problems can be solved in polynomial time with various algorithms (see table). The max-flow min-cut theorem states that finding a maximal network flow is equivalent to finding a cut of minimum capacity that separates the source and the sink, where a cut is a division of the vertices such that the source is in one part and the sink is in the other. In a multi-commodity flow problem, you have multiple sources and sinks, and various "commodities" which are to flow from a given source to a given sink. This could be, for example, various goods that are produced at various factories and are to be delivered to various given customers through the same transportation network. In a minimum cost flow problem, each edge (u, v) has a given cost k(u, v), and the cost of sending the flow f(u, v) across the edge is f(u, v) · k(u, v). The objective is to send a given amount of flow from the source to the sink, at the lowest possible price. In a circulation problem, you have a lower bound ℓ(u, v) on the edges, in addition to the upper bound c(u, v). Each edge also has a cost. Often, flow conservation holds for all nodes in a circulation problem, and there is a connection from the sink back to the source.
In this way, you can dictate the total flow with ℓ(t, s) and c(t, s). The flow circulates through the network, hence the name of the problem. In a network with gains, or generalized network, each edge has a gain, a non-zero real number such that, if the edge has gain g and an amount x flows into the edge at its tail, then an amount gx flows out at the head. In a source localization problem, an algorithm tries to identify the most likely source node of information diffusion through a partially observed network. This can be done in linear time for trees and cubic time for arbitrary networks, and has applications ranging from tracking mobile phone users to identifying the originating source of disease outbreaks.
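Tying this section together, the maximum flow problem can be solved by repeatedly finding and augmenting shortest augmenting paths, as in the Edmonds–Karp variant of Ford–Fulkerson. The sketch below is illustrative only; the function and variable names are my own, and the graph representation is the arc-keyed dictionary used in the earlier sketches.

```python
from collections import deque

# A compact Edmonds-Karp sketch (BFS-based Ford-Fulkerson).
def max_flow(capacity, s, t):
    cf = dict(capacity)                       # residual capacities, mutated below
    nodes = {u for arc in capacity for u in arc}
    total = 0
    while True:
        # Breadth-first search for a shortest augmenting path in the residual network.
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and cf.get((u, v), 0) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                   # no augmenting path left: flow is maximum
            return total
        path = [t]                            # walk the parent pointers back to the source
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        path.reverse()
        arcs = list(zip(path, path[1:]))
        bottleneck = min(cf[(u, v)] for u, v in arcs)
        for u, v in arcs:                     # augment: push the bottleneck amount
            cf[(u, v)] -= bottleneck
            cf[(v, u)] = cf.get((v, u), 0) + bottleneck
        total += bottleneck

# Two arc-disjoint s-t paths with capacities 3 and 2 give a maximum flow of 5.
caps = {("s", "a"): 3, ("a", "t"): 3, ("s", "b"): 2, ("b", "t"): 2}
print(max_flow(caps, "s", "t"))  # 5
```

Choosing a shortest augmenting path at each step (the BFS) is what bounds the number of iterations polynomially, in contrast to an arbitrary-path Ford–Fulkerson.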
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-48] | [TOKENS: 8773] |
OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in Delaware in 2015 but has since evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits alleging copyright infringement, brought by authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board.
Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged the pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly".
The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected it to be a decades-long project that would eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. Nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment.
According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August. 
On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on 14 February 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use it to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan has been criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general. 
The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity that it could get in exchange. PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed to the OpenAI Foundation. The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, partially needed to use Microsoft's cloud-computing service Azure. From September to December, 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, and they added MS-Copilot to many installations of Windows and released Microsoft Copilot mobile apps. 
Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding their right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, which must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise with a $157 billion valuation including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the next four years. In July, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently began a $50 million fund to support nonprofit and community organizations. 
In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion. This was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models. It projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This aggressive spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's commitment to maintaining its position as a leader in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors which valued the company at $500 billion.
The deal made OpenAI the world's most valuable privately owned company, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when its board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him didn't work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman’s firing, some employees raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft resigned from the board in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communication to determine if Altman's alleged lack of candor misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. 
Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired personal finance app Roi in October 2025. In October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky’s capabilities into ChatGPT. In December 2025, it was announced OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced OpenAI had acquired healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI’s ChatGPT Health product and was intended to strengthen the company’s medical data and healthcare artificial intelligence capabilities. OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. 
A Time investigation uncovered that OpenAI had begun sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, and Sama was redistributing the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, among which were infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non‑Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. In September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks. As of January 2026, this has not been realized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450.
OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora—OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI’s models could be integrated into Amazon’s digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned lieutenant colonel in the U.S. Army to join Detachment 201 as senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. 
GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, simply named "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced that it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for newer subscribers re-opened a month later on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed strawberry.
Additionally, ChatGPT Pro—a $200/month subscription service offering unlimited o1 access and enhanced voice features—was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users. The feature was only available to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated GPT-4.5 would be the last model without full chain-of-thought reasoning. In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, a model that the company said would be better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform utilizes GPT-5.2 as a backend to automate the drafting of scientific papers, including features for managing citations, complex equation formatting, and real-time collaborative editing.
In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this reversal. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming within four years to determine how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although team members later said they received nowhere near that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions that included personal details such as names, locations, and intimate topics appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. 
CEO Sam Altman acknowledged the issue in a podcast, noting users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data. Management In 2018, Musk resigned from his Board of Directors seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Co-leader Jan Leike also departed, citing concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems below that threshold should not be overly regulated. 
They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. These are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. In July 2023, the FTC launched an investigation into OpenAI over allegations that the company scraped public data and published false and defamatory information. They asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about ‘circular’ spending arrangements—for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent—and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company is interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift comes in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. 
Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, in March 2025, the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal laws. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on matters better handled at the federal level. Public Citizen opposed a federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity. However, leaked documents and emails refute this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. 
In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept as well as Raw Story and Alternate Media Inc. filed a lawsuit against OpenAI on copyright grounds. The lawsuit is said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker. It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and its partner and customer Microsoft continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. 
Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, in a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. 
Additionally, OpenAI claimed that neither the recipients of ChatGPT's output nor the sources it used could be made available. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco. In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, of which four alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. In December 2025, Stein-Erik Soelberg, then 56, allegedly murdered his mother, Suzanne Adams. In the months prior, Soelberg, who was paranoid and delusional, had often discussed his ideas with ChatGPT. Adams's estate then sued OpenAI, claiming that the company shared responsibility due to the risk of "chatbot psychosis", though this is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users disconnected from reality. 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Es_(Unix_shell)] | [TOKENS: 419] |
Contents rc (Unix shell) rc (for "run commands") is the command-line interpreter for the Version 10 Unix and Plan 9 from Bell Labs operating systems. It resembles the Bourne shell, but its syntax is somewhat simpler. It was created by Tom Duff, who is better known for an unusual C programming language construct ("Duff's device"). A port of the original rc to Unix is part of Plan 9 from User Space. A rewrite of rc for Unix-like operating systems by Byron Rakitzis is also available but includes some incompatible changes. Rc uses C-like control structures instead of the original Bourne shell's ALGOL-like structures, except that the original implementation uses an if not construct in place of else, while Byron Rakitzis's implementation supports else; rc also has a Bourne-like for loop to iterate over lists. In rc, all variables are lists of strings, which eliminates the need for constructs like "$@". Variables are not re-split when expanded. The language is described in Duff's paper. Influences es (for "extensible shell") is an open-source command-line interpreter developed by Rakitzis and Paul Haahr that uses a scripting language syntax influenced by the rc shell. It was originally based on code from Byron Rakitzis's clone of rc for Unix. Extensible shell is intended to provide a fully functional programming language as a Unix shell. It does so by introducing "program fragments" in braces as a new datatype, lexical scoping via let, and some more minor improvements. The bulk of es development occurred in the early 1990s, after the shell was introduced at the Winter 1993 USENIX conference in San Diego. Official releases appear to have ceased after 0.9-beta-1 in 1997, and es lacks features present in more popular shells, such as zsh and bash. A public domain fork of es is active as of 2019. Examples The Bourne shell script: is expressed in rc as: Rc also supports more dynamic piping: 
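The article's example scripts do not survive in this extract. A minimal hypothetical sketch of the contrast it describes (rc's C-like braces, the original implementation's if not construct in place of else, and a Bourne-like for loop over an unsplit list) might look like:

```rc
#!/bin/rc
# Hypothetical sketch, not the article's original example.
# Bourne equivalent: if [ "$1" = hello ]; then echo greeting; else echo other; fi
if (~ $1 hello) {
	echo greeting
}
if not {
	echo other
}
# Bourne-like for loop; rc variables are lists of strings and
# elements such as 'two words' are not re-split when expanded.
for (i in one 'two words' three) {
	echo $i
}
```

In Byron Rakitzis's Unix rewrite, the if not branch above would instead be written with else, one of the incompatible changes the article mentions.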
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/V380_Orionis] | [TOKENS: 764] |
Contents V380 Orionis V380 Ori is a young multiple star system located near the Orion Nebula in the constellation Orion, thought to be somewhere between 1 and 3 million years old. It lies at the centre of NGC 1999 and is the primary source lighting up this and other nebulae in the region. System V380 Orionis is a multiple star system containing at least three stars. A very faint cool star 9" away is also thought to be gravitationally bound, making it a hierarchical quadruple system. Two infrared sources within NGC 1999 have been listed as companions in some catalogues, but are not thought to be stars. When discovered, they were referred to as V380 Ori-B and V380 Ori-C, a notation which can lead to confusion. The main component is visible as the 10th magnitude variable star at the centre of NGC 1999, referred to as the primary. Speckle interferometry shows a cool companion separated by 0.15", approximately 62 AU, referred to as the tertiary. Spectroscopy shows a third star at a projected separation less than 0.33 AU, referred to as the secondary. The primary and tertiary are surrounded by a circumstellar disk, lying almost edge-on to observers on Earth. The fourth star has a projected separation of 4,000 AU and is receding from the other three. The system is believed to have formed with all four stars close together, but interacted to eject the smallest star into an unstable but gravitationally bound orbit around 20,000 years ago. The primary and secondary, the two closest stars, are calculated to orbit every 104 days. The radial velocity signatures in the spectrum have a large margin of uncertainty and the orbit is poorly defined. Comparing the mass ratio found from the orbit with masses assumed from other physical properties suggests that the orbit is seen close to pole-on. Properties The primary star is a hot white Herbig Ae/Be star that has been variously assigned spectral types between B9 and A1. 
It has a surface temperature of 10,500 ± 500 K, is around 2.87 times as massive as the sun, 3 times its radius, and 100 times as luminous. It has a strong magnetic field which varies every 4.1 days, and this is assumed to be the star's rotation period. Models show that the axis of rotation is inclined at 32 degrees. It is a variable star, considered an Orion variable, with occasional fading and other variability caused by obscuration from the surrounding dust. The apparent magnitude varies irregularly between 10.2 and 10.7. The properties of the star are calculated based on its maximum brightness, assumed to be the least obscured. The secondary is a T Tauri star, detected by distinctive spectral lines that could not be produced by the hotter primary star. It has a surface temperature of 5,500 ± 500 K, is around 1.6 times as massive as the sun, twice its radius, and three times as luminous. The nature of the tertiary component is uncertain. No spectral lines have been seen originating from this component. The fourth star, sometimes called V380 Orionis B, is a small, cool object of spectral type M5 or M6 that is either a red dwarf or brown dwarf. Nebulosity One of the component stars of V380 Orionis appears to have launched an astrophysical jet that helped to clear the keyhole-shaped hole in the surrounding nebula known as NGC 1999. The system is surrounded by a bow shock, with the total structure spanning over 17 light-years (5.3 parsecs). 
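The quoted conversion from the tertiary's 0.15" angular separation to roughly 62 AU follows from small-angle arithmetic: at a distance of d parsecs, an angle of θ arcseconds subtends θ·d astronomical units. A quick check, assuming a distance of about 410 pc for the NGC 1999 region (an illustrative figure, not stated in the article):

```python
def projected_separation_au(theta_arcsec: float, distance_pc: float) -> float:
    """Small-angle projected separation: 1 arcsec at 1 pc subtends 1 AU,
    so separation in AU is simply angle (arcsec) times distance (pc)."""
    return theta_arcsec * distance_pc

# Assumed distance to V380 Ori / NGC 1999 (~410 pc, illustrative)
distance_pc = 410.0
print(projected_separation_au(0.15, distance_pc))  # about 62 AU, matching the quoted value
```

The same formula applied to the 9" companion at this distance gives roughly 3,700 AU, consistent in scale with the article's 4,000 AU projected separation.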
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Skeptic_(U.S._magazine)] | [TOKENS: 1186] |
Contents Skeptic (American magazine) Skeptic, colloquially known as Skeptic magazine, is a quarterly science education and science advocacy magazine published internationally by The Skeptics Society, a nonprofit organization devoted to promoting scientific skepticism and resisting the spread of pseudoscience, superstition, and irrational beliefs. First published in 1992, the magazine had a circulation of over 40,000 subscribers in 2000. History, format and structure The magazine was co-founded in late 1991 by Michael Shermer and Pat Linse as they formed the Skeptics Society. The magazine was first published in early 1992. It is published through Millennium Press. As of July 2021, Shermer remained the publisher and editor-in-chief of the magazine. The magazine's co-publisher and art director was Pat Linse, until her death in July 2021. Other noteworthy members of its editorial board include, or have included, evolutionary biologist Richard Dawkins, Pulitzer Prize-winning scientist Jared Diamond, magician and escape artist turned educator James “The Amazing” Randi, actor, comedian, and Saturday Night Live alumna Julia Sweeney, professional mentalist Mark Edward, science writer Daniel Loxton, Lawrence M. Krauss and Christof Koch. Skeptic has an international circulation with over 40,000 subscriptions and is on newsstands in the U.S. and Canada as well as Europe, Australia, and other countries. The cover story of the magazine's first issue paid tribute to scientist and science-fiction writer Isaac Asimov. According to Shermer, Asimov died when the issue was going to print, so artist Linse produced a pencil portrait of the author. As Asimov wrote a number of stories featuring robots and coined the term "robotics", the cover of volume 12, #2 (2006), which is devoted to the topic of artificial intelligence, depicts a robot sitting on a park bench reading that first issue. 
Every issue of the magazine opens with a description of The Skeptics Society and its mission statement, which is to explore subjects such as creationism, pyramid power, Bigfoot, pseudohistorical claims (as in the examples of Holocaust denial and extreme Afrocentrism), the use or misuse of theory and statistics, conspiracy theories, urban myths, witch-hunts, mass hysterias, genius and intelligence, and cultural influences on science, as well as controversies involving protosciences at the leading edge of established science, and even fads like cryonics and low-carb diets. In addition to publishing the magazine, the Society also: In 2011, the magazine had three regular columnists: James Randi wrote "'Twas Brillig…", Harriet A. Hall wrote "The Skep Doc" and Karen Stollznow wrote "Bad Language". The magazine's page count was between approximately 100 and 110 pages until the 2010s. It was reduced to approximately 80 pages with Vol. 16 No. 3 (2011). As of 2018, the magazine had two regular columnists: Harriet A. Hall and Carol Tavris. In 2021, the magazine's 100th edition, Vol. 26 No. 2 included a retrospective of over 40 years of Skeptic artwork and covers by Linse and Loxton. Each issue features an editorial. In the past this was provided by James Randi, and was often a reaction to stories from mainstream news media such as the 2005 story by the ABC newsmagazine Primetime Live featuring a Brazilian faith healer, João Teixeira. Other times Randi wrote about topics he had investigated in the past, such as alleged dowsers, alleged psychics like Sylvia Browne, and UFOs. The magazine also features a large correspondence section called "Forum". This includes not only letters from lay readers but also in-depth comments and rebuttals from professionals, contributing to extended academic debate across issues raised in past editions. The bulk of the magazine treats a variety of topics. 
Cover stories have ranged from examination of alleged UFOs in religious icons and theories of the likelihood of artificial intelligence to tributes to influential skeptics including Isaac Asimov and Ernst Mayr. Some editions feature special sections devoted to a particular topic or theme that is examined through multiple articles by different authors, such as intelligent design. Bound into most issues is a 10-page young-readers' section called Junior Skeptic. Heralded by a cover printed on glossy paper (the rest of the magazine is printed on non-glossy stock), Junior Skeptic focuses on one topic, or provides practical instruction, written and illustrated in a style more appealing to children. Daniel Loxton is the editor of Junior Skeptic and writes and illustrates most issues. The first edition of Junior Skeptic appeared in volume 6, #2 of Skeptic (2000). Official podcasts In April 2006, an independent, skeptical talk program called Skepticality was relaunched as Skepticality: The Official Podcast of Skeptic Magazine. New episodes of the show are released on a biweekly basis. The show is produced by the original, continuing show hosts (Robynn McCarthy and Derek Colanduno) in collaboration with staff of Skeptic magazine. In 2009, a second official podcast was added. MonsterTalk critically examines the science behind cryptozoological and legendary creatures, such as Bigfoot, the Loch Ness Monster and werewolves. MonsterTalk is hosted by Blake Smith and Karen Stollznow, and previously Ben Radford. Blake Smith produces the show. 
======================================== |
[SOURCE: https://www.reddit.com/help/privacypolicy] | [TOKENS: 12798] |
Reddit Privacy Policy Effective: Jan 06, 2026. Last Revised: Jan 06, 2026. Introduction At Reddit, we believe that privacy is a right. We want to empower our users to be the masters of their identity. In this privacy policy, we want to help you understand how and why Reddit (“Reddit,” “we” or “us”) collects, uses, and shares information about you when you use our websites, mobile apps, widgets, APIs, emails, and other online products and services (collectively, the "Services") or when you otherwise interact with us or receive a communication from us. We want this privacy policy to empower you to make better choices about how you use Reddit. We’d love for you to read the whole policy, but if you don’t, here is the TL;DR: Reddit is a public platform. Reddit is a public platform. Our communities are largely public and anyone can see your profile, posts, and comments. We collect minimal information about you. We collect minimal information that can be used to identify you by default. If you want to just browse, you don’t need an account. If you want to create an account to participate in a subreddit, we don’t require you to give us your real name. We don’t track your precise location. You can even browse anonymously. You can share as much or as little about yourself as you want when using Reddit. We use data to make Reddit a better place. Any data we collect is used primarily to provide our Services, which are focused on allowing people to come together and form communities. We don’t sell your personal data to third parties, including data brokers. All of our users get privacy rights - not just those in select countries. Privacy rights are for everyone. At Reddit, anyone can request a copy of their data, account deletion, or information about our policies. If you have questions about how we use data, just ask. We’ve tried our best to make this as easy to understand as possible but sometimes privacy policies are still confusing. 
If you need help understanding this policy or anything about Reddit, just ask. Reddit Is a Public Platform Much of the information on the Services is public and accessible to everyone, even without an account. When you submit content (for example, a post, comment, or chat message) to a public part of the Services, any visitors to and users of our Services will be able to see that content, the username associated with that content, and the date and time you originally submitted that content. That content and information may also be available in search results on internet search engines like Google or in responses provided by an AI chatbot like OpenAI’s ChatGPT. You should take the public nature of the Services into consideration before posting. By using the Services, you are directing us to share this information publicly and freely. Your Reddit account has a profile page that is public. Your profile contains information about your activities on the Services, such as your username, prior posts and comments, karma, trophies and achievement badges, profile display name, about section, social links, avatar or profile image, moderator, contributor, and Reddit Premium status, communities you are active in, and how long you have been a member of the Services (your cake day). We offer social sharing features that let you share content or actions you take on our Services with other media. Your use of these features enables the sharing of certain information with your friends or the public, depending on the settings you establish with the third party that provides the social sharing feature. For more information about the purpose and scope of data collection and processing in connection with social sharing features, please visit the privacy policies of the third parties that provide these social sharing features (for example, Tumblr, Facebook, and X). Reddit allows moderators to access Reddit content and information using moderator bots and tools. 
Reddit also allows other third parties to access Reddit public content and information using Reddit’s developer services, including Reddit Embeds, our APIs, Developer Platform, and similar technologies. We limit third-party access to this content and aggregate information (for example, voting ratios) before sharing. We also require third parties to pay licensing fees for access to larger quantities of content and information. Reddit’s Developer Terms are our standard terms governing how these services are used by third parties. Please review our Public Content Policy for more information about how your public content is publicly available and accessible to anyone with access to the internet. What Information We Collect Information You Provide to Us We collect information you provide to us directly when you use the Services. This includes: Account Information You don’t need an account to use Reddit. If you create a Reddit account, your account will have a username, which you provide or which is automatically generated. Your username is public, and it doesn’t have to be related to your real name. You may need to provide a password, depending on whether you register using an email address, Single Sign-On (SSO) feature (such as Apple or Google), or phone number. If your account does not have a password, we may send you a SMS message for verification purposes. We also ask you to select an interest, or multiple interests, during account creation (for example, history, nature, sports) to help generate content and community recommendations or select more relevant advertising for you. When you use Reddit, you may also provide other information, such as a bio, gender, birthday, location, language, profile picture, or social link. You can remove or revise this information at any time. We also store your user account preferences and settings. We may ask for such information prior to you creating a username or account to help improve your experience exploring Reddit. 
Public Content You Submit Public content you submit includes your public posts, comments, and chat messages, usernames, and some of your profile information, as well as related metadata. Public content you submit may also include text, links, images, gifs, audio, videos, software, and tools. Our Public Content Policy applies to this content. Non-Public Content You Submit Non-public content you submit includes your saved drafts of posts or comments, your non-public messages with other users (such as private messages, private chats, and modmail), and your reports and other communications with moderators and with us. Non-public content you submit may also include text, links, images, gifs, audio, videos, software, and tools. It also includes information you submit when you fill out a form or survey, participate in Reddit-sponsored activities, promotions, or programs, request customer support, or otherwise communicate with us. Actions You Take We collect information about the actions you take when using the Services. This includes your interactions with the platform and content, like voting, saving, hiding, and reporting. It also includes your interactions with other users, such as following and blocking. We collect your interactions with communities, like your subscriptions or moderator status. Transactional Information If you purchase products or services from us or otherwise through the Services, you will have to provide payment information in order to complete your purchase. Reddit uses industry-standard payment processor services (such as Stripe) to handle payment information, and those services are subject to separate terms and conditions and privacy policies. We will collect information about the product or service you are purchasing (for example, purchase dates, amounts paid or received, and expiration and renewal dates). 
We may also collect public blockchain data and addresses, such as when you purchase or create a Collectible Avatar, receive or mint an NFT, or create a Reddit Vault. However, we never store Reddit Vault private key information.

Information We Collect As You Use Our Services

We automatically collect information as you use our Services. This includes:

Logs Data

We collect device and network connection information when you access and use the Services. This may include your IP address, user-agent string, browser type, operating system, referral URLs, device information (such as device IDs), device settings, and mobile carrier name. Except for the IP address used to create your account, Reddit will delete any IP addresses collected after 100 days.

Usage Information

We collect information about how you use our Services, like pages visited, how you interact with content, ads, and communities, upvotes/downvotes, links clicked, the requested URL, and search terms.

Information Collected From Cookies and Similar Technologies

We may receive information from cookies, which are pieces of data your browser stores and sends back to us when making requests, and from similar technologies. We use this information to deliver and maintain our Services, improve your experience, understand user activity, personalize content and advertisements, measure the effectiveness of advertising on and off Reddit, and improve the quality of our Services. For example, we store and retrieve information about your preferred language and other settings. See our Cookie Notice for more information about how Reddit uses cookies. For more information on how you can disable cookies, please see “Your Rights and Choices” below.

Location Information

We automatically collect information about your approximate location based on our Logs Data.
We may also receive location information from you when you choose to share such information on our Services, including via the Location Customization setting or by associating your content with a location.

Public Content Related to You

Information that we collect about you based on your interactions on our Services will appear on your public profile. This includes your karma scores; trophies and achievement badges; moderator, contributor, and Reddit Premium status; communities you are active in; and how long you have been a member of the Services (your cake day), as well as related metadata. Our Public Content Policy applies to this content.

Inferred Information

We infer attributes such as age range, gender, and/or preferred language(s) based on the information we have about you.

Information Collected from Other Sources

We may receive information about you from other sources, including from other users and third parties, and combine that information with the other information we have about you. For example, we may receive demographic or interest information about you from third parties, including advertisers (such as the fact that an advertiser is interested in showing you an ad), and combine it with data you have provided to Reddit, using a common account identifier such as a hash of an email address or a mobile-device ID. You can control how we use this information to personalize advertisements for you by visiting the User Settings menu in your account, as described in the section titled “Your Rights and Choices” below.

Linked Services

If you authorize or link a third-party service, such as an unofficial mobile app client, to access your Reddit account, Reddit receives information about your use of that service when it uses that authorization. Linking services may also cause the other service to send us information about your account with that service.
For example, if you sign in to Reddit with a third-party identity provider, that provider may share an email address with us. To learn how information is shared with linked services, see “How We Share Your Information” below.

Information Collected From Integrations

We also may receive information about you, including log and usage data and cookie information, from third-party sites that integrate our Services, including our embeds and advertising technology. For example, when you visit a site that uses Reddit Embeds, we may receive information about the web page you visited. Similarly, if an advertiser incorporates Reddit’s ad technology, Reddit may receive limited information about your activity on the advertiser’s site or app, such as whether you bought something from the advertiser. You can control how we use this information to personalize the Services and ads on and off Reddit for you as described in “Your Rights and Choices” below.

Information Collected if You Use Certain Reddit Offerings

Reddit Ads Users

If you use Reddit Ads (Reddit’s self-serve ads platform at ads.reddit.com) on behalf of your business or organization, we collect some additional information. To sign up for Reddit Ads, you must provide your name, email address, and information about your company. Reddit may make information about your company public to comply with applicable law. If you purchase advertising services, you will need to provide transactional information as described above in “Information We Collect - Transactional Information,” and we may also require additional documentation to verify your identity. When using Reddit Ads, we may record a session replay of your visit for customer service, troubleshooting, and usability research purposes.
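Matching records across parties via “a hash of an email address,” as mentioned under “Information Collected from Other Sources” above, generally works by both sides deriving the same opaque key from the same inbox. A minimal sketch of that general technique, assuming lowercase normalization and SHA-256 (the function name and normalization rules here are illustrative assumptions, not Reddit’s documented implementation):

```python
import hashlib

def email_match_key(email: str) -> str:
    # Hypothetical helper: normalize the address so trivial formatting
    # differences don't change the key, then hash it so the raw email
    # never needs to be exchanged between the two parties.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Both parties derive the same 64-character hex key from the same inbox,
# so their records can be joined on the key alone.
print(email_match_key("User@Example.com ") == email_match_key("user@example.com"))  # True
```

Because a cryptographic hash is one-way, the key lets datasets be joined without either party disclosing the address itself, though identical inputs always produce identical keys.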
Reddit Pro Users

If you sign up for Reddit Pro, on behalf of yourself, your business, or your organization, we may collect some additional information from you, including your profile type, name, industry category, website, and organization size, if applicable. If you request certain limited Reddit Pro features, we may request additional documentation to verify your identity.

Reddit Program Participants and Potential Participants

If you sign up to participate in a Reddit Program (for example, our Contributor Program), we collect, directly or indirectly through our providers, some additional information from you (such as your name, date of birth, address, email, tax, government ID, and payment information). This additional information is used and shared with our third-party payment and compliance providers to determine your eligibility to participate in the applicable Reddit Program, facilitate payments, and comply with law. Those providers and their services are subject to separate terms and conditions and privacy policies.

Information Collected by Third Parties

Embedded Content

Reddit displays some linked content in-line on the Services via embeds. For example, Reddit posts that link to YouTube or X may load the linked video or tweet within Reddit directly from those services to your device so you don’t have to leave Reddit to see it. In general, Reddit does not control how third-party services collect data when they serve you their content directly via these embeds. As a result, embedded content is not covered by this privacy policy but by the policies of the service from which the content is embedded.

Audience Measurement

We partner with service providers that perform audience measurement to learn demographic information about the population that uses Reddit. To provide this demographic information, these companies collect cookie information to recognize your device.
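Recognizing a device with a cookie, as described above and under “Information Collected From Cookies and Similar Technologies,” works because the server stores a small value in the browser, and the browser returns that same value on later requests. A sketch using Python’s standard library (the cookie name and value are made-up illustrations, not actual Reddit cookies):

```python
from http.cookies import SimpleCookie

# Server side: serialize a small identifier into a Set-Cookie response
# header that the browser will store. "device_ref" is a hypothetical
# cookie name used only for this illustration.
jar = SimpleCookie()
jar["device_ref"] = "abc123"
jar["device_ref"]["path"] = "/"
print(jar.output(header="Set-Cookie:"))

# Browser side: on each later request the stored pair comes back in a
# Cookie request header, so the server sees the same value again and
# can recognize the returning device.
echoed = SimpleCookie()
echoed.load("device_ref=abc123")
print(echoed["device_ref"].value)  # abc123
```

Deleting or rejecting cookies, as described in “Controlling the Use of Cookies” below, breaks exactly this echo-back step, which is why doing so can affect site functionality.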
How We Use Your Information

We use information about you to:

Provide, maintain, and improve the Services;
Provide you personalized, age-appropriate services, content, and features;
Provide you with relevant advertising, including personalized advertising on Reddit;
Optimize and measure the effectiveness of ads shown on our Services;
Advertise Reddit Services to you on other sites and apps, including measuring the performance of those ads;
Help protect the safety of Reddit and our users, which includes blocking suspected spammers, addressing abuse, age-restricting certain content, and enforcing our rights, the Reddit User Agreement, and our other terms and policies, as well as complying with applicable law;
Research and develop new services;
Send you technical notices, updates, security alerts, invoices, and other support and administrative messages;
Provide customer service;
Communicate with you about products, services, offers, promotions, events, and programs, and provide other news and information we think will be of interest to you (for information about how to opt out of these communications, see “Your Rights and Choices” below); and
Monitor and analyze trends, usage, and activities in connection with our Services.

How We Share Your Information

In addition to the ways that public content is shared as described above in “Reddit Is a Public Platform,” we may share information in the following ways:

With your consent. We may share information about you with your consent or at your direction, for example through social sharing features that allow you to share content with third-party sites like Tumblr and X.

With linked services. If you link your Reddit account with a third-party service, Reddit will share the information you authorize with that third-party service. You can control this sharing as described in “Your Rights and Choices” below.

With our service providers.
We may share information with vendors, consultants, and other service providers who need access to such information to carry out work for us. Their use of personal data will be subject to appropriate confidentiality and security measures. A few examples: (i) payment processors who process transactions on our behalf; (ii) cloud providers who host our data and our services; (iii) third-party ad serving and measurement providers who help us and advertisers serve relevant ads and measure the performance of ads (by disclosing information such as cookie IDs, your IP address, and/or a hashed version of your email; these third parties may combine that information with other information they already have about you to provide services to Reddit); (iv) age verification providers who help us confirm your age; and (v) compliance providers who help us determine your eligibility to participate in Reddit Programs.

To comply with the law. We may share information if we believe disclosure is in accordance with, or required by, any applicable law, regulation, legal process, or governmental request, including, but not limited to, meeting national security or law enforcement requirements. To the extent the law allows it, we will attempt to provide you with prior notice before disclosing your information in response to such a request. Our Transparency Report has additional information about how we respond to government requests.

In an emergency. We may share information if we believe it's necessary to prevent imminent and serious bodily harm to a person.

To enforce our rights and promote safety and security. We may share information if we believe your actions are inconsistent with our User Agreement, rules, or other Reddit terms and policies, or to protect the rights, property, and safety of the Services, ourselves, and others.

With our affiliates.
We may share information between and among Reddit and any of our parents, affiliates, subsidiaries, and other companies under common control and ownership.

Aggregated or de-identified information. We may share information about you that has been aggregated or anonymized such that it cannot reasonably be used to identify you. For example, we may show the total number of times a post has been upvoted without identifying who the visitors were, or we may tell an advertiser how many people saw their ad.

How We Protect Your Information

We take measures to help protect information about you from loss, theft, misuse, and unauthorized access, disclosure, alteration, and destruction. For example, we use HTTPS while information is being transmitted. We also enforce technical and administrative access controls to limit which of our employees have access to non-public personal information. You can help maintain the security of your account by configuring two-factor authentication.

We store the information we collect for as long as it is necessary for the purpose(s) for which we originally collected it. We may retain certain information for legitimate business purposes and/or if we believe doing so is in accordance with, or as required by, any applicable law. For example, if you violate our policies and your account is suspended or banned, we may store the identifiers used to create the account (such as phone number) to prevent you from creating new accounts.

Your Rights and Choices

You have choices about how to protect and limit the collection, use, and sharing of information about you when you use the Services. Depending on where you live, you may also have the right to correction/rectification of your personal information, to opt out of certain advertising practices, or to withdraw consent for processing where you have previously provided consent. Please see “Additional Information for EEA, Swiss, and UK Users” and “Additional Information for California & Other U.S. State Users.” Below we explain how to exercise each of these rights. Reddit does not discriminate against users for exercising their rights under data protection laws.

Accessing and Changing Your Information

You can access your information and change or correct certain information through the Services. See our Help Center page for more information. You can also request a copy of the personal information Reddit maintains about you by following the process described here.

Deleting Your Account

You may delete your account at any time from the settings page in your account. For more information, please visit our Help Center. When you delete your account, your profile is no longer visible to other users and is disassociated from content you posted under that account. Please note, however, that the posts, comments, and messages you submitted prior to deleting your account will still be visible to others unless you first delete the specific content. After you submit a request to delete your account, we initiate a deletion process to safely and completely remove your data from our servers or retain it only in anonymized form. After running our process, Reddit will not be able to provide access to deleted data. We may also retain certain information about you for legitimate business purposes and/or if we believe doing so is in accordance with, or as required by, any applicable law.

Controlling Linked Services’ Access to Your Account

You can review the services you have permitted to access your account and revoke access to individual services by visiting your account’s Apps page (for third-party app authorizations), the Connected Accounts section of your Account Settings (for Google Sign-In, Sign in with Apple, and connected X accounts), and the Social links section of your Profile Settings.

Controlling Personalized Advertising

You may opt out of us using information we collect from third parties, including advertising partners, to personalize the ads you see on and off Reddit.
To do so, visit the Privacy section of the User Settings in your account here, if using desktop, and in your Account Settings if using the Reddit mobile app.

Controlling the Use of Cookies

Most web browsers are set to accept cookies by default. If you prefer, you can usually choose to set your browser to remove or reject first- and third-party cookies. Please note that if you choose to remove or reject cookies, this could affect the availability and functionality of our Services. For more information on controlling how cookies and similar technologies are used on Reddit, see our Cookie Notice.

Controlling Advertising and Analytics

Some analytics providers we partner with may provide specific opt-out mechanisms, and we may provide, as needed and as available, additional tools and third-party services that allow you to better understand tracking technologies and how you can opt out. For example, you may manage the use and collection of certain information by Google Analytics via the Google Analytics Opt-out Browser Add-on. You can opt out of the Audience Measurement services provided by Nielsen and Quantcast. If you have an account with us and you visit our site with an opt-out preference signal (such as Global Privacy Control) enabled, we will treat that as an opt-out request. You may also generally opt out of receiving personalized advertisements from certain third-party advertisers and ad networks. To learn more about these advertisements or to opt out, please visit the sites of the Digital Advertising Alliance and the Network Advertising Initiative, or, if you are a user in the European Economic Area, Your Online Choices.

Do Not Track

Most modern web browsers give you the option to send a Do Not Track signal to the sites you visit, indicating that you do not wish to be tracked. However, there is no accepted standard for how a site should respond to this signal, and we do not take any action in response to this signal.
Instead, in addition to publicly available third-party tools, we offer you the choices described in this policy to manage the collection and use of information about you.

Controlling Promotional Communications

You may opt out of receiving some or all categories of promotional communications from us by following the instructions in those communications or by updating your email options in your account preferences here. If you opt out of promotional communications, we may still send you non-promotional communications, such as information about your account or your use of the Services.

Controlling Mobile Notifications

With your consent, we may send promotional and non-promotional push notifications or alerts to your mobile device. You can deactivate these messages at any time by changing the notification settings on your mobile device.

Controlling Location Information

You can control how we use location information for feed and recommendations customization via the Location Customization setting in your Account Settings.

If you have questions or are not able to submit a request to exercise your rights using the mechanisms above, you may also email us at redditdatarequests@reddit.com from the email address that you have verified with your Reddit account, or submit your requests here. Before we process a request from you about your personal information, we need to verify the request via your access to your Reddit account or to a verified email address associated with your Reddit account. If we deny your request, you may appeal our decision by contacting us at redditdatarequests@reddit.com.

International Data Transfers

Reddit, Inc., is based in the United States, and we process and store information on servers located in the United States. We may store information on servers and equipment in other countries depending on a variety of factors, including the locations of our users and service providers.
By accessing or using the Services or otherwise providing information to us, you consent to the processing, transfer, and storage of information in and to the United States and other countries, where you may not have the same rights as you do under local law. When we transfer the personal data of users in the EEA, UK, and/or Switzerland, we rely on the Standard Contractual Clauses approved by the European Commission for such transfers or other transfer mechanisms deemed ‘adequate’ under applicable laws.

EU-U.S., UK Extension, and Swiss-U.S. Data Privacy Frameworks Disclosure

Reddit, Inc., complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. Reddit, Inc., has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union and the United Kingdom in reliance on the EU-U.S. DPF and the UK Extension to the EU-U.S. DPF. Reddit, Inc., has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy policy and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) Program, and to view our certification, please visit https://www.dataprivacyframework.gov/.

Reddit, Inc.’s compliance with the DPF is subject to the investigatory and enforcement powers of the U.S. Federal Trade Commission.
In accordance with the DPF, Reddit, Inc., is also liable for onward transfers to third parties that process personal information in a way that does not follow the DPF, unless Reddit, Inc., was not responsible for the event giving rise to any alleged damage. In some situations, the DPF gives you the right to invoke binding arbitration to resolve complaints not resolved by other means, as described in Annex I to the DPF. In compliance with the EU-U.S. DPF, the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. DPF, Reddit, Inc., commits to refer unresolved complaints concerning our handling of personal data received in reliance on those frameworks to JAMS, an alternative dispute resolution provider based in the United States. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://www.jamsadr.com/dpf-dispute-resolution for more information or to file a complaint. The services of JAMS are provided at no cost to you.

Additional Information for EEA, Swiss, and UK Users

If you live in the European Economic Area (“EEA”) or Switzerland, Reddit Netherlands B.V. is the controller of information processed in connection with the Reddit platform and this policy. If you live in the United Kingdom (“UK”), Reddit, Inc., is the data controller and Reddit UK Limited is Reddit’s UK GDPR Article 27 Representative.

Users in the EEA, Switzerland, and UK have the right to request access to, rectification of, or erasure of their personal data; to data portability in certain circumstances; to request restriction of processing; to object to processing; and to withdraw consent for processing where they have previously provided consent. These rights can be exercised as described in the “Your Rights and Choices” section above.
EEA and Swiss users also have the right to lodge a complaint with their local supervisory authority.

As required by applicable law, we collect and process information about individuals in the EEA, Switzerland, and UK only where we have a legal basis for doing so. Our legal bases depend on the Services you use and how you use them. We process your information where:

It is necessary to fulfill our contract with you, including to provide, operate, and improve the Services, provide customer support, personalize features, and protect the safety and security of the Services;
It satisfies a legitimate interest (which is not overridden by your data protection interests), such as preventing fraud, ensuring network and information security, enforcing our rules and policies, protecting our legal rights and interests, research and development, personalizing the Services, and marketing and promoting the Services;
You have consented for us to do so for a specific purpose; or
We need to process your information to comply with our legal obligations.

Additional Information for California & Other US State Users

Some US state laws, including the California Consumer Privacy Act (“CCPA”), as amended, require us to provide residents with some additional information and rights, which we address in this section. In the last 12 months, we collected the following categories of personal information from residents, depending on the Services used:

Identifiers and account information, like your Reddit username, email address, phone number, IP address, and cookie information.
Commercial information, including information about transactions you undertake with us.
Internet or other electronic network activity information, such as information about your activity on our Services and limited information about your activity on the services of advertisers who use our advertising technology.
Geolocation information based on your IP address.
Your messages with other users (for example, private messages, private chats, and modmail).
Audiovisual information in pictures, audio, or video content submitted to Reddit.
Professional or employment-related information or demographic information, but only if you explicitly provide it to us, such as by filling out a survey.
Inferences we make based on other collected data, for purposes such as recommending content, advertising, and analytics.

You can find more information about (a) what we collect and the sources of that information in the “What Information We Collect” section of this notice, (b) the business and commercial purposes for collecting that information in the “How We Use Your Information” section, and (c) the categories of third parties with whom we share that information and the purposes for sharing it in the “How We Share Your Information” section.

Depending on your jurisdiction, and subject to exceptions and limitations provided by local law, in addition to the rights listed in “Your Rights and Choices” above, you may have the right to opt out of any sales or sharing of your personal information, to request access to and information about our data practices, and to request deletion or correction of your personal information, as well as the right not to be discriminated against for exercising your privacy rights. For users in states with applicable privacy laws, you may exercise your rights via your Privacy Settings.

Reddit does not “sell” personal information as that term is defined under applicable state privacy laws. Reddit does not have knowledge that it “sells” or “shares” the personal information of users under 16 years of age. We do not use or disclose sensitive personal information except to provide you the Services or as otherwise permitted by applicable regulations.
We do not engage in profiling of consumers in furtherance of automated decisions that produce legal or similarly significant effects, as those terms are defined in applicable regulations. You may exercise your rights to access, delete, or correct your personal information as described in the “Your Rights and Choices” section of this notice. When you make a request, we will verify your identity by asking you to sign into your account or, if necessary, by requesting additional information from you. You may also make a rights request using an authorized agent. If you submit a rights request from an authorized agent who does not provide a valid power of attorney, we may ask the authorized agent to provide proof that you gave the agent signed permission to submit the request to exercise rights on your behalf. In the absence of a valid power of attorney, we may also require you to verify your own identity directly with us or confirm to us that you otherwise provided the authorized agent permission to submit the request. If you have any questions or concerns, you may reach us using the methods described under “Your Rights and Choices” or by emailing us at redditdatarequests@reddit.com.

Children

Children under the age of 13 are not allowed to create an account or otherwise use the Services. Additionally, if you are located outside the United States, you must be over the age required by the laws of your country to create an account or otherwise use the Services.

Changes to This Policy

We may change this Privacy Policy from time to time. If we do, we will let you know by revising the date at the top of the policy. If the changes, in our sole discretion, are material, we may also notify you by sending an email to the address associated with your account (if you have chosen to provide an email address) or by otherwise providing notice through our Services.
We encourage you to review the Privacy Policy regularly to stay informed about our information practices and the ways you can help protect your privacy. By continuing to use our Services after Privacy Policy changes go into effect, you agree to be bound by the revised policy.

Contact Us

If you have other questions about this Privacy Policy, please contact us at:

Reddit, Inc.
548 Market St. #16093
San Francisco, California 94104

If you live in the United States, the data controller responsible for your information is Reddit, Inc. For email inquiries, you may email us at redditdatarequests@reddit.com or reach our Data Protection Office at dpo@reddit.com.

Users in the European Economic Area or Switzerland may contact us at:

Reddit Netherlands B.V.
Euro Business Center
Keizersgracht 62, 1015CS Amsterdam
Netherlands
dpo@reddit.com

United Kingdom users may contact us at:

Reddit UK Limited
5 New Street Square
London, United Kingdom, EC4A 3TW
ukrepresentative@reddit.com

Reddit Privacy Policy

Introduction

At Reddit, we believe that privacy is a right. We want to empower our users to be the masters of their identity. In this privacy policy, we want to help you understand how and why Reddit (“Reddit,” “we,” or “us”) collects, uses, and shares information about you when you use our websites, mobile apps, widgets, APIs, emails, and other online products and services (collectively, the “Services”) or when you otherwise interact with us or receive a communication from us. We want this privacy policy to empower you to make better choices about how you use Reddit.

We’d love for you to read the whole policy, but if you don’t, here is the TL;DR:

Reddit is a public platform. Our communities are largely public and anyone can see your profile, posts, and comments.
We collect minimal information that can be used to identify you by default. If you want to just browse, you don’t need an account.
If you want to create an account to participate in a subreddit, we don’t require you to give us your real name.
We don’t track your precise location. You can even browse anonymously.
You can share as much or as little about yourself as you want when using Reddit.
Any data we collect is used primarily to provide our Services, which are focused on allowing people to come together and form communities.
We don’t sell your personal data to third parties, including data brokers.
Privacy rights are for everyone. At Reddit, anyone can request a copy of their data, account deletion, or information about our policies.

We’ve tried our best to make this as easy to understand as possible, but sometimes privacy policies are still confusing. If you need help understanding this policy or anything about Reddit, just ask.

Reddit Is a Public Platform

Much of the information on the Services is public and accessible to everyone, even without an account. When you submit content (for example, a post, comment, or chat message) to a public part of the Services, any visitors to and users of our Services will be able to see that content, the username associated with that content, and the date and time you originally submitted that content. That content and information may also be available in search results on internet search engines like Google or in responses provided by an AI chatbot like OpenAI’s ChatGPT. You should take the public nature of the Services into consideration before posting. By using the Services, you are directing us to share this information publicly and freely.

Your Reddit account has a profile page that is public.
Your profile contains information about your activities on the Services, such as your username, prior posts and comments, karma, trophies and achievement badges, profile display name, about section, social links, avatar or profile image, moderator, contributor, and Reddit Premium status, communities you are active in, and how long you have been a member of the Services (your cake day). We offer social sharing features that let you share content or actions you take on our Services with other media. Your use of these features enables the sharing of certain information with your friends or the public, depending on the settings you establish with the third party that provides the social sharing feature. For more information about the purpose and scope of data collection and processing in connection with social sharing features, please visit the privacy policies of the third parties that provide these social sharing features (for example, Tumblr, Facebook, and X). Reddit allows moderators to access Reddit content and information using moderator bots and tools. Reddit also allows other third parties to access Reddit public content and information using Reddit’s developer services, including Reddit Embeds, our APIs, Developer Platform, and similar technologies. We limit third-party access to this content and aggregate information (for example, voting ratios) before sharing. We also require third parties to pay licensing fees for access to larger quantities of content and information. Reddit’s Developer Terms are our standard terms governing how these services are used by third parties. Please review our Public Content Policy for more information about how your public content is publicly available and accessible to anyone with access to the internet. What Information We Collect Account Information You don’t need an account to use Reddit. If you create a Reddit account, your account will have a username, which you provide or which is automatically generated. 
Your username is public, and it doesn’t have to be related to your real name. You may need to provide a password, depending on whether you register using an email address, a Single Sign-On (SSO) feature (such as Apple or Google), or a phone number. If your account does not have a password, we may send you an SMS message for verification purposes. We also ask you to select an interest, or multiple interests, during account creation (for example, history, nature, sports) to help generate content and community recommendations or select more relevant advertising for you. When you use Reddit, you may also provide other information, such as a bio, gender, birthday, location, language, profile picture, or social link. You can remove or revise this information at any time. We also store your user account preferences and settings. We may ask for such information before you create a username or account to help improve your experience exploring Reddit. Public Content You Submit Public content you submit includes your public posts, comments, and chat messages, usernames, and some of your profile information, as well as related metadata. Public content you submit may also include text, links, images, gifs, audio, videos, software, and tools. Our Public Content Policy applies to this content. Non-Public Content You Submit Non-public content you submit includes your saved drafts of posts or comments, your non-public messages with other users (such as private messages, private chats, and modmail), and your reports and other communications with moderators and with us. Non-public content you submit may also include text, links, images, gifs, audio, videos, software, and tools. It also includes information you submit when you fill out a form or survey, participate in Reddit-sponsored activities, promotions, or programs, request customer support, or otherwise communicate with us. Actions You Take We collect information about the actions you take when using the Services. 
This includes your interactions with the platform and content, like voting, saving, hiding, and reporting. It also includes your interactions with other users, such as following and blocking. We collect your interactions with communities, like your subscriptions or moderator status. Transactional Information If you purchase products or services from us or otherwise through the Services, you will have to provide payment information in order to complete your purchase. Reddit uses industry-standard payment processor services (such as Stripe) to handle payment information, and those services are subject to separate terms and conditions and privacy policies. We will collect information about the product or service you are purchasing (for example, purchase dates, amounts paid or received, and expiration and renewal dates). We may also collect public blockchain data and addresses, such as when you purchase or create a Collectible Avatar, receive or mint an NFT, or create a Reddit Vault. However, we never store Reddit Vault private key information. Logs Data We collect device and network connection information when you access and use the Services. This may include your IP address, user-agent string, browser type, operating system, referral URLs, device information (such as device IDs), device settings, and mobile carrier name. Except for the IP address used to create your account, Reddit will delete any IP addresses collected after 100 days. Usage Information We collect information about how you use our Services, like pages visited, how you interact with content, ads, and communities, upvotes/downvotes, links clicked, the requested URL, and search terms. Information Collected From Cookies and Similar Technologies We may receive information from cookies, which are pieces of data your browser stores and sends back to us when making requests, and similar technologies. 
We use this information to deliver and maintain our Services, improve your experience, understand user activity, personalize content and advertisements, measure the effectiveness of advertising on and off Reddit, and improve the quality of our Services. For example, we store and retrieve information about your preferred language and other settings. See our Cookie Notice for more information about how Reddit uses cookies. For more information on how you can disable cookies, please see “Your Rights and Choices” below. Location Information We automatically collect information about your approximate location based on our Logs Data. We may also receive location information from you when you choose to share such information on our Services, including via the Location Customization setting or by associating your content with a location. Public Content Related to You Information that we collect about you based on your interactions on our Services will appear on your public profile. This includes your karma scores, trophies and achievement badges, moderator, contributor, and Reddit Premium status, communities you are active in, and how long you have been a member of the Services (your cake day), as well as related metadata. Our Public Content Policy applies to this content. Inferred Information We infer attributes such as age range, gender, and/or preferred language(s) based on the information we have about you. Linked Services If you authorize or link a third-party service, such as an unofficial mobile app client, to access your Reddit account, Reddit receives information about your use of that service when it uses that authorization. Linking services may also cause the other service to send us information about your account with that service. For example, if you sign in to Reddit with a third-party identity provider, that provider may share an email address with us. To learn how information is shared with linked services, see “How We Share Your Information” below. 
Information Collected From Integrations We also may receive information about you, including log and usage data and cookie information, from third-party sites that integrate our Services, including our embeds and advertising technology. For example, when you visit a site that uses Reddit Embeds, we may receive information about the web page you visited. Similarly, if an advertiser incorporates Reddit’s ad technology, Reddit may receive limited information about your activity on the advertiser’s site or app, such as whether you bought something from the advertiser. You can control how we use this information to personalize the Services and ads on and off Reddit for you as described in “Your Rights and Choices” below. Reddit Ads Users If you use Reddit Ads (Reddit’s self-serve ads platform at ads.reddit.com) on behalf of your business or organization, we collect some additional information. To sign up for Reddit Ads, you must provide your name, email address, and information about your company. Reddit may make information about your company public to comply with applicable law. If you purchase advertising services, you will need to provide transactional information as described above in “Information We Collect - Transactional Information,” and we may also require additional documentation to verify your identity. When using Reddit Ads, we may record a session replay of your visit for customer service, troubleshooting, and usability research purposes. Reddit Pro Users If you sign up for Reddit Pro, on behalf of yourself, your business or your organization, we may collect some additional information from you, including your profile type, name, industry category, website, and organization size, if applicable. If you request certain limited Reddit Pro features, we may request additional documentation to verify your identity. 
Reddit Program Participants and Potential Participants If you sign up to participate in a Reddit Program (for example, our Contributor Program), we collect, directly or indirectly through our providers, some additional information from you (such as your name, date of birth, address, email, tax, government ID, and payment information). This additional information is used and shared with our third-party payment and compliance providers to determine your eligibility to participate in the applicable Reddit Program, facilitate payments, and comply with law. Those providers and their services are subject to separate terms and conditions and privacy policies. Embedded Content Reddit displays some linked content in-line on the Services via embeds. For example, Reddit posts that link to YouTube or X may load the linked video or tweet within Reddit directly from those services to your device so you don’t have to leave Reddit to see it. In general, Reddit does not control how third-party services collect data when they serve you their content directly via these embeds. As a result, embedded content is not covered by this privacy policy but by the policies of the service from which the content is embedded. Audience Measurement We partner with service providers that perform audience measurement to learn demographic information about the population that uses Reddit. To provide this demographic information, these companies collect cookie information to recognize your device. How We Use Your Information We use information about you to: How We Share Your Information In addition to the ways that public content is shared as described above in “Reddit Is a Public Platform,” we may share information in the following ways: How We Protect Your Information We take measures to help protect information about you from loss, theft, misuse and unauthorized access, disclosure, alteration, and destruction. For example, we use HTTPS while information is being transmitted. 
We also enforce technical and administrative access controls to limit which of our employees have access to non-public personal information. You can help maintain the security of your account by configuring two-factor authentication. We store the information we collect for as long as it is necessary for the purpose(s) for which we originally collected it. We may retain certain information for legitimate business purposes and/or if we believe doing so is in accordance with, or as required by, any applicable law. For example, if you violate our policies and your account is suspended or banned, we may store the identifiers used to create the account (such as phone number) to prevent you from creating new accounts. Your Rights and Choices You have choices about how to protect and limit the collection, use, and sharing of information about you when you use the Services. Depending on where you live, you may also have the right to correction/rectification of your personal information, to opt out of certain advertising practices, or to withdraw consent for processing where you have previously provided consent. Please see “Additional Information for EEA, Swiss, and UK Users” and “Additional Information for California & Other U.S. State Users.” Below we explain how to exercise each of these rights. Reddit does not discriminate against users for exercising their rights under data protection laws. Accessing and Changing Your Information You can access your information and change or correct certain information through the Services. See our Help Center page for more information. You can also request a copy of the personal information Reddit maintains about you by following the process described here. Deleting Your Account You may delete your account at any time from the settings page in your account. For more information, please visit our Help Center. 
When you delete your account, your profile is no longer visible to other users and disassociated from content you posted under that account. Please note, however, that the posts, comments, and messages you submitted prior to deleting your account will still be visible to others unless you first delete the specific content. After you submit a request to delete your account, we initiate a deletion process to safely and completely remove your data from our servers or retain it only in anonymized form. After running our process, Reddit will not be able to provide access to deleted data. We may also retain certain information about you for legitimate business purposes and/or if we believe doing so is in accordance with, or as required by, any applicable law. Controlling Linked Services’ Access to Your Account You can review the services you have permitted to access your account and revoke access to individual services by visiting your account’s Apps page (for third-party app authorizations), the Connected Accounts section of your Account Settings (for Google Sign-In, Sign in with Apple, and connected X accounts), and the Social links section of your Profile Settings. Controlling Personalized Advertising You may opt out of us using information we collect from third parties, including advertising partners, to personalize the ads you see on and off Reddit. To do so, visit the Privacy section of the User Settings in your account here, if using desktop, and in your Account Settings if using the Reddit mobile app. Controlling the Use of Cookies Most web browsers are set to accept cookies by default. If you prefer, you can usually choose to set your browser to remove or reject first- and third-party cookies. Please note that if you choose to remove or reject cookies, this could affect the availability and functionality of our Services. For more information on controlling how cookies and similar technologies are used on Reddit, see our Cookie Notice. 
Controlling Advertising and Analytics Some analytics providers we partner with may provide specific opt-out mechanisms and we may provide, as needed and as available, additional tools and third-party services that allow you to better understand tracking technologies and how you can opt out. For example, you may manage the use and collection of certain information by Google Analytics via the Google Analytics Opt-out Browser Add-on. You can opt out of the Audience Measurement services provided by Nielsen and Quantcast. If you have an account with us and you visit our site with an opt-out preference signal (such as Global Privacy Control) enabled, we will treat that as an opt-out request. You may also generally opt out of receiving personalized advertisements from certain third-party advertisers and ad networks. To learn more about these advertisements or to opt out, please visit the sites of the Digital Advertising Alliance and the Network Advertising Initiative, or if you are a user in the European Economic Area, Your Online Choices. Do Not Track Most modern web browsers give you the option to send a Do Not Track signal to the sites you visit, indicating that you do not wish to be tracked. However, there is no accepted standard for how a site should respond to this signal, and we do not take any action in response to this signal. Instead, in addition to publicly available third-party tools, we offer you the choices described in this policy to manage the collection and use of information about you. Controlling Promotional Communications You may opt out of receiving some or all categories of promotional communications from us by following the instructions in those communications or by updating your email options in your account preferences here. If you opt out of promotional communications, we may still send you non-promotional communications, such as information about your account or your use of the Services. 
Controlling Mobile Notifications With your consent, we may send promotional and non-promotional push notifications or alerts to your mobile device. You can deactivate these messages at any time by changing the notification settings on your mobile device. Controlling Location Information You can control how we use location information for feed and recommendations customization via the Location Customization setting in your Account Settings. If you have questions or are not able to submit a request to exercise your rights using the mechanisms above, you may also email us at redditdatarequests@reddit.com from the email address that you have verified with your Reddit account, or submit your requests here. Before we process a request from you about your personal information, we need to verify the request via your access to your Reddit account or to a verified email address associated with your Reddit account. If we deny your request, you may appeal our decision by contacting us at redditdatarequests@reddit.com. International Data Transfers Reddit, Inc., is based in the United States and we process and store information on servers located in the United States. We may store information on servers and equipment in other countries depending on a variety of factors, including the locations of our users and service providers. By accessing or using the Services or otherwise providing information to us, you consent to the processing, transfer, and storage of information in and to the United States and other countries, where you may not have the same rights as you do under local law. When we transfer the personal data of users in the EEA, UK and/or Switzerland, we rely on the Standard Contractual Clauses approved by the European Commission for such transfers or other transfer mechanisms deemed ‘adequate’ under applicable laws. EU-U.S., UK Extension, and Swiss-U.S. Data Privacy Frameworks Disclosure Reddit, Inc., complies with the EU-U.S. Data Privacy Framework (EU-U.S. 
DPF) and the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce. Reddit, Inc., has certified to the U.S. Department of Commerce that it adheres to the EU-U.S. Data Privacy Framework Principles (EU-U.S. DPF Principles) with regard to the processing of personal data received from the European Union and the United Kingdom in reliance on the EU-U.S. DPF and the UK Extension to the EU-U.S. DPF. Reddit, Inc., has certified to the U.S. Department of Commerce that it adheres to the Swiss-U.S. Data Privacy Framework Principles (Swiss-U.S. DPF Principles) with regard to the processing of personal data received from Switzerland in reliance on the Swiss-U.S. DPF. If there is any conflict between the terms in this privacy policy and the EU-U.S. DPF Principles and/or the Swiss-U.S. DPF Principles, the Principles shall govern. To learn more about the Data Privacy Framework (DPF) Program, and to view our certification, please visit https://www.dataprivacyframework.gov/. Reddit, Inc.’s compliance with the DPF is subject to the investigatory and enforcement powers of the U.S. Federal Trade Commission. In accordance with the DPF, Reddit, Inc., is also liable for onward transfers to third parties that process personal information in a way that does not follow the DPF unless Reddit, Inc., was not responsible for the event giving rise to any alleged damage. In some situations, the DPF Framework gives you the right to invoke binding arbitration. You can do this to resolve complaints not resolved by other means, as described in Annex I to the DPF framework. In compliance with the EU-U.S. DPF and the UK Extension to the EU-U.S. DPF and the Swiss-U.S. DPF, Reddit, Inc., commits to refer unresolved complaints concerning our handling of personal data received in reliance on the EU-U.S. DPF and the UK Extension to the EU-U.S. DPF and the Swiss-U.S. 
DPF to JAMS, an alternative dispute resolution provider based in the United States. If you do not receive timely acknowledgment of your DPF Principles-related complaint from us, or if we have not addressed your DPF Principles-related complaint to your satisfaction, please visit https://www.jamsadr.com/dpf-dispute-resolution for more information or to file a complaint. The services of JAMS are provided at no cost to you. Additional Information for EEA, Swiss, and UK Users If you live in the European Economic Area (“EEA”) or Switzerland, Reddit Netherlands B.V. is the controller of information processed in connection with the Reddit platform and this policy. If you live in the United Kingdom (“UK”), Reddit, Inc., is the data controller and Reddit UK Limited is Reddit’s UK GDPR Article 27 Representative. Users in the EEA, Switzerland, and UK have the right to request access to, rectification of, or erasure of their personal data; to data portability in certain circumstances; to request restriction of processing; to object to processing; and to withdraw consent for processing where they have previously provided consent. These rights can be exercised as described in the “Your Rights and Choices” section above. EEA and Swiss users also have the right to lodge a complaint with their local supervisory authority. As required by applicable law, we collect and process information about individuals in the EEA, Switzerland, and UK only where we have a legal basis for doing so. Our legal bases depend on the Services you use and how you use them. We process your information where: Additional Information for California & Other US State Users Some US state laws, including the California Consumer Privacy Act (“CCPA”), as amended, require us to provide residents with some additional information and rights, which we address in this section. 
In the last 12 months, we collected the following categories of personal information from residents, depending on the Services used: You can find more information about (a) what we collect and sources of that information in the “What Information We Collect” section of this notice, (b) the business and commercial purposes for collecting that information in the “How We Use Your Information” section, and (c) the categories of third parties with whom we share that information and the purpose for sharing that information in the “How We Share Your Information” section. Depending on your jurisdiction, and subject to exceptions and limitations provided by local law, in addition to the rights listed in “Your Rights and Choices” above, you may have: the right to opt out of any sales or sharing of your personal information, to request access to and information about our data practices, and to request deletion or correction of your personal information, as well as the right not to be discriminated against for exercising your privacy rights. For users in states with applicable privacy laws, you may exercise your rights via your Privacy Settings. Reddit does not “sell” or “share” personal information as those terms are defined under applicable state privacy laws. Reddit does not have knowledge that it “sells” or “shares” the personal information of users under 16 years of age. We do not use or disclose sensitive personal information except to provide you the Services or as otherwise permitted by applicable regulations. We do not engage in profiling of consumers in furtherance of automated decisions that produce legal or similarly significant effects as those terms are defined in applicable regulations. You may exercise your rights to access, delete, or correct your personal information as described in the “Your Rights and Choices” section of this notice. 
When you make a request, we will verify your identity by asking you to sign into your account or if necessary by requesting additional information from you. You may also make a rights request using an authorized agent. If you submit a rights request from an authorized agent who does not provide a valid power of attorney, we may ask the authorized agent to provide proof that you gave the agent signed permission to submit the request to exercise rights on your behalf. In the absence of a valid power of attorney, we may also require you to verify your own identity directly with us or confirm to us that you otherwise provided the authorized agent permission to submit the request. If you have any questions or concerns, you may reach us using the methods described under “Your Rights and Choices” or by emailing us at redditdatarequests@reddit.com. Children Children under the age of 13 are not allowed to create an account or otherwise use the Services. Additionally, if you are located outside the United States, you must be over the age required by the laws of your country to create an account or otherwise use the Services. Changes to This Policy We may change this Privacy Policy from time to time. If we do, we will let you know by revising the date at the top of the policy. If the changes, in our sole discretion, are material, we may also notify you by sending an email to the address associated with your account (if you have chosen to provide an email address) or by otherwise providing notice through our Services. We encourage you to review the Privacy Policy regularly to stay informed about our information practices and the ways you can help protect your privacy. By continuing to use our Services after Privacy Policy changes go into effect, you agree to be bound by the revised policy. Contact Us If you have other questions about this Privacy Policy, please contact us at: Reddit, Inc. 548 Market St. 
#16093 San Francisco, California 94104 If you live in the United States, the data controller responsible for your information is Reddit, Inc. For email inquiries, you may email us at redditdatarequests@reddit.com or reach our Data Protection Office at dpo@reddit.com. Users in the European Economic Area or Switzerland may contact us at: Reddit Netherlands B.V. Euro Business Center Keizersgracht 62, 1015CS Amsterdam Netherlands dpo@reddit.com United Kingdom users may contact us at: Reddit UK Limited, 5 New Street Square, London, United Kingdom, EC4A 3TW ukrepresentative@reddit.com |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Sony_Music_Entertainment_Japan] | [TOKENS: 2127] |
Sony Music Entertainment Japan Sony Music Entertainment (Japan) Inc. (SMEJ), also known as Sony Music Japan, is a Japanese entertainment company wholly owned by Sony Group Corporation. SMEJ's extensive operations encompass record labels, music publishing, anime production, and event organization. Founded in 1968 as CBS/Sony, the company operates independently from the United States–based Sony Music Entertainment due to its diversity and strength in the Japanese market. Its prominent subsidiaries include Sony Music Labels, which manages and operates its various record labels; Sony Music Solutions, which provides comprehensive support services like physical distribution, merchandise sales, and event planning; and the animation production company, Aniplex. The company holds a dominant position in the anime song market, with its artists providing songs for several series per year. Sony Music Japan has long utilized anime productions as a major platform for its artists, particularly through its subsidiary Aniplex. The establishment in 2017 of the Sacra Music label, dedicated specifically to managing artists prominent in the anisong genre such as LiSA and Aimer, further cemented this focus. This strategy leverages the global popularity of anime titles to propel Japanese artists to international audiences. Sony Music does not hold the trademark rights to the Columbia name in Japan; therefore, releases from Columbia Records (outside of Japan) are issued under the Sony Records label in Japan, though they retain the use of the "walking eye" logo. The rights to the Columbia name and trademark are instead controlled by Nippon Columbia, which served as the licensee for the American Columbia Records until 1968. With Sony Corporation of America's buyout of Bertelsmann's stake in Sony BMG, Sony Music Entertainment Japan stepped in to acquire outstanding shares of BMG Japan from Sony BMG, making it a wholly owned subsidiary of Sony Music Japan. 
History The idea for a CBS/Sony joint venture came in 1967 from Harvey Schein, then President of Columbia Records International, who had spent a decade traveling the world building CBS's international company. In 1972, Schein would leave CBS to become the president of Sony Corporation of America. Sony Music Entertainment Japan was officially incorporated in March 1968 as a Tokyo-based 50/50 joint venture between Sony and U.S. conglomerate CBS to distribute the latter's music releases in Japan. The company was incorporated as CBS/Sony Records, with Sony co-founder Akio Morita as president. Norio Ohga, himself a musician, was part of the management team from the formation of the company and served as president and representative director from April 1970. In 1972, when CBS/Sony was generating robust profits, Ohga was named chairman and, at the same time, gained further responsibility and influence within Sony. He would continue to work for the music company one morning a week. In 1980, Toshio Ozawa succeeded Ohga as president. In 1983, the company was renamed CBS/Sony Group. In January 1988, after more than a year of negotiations, Sony acquired the CBS Records Group and the 50% of CBS/Sony Group that it did not already own. In March 1988, four wholly owned subsidiaries were folded into CBS/Sony Group: CBS/Sony Inc., Epic/Sony Records Inc., CBS/Sony Records Inc. and Sony Video Software International. The company was renamed Sony Music Entertainment (Japan), Inc. On November 22, 1991, Sony Music began trading on the Tokyo Stock Exchange, initially offered at its subscription price of 6,800 yen per share, but the stock fell to 5,700 yen for lack of buyers. Shugo Matsuo was named the new president in January 1992, replacing Toshio Ozawa, who was appointed to the post of chairman. Overall sales for the fiscal year ending March 31, 1991, were 83.8 billion yen with a pretax profit of 9.2 billion yen. In June 1996, Ryokichi Kunugi became the new president. 
Shugo Matsuo was named chairman. Shigeo Maruyama was appointed to the new post of CEO on October 1, 1997, and replaced Kunugi as president in February 1998. In August 1998, the logo was changed from the original "Walking Eye" to the current one. As of 2019, Mizuno Michinori is the company's CEO. In May 2018, SMEJ, through its Sony Creative Products division, acquired a 39% stake in the Peanuts comic strip franchise from DHX Media. Sony Music Entertainment announced the launch of its first video game publishing label, Unties, in October 2017. Unties published indie games for the PlayStation 4, PlayStation VR, Nintendo Switch, and PC. The name was selected by Sony as representative of helping to "unleash" the power of independent video game development and "unshackle" such developers from the traditional video game publishing process. Unties' first release was Tiny Metal, a turn-based tactics video game developed by Area 35, for the Nintendo Switch, PS4, and PC; the game premiered at the PAX West Indie Megabooth. Unties went on to publish Azure Reflections, a side-scrolling bullet hell game developed by Souvenir Circ., for the PS4 on May 15, 2018; Touhou Gensou Wanderers Reloaded, a roguelike RPG developed by Aqua Style, for the PS4, Nintendo Switch, and PC; Necrosphere, a platformer developed by Cat Nigiri, for the PS4, Nintendo Switch, PC, and PS Vita; Midnight Sanctuary, a VR/3D novel game developed by CAVYHOUSE, for the PS4, Nintendo Switch, and PC; Tokyo Dark, a visual novel and mystery adventure hybrid developed by Cherrymochi, for the PC; and Chiki-Chiki Boxy Racers, an arcade racing game developed by Pocket, for the Nintendo Switch on August 30, 2018. The label was also scheduled to publish Last Standard, a 3D action game developed by I From Japan, for the PC, and The Good Life, a daily-life RPG developed by White Owls Inc., for the PS4 and PC. 
Unties was further scheduled to publish:
- Merkava Avalanche, a 3D cavalry warfare action game developed by WinterCrownWorks, for the PC
- Olija, an action adventure game developed by Skeleton Crew Studio, for the PC
- Deemo Reborn, a music rhythm and urban fantasy game developed by Taiwanese studio Rayark, for the PS4 with PSVR support
- Giraffe and Annika, a 3D adventure game developed by Atelier Mimina, for the PS4, Nintendo Switch and PC
- 3rd Eye, a 2D horror exploration game based on the Touhou franchise, for the PS4, Nintendo Switch, and PC
- Gensokyo Defenders, a tower-defense game developed by Neetpia, for the PS4 and Nintendo Switch
In 2019, Unties was spun off from the Sony group into a new company, Phoenixx. The company's leading role in the Japanese market was increasingly challenged by labels such as Avex (in which SMEJ formerly owned 5 percent of shares). Net sales for the fiscal year ending March 31, 1997, were down 10% to 103 billion yen, while net income fell 41% to 7.7 billion yen. The market share at that time was less than 18%. In August 1997, Dreams Come True, until that point Sony Music Entertainment Japan's best-selling act, signed a worldwide multi-album deal with competing U.S. label Virgin Records America. SMEJ was then said to have ceded to Avex's challenge, but it bounced back and held the lead over its independent rival until 2012. In the first half of 2012, SMEJ netted 22.4 billion yen and took 14.3% of the market, second behind Avex (24.95 billion yen, 15.9%). In May 2017, SMEJ, through subsidiary Sony Music Marketing (now Sony Music Solutions), acquired the physical retail and distribution rights to releases of another rival, Warner Music Japan. On June 11, 2025, SMEJ, via Sony Music Labels, acquired the rights to the Spookiz series, including its characters, from Keyring. Group Companies Aniplex Inc.
is the SMEJ subsidiary responsible for the production, distribution and licensing of Japanese animation and related media. Established in September 1995, it became a wholly owned subsidiary of Sony Music Japan in 2001. Aniplex has been involved in various major anime franchises like Fullmetal Alchemist, Puella Magi Madoka Magica, and Demon Slayer. The company also produces stage plays and publishes video games, notably the highly successful mobile game Fate/Grand Order. Sony Music Labels Inc. (SML) is the primary subsidiary of SMEJ's music division, tasked with the consolidated management and operation of the company's many record labels and large musical artist roster. SML was established to create a unified strategy across various genres and market segments. Its high-profile imprints include Sony Music Records, Epic Records Japan, Ki/oon Music, and Sacra Music. Sony Music Solutions Inc. (SMS) serves as the comprehensive services and infrastructure arm of SMEJ. Its extensive responsibilities include the manufacturing, packaging, and physical distribution of music and video content for all group labels. Beyond logistics, SMS is the key provider for fan-facing activities, managing concert and live event production, organizing 2.5D musicals and exhibitions, overseeing the planning and sales of merchandise, and operating official fan clubs. The subsidiary also develops various digital and technology-based solutions to support the group's entertainment businesses. Sony Music Artists Inc. (SMA) is the major talent and artist management agency within the SMEJ group. It handles the careers of numerous Japanese musicians, actors, voice actors, and tarento. SMA provides management, booking, and promotional services, operating as a crucial link between the artists and the recording labels and production houses.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-FOOTNOTEAsakura200037,_64-51] | [TOKENS: 10728] |
PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after it had been released and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges.
The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware.
Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to negotiate a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station.
At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as the company had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them with the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted the research, but decided to turn what it had developed with Nintendo and Sega into a console based on the SNES. Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992.
To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the proposal won Ohga's enthusiasm, a majority of those present at the meeting remained opposed, as did older Sony executives, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation.
According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games alongside high quality visuals and gameplay. Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the PlayStation was not marketed under the Sony name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles.
Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic/Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since it rivalled Sega in the arcade market. Signing these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995); Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America.
Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other systems such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo.
Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should Sony decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993, and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on May 10, 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. One retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995.
At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and walked off to a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million.
Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games and consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available at launch. The PlayStation also did well in markets where it was never officially released. In Brazil, a third company's registration of the trademark meant the console could not be released officially, and the officially distributed Sega Saturn initially took over the market; as the Sega console withdrew, however, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was likewise the Sega Saturn, but after the Saturn left the market the PlayStation grew to a base of 300,000 users by January 2000, although Sony China had no plans to release it.
The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans such as "Live in Your World. Play in Ours." and "U R NOT E" (with a red E), stylised with certain letters replaced by the controller's geometric button symbols. The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. Let me show you how ready I am.'" As the console's appeal grew, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity.
Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry.
Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001, and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing.
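The division of labour described above, with the "cop2" coprocessor transforming 3D geometry and the GPU rasterising only flat 2D shapes, can be sketched in a few lines. This is an illustrative Python sketch, not PlayStation code: the focal length and screen-centre values are arbitrary assumptions, and the integer-only arithmetic merely echoes the console's fixed-point, FPU-less design.

```python
# Illustrative sketch (not PlayStation code) of the pipeline split described
# above: a geometry stage projects 3D vertices to 2D screen coordinates, so
# the rasteriser only ever sees flat 2D polygons to shade and texture.
# `focal`, `cx` and `cy` below are arbitrary assumed values.

def project(vertex, focal=256, cx=160, cy=120):
    """Perspective-project a 3D point to integer screen coordinates."""
    x, y, z = vertex
    # Integer division stands in for fixed-point math (the CPU has no FPU).
    return (cx + focal * x // z, cy + focal * y // z)

# One triangle, transformed by the "geometry engine" stage...
triangle_3d = [(10, 5, 100), (-20, 8, 120), (0, -15, 90)]
# ...becomes a flat 2D polygon, which is all the "GPU" stage would receive.
triangle_2d = [project(v) for v in triangle_3d]
print(triangle_2d)
```

The point of the sketch is the hand-off: everything after `project` works purely in 2D screen space, which is exactly the boundary the article draws between the coprocessor and the GPU.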
It features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can draw 4,000 sprites and 180,000 textured polygons per second, or up to 360,000 polygons per second when flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model onwards. Subsequent models removed the parallel port, with later versions retaining only the serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries.
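The R3000 core has no floating-point unit, so the GTE performs its 3D transformations in fixed-point arithmetic. A minimal sketch of that style of maths, assuming a 4.12 format and helper names invented here for demonstration (not the console's actual microcode), might look like this:

```python
# Illustrative 4.12 fixed-point arithmetic, similar in spirit to the
# fixed-point maths the GTE used in place of floating point.
# The format and helper names are assumptions for demonstration only.
import math

FRAC_BITS = 12          # 4.12 format: 12 fractional bits
ONE = 1 << FRAC_BITS    # fixed-point representation of 1.0

def to_fixed(x: float) -> int:
    """Convert a float to the 4.12 fixed-point representation."""
    return int(round(x * ONE))

def fx_mul(a: int, b: int) -> int:
    """Multiply two 4.12 values; the shift restores the scale."""
    return (a * b) >> FRAC_BITS

# Rotate the point (1, 0) by 30 degrees using fixed-point sine/cosine.
c = to_fixed(math.cos(math.pi / 6))
s = to_fixed(math.sin(math.pi / 6))
x, y = to_fixed(1.0), to_fixed(0.0)
xr = fx_mul(x, c) - fx_mul(y, s)
yr = fx_mul(x, s) + fx_mul(y, c)
print(xr / ONE, yr / ONE)  # approximately (0.866, 0.5)
```

The appeal of this approach on hardware of the era is that every operation is an integer multiply and shift, which the coprocessor could execute far faster than software floating point.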
The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service and came with the documentation and software needed to program PlayStation games and applications using C compilers. On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo Pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square. Rather than labelling its buttons with the traditional letters or numbers, the PlayStation controller established a trademark that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus.
The European and North American models of the original PlayStation controller are roughly 10% larger than their Japanese counterpart, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the sticks), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
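The contrast between binary switches and potentiometers can be sketched abstractly: an eight-way switch yields only a handful of discrete directions, while a potentiometer read through an analogue-to-digital converter yields a continuous axis value. The ranges and function names below are assumptions for illustration, not any real controller's protocol:

```python
# Toy contrast between digital eight-way input and an analogue axis.
# The 10-bit ADC range and all names here are illustrative assumptions.

def digital_direction(up: bool, down: bool, left: bool, right: bool):
    """Binary switches: only eight directions (plus neutral) exist."""
    dx = (1 if right else 0) - (1 if left else 0)
    dy = (1 if up else 0) - (1 if down else 0)
    return dx, dy

def analogue_axis(raw: int, raw_min: int = 0, raw_max: int = 1023) -> float:
    """Map a potentiometer ADC reading to a continuous -1.0..1.0 axis."""
    return 2 * (raw - raw_min) / (raw_max - raw_min) - 1

print(digital_direction(False, False, False, True))  # (1, 0): hard right
print(round(analogue_axis(700), 3))                  # partial deflection
```

The point is that the analogue axis can report any intermediate deflection, which is what makes fine steering or camera movement in 3D games possible.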
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. 
The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUIs of the PlayStation and PS One differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on the original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R media and optical disc drives with burning capability.
To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, the disc drive could not detect the wobble frequency, so duplicated discs omitted it: the laser pick-up system of any optical disc drive would interpret the wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction.
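Why an ordinary drive strips the wobble out of a copy can be modelled abstractly: the drive's tracking servo follows slow apparent run-out and thereby removes it from the signal it passes on, whereas the console's pick-up decodes it. The following toy simulation uses invented numbers and a deliberately simplified servo model; it is not Sony's actual scheme:

```python
# Conceptual model (all numbers assumed) of why a standard CD drive
# cannot copy the wobble: its tracking servo treats the wobble as disc
# run-out and cancels it before the data path ever sees it.
import math

SAMPLE_RATE = 44_100
WOBBLE_HZ = 220.5          # hypothetical wobble frequency
SAMPLES = 1000
wobble = [math.sin(2 * math.pi * WOBBLE_HZ * t / SAMPLE_RATE)
          for t in range(SAMPLES)]

def servo_compensated(signal, gain=0.99):
    """Model a tracking servo that follows (and so removes) slow error."""
    follow, out = 0.0, []
    for s in signal:
        follow += gain * (s - follow)   # servo tracks the apparent run-out
        out.append(s - follow)          # residual seen by the data path
    return out

residual = servo_compensated(wobble)
# After the servo settles, almost none of the wobble survives in the copy:
print(max(abs(r) for r in residual[100:]))
```

In this toy model the original wobble swings between -1 and 1, but the servo-compensated residual is orders of magnitude smaller, which is the essence of why a burned duplicate lacks the signature the console looks for.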
The placement of the laser unit close to the power supply accelerates wear, because the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998) and Metal Gear Solid (1998), all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash!
heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. By the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers continued to contribute to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives of this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it. Reception The PlayStation was mostly well received upon release.
Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling that of Sega and Nintendo. Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities, in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega.
Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025[update], with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success proved a significant financial boon for Sony, with profits from the video game division coming to contribute 23% of the company's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs.
Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible". In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets.
Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games to the user at about 40% lower cost than ROM cartridges while still making the same net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for audio CDs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get onto the market, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also gave publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
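The pricing claim can be illustrated with back-of-the-envelope arithmetic. All figures below are assumed for demonstration only; the source gives just the roughly 40% figure and the fact that net revenue per unit was preserved:

```python
# Back-of-the-envelope illustration (all figures assumed, not sourced)
# of how cheaper media lets the retail price fall while the publisher's
# net revenue per unit stays the same.

cart_price = 70.0        # assumed cartridge retail price
cart_media_cost = 30.0   # assumed cartridge manufacturing cost
net_revenue = cart_price - cart_media_cost   # what the publisher keeps

cd_media_cost = 2.0                 # assumed CD pressing cost per unit
cd_price = net_revenue + cd_media_cost  # same net revenue per unit

discount = 1 - cd_price / cart_price
print(f"CD price ${cd_price:.0f}, {discount:.0%} below the cartridge")
```

Under these assumed numbers the CD sells for $42, 40% below the cartridge, while the publisher's take per unit is unchanged, which is the mechanism the paragraph describes.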
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. See also Notes References