[SOURCE: https://en.wikipedia.org/wiki/Computer#References] | [TOKENS: 10628]
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division.
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was also designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2, for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5โ€“10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoffโ€“Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metalโ€“oxideโ€“silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metalโ€“oxideโ€“semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. System on a Chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries โ€“ and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include: Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
Examples include: The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as followsโ€” this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operationโ€”although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
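A minimal sketch in Python of the ideas just described – memory as a list of numbered cells, a program counter that selects the next instruction, and a jump implemented as nothing more than a write to the program counter. The instruction encoding, opcode names, and cell layout below are invented purely for illustration:

# Memory is a list of numbered cells. Cells 0-3 hold instructions and
# cell 9 holds a data value; the (opcode, a, b) encoding is hypothetical.
memory = [
    ("LOAD", 9, 0),     # cell 0: copy the number in cell 9 into register 0
    ("ADD", 1, 0),      # cell 1: add 1 to register 0
    ("STORE", 9, 0),    # cell 2: write register 0 back into cell 9
    ("JUMP", 0, None),  # cell 3: set the program counter back to cell 0
    0, 0, 0, 0, 0,      # cells 4-8: unused
    0,                  # cell 9: the data value being counted up
]
registers = [0]
pc = 0                            # program counter: address of the next instruction
for _ in range(12):               # run twelve instruction cycles for the demonstration
    opcode, a, b = memory[pc]     # fetch and decode the instruction at address pc
    if opcode == "LOAD":
        registers[b] = memory[a]
        pc += 1
    elif opcode == "ADD":
        registers[b] += a
        pc += 1
    elif opcode == "STORE":
        memory[a] = registers[b]
        pc += 1
    elif opcode == "JUMP":
        pc = a                    # a "jump" is simply an overwrite of the program counter
print(memory[9])                  # the four-instruction loop ran three times, so this prints 3

Because the program counter is itself just a number held in a cell, arithmetic on it (as in the JUMP case above) is all that loops and conditional execution require.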
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation (a short worked example is given below). Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
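As a short worked example of the byte values and two's-complement encoding described above, the following Python sketch (the particular bit pattern is arbitrary) shows the same eight bits read both ways:

# Eight bits can be read as an unsigned value (0 to 255) or, in two's
# complement, as a signed value (-128 to +127).
bits = "11111011"                    # one byte, written as a bit string
unsigned = int(bits, 2)              # read as an unsigned value: 251
signed = unsigned - 256 if bits[0] == "1" else unsigned   # two's-complement reading: -5
print(unsigned, signed)              # prints: 251 -5

# Several consecutive bytes store larger numbers; four bytes give 2^32 possible values.
print(int.from_bytes(b"\x00\x00\x01\x00", "big"))          # prints: 256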
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn (a toy sketch of this round-robin idea is given at the end of this section). Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
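The round-robin time-sharing described above can be sketched in a few lines of Python. This is a toy model rather than how any real operating-system scheduler works: each "program" here is a generator that performs one small unit of work and then yields, standing in for a running program being interrupted and later resumed:

from collections import deque

def counter(name, limit):
    # A toy "program": counts upward, yielding after each unit of work.
    total = 0
    while total < limit:
        total += 1
        yield f"{name} counted to {total}"

ready = deque([counter("A", 3), counter("B", 2)])   # the ready queue
while ready:
    program = ready.popleft()       # give the next program its time slice
    try:
        print(next(program))        # run until the slice ends (the yield)
        ready.append(program)       # requeue it behind the others
    except StopIteration:
        pass                        # this program has finished; drop it

A program that is blocked waiting for input would simply not be returned to the ready queue until its event occurs, which is why, as noted above, multitasking costs little when most programs spend their time waiting on slow input/output devices.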
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machineโ€“based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
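A minimal sketch of such a program in the MIPS assembly language, summing the integers from 1 to 1,000 (the register numbers, labels, and comments are illustrative choices rather than a definitive listing):

      addi $8, $0, 0          # running sum := 0 (register $8)
      addi $9, $0, 1          # loop counter := 1 (register $9)
loop: slti $10, $9, 1001      # set $10 to 1 while the counter is still <= 1000
      beq  $10, $0, done      # once the counter passes 1000, leave the loop
      add  $8, $8, $9         # add the counter to the running sum
      addi $9, $9, 1          # advance the counter
      j    loop               # jump back and repeat
done: add  $2, $8, $0         # copy the finished sum into result register $2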
The example above is written in the MIPS assembly language. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages – some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of line of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. 
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
========================================
[SOURCE: https://en.wikipedia.org/wiki/X_(formerly_Twitter)] | [TOKENS: 17723]
Contents Twitter X, formerly known as Twitter,[b] is an American microblogging and social networking service. It is one of the world's largest social media platforms and one of the most-visited websites. Users can share short text messages, images, and videos in short posts (commonly and unofficially known as "tweets") and like other users' content. The platform also includes direct messaging, video and audio calling, bookmarks, lists, communities, Grok chatbot integration, job search, and a social audio feature (X Spaces). Users can vote on context added by approved users using the Community Notes feature. The platform, initially called twttr, was created in March 2006 by Jack Dorsey, Noah Glass, Biz Stone, and Evan Williams, and was launched in July of that year; it was renamed Twitter some months later. The platform grew quickly; by 2012 more than 100 million users produced 340 million daily tweets. Twitter, Inc., was based in San Francisco, California, and had more than 25 offices around the world. A signature characteristic of the service initially was that posts were required to be brief. Posts were initially limited to 140 characters, which was changed to 280 characters in 2017. The limitation was removed for subscribed accounts in 2023. 10% of users produce over 80% of tweets. In 2020, it was estimated that approximately 48 million accounts (15% of all accounts) were run by Internet bots rather than humans. The service is owned by the American company X Corp., which was established to succeed the prior owner Twitter, Inc. in March 2023 following the October 2022 acquisition of Twitter by Elon Musk for US$44 billion. Musk stated that his goal with the acquisition was to promote free speech on the platform. Since his acquisition, the platform has been criticized for enabling the increased spread of disinformation and hate speech. Linda Yaccarino succeeded Musk as CEO on June 5, 2023, with Musk remaining as the chairman and the chief technology officer. In July 2023, Musk announced that Twitter would be rebranded to "X" and the bird logo would be retired, a process which was completed by May 2024. In March 2025, X Corp. was acquired by xAI, Musk's artificial intelligence company. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. In July 2025, Linda Yaccarino stepped down from her role as CEO. The X site has been accused of becoming increasingly right-wing and catering to hate groups. In February 2026, the platform's owner, xAI, was acquired by SpaceX; this marked the sixth time where the social media platform has changed hands and the third time under Musk's ownership. History Jack Dorsey claims to have introduced the idea of an individual using an SMS service to communicate to a small group in 2006. The original project code name for the service was twttr, an idea that Williams later ascribed to Noah Glass, inspired by Flickr and the five-character length of American SMS short codes. The decision was also partly due to the fact that the domain twitter.com was already in use, and it was six months after the launch of twttr that the crew purchased the domain and changed the name of the service to Twitter. Work on the project started in February 2006. The first Twitter prototype, developed by Dorsey and contractor Florian Weber, was used as an internal service for Odeo employees. The full version was introduced publicly on July 15, 2006. 
In October 2006, Biz Stone, Evan Williams, Dorsey, and other members of Odeo formed Obvious Corporation and acquired Odeo from the investors and shareholders. Williams fired Glass, who was silent about his part in Twitter's startup until 2011. Twitter spun off into its own company in April 2007. The tipping point for Twitter's popularity was the 2007 South by Southwest Interactive (SXSWi) conference. During the event, Twitter usage increased from 20,000 tweets per day to 60,000. The company experienced rapid initial growth thereafter. In 2009, Twitter won the "Breakout of the Year" Webby Award. In February 2010, Twitter users were sending 50 million tweets per day. By March 2010, the company recorded over 70,000 registered applications. In June 2010, about 65 million tweets were posted each day, equaling about 750 tweets sent each second, according to Twitter. As noted on Compete.com, Twitter moved up to the third-highest-ranking social networking site in January 2009 from its previous rank of twenty-second. From September through October 2010, the company began rolling out "New Twitter", an entirely revamped edition of twitter.com. Changes included the ability to see pictures and videos without leaving Twitter itself by clicking on individual tweets which contain links to images and clips from a variety of supported websites, including YouTube and Flickr, and a complete overhaul of the interface. In 2019, Twitter was announced to be the 10th most downloaded mobile app of the decade, from 2010 to 2019. On March 21, 2012, Twitter celebrated its sixth birthday by announcing that it had 140 million users, a 40% rise from September 2011, who were sending 340 million tweets per day. On June 5, 2012, a modified logo was unveiled through the company blog, removing the text to showcase the slightly redesigned bird as the sole symbol of Twitter. On December 18, 2012, Twitter announced it had surpassed 200 million monthly active users.[citation needed] In September 2013, the company's data showed that 200 million users sent over 400 million tweets daily, with nearly 60% of tweets sent from mobile devices. In April 2014, Twitter underwent a redesign that made the site resemble Facebook somewhat, with a profile picture and biography in a column left to the timeline, and a full-width header image with parallax scrolling effect.[c] Late in 2015, it became apparent that growth had slowed, according to Fortune, Business Insider, Marketing Land and other news websites including Quartz (in 2016). In 2019, Twitter released another redesign of its user interface. By the start of 2019[update], Twitter had more than 330 million monthly active users. Twitter then experienced considerable growth during the COVID-19 pandemic in 2020. The platform also was increasingly used for misinformation related to the pandemic. Twitter started marking tweets which contained misleading information, and adding links to fact-checks. In 2021, Twitter began the research phase of Bluesky, an open source decentralized social media protocol where users can choose which algorithmic curation they want. The same year, Twitter also released Twitter Spaces, a social audio feature; "super follows", a way to subscribe to creators for exclusive content; and a beta of "ticketed Spaces", which makes access to certain audio rooms paid. Twitter unveiled a redesign in August 2021, with adjusted colors and a new Chirp font, which improves the left-alignment of most Western languages. 
Elon Musk completed his acquisition of Twitter in October 2022; Musk acted as CEO of Twitter until June 2023, when he was succeeded by Linda Yaccarino. Twitter was rebranded to X on July 23, 2023, and its domain name changed from twitter.com to x.com on May 17, 2024. Yaccarino resigned on July 9, 2025. Now operating as X, the platform closely resembles its predecessor but includes additional features such as long-form texts, account monetization options, audio-video calls, integration with xAI's Grok chatbot, job search, and a repurposing of the platform's verification system as a subscription premium. Several legacy Twitter features were removed from the site after Musk acquired Twitter, including Circles, NFT profile pictures, and the experimental pronouns-in-profiles feature. Musk's aims included transforming X into a "digital town square" and an "everything app" akin to WeChat. X has faced significant controversy post-rebranding. Issues such as the release of the Twitter Files, the suspension of ten journalists' accounts, and the labeling of media outlets as "state-affiliated" and restriction of their visibility have sparked criticism. Despite Musk stepping down as CEO, X continues to struggle with challenges such as viral misinformation, hate speech (especially antisemitism), and child pornography. In response to allegations it deemed unfair, X Corp. has pursued legal action against the nonprofit organizations Media Matters and the Center for Countering Digital Hate. Appearance and features Posts (commonly and unofficially called "tweets") are publicly visible by default, but senders can restrict message delivery to their followers. Users can mute users they do not wish to interact with, block accounts from viewing their posts, and remove accounts from their followers list. Users can post via the website, compatible external applications, or by Short Message Service (SMS). Users may subscribe to other users' posts; this is known as "following", and subscribers are known as "followers" or "tweeps", a portmanteau of Twitter and peeps. Posts can be forwarded by other users to their own feed, a process commonly called a "retweet" (officially "repost"). In 2015, Twitter launched "quote tweet", a feature (now named "quote repost") which allows users to add a comment to their post, embedding one post in the other. Users can also "like" individual tweets. The counters for likes, retweets, and replies appear next to the respective buttons in timelines such as on profile pages and search results. Counters for likes and reposts exist on a post's standalone page too. Since 2020, quote tweets have had their own counter. Until the legacy desktop front end was discontinued in 2020, a row with miniature profile pictures of up to ten liking or retweeting users was displayed, as well as a tweet reply counter next to the corresponding button on a tweet's page. Twitter allows users to update their profile via their phones, either by text messaging or by apps. Twitter announced in a tweet in 2022 that the ability to edit a tweet was being tested for select users; eventually, all Twitter Blue subscribers would be able to use the feature. Users can group posts together by topic or type by using hashtags: words or phrases prefixed with a "#" sign. Similarly, the "@" sign followed by a username is used for mentioning or replying to other users. In 2014, Twitter introduced hashflags, special hashtags that automatically generate a custom emoji next to them for a period of time.
Hashflags may be generated by Twitter themselves or purchased by corporations. To repost a message from another user and share it with one's own followers, a user can click the repost button within the post. Users can reply to other accounts' replies. Users can hide replies to their messages and select who can reply to each of their tweets before sending them: anyone, accounts who follow the post's author, specific accounts, or none. The original, strict 140-character limit was gradually relaxed. In 2016, Twitter announced that attachments, links, and media such as photos, videos, and the person's handle would no longer count toward the limit. In 2017, Twitter handles were similarly excluded and Twitter doubled its character limit to 280. Under the new limit, glyphs are counted as a variable number of characters, depending upon the script they are from. From 2023, Twitter Blue users could create posts of up to 4,000 characters in length. t.co is a URL shortening service created by Twitter. It is only available for links posted to Twitter and not for general use. All links posted to Twitter use a t.co wrapper. Twitter intended the service to protect users from malicious sites, and to use it to track clicks on links within tweets. In June 2011, Twitter announced its own integrated photo-sharing service that enables users to upload a photo and attach it to a tweet right from Twitter.com. Users now have the ability to add pictures to Twitter's search by adding hashtags to the tweet. Twitter planned to provide photo galleries designed to gather and syndicate all photos that a user has uploaded on Twitter and third-party services such as TwitPic. In 2016, Twitter introduced the ability to add a caption of up to 480 characters to each image attached to a tweet, accessible via screen-reading software or by hovering the mouse above a picture inside TweetDeck. In 2022, Twitter made the ability to add and view captions globally available. Descriptions can be added to any uploaded image with a limit of 1,000 characters. Images that have a description feature a badge that says ALT in the bottom-left corner, which brings up the description when clicked. In 2015, Twitter began to roll out the ability to attach poll questions to tweets. Polls are open for up to seven days, and voters are not identified. In Twitter's early years, users could communicate with Twitter using SMS. This was discontinued in most countries in April 2020 after hackers exposed vulnerabilities. In 2016, Twitter began to place a larger focus on live streaming video programming, hosting events including streams of the Republican and Democratic conventions, and winning a bid for non-exclusive streaming rights to ten NFL games in 2016. In 2017, Twitter announced that it planned to construct a 24-hour streaming video channel hosted within the service, featuring content from various partners. Twitter announced a number of new and expanded partnerships for its streaming video services at the event, including Bloomberg, BuzzFeed, Cheddar, IMG Fashion, Live Nation Entertainment, Major League Baseball, MTV and BET, NFL Network, the PGA Tour, The Players' Tribune, Ben Silverman and Howard T. Owens' Propagate, The Verge, Stadium and the WNBA. As of the first quarter of 2017, Twitter had over 200 content partners, who streamed over 800 hours of video across 450 events.
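The character-counting rules described above (a 280-unit budget, script-dependent glyph weights, and links that count as a fixed-length t.co wrapper) can be illustrated with a simplified sketch. The weights and ranges below are assumptions loosely modeled on the configuration published with Twitter's open-source twitter-text library, not the production values, and the function names are invented for illustration.

import re

URL_PATTERN = re.compile(r"https?://\S+")
LINK_WEIGHT = 23           # assumed fixed cost of a t.co-wrapped link
MAX_WEIGHTED_LENGTH = 280  # the limit introduced in 2017

def weighted_length(text: str) -> int:
    # Count every link as a fixed number of units, regardless of its real length.
    stripped, n_links = URL_PATTERN.subn("", text)
    total = n_links * LINK_WEIGHT
    for ch in stripped:
        # Simplified weighting: treat characters up to U+10FF (covering Latin,
        # Greek, Cyrillic, etc.) as one unit and everything else (e.g. CJK
        # glyphs, emoji) as two units, so denser scripts get roughly half
        # the budget. The exact cut-off is an assumption for this sketch.
        total += 1 if ord(ch) <= 0x10FF else 2
    return total

def fits_in_post(text: str) -> bool:
    return weighted_length(text) <= MAX_WEIGHTED_LENGTH

Under these assumed weights, a post written entirely in a Latin script can hold 280 glyphs, while one written entirely in a CJK script reaches the limit at 140.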
Twitter Spaces is a social audio feature that enables users to host or participate in a live-audio virtual environment, called a "space", for conversation. A maximum of 13 people are allowed onstage. The feature was initially limited to users with at least 600 followers, but since October 2021, any Twitter user can create a Space. In March 2020, Twitter began to test a stories feature known as "fleets" in some markets, which officially launched on November 17, 2020. Fleets could contain text and media, were only accessible for 24 hours after they were posted, and were accessed within the Twitter app; Twitter announced it would start implementing advertising into fleets in June 2021. Fleets were removed in August 2021; Twitter had intended for fleets to encourage more users to tweet regularly, but instead they were generally used by already-active users. Twitter introduced its "trends" feature in mid-2008, an algorithmically generated list of trending topics among users. A word or phrase mentioned can become a "trending topic" based on an algorithm. Because a relatively small number of users can affect trending topics through a concerted campaign, the feature has been the target of manipulation campaigns. While some campaigns are innocuous, others have promoted conspiracy theories or hoaxes, or sought to amplify extremist messages. Some featured trends are globally displayed, while others are limited to a specific country. A 2021 study by EPFL researchers found frequent "ephemeral astroturfing" efforts targeting Trends; from 2015 to 2019, "47% of local trends in Turkey and 20% of global trends are fake, created from scratch by bots...The fake trends discovered include phishing apps, gambling promotions, disinformation campaigns, political slogans, hate speech against vulnerable populations and even marriage proposals." The MIT Technology Review reported that, as of 2022, Twitter "sometimes manually overrides particularly objectionable trends" and, for some trends, used both algorithmic and human input to select representative tweets with context. In late 2009, the "Twitter Lists" feature was added, making it possible for users to follow a curated list of accounts all at once, rather than following individual users. Lists can be set to either public or private. Public lists may be recommended to users via the general Lists interface and appear in search results. If a user follows a public list, it will appear in the "View Lists" section of their profile, so that other users may quickly find it and follow it as well. Private lists can only be followed if the creator shares a specific link to their list. Lists add a separate tab to the Twitter interface with the title of the list, such as "News" or "Economics". In October 2015, Twitter introduced "Moments", a feature that allows users to curate tweets from other users into a larger collection. Twitter initially intended the feature to be used by its in-house editorial team and other partners; they populated a dedicated tab in Twitter's apps, chronicling news headlines, sporting events, and other content. In September 2016, creation of moments became available to all Twitter users.
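At its core, the trends feature described above is a frequency computation over recent posts: a term starts trending when its current volume is unusually high relative to its baseline. The sketch below is purely illustrative and is not Twitter's algorithm; every name and threshold in it is an assumption. It counts hashtags in a recent window and ranks them by how sharply their volume spikes against a longer-term average, which also hints at why coordinated posting by a relatively small group can push a tag into the list.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    timestamp: float       # seconds since the Unix epoch
    hashtags: list[str]    # lower-cased hashtags extracted from the post

def trending_hashtags(posts: list[Post], now: float,
                      window_s: float = 3600.0, baseline_s: float = 86400.0,
                      min_recent: int = 50, top_n: int = 10) -> list[str]:
    # Count each hashtag's volume in the recent window and over the baseline period.
    recent, historical = Counter(), Counter()
    for post in posts:
        age = now - post.timestamp
        if age < 0 or age > baseline_s:
            continue
        for tag in post.hashtags:
            historical[tag] += 1
            if age <= window_s:
                recent[tag] += 1
    # Score each tag by the ratio of its recent volume to its expected volume
    # per window, smoothed so that brand-new tags do not divide by zero.
    windows_in_baseline = baseline_s / window_s
    scores = {}
    for tag, count in recent.items():
        if count < min_recent:
            continue  # ignore low-volume tags
        expected = historical[tag] / windows_in_baseline
        scores[tag] = count / (expected + 1.0)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

A production system would additionally deduplicate near-identical posts, weight accounts differently, and apply the kind of manual overrides and curation described above.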
On October 21, 2021, a report based on a "long-running, massive-scale randomized experiment" that analyzed "millions of tweets sent between 1 April and 15 August 2020" found that Twitter's machine learning recommendation algorithm amplified right-leaning politics on personalized user Home timelines. The report compared seven countries with active Twitter users where data was available (Germany, Canada, the United Kingdom, Japan, France, and Spain) and examined tweets "from major political groups and politicians". Researchers used the 2019 Chapel Hill Expert Survey (CHESDATA) to position parties on political ideology within each country. The "machine learning algorithms", introduced by Twitter in 2016, personalized 99% of users' feeds by displaying tweets (even older tweets and retweets from accounts the user had not directly followed) that the algorithm had "deemed relevant" to the users' past preferences. Twitter randomly chose 1% of users whose Home timelines displayed content in reverse-chronological order from users they directly followed. In 2026, a report released in Nature found that X's feed algorithm promotes conservative content and deprioritizes traditional media in favor of conservative activists. Based on an analysis of 4,965 participants in 2023 over 7 weeks, it found that switching from a chronological feed to an algorithmic feed increased engagement and shifted users to adopt conservative political positions, particularly around "policy priorities, perceptions of criminal investigations into Donald Trump and views on the war in Ukraine". It also found that exposure to algorithmic content led users to follow conservative political accounts and that users kept following them even after the algorithm was switched off. Twitter had mobile apps for iPhone, iPad, and Android. In April 2017, Twitter introduced Twitter Lite, a progressive web app designed for regions with unreliable and slow Internet connections and for devices with limited storage capacity, with a size of less than one megabyte. On June 3, 2021, Twitter announced a paid subscription service called Twitter Blue. Following Twitter's rebranding to "X", the subscription service was initially renamed to X Blue (or simply Blue), and, on August 5, 2023, was rebranded as X Premium (or simply Premium). The subscription provides additional premium features to the service. In November 2023, a "Premium+" subscription was launched, with a higher monthly fee giving benefits such as the omission of adverts on the For You and Following feeds. In November 2022, Musk announced plans to add account verification and the ability to upload longer audio and video to Twitter Blue. A previous perk offering advertising-free news articles from participating publishers was dropped, but Musk stated that Twitter did want to work with publishers on a similar "paywall bypass" perk. Musk had pushed for a more expensive version of Twitter Blue following his takeover, arguing that it would be needed to offset a decline in advertising revenue. Twitter states that paid verification is required to help reduce fraudulent accounts. The verification marker was included in a premium tier of Twitter Blue introduced on November 9, 2022, priced at US$7.99 per month. On November 11, 2022, after the introduction of this feature led to prominent issues involving accounts using the feature to impersonate public figures and companies, Twitter Blue with verification was temporarily suspended.
After about a month, Twitter Blue was relaunched on December 12, 2022, though for those purchasing the service through the iOS app store, the cost will be $10.99 a month as to offset the 30% revenue split that Apple takes. Twitter initially grandfathered users and entities that had gained verification due to their status as public figures, referring to them as "legacy verified accounts" that "may or may not be notable". On March 25, 2023, it was announced that "legacy" verification status would be removed; a subscription will be required to retain verified status, costing $1,000 per-month for organizations (which are designated with a gold verified symbol), plus an additional $50 for each "affiliate". The change was originally scheduled for April 1, 2023, but was delayed to April 20, 2023, following criticism of the changes. Musk also announced plans for the "For You" timeline to prioritize verified accounts and user followers only beginning April 15, 2023, and threatened to only allow verified users to participate in polls (although the latter change has yet to occur). Effective April 21, 2023, X requires companies to participate in the verified organizations program to purchase advertising on the platform, although companies that spend at least $1,000 on advertising per-month automatically receive membership in the program at no additional cost. From April 25, 2023, verified users are now prioritized in replies to tweets. In 2021, the company opened applications for its premium subscription options called Super Follows. This lets eligible accounts charge $2.99, $4.99 or $9.99 per month to subscribe to the account. The launch only generated about $6,000 in its first two weeks. In 2023, the Super Follows feature was rebranded as simply "subscriptions", allowing users to publish exclusive long-form posts and videos for their subscribers; the pivot in marketing was reportedly intended to help compete with Substack. In May 2021, Twitter began testing a Tip Jar feature on its iOS and Android clients. The feature allows users to send monetary tips to certain accounts, providing a financial incentive for content creators on the platform. The Tip Jar is optional and users can choose whether or not to enable tips for their account. On September 23, 2021, Twitter announced that it will allow users to tip users on the social network with bitcoin. The feature will be available for iOS users. Previously, users could tip with fiat currency using services such as Square's Cash App and PayPal's Venmo. Twitter will integrate the Strike bitcoin lightning wallet service. It was noted that at this current time, Twitter will not take a cut of any money sent through the tips feature. On August 27, 2021, Twitter rolled out Ticketed Spaces, which let Twitter Spaces hosts charge between $1 and $999 for access to their rooms. In April 2022, Twitter announced that it will partner with Stripe, Inc. for piloting cryptocurrency payouts for limited users in the platform. Eligible users of Ticketed Spaces and Super Follows will be able to receive their earnings in the form of USD coin, a stablecoin whose value is that of the U.S. dollar. Users can also hold their earnings in crypto wallets, and then exchange them into other cryptocurrencies. In July 2021, Twitter began testing a "Shop module" for iOS users in the US, allowing accounts associated with brands to display a carousel of cards on their profiles showcasing products. 
Unlike the Buy button, where order fulfillment was handled from within Twitter, these cards are external links to online storefronts from which the products may be purchased. In March 2022, Twitter expanded the test to allow companies to showcase up to 50 products on their profiles. In November 2021, Twitter introduced support for "shoppable" live streams, in which brands can hold streaming events that similarly display banners and pages highlighting products that are featured in the presentation. In January 2025, X announced plans to introduce an "X Money Account" feature in 2025. The product would be a digital wallet enabling X users to move funds between traditional bank accounts and their digital wallet and to make instant peer-to-peer payments. Visa was announced as partnering with X on the project and, at least initially, cryptocurrencies would not be supported. Usage Daily user estimates vary as the company does not publish statistics on active accounts. A February 2009 Compete.com blog entry ranked Twitter as the third most used social network based on their count of 6 million unique monthly visitors and 55 million monthly visits. An April 2017 statista.com blog entry ranked Twitter as the tenth most used social network based on their count of 319 million monthly visitors. Its global user base in 2017 was 328 million. According to Musk, the platform had 500 million monthly active users in March 2023, 550 million in March 2024, and 600 million in May 2024. In 2009, Twitter was mainly used by older adults who might not have used other social sites before Twitter. According to comScore, only 11% of Twitter's users were aged 12 to 17. According to a study by Sysomos in June 2009, women made up a slightly larger Twitter demographic than men: 53% versus 47%. It also stated that 5% of users accounted for 75% of all activity. According to Quantcast, 27 million people in the US used Twitter in September 2009; 63% of Twitter users were under 35 years old; 60% of Twitter users were Caucasian, but a higher-than-average share (compared to other Internet properties) were African American/Black (16%) or Hispanic (11%); and 58% of Twitter users had a total household income of at least US$60,000. The prevalence of African-American Twitter usage, and of African-American users in many popular hashtags, has been the subject of research studies. Twitter grew from 100 million monthly active users (MAUs) in September 2011 to 255 million in March 2014, and more than 330 million in early 2019. In 2013, there were over 100 million users actively using Twitter daily and about 500 million tweets every day. A 2016 Pew Research poll found that Twitter was used by 24% of all online US adults. It was equally popular with men and women (24% and 25% of online Americans respectively), but more popular with younger generations (36% of 18–29-year-olds). A 2019 survey conducted by the Pew Foundation found that Twitter users are three times as likely to be younger than 50 years old, with the median age of adult U.S. users being 40. The survey found that the 10% of users who are most active on Twitter are responsible for 80% of all tweets. San Antonio-based market-research firm Pear Analytics analyzed 2,000 tweets (originating from the United States and in English) over a two-week period in August 2009 from 11:00 am to 5:00 pm (CST) and separated them into six categories: pointless babble made up 40%, conversational posts 38%, pass-along value 9%, self-promotion 6%, and spam and news 4% each.
Despite Jack Dorsey's own open contention that a message on Twitter is "a short burst of inconsequential information", social networking researcher danah boyd responded to the Pear Analytics survey by arguing that what the Pear researchers labeled "pointless babble" is better characterized as "social grooming" or "peripheral awareness" (which she justifies as persons "want[ing] to know what the people around them are thinking and doing and feeling, even when co-presence isn't viable"). Similarly, a survey of Twitter users found that a more specific social role, passing along messages that include a hyperlink, carries an expectation of reciprocal linking by followers. According to research published in April 2014, around 44% of user accounts have never tweeted. About 22% of Americans say they have used Twitter, according to a 2019 Pew Research Center survey. In 2009, Nielsen Online reported that Twitter had a user-retention rate of 40%. Many people stop using the service after a month; therefore the site may potentially reach only about 10% of all Internet users. Noting how the demographics of Twitter users differ from those of average Americans, commentators have cautioned against media narratives that treat Twitter as representative of the population, adding that only 10% of users tweet actively, and that 90% of Twitter users have tweeted no more than twice. In 2016, shareholders sued Twitter, alleging it "artificially inflated its stock price by misleading them about user engagement". The company announced on September 20, 2021, that it would pay $809.5 million to settle this class-action lawsuit. As of 2026, usage of X on mobile devices has been in a progressive decline, while mobile usage of the rival social network Threads has been increasing; in January 2026, Threads usage rose to match that of X. X remains the dominant network in non-mobile traffic. Branding Before its rebranding to X, Twitter was internationally identifiable by its signature bird logo, or the Twitter Bird. The original logo, which was simply the word Twitter, was in use from its launch in March 2006. It was accompanied by an image of a bird which was later discovered to be a piece of clip art created by the British graphic designer Simon Oxley. A new logo was designed by co-founder Biz Stone with help from designer Philip Pascuzzo, resulting in a more cartoon-like bird in 2009. This version was named "Larry the Bird" after NBA player Larry Bird of the Boston Celtics. Within a year, the Larry the Bird logo underwent a redesign by Stone and Pascuzzo to eliminate the cartoon features, leaving a solid silhouette of Larry the Bird that was used from 2010 through 2012. In 2012, Douglas Bowman created a further simplified version of Larry the Bird, keeping the solid silhouette but making it more similar to a mountain bluebird. This logo was simply called the "Twitter Bird" and was used until July 2023. On July 22, 2023, Elon Musk announced that the service would be rebranded to "X", in his pursuit of creating an "everything app". Musk's profile picture, along with the platform's official accounts, and the icons when browsing or signing up for the platform, were updated to reflect the new logo. The logo resembles the Unicode mathematical alphanumeric symbol U+1D54F 𝕏 MATHEMATICAL DOUBLE-STRUCK CAPITAL X, the letter "X" styled in double-struck bold. Mike Proulx of The New York Times was critical of this change, saying the brand value had been "wiped out".
Mike Carr says the new logo gives a "'Big Brother' tech overlord vibe" in contrast to the "cuddly" nature of the previous bird logo. Users review bombed the newly rebranded "X" app on the iOS App Store on the day it was revealed, and Rolling Stone's Miles Klee said that the rebrand "reeks of desperation". Finances On April 13, 2010, Twitter announced plans to offer paid advertising for companies that would be able to purchase "promoted tweets" to appear in selective search results on the Twitter website, similar to Google Adwords' advertising model. Users' photos can generate royalty-free revenue for Twitter, and an agreement with World Entertainment News Network (WENN) was announced in May 2011. Twitter generated an estimated US$139.5 million in advertising sales during 2011. In June 2011, Twitter announced that it would offer small businesses a self-service advertising system. The self-service advertising platform was launched in March 2012 to American Express card members and merchants in the U.S. on an invite-only basis. To continue their advertising campaign, Twitter announced on March 20, 2012, that promoted tweets would be introduced to mobile devices. In April 2013, Twitter announced that its Twitter Ads self-service platform, consisting of promoted tweets and promoted accounts, was available to all U.S. users without an invite. On August 3, 2016, Twitter launched Instant Unlock Card, a new feature that encourages people to tweet about a brand to earn rewards and use the social media network's conversational ads. The format itself consists of images or videos with call-to-action buttons and a customizable hashtag. In October 2017, Twitter banned the Russian media outlets RT and Sputnik from advertising on their website following the conclusions of the U.S. national intelligence report the previous January that both Sputnik and RT had been used as vehicles for Russia's interference in the 2016 US presidential election. Maria Zakharova for the Russian foreign ministry said the ban was a "gross violation" by the US of free speech. In October 2019, Twitter announced it would stop running political ads on its ad platform effective November 22. This resulted from several spurious claims made by political ads. Company CEO Dorsey clarified that Internet advertising had great power and was extremely effective for commercial advertisers, the power brings significant risks to politics where crucial decisions impact millions of lives. The company reversed the ban in August 2023, publishing criteria governing political advertising which do not allow the promotion of false or misleading content, and requiring advertisers to comply with laws, with compliance being the sole responsibility of the advertiser. In April 2022, Twitter announced a ban on "misleading" advertisements that go against "the scientific consensus on climate change". While the company did not give full guidelines, it stated that the decisions would be made with the help of "authoritative sources", including the Intergovernmental Panel on Climate Change. A 2025 article in The Wall Street Journal reported that Verizon, Ralph Lauren Corporation, and at least four other companies signed advertising contracts with X following legal threats from Musk and CEO Linda Yaccarino. X has been fined several times for non-compliance with laws and regulations. 
On May 25, 2022, Twitter was fined $150 million by the Federal Trade Commission and the United States Department of Justice for collecting users' contact details and using them for targeted advertising. In December 2025, the European Commission fined X €120 million for alleged non-compliance with requirements of the Digital Services Act. Days later, X banned the European Commission from advertising on the platform. Technology X relies on open-source software. The X web interface uses the Ruby on Rails framework, deployed on a performance-enhanced Ruby Enterprise Edition implementation of Ruby. In the early days of Twitter, tweets were stored in MySQL databases that were temporally sharded (large databases were split based on time of posting). After the huge volume of tweets coming in caused problems reading from and writing to these databases, the company decided that the system needed re-engineering. From spring 2007 to 2008, the messages were handled by a Ruby persistent queue server called Starling. Since 2009, the implementation has been gradually replaced with software written in Scala. The switch from Ruby to Scala and the JVM has given Twitter a performance boost, from 200–300 requests per second per host to around 10,000–20,000 requests per second per host. This boost was greater than the 10x improvement that Twitter's engineers envisioned when starting the switch. The continued development of Twitter has also involved a switch from monolithic development of a single app to an architecture where different services are built independently and joined through remote procedure calls. As of April 6, 2011, Twitter engineers confirmed that they had switched away from their Ruby on Rails search stack to a Java server they call Blender. Individual tweets are registered under unique IDs called snowflakes, and geolocation data is added using 'Rockdove'. The URL shortener t.co then checks for a spam link and shortens the URL. Next, the tweets are stored in a MySQL database using Gizzard, and the user receives an acknowledgement that the tweets were sent. Tweets are then sent to search engines via the Firehose API. The process is managed by FlockDB and takes an average of 350 ms. On August 16, 2013, Raffi Krikorian, Twitter's vice president of platform engineering, shared in a blog post that the company's infrastructure handled almost 143,000 tweets per second during that week, setting a new record. Krikorian explained that Twitter achieved this record by blending its homegrown and open source technologies. The service's API allows other web services and applications to integrate with Twitter. Developer interest in Twitter began immediately following its launch, prompting the company to release the first version of its public API in September 2006. The API quickly became iconic as a reference implementation for public REST APIs and is widely cited in programming tutorials. From 2006 until 2010, Twitter's developer platform experienced strong growth and a highly favorable reputation. Developers built upon the public API to create the first Twitter mobile phone clients as well as the first URL shortener. Between 2010 and 2012, however, Twitter made a number of decisions that were received unfavorably by the developer community. In 2010, Twitter mandated that all developers adopt OAuth authentication with just nine weeks of notice. Later that year, Twitter launched its own URL shortener, in direct competition with some of its most well-known third-party developers.
And in 2012, Twitter introduced stricter usage limits for its API, "completely crippling" some developers. While these moves successfully increased the stability and security of the service, they were broadly perceived as hostile to developers, causing them to lose trust in the platform. In July 2020, Twitter released version 2.0 of the public API and began showcasing Twitter apps made by third-party developers on its Twitter Toolbox section in April 2022. In January 2023, Twitter ended third-party access to its APIs, forcing all third-party Twitter clients to shut down. This was controversial among the developer community, as many third-party apps predated the company's official apps, and the change was not announced beforehand. Twitterrific's Sean Heber confirmed in a blog post that the 16-year-old app had been discontinued: "We are sorry to say that the app's sudden and undignified demise is due to an unannounced and undocumented policy change by an increasingly capricious Twitter – a Twitter that we no longer recognize as trustworthy nor want to work with any longer." In February 2023, Twitter announced it would be ending free access to the Twitter API and began offering paid tiers with more limited access. On April 17, 2012, Twitter announced it would implement an "Innovators Patent Agreement" which would obligate Twitter to use its patents only for defensive purposes. Twitter has a history of both using and releasing open-source software while overcoming technical challenges of its service. A page in its developer documentation thanks dozens of open-source projects which it has used, from revision control software like Git to programming languages such as Ruby and Scala. Software released as open source by the company includes the Gizzard Scala framework for creating distributed datastores, the distributed graph database FlockDB, the Finagle library for building asynchronous RPC servers and clients, the TwUI user interface framework for iOS, and the Bower client-side package manager. The popular Bootstrap frontend framework was also started at Twitter and is the 10th most popular repository on GitHub. On March 31, 2023, Twitter released to GitHub the source code for its recommendation algorithm, which determines what tweets show up on the user's personal timeline. According to Twitter's blog post: "We believe that we have a responsibility, as the town square of the internet, to make our platform transparent. So today we are taking the first step in a new era of transparency and opening much of our source code to the global community." Elon Musk, the CEO at the time, had been promising the move for a while; on March 24, 2022, before he owned the site, he polled his followers about whether Twitter's algorithm should be open source, and around 83% of the responses said "yes". In February 2023, he promised it would happen within a week before pushing back the deadline to March 31. Twitter updated its source code repository in January 2026 with a new algorithm that depends on an external large language model (such as Grok) to evaluate posts, leading researchers to describe the disclosed source code as lacking in transparency. Also in March 2023, Twitter suffered a security attack which resulted in proprietary code being released; Twitter then had the leaked source code removed.
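The "snowflake" identifiers mentioned in the technology overview above are 64-bit integers that pack a millisecond timestamp, a worker (machine) identifier, and a per-millisecond sequence number into a single roughly time-sortable ID. The sketch below follows the bit layout and custom epoch Twitter described when it open-sourced Snowflake; treat those constants as assumptions rather than a statement about how current X IDs are generated, and the example ID is invented for illustration.

from datetime import datetime, timezone

TWITTER_EPOCH_MS = 1288834974657  # custom epoch from the published Snowflake scheme (assumed)

def decompose_snowflake(snowflake_id: int) -> dict:
    # Upper 41 bits (after the sign bit): milliseconds elapsed since the custom epoch.
    # Next 10 bits: the worker/machine that generated the ID.
    # Lowest 12 bits: per-millisecond sequence number on that worker.
    return {
        "timestamp_ms": (snowflake_id >> 22) + TWITTER_EPOCH_MS,
        "worker_id": (snowflake_id >> 12) & 0x3FF,
        "sequence": snowflake_id & 0xFFF,
    }

# Example: recover the (UTC) creation time encoded in a hypothetical tweet ID.
parts = decompose_snowflake(1585341984679469056)
created_at = datetime.fromtimestamp(parts["timestamp_ms"] / 1000, tz=timezone.utc)

Because the timestamp occupies the high-order bits, IDs generated later compare as larger integers, which is what makes such IDs sortable by creation time without a central counter.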
Twitter introduced the first major redesign of its user interface in September 2010, adopting a dual-pane layout with a navigation bar along the top of the screen, and an increased focus on the inline embedding of multimedia content. Critics considered the redesign an attempt to emulate features and experiences found in mobile apps and third-party Twitter clients. The new layout was revised in 2011 with a focus on continuity with the web and mobile versions, introducing "Connect" (interactions with other users such as replies) and "Discover" (further information regarding trending topics and news headlines) tabs, an updated profile design, and moving all content to the right pane (leaving the left pane dedicated to functions and the trending topics list). In March 2012, Twitter became available in Arabic, Farsi, Hebrew and Urdu, the first right-to-left language versions of the site. In 2023, the Twitter website listed 34 languages supported by Twitter.com. In September 2012, a new layout for profiles was introduced, with larger "covers" that could be customized with a custom header image, and a display of the user's recently posted photos. The "Discover" tab was discontinued in April 2015 and was succeeded on the mobile app by an "Explore" tab, which features trending topics and moments. In September 2018, Twitter began to migrate selected web users to its progressive web app (based on its Twitter Lite experience for mobile web), reducing the interface to two columns. Migrations to this iteration of Twitter increased in April 2019, with some users receiving it with a modified layout. In July 2019, Twitter officially released this redesign, with no further option to opt out while logged in. It is designed to further unify Twitter's user experience between the web and mobile application versions, adopting a three-column layout with a sidebar containing links to common areas (including "Explore", which has been merged with the search page) that previously appeared in a horizontal top bar, profile elements such as picture and header images and biography texts merged into the same column as the timeline, and features from the mobile version (such as multi-account support and an opt-out for the "top tweets" mode on the timeline). In response to early Twitter security breaches, the United States Federal Trade Commission (FTC) brought charges against the service; the charges were settled on June 24, 2010. This was the first time the FTC had taken action against a social network for security lapses. The settlement requires Twitter to take a number of steps to secure users' private information, including maintenance of a "comprehensive information security program" to be independently audited biannually. After a number of high-profile hacks of official accounts, including those of the Associated Press and The Guardian, in April 2013, Twitter announced two-factor login verification as an added measure against hacking. On July 15, 2020, a major hack of Twitter affected 130 high-profile accounts, both verified and unverified ones, such as those of Barack Obama, Bill Gates, and Elon Musk; the hack allowed bitcoin scammers to send tweets via the compromised accounts asking followers to send bitcoin to a given public address, with the promise to double their money. Within a few hours, Twitter disabled tweeting and reset passwords from all verified accounts.
Analysis of the event revealed that the scammers had used social engineering to obtain credentials from Twitter employees in order to access an administration tool used by Twitter to view and change these accounts' personal details, as part of a "smash and grab" attempt to make money quickly, with an estimated US$120,000 in bitcoin deposited in various accounts before Twitter intervened. Several law enforcement entities, including the FBI, launched investigations into the attack. On August 5, 2022, Twitter disclosed that a bug introduced in a June 2021 update to the service allowed threat actors to link email addresses and phone numbers to Twitter users' accounts. The bug was reported through Twitter's bug bounty program in January 2022 and subsequently fixed. While Twitter originally believed no one had taken advantage of the vulnerability, it was later revealed that a user on the online hacking forum Breach Forums had used the vulnerability to compile a list of over 5.4 million user profiles, which they offered to sell for $30,000. The information compiled by the hacker included users' screen names, locations, and email addresses, which could be used in phishing attacks or to deanonymize accounts running under pseudonyms. During an outage, Twitter users were at one time shown the "fail whale" error message image created by Yiying Lu, illustrating eight orange birds using a net to hoist a whale from the ocean, captioned "Too many tweets! Please wait a moment and try again." Web designer and Twitter user Jen Simmons was the first to coin the term "fail whale", in a September 2007 tweet. In a November 2013 Wired interview, Chris Fry, VP of Engineering at that time, noted that the company had taken the "fail whale" out of use as the platform was now more stable. Twitter had approximately 98% uptime in 2007 (or about six full days of downtime). The downtime was particularly noticeable during events popular with the technology industry, such as the 2008 Macworld Conference & Expo keynote address. As of January 16, 2026, Twitter's error message during an outage is "Something went wrong, but don't fret – Let's give it another shot." User accounts In June 2009, after being criticized by Kanye West and sued by Tony La Russa over unauthorized accounts run by impersonators, the company launched its "Verified Accounts" program. Twitter stated that an account with a "blue tick" verification badge indicates "we've been in contact with the person or entity the account is representing and verified that it is approved". In July 2016, Twitter announced a public application process to grant verified status to an account "if it is determined to be of public interest" and that verification "does not imply an endorsement". Verified status allows access to some features unavailable to other users, such as only seeing mentions from other verified accounts. In November 2020, Twitter announced a relaunch of its verification system in 2021. According to the new policy, Twitter verifies six different types of accounts; for three of them (companies, brands, and influential individuals like activists), the existence of a Wikipedia page would be one criterion for showing that the account has "Off Twitter Notability". Twitter stated that it would re-open public verification applications at some point in "early 2021".
In October 2022, after the takeover of Twitter by Elon Musk, it was reported that verification would instead be included in the paid Twitter Blue service, and that existing verified accounts would lose their status if they do not subscribe. On November 1, Musk confirmed that verification would be included in Blue in the future, dismissing the existing verification system as a "lords & peasants system". After concerns over the possibility of impersonation, Twitter subsequently reimplemented a second "Official" marker, consisting of a grey tick and "Official" text displayed under the username, for high-profile accounts of "government and commercial entities". In December 2022, the "Official" text was replaced by a gold checkmark for organizations, as well as a grey check mark for government and multilateral accounts. In March 2023, the gold check mark was made available for organizations to purchase through the Verified Organizations program (formerly called Twitter Blue for Business). Tweets are public, but users can also send private "direct messages". Information about who has chosen to follow an account and who a user has chosen to follow is also public, though accounts can be changed to "protected" which limits this information (and all tweets) to approved followers. Twitter collects personally identifiable information about its users and shares it with third parties as specified in its privacy policy. The service also reserves the right to sell this information as an asset if the company changes hands.[non-primary source needed] Advertisers can target users based on their history of tweets and may quote tweets in ads directed specifically to the user. Twitter launched the beta version of their "Verified Accounts" service on June 11, 2009, allowing people with public profiles to announce their account name. The profile pages of these accounts display a badge indicating their status. On December 14, 2010, the United States Department of Justice issued a subpoena directing Twitter to provide information for accounts registered to or associated with WikiLeaks. Twitter decided to notify its users and said, "... it's our policy to notify users about law enforcement and governmental requests for their information, unless we are prevented by law from doing so." In May 2011, a claimant known as "CTB" in the case of CTB v Twitter Inc. took action against Twitter at the High Court of Justice of England and Wales, requesting that the company release details of account holders. This followed gossip posted on Twitter about professional footballer Ryan Giggs's private life. This led to the 2011 British privacy injunctions controversy and the "super-injunction". Tony Wang, the head of Twitter in Europe, said that people who do "bad things" on the site would need to defend themselves under the laws of their own jurisdiction in the event of controversy and that the site would hand over information about users to the authorities when it was legally required to do so. He also suggested that Twitter would accede to a UK court order to divulge names of users responsible for "illegal activity" on the site. Twitter acquired Dasient, a startup that offers malware protection for businesses, in January 2012. Twitter announced plans to use Dasient to help remove hateful advertisers on the website. Twitter also offered a feature which would allow tweets to be removed selectively by country, before deleted tweets used to be removed in all countries. 
The first use of the policy was to block the account of German neo-Nazi group Besseres Hannover on October 18, 2012. The policy was used again the following day to remove anti-Semitic French tweets with the hashtag #unbonjuif ("a good Jew"). After the sharing of images showing the killing of American journalist James Foley in 2014, Twitter said that in certain cases it would delete pictures of people who had died after requests from family members and "authorized individuals". In 2015, following updated terms of service and privacy policy, Twitter users outside the United States were legally served by the Ireland-based Twitter International Company instead of Twitter, Inc. The change made these users subject to Irish and European Union data protection laws. On April 8, 2020, Twitter announced that users outside of the European Economic Area or United Kingdom (thus subject to GDPR) will no longer be allowed to opt out of sharing "mobile app advertising measurements" to Twitter third-party partners. On October 9, 2020, Twitter took additional steps to counter misleading campaigns ahead of the 2020 US Election. Twitter's new temporary update encouraged users to "add their own commentary" before retweeting a tweet, by making 'quoting tweet' a mandatory feature instead of optional. The social network giant aimed at generating context and encouraging the circulation of more thoughtful content. After limited results, the company ended this experiment in December 2020. On May 25, 2022, Twitter was fined $150 million for collecting users' phone numbers and email addresses used for security and using them for targeted advertising, required to notify its users, and banned from profiting from "deceptively collected data". The Federal Trade Commission (FTC) and the Department of Justice stated that Twitter violated a 2011 agreement not to use personal security data for targeted advertising. In September 2024, the FTC released a report summarizing 9 company responses (including from X) to orders made by the agency pursuant to Section 6(b) of the Federal Trade Commission Act of 1914 to provide information about user and non-user data collection (including of children and teenagers) and data use by the companies that found that the companies' user and non-user data practices put individuals vulnerable to identity theft, stalking, unlawful discrimination, emotional distress and mental health issues, social stigma, and reputational harm. In August 2013, Twitter announced plans to introduce a "report abuse" button for all versions of the site following uproar, including a petition with 100,000 signatures, over Tweets that included rape and death threats to historian Mary Beard, feminist campaigner Caroline Criado-Perez and the member of parliament Stella Creasy. Twitter announced new reporting and blocking policies in December 2014, including a blocking mechanism devised by Randi Harper, a target of GamerGate. In February 2015, CEO Dick Costolo said he was 'frankly ashamed' at how poorly Twitter handled trolling and abuse, and admitted Twitter had lost users as a result. As per a research study conducted by IT for Change on abuse and misogynistic trolling on Twitter directed at Indian women in public-political life, women perceived to be ideologically left-leaning, dissenters, Muslim women, political dissenters, and political commentators and women from opposition parties received a disproportionate amount of abusive and hateful messages on Twitter. 
In 2016, Twitter announced the creation of the Twitter Trust & Safety Council to help "ensure that people feel safe expressing themselves on Twitter". The council's inaugural members included 50 organizations and individuals. The announcement of Twitter's "Trust & Safety Council" was met with objection from parts of its userbase. Critics accused the member organizations of being heavily skewed towards "the restriction of hate speech" and a Reason article expressed concern that "there's not a single uncompromising anti-censorship figure or group on the list". Twitter banned 7,000 accounts and limited 150,000 more that had ties to QAnon on July 21, 2020. The bans and limits came after QAnon-related accounts began harassing other users through practices of swarming or brigading, coordinated attacks on these individuals through multiple accounts in the weeks prior. Those accounts limited by Twitter will not appear in searches nor be promoted in other Twitter functions. Twitter said they will continue to ban or limit accounts as necessary, with their support account stating "We will permanently suspend accounts Tweeting about these topics that we know are engaged in violations of our multi-account policy, coordinating abuse around individual victims, or are attempting to evade a previous suspension". In September 2021, Twitter began beta testing a feature called Safety Mode. The functionality aims to limit unwelcome interactions through automated detection of negative engagements. If a user has Safety Mode enabled, authors of tweets that are identified by Twitter's technology as being harmful or exercising uninvited behavior will be temporarily unable to follow the account, send direct messages, or see tweets from the user with the enabled functionality during the temporary block period. Jarrod Doherty, senior product manager at Twitter, stated that the technology in place within Safety Mode assesses existing relationships to prevent blocking accounts that the user frequently interacts with. In January 2016, Twitter was sued by the widow of a U.S. man killed in the 2015 Amman shooting attack, claiming that allowing the Islamic State of Iraq and the Levant (ISIL) to continually use the platform, including direct messages in particular, constituted the provision of material support to a terrorist organization, which is illegal under U.S. federal law. Twitter disputed the claim, stating that "violent threats and the promotion of terrorism deserve no place on Twitter and, like other social networks, our rules make that clear". The lawsuit was dismissed by the United States District Court for the Northern District of California, upholding the Section 230 safe harbor, which dictates that the operators of an interactive computer service are not liable for the content published by its users. The lawsuit was revised in August 2016, providing comparisons to other telecommunications devices. The second amended complaint was dismissed by the district court, a decision affirmed on appeal to the U.S. Court of Appeals for the Ninth Circuit on January 31, 2018. Twitter suspended multiple parody accounts that satirized Russian politics in May 2016, sparking protests and raising questions about where the company stands on freedom of speech. Following public outcry, Twitter restored the accounts the next day without explaining why the accounts had been suspended. 
The same day, Twitter, along with Facebook, Google, and Microsoft, jointly agreed to a European Union code of conduct obligating them to review "[the] majority of valid notifications for removal of illegal hate speech" posted on their services within 24 hours. In August 2016, Twitter stated that it had banned 235,000 accounts over the past six months, bringing the overall number of suspended accounts to 360,000 in the past year, for violating policies banning use of the platform to promote extremism. On May 10, 2019, Twitter announced that it had suspended 166,513 accounts for promoting terrorism in the July–December 2018 period, saying there was a steady decrease in terrorist groups trying to use the platform owing to its "zero-tolerance policy enforcement". According to Vijaya Gadde, Legal, Policy and Trust and Safety Lead at Twitter, there was a 19% reduction in terrorism-related tweets from the previous reporting period (January–June 2018). As of July 30, 2020, Twitter blocks URLs in tweets that point to external websites containing malicious content (such as malware and phishing content) as well as hate speech, speech encouraging violence, terrorism, child sexual exploitation, breaches of privacy, and other similar content that is already banned as part of the content of tweets on the site. Users that frequently point to such sites may have their accounts suspended. Twitter said this was to bring its link policy in line with its tweet content restrictions and to prevent users from bypassing those restrictions by simply linking to the banned content. After the onset of protests by Donald Trump's supporters across the US in January 2021, Twitter suspended more than 70,000 accounts, stating that they shared "harmful QAnon-associated content" at a large scale, and were "dedicated to the propagation of this conspiracy theory across the service". One of the accounts suspended was Trump's own account; in February 2025, X settled a lawsuit filed by Trump in response to his suspension, paying Trump approximately $10 million. Between January and late July 2017, Twitter had identified and shut down over 7,000 fake accounts created by Iranian influence operations. In May 2018, in response to scrutiny over the misuse of Twitter by those seeking to maliciously influence elections, Twitter announced that it would partner with the nonprofit organization Ballotpedia to add special labels verifying the authenticity of political candidates running for election in the U.S. In December 2019, Twitter removed 5,929 accounts for violating its manipulation policies. The company investigated and attributed these accounts to a single state-run information operation, which originated in Saudi Arabia. The accounts were reported to be part of a larger group of 88,000 accounts engaged in spammy behavior. However, Twitter did not disclose all of them, as some could possibly be legitimate accounts taken over through hacking. In March 2021, Twitter suspended around 3,500 fake accounts that were running a campaign to influence the American audience, after US intelligence officials concluded that the assassination of The Washington Post journalist Jamal Khashoggi was "approved" by the Saudi Crown Prince Mohammed bin Salman. These Saudi accounts were working in two languages, English and Arabic, to influence public opinion around the issue. Many accounts commented directly on the tweets of US-based media houses, including The Post, CNN, CBS News and The Los Angeles Times.
Twitter was unable to identify the source of the influence campaign. As of 2022[update], the top four countries spreading state-linked Twitter misinformation were Russia, China, Iran and Saudi Arabia. In November 2025, X began displaying information on user accounts for transparency, such as location, account history, and username changes, to combat bots and other malicious accounts. Other X users and media commentators noted apparent inconsistencies between some prominent users' claimed location or nationality and the newly displayed data, with experts suggesting that these accounts were likely used for "rage farming" or as foreign influence operations. A bot is a computer program that can automatically tweet, retweet, and follow other accounts. Twitter's open application programming interface and the availability of cloud servers make it possible for bots to exist within the social networking site. Benign bots may generate creative content and relevant product updates, whereas malicious bots can make unpopular people seem popular, push irrelevant products on users, and spread misinformation, spam or slander. Bots amass significant influence and have been noted to sway elections, influence the stock market, appeal to the public, and attack governments. As of 2013[update], Twitter said there were 20 million fake accounts on Twitter, representing less than 5% of active users. A 2020 estimate put the figure at 15% of all accounts, or around 48 million accounts. Society Twitter has been used for a variety of purposes across many industries and scenarios. For example, it has been used to organize protests, including the protests over the 2009 Moldovan election, the 2009 student protests in Austria, the 2009 Gaza–Israel conflict, the 2009 Iranian green revolution, the 2010 Toronto G20 protests, the 2010 Bolivarian Revolution, the 2010 Stuttgart 21 protests in Germany, the 2011 Egyptian Revolution, the 2011 England riots, the 2011 United States Occupy movement, the 2011 anti-austerity movement in Spain, the 2011 Aganaktismenoi movements in Greece, the 2011 demonstration in Rome, the 2011 Wisconsin labor protests, the 2012 Gaza–Israel conflict, the 2013 protests in Brazil, and the 2013 Gezi Park protests in Turkey. The service was also used as a form of civil disobedience: in 2010, users expressed outrage over the Twitter joke trial by copying a controversial joke about bombing an airport and attaching the hashtag #IAmSpartacus, a reference to the film Spartacus (1960) and a sign of solidarity with a man controversially prosecuted after tweeting that he would bomb an airport if it canceled his flight. #IAmSpartacus became the number one trending topic on Twitter worldwide. Another case of civil disobedience occurred during the 2011 British privacy injunction debate, when several celebrities who had taken out anonymized injunctions were identified by thousands of users in protest at the censorship of traditional journalism. According to documents leaked by Edward Snowden and published in July 2014, the United Kingdom's GCHQ has a tool named BIRDSONG for "automated posting of Twitter updates" and a tool named BIRDSTRIKE for "Twitter monitoring and profile collection". During the 2019–20 Hong Kong protests, Twitter suspended a core group of 1,000 "fake" accounts and an associated network of 200,000 accounts for operating a disinformation campaign linked to the Chinese government. 
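The bots described above are built on Twitter's public API. The following Python sketch illustrates the basic mechanism of an automated posting account; it uses the v2 create-tweet endpoint, and the access token, posting cadence, and message text are placeholders assumed for the example rather than details from the article.

    # A minimal sketch of an automated posting bot, assuming a valid OAuth 2.0
    # user-context access token with write permission (obtaining credentials
    # and developer-platform approval is not shown).
    import time
    import requests

    ACCESS_TOKEN = "..."  # hypothetical credential supplied by the bot operator

    def post_tweet(text: str) -> dict:
        """Create a tweet through the v2 API and return the JSON response."""
        response = requests.post(
            "https://api.twitter.com/2/tweets",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            json={"text": text},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        # Post a scheduled update once an hour, as a benign status bot might.
        while True:
            post_tweet("Hourly status update: all systems nominal.")
            time.sleep(3600)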
On June 12, 2020, Twitter suspended over 7,000 accounts from Turkey because the accounts were fake profiles, designed to support the Turkish president, Recep Tayyip Erdoğan, and managed by a central authority. Turkey's communication director said that the decision was illogical, biased, and politically motivated. Turkey has blocked access to Twitter twice: once after voice recordings appeared on Twitter in which Erdoğan ordered his son to stash away millions of dollars, and again for 12 hours in the aftermath of the February 2023 earthquake, when Erdoğan accused people who criticized the government's lack of help of running a disinformation campaign. In May 2021, Twitter labeled a tweet by Sambit Patra, a spokesman for India's ruling party, the BJP, as "manipulated media", leading to Twitter's offices in Delhi and Gurgaon being raided by the local police. In July 2021, the Indian government released a statement claiming that Twitter had lost its liability protection for user-generated content. This was brought on by Twitter's failure to comply with the new IT rules introduced in 2021, with a filing stating that the company had failed to appoint executives to govern user content on the platform. In 2025, X sued the Indian government for using the IT Act to block tweets and other content on its platform. According to a report by Reuters, the United States ran a propaganda campaign to spread disinformation about the Chinese Sinovac COVID-19 vaccine, including using fake social media accounts on Twitter to spread the claim that the Sinovac vaccine contained pork-derived ingredients and was therefore haram under Islamic law. The campaign primarily targeted people in the Philippines and used a social media hashtag meaning "China is the virus" in Tagalog. Twitter allows pornographic content as long as it is marked "sensitive" by uploaders, which places it behind an interstitial warning and hides it from minors. The "super-follow" feature is said to enable competition with the subscription site OnlyFans, used mainly by sex workers. Many performers use Twitter's service to market and grow their porn businesses, attracting users to paywalled services like OnlyFans by distributing photos and short video clips as advertisements. In April 2022, Twitter convened a "Red Team" for the project known internally as ACM, "Adult Content Monetization". The project was eventually abandoned because of the difficulty of implementing Real ID. A February 2021 report from the company's Health team begins, "While the amount of CSE (child sexual exploitation) online has grown exponentially, Twitter's investment in technologies to detect and manage the growth has not." Until February 2022, the only way for users to flag illegal content was to flag it as "sensitive media", a broad category that left much of the worst material unprioritized for moderation. In a February report, employees wrote that Twitter, along with other tech companies, had "accelerated the pace of CSE content creation and distribution to a breaking point where manual detection, review, and investigations no longer scale" by allowing pornography and failing to invest in systems that could effectively monitor it. The working group made several recommendations, but they were not taken up and the group was disbanded. 
As part of its efforts to monetize porn, Twitter held an internal investigation which reported in April 2022, "Twitter cannot accurately detect child sexual exploitation and non-consensual nudity at scale." John Doe et al. v. Twitter, a civil lawsuit filed in the 9th Circuit Court, alleges that Twitter benefited from sex trafficking and refused to remove the illegal tweets when first informed of them. In an amicus brief filed in the case, the NCMEC said, "The children informed the company that they were minors, that they had been 'baited, harassed, and threatened' into making the videos, that they were victims of 'sex abuse' under investigation by law enforcement", but Twitter failed to remove the videos, "allowing them to be viewed by hundreds of thousands of the platform's users". Some major brands, including Dyson, Mazda, Forbes, and PBS Kids, suspended their marketing campaigns and pulled their ads from the platform after an investigation showed that Twitter failed to suspend 70% of the accounts that shared or solicited the prohibited content. In December 2025, social media users reported that X's integrated chatbot, Grok, would allow users to nonconsensually strip individuals, including minors, or show them performing sexually explicit and pornographic acts. The majority of these prompts targeted women and girls. Images generated by Grok since December 2025 have been disproportionately of people in bikinis, transparent clothes, and the like, with users able to generate such images through prompts such as "put her in a bikini". The scandal led to significant criticism from lawmakers around the world, calls for bans on X, and legal crackdowns on X and xAI for, among other reasons, the facilitation of sexual abuse, revenge porn, and child pornography. A practical use for Twitter's real-time functionality is as an effective de facto emergency communication system for breaking news. It was neither intended nor designed for high-performance communication, but the idea that it could be used for emergency communication was not lost on the creators, who knew early on that the service could have wide-reaching effects when the company used it to communicate during earthquakes. Another practical use being studied is Twitter's ability to track epidemics and how they spread. Additionally, Twitter serves as a real-time sensor for natural disasters such as bushfires and earthquakes. Twitter has been adopted as a communication and learning tool in educational and research settings, mostly in colleges and universities. It has been used as a backchannel to promote student interactions, especially in large-lecture courses. Research has found that using Twitter in college courses helps students communicate with each other and faculty, promotes informal learning, gives shy students a forum for increased participation, increases student engagement, and improves overall course grades. Twitter has become an increasingly popular tool in education for encouraging learning and the sharing of ideas and knowledge in and outside the classroom. By using or creating hashtags, students and educators are able to communicate under specific categories of their choice to enhance and promote education. A broad example of a hashtag used in education is "edchat", used to communicate with other teachers and people following that hashtag. 
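Hashtags such as "edchat" work because they are searchable labels, so tweets filed under a tag can also be collected programmatically. The sketch below illustrates this with the v2 recent-search endpoint, assuming an approved developer bearer token; the token and the result handling are placeholders for the example, not details from the article.

    # A minimal sketch of collecting recent tweets under a hashtag,
    # assuming an app-only bearer token for the v2 recent-search endpoint.
    import requests

    BEARER_TOKEN = "..."  # hypothetical credential

    def recent_tweets_for_hashtag(tag: str, limit: int = 10) -> list[str]:
        """Return the text of recent tweets carrying the given hashtag."""
        response = requests.get(
            "https://api.twitter.com/2/tweets/search/recent",
            headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
            params={"query": f"#{tag}", "max_results": limit},
            timeout=10,
        )
        response.raise_for_status()
        return [tweet["text"] for tweet in response.json().get("data", [])]

    # For example, gather a sample of the #edchat conversation mentioned above.
    for text in recent_tweets_for_hashtag("edchat"):
        print(text)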
Once teachers find someone they want to talk to, they can either direct message the person or narrow down the hashtag to make the topic of the conversation more specific, using hashtags for scichat (science), engchat (English), sschat (social studies). Jonathan Zittrain, professor of Internet law at Harvard Law School, said that "the qualities that make Twitter seem inane and half-baked are what makes it so powerful." In that same vein, and with Sigmund Freud in mind, political communications expert Matthew Auer observed that well-crafted tweets by public figures often deliberately mix trivial and serious information so as to appeal to all three parts of the reader's personality: the id, ego, and superego. The poets Mira Gonzalez and Tao Lin published a book titled Selected Tweets featuring selections of their tweets over some eight years. The novelist Rick Moody wrote a short story for Electric Literature called "Some Contemporary Characters", composed entirely of tweets. Many commentators have suggested that Twitter radically changed the format of reporting due to instant, short, and frequent communication. According to The Atlantic writers Benjamin M. Reilly and Robinson Meyer, Twitter has an outsized impact on the public discourse and media. "Something happens on Twitter; celebrities, politicians and journalists talk about it, and it's circulated to a wider audience by Twitter's algorithms; journalists write about the dustup." This can lead to an argument on a Twitter feed looking like a "debate roiling the country... regular people are left with a confused, agitated view of our current political discourse". In a 2018 article in the Columbia Journalism Review, Matthew Ingram argued much the same about Twitter's "oversized role" and that it promotes immediacy over newsworthiness. In some cases, inauthentic and provocative tweets were taken up as common opinion in mainstream articles. Writers in several outlets unintentionally cited the opinions of Russian Internet Research Agency-affiliated accounts. World leaders and their diplomats have taken note of Twitter's rapid expansion and have been increasingly using Twitter diplomacy, the use of Twitter to engage with foreign publics and their own citizens. US Ambassador to Russia, Michael A. McFaul has been attributed as a pioneer of international Twitter diplomacy. He used Twitter after becoming ambassador in 2011, posting in English and Russian. On October 24, 2014, Queen Elizabeth II sent her first tweet to mark the opening of the London Science Museum's Information Age exhibition. A 2013 study by website Twiplomacy found that 153 of the 193 countries represented at the United Nations had established government Twitter accounts. The same study also found that those accounts amounted to 505 Twitter handles used by world leaders and their foreign ministers, with their tweets able to reach a combined audience of over 106 million followers. According to an analysis of accounts, the heads of state of 125 countries and 139 other leading politicians have Twitter accounts that have between them sent more than 350,000 tweets and have almost 52 million followers. However, only 30 of these do their own tweeting, more than 80 do not subscribe to other politicians and many do not follow any accounts. The Twitter account for the pope was set up in 2012. As of February 2025[update], it has 18 million followers (@Pontifex). 
Twitter is banned completely in Russia, Iran, China and North Korea, and has been intermittently blocked in numerous countries, including Egypt, Iraq, Nigeria, Turkey, Venezuela and Turkmenistan, on various grounds. In 2016, Twitter cooperated with the Israeli government to remove certain content originating outside Israel from tweets seen in Israel. In its 11th biannual transparency report, published on September 19, 2017, Twitter said that Turkey was the country from which about 90% of removal requests came, followed by Russia, France and Germany. Twitter stated that between July 1 and December 31, 2018, "We received legal demands relating to 27,283 accounts from 47 different countries, including Bulgaria, Kyrgyzstan, Macedonia, and Slovenia for the first time." As part of evidence to a U.S. Senate inquiry, the company admitted that its systems "detected and hid" several hundred thousand tweets relating to the 2016 Democratic National Committee email leak. During the curfew in Jammu and Kashmir after the revocation of its autonomous status on August 5, 2019, the Indian government approached Twitter to block accounts accused of spreading anti-India content; by October 25, nearly one million tweets had been removed as a result. In March 2022, shortly after Russia's censorship of Twitter, the platform created a Tor onion service link to allow people to access the website even in countries with heavy Internet censorship. In 2025, India ordered X to block 8,000 accounts for users within India, under threat of fines. X criticized the government's orders and encouraged affected users to seek legal recourse. X requires age verification, via ID or a photo selfie, before users in the UK, EU and EEA can access sensitive content such as pornography, in order to comply with the UK's Online Safety Act 2023 and the EU's Digital Services Act. Twitter removed more than 88,000 propaganda accounts linked to Saudi Arabia. Twitter removed tweets from accounts associated with the Russian Internet Research Agency that had tried to influence public opinion during and after the 2016 US election. In June 2020, Twitter also removed 175,000 propaganda accounts that were spreading biased political narratives for the Chinese Communist Party, the United Russia Party, or Turkey's President Erdogan, identified on the basis of centralized behavior. Twitter also removed accounts linked to the governments of Armenia, Egypt, Cuba, Serbia, Honduras, Indonesia and Iran. Twitter suspended Pakistani accounts tied to government officials for posting tweets about the Kashmir conflict between India and Pakistan. In February 2021, Twitter removed accounts in India that criticized Prime Minister Narendra Modi's government for its conduct during the Indian farmers' protests of 2020–2021. At the start of the COVID-19 pandemic in 2020, numerous tweets spread false medical information related to the pandemic, and Twitter announced a new policy under which it would label tweets containing misinformation going forward. In April 2020, Twitter removed accounts which defended President Rodrigo Duterte's response to the spread of COVID-19 in the Philippines. In November 2020, then Chief Technology Officer and future CEO of Twitter Parag Agrawal, when asked by MIT Technology Review about balancing the protection of free speech as a core value and the endeavour to combat misinformation, said: "Our role is not to be bound by the First Amendment, but our role is to serve a healthy public conversation ... 
focus less on thinking about free speech, but thinking about how the times have changed." Musk had been critical of Twitter's moderation of misinformation prior to his acquisition of the company. After the transition, Musk eliminated the misinformation moderation team and stopped enforcing the policy of labeling tweets with misleading information about coronavirus. While Twitter had joined the European Union's voluntary program to fight disinformation in June 2022, Musk pulled the company out of the program in May 2023. In August 2020, development of Birdwatch was announced, initially described as a moderation tool. Twitter first launched the Birdwatch program in January 2021, weeks after the January 6 United States Capitol attack, intended as a way to debunk misinformation and propaganda, with a pilot program of 1,000 contributors. The aim was to "build Birdwatch in the open, and have it shaped by the Twitter community". In November 2021, Twitter updated the Birdwatch moderation tool to limit the visibility of contributors' identities by creating aliases for their accounts, in an attempt to limit bias towards the author of notes. Twitter then expanded access to notes made by the Birdwatch contributors in March 2022, giving a randomized set of US users the ability to view notes attached to tweets and rate them, with a pilot of 10,000 contributors. On average, contributors were writing 43 notes a day in 2022 prior to the Russian invasion of Ukraine. This increased to 156 on the day of the invasion, estimated to be a very small portion of the misleading posts on the platform. By March 1, only 359 of the 10,000 contributors had proposed notes in 2022, while a Twitter spokeswoman described plans to scale up the program, with the focus on "ensuring that Birdwatch is something people find helpful and can help inform understanding". By September 2022, the program had expanded to 15,000 users. In October 2022, the most commonly published notes were related to COVID-19 misinformation, based on historical usage. In November 2022, at the request of new owner Elon Musk, Birdwatch was rebranded to Community Notes, taking an open-source approach to dealing with misinformation, and expanded to Europe and countries outside of the US. Twitter Inc. v. Taamneh and Gonzalez v. Google were heard by the United States Supreme Court during its 2022–2023 term. Both cases dealt with Internet content providers and whether they are liable for terrorism-related information posted by their users. Twitter v. Taamneh asked whether Twitter and other social media services are liable for user-generated terrorism content under the Antiterrorism and Effective Death Penalty Act of 1996 and fall beyond their Section 230 protections. The court ruled in May 2023 that the charges brought against Twitter and other companies were not permissible under the Antiterrorism Act, and did not address the Section 230 question. This decision also supported the Court's per curiam decision in Gonzalez returning that case to the lower court for review in light of the Twitter decision. In 2016, Twitter shareholder Doris Shenwick filed a lawsuit against Twitter, Inc., claiming executives misled investors over the company's growth prospects. In 2021, Twitter agreed to pay $809.5 million to settle. In May 2022, Twitter agreed to pay $150 million to settle a lawsuit started by the Department of Justice and the Federal Trade Commission. 
The lawsuit concerned Twitter's use of email addresses and phone numbers of Twitter users to target advertisements at them. The company also agreed to third-party audits of its data privacy program. On November 3, 2022, on the eve of expected layoffs, a group of Twitter employees based in San Francisco and Cambridge filed a lawsuit in the U.S. District Court in San Francisco. Naming five current or former workers as plaintiffs, the suit accused the company of violating federal and state laws that govern notice of employment termination. The federal law in question is the Worker Adjustment and Retraining Notification (WARN) Act, and the state law in question is California's state WARN Act. On November 20, 2023, X filed a lawsuit against Media Matters, a media watchdog group. The lawsuit alleges defamation by Media Matters following its publication of a report claiming that advertisements for major brands were displayed alongside posts promoting Adolf Hitler and the Nazi Party. On August 6, 2024, X filed an antitrust lawsuit in the Northern District of Texas against the World Federation of Advertisers, Unilever, Mars, CVS and Ørsted, alleging that the advertisers had conspired via their participation in the Global Alliance for Responsible Media to withhold "billions of dollars in advertising revenue" from the platform. The World Federation of Advertisers created the Global Alliance for Responsible Media in 2019 to address "illegal or harmful content on digital media platforms and its monetization via advertising". On August 13, 2024, the Workplace Relations Commission ordered X to pay €550,000 to former senior staffer Gary Rooney in an unfair dismissal case. X had argued that Rooney's failure to check "yes" at the bottom of an email from Elon Musk constituted resignation. Criticism The platform has faced significant controversy since its purchase by Musk and rebranding to X, including an increase in misinformation, hate speech and antisemitism. According to a report published by the "Never Again" Association, X refuses to remove hate speech or ignores reports. Researchers have called for greater transparency, especially ahead of national elections, based on findings that the platform's algorithm favors a small number of popular accounts, in particular right-leaning users. In July 2025, Musk and xAI's artificial intelligence tool, Grok, faced backlash from X users and the Anti-Defamation League over a series of antisemitic tweets made in response to the July 2025 Central Texas floods. The Grok account acknowledged the "inappropriate" posts and removed the comments. The incident is reported to have happened just days after Musk announced updates to Grok, noting that users should see "a difference when you ask Grok questions." Statistics As of May 2025[update], the ten X accounts with the most followers were: The "Oscar Selfie" orchestrated by 86th Academy Awards host Ellen DeGeneres during the March 2, 2014, broadcast was, at the time, the most retweeted image ever. The photo of twelve celebrities broke the previous retweet record within forty minutes and was retweeted over 1.8 million times in the first hour. On May 9, 2017, Ellen's record was broken by Carter Wilkerson (@carterjwm), who collected nearly 3.5 million retweets in a little over a month. That record was broken when Yusaku Maezawa announced a giveaway on Twitter in January 2019, accumulating 4.4 million retweets. A similar tweet he made in December 2019 was retweeted 3.8 million times. 
The most tweeted moment in the history of Twitter occurred on August 2, 2013; during a Japanese television airing of the Studio Ghibli film Castle in the Sky, fans simultaneously tweeted the word balse (バルス), the incantation for a destruction spell used during the film's climax, at the moment it was uttered in the broadcast. There was a global peak of 143,199 tweets in one second, beating the previous record of 33,388. The most discussed event in Twitter history occurred on October 24, 2015; the hashtag ("#ALDubEBTamangPanahon") for Tamang Panahon, a live special episode of the Filipino variety show Eat Bulaga! at the Philippine Arena, centering on its popular on-air couple AlDub, attracted 41 million tweets.[non-primary source needed] The most-discussed sporting event in Twitter history was the 2014 FIFA World Cup semi-final between Brazil and Germany on July 8, 2014. According to Guinness World Records, the fastest pace to a million followers was set by actor Robert Downey Jr. in 23 hours and 22 minutes in April 2014. This record was later broken by Caitlyn Jenner, who joined the site on June 1, 2015, and amassed a million followers in just 4 hours and 3 minutes. See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#Notes] | [TOKENS: 10628]
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abลซ Rayhฤn al-Bฤซrลซnฤซ in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abลซ Rayhฤn al-Bฤซrลซnฤซ invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620โ€“1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
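The slide rule's multiplication works because adding lengths on logarithmic scales adds logarithms, and log(ab) = log(a) + log(b). A short Python illustration of that principle (of the idea only, not of any particular slide rule's scales):

    # Multiplication by adding logarithms, the principle behind the slide rule.
    import math

    def slide_rule_multiply(a: float, b: float) -> float:
        """Multiply two positive numbers by adding their base-10 logarithms."""
        total_length = math.log10(a) + math.log10(b)  # slide one scale along the other
        return 10 ** total_length                     # read the product back off the scale

    print(slide_rule_multiply(2.0, 3.0))  # ~6.0, up to floating-point rounding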
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, designed to aid in navigational calculations, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". In 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand; this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties, as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x (y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. 
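Torres Quevedo's example formula, mentioned above, amounts to evaluating a^x (y − z)^2 repeatedly for a sequence of sets of values; a brief Python rendering of that computation (the sample value sets are invented for illustration):

    # Evaluate a^x * (y - z)^2 for a sequence of sets of values, the kind of
    # repetitive formula evaluation Torres Quevedo's proposed machine targeted.
    def torres_formula(a: float, x: float, y: float, z: float) -> float:
        return a**x * (y - z) ** 2

    for a, x, y, z in [(2, 3, 5, 1), (10, 2, 7, 4)]:
        print(torres_formula(a, x, y, z))  # 2^3*(5-1)^2 = 128, then 10^2*(7-4)^2 = 900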
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5โ€“10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoffโ€“Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
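Turing's idea of a machine driven entirely by instructions on a tape can be made concrete with a few lines of code. The following Python sketch simulates a tiny Turing machine; the particular transition table, which simply flips every bit it passes over, is an invented example rather than anything from the sources.

    # A minimal Turing machine simulator: a tape, a head, a state, and a
    # transition table mapping (state, symbol) -> (write, move, next state).
    def run_turing_machine(tape, rules, state="start", halt="halt"):
        tape = list(tape)
        head = 0
        while state != halt:
            symbol = tape[head]
            write, move, state = rules[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
            if head == len(tape):          # extend the tape on demand
                tape.append("_")
        return "".join(tape)

    # Example rules: walk right, flipping 0s and 1s, and halt at the blank "_".
    rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run_turing_machine("1011_", rules))  # -> "0100__"

The Manchester Baby mentioned at the end of the preceding paragraph realized this stored-program idea in working hardware.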
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metalโ€“oxideโ€“silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metalโ€“oxideโ€“semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. System on a Chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries โ€“ and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include: Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
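The arrangement of circuits into logic gates described earlier in this section can be illustrated in software: the sketch below models a few gates as Boolean functions and combines them into a half adder, the textbook circuit that adds two single bits (a generic illustration, not a description of any specific hardware).

    # Logic gates modeled as Boolean functions, combined into a half adder.
    def AND(a: int, b: int) -> int:
        return a & b

    def XOR(a: int, b: int) -> int:
        return a ^ b

    def half_adder(a: int, b: int) -> tuple[int, int]:
        """Add two one-bit inputs, returning (sum bit, carry bit)."""
        return XOR(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "=", half_adder(a, b))  # e.g. 1 + 1 = (0, 1): sum 0, carry 1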
Examples include: The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as followsโ€” this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operationโ€”although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
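The fetch-decode-execute cycle and the program counter's role in jumps, described above, can be made concrete with a toy interpreter. The instruction set in the Python sketch below is invented for the example and does not correspond to any real CPU.

    # A toy fetch-decode-execute loop: the program counter selects the next
    # instruction, and a conditional jump simply overwrites the program counter.
    def run(program):
        acc = 0   # a single accumulator register
        pc = 0    # program counter: index of the next instruction
        while pc < len(program):
            op, arg = program[pc]       # fetch and decode
            pc += 1                     # by default, continue with the next instruction
            if op == "LOAD":
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "JUMP_IF_POS":   # modify the program counter: a conditional jump
                if acc > 0:
                    pc = arg
            elif op == "PRINT":
                print(acc)
        return acc

    # Prints 3, 2, 1 by jumping back to instruction 1 while the accumulator is positive.
    run([("LOAD", 3), ("PRINT", None), ("ADD", -1), ("JUMP_IF_POS", 1)])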
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256), either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation; for example, the eight-bit pattern 11111011 represents 251 as an unsigned value but −5 in two's complement. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
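The time-sharing scheme described above can be pictured with a short sketch. The following C program is a deliberately simplified toy, not a real scheduler: a simulated interrupt fires once per tick, each runnable program receives one time slice in turn, and a program that is waiting for input gives its slices up to the others. The task names and slice counts are invented for the example.

```c
#include <stdio.h>
#include <stdbool.h>

/* A toy picture of time-sharing.  A simulated interrupt fires once per tick,
 * and the next runnable program receives one "slice" of processor time.
 * A program that is waiting for input takes no slice, freeing time for the
 * others. */
typedef struct {
    const char *name;
    int  work_left;        /* time slices of computation still required */
    bool waiting_for_io;   /* a blocked program gives up its turn       */
} Task;

int main(void) {
    Task tasks[] = {
        { "editor",   3, true  },   /* blocked until the user "presses a key" */
        { "compiler", 4, false },
        { "player",   2, false },
    };
    const int n = sizeof tasks / sizeof tasks[0];
    int current = 0;

    for (int tick = 1; tick <= 15; tick++) {          /* one interrupt per tick */
        int tried = 0;
        /* round-robin: skip tasks that are finished or waiting for I/O */
        while (tried < n && (tasks[current].work_left == 0 ||
                             tasks[current].waiting_for_io)) {
            current = (current + 1) % n;
            tried++;
        }
        if (tried < n) {
            tasks[current].work_left--;               /* run one time slice */
            printf("tick %2d: ran %-8s (%d slices left)\n",
                   tick, tasks[current].name, tasks[current].work_left);
            current = (current + 1) % n;
        }
        if (tick == 3)                                /* the awaited keypress arrives */
            tasks[0].waiting_for_io = false;
    }
    return 0;
}
```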
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine-based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.
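To make this concrete, here is a minimal sketch of such a program; the register names, labels, and exact instruction sequence are illustrative choices rather than a reproduction of the article's original listing. It adds the integers from 1 to 1,000 and leaves the result (500,500) in a register:

```asm
        # Sum the integers 1 through 1000 (illustrative MIPS32 sketch).
        addi  $t0, $zero, 0        # $t0 = running sum, start at 0
        addi  $t1, $zero, 1        # $t1 = current number, start at 1
loop:   slti  $t2, $t1, 1001       # $t2 = 1 while the current number <= 1000
        beq   $t2, $zero, done     # leave the loop once the number exceeds 1000
        add   $t0, $t0, $t1        # add the current number to the sum
        addi  $t1, $t1, 1          # move on to the next number
        j     loop                 # repeat
done:   add   $v0, $t0, $zero      # copy the result (500500) into $v0
```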
The example above is written in the MIPS assembly language. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages – some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
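The mnemonic-to-opcode translation performed by an assembler, as just described, can be sketched in a few lines of C. This toy example reuses the invented opcode numbers from the earlier toy-machine sketch; the mnemonics and the output format are illustrative only and do not correspond to any real assembler.

```c
#include <stdio.h>
#include <string.h>

/* A toy "assembler": it turns mnemonic instructions into the numeric opcodes
 * of the toy machine sketched earlier.  The mnemonic set and opcode numbers
 * are invented for illustration. */
int opcode_for(const char *mnemonic) {
    if (strcmp(mnemonic, "HALT") == 0) return 0;
    if (strcmp(mnemonic, "ADD")  == 0) return 1;
    if (strcmp(mnemonic, "JUMP") == 0) return 2;
    return -1;                                  /* unknown mnemonic */
}

int main(void) {
    /* Source program: one mnemonic and two operands per instruction. */
    const char *source[][3] = {
        { "ADD",  "20", "21" },
        { "JUMP", "9",  "0"  },
        { "HALT", "0",  "0"  },
    };
    /* The assembled output is just a list of numbers, which is exactly what
     * a stored-program machine keeps in its memory. */
    for (size_t i = 0; i < sizeof source / sizeof source[0]; i++)
        printf("%d %s %s\n", opcode_for(source[i][0]), source[i][1], source[i][2]);
    return 0;
}
```

The numeric codes such a tool emits are meaningful only to one processor family, which is why low-level programs remain tied to a particular architecture.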
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of line of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. 
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
========================================
[SOURCE: https://en.wikipedia.org/wiki/List_of_programming_languages#F] | [TOKENS: 115]
List of programming languages This is an index to notable programming languages, in current or historical use. Dialects of BASIC (which have their own page), esoteric programming languages, and markup languages are not included. A programming language does not need to be imperative or Turing-complete, but must be executable and so does not include markup languages such as HTML or XML, but does include domain-specific languages such as SQL and its dialects.
========================================
[SOURCE: https://he.wikipedia.org/wiki/MS-DOS] | [TOKENS: 527]
Contents MS-DOS MS-DOS (an acronym for MicroSoft Disk Operating System) is an operating system for personal computers based on the x86 architecture. It was originally written by Tim Paterson and owned by Seattle Computer Products until it was acquired by Microsoft in 1981. It was the most popular operating system in the DOS family, and the most widespread in the world during the 1980s and 1990s. It was preceded by the M-DOS operating system, released in 1979. MS-DOS was designed for use with Intel's 8086 processor, and specifically for the IBM PC and its compatibles. The operating system was gradually replaced by systems offering a graphical user interface (GUI), in particular the Microsoft Windows family of operating systems. The last version to be developed was MS-DOS 8.0.
========================================
[SOURCE: https://en.wikipedia.org/w/index.php?title=PlayStation_(console)&oldid=1336981971] | [TOKENS: 10728]
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted its research, but decided to turn what it had developed with Nintendo and Sega into a console of its own based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began as a "stunning" success, with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close – Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, where Race simply said "$299" and left to a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 across Sony showrooms, selling 100 units. Sony finally launched the console (PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. For example, in Brazil the console could not be released because a third company had registered the trademark, so the market was initially taken over by the officially distributed Sega Saturn; as the Sega console withdrew, however, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (red E). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal grew, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo's and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, prompted by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001, and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusual for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can also generate a total of 4,000 sprites and 180,000 polygons per second, or 360,000 polygons per second when flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors from the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version only retaining one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service, and it came with the necessary documentation and software to program PlayStation games and applications using C compilers.
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square. Rather than marking its buttons with traditionally used letters or numbers, the PlayStation controller established a trademark look which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad, and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that was not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! 
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberate irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and duplicated discs therefore omitted it, since the laser pick-up system of any optical disc drive would interpret the wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled will become so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. By the time of the PlayStation's discontinuation in 2006, cumulative software shipments had reached 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for all five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985 and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony became a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most games ever produced for a console. Its success resulted in a significant financial boon for Sony, as the video game division came to account for around 23% of the company's overall profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it as the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, attempting to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to bring a fully routed version with multilayer routing as well as documentation and design files in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the Nintendo 64, which relied on proprietary cartridges[d] even though the industry had expected it to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; it was likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far lower, allowing Sony to offer games to the user at about 40% lower cost than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything – the whole PlayStation format – is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/w/index.php?title=Mars&oldid=1337894205] | [TOKENS: 11899]
Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, the atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread under the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground, but it also hosts many enormous volcanoes that are extinct (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period continues to the present and dominates the geological processes still at work on the planet. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, becoming the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the three primary periods are the Noachian, the Hesperian, and the Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
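As a rough check of the gravity and surface-area figures quoted above, the following minimal sketch uses the commonly cited mass and radius ratios of Mars relative to Earth (about 0.107 and 0.532 respectively, values assumed here rather than taken from the text):

# Rough consistency check of two figures quoted above.
# The mass and radius ratios are assumed (commonly cited) values.
M_RATIO = 0.107   # Mars mass / Earth mass (assumed)
R_RATIO = 0.532   # Mars radius / Earth radius (assumed)

# Surface gravity scales as g = GM / R^2, so the ratio is M_ratio / R_ratio^2.
g_ratio = M_RATIO / R_RATIO ** 2
print(f"surface gravity relative to Earth: {g_ratio:.2f}")   # ~0.38, matching the quoted 38%

# Surface area scales as R^2; Earth's dry land is roughly 29% of Earth's surface,
# which is why Mars's area is comparable to all of Earth's dry land.
area_ratio = R_RATIO ** 2
print(f"surface area relative to Earth:    {area_ratio:.2f}")  # ~0.28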
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to increase again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1,650–1,675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2,000–2,400 K, compared to 5,400–6,230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by fine-grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7 and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or the passage of dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the radiation of 1.84 millisieverts per day, or 22 millirads per day, measured during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts of radiation per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, and features lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, assigning a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter.
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps) has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, possibly making Mars a planet with a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust settles out of the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
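These pressures fall off roughly exponentially with altitude, governed by the atmospheric scale height quoted in the next paragraph. A minimal sketch, assuming an isothermal CO2 atmosphere at about 210 K and taking the Hellas floor to lie roughly 7 km below the reference datum (both assumed values, not stated in the text), is:

import math

# Isothermal estimate of the Martian atmospheric scale height, H = R*T / (M*g).
R_GAS = 8.314     # J/(mol K), universal gas constant
T     = 210.0     # K, assumed mean atmospheric temperature
M_CO2 = 0.04401   # kg/mol, molar mass of CO2, the dominant constituent
g     = 3.71      # m/s^2, Martian surface gravity

H = R_GAS * T / (M_CO2 * g)
print(f"scale height: {H / 1000:.1f} km")   # ~10.7 km, close to the quoted 10.8 km

# With p = p0 * exp(-z / H) and the 600 Pa datum pressure quoted above,
# a floor ~7 km below the datum (assumed depth for Hellas) gives roughly the
# 1,155 Pa figure cited for Hellas Planitia.
p0 = 600.0
print(f"estimated pressure 7 km below datum: {p0 * math.exp(7000 / H):.0f} Pa")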
This mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by non-biological processes, such as serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, the higher concentration of atmospheric CO2 and the lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of Mars had temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive and unexpected solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to Earth's. Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity; the planet approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature. The seasons also produce a covering of dry ice on the polar ice caps. Hydrology While Mars holds water in substantial amounts, most of it is dust-covered water ice at the Martian polar ice caps.
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and occasional falls of snow and frost, often mixed with snow of carbon dioxide dry ice. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. Neither partially degraded gullies formed by weathering nor superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars.
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth, at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples, including the broken fragments of "Tintina" rock and "Sutton Inlier" rock, as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that it had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of metres deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times the amount on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
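The "five to seven times" deuterium enrichment quoted above follows from the two D/H ratios once the stated uncertainty is carried through; a minimal check, using only the values given in the text:

# Check of the "five to seven times" enrichment using the D/H ratios quoted above.
DH_MARS, DH_MARS_ERR = 9.3e-4, 1.7e-4   # modern Martian atmosphere
DH_EARTH = 1.56e-4                      # terrestrial reference value

best = DH_MARS / DH_EARTH
low  = (DH_MARS - DH_MARS_ERR) / DH_EARTH
high = (DH_MARS + DH_MARS_ERR) / DH_EARTH
print(f"enrichment: {best:.1f} (range {low:.1f} to {high:.1f})")   # ~6.0 (4.9 to 7.1)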
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest of any planet relative to Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth around opposition, which recurs with a synodic period of 779.94 days. Opposition should not be confused with Mars's solar conjunction, when Earth and Mars are on opposite sides of the Solar System and form a straight line crossing the Sun. The average time between the successive oppositions of Mars, its synodic period, is 780 days, but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, its magnitude at opposition can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit the planet at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) respectively.
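Several figures in this section can be cross-checked with Kepler's third law and the standard synodic-period relation. The following is a minimal sketch, assuming Mars's standard gravitational parameter (GM of roughly 4.283 × 10¹³ m³/s²) and a sidereal rotation period of about 24.62 hours, neither of which is stated in the text:

import math

GM_MARS = 4.283e13   # m^3/s^2, standard gravitational parameter of Mars (assumed value)
HOUR = 3600.0

def orbital_period_hours(a_m):
    # Kepler's third law: T = 2*pi*sqrt(a^3 / GM), with a the orbital radius in metres.
    return 2 * math.pi * math.sqrt(a_m ** 3 / GM_MARS) / HOUR

t_phobos = orbital_period_hours(9.376e6)    # orbital radius quoted above
t_deimos = orbital_period_hours(23.46e6)
print(f"Phobos orbital period: {t_phobos:.2f} h")   # ~7.7 h
print(f"Deimos orbital period: {t_deimos:.2f} h")   # ~30.3 h, just above the ~24.6 h sol

# Interval between successive risings as seen from the rotating surface
# (sidereal rotation ~24.62 h, assumed): 1/T_apparent = |1/T_orbit - 1/T_rotation|.
t_rot = 24.62
rise_interval = 1 / abs(1 / t_phobos - 1 / t_rot)
print(f"Phobos rise-to-rise interval: {rise_interval:.1f} h")   # ~11 h, as noted in the next paragraph

# Earth-Mars synodic period from the two orbital periods: 1/S = 1/T_Earth - 1/T_Mars.
t_earth, t_mars = 365.256, 686.98   # days
synodic = 1 / (1 / t_earth - 1 / t_mars)
print(f"Earth-Mars synodic period: {synodic:.0f} days")   # ~780 days, matching the quoted 779.94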
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent of Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below the synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. Analysis of rocks that record tidal processes on the planet suggests that those tides may have been regulated by such a past moon. Human observations and exploration The history of observations of Mars is marked by the oppositions of Mars, when the planet is closest to Earth and hence most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague.
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Pyroeis (Πυρόεις). More commonly, though, the Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known to Chinese astronomers by no later than the fourth century BCE. In East Asian cultures, Mars is traditionally referred to as the "fire star" (火星), based on the Wuxing system. In 1609, Johannes Kepler published a ten-year study of the orbit of Mars, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum.: 433–437 In 1610, the first use of a telescope for astronomical observation, including of Mars, was made by the Italian astronomer Galileo Galilei. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun-Earth distance. This was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth.
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by these observations, the orientalist Percival Lowell founded an observatory equipped with 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894 and the following, less favorable, oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, such as Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers), in combination with the canals, led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft sent from Earth to visit Mars was Mars 1 of the Soviet Union, which was to fly by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft had visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically revised. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that flew past without making contact (Phobos 1, 1988; Mars Observer, 1993) and one (Phobos 2, 1989) that malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted until today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field, and surface minerals. Starting with these missions, a range of new, improved uncrewed spacecraft, including orbiters, landers, and rovers, has been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the history and dynamics of the Martian hydrosphere and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit, including 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, the ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Several further missions to Mars are planned. As of February 2024, debris from these types of missions amounted to over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery, and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were still published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor insulation against bombardment by the solar wind due to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters have both been claimed as possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by the impact of meteors, which on Earth can preserve signs of life, has also been found on the surface of impact craters on Mars; such glass could likewise have preserved signs of life on Mars, if life existed at those sites. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, the find allows no definitive determination of a biological or abiotic origin with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In 2021, China announced plans to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth and in situ resource utilization on Mars, until the Mars colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months (an interval estimated in the short calculation at the end of this article). The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave way to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. See also Notes References Further reading External links Solar System โ†’ Local Interstellar Cloud โ†’ Local Bubble โ†’ Gould Belt โ†’ Orion Arm โ†’ Milky Way โ†’ Milky Way subgroup โ†’ Local Group โ†’ Local Sheet โ†’ Local Volume โ†’ Virgo Supercluster โ†’ Laniakea Supercluster โ†’ Piscesโ€“Cetus Supercluster Complex โ†’ Local Hole โ†’ Observable universe โ†’ UniverseEach arrow (โ†’) may be read as "within" or "part of".
========================================
[SOURCE: https://en.wikipedia.org/wiki/Districts_of_Israel] | [TOKENS: 1024]
Contents Districts of Israel There are six main administrative districts of Israel, known in Hebrew as mekhozot (מְחוֹזוֹת; sing. מָחוֹז, makhoz) and in Arabic as mintaqah. There are also 15 subdistricts of Israel, known in Hebrew as nafot (נָפוֹת; sing. נָפָה, nafa) and in Arabic as qadaa. Each subdistrict is further divided into natural regions, which in turn are further divided into council-level divisions: cities, municipalities, or regional councils. The present division into districts was established in 1953 to replace the divisions inherited from the British Mandate. It has remained substantially the same ever since; a second proclamation of district boundaries, issued in 1957 and still in force as of 2023, only affirmed the existing boundaries. The figures in this article are based on numbers from the Israeli Central Bureau of Statistics and so include all places under Israeli civilian rule, including the Israeli-occupied territories where this is the case. Therefore, the Golan Subdistrict and its four natural regions are included in the number of subdistricts and natural regions even though it is not recognized by the United Nations or the international community as Israeli territory. Similarly, the population figure below for the Jerusalem District was calculated including East Jerusalem, whose annexation by Israel is likewise not recognized by the United Nations and the international community. The Judea and Samaria Area, however, is not included in the number of districts and subdistricts, as Israel has not applied its civilian jurisdiction in that part of the West Bank. Administration The districts have no elected institutions of any kind, although they do possess councils composed of representatives of central government ministries and local authorities for planning and building purposes. Their administration is undertaken by a District Commissioner appointed by the Minister of the Interior. Each district also has a District Court. Since the District Commissioners are considered part of the Ministry of the Interior's bureaucracy, they can only exercise functions falling within the purview of other ministries if the appropriate Minister authorizes them. This authorization is rarely granted, as other government ministries and institutions (for example, the Ministry of Health and the Police) establish their own divergent systems of districts. Jerusalem District Jerusalem District (Hebrew: מְחוֹז יְרוּשָׁלַיִם, Mehoz Yerushalayim) Natural regions: Northern District Northern District (Hebrew: מְחוֹז הַצָּפוֹן, Mehoz HaTzafon) Subdistricts and natural regions: Haifa District Haifa District (Hebrew: מְחוֹז חֵיפָה, Mehoz Heifa) Subdistricts and natural regions: Central District Central District (מְחוֹז הַמֶּרְכָּז, Mehoz HaMerkaz) Subdistricts and natural regions: Tel Aviv District Tel Aviv District (Hebrew: מְחוֹז תֵּל־אָבִיב, Mehoz Tel Aviv) Natural regions: Southern District Southern District (Hebrew: מְחוֹז הַדָּרוֹם, Mehoz HaDarom) Subdistricts and natural regions: Formerly, the Hof Aza Regional Council, with a population of approximately 10,000 Israelis, was part of this district, but the Israeli communities that constituted it were evacuated when the disengagement plan was implemented in the Gaza Strip.
Since the withdrawal, the Coordination and Liaison Administration operates there.[citation needed] Judea and Samaria Area Judea and Samaria Area (Hebrew: אֵזוֹר יְהוּדָה וְשׁוֹמְרוֹן, Ezor Yehuda VeShomron) The name Judea and Samaria for this geographical area is based on terminology from the Hebrew Bible and other sources relating to ancient Israel and Judah/Judea. The territory has been under Israeli control since the 1967 Six-Day War but has not been annexed by Israel, pending negotiations regarding its status. It is part of historic Israel, which leads to politically contentious issues. However, it is not recognized as part of the State of Israel by the United Nations. There are no subdistricts or administratively declared "natural regions" in the Judea and Samaria Area. See also Notes References External links
========================================
[SOURCE: https://techcrunch.com/author/jagmeet-singh/] | [TOKENS: 188]
Jagmeet Singh Reporter, TechCrunch Jagmeet covers startups, tech policy-related updates, and all other major tech-centric developments from India for TechCrunch. He previously worked as a principal correspondent at NDTV. You can contact or verify outreach from Jagmeet by emailing mail@journalistjagmeet.com.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Orionids] | [TOKENS: 551]
Contents Orionids The Orionids meteor shower, often shortened to the Orionids, is one of two meteor showers associated with Halley's Comet (the other being the Eta Aquariids). The Orionids are so named because the point they appear to come from (the radiant) lies in the constellation of Orion. The shower occurs annually, lasting approximately one week in late October. In some years, meteors may occur at rates of 50–70 per hour. Orionid outbursts occurred in 585, 930, 1436, 1439, 1465, and 1623. The Orionids occur at the ascending node of Halley's Comet. The ascending node reached its closest distance to Earth around 800 BCE. Currently, Earth approaches Halley's orbit at a distance of 0.154 AU (23.0 million km; 14.3 million mi; 60 LD) during the Orionids. The next outburst might be in 2070 as a result of particles trapped in a 2:13 mean-motion resonance with Jupiter. History Meteor showers were connected to comets in the 1800s. E. C. Herrick observed in 1839 and 1840 that activity was present in the October night skies. Alexander Herschel produced the first documented record that gave accurate forecasts for the next meteor shower. The Orionids meteor shower is produced by Halley's Comet, which was named after astronomer Edmond Halley and last passed through the inner Solar System in 1986 on its 75–76 year orbit. When the comet passes through the Solar System, the Sun sublimates some of its ice, allowing rock particles to break away from the comet. These particles continue on the comet's trajectory and appear as meteors when they enter Earth's upper atmosphere. The meteor shower radiant is located in Orion about 10 degrees northeast of Betelgeuse. The Orionids normally peak around October 21–22 and are fast meteors that make atmospheric entry at about 66 km/s (150,000 mph); these unit equivalences are checked in the short conversion at the end of this article. Halley's Comet is also responsible for creating the Eta Aquariids, which occur each May as a result of Earth passing close to the descending node of Halley's Comet. An outburst with a zenithal hourly rate of over 100 occurred on 21 October 2006 as a result of Earth passing through the 1266 BCE, 1198 BCE, and 911 BCE meteoroid streams. In 2015, the meteor shower peaked on October 26. Some Orionid showers have had double peaks, as well as plateaus of activity lasting several days. Gallery See also References External links
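As a quick sanity check on two of the figures quoted above (the 0.154 AU approach distance and the roughly 66 km/s entry speed), the short sketch below converts them into the other units given in the article. The conversion constants are standard assumed values, not taken from the text.

# Unit-conversion check for the Orionid figures quoted above.
# Conversion constants are assumed standard values.
AU_KM = 149_597_870.7        # kilometres per astronomical unit
MILE_KM = 1.609344           # kilometres per statute mile
LUNAR_DISTANCE_KM = 384_400  # mean Earth-Moon distance in kilometres

dist_km = 0.154 * AU_KM
print(f"0.154 AU = {dist_km / 1e6:.1f} million km = "
      f"{dist_km / MILE_KM / 1e6:.1f} million mi = "
      f"{dist_km / LUNAR_DISTANCE_KM:.0f} LD")

speed_mph = 66 * 3600 / MILE_KM
print(f"66 km/s = about {speed_mph:,.0f} mph")

The outputs, about 23.0 million km, 14.3 million mi, 60 lunar distances, and roughly 148,000 mph, are consistent with the rounded values quoted in the article.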
========================================
[SOURCE: https://en.wikipedia.org/w/index.php?title=Special:CiteThisPage&page=Mars&id=1337894205&wpFormIdentifier=titleform] | [TOKENS: 551]
Contents Cite This Page IMPORTANT NOTE: Most educators and professionals do not consider it appropriate to use tertiary sources such as encyclopedias as a sole source for any informationโ€”citing an encyclopedia as an important reference in footnotes or bibliographies may result in censure or a failing grade. Wikipedia articles should be used for background information, as a reference for correct terminology and search terms, and as a starting point for further research. As with any community-built reference, there is a possibility for error in Wikipedia's contentโ€”please check your facts against multiple sources and read our disclaimers for more information. Bibliographic details for "Mars" Please remember to check your manual of style, standards guide or instructor's guidelines for the exact syntax to suit your needs. For more detailed advice, see Citing Wikipedia. Citation styles for "Mars" Wikipedia contributors. (2026, February 12). Mars. In Wikipedia, The Free Encyclopedia. Retrieved 10:11, February 21, 2026, from https://en.wikipedia.org/w/index.php?title=Mars&oldid=1337894205 Wikipedia contributors. "Mars." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 12 Feb. 2026. Web. 21 Feb. 2026. Wikipedia contributors, 'Mars', Wikipedia, The Free Encyclopedia, 12 February 2026, 01:16 UTC, <https://en.wikipedia.org/w/index.php?title=Mars&oldid=1337894205> [accessed 21 February 2026] Wikipedia contributors, "Mars," Wikipedia, The Free Encyclopedia, https://en.wikipedia.org/w/index.php?title=Mars&oldid=1337894205 (accessed February 21, 2026). Wikipedia contributors. Mars [Internet]. Wikipedia, The Free Encyclopedia; 2026 Feb 12, 01:16 UTC [cited 2026 Feb 21]. Available from: https://en.wikipedia.org/w/index.php?title=Mars&oldid=1337894205. Mars, https://en.wikipedia.org/w/index.php?title=Mars&oldid=1337894205 (last visited Feb. 21, 2026). Wikipedia contributors. Mars. Wikipedia, The Free Encyclopedia. February 12, 2026, 01:16 UTC. Available at: https://en.wikipedia.org/w/index.php?title=Mars&oldid=1337894205. Accessed February 21, 2026. When using the LaTeX package url (\usepackage{url} somewhere in the preamble), which tends to give much more nicely formatted web addresses, the following may be preferred:
========================================
[SOURCE: https://www.fast.ai/posts/2016-10-08-teaching-philosophy.html] | [TOKENS: 1897]
Providing a Good Education in Deep Learning Rachel Thomas October 8, 2016 On this page Paul Lockhart, a Columbia math PhD, former Brown professor, and K-12 math teacher, writes in the influential essay A Mathematician's Lament of a nightmare world where children are not allowed to listen to or play music until they have spent over a decade mastering music notation and theory, spending classes transposing sheet music into a different key. In art class, students study colors and applicators, but aren't allowed to actually paint until college. Sound absurd? This is how math is taught: we require students to spend years doing rote memorization, and learning dry, disconnected "fundamentals" that we claim will pay off later, long after most of them quit the subject. Unfortunately, this is where several of the few resources on deep learning begin: asking learners to follow along with the definition of the Hessian and theorems for the Taylor approximation of your loss function, without ever giving examples of actual working code. I'm not knocking calculus. I love calculus and have even taught it at the college level, but I don't think it's a good or helpful introduction to deep learning. So many students are turned off to math because of the dull way it's taught, with tedious, repetitive exercises, and a curriculum that saves so many fun parts (such as graph theory, counting and permutations, group theory) for so late that everyone except math majors has abandoned the subject. And the gate-keepers of deep learning are doing something similar whenever they ask that you can derive the multivariate chain rule or give the theoretical underpinnings of KL divergence before they'll teach you how to use a neural net to handle your own projects. We'll be leveraging the best available research on teaching methods to try to fix these problems with technical teaching, including: In the end, what we're talking about is good education. That's what we most care about. Here are more of our thoughts on good education: Good education starts with "the whole game" Just as kids have a sense of what baseball is before they start batting practice, we want you to have a sense of the big picture of deep learning well before you study calculus and the chain rule. We'll move from the big picture down to the details (which is the opposite direction from most education, which tries to teach all the individual elements before putting them together). For a good example of how this works, watch Jeremy's talk on recurrent neural networks: he starts with a 3-line RNN using a highly featured library, then removes the library and builds his own architecture using a GPU framework, and then removes the framework and builds everything from scratch in gritty detail using just basic Python. In a book that inspires us, David Perkins, a Harvard education professor with a PhD from MIT in Artificial Intelligence, calls the approach of not doing anything complicated until you've taught all the individual elements first a disease: "elementitis". It's like batting practice without knowing what the game of baseball is. The elements can seem boring or pointless when you don't know how they fit in with the big picture. And it's hard to stay motivated when you're not able to work on problems you care about, or have a sense of how the technical details fit into the whole.
Perhaps this is why studies have shown that the intrinsic motivation of school children steadily declines from 3rd grade to 8th grade (the only range of years studied). Good education equips you to work on the questions you care about Whether you're excited to identify if plants are diseased from pictures of their leaves, auto-generate knitting patterns, diagnose TB from x-rays, or determine when a raccoon is using your cat door, we will get you using deep learning on your own problems (via pre-trained models from others) as quickly as possible, and then will progressively drill into more details. You'll learn how to use deep learning to solve your own problems at state-of-the-art accuracy within the first 30 minutes of the first lesson! There is a pernicious myth out there that you need to have computing resources and datasets the size of those at Google to be able to do deep learning, and it's not true. Good education is not overly complicated. Have you watched Jeremy implement modern deep learning optimization methods in Excel? If not, go watch it (starts at 4:50 min in the video) and come back. This is often considered a complex topic, yet after weeks of work Jeremy figured out how to make it so easy it seems obvious. If you truly understand something, you can explain it in an accessible way, and maybe even implement it in Excel! Complicated jargon and obtuse technical definitions arise out of laziness, or when the speaker is unsure of the meat of what they're saying and hides behind their peripheral knowledge. Good education is inclusive. It doesn't put up any unnecessary barriers. It doesn't make you feel bad if you didn't start coding at age 12, if you have a non-traditional background, if you can't afford a mac, if you're working on a non-traditional problem, or if you didn't go to an elite college. We want our course to be as accessible as possible. I care deeply about inclusion, and spent months researching and writing each of my widely read articles with practical tips on how we can increase diversity in tech, as well as spending a year and a half teaching full-stack software development to women full-time. Currently deep learning is even more homogenous than tech in general, which is scary for such a powerful and impactful field. We are going to change this. Good education motivates the study of underlying technical concepts. Having a big picture understanding gives you more of a framework to place the fundamentals in. Seeing what deep learning is capable of and how you can use it is the best motivation for the more dry or tedious parts. "Playing baseball is more interesting than batting practice, playing pieces of music more interesting than practicing scales, and engaging in some junior version of historical or mathematical inquiry more interesting than memorizing dates or doing sums," writes Perkins. Building a working model for a problem that interests you is more interesting than writing a proof (for most people!). Good education encourages you to make mistakes. In the most viewed TED talk of all time, education expert Sir Ken Robinson argues that by stigmatizing mistakes, our school systems destroy the children's innate creative capacity. "If you're not prepared to be wrong, you'll never end up with anything original," says Robinson. Teaching deep learning with a code-heavy approach in interactive Jupyter notebooks is a great setup for trying lots of things, making mistakes, and easily changing what you're doing.
Good education leverages existing resources There is no need to reinvent teaching materials where good ones already exist. If you need to brush up on matrix multiplication, we'll refer you to Khan Academy. If you're fascinated by X and want to go deeper, we'll recommend you read Y. Our goal is to help you achieve your deep learning goals, not to be the sole resource in getting you there. Good education encourages creativity Lockhart argues that it would be better not to teach math at all than to teach such a mangled form of it that alienates most of the population from the beauty of math. He describes math as a "rich and fascinating adventure of the imagination" and defines it as "the art of explanation", although it is rarely taught that way. The biggest wins for deep learning will come when you apply it to the outside domains you're an expert in and the problems you're passionate about. This will require you to be creative. Good education teaches you to ask questions, not just to answer them Even those who seem to thrive under traditional education methods are still poorly served by them. I received a mostly traditional approach to education (although I had a few exceptional teachers at all stages, particularly at Swarthmore). I excelled at school, aced exams, and generally enjoyed learning. I loved math, going on to earn a math PhD at Duke University. While I was great at problem sets and exams, this traditional approach did me a huge disservice when it came to preparing me for doctoral research and my professional career. I was no longer being given well-formulated, appropriately scoped problems by teachers. I could no longer learn every incremental building block before setting to work on a task. As Perkins writes about his struggles with finding a good dissertation topic, I too had learned how to solve problems I was given, but not how to find and scope interesting problems on my own. I now see my previous academic successes as a weakness I've had to overcome professionally. When I began studying deep learning, I enjoyed reading the math theorems and proofs, but this didn't actually help me build deep learning models. Good education is evidence-based We love data and the scientific method, and we are interested in techniques that have been supported by research. Spaced repetition learning is one such evidence-backed technique, where learners revisit a topic periodically, just before they would forget it. Jeremy used this technique to obtain impressive results in teaching himself Chinese. The whole game method of learning dovetails nicely with spaced repetition learning in that we will revisit topics, going into more and more low-level details each time, but always returning to the big picture.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Central_District_(Israel)] | [TOKENS: 254]
Contents Central District (Israel) The Central District (Hebrew: מחוז המרכז, Meḥoz haMerkaz; Arabic: المنطقة الوسطى, Minṭaqat al-Wasaṭ) of Israel is one of six administrative districts, including most of the Sharon region. It is further divided into four sub-districts: Petah Tikva, Ramla, Sharon, and Rehovot. The district's largest city is Rishon LeZion. The district's population as of 2017 was 2,115,800. According to the Israeli Central Bureau of Statistics, 88% of the population is Jewish, 8.2% is Arab, and 4% are "non-classified", being mostly former Soviet Union immigrants of partial or nominal Jewish ethnic heritage or household members of Jews. Administrative local authorities Economy El Al Airlines maintains its corporate headquarters on the grounds of Ben Gurion Airport, in the Central District. See also References 31°56′N 34°52′E / 31.933°N 34.867°E / 31.933; 34.867
========================================
[SOURCE: https://techcrunch.com/video/the-creator-economys-ad-revenue-problem-and-indias-ai-ambitions/] | [TOKENS: 697]
The creator economy's ad revenue problem and India's AI ambitions The creator economy is evolving fast, and ad revenue alone isn't cutting it anymore. YouTubers are launching product lines, acquiring startups, and building actual business empires. In fact, MrBeast's company bought fintech startup Step, and his chocolate business is outearning his media arm. This isn't just one creator's strategy. For many, it's the new playbook. On this episode of TechCrunch's Equity podcast, hosts Kirsten Korosec, Anthony Ha, and Rebecca Bellan unpack how creators are diversifying beyond ads, whether their model can scale beyond the top 1%, everything happening at India's AI Impact Summit, and more of the week's headlines. Topics Audio Producer Theresa Loconsolo is an audio producer at TechCrunch focusing on Equity, the network's flagship podcast. Before joining TechCrunch in 2022, she was one of 2 producers at a four-station conglomerate where she wrote, recorded, voiced and edited content, and engineered live performances and interviews from guests like lovelytheband. Theresa is based in New Jersey and holds a bachelor's degree in Communication from Monmouth University. You can contact or verify outreach from Theresa by emailing theresa.loconsolo@techcrunch.com.
========================================
[SOURCE: https://en.wikipedia.org/wiki/List_of_programming_languages#G] | [TOKENS: 115]
List of programming languages This is an index to notable programming languages, in current or historical use. Dialects of BASIC (which have their own page), esoteric programming languages, and markup languages are not included. A programming language does not need to be imperative or Turing-complete, but must be executable and so does not include markup languages such as HTML or XML, but does include domain-specific languages such as SQL and its dialects. See also
========================================
[SOURCE: https://en.wikipedia.org/wiki/Gurpurb] | [TOKENS: 901]
Contents Gurpurb Gurpurab (Punjabi: ਗੁਰਪੁਰਬ (Gurmukhi)), alternatively spelt as Gurpurb or Gurpurub, in Sikh tradition is a celebration of an anniversary of a Guru's birth marked by the holding of a festival. Gurpurab of Guru Nanak The birthday of Guru Nanak, the founder of the Sikh religion, comes in the month of November, but the date varies from year to year according to the lunar Indian calendar. The birthday celebrations last three days. Generally, two days before the birthday, Akhand Path is performed in the Gurdwaras. One day before the birthday, a procession is organised which is led by the Panj Piare and the Palki (palanquin) of the Guru Granth Sahib and followed by teams of singers singing hymns, brass bands playing different tunes, 'Gatka' (martial art) teams showing their swordsmanship, and processionists singing in chorus. The procession passes through the main roads and streets of the town, which are covered with buntings and decorated gates, and the leaders inform the people of the message of Guru Nanak. On the anniversary day, the programme begins early in the morning at about 4 or 5 am with the singing of Asa di Var (morning hymns) and hymns from the Sikh scriptures, followed by Katha (exposition of the scripture), lectures, and recitation of poems in praise of the Guru. The celebrations go on till about 2 pm. After Ardas and distribution of Karah Parshad, a special Langar is served on the day of the Gurpurab. Some Gurdwaras also hold night prayer sessions. This begins around sunset when Rehras (evening prayer) is recited. This is followed by Kirtan till late in the night. Sometimes a Kavi-darbar (poetic symposium) is also held to enable the poets to pay their tributes to the Guru in their own verses. At about 1:20 am, the actual time of the birth, the congregation starts singing Gurbani. The function ends at about 2 am. Sikhs who cannot join the celebrations for some reason, or who live in places where there are no Gurdwaras, hold the ceremony in their own homes by performing Kirtan, Path, Ardas, Karah Parsad, and Langar, and celebrate it with great pomp and joy. Gurpurabs for other Gurus The birthday of Guru Gobind Singh, the tenth Guru, generally falls in December or January. The celebrations are similar to those of Guru Nanak's birthday, namely Akhand Path, procession and Kirtan, Katha, and Langar. The martyrdom anniversary of Guru Arjan, the fifth Guru, falls in May or June, the hottest months in India. He was tortured to death under the orders of the Mughal emperor Jahangir at Lahore on 25 May 1606. Celebrations consist of Kirtan, Katha, lectures, Karah Parsad, and Langar in the Gurdwara. Because of the summer heat, a chilled sweetened drink made from milk, sugar, essence, and water, called chhabeel, is freely distributed in Gurdwaras and in neighbourhoods to everybody irrespective of their religious beliefs. Guru Tegh Bahadur, the ninth Guru, was arrested under orders of the Mughal emperor Aurangzeb. As he refused to change his religion and accept Islam, he was beheaded on 11 November 1675 at Chandni Chowk, Delhi. Usually one-day celebrations of his martyrdom are organised in the Gurdwaras. Three days before his death, on 3 October 1708, Guru Gobind Singh conferred the guruship of the Sikhs on the Guru Granth Sahib. On this day, special one-day celebrations are organised with Kirtan, Katha, lectures, Ardas, Karah Parsad, and Langar. Sikhs rededicate themselves to following the teachings of the Gurus contained in the scripture.
In 2008, the tercentenary of this Gurpurab, popularized as 300 Saal Guru de Naal, was celebrated by Sikhs worldwide, with the main celebrations held at Hazur Sahib, Nanded. See also References Sources
========================================
[SOURCE: https://en.wikipedia.org/wiki/Close_encounter] | [TOKENS: 769]
Contents Close encounter In ufology, a close encounter is an event in which a person witnesses an unidentified flying object (UFO) at relatively close range, where the possibility of mis-identification is presumably greatly reduced. This terminology and the system of classification behind it were first suggested in astronomer and UFO researcher J. Allen Hynek's book The UFO Experience: A Scientific Inquiry (1972). Categories beyond Hynek's original three have been added by others but have not gained universal acceptance, mainly because they lack the scientific rigor that Hynek aimed to bring to ufology. Distant sightings more than 150 meters (500 ft) from the witness are classified as daylight discs, nocturnal lights, or radar/visual reports. Sightings within about 150 meters (500 ft) are sub-classified as various types of close encounters. Hynek and others argued that a claimed close encounter must occur within about 150 meters (500 ft) to greatly reduce or eliminate the possibility of misidentifying conventional aircraft or other known phenomena. Hynek's scale became well known after being referenced in the classic sci-fi film Close Encounters of the Third Kind (1977), which is named after the third level of the scale. Promotional posters for the film featured the three levels of the scale, and Hynek himself makes a cameo appearance near the end of the film.[citation needed] Hynek's scale Hynek devised a six-fold classification for UFO sightings. The six levels are arranged according to increasing proximity. Close encounters of the third kind (CE3) may imply first contact. UFO researcher Ted Bloecher proposed six sub-types for the close encounters of the third kind in Hynek's scale. Extensions of Hynek's scale After Hynek's death in 1986, his colleague Jacques Vallée extended Hynek's classification system by two steps, specifically close encounters of the fourth and fifth kinds, as published in Vallée's book Confrontations: A Scientist's Search for Alien Contact (1990). The Mutual UFO Network (MUFON) immediately adopted the extensions to the classification scale and has used them ever since.[citation needed] A close encounter of the fourth kind is a UFO event in which a human is abducted by a UFO or its occupants. This type was not included in Hynek's original close encounters scale. Hynek's former associate Jacques Vallée argued in the Journal of Scientific Exploration that the fourth kind should refer to "cases when witnesses experienced a transformation of their sense of reality", to also include non-abduction cases where absurd, hallucinatory or dreamlike events are associated with UFO encounters.[unreliable source?] The film The Fourth Kind (2009) makes reference to this category.[citation needed] As stated in Vallée's Confrontations (1990), a close encounter of the fifth kind is where an alien abductee receives some manner of physical effect from their close encounter, typically either injury or healing. Several years after Vallée's classification updates, some preferred that a close encounter of the fifth kind instead refer to human-initiated contact with extraterrestrial life forms or advanced interstellar civilizations, claiming direct communication between aliens and humans. This alternate interpretation of what a close encounter of the fifth kind (CE5) should represent has been attributed to Steven M. Greer.
While technically not an extension of the Vallée scale that measures result-oriented data, this replacement of the originally coined CE5 classification has become popular in marketing human-initiated contact events.[citation needed] In a CE5 event, individuals or groups use specific protocols to establish communication or interaction with extraterrestrial beings. These protocols primarily involve the use of contact meditation and the use of sounds or signals. A close encounter of the fifth kind is also referred to as a human-initiated close encounter. See also References Bibliography External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Krishna_Janmashtami] | [TOKENS: 3759]
Contents Krishna Janmashtami Furthermore, when specifying the masa, one of two traditions is applicable, viz. amānta / pūrṇimānta. If a festival falls in the waning phase of the moon, these two traditions identify the same lunar day as falling in two different (but successive) masa. Krishna Janmashtami (Sanskrit: कृष्णजन्माष्टमी, romanized: Kṛṣṇajanmāṣṭamī), also known simply as Krishnashtami, Janmashtami, or Gokulashtami, is an annual Hindu festival that celebrates the birth of Krishna, the eighth avatar of Vishnu. In prominent Hindu scriptures, such as the Mahabharata, the Bhagavata Purana, the Gita Govinda, and others, Krishna has been identified as the supreme God and the source of all avatars. Krishna's birth is celebrated and observed on the eighth day (Ashtami) of the dark fortnight (Krishna Paksha) in Shravana Masa (according to the amanta tradition) or Bhadrapada Masa (according to the purnimanta tradition). This overlaps with August or September of the Gregorian calendar. It is an important festival, particularly in the Vaishnavism tradition of Hinduism. The celebratory customs associated with Janmashtami include a celebration festival, reading and recitation of religious texts, dance and enactments of the life of Krishna according to the Bhagavata Purana, devotional singing till midnight (the time of Krishna's birth), and fasting (upavasa), amongst other things. Some break their day-long fast at midnight with a feast. Krishna Janmashtami is widely celebrated across India and abroad. Etymology The meaning of the Sanskrit word Janmashtami can be understood by splitting it into the two words "Janma" and "Ashtami". The word "Janma" means birth and the word "Ashtami" means eighth; thus, Krishna Janmashtami is the celebration of Krishna's birth on the eighth day of the dark fortnight (Krishna Paksha) in the month of Bhadrapada, also called Shravan, which falls in August–September of the Gregorian calendar. History Information about Krishna's life is noted in the Mahabharata, the Puranas, and the Bhagavata Purana. Krishna is the eighth son of Devaki (mother) and Vasudeva (father). Around the time of his birth, persecution was rampant, freedoms were being denied, and King Kamsa's life had been threatened by a prophecy. Krishna was born within a prison in Mathura, India, where his parents were confined by his uncle, Kamsa. During Devaki's wedding, Kamsa was warned by a celestial voice that Devaki's eighth son would be the cause of his death. To thwart this prophecy, Kamsa imprisoned his sister Devaki and her husband, killing the first six of their newborns shortly after birth. The guards responsible for keeping watch over Devaki's cell fell asleep and the cell doors were miraculously opened at the time of Krishna's birth. These events allowed Vasudeva to send Krishna across the Yamuna River to his foster parents, Yashoda (mother) and Nanda (father). This legend is celebrated on Janmashtami by people keeping fasts, singing devotional songs of love for Krishna, and keeping a vigil into the night. Throughout Krishna's childhood and young adult life, Balarama, Krishna's half-brother, was a "constant companion" for him. Balarama joined Krishna in the major events that are celebrated in Vraja, Brindavan, Dwarka, and Mathura, such as stealing butter, chasing calves, playing in the cow pens, and participating in wrestling matches.
Observance and celebrations Krishna Janmashtami holds significant importance for Krishnaites as well as Hindus around the world, and it is celebrated in diverse forms depending on regional and cultural customs. Hindus celebrate Janmashtami by fasting, singing, praying together, preparing and sharing special food, keeping night vigils, and visiting Krishna or Vishnu temples. The towns of Mathura and Vrindavan are visited by pilgrims. Some mandirs organize recitation of the Bhagavad Gita in the days leading up to Janmashtami. Many northern Indian communities organize dance-drama events called Rasa Lila or Krishna Lila. The tradition of Rasa Lila is particularly popular in the Mathura region, in northeastern states of India such as Manipur and Assam, and in parts of Rajasthan and Gujarat. It is acted out by numerous teams of amateur artists, cheered on by their local communities, and these drama-dance plays begin a few days before each Janmashtami. People decorate their houses with flowers and lights. On this day, people chant "Hare Krishna Hare Krishna, Krishna Krishna Hare Hare". The Janmashtami celebration is followed by Dahi Handi, which is celebrated the next day. After Krishna's midnight-hour birth, images of baby Krishna are bathed and clothed, then placed in a cradle. The devotees then break their fast by sharing food and sweets. Women draw tiny footprints outside their house doors and kitchens, walking towards their house, a symbolism for Krishna's journey into their homes. Janmashtami is the largest festival in the Braj region of north India, in cities such as Mathura, where Krishna was born according to Krishnaism, and in Vrindavan, where he grew up. Vaishnava communities in these and other cities of Uttar Pradesh, as well as in Rajasthan, Delhi, Haryana, Uttarakhand, and the Himalayan north, celebrate Janmashtami. Krishna temples are decorated and lit up, and they attract numerous visitors on the day, while Krishna devotees hold bhakti events and keep night vigils. The festival typically falls as the monsoons in north India have begun retreating, when fields are laden with crops and rural communities have time to play. In the northern states, Janmashtami is celebrated with the Raslila tradition, which literally means "play (Lila) of delight, essence (Rasa)". This is expressed as solo or group dance and drama events at Janmashtami, wherein Krishna-related compositions are sung, music accompanies the performance, and actors and audience share and celebrate the performance by clapping hands to mark the beat. The childhood pranks of Krishna and the love affairs of Radha and Krishna are particularly popular. According to Christian Roy and other scholars, these Radha-Krishna love stories are Hindu symbolism for the longing and love of the human soul for the divine or Brahman. Poetry describing the feats of Krishna became popular in the fifteenth and sixteenth centuries within the Braj region, written in a vernacular called Braj Bhasha (a dialect related to present-day Hindi). The Braj Bhasha poems of Surdas (collectively known as the Sursagar) are popularly recalled, some of which describe the birth and childhood of Krishna. In the Jammu region, Janmashtami is popularly known by the name "Thogre/Thakure da Vrat" (meaning a vrat dedicated to Thakur, i.e. Shri Krishna). Observing a Phalaahari Vrat for one complete day is the major ritual of the festival. The day is marked by numerous Phalaahari Dhaams or Bhandaras in the streets of the Jammu region's prominent towns.
Janmashtami marks the beginning of the kite-flying season in the Jammu region, whereby locals gather and fly decorated kites from their rooftops. Girls and women, meanwhile, decorate their palms by applying Teera, a dye from an indigenous plant. Another ceremony associated with Janmashtami in the Jammu region is "Deyaa Parna", in which Dogras donate cereal grains in the name of their ancestors and Kuldevtas. A holy tree called jand is worshipped by women on this day. Special rotis called draupads are prepared and offered to cows and deities. Janmashtami is celebrated as Zaram Satam (Janam Saptami) by the native Kashmiri Pandits of Kashmir. The festival is associated with observing a vrat for the whole day and visiting the Thokur Kuth (Krishna Mandir) at midnight. At night, puja is performed in the temples, which includes performing abhishek (a ritual bath) of the murti of Krishna and singing bhajans (devotional songs). Food items appropriate for fasting, such as gaer or singhada lapsi (made from water chestnut flour), fruits, and dried fruits are consumed on this day. Janmashtami (popularly known as "Dahi Handi" in Maharashtra) is celebrated in cities such as Mumbai, Latur, Nagpur, and Pune. It is a celebration of joy and a facilitator of social oneness. Dahi Handi is an enactment of how Krishna, during his childhood, would steal butter. This story is the theme of numerous reliefs on temples across India, as well as of literature and dance-drama repertoire, symbolizing the joyful innocence of children and the idea that love and life's play are the manifestation of god. It is common practice for youth groups to celebrate the festival by participating in Dahi Handi, which involves hanging a clay pot, filled with buttermilk, at a significant height. Once it is hung, several youth groups compete to reach the pot by creating a human pyramid and breaking it open. The spilled contents are considered prasada (a celebratory offering). It is a public spectacle, cheered and welcomed as a community event. In Dwarka, Gujarat, where Krishna is believed to have established his kingdom, people celebrate the festival with a tradition similar to Dahi Handi, called Makhan Handi (a pot with freshly churned butter). Others perform folk dances such as garba and raas, sing bhajans, and visit Krishna temples such as the Dwarkadhish Temple or Nathdwara. In the Kutch district region, farmers decorate their bullock carts and take out Krishna processions, with group singing and dancing. The day is of special importance to followers of the Pushtimarg and the Swaminarayan movement. The works of the Gujarati poets Narsinh Mehta (1414–1480 CE) and Dayaram (1777–1852) and the Rajasthani poet Mirabai (c. 1500) are popularly revisited and sung during Janmashtami. Their works are categorized as part of the bhakti tradition, or devotional poetry dedicated to Krishna. In Kerala, people celebrate in September, according to the Malayalam calendar. In Tamil Nadu, people decorate the floor with kolams (decorative patterns drawn with rice batter). Geetha Govindam and other such devotional songs are sung in praise of Krishna. Little footprints, representing Krishna as an infant, are drawn from the threshold of the house to the pooja (prayer) room, depicting the arrival of Krishna into the house. Reciting from the Bhagavad Gita is also a popular practice. The offerings made to Krishna include fruits, betel, and butter. Milk-based items, such as sweet seedai and verkadalai urundai, are prepared.
The festival is celebrated in the evening, as Krishna was born at midnight. Most people observe a strict fast on this day. In Andhra Pradesh and Telangana, recitation of shlokas and devotional songs are characteristic of this festival. Another unique feature of this festival is that young boys are dressed up as Krishna and visit neighbours and friends. The people of Andhra Pradesh observe a fast too. Various kinds of sweets such as chakodi, murukku, and seedai are offered to Krishna on this day. Joyful chanting of Krishna's name takes place in quite a few temples of the state. The number of temples dedicated to Krishna is small, the reason being that people have taken to worshipping him through paintings rather than idols.[citation needed] Popular south Indian temples dedicated to Krishna include the Rajagopalaswamy Temple in Mannargudi in the Tiruvarur district, the Pandavadhoothar temple in Kanchipuram, the Sri Krishna temple at Udupi, and the Krishna temple at Guruvayur, all dedicated to the memory of Vishnu's incarnation as Krishna. It is believed that the murti (idol) of Krishna installed in Guruvayur originally came from his kingdom in Dwarka, which is believed to be submerged in the sea. Janmashtami is widely celebrated by Krishnaite and Hindu Vaishnava communities of eastern and northeastern India. The widespread tradition of celebrating Krishna in these regions is credited to the efforts and teachings of the 15th- and 16th-century saints Sankardeva and Chaitanya Mahaprabhu. Sankardeva introduced the musical composition Borgeet and the dance-drama styles Ankia Naat and Sattriya, which are now popular in West Bengal and Assam. In Manipur state, a traditional dance, Raas Leela, inspired by the theme of love and devotion between Krishna, Radha, and the gopis, is enacted using the Manipuri dance style. The contextual roots of these dance-drama arts are found in the ancient text Natya Shastra, but with influences from the cultural fusion between India and southeast Asia. On Janmashtami, parents dress up their children as Krishna or the gopis. Temples and community centers are decorated with regional flowers and leaves, while groups recite or listen to the tenth chapter of the Bhagavata Purana, and the Bhagavad Gita. Janmashtami is a major festival celebrated with fasts, vigils, recitation of scriptures, and Krishna prayers in Manipur. Dancers performing Raslila are a notable annual tradition during Janmashtami in Mathura and Vrindavan. Children play the Likol Sannaba game in the Meetei Krishnaite community. The Shree Govindajee Temple and the ISKCON temples particularly mark the Janmashtami festival. Janmashtami is celebrated in Assam at homes and in community centers called Namghars (Assamese: নামঘৰ). According to the tradition, the devotees sing the Nam, perform pujas, and share food and prasada. In the eastern state of Odisha, specifically the region around Puri, and in Nabadwip, West Bengal, the festival is also referred to as Sri Krishna Jayanti or simply Sri Jayanti. People celebrate Janmashtami by fasting and worshipping until midnight. The Bhagavata Purana is recited from the 10th chapter, a section dedicated to the life of Krishna. The next day is called "Nanda Ucchhaba", the joyous celebration of Krishna's foster parents Nanda and Yashoda. Devotees keep fasting during the entire day of Janmashtami. They bring water from the Ganga to bathe Radha Madhaba during the abhisheka ceremony. A grand abhisheka is performed at midnight for the small Radha Madhaba.
In Odisha, the Jagannath Temple in Puri, best known for its grand Ratha Yatra celebrations, performs a Ratha Yatra during Janmashtami. Outside India About eighty percent of the population of Nepal identify themselves as Hindus and celebrate Krishna Janmashtami. They observe Janmashtami by fasting until midnight. It is a national holiday in Nepal. Devotees recite the Bhagavad Gita and sing religious songs called bhajans and kirtans. The temples of Krishna are decorated, and shops, posters and houses carry Krishna motifs. Janmashtami is a national holiday in Bangladesh. On Janmashtami, a procession starts from Dhakeshwari Temple in Dhaka, the National Temple of Bangladesh, and then proceeds through the streets of Old Dhaka. The procession dates back to 1902, was stopped in 1948, and was resumed in 1989. At least a quarter of the population in Fiji practices Hinduism, and this holiday has been celebrated in Fiji since the first Indian indentured laborers landed there. Janmashtami in Fiji is known as "Krishna Ashtami". Most Hindus in Fiji have ancestors that originated from Uttar Pradesh, Bihar, and Tamil Nadu, making this an especially important festival for them. Fiji's Janmashtami celebrations are unique in that they last for eight days, leading up to the eighth day, the day Krishna was born. During these eight days, Hindus gather at homes and at temples with their 'mandalis', or devotional groups, in the evenings and at night, recite the Bhagavata Purana, sing devotional songs for Krishna, and distribute prasadam. Janmashtami is celebrated by Pakistani Hindus in the Shri Swaminarayan Mandir in Karachi with the singing of bhajans and the delivering of sermons on Krishna. It is an optional holiday in Pakistan. Prior to the Partition of India, Dera Ghazi Khan was the center of a Janmashtami fair at the thallā of Kevalarāma. This fair is now recreated in Inder Puri, New Delhi. In Arizona, United States, Governor Janet Napolitano was the first American leader to send a greeting message on Janmashtami, acknowledging ISKCON. The festival is also celebrated widely by Krishnaites in the Caribbean countries of Guyana, Trinidad and Tobago, Jamaica and Suriname. Many Hindus in these countries are descendants of indentured immigrants from Tamil Nadu, Uttar Pradesh, Bihar, Bengal, and Orissa. See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Clairvoyance] | [TOKENS: 2693]
Contents Clairvoyance Clairvoyance (/klɛərˈvɔɪ.əns/; from French clair 'clear' and voyance 'vision') is the claimed ability to acquire information that would be considered impossible to get through scientifically proven sensations, thus classified as extrasensory perception, or "sixth sense". Any person who is claimed to have such ability is said to be a clairvoyant (/klɛərˈvɔɪ.ənt/) ('one who sees clearly'). Claims for the existence of paranormal and psychic abilities such as clairvoyance have not been supported by scientific evidence. Parapsychology explores this possibility, but the existence of the paranormal is not accepted by the scientific community. The scientific community widely considers parapsychology, including the study of clairvoyance, a pseudoscience. Usage Pertaining to the ability of clear-sightedness, clairvoyance refers to the paranormal ability to see persons and events that are distant in time or space. It can be divided into roughly three classes: precognition, the ability to perceive or predict future events; retrocognition, the ability to see past events; and remote viewing, the perception of contemporary events happening outside the range of normal perception. The English connotations of seeing through time are not always present in concepts of clairvoyance in other languages (p. 144). The French usage, for example, has more emphasis on spatial connotations and arises from the Latin etymology of clear-seeing/clear vision (p. 144). In history and religion Throughout history, there have been numerous places and times in which people have claimed themselves, or others, to be clairvoyant. In several religions, stories of certain individuals being able to see things far removed from their immediate sensory perception are commonplace, especially within pagan religions where oracles were used. Prophecy often involved some degree of clairvoyance, especially when future events were predicted. This ability is sometimes attributed to a higher power rather than the person performing it. A number of Christian saints were said to be able to see or know things that were far removed from their immediate sensory perception as a kind of gift from God, including Charbel Makhlouf, Padre Pio, and Anne Catherine Emmerich in Catholicism, and Gabriel Urgebadze, Paisios Eznepidis and John Maximovitch in Eastern Orthodoxy. Jesus in the Gospels is also recorded as being able to know things far removed from his immediate human perception. Some Christians today also share the same claim.[citation needed] In Jainism, clairvoyance is regarded as one of the five kinds of knowledge. The beings of hell and heaven (devas) are said to possess clairvoyance by birth. According to the Jain text Sarvārthasiddhi, "this kind of knowledge has been called avadhi as it ascertains matter in downward range or knows objects within limits". The Chinese term for clairvoyance and clairvoyant is qianliyan (literally, "thousand-mile eyes") (p. 143). The origin of this usage is Qianliyan, a Daoist guardian deity often depicted as a statue guarding Mazu temples in East Asia (p. 143). Qianliyan's sight ability carried over to Buddhist representations, symbolizing divine faculties of seeing (p. 143). Rudolf Steiner, famous as a clairvoyant himself, claimed that it is easy for a clairvoyant to confuse their own emotional and spiritual being with the objective spiritual world.
Parapsychology The earliest record of somnambulist clairvoyance is credited to the Marquis de Puységur, a follower of Franz Mesmer, who in 1784 was treating a local dull-witted peasant named Victor Race. During treatment, Race reportedly went into a trance and underwent a personality change, becoming fluent and articulate, and giving diagnosis and prescription for his own disease as well as those of others. Clairvoyance was a reported ability of some mediums during the spiritualist period of the late 19th and early 20th centuries, and psychics of many descriptions have claimed clairvoyant ability up to the present day. Early researchers of clairvoyance included William Gregory, Gustav Pagenstecher, and Rudolf Tischner. Clairvoyance experiments were reported in 1884 by Charles Richet. Playing cards were enclosed in envelopes and a subject under hypnosis attempted to identify them. The subject was reported to have been successful in a series of 133 trials but the results dropped to chance level when performed before a group of scientists in Cambridge. J. M. Peirce and E. C. Pickering reported a similar experiment in which they tested 36 subjects over 23,384 trials. They did not find above chance scores. Ivor Lloyd Tuckett (1911) and Joseph McCabe (1920) analyzed early cases of clairvoyance and concluded they were best explained by coincidence or fraud. In 1919, the magician P. T. Selbit staged a séance at his flat in Bloomsbury. The spiritualist Arthur Conan Doyle attended and declared the clairvoyance manifestations genuine. A significant development in clairvoyance research came when J. B. Rhine, a parapsychologist at Duke University, introduced a standard methodology, with a standard statistical approach to analyzing data, as part of his research into extrasensory perception. A number of psychological departments attempted and failed to repeat Rhine's experiments. At Princeton University, W. S. Cox (1936) produced 25,064 trials with 132 subjects in a playing card ESP experiment. Cox concluded: "There is no evidence of extrasensory perception either in the 'average man' or of the group investigated or in any particular individual of that group. The discrepancy between these results and those obtained by Rhine is due either to uncontrollable factors in experimental procedure or to the difference in the subjects." Four other psychological departments failed to replicate Rhine's results. It was revealed that Rhine's experiments contained methodological flaws and procedural errors. Eileen Garrett was tested by Rhine at Duke University in 1933 with Zener cards. Certain symbols were placed on the cards and sealed in an envelope, and she was asked to guess their contents. She performed poorly and later criticized the tests by claiming the cards lacked a psychic energy called "energy stimulus" and that she could not perform clairvoyance on command. The parapsychologist Samuel Soal and his colleagues tested Garrett in May 1937. Most of the experiments were carried out in the Psychological Laboratory at the University College London. A total of over 12,000 guesses were recorded but Garrett failed to produce above chance level. Soal wrote: "In the case of Mrs. Eileen Garrett we fail to find the slightest confirmation of Dr. J. B. Rhine's remarkable claims relating to her alleged powers of extra-sensory perception. Not only did she fail when I took charge of the experiments, but she failed equally when four other carefully trained experimenters took my place."
Remote viewing, also known as remote sensing, remote perception, telesthesia and travelling clairvoyance, is the alleged paranormal ability to perceive a remote or hidden target without support of the senses. A well-known recent study of remote viewing is the US government-funded project at the Stanford Research Institute from the 1970s through the mid-1990s. In 1972, Harold E. Puthoff and Russell Targ initiated a series of human subject studies to determine whether participants (the viewers or percipients) could reliably identify and accurately describe salient features of remote locations (targets). In the early studies, a human sender was typically present at the remote location as part of the experiment protocol. A three-step process was used. First, target conditions to be experienced by the senders were randomly selected. Second, in the viewing step, participants were asked to verbally express or sketch their impressions of the remote scene. Third, these descriptions were matched by separate judges, as closely as possible, with the intended targets. The term remote viewing was coined to describe this overall process. The first paper by Puthoff and Targ on remote viewing was published in Nature in March 1974; in it, the team reported some degree of remote viewing success. After the publication of these findings, other attempts to replicate the experiments were carried out with remotely linked groups using computer conferencing. The psychologists David Marks and Richard Kammann attempted to replicate Targ and Puthoff's remote viewing experiments at the Stanford Research Institute. In a series of 35 studies, they could not do so, so they investigated the original experiments' procedure. Marks and Kammann discovered that the notes given to the judges in Targ and Puthoff's experiments contained clues as to which order they were carried out, such as referring to yesterday's two targets, or the date of the session at the top of the page. They concluded that these clues explained the experiment's high hit rates. Marks achieved 100% accuracy without visiting any of the sites but by using cues. James Randi has written that controlled tests by several other researchers, eliminating several sources of cuing and extraneous evidence present in the original tests, produced negative results. Students were also able to solve Puthoff and Targ's locations from the clues inadvertently included in the transcripts. In 1980, Charles Tart claimed that a rejudging of the transcripts from one of Targ and Puthoff's experiments revealed an above-chance result. Targ and Puthoff again refused to provide copies of the transcripts, and they were not made available for study until July 1985, when it was discovered they still contained sensory cues. Marks and Christopher Scott (1986) wrote: "considering the importance for the remote viewing hypothesis of adequate cue removal, Tart's failure to perform this basic task seems beyond comprehension. As previously concluded, remote viewing has not been demonstrated in the experiments conducted by Puthoff and Targ, only the repeated failure of the investigators to remove sensory cues." In 1982, Robert G. Jahn, then Dean of the School of Engineering at Princeton University, wrote a comprehensive review of psychic phenomena from an engineering perspective. His paper included numerous references to remote viewing studies at the time. Statistical flaws in his work have been proposed by others in the parapsychological community and the general scientific community. 
Scientific reception According to scientific research, clairvoyance is generally explained as the result of confirmation bias, expectancy bias, fraud, hallucination, self-delusion, sensory leakage, subjective validation, wishful thinking or failures to appreciate the base rate of chance occurrences and not as a paranormal power. Parapsychology is generally regarded by the scientific community as a pseudoscience. In 1988, the US National Research Council concluded "The committee finds no scientific justification from research conducted over a period of 130 years, for the existence of parapsychological phenomena." Skeptics say that if clairvoyance were a reality, it would have become abundantly clear. They also contend that those who believe in paranormal phenomena do so for merely psychological reasons. According to David G. Myers (Psychology, 8th ed.): The search for a valid and reliable test of clairvoyance has resulted in thousands of experiments. One controlled procedure has invited 'senders' to telepathically transmit one of four visual images to 'receivers' deprived of sensation in a nearby chamber (Bem & Honorton, 1994). The result? A reported 32 percent accurate response rate, surpassing the chance rate of 25 percent. But follow-up studies have (depending on who was summarizing the results) failed to replicate the phenomenon or produced mixed results (Bem & others, 2001; Milton & Wiseman, 2002; Storm, 2000, 2003). One skeptic, magician James Randi, had a longstanding offer of U.S. $1 million "to anyone who proves a genuine psychic power under proper observing conditions" (Randi, 1999). French, Australian, and Indian groups have parallel offers of up to 200,000 euros to anyone with demonstrable paranormal abilities (CFI, 2003). Large as these sums are, the scientific seal of approval would be worth far more to anyone whose claims could be authenticated. To refute those who say there is no ESP, one need only produce a single person who can demonstrate a single, reproducible ESP phenomenon. So far, no such person has emerged. Randi's offer has been publicized for three decades and dozens of people have been tested, sometimes under the scrutiny of an independent panel of judges. Still, nothing. "People's desire to believe in the paranormal is stronger than all the evidence that it does not exist." Susan Blackmore, "Blackmore's first law", 2004. Clairvoyance is considered a hallucination by mainstream psychiatry. In popular culture See also References Bibliography Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Procedural_programming] | [TOKENS: 853]
Contents Procedural programming Procedural programming is a programming paradigm, classified as imperative programming, that involves implementing the behavior of a computer program as procedures (a.k.a. functions, subroutines) that call each other. The resulting program is a series of steps that forms a hierarchy of calls to its constituent procedures. The first major procedural programming languages appeared c. 1957–1964, including Fortran, ALGOL, COBOL, PL/I and BASIC. Pascal and C were published c. 1970–1972. Computer processors provide hardware support for procedural programming through a stack register and instructions for calling procedures and returning from them. Hardware support for other types of programming is possible, like Lisp machines or Java processors, but no attempt was commercially successful.[contradictory] Development practices Certain software development practices are often employed with procedural programming in order to enhance quality and lower development and maintenance costs. Modularity is about organizing the procedures of a program into separate modules, each of which has a specific and understandable purpose. Minimizing the scope of variables and procedures can enhance software quality by reducing the cognitive load of procedures and modules. A program lacking modularity or wide scoping tends to have procedures that consume many variables that other procedures also consume. The resulting code is relatively hard to understand and to maintain. Since a procedure can specify a well-defined interface and be self-contained, it supports code reuse, in particular via the software library. Comparison with other programming paradigms Procedural programming is classified as imperative programming, because it involves direct command of execution. Procedural is a sub-class of imperative since procedural includes block and scope concepts, whereas imperative describes a more general concept that does not require such features. Procedural languages generally use reserved words that define blocks, such as if, while, and for, to implement control flow, whereas non-structured imperative languages (i.e. assembly language) use goto and branch tables for this purpose. Also classified as imperative, object-oriented programming (OOP) involves dividing a program implementation into objects that expose behavior (methods) and data (members) via a well-defined interface. In contrast, procedural programming is about dividing the program implementation into variables, data structures, and subroutines. An important distinction is that while procedural involves procedures to operate on data structures, OOP bundles the two together. An object is a data structure and the behavior associated with that data structure. Some OOP languages support the class concept, which allows for creating an object based on a definition. Nomenclature varies between the two, although they have similar semantics: The principles of modularity and code reuse in functional languages are fundamentally the same as in procedural languages, since they both stem from structured programming. For example: The main difference between the styles is that functional programming languages remove or at least deemphasize the imperative elements of procedural programming, as illustrated in the sketch below.
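A minimal, hypothetical sketch of this contrast, using Python only for illustration (Python supports both styles; the names and data are invented):

# Procedural style: a named procedure with explicit, step-by-step control flow
# and in-place mutation of local state.
def total_price(prices, tax_rate):
    total = 0.0
    for price in prices:        # imperative loop
        total += price          # state is updated in place
    return total * (1 + tax_rate)

# Functional style: the same computation expressed without explicit loops or mutation.
def total_price_functional(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

assert total_price([10.0, 20.0], 0.1) == total_price_functional([10.0, 20.0], 0.1)

Both versions expose the same well-defined interface, which is what makes them reusable as library procedures; only the internal style differs.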
The feature set of functional languages is therefore designed to support writing programs as much as possible in terms of pure functions: Many functional languages, however, are in fact impurely functional and offer imperative/procedural constructs that allow the programmer to write programs in procedural style, or in a combination of both styles. It is common for input/output code in functional languages to be written in a procedural style. There do exist a few esoteric functional languages (like Unlambda) that eschew structured programming precepts for the sake of being difficult to program in (and therefore challenging). These languages are the exception to the common ground between procedural and functional languages. In logic programming, a program is a set of premises, and computation is performed by attempting to prove candidate theorems. From this point of view, logic programs are declarative, focusing on what the problem is, rather than on how to solve it. However, the backward reasoning technique, implemented by SLD resolution, used to solve problems in logic programming languages such as Prolog, treats programs as goal-reduction procedures. Thus clauses of the form H :- B1, ..., Bn have a dual interpretation, both as procedures (to solve H, solve B1 and ... and Bn) and as logical implications (B1 and ... and Bn imply H). A skilled logic programmer uses the procedural interpretation to write programs that are effective and efficient, and uses the declarative interpretation to help ensure that programs are correct. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Ramla_Subdistrict] | [TOKENS: 140]
Contents Ramla Subdistrict The Ramla subdistrict is one of Israel's subdistricts in the Central District. There are three principal cities in the subdistrict: Ramla, Lod, and Modi'in-Maccabim-Re'ut. History The subdistrict is composed mostly of the eastern half of what was during Mandatory Palestine the Ramle Subdistrict. References 31°56′0.3″N 34°52′16.7″E (31.933417°N 34.871306°E)
========================================
[SOURCE: https://he.wikipedia.org/wiki/MacOS] | [TOKENS: 1777]
Contents macOS macOS (formerly named OS X) is Apple's operating system for its line of "Mac" (Macintosh) computers. The system is based on the BSD (Unix) family of systems originating at Bell Labs, with a graphical user interface developed by Apple built in, and it ships on the Mac line of computers. macOS differs in design from Windows but likewise uses "windows" on its desktop. The operating system is exclusive to computers made by Apple, although its older edition, Mac OS Classic, also supported "Macintosh-compatible" machines. The system succeeded Mac OS Classic, which ran on older Apple computers until 2001. macOS runs both on portable computers (MacBook) and on desktop computers (iMac, Mac Pro, Mac mini). On 13 June 2016, at the launch of version 10.12 (macOS Sierra), it emerged that Apple had decided to rename its operating system from OS X to macOS, as a nod to the operating system's original name, Mac OS. Applications The macOS operating system includes software for managing and playing music files (iTunes), software for basic editing of video files (iMovie), a calendar application, a web browser (Safari), and several text editors. From version 10.7 onward it also bundles an online store for buying and downloading applications called the App Store (the Macintosh application store), to which developers can upload their work and sell it, provided it passes Apple's review and they are registered with Apple as developers at a cost of 99 dollars per year. In addition to the pre-installed applications, Apple offers users further Macintosh applications, among them an office suite (iWork: Pages, Keynote, Numbers) and programs for working with image, audio, and video files and with ePub files for publishing books in e-book applications. Apple also provides developers with tools for building applications; Xcode, for example, allows programming in the languages supported on macOS and offers a graphical interface for designing an application's appearance and assembling its parts from menus. There are also programs that allow running a virtual machine on the Macintosh, on which other operating systems, such as Windows, Linux, or Solaris, can be run; examples of such virtualization software are VMware and Oracle's VirtualBox. Versions In versions 10.0 through 10.8, the operating system's codename was based on one of the big cats. From version 10.9 onward, the names are based on places in the state of California, where Apple is based. External links Footnotes Cloud services: MobileMe • iCloud
========================================
[SOURCE: https://techcrunch.com/author/theresa-loconsolo/] | [TOKENS: 289]
Theresa Loconsolo Audio Producer, TechCrunch Theresa Loconsolo is an audio producer at TechCrunch focusing on Equity, the network's flagship podcast. Before joining TechCrunch in 2022, she was one of two producers at a four-station conglomerate, where she wrote, recorded, voiced and edited content, and engineered live performances and interviews from guests like lovelytheband. Theresa is based in New Jersey and holds a bachelor's degree in Communication from Monmouth University. You can contact or verify outreach from Theresa by emailing theresa.loconsolo@techcrunch.com.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-1] | [TOKENS: 4314]
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026[update], the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected to come out in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks based on searches in 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language, which was inspired by SETL, capable of exception handling and interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See ยง Naming.) Python 2.0 was released on 16 October 2000, featuring many new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language, and made a few (considered very minor) backward-incompatible changes. 
As of January 2026[update], Python 3.14.3 is the latest stable release. All older 3.x versions had a security update down to Python 3.9.24, and then again with 3.9.25, the final version in the 3.9 series. Python 3.10 is, since November 2025, the oldest supported branch. Python 3.15 has had an alpha release, and Android has an official downloadable executable available for Python 3.14. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming, including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as these: However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features had been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python claims to strive for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do..while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one, and preferably only one, obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli is a Fellow at the Python Software Foundation and a Python book author; he wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance.
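The functional-style tools and the "at least three ways to format a string" point above can be illustrated with a short, hypothetical sketch (the names and data are invented for illustration):

from functools import reduce

numbers = [1, 2, 3, 4, 5]

# Functional-style building blocks: map, filter, reduce, and a list comprehension.
squares = list(map(lambda n: n * n, numbers))          # [1, 4, 9, 16, 25]
evens = list(filter(lambda n: n % 2 == 0, numbers))    # [2, 4]
total = reduce(lambda a, b: a + b, numbers)            # 15
cubes = [n ** 3 for n in numbers]                      # list comprehension

# Three common ways to format the same string literal.
name, score = "Ada", 95.5
s1 = "%s scored %.1f" % (name, score)         # printf-style formatting
s2 = "{} scored {:.1f}".format(name, score)   # str.format
s3 = f"{name} scored {score:.1f}"             # f-string (Python 3.6+)
assert s1 == s2 == s3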
For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability.[failed verification] Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. Also, it is possible to transpile to other languages. However, this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or only a restricted subset of Python is compiled (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name (a tribute to the British comedy group Monty Python) and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way; but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Python's statements include the following: The assignment statement (=) binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing, in contrast to statically typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels. Python's expressions include the following: In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby.
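A small, hypothetical sketch of passing data back into a generator with send(), as described above (the running-average generator is invented for illustration and also shows block structure by indentation):

def running_average():
    """Generator that yields the mean of all values sent into it so far."""
    total = 0.0
    count = 0
    average = None
    while True:
        value = yield average   # data flows back into the generator via send()
        total += value
        count += 1
        average = total / count

gen = running_average()
next(gen)              # advance to the first yield before sending
print(gen.send(10))    # 10.0
print(gen.send(20))    # 15.0
print(gen.send(30))    # 20.0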
This distinction leads to duplicating some functionality, for example: A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a typing module that provides several type names for use in annotations. Also, the mypy project supports a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Also, Python offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, and it offers the matrix-multiplication operator @. These operators work as in traditional mathematics; with the same precedence rules, the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: In Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result must lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is -1.0. Python allows Boolean expressions that contain multiple equality relations to be consistent with general usage in mathematics.
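The division, modulo, and rounding behaviour described above can be checked interactively; a minimal sketch:

# Integer division and modulo: // rounds toward negative infinity, and a % b
# takes the sign of the divisor b, so b*(a//b) + a%b == a always holds.
a, b = 7, -3
assert a // b == -3             # floor(7 / -3) == floor(-2.33...) == -3
assert a % b == -2              # remainder lies in (b, 0] for negative b
assert b * (a // b) + a % b == a

# / is true division and yields a float for int operands.
assert 7 / 2 == 3.5

# Exponentiation with **.
assert 5 ** 3 == 125 and 9 ** 0.5 == 3.0

# Python 3 rounds ties to the nearest even integer ("banker's rounding").
assert round(1.5) == 2 and round(2.5) == 2 and round(0.5) == 0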
For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. Here is an example of a function that prints its inputs: To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header. Code examples "Hello, World!" program: Program to calculate the factorial of a non-negative integer: Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications (for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333), but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025,[update] the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. Development environments Most[which?] Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. Also, CPython is bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners.[citation needed] Other shells, including IDLE and IPython, offer additional capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are also web browser-based IDEs, such as the following environments: Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions; for example, they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python.
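The function-definition example and the listings referenced under "Code examples" above did not survive extraction; the following are minimal sketches consistent with the surrounding description (the names and sample values are invented):

# A function that prints its inputs, with a default value for one parameter.
def greet(name, greeting="Hello"):
    print(greeting, name)

greet("World")            # Hello World
greet("World", "Howdy")   # Howdy World

# "Hello, World!" program.
print("Hello, World!")

# Factorial of a non-negative integer.
def factorial(n):
    if n < 0:
        raise ValueError("n must be a non-negative integer")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))   # 120

# Chained comparison, as described above: a < b < c means (a < b) and (b < c).
a, b, c = 1, 2, 3
assert a < b < c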
CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, with unofficial support for VMS. Platform portability was one of Python's earliest priorities. During the development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading, and far fewer operating systems are now supported than in the past, as many outdated platforms have been dropped. All alternative implementations have at least slightly different semantics. For example, an alternative may include unordered dictionaries, in contrast to other current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full C Python API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which causes binary sizes to be large even for small programs, yet there exist implementations that are capable of truly compiling Python. Alternative implementations include the following: Stackless Python is a significant fork of CPython that implements microthreads. This implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported: There are several compilers/transpilers to high-level object languages; the source language is unrestricted Python, a subset of Python, or a language similar to Python: There are also specialized compilers: Some older projects existed, as well as compilers not designed for use with Python 3.x and related syntax: A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance, despite the inherent slowness of an interpreted language. These approaches include the following strategies or tools: Language Development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented: Many alpha, beta, and release candidates are also released as previews and for testing before final releases.
Although there is a rough schedule for releases, they are often delayed if the code is not ready yet. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. Also, there are special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas". Languages influenced by Python See also Notes References Further reading External links
========================================