Source: https://en.wikipedia.org/wiki/Telecommunications
Contents Telecommunications Telecommunication, often used in its plural form or abbreviated as telecom, is the transmission of information over a distance using electrical or electronic means, typically through cables, radio waves, or other communication technologies. These means of transmission may be divided into communication channels for multiplexing, allowing for a single medium to transmit several concurrent communication sessions. Long-distance technologies invented during the 19th, 20th and 21st centuries generally use electric power, and include the electrical telegraph, telephone, television, and radio. Early telecommunication networks used metal wires as the medium for transmitting signals. These networks were used for telegraphy and telephony for many decades. In the first decade of the 20th century, a revolution in wireless communication began with breakthroughs including those made in radio communications by Guglielmo Marconi, who won the 1909 Nobel Prize in Physics. Other early pioneers in electrical and electronic telecommunications include co-inventors of the telegraph Charles Wheatstone and Samuel Morse, numerous inventors and developers of the telephone including Antonio Meucci, Philipp Reis, Elisha Gray and Alexander Graham Bell, inventors of radio Edwin Armstrong and Lee de Forest, as well as inventors of television like Vladimir K. Zworykin, John Logie Baird and Philo Farnsworth. Since the 1960s, the proliferation of digital technologies has meant that voice communications have gradually been supplemented by data. The physical limitations of metallic media prompted the development of optical fibre. The Internet, a technology independent of any given medium, has provided global access to services for individual users and further reduced location and time limitations on communications. 
Definition At the 1932 Plenipotentiary Telegraph Conference and the International Radiotelegraph Conference in Madrid, the two organizations merged to form the International Telecommunication Union (ITU). They defined telecommunication as "any telegraphic or telephonic communication of signs, signals, writing, facsimiles and sounds of any kind, by wire, wireless or other systems or processes of electric signaling or visual signaling (semaphores)." The definition was later reaffirmed in Article 1.3 of the ITU Radio Regulations, which defines telecommunication as "Any transmission, emission or reception of signs, signals, writings, images and sounds or intelligence of any nature by wire, radio, optical, or other electromagnetic systems". As such, slow communications technologies like postal mail and pneumatic tubes are excluded from the definition of telecommunication. The term telecommunication was coined in 1904 by the French engineer and novelist Édouard Estaunié, who defined it as "remote transmission of thought through electricity". Telecommunication is a compound noun formed from the Greek prefix tele- (τῆλε), meaning distant, far off, or afar, and the Latin verb communicare, meaning to share. Communication was first used as an English word in the late 14th century. It comes from Old French comunicacion (14c., Modern French communication), from Latin communicationem (nominative communicatio), a noun of action from the past-participle stem of communicare, "to share, divide out; communicate, impart, inform; join, unite, participate in," literally "to make common", from communis. History Many transmission media have been used for long-distance communication throughout history, from smoke signals, beacons, semaphore telegraphs, signal flags, and optical heliographs to wires and empty space made to carry electromagnetic signals. Long-distance communication was used long before the discovery of electricity and electromagnetism enabled the invention of telecommunications. 
A few of the many ingenious methods for communicating over distances prior to that are described here. Homing pigeons have been used throughout history by different cultures. Pigeon post had Persian roots and was later used by the Romans to aid their military. Frontinus claimed Julius Caesar used pigeons as messengers in his conquest of Gaul. The Greeks also conveyed the names of the victors at the Olympic Games to various cities using homing pigeons. In the early 19th century, the Dutch government used the system in Java and Sumatra. In 1849, Paul Julius Reuter started a pigeon service to fly stock prices between Aachen and Brussels, a service that operated for a year until the gap in the telegraph link was closed. In the Middle Ages, chains of beacons were commonly used on hilltops as a means of relaying a signal. Beacon chains suffered the drawback that they could only pass a single bit of information, so the meaning of a message such as "the enemy has been sighted" had to be agreed upon in advance. One notable instance of their use was during the Spanish Armada, when a beacon chain relayed a signal from Plymouth to London. In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system (or semaphore line) between Lille and Paris. However, semaphore suffered from the need for skilled operators and expensive towers at intervals of ten to thirty kilometres (six to nineteen miles). As a result of competition from the electrical telegraph, the last commercial semaphore line was abandoned in 1880. On July 25, 1837, the first commercial electrical telegraph was demonstrated by English inventor Sir William Fothergill Cooke and English scientist Sir Charles Wheatstone. Both inventors viewed their device as "an improvement to the [existing] electromagnetic telegraph" and not as a new device. Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on September 2, 1837. 
His code was an important advance over Wheatstone's signaling method. The first transatlantic telegraph cable was successfully completed on July 27, 1866, allowing transatlantic telecommunication for the first time. After early attempts to develop a talking telegraph by Antonio Meucci and a telefon by Johann Philipp Reis, a patent for the conventional telephone was filed by Alexander Bell in February 1876 (just a few hours before Elisha Gray filed a patent caveat for a similar device). The first commercial telephone services were set up by the Bell Telephone Company in 1878 and 1879 on both sides of the Atlantic in the cities of New Haven and London. In 1894, Italian inventor Guglielmo Marconi began developing wireless communication using the then-newly discovered phenomenon of radio waves, demonstrating by 1901 that they could be transmitted across the Atlantic Ocean. This was the start of wireless telegraphy by radio. On 17 December 1902, a transmission from the Marconi station in Glace Bay, Nova Scotia, Canada, became the world's first radio message to cross the Atlantic from North America. In 1904, a commercial service was established to transmit nightly news summaries to subscribing ships, which incorporated them into their onboard newspapers. World War I accelerated the development of radio for military communications. After the war, commercial AM radio broadcasting began in the 1920s and became an important mass medium for entertainment and news. World War II again accelerated the development of radio for the wartime purposes of aircraft and land communication, radio navigation, and radar. The development of stereo FM radio broadcasting began in the 1930s in the United States and the 1940s in the United Kingdom; FM displaced AM as the dominant commercial standard in the 1970s. On March 25, 1925, John Logie Baird demonstrated the transmission of moving pictures at the London department store Selfridges. 
Baird's device relied upon the Nipkow disk by Paul Nipkow and thus became known as the mechanical television. It formed the basis of experimental broadcasts done by the British Broadcasting Corporation beginning on 30 September 1929. Vacuum tubes use thermionic emission of electrons from a heated cathode for a number of fundamental electronic functions such as signal amplification and current rectification. The simplest vacuum tube, the diode invented in 1904 by John Ambrose Fleming, contains only a heated electron-emitting cathode and an anode. Electrons can only flow in one direction through the device—from the cathode to the anode. Adding one or more control grids within the tube enables the current between the cathode and anode to be controlled by the voltage on the grid or grids. These devices became a key component of electronic circuits for the first half of the 20th century and were crucial to the development of radio, television, radar, sound recording and reproduction, long-distance telephone networks, and analogue and early digital computers. While some applications had used earlier technologies such as the spark gap transmitter for radio or mechanical computers for computing, it was the invention of the thermionic vacuum tube that made these technologies widespread and practical, leading to the creation of electronics. For most of the 20th century, televisions depended on a kind of vacuum tube — the cathode ray tube — invented by Karl Ferdinand Braun. The first version of such a television to show promise was produced by Philo Farnsworth and demonstrated to his family on 7 September 1927. After World War II, interrupted experiments resumed and television became an important home entertainment broadcast medium. Also in the 1940s, the invention of semiconductor devices made it possible to produce solid-state devices, which are smaller, cheaper, and more efficient, reliable, and durable than vacuum tubes. 
Starting in the mid-1960s, vacuum tubes were replaced by transistors, although vacuum tubes still have some applications in certain high-frequency amplifiers. On 11 September 1940, George Stibitz transmitted problems for his Complex Number Calculator in New York using a teletype and received the computed results back at Dartmouth College in New Hampshire. This configuration of a centralized computer (mainframe) with remote dumb terminals remained popular well into the 1970s. In the 1960s, Paul Baran and, independently, Donald Davies started to investigate packet switching, a technology that sends a message in portions to its destination asynchronously without passing it through a centralized mainframe. A four-node network emerged on 5 December 1969, constituting the beginnings of the ARPANET, which by 1981 had grown to 213 nodes. ARPANET eventually merged with other networks to form the Internet. While Internet development was a focus of the Internet Engineering Task Force (IETF), which published a series of Request for Comments documents, other networking advancements occurred in industrial laboratories, such as the local area network (LAN) developments of Ethernet (1983), Token Ring (1984) and the star network topology. The effective capacity to exchange information worldwide through two-way telecommunication networks grew from 281 petabytes (PB) of optimally compressed information in 1986 to 471 PB in 1993, to 2.2 exabytes (EB) in 2000, and to 65 EB in 2007. This is the informational equivalent of two newspaper pages per person per day in 1986, and six entire newspapers per person per day by 2007. Given this growth, telecommunications play an increasingly important role in the world economy: the global telecommunications industry was about a $4.7 trillion sector in 2012, and its service revenue was estimated at $1.5 trillion in 2010, corresponding to 2.4% of the world's gross domestic product (GDP). 
Technical concepts Modern telecommunication is founded on a series of key concepts that experienced progressive development and refinement over a period of well more than a century. Telecommunication technologies may primarily be divided into wired and wireless methods. Overall, a basic telecommunication system consists of three main parts that are always present in some form or another: a transmitter that converts information into a signal, a transmission medium (the channel) that carries the signal, and a receiver that converts the signal back into usable information. In a radio broadcasting station, the station's large power amplifier is the transmitter and the broadcasting antenna is the interface between the power amplifier and the free space channel. The free space channel is the transmission medium and the receiver's antenna is the interface between the free space channel and the receiver. Next, the radio receiver is the destination of the radio signal, where it is converted from electricity to sound. Telecommunication systems are occasionally "duplex" (two-way systems), with a single box of electronics working as both the transmitter and a receiver, called a transceiver (e.g., a mobile phone). The transmission electronics and the receiver electronics within a transceiver are quite independent of one another. This can be explained by the fact that radio transmitters contain power amplifiers that operate with electrical powers measured in watts or kilowatts, but radio receivers deal with radio powers measured in microwatts or nanowatts. Hence, transceivers have to be carefully designed and built to isolate their high-power circuitry and their low-power circuitry from each other to avoid interference. Telecommunication over fixed lines is called point-to-point communication because it occurs between one transmitter and one receiver. Telecommunication through radio broadcasts is called broadcast communication because it occurs between one powerful transmitter and numerous low-power but sensitive radio receivers. 
Telecommunications in which multiple transmitters and multiple receivers have been designed to cooperate and share the same physical channel are called multiplex systems. The sharing of physical channels using multiplexing often results in significant cost reduction. Multiplexed systems are laid out in telecommunication networks and multiplexed signals are switched at nodes through to the correct destination terminal receiver. Communications can be encoded as analogue or digital signals, which may in turn be carried by analogue or digital communication systems. Analogue signals vary continuously with respect to the information, while digital signals encode information as a set of discrete values (e.g., a set of ones and zeroes). During propagation and reception, information contained in analogue signals is degraded by undesirable noise. Commonly, the noise in a communication system can be expressed as adding or subtracting from the desirable signal via a random process. This form of noise is called additive noise, with the understanding that the noise can be negative or positive at different instances. Unless the additive noise disturbance exceeds a certain threshold, the information contained in digital signals will remain intact. Their resistance to noise represents a key advantage of digital signals over analogue signals. However, digital systems fail catastrophically when noise exceeds the system's ability to autocorrect. On the other hand, analogue systems fail gracefully: as noise increases, the signal becomes progressively more degraded but still usable. Also, digital transmission of continuous data unavoidably adds quantization noise to the output. This can be reduced, but not eliminated, only at the expense of increasing the channel bandwidth requirement. The term channel has two different meanings. In one meaning, a channel is the physical medium that carries a signal between the transmitter and the receiver. 
Examples of this include the atmosphere for sound communications, glass optical fibres for some kinds of optical communications, coaxial cables for communications by way of the voltages and electric currents in them, and free space for communications using visible light, infrared waves, ultraviolet light, and radio waves. Coaxial cable types are classified by RG type or radio guide, terminology derived from World War II. The various RG designations are used to classify the specific signal transmission applications. This last channel is called the free space channel. The sending of radio waves from one place to another has nothing to do with the presence or absence of an atmosphere between the two. Radio waves travel through a perfect vacuum just as easily as they travel through air, fog, clouds, or any other kind of gas. The other meaning of the term channel in telecommunications is seen in the phrase communications channel, which is a subdivision of a transmission medium so that it can be used to send multiple streams of information simultaneously. For example, one radio station can broadcast radio waves into free space at frequencies in the neighbourhood of 94.5 MHz (megahertz) while another radio station can simultaneously broadcast radio waves at frequencies in the neighbourhood of 96.1 MHz. Each radio station would transmit radio waves over a frequency bandwidth of about 180 kHz (kilohertz), centred at frequencies such as the above, which are called the "carrier frequencies". Each station in this example is separated from its adjacent stations by 200 kHz, and the difference between 200 kHz and 180 kHz (20 kHz) is an engineering allowance for the imperfections in the communication system. In the example above, the free space channel has been divided into communications channels according to frequencies, and each channel is assigned a separate frequency bandwidth in which to broadcast radio waves. 
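The frequency-division arithmetic above (carriers spaced 200 kHz apart, each station occupying 180 kHz, with a 20 kHz engineering allowance) can be sketched in a few lines of Python. This is a minimal illustration using the article's numbers; the function name and structure are my own, not part of any real broadcast standard.

```python
def channel_carriers(first_carrier_hz, spacing_hz, count):
    """Carrier frequencies of `count` adjacent channels spaced
    `spacing_hz` apart, starting at `first_carrier_hz`."""
    return [first_carrier_hz + i * spacing_hz for i in range(count)]

# Stations spaced 200 kHz apart starting at 94.5 MHz; the station at
# 96.1 MHz in the example sits eight channels up.
carriers = channel_carriers(94.5e6, 200e3, 9)

# Each station broadcasts over 180 kHz of bandwidth around its carrier,
# leaving a 20 kHz engineering allowance between adjacent channels.
guard_hz = 200e3 - 180e3
```

Any two adjacent carriers in the list differ by exactly the channel spacing, which is what keeps the stations' frequency bands from overlapping.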
This system of dividing the medium into channels according to frequency is called frequency-division multiplexing (FDM). Another term for the same concept is wavelength-division multiplexing, which is more commonly used in optical communications when multiple transmitters share the same physical medium. Another way of dividing a communications medium into channels is to allocate each sender a recurring segment of time (a time slot, for example, 20 milliseconds out of each second), and to allow each sender to send messages only within its own time slot. This method of dividing the medium into communication channels is called time-division multiplexing (TDM), and is used in optical fibre communication. Some radio communication systems use TDM within an allocated FDM channel; these systems use a hybrid of TDM and FDM. The shaping of a signal to convey information is known as modulation. Modulation can be used to represent a digital message as an analogue waveform. This is commonly called "keying"—a term derived from the older use of Morse code in telecommunications—and several keying techniques exist, including phase-shift keying, frequency-shift keying, and amplitude-shift keying. The Bluetooth system, for example, uses phase-shift keying to exchange information between various devices. In addition, combinations of phase-shift keying and amplitude-shift keying, called (in the jargon of the field) quadrature amplitude modulation (QAM), are used in high-capacity digital radio communication systems. Modulation can also be used to transmit the information of low-frequency analogue signals at higher frequencies. This is helpful because low-frequency analogue signals cannot be effectively transmitted over free space. Hence the information from a low-frequency analogue signal must be impressed into a higher-frequency signal (known as the carrier wave) before transmission. 
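The idea of keying, representing a digital message as an analogue waveform, can be sketched for the simplest form of phase-shift keying, where each bit selects one of two carrier phases (0 or π). This is an illustrative sketch of the general technique only; the parameters and function name are invented for the example and do not describe any particular system such as Bluetooth.

```python
import math

def bpsk_modulate(bits, carrier_hz, sample_rate_hz, samples_per_bit):
    """Binary phase-shift keying: impress a bit stream onto a carrier
    by shifting its phase (bit 1 -> phase 0, bit 0 -> phase pi)."""
    waveform = []
    for i, bit in enumerate(bits):
        phase = 0.0 if bit == 1 else math.pi
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / sample_rate_hz
            waveform.append(math.cos(2 * math.pi * carrier_hz * t + phase))
    return waveform

# Key the message 1011 onto a 1 kHz carrier sampled at 8 kHz.
wave = bpsk_modulate([1, 0, 1, 1], carrier_hz=1000,
                     sample_rate_hz=8000, samples_per_bit=8)
```

A receiver recovers the bits by comparing the received waveform's phase against a local copy of the carrier; frequency-shift and amplitude-shift keying work analogously by varying the carrier's frequency or amplitude instead.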
There are several different modulation schemes available to achieve this [two of the most basic being amplitude modulation (AM) and frequency modulation (FM)]. An example of this process is a disc jockey's voice being impressed into a 96 MHz carrier wave using frequency modulation (the voice would then be received on a radio as the channel 96 FM). In addition, modulation has the advantage that it may use frequency division multiplexing (FDM). A telecommunications network is a collection of transmitters, receivers, and communications channels that send messages to one another. Some digital communications networks contain one or more routers that work together to transmit information to the correct user. An analogue communications network consists of one or more switches that establish a connection between two or more users. For both types of networks, repeaters may be necessary to amplify or recreate the signal when it is being transmitted over long distances. This is to combat attenuation that can render the signal indistinguishable from the noise. Another advantage of digital systems over analogue is that their output is easier to store in memory, i.e., two voltage states (high and low) are easier to store than a continuous range of states. Societal impact Telecommunication has a significant social, cultural and economic impact on modern society. In 2008, estimates placed the telecommunication industry's revenue at US$4.7 trillion or just under three per cent of the gross world product (official exchange rate). Several following sections discuss the impact of telecommunication on society. On the microeconomic scale, companies have used telecommunications to help build global business empires. This is self-evident in the case of online retailer Amazon.com but, according to academic Edward Lenert, even the conventional retailer Walmart has benefited from better telecommunication infrastructure compared to its competitors. 
In cities throughout the world, home owners use their telephones to order and arrange a variety of home services ranging from pizza deliveries to electricians. Even relatively poor communities have been noted to use telecommunication to their advantage. In Bangladesh's Narsingdi District, isolated villagers use cellular phones to speak directly to wholesalers and arrange a better price for their goods. In Côte d'Ivoire, coffee growers share mobile phones to follow hourly variations in coffee prices and sell at the best price. On the macroeconomic scale, Lars-Hendrik Röller and Leonard Waverman suggested a causal link between good telecommunication infrastructure and economic growth. Few dispute the existence of a correlation although some argue it is wrong to view the relationship as causal. Because of the economic benefits of good telecommunication infrastructure, there is increasing worry about the inequitable access to telecommunication services amongst various countries of the world—this is known as the digital divide. A 2003 survey by the International Telecommunication Union (ITU) revealed that roughly a third of countries have fewer than one mobile subscription for every 20 people and one-third of countries have fewer than one land-line telephone subscription for every 20 people. In terms of Internet access, roughly half of all countries have fewer than one out of 20 people with Internet access. From this information, as well as educational data, the ITU was able to compile an index that measures the overall ability of citizens to access and use information and communication technologies. Using this measure, Sweden, Denmark and Iceland received the highest ranking while the African countries Niger, Burkina Faso and Mali received the lowest. Telecommunication has played a significant role in social relationships. 
Nevertheless, devices like the telephone system were originally advertised with an emphasis on the practical dimensions of the device (such as the ability to conduct business or order home services) as opposed to the social dimensions. It was not until the late 1920s and 1930s that the social dimensions of the device became a prominent theme in telephone advertisements. New promotions started appealing to consumers' emotions, stressing the importance of social conversations and staying connected to family and friends. Since then the role that telecommunications has played in social relations has become increasingly important. In recent years, the popularity of social networking sites has increased dramatically. These sites allow users to communicate with each other as well as post photographs, events and profiles for others to see. The profiles can list a person's age, interests, sexual preference and relationship status. In this way, these sites can play an important role in everything from organising social engagements to courtship. Prior to social networking sites, technologies like short message service (SMS) and the telephone also had a significant impact on social interactions. In 2000, market research group Ipsos MORI reported that 81% of 15- to 24-year-old SMS users in the United Kingdom had used the service to coordinate social arrangements and 42% had used it to flirt. In cultural terms, telecommunication has increased the public's ability to access music and film. With television, people can watch films they have not seen before in their own home without having to travel to the video store or cinema. With radio and the Internet, people can listen to music they have not heard before without having to travel to the music store. Telecommunication has also transformed the way people receive their news. 
In a 2006 survey of slightly more than 3,000 Americans by the non-profit Pew Internet and American Life Project, the majority of respondents specified television or radio over newspapers as a source of news. Telecommunication has had an equally significant impact on advertising. TNS Media Intelligence reported that in 2007, 58% of advertising expenditure in the United States was spent on media that depend upon telecommunication. Regulation Many countries have enacted legislation which conforms to the International Telecommunication Regulations established by the International Telecommunication Union (ITU), which is the "leading UN agency for information and communication technology issues". In 1947, at the Atlantic City Conference, the ITU decided to "afford international protection to all frequencies registered in a new international frequency list and used in conformity with the Radio Regulation". According to the ITU's Radio Regulations adopted in Atlantic City, all frequencies referenced in the International Frequency Registration Board, examined by the board and registered on the International Frequency List "shall have the right to international protection from harmful interference". From a global perspective, there have been political debates and legislation regarding the management of telecommunication and broadcasting. The history of broadcasting discusses some debates in relation to balancing conventional communication such as printing with telecommunication such as radio broadcasting. The onset of World War II brought on the first explosion of international broadcasting propaganda. Countries, their governments, insurgents, terrorists, and militiamen have all used telecommunication and broadcasting techniques to promote propaganda. Patriotic propaganda for political movements and colonization started in the mid-1930s. 
In 1936, the BBC broadcast propaganda to the Arab world, partly to counter similar broadcasts from Italy, which also had colonial interests in North Africa. Modern political debates in telecommunication include the reclassification of broadband Internet service as a telecommunications service (also called net neutrality), regulation of phone spam, and expanding affordable broadband access. Modern media Gartner and Ars Technica have collected data on worldwide sales, in millions of units, of the main categories of consumer telecommunication equipment. In a telephone network, the caller is connected to the person to whom they wish to talk by switches at various telephone exchanges. The switches form an electrical connection between the two users and the setting of these switches is determined electronically when the caller dials the number. Once the connection is made, the caller's voice is transformed to an electrical signal using a small microphone in the caller's handset. This electrical signal is then sent through the network to the user at the other end where it is transformed back into sound by a small speaker in that person's handset. As of 2015, the landline telephones in most residential homes are analogue—that is, the speaker's voice directly determines the signal's voltage. Although short-distance calls may be handled from end-to-end as analogue signals, increasingly telephone service providers are transparently converting the signals to digital for transmission. The advantage of this is that digitized voice data can travel side by side with data from the Internet and can be perfectly reproduced in long-distance communication (as opposed to analogue signals, which are inevitably impacted by noise). Mobile phones have had a significant impact on telephone networks. Mobile phone subscriptions now outnumber fixed-line subscriptions in many markets. 
Sales of mobile phones in 2005 totalled 816.6 million, with that figure being almost equally shared amongst the markets of Asia/Pacific (204 m), Western Europe (164 m), CEMEA (Central Europe, the Middle East and Africa) (153.5 m), North America (148 m) and Latin America (102 m). In terms of new subscriptions over the five years from 1999, Africa outpaced other markets with 58.2% growth. Increasingly these phones are being serviced by systems in which the voice content is transmitted digitally, such as GSM or W-CDMA, with many markets choosing to deprecate analog systems such as AMPS. There have also been dramatic changes in telephone communication behind the scenes. Starting with the operation of TAT-8 in 1988, the 1990s saw the widespread adoption of systems based on optical fibres. The benefit of communicating with optical fibres is that they offer a drastic increase in data capacity. TAT-8 itself was able to carry 10 times as many telephone calls as the last copper cable laid at that time, and today's optical fibre cables are able to carry 25 times as many telephone calls as TAT-8. This increase in data capacity is due to several factors: First, optical fibres are physically much smaller than competing technologies. Second, they do not suffer from crosstalk, which means several hundred of them can be easily bundled together in a single cable. Lastly, improvements in multiplexing have led to an exponential growth in the data capacity of a single fibre. Assisting communication across many modern optical fibre networks is a protocol known as Asynchronous Transfer Mode (ATM). The ATM protocol allows for the side-by-side data transmission mentioned above. It is suitable for public telephone networks because it establishes a pathway for data through the network and associates a traffic contract with that pathway. 
The traffic contract is essentially an agreement between the client and the network about how the network is to handle the data; if the network cannot meet the conditions of the traffic contract it does not accept the connection. This is important because telephone calls can negotiate a contract so as to guarantee themselves a constant bit rate, something that will ensure a caller's voice is not delayed in parts or cut off completely. There are competitors to ATM, such as Multiprotocol Label Switching (MPLS), that perform a similar task and are expected to supplant ATM in the future. In a broadcast system, the central high-powered broadcast tower transmits a high-frequency electromagnetic wave to numerous low-powered receivers. The high-frequency wave sent by the tower is modulated with a signal containing visual or audio information. The receiver is then tuned so as to pick up the high-frequency wave and a demodulator is used to retrieve the signal containing the visual or audio information. The broadcast signal can be either analogue (signal is varied continuously with respect to the information) or digital (information is encoded as a set of discrete values). The broadcast media industry is at a critical turning point in its development, with many countries moving from analogue to digital broadcasts. This move is made possible by the production of cheaper, faster and more capable integrated circuits. The chief advantage of digital broadcasts is that they prevent a number of complaints common to traditional analogue broadcasts. For television, this includes the elimination of problems such as snowy pictures, ghosting and other distortion. These occur because of the nature of analogue transmission, which means that perturbations due to noise will be evident in the final output. Digital transmission overcomes this problem because digital signals are reduced to discrete values upon reception and hence small perturbations do not affect the final output. 
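The reduction of a received signal to discrete values can be sketched as a simple threshold decoder. This is a minimal illustration, assuming ideal amplitude levels of 0.0 and 1.0 and a midpoint decision threshold; real receivers combine this with forward error correction.

```python
def threshold_decode(amplitudes, threshold=0.5):
    """Reduce received signal amplitudes to discrete binary values:
    small perturbations around the ideal levels 0.0 and 1.0 do not
    change the decoded message."""
    return [1 if a > threshold else 0 for a in amplitudes]

# A transmitted message [1, 0, 1, 1] received with noisy amplitudes
# still decodes perfectly...
decoded = threshold_decode([0.9, 0.2, 1.1, 0.9])
# ...but severe noise on one symbol flips the decoded bit entirely.
corrupted = threshold_decode([0.4, 0.2, 1.1, 0.9])
```

The second call shows the catastrophic-failure mode mentioned earlier: once the perturbation pushes an amplitude across the threshold, the decoder confidently emits the wrong bit.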
In a simplified example, if a binary message 1011 was transmitted with signal amplitudes [1.0 0.0 1.0 1.0] and received with signal amplitudes [0.9 0.2 1.1 0.9], it would still decode to the binary message 1011, a perfect reproduction of what was sent. From this example, a weakness of digital transmission is also apparent: if the noise is great enough, it can significantly alter the decoded message. Using forward error correction, a receiver can correct a handful of bit errors in the resulting message, but too much noise will lead to incomprehensible output and hence a breakdown of the transmission. In digital television broadcasting, there are three competing standards that are likely to be adopted worldwide: the ATSC, DVB and ISDB standards. All three standards use MPEG-2 for video compression. ATSC uses Dolby Digital AC-3 for audio compression, ISDB uses Advanced Audio Coding (MPEG-2 Part 7) and DVB has no standard for audio compression but typically uses MPEG-1 Part 3 Layer 2. The choice of modulation also varies between the schemes. In digital audio broadcasting, standards are much more unified, with practically all countries choosing to adopt the Digital Audio Broadcasting standard (also known as the Eureka 147 standard). The exception is the United States, which has chosen to adopt HD Radio. HD Radio, unlike Eureka 147, is based upon a transmission method known as in-band on-channel transmission that allows digital information to piggyback on normal AM or FM analog transmissions. However, despite the pending switch to digital, analog television is still transmitted in most countries. An exception is the United States, which ended analog television transmission (by all but the very low-power TV stations) on 12 June 2009 after twice delaying the switchover deadline. Kenya also ended analog television transmission in December 2014 after multiple delays. 
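The thresholding step in the example above can be written out directly; the 0.5 decision threshold is an assumption for this illustration (real receivers derive decision levels from the modulation scheme and noise model).

```python
# Decode noisy received amplitudes back to bits by thresholding at 0.5.
# The threshold value is an assumption for this illustration; real
# receivers derive decision levels from the modulation and noise model.

def decode(amplitudes, threshold=0.5):
    return [1 if a >= threshold else 0 for a in amplitudes]

sent     = [1, 0, 1, 1]            # the binary message 1011
received = [0.9, 0.2, 1.1, 0.9]    # amplitudes after channel noise
assert decode(received) == sent    # perfect reproduction of what was sent

# With enough noise the decoding fails: 0.6 has crossed the threshold,
# so the second bit is read as 1 instead of 0.
too_noisy = [0.9, 0.6, 1.1, 0.9]
assert decode(too_noisy) == [1, 1, 1, 1]
```

Small perturbations (0.9 instead of 1.0) vanish entirely on decoding, while a perturbation large enough to cross the decision threshold corrupts the message, which is exactly the failure mode forward error correction exists to repair.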
For analogue television, there were three standards in use for broadcasting colour TV. These are known as PAL (German designed), NTSC (American designed), and SECAM (French designed). For analogue radio, the switch to digital radio is made more difficult by the higher cost of digital receivers. The choice of modulation for analogue radio is typically between amplitude (AM) or frequency modulation (FM). To achieve stereo playback, an amplitude modulated subcarrier is used for stereo FM, and quadrature amplitude modulation is used for stereo AM or C-QUAM. The Internet is a worldwide network of computers and computer networks that communicate with each other using the Internet Protocol (IP). Any computer on the Internet has a unique IP address that can be used by other computers to route information to it. Hence, any computer on the Internet can send a message to any other computer using its IP address. These messages carry with them the originating computer's IP address, allowing for two-way communication. The Internet is thus an exchange of messages between computers. It is estimated that 51% of the information flowing through two-way telecommunications networks in the year 2000 was flowing through the Internet (most of the rest (42%) through the landline telephone). By 2007, the Internet clearly dominated and captured 97% of all the information in telecommunication networks (most of the rest (2%) through mobile phones). As of 2008, an estimated 21.9% of the world population had access to the Internet, with the highest access rates (measured as a percentage of the population) in North America (73.6%), Oceania/Australia (59.5%) and Europe (48.1%). In terms of broadband access, Iceland (26.7%), South Korea (25.4%) and the Netherlands (25.3%) led the world. The Internet works in part because of protocols that govern how the computers and routers communicate with each other. 
The nature of computer network communication lends itself to a layered approach where individual protocols in the protocol stack run more-or-less independently of other protocols. This allows lower-level protocols to be customized for the network situation while not changing the way higher-level protocols operate. A practical example of why this is important: a web browser can run the same code regardless of whether the computer it is running on is connected to the Internet through Ethernet or Wi-Fi. Protocols are often talked about in terms of their place in the OSI reference model, which emerged in 1983 as the first step in an unsuccessful attempt to build a universally adopted networking protocol suite. For the Internet, the physical medium and data link protocol can vary several times as packets traverse the globe. This is because the Internet places no constraints on what physical medium or data link protocol is used. This leads to the adoption of media and protocols that best suit the local network situation. In practice, most intercontinental communication will use the Asynchronous Transfer Mode (ATM) protocol (or a modern equivalent) on top of optical fibre. This is because for most intercontinental communication the Internet shares the same infrastructure as the public switched telephone network. At the network layer, things become standardized with the Internet Protocol (IP) being adopted for logical addressing. For the World Wide Web, these IP addresses are derived from the human-readable form using the Domain Name System (e.g., 72.14.207.99 is derived from a Google domain name). At the moment, the most widely used version of the Internet Protocol is version four, but a move to version six is imminent. At the transport layer, most communication adopts either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). 
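The two versions of logical addressing mentioned above can be illustrated with Python's standard-library `ipaddress` module; the IPv4 address is the one quoted in the text, and the IPv6 address is a documentation-range example.

```python
# Logical addressing at the network layer: IPv4 (version four) and the
# newer IPv6 (version six), via Python's standard-library ipaddress module.
import ipaddress

v4 = ipaddress.ip_address("72.14.207.99")  # the IPv4 address quoted above
v6 = ipaddress.ip_address("2001:db8::1")   # an IPv6 documentation-range address

# IPv4 addresses are 32 bits; IPv6 expands the address space to 128 bits.
assert (v4.version, v4.max_prefixlen) == (4, 32)
assert (v6.version, v6.max_prefixlen) == (6, 128)
```

The jump from 32-bit to 128-bit addresses is the main motivation for the move from version four to version six: the IPv4 space allows only about 4.3 billion unique addresses.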
TCP is used when it is essential that every message sent is received by the other computer, whereas UDP is used when it is merely desirable. With TCP, packets are retransmitted if they are lost and placed in order before they are presented to higher layers. With UDP, packets are neither ordered nor retransmitted if lost. Both TCP and UDP packets carry port numbers with them to specify what application or process the packet should be handled by. Because certain application-level protocols use certain ports, network administrators can manipulate traffic to suit particular requirements. Examples are to restrict Internet access by blocking the traffic destined for a particular port or to affect the performance of certain applications by assigning priority. Above the transport layer, there are certain protocols that are sometimes used and loosely fit in the session and presentation layers, most notably the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. These protocols ensure that data transferred between two parties remains confidential. Finally, at the application layer are many of the protocols Internet users would be familiar with, such as HTTP (web browsing), POP3 (e-mail), FTP (file transfer), IRC (Internet chat), BitTorrent (file sharing) and XMPP (instant messaging). Voice over Internet Protocol (VoIP) allows data packets to be used for synchronous voice communications. The data packets are marked as voice-type packets and can be prioritized by the network administrators so that the real-time, synchronous conversation is less subject to contention with other types of data traffic, which can be delayed (e.g., file transfer or email) or buffered in advance (e.g., audio and video) without detriment. 
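Port numbers can be seen in action with a minimal UDP exchange over the loopback interface, sketched here with Python's standard `socket` module; the port is chosen by the operating system rather than tied to any well-known service.

```python
# Port numbers in practice: a minimal UDP exchange over the loopback
# interface. UDP is used here because a datagram needs no connection
# handshake; the (IP address, port) pair tells the stack which socket
# should receive the packet.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
receiver.settimeout(5.0)
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))  # addressed by (IP, port)

data, addr = receiver.recvfrom(1024)  # data == b"hello"
sender.close()
receiver.close()
```

A TCP version of the same exchange would add a connection handshake, acknowledgements and retransmission; UDP simply hands the datagram to the socket bound to the destination port, which is why it suits traffic where timeliness matters more than delivery guarantees.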
That prioritization is fine when the network has sufficient capacity for all the VoIP calls taking place at the same time and the network is enabled for prioritization, e.g., a private corporate-style network, but the Internet is not generally managed in this way, and so there can be a big difference in the quality of VoIP calls over a private network and over the public Internet. Despite the growth of the Internet, the characteristics of local area networks (LANs)—computer networks that do not extend beyond a few kilometres—remain distinct. This is because networks on this scale do not require all the features associated with larger networks and are often more cost-effective and efficient without them. When they are not connected with the Internet, they also have the advantages of privacy and security. However, purposefully lacking a direct connection to the Internet does not provide assured protection from hackers, military forces, or economic powers. These threats exist if there are any methods for connecting remotely to the LAN. Wide area networks (WANs) are private computer networks that may extend for thousands of kilometres. Once again, some of their advantages include privacy and security. Prime users of private LANs and WANs include armed forces and intelligence agencies that must keep their information secure and secret. In the mid-1980s, several sets of communication protocols emerged to fill the gaps between the data-link layer and the application layer of the OSI reference model. These included AppleTalk, IPX, and NetBIOS; the dominant protocol set during the early 1990s was IPX, due to its popularity with MS-DOS users. TCP/IP existed at this point, but it was typically only used by large government and research facilities. As the Internet grew in popularity and its traffic was required to be routed into private networks, the TCP/IP protocols replaced existing local area network technologies. 
Additional technologies, such as DHCP, allowed TCP/IP-based computers to self-configure in the network. Such functions also existed in the AppleTalk/IPX/NetBIOS protocol sets. Whereas Asynchronous Transfer Mode (ATM) and Multiprotocol Label Switching (MPLS) are typical data-link protocols for larger networks such as WANs, Ethernet and Token Ring are typical data-link protocols for LANs. These protocols differ from the former in that they are simpler, e.g., they omit features such as quality-of-service guarantees, and offer medium access control. Both of these differences allow for more economical systems. Despite the modest popularity of Token Ring in the 1980s and 1990s, virtually all LANs now use either wired or wireless Ethernet facilities. At the physical layer, most wired Ethernet implementations use copper twisted-pair cables (including the common 10BASE-T networks). However, some early implementations used heavier coaxial cables, and some recent implementations (especially high-speed ones) use optical fibres. When optical fibres are used, the distinction must be made between multimode fibres and single-mode fibres. Multimode fibres can be thought of as thicker optical fibres that are cheaper to manufacture devices for, but that suffer from less usable bandwidth and worse attenuation, implying poorer long-distance performance. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_ref-87] | [TOKENS: 4733] |
Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9+1⁄2 mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church, and it remains a titular see to this day. Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. 
The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic ("to quarrel; withhold, hinder"). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture. Occupation continued in the Levant Chalcolithic. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze Age, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa. Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the late phase included a circular stone structure. Later excavations have revealed another occupation layer, Stratum IV. It consists of two phases, Stratum IVb with a mudbrick wall on stone foundations and rounded exterior corners. 
In Stratum IVa there was a mudbrick wall with no stone foundations, with imported Egyptian pottery and local imitations. Another excavation revealed nine occupation strata. Strata VI-III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V-II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish-Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BC, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. 
In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken, and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), it is said that Joshua ben Levi founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted. In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian, who was born there between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. 
After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod, referred to in Arabic as "al-Ludd", served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque, which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, the Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–8. According to Qalqashandi, Lydda was an administrative centre of a wilaya during the fourteenth and fifteenth centuries in the Mamluk empire. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. 
In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596, Lydda was a part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax rate of 33.3% on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special products ("dawalib" = spinning wheels), goats and beehives, in addition to occasional revenues and market toll, a total of 45,000 Akçe. All of the revenue went to the Waqf. In 1051 AH (1641/2 CE), the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M. Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as: 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. 
In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, as per a League of Nations decree that followed the Great War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); the Christians were 921 Orthodox, 4 Roman Catholics and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000—18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. 
In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda's principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: Arab historian Aref al Aref estimated 400, and Nimr al Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there. A key event was the Palestinian expulsion from Lydda and Ramle, with the expulsion of 50,000-70,000 Palestinians by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command and forced to walk 17 km (10+1⁄2 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. 
According to a 2010 report in The Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods, and construction in Jewish areas was given priority over construction in Arab neighbourhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organisations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy the Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but the "crackdown came for one side" only. Demographics In the 19th century and until the Lydda Death March, Lod was an exclusively Muslim-Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500 people. According to the 2019 census, the population of Lod was 77,223, of which 53,581 people, comprising 69.4% of the city's population, were classified as "Jews and Others", and 23,642 people, comprising 30.6%, as "Arab". Education According to CBS, 38 schools and 13,188 pupils are in the city. They are spread out as 26 elementary schools with 8,325 elementary school pupils, and 13 high schools with 4,863 high school pupils. 
About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001. Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard", "Cafe-Co", a subsidiary of the Strauss Group, and "Kashev", the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed. The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009–2010, Dor Guez held Georgeopolis, an exhibit focusing on Lod, at the Petach Tikva art museum. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to the widening of HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. The mosaic is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. 
The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home is at the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy) was established soon after, but folded in 2007. Notable people Twin towns-sister cities Lod is twinned with: See also References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Moonshot_AI] | [TOKENS: 1435] |
Moonshot AI Moonshot AI (Moonshot; Chinese: 月之暗面; pinyin: Yuè Zhī Ànmiàn; lit. 'Dark Side of the Moon') is an artificial intelligence (AI) company based in Beijing, China. Investors have dubbed it one of China's "AI Tiger" companies for its focus on developing large language models. Background Moonshot was founded in March 2023 by Yang Zhilin, Zhou Xinyu and Wu Yuxin, who were schoolmates at Tsinghua University. It was launched on the 50th anniversary of Pink Floyd's The Dark Side of the Moon, which was Yang's favorite album and the inspiration for the company's name. Yang has stated that his goal in founding Moonshot AI is to build foundation models to achieve AGI. His three stated milestones are long context length, a multimodal world model, and a scalable general architecture capable of continuous self-improvement without human input. In October 2023, the company released the first version of its chatbot, Kimi, which was capable of processing up to 200,000 Chinese characters per conversation. In June 2024, it was reported that Moonshot was planning to enter the US market. An insider revealed Moonshot was developing products for the US market, including an AI role-playing chat application called Ohai as well as a music video generator called Noisee. In response, Moonshot stated it had no plans to develop and release overseas products. In January 2026, Moonshot released Kimi K2.5, a multimodal upgrade to Kimi K2 that added native vision capabilities through a 400-million-parameter vision encoder called MoonViT. The model can process both images and video, enabling agentic tasks such as replicating website user journeys from video demonstrations alone. The K2 model had been released just three months prior. Funding and investments Moonshot was valued at $300 million when it received its initial funding of $60 million and had 40 employees. In February 2024, Alibaba Group led a $1 billion funding round for Moonshot, which gave it a valuation of $2.5 billion. 
In August 2024, Tencent and Gaorong Capital joined as investors in a $300 million funding round that valued Moonshot at $3.3 billion. In October 2025, Moonshot was reportedly nearing completion of a new funding round of approximately $600 million, led by IDG Capital with participation from existing investors including Tencent, valuing the company at $3.8 billion pre-money.

Products and research

In October 2023, Moonshot launched its first AI chatbot, Kimi, whose name comes from Yang's English nickname. It emerged as the closest rival to Baidu's Ernie Bot. In March 2024, Moonshot claimed Kimi could handle 2 million Chinese characters in a single prompt, a significant upgrade from the previous version, which could handle only 200,000. On 21 March, under the increased user load, Kimi suffered a two-day outage and Moonshot had to issue an apology. As of August 2024, Kimi ranked third in monthly active users according to aicpb.com. On 20 January 2025, Kimi K1.5 was released; Moonshot claimed it matched the performance of OpenAI o1 in mathematics, coding, and multimodal reasoning. In June 2025, Kimi dropped in popularity to seventh place in monthly active users. In July 2025, the company released the weights for Kimi K2, a large language model with 1 trillion total parameters. The model uses a mixture-of-experts (MoE) architecture in which 32 billion parameters are active during inference. K2 was trained on 15.5 trillion tokens of data and is released under a modified MIT license. Kimi K2 is an open-source LLM, meaning it can be downloaded and built upon by users. The day after its release, Kimi K2 had the most downloads on the platform, an increase in popularity from previous months. Moonshot claims that the model excels in coding tasks, having passed tests like LiveCodeBench. In certain instances, the model performed on par with or better than its Western counterparts. It has also been praised for its writing skills.
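The sparse mixture-of-experts computation described above, in which only a small fraction of total parameters is active for any given token, can be sketched with a minimal top-k router. This is an illustrative toy under assumed shapes, not Kimi K2's actual architecture; every name and size here is made up for the example.

```python
import numpy as np

def moe_forward(x, expert_weights, router_weights, top_k=2):
    """Route each token to its top-k experts and mix their outputs
    by the renormalized router probabilities.

    x:              (tokens, d) input activations
    expert_weights: (n_experts, d, d) one weight matrix per expert
    router_weights: (d, n_experts) linear router
    """
    logits = x @ router_weights                        # (tokens, n_experts)
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)      # softmax over experts
    topk = np.argsort(-probs, axis=-1)[:, :top_k]      # chosen expert ids
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = topk[t]
        gate = probs[t, chosen] / probs[t, chosen].sum()  # renormalize gates
        for g, e in zip(gate, chosen):
            # Only the chosen experts' parameters are touched for this token,
            # which is why active parameters << total parameters.
            out[t] += g * (x[t] @ expert_weights[e])
    return out
```

With, say, 32 experts and `top_k=2`, each token exercises only 1/16 of the expert parameters per layer, which is the mechanism behind K2's 32-billion-active out of 1-trillion-total split.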
On 9 September 2025, Moonshot AI released an updated version of K2, Kimi-K2-Instruct-0905, which further improved its performance on agentic coding tasks and doubled its context window from 128K to 256K tokens. The release of Kimi K2 follows a trend among Chinese companies of open-sourcing their AI models, likely an effort to counter US attempts to limit China's tech growth. In November 2025, Moonshot released Kimi K2 Thinking, an open-source update to Kimi K2 designed for advanced reasoning and agentic tasks. The model, trained for approximately $4.6 million, features a 1-trillion-parameter MoE architecture with 32 billion active parameters and supports contexts of up to 256,000 tokens. It can execute 200-300 sequential tool calls autonomously and uses native INT4 quantization for efficiency. Benchmarks showed it outperforming GPT-5 and Claude Sonnet 4.5 on tests including Humanity's Last Exam (44.9%), BrowseComp (60.2%), and SWE-Bench Verified (71.3%). It is released under a modified MIT license requiring attribution for products exceeding 100 million monthly users or $20 million in monthly revenue. In China, Kimi has six tiers of plans, ranging from 5.2 yuan for four days to 399 yuan for a year of priority use. Mooncake is the platform that serves Moonshot's Kimi chatbot; it processes 100 billion tokens daily. Moonshot was awarded the Erik Riedel Best Paper Award at the USENIX FAST conference for the paper detailing Mooncake's architecture. In the Moonshot and UCLA joint paper "Muon is Scalable for LLM Training", the researchers claim to have successfully scaled the Muon optimizer, previously known for strong results in training small language models, to train a 16-billion-parameter mixture-of-experts (MoE) large language model with 3 billion active parameters. The researchers indicate that Muon improves computational efficiency by a factor of 2 compared with the standard optimizer, AdamW, in training large models.
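The core Muon update can be sketched in a few lines: accumulate momentum as usual, then replace the update direction with an approximately orthogonalized version of it via a Newton-Schulz iteration. This is a minimal single-matrix NumPy illustration of the publicly described idea, not Moonshot's distributed implementation; the iteration coefficients and hyperparameters below are taken from the optimizer's original open-source description and should be treated as assumptions.

```python
import numpy as np

def newton_schulz_orthogonalize(g, steps=5):
    """Approximately orthogonalize a matrix with a quintic
    Newton-Schulz iteration (coefficients as published by the
    original Muon authors; assumed here, not verified against
    Moonshot's code)."""
    a, b, c = 3.4445, -4.7750, 2.0315
    x = g / (np.linalg.norm(g) + 1e-7)   # scale so singular values < 1
    transposed = x.shape[0] > x.shape[1]
    if transposed:                        # iterate on the "wide" orientation
        x = x.T
    for _ in range(steps):
        xxt = x @ x.T
        x = a * x + (b * xxt + c * xxt @ xxt) @ x
    return x.T if transposed else x

def muon_step(weight, grad, momentum, lr=0.02, beta=0.95):
    """One illustrative Muon update for a single 2-D weight matrix:
    momentum accumulation followed by orthogonalized descent."""
    momentum = beta * momentum + grad
    update = newton_schulz_orthogonalize(momentum)
    weight -= lr * update
    return weight, momentum
```

The point of the orthogonalization step is that the update's singular values are pushed toward 1, so every direction in the weight matrix receives a comparably sized step, which is credited with the efficiency gain over AdamW.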
The researchers have open-sourced their Muon optimizer implementation along with the pretrained and instruction-tuned checkpoints. In their technical report on the Kimi K1.5 model, Moonshot researchers outline the reinforcement learning methods that, they claim, enabled the model to achieve state-of-the-art reasoning capabilities on par with OpenAI's o1 model. The researchers note that long-context scaling and improved policy optimization methods were key, without relying on complex techniques like Monte Carlo tree search, value functions, or process reward models.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Electronic_commerce] | [TOKENS: 5500] |
E-commerce

E-commerce (electronic commerce) refers to commercial activity involving the electronic buying or selling of products and services, conducted on online platforms or over the Internet. E-commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. E-commerce is a part of retail. It is the largest segment of the electronics industry and is in turn driven by the technological advances of the semiconductor industry.

Defining e-commerce

The term was coined and first employed by Robert Jacobson, Principal Consultant to the California State Assembly's Utilities & Commerce Committee, in the title and text of California's Electronic Commerce Act, carried by the late Committee Chairwoman Gwen Moore (D-L.A.) and enacted in 1984. E-commerce typically uses the web for at least part of a transaction's life cycle, although it may also use other technologies such as e-mail. Typical e-commerce transactions include the purchase of products (such as books from Amazon) or services (such as music downloads in the form of digital distribution, for example the iTunes Store). There are three areas of e-commerce: online retailing, electronic markets, and online auctions. E-commerce is supported by electronic business. The value of e-commerce lies in allowing consumers to shop and pay online over the Internet, saving time and space for customers and enterprises alike and greatly improving transaction efficiency, especially for busy office workers. E-commerce businesses may also employ some or all of the following: There are five essential categories of E-commerce:

Forms

Contemporary electronic commerce can be classified into two categories.
The first category is business based on the types of goods sold (involving everything from ordering "digital" content for immediate online consumption, to ordering conventional goods and services, to "meta" services that facilitate other types of electronic commerce). The second category is based on the nature of the participant (B2B, B2C, C2B and C2C). On the institutional level, big corporations and financial institutions use the internet to exchange financial data to facilitate domestic and international business. Data integrity and security are pressing issues for electronic commerce. Aside from traditional e-commerce, the terms m-commerce (mobile commerce) and, from around 2013, t-commerce have also been used.

Governmental regulation

In the United States, California's Electronic Commerce Act (1984), enacted by the Legislature, and the more recent California Privacy Rights Act (2020), enacted through a ballot proposition, specifically control how electronic commerce may be conducted in California. In the US as a whole, electronic commerce activities are regulated more broadly by the Federal Trade Commission (FTC). These activities include the use of commercial e-mails, online advertising and consumer privacy. The CAN-SPAM Act of 2003 establishes national standards for direct marketing over e-mail. The Federal Trade Commission Act regulates all forms of advertising, including online advertising, and states that advertising must be truthful and non-deceptive. Using its authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, the FTC has brought a number of cases to enforce the promises in corporate privacy statements, including promises about the security of consumers' personal information. As a result, any corporate privacy policy related to e-commerce activity may be subject to enforcement by the FTC.
The Ryan Haight Online Pharmacy Consumer Protection Act of 2008 amends the Controlled Substances Act to address online pharmacies. Conflict of laws in cyberspace is a major hurdle for the harmonization of legal frameworks for e-commerce around the world. To give uniformity to e-commerce law worldwide, many countries adopted the UNCITRAL Model Law on Electronic Commerce (1996). Internationally there is the International Consumer Protection and Enforcement Network (ICPEN), formed in 1991 from an informal network of government consumer fair trade organisations. Its stated purpose was to find ways of co-operating on tackling consumer problems connected with cross-border transactions in both goods and services, and to help ensure exchanges of information among the participants for mutual benefit and understanding. From this came Econsumer.gov, an ICPEN initiative since April 2001, a portal for reporting complaints about online and related transactions with foreign companies. There is also the Asia-Pacific Economic Cooperation (APEC), established in 1989 with the vision of achieving stability, security and prosperity for the region through free and open trade and investment. APEC has an Electronic Commerce Steering Group and also works on common privacy regulations throughout the APEC region. In Australia, trade is covered under the Australian Treasury Guidelines for electronic commerce, and the Australian Competition & Consumer Commission regulates and offers advice on dealing with businesses online, including specific advice on what happens if things go wrong.
The European Union undertook an extensive enquiry into e-commerce in 2015–16, which observed significant growth in e-commerce along with some developments that raised concerns, such as increased use of selective distribution systems, which allow manufacturers to control routes to market, and "increased use of contractual restrictions to better control product distribution". The European Commission felt that some emerging practices might be justified if they could improve the quality of product distribution, but "others may unduly prevent consumers from benefiting from greater product choice and lower prices in e-commerce and therefore warrant Commission action" to promote compliance with EU competition rules. In the United Kingdom, the Financial Services Authority (FSA) was formerly the regulating authority for most aspects of the EU's Payment Services Directive (PSD), until its replacement in 2013 by the Prudential Regulation Authority and the Financial Conduct Authority. The UK implemented the PSD through the Payment Services Regulations 2009 (PSRs), which came into effect on 1 November 2009. The PSRs affect firms providing payment services and their customers, including banks, non-bank credit card issuers, non-bank merchant acquirers, e-money issuers, and others. The PSRs created a new class of regulated firms known as payment institutions (PIs), which are subject to prudential requirements. Article 87 of the PSD required the European Commission to report on the implementation and impact of the PSD by 1 November 2012. In India, the Information Technology Act 2000 governs the basic applicability of e-commerce. In China, the Telecommunications Regulations of the People's Republic of China (promulgated on 25 September 2000) designated the Ministry of Industry and Information Technology (MIIT) as the government department regulating all telecommunications-related activities, including electronic commerce.
On the same day, the Administrative Measures on Internet Information Services were released, the first administrative regulations to address profit-generating activities conducted through the Internet and the foundation for future regulations governing e-commerce in China. On 28 August 2004, the eleventh session of the tenth NPC Standing Committee adopted an Electronic Signature Law, which regulates data messages, electronic signature authentication and related legal liability. It is considered the first law in China's e-commerce legislation, a milestone in improving China's electronic commerce legislation and a marker of China's entry into a stage of rapid development for e-commerce legislation.

Global trends

E-commerce has become an important tool for small and large businesses worldwide, not only to sell to customers but also to engage them. Cross-border e-commerce is also an essential field for e-commerce businesses and a response to the trend of globalization. Numerous firms have opened up new businesses, expanded into new markets, and overcome trade barriers, and more and more enterprises have started exploring cross-border cooperation. In addition, compared with traditional cross-border trade, information on cross-border e-commerce is more concealed. In the era of globalization, cross-border e-commerce between firms means the activities, interactions, or social relations of two or more e-commerce enterprises. The success of cross-border e-commerce has promoted the development of small and medium-sized firms, and it has become a new transaction mode, helping companies solve financial problems and allocate resources rationally. SMEs (small and medium-sized enterprises) can also precisely match demand and supply in the market, optimizing the industrial chain and creating more revenue.
In 2012, e-commerce sales topped $1 trillion for the first time in history. Mobile devices are playing an increasing role in e-commerce; this is commonly called mobile commerce, or m-commerce. In 2014, one estimate saw purchases made on mobile devices making up 25% of the market by 2017. For traditional businesses, one study stated that information technology and cross-border e-commerce present a good opportunity for rapid development and growth. Many companies have invested heavily in mobile applications. The DeLone and McLean Model states that three perspectives contribute to a successful e-business: information system quality, service quality and user satisfaction. Free of the limits of time and space, businesses have more opportunities to reach customers around the world, can cut out unnecessary intermediaries and thereby reduce costs, and can benefit from one-on-one analysis of large customer datasets to achieve highly personalized strategies, fully enhancing the competitiveness of their products. Modern 3D graphics technologies, such as Facebook 3D Posts, are considered by some social media marketers and advertisers a preferable way to promote consumer goods compared with static photos, and some brands like Sony are already paving the way for augmented-reality commerce. Wayfair now lets shoppers inspect a 3D version of its furniture in a home setting before buying. Among emerging economies, China's e-commerce presence continues to expand every year. With 668 million Internet users as of 2014, China's online shopping sales reached $253 billion in the first half of 2015, accounting for 10% of total Chinese consumer retail sales in that period. Chinese retailers have been able to help consumers feel more comfortable shopping online.
E-commerce transactions between China and other countries increased 32% to 2.3 trillion yuan ($375.8 billion) in 2012 and accounted for 9.6% of China's total international trade. In 2013, Alibaba had an e-commerce market share of 80% in China. In 2014, Alibaba still dominated the B2B marketplace in China with a market share of 44.82%, followed by several other companies including Made-in-China.com at 3.21% and GlobalSources.com at 2.98%, with the total transaction value of China's B2B market exceeding 4.5 billion yuan. In 2012, Alibaba Group delisted Alibaba.com from the Hong Kong stock exchange after acquiring full control through a $2.5 billion buyback that returned the company to private ownership. The company's NYSE debut under the stock ticker BABA made headlines for being, at that time, the biggest IPO in U.S. history. Alibaba's International Digital Commerce Group (AIDC), which includes Alibaba.com's B2B operations, reported 22% year-over-year revenue growth in the quarter ending 31 March 2025 (Q4 FY2025). China was also the largest e-commerce market in the world by value of sales, with an estimated US$899 billion in 2016, accounting for 42.4% of worldwide retail e-commerce in that year, the most of any country. Research shows that Chinese consumer motivations differ enough from those of Western audiences to require unique e-commerce app designs rather than simply porting Western apps into the Chinese market.
The expansion of e-commerce in China has resulted in the development of Taobao villages, clusters of e-commerce businesses operating in rural areas. Because Taobao villages have increased the incomes of rural people and entrepreneurship in rural China, they have become a component of rural revitalization strategies. In 2015, the State Council promoted the Internet Plus initiative, a five-year plan to integrate traditional manufacturing and service industries with big data, cloud computing, and Internet of things technology. The State Council supported Internet Plus through policy support in areas including cross-border e-commerce and rural e-commerce. In 2019, the city of Hangzhou established, as a pilot program, an artificial-intelligence-based Internet Court to adjudicate disputes related to e-commerce and internet-related intellectual property claims. In 2010, the United Kingdom had the highest per-capita e-commerce spending in the world. As of 2013, the Czech Republic was the European country where e-commerce delivered the biggest contribution to enterprises' total revenue: almost a quarter (24%) of the country's total turnover was generated via the online channel. The number of internet users in the Arab countries has grown rapidly, by 13.1% in 2015. A significant portion of the e-commerce market in the Middle East comprises people in the 30–34 year age group. Egypt has the largest number of internet users in the region, followed by Saudi Arabia and Morocco; together these constitute three-quarters of the region's share. Yet internet penetration is low: 35% in Egypt and 65% in Saudi Arabia. The Gulf Cooperation Council (GCC) countries have a rapidly growing market characterized by a population that is becoming wealthier (Yuldashev). As such, retailers have launched Arabic-language websites to target this population.
Secondly, there are predictions of increased mobile purchases and an expanding internet audience (Yuldashev). These two developments are making the GCC countries larger players in the electronic commerce market over time. Specifically, research indicated that the e-commerce market in the GCC countries was expected to grow to over $20 billion by 2020 (Yuldashev). The e-commerce market has also gained much popularity among Western countries, particularly Europe and the U.S., which have been highly characterized by consumer packaged goods (CPG) (Geisler, 34). However, trends show signs of a reversal: as in the GCC countries, there has been increased purchasing of goods and services through online rather than offline channels. Activist investors are pushing hard to consolidate and slash overall costs, and governments in Western countries continue to impose more regulation on CPG manufacturers (Geisler, 36). In these senses, CPG investors are being forced to adapt to e-commerce, which is both effective and a means for them to thrive. Future trends in the GCC countries will be similar to those in Western countries. Despite the forces pushing businesses to adopt e-commerce as a means to sell goods and products, the way customers make purchases is similar across the two regions. For instance, increased smartphone usage has come in conjunction with growth in the overall internet audience in both regions. Yuldashev writes that consumers are scaling up to more modern technology that allows for mobile marketing. However, the percentage of smartphone and internet users who make online purchases is expected to vary in the first few years; it will depend on the willingness of people to adopt this new trend (The Statistics Portal).
For example, the UAE has the greatest smartphone penetration, at 73.8 per cent, and 91.9 per cent of its population has access to the internet. By comparison, smartphone penetration in Europe has been reported at 64.7 per cent (The Statistics Portal). Regardless, the disparity between these regions is expected to level out in future, because e-commerce technology is expected to grow to accommodate more users. E-commerce business within these two regions will lead to competition, and government bodies at the country level will enhance their measures and strategies to ensure sustainability and consumer protection (Krings, et al.). These increased measures will raise the environmental and social standards in the countries, factors that will determine the success of the e-commerce market there. For example, adoption of tough sanctions will make it difficult for companies to enter the e-commerce market, while lenient sanctions will ease entry. The future trends in the GCC and Western countries will therefore depend on these sanctions (Krings, et al.), and these countries need to reach rational conclusions in devising effective sanctions. India had an Internet user base of about 460 million as of December 2017. Despite being the third-largest user base in the world, internet penetration is low compared with markets like the United States, the United Kingdom or France, but it is growing at a much faster rate, adding around six million new entrants every month.[citation needed] In India, cash on delivery is the most preferred payment method, accounting for 75% of e-retail activity.[citation needed] E-retail's share of the Indian retail market was expected to rise from 2.5% in 2016 to 5% in 2020. In 2013, Brazil's e-commerce was growing quickly, with retail e-commerce sales expected to grow at a double-digit pace through 2014. By 2016, eMarketer expected retail e-commerce sales in Brazil to reach $17.3 billion.
Logistics

Logistics in e-commerce mainly concerns fulfillment: online markets and retailers have to find the best possible way to fill orders and deliver products. Small companies usually run their own logistics operations because they cannot afford to hire an outside firm, while most large companies hire a fulfillment service to take care of their logistics needs. Optimizing logistics processes, which requires long-term investment in an efficient storage infrastructure and the adoption of inventory management strategies, is crucial to customer satisfaction throughout the entire process from order placement to final delivery.

Impacts

E-commerce markets have grown at noticeable rates. The online market was expected to grow by 56% over 2015–2020. In 2017, retail e-commerce sales worldwide amounted to 2.3 trillion US dollars, and e-retail revenues were projected to grow to 4.891 trillion US dollars in 2021. Traditional markets were expected to grow only 2% during the same period. Brick-and-mortar retailers are struggling because of online retailers' ability to offer lower prices and higher efficiency. Many larger retailers are able to maintain a presence both offline and online by linking physical and online offerings. E-commerce allows customers to overcome geographical barriers and to purchase products anytime, from anywhere. Online and traditional markets follow different business strategies: traditional retailers offer a smaller assortment of products because of limited shelf space, whereas online retailers often hold no inventory at all, sending customer orders directly to the manufacturer. Dropshipping is a means of shipping goods from a manufacturer or wholesaler directly to a customer instead of to a retailer; the vendor holds no stock and serves as an intermediary between the buyer and the third-party supplier.
The dropshipping market is expected to reach $1.51 trillion by 2032, according to a Global Market Insights report that studied the main dropshipping markets, including Alibaba.com, Chinabrands.com, Doba, Printful, Salehoo, Shopify, and Spocket. Pricing strategies also differ for traditional and online retailers: traditional retailers base their prices on store traffic and the cost of keeping inventory, while online retailers base prices on the speed of delivery. There are two ways for marketers to conduct business through e-commerce: fully online, or online alongside a brick-and-mortar store. Online marketers can offer lower prices, greater product selection, and high efficiency rates, and many customers prefer online markets if products can be delivered quickly at relatively low prices. However, online retailers cannot offer the physical experience that traditional retailers can. It can be difficult to judge the quality of a product without that experience, which may cause customers to feel uncertain about the product or the seller. Another issue in the online market is concern about the security of online transactions; many customers remain loyal to well-known retailers because of it. Security is a primary problem for e-commerce in developed and developing countries alike. E-commerce security means protecting businesses' websites and customers from unauthorized access, use, alteration, or destruction. Threats include malicious code, unwanted programs (adware, spyware), phishing, hacking, and cyber vandalism. E-commerce websites use various tools to avert security threats, including firewalls, encryption software, digital certificates, and passwords.[citation needed] For a long time, companies were troubled by the gap between the benefits promised by supply chain technology and the solutions available to deliver those benefits.
However, the emergence of e-commerce has provided a more practical and effective way of delivering the benefits of the new supply chain technologies. E-commerce can integrate all inter-company and intra-company functions, meaning that the three flows of the supply chain (physical, financial and informational) may all be affected by it. The effect on physical flows has improved product and inventory movement for companies; for information flows, e-commerce has expanded companies' information-processing capacity; and for financial flows, it enables more efficient payment and settlement solutions. In addition, e-commerce has a more sophisticated level of impact on supply chains. Firstly, performance gaps can be eliminated, since companies can identify gaps between different levels of the supply chain by electronic means. Secondly, e-commerce has brought new capabilities such as implementing ERP systems (for example SAP ERP, Xero, or Megaventory) that help companies manage operations with customers and suppliers, though these capabilities are still not fully exploited. Thirdly, technology companies will keep investing in new e-commerce software solutions as they expect a return on investment. Fourthly, e-commerce helps solve many issues that companies may find difficult to cope with, such as political barriers or cross-country differences. Finally, e-commerce provides companies a more efficient and effective way to collaborate with each other within the supply chain. E-commerce helps create new job opportunities in information-related services, software apps and digital products. It also causes job losses; the areas with the greatest predicted job losses are retail, postal services, and travel agencies.
The development of e-commerce will create jobs that require highly skilled workers to manage large amounts of information, customer demands, and production processes; workers with poor technical skills, by contrast, will not share in these wage gains. On the other hand, because e-commerce requires sufficient stock that can be delivered to customers on time, the warehouse becomes an important element. Warehouses need more staff to manage, supervise and organize them, so warehouse working conditions become a concern for employees. E-commerce brings convenience to customers, who do not have to leave home and need only browse websites, especially when buying products that are not sold in nearby shops. It helps customers buy a wider range of products and saves them time. Consumers also gain power through online shopping: they can research products and compare prices across retailers. Thanks to user-generated ratings and reviews from companies like Bazaarvoice, Trustpilot, and Yelp, customers can also see what other people think of a product and decide before buying whether to spend money on it. Online shopping also often provides sales promotions or discount codes, making it more price-effective for customers. Moreover, e-commerce provides detailed product information that even in-store staff cannot match, and customers can review and track their order history online. E-commerce technologies cut transaction costs by allowing both manufacturers and consumers to bypass intermediaries, achieved by widening the search for the best price and by group purchasing. The success of e-commerce at urban and regional levels depends on how well local firms and consumers have adapted to it. However, e-commerce lacks human interaction, especially for customers who prefer face-to-face contact.
Customers are also concerned about the security of online transactions and tend to remain loyal to well-known retailers. In recent years, clothing retailers such as Tommy Hilfiger have started adding Virtual Fit platforms to their e-commerce sites to reduce the risk of customers buying the wrong-sized clothes, although these vary greatly in their fitness for purpose. When a customer regrets a purchase, returning the goods and obtaining a refund is inconvenient, since the customer must pack and post them; for expensive, large or fragile products this also raises safety concerns. In 2018, e-commerce generated 1.3 million short tons (1.2 megatonnes) of container cardboard in North America, up from 1.1 million short tons (1.0 megatonnes) in 2017. Only 35 percent of North American cardboard manufacturing capacity was from recycled content, compared with recycling rates of 80 percent in Europe and 93 percent in Asia. Amazon, the largest user of boxes, had a strategy to cut back on packing material and had reduced packaging material used by 19 percent by weight since 2016. Amazon requires retailers to manufacture their product packaging so that it does not need additional shipping packaging, and it has an 85-person team researching ways to reduce and improve its packaging and shipping materials. The accelerated movement of packages around the world also accelerates the movement of living things, such as invasive species: weeds, pests, and diseases all sometimes travel in packages of seeds. Some of these packages are part of "brushing", a manipulation of e-commerce reviews. E-commerce has been cited as a major force in the failure of major U.S. retailers, a trend frequently referred to as a "retail apocalypse." The rise of e-commerce outlets like Amazon has made it harder for traditional retailers to attract customers to their stores and has forced companies to change their sales strategies.
Many companies have turned to sales promotions and increased digital efforts to lure shoppers while shutting down brick-and-mortar locations. The trend has forced some traditional retailers to shutter their brick-and-mortar operations. E-commerce during COVID-19 In March 2020, global retail website traffic hit 14.3 billion visits, signifying an unprecedented growth of e-commerce during the lockdown of 2020. Later studies showed that online sales increased by 25% and online grocery shopping by over 100% during the crisis in the United States. Meanwhile, as many as 29% of surveyed shoppers stated that they would never go back to shopping in person; in the UK, 43% of consumers stated that they expected to keep shopping the same way even after the lockdown was over. Retail e-commerce sales figures show that COVID-19 had a significant impact on e-commerce, with sales expected to reach $6.5 trillion by 2023. Business application Some common applications related to electronic commerce are: Timeline A timeline for the development of e-commerce: See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Wahlquist_fluid] | [TOKENS: 259] |
Contents Wahlquist fluid In general relativity, the Wahlquist fluid is an exact rotating perfect fluid solution to Einstein's equation with an equation of state corresponding to constant gravitational mass density. Introduction The Wahlquist fluid was first discovered by Hugo D. Wahlquist in 1968. It is one of the few known exact rotating perfect fluid solutions in general relativity. The solution reduces to the static Whittaker metric in the limit of zero rotation. Metric The metric of a Wahlquist fluid is given by an expression (not reproduced here) in which ξ_A is defined by h̃₂(ξ_A) = 0. It is a solution with equation of state μ + 3p = μ₀, where μ₀ is a constant. Properties The pressure and density of the Wahlquist fluid are given by expressions (not reproduced here) satisfying this equation of state. The vanishing-pressure surface of the fluid is prolate, in contrast to physical rotating stars, which are oblate. It has been shown that the Wahlquist fluid cannot be matched to an asymptotically flat region of spacetime. References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-150] | [TOKENS: 4314] |
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026[update], the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected to come out in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks based on searches in 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language, which was inspired by SETL, capable of exception handling and interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. 
Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, introducing new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language, and made a few (considered very minor) backward-incompatible changes. As of January 2026[update], Python 3.14.3 is the latest stable release. Older 3.x versions received final security updates ending with Python 3.9.24 and then 3.9.25, the last release in the 3.9 series. Python 3.10 is, since November 2025, the oldest supported branch. 
Python 3.15 has had an alpha release, and an official downloadable executable for Android is available for Python 3.14. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as these: However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. 
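The functional tools just mentioned can be sketched briefly; the variable names here are illustrative, not from any official tutorial:

```python
from functools import reduce
from itertools import islice, count

squares = [n * n for n in range(6)]                  # list comprehension
evens = list(filter(lambda n: n % 2 == 0, squares))  # filter with a lambda

# map transforms each element; functools.reduce folds them together
total = reduce(lambda a, b: a + b, map(lambda n: n + 1, evens))

# itertools supports lazy iteration, even over unbounded sequences
first_odds = list(islice((n for n in count() if n % 2), 3))
```

Both itertools and functools are ordinary standard-library modules, in keeping with the module-based extensibility noted above.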
This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python strives for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do ... while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli is a Fellow at the Python Software Foundation and a Python book author; he wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance. For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability.[failed verification] Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. Also, it is possible to transpile to other languages. However, this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or only a restricted subset of Python is compiled (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. 
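The point about string literals is easy to demonstrate; all three of these spellings (printf-style, str.format, and f-strings) produce the same result, with the names here chosen purely for illustration:

```python
name, version = "Python", 3

a = "%s %d" % (name, version)       # printf-style formatting
b = "{} {}".format(name, version)   # the str.format method
c = f"{name} {version}"             # formatted string literal (Python 3.6+)

assert a == b == c == "Python 3"
```

Tutorials often demonstrate such features with playful example data.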
For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way; but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Python's statements include the following: The assignment statement (=) binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing—in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. 
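A minimal sketch of the indentation-delimited blocks and dynamic typing described above (the names are illustrative):

```python
def countdown(n):
    # Indentation alone delimits this block; no braces are needed
    if n <= 0:
        return "liftoff"
    return countdown(n - 1)

x = 10                     # the name x is bound to an int object
result = countdown(x)      # "liftoff"
x = "ten"                  # the same name may be rebound to a str object
kind = type(x).__name__    # "str"
```

Because there is no tail call optimization, deeply recursive calls like countdown eventually hit the interpreter's recursion limit.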
However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels. Python's expressions include the following: In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This distinction leads to duplicating some functionality, for example: A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. 
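The generator behavior described above can be sketched as follows; the accumulator is an illustrative example, not code from the article:

```python
def accumulator():
    # Values passed in via send() appear as the result of `yield`
    total = 0
    while True:
        value = yield total
        total += value

def delegator():
    # `yield from` (Python 3.3+) forwards sent values through stack levels
    yield from accumulator()

gen = delegator()
next(gen)             # prime the generator; the first yield produces 0
ten = gen.send(10)    # 10
fifteen = gen.send(5) # 15
```

Generator functions like these can also carry optional type annotations.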
These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a module, typing, that provides several type names for use in annotations. Also, mypy supports a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Also, Python offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0. Also, it offers the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: In Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result must lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round to even method: round(1.5) and round(2.5) both produce 2. 
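The division and rounding rules above can be verified directly:

```python
assert 7 / 2 == 3.5    # true division always yields a float
assert -7 // 2 == -4   # floor division rounds toward negative infinity

# The identity b*(a//b) + a%b == a holds for all sign combinations
for a, b in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    assert b * (a // b) + a % b == a

assert 4 % -3 == -2    # the remainder takes the sign of the divisor
assert round(1.5) == round(2.5) == 2   # round-half-to-even tie-breaking
```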
Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. Python allows Boolean expressions that contain multiple equality relations to be consistent with general usage in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. Here is an example of a function that prints its inputs: To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header. Code examples "Hello, World!" program: Program to calculate the factorial of a non-negative integer: Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. 
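A plausible sketch of the kinds of examples described above (a function with a default parameter, a "Hello, World!" greeting, and a factorial program); the names here are illustrative, not the article's original code:

```python
def greet(name, greeting="Hello"):
    # greeting has a default value, used when no argument is supplied
    print(f"{greeting}, {name}!")

def factorial(n):
    """Return the factorial of a non-negative integer."""
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

greet("World")          # prints "Hello, World!"
fact5 = factorial(5)    # 120

# Chained comparisons read as in mathematics
in_order = 1 < 2 < 3
```

Functions like these require no imports; richer functionality comes from the standard library modules just described.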
Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025,[update] the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. Development environments Most[which?] Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. Also, CPython is bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners.[citation needed] Other shells, including IDLE and IPython, add additional capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are web browser-based IDEs, such as the following environments: Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions—e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python. CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). 
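The bytecode compilation step described above can be observed with the standard library's dis module (the function add is an illustrative example):

```python
import dis

def add(a, b):
    return a + b

# Symbolic names of the instructions CPython's virtual machine will execute;
# the exact opcode names vary between CPython versions
opnames = [instr.opname for instr in dis.Bytecode(add)]
has_binary = "BINARY_ADD" in opnames or "BINARY_OP" in opnames

dis.dis(add)  # prints a human-readable disassembly
```

The same bytecode runs unchanged on every platform a given CPython version supports.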
Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, with unofficial support for VMS. Platform portability was one of Python's earliest priorities. During development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading, and now support far fewer operating systems than in the past, many outdated ones having been dropped. All alternative implementations have at least slightly different semantics. For example, an alternative may include unordered dictionaries, in contrast to other current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full CPython C API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which causes binary sizes to be massive for small programs, yet there exist implementations that are capable of truly compiling Python. Alternative implementations include the following: Stackless Python is a significant fork of CPython that implements microthreads. This implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported: There are several compilers/transpilers to high-level object languages; the source language is unrestricted Python, a subset of Python, or a language similar to Python: There are also specialized compilers: Some older projects existed, as well as compilers not designed for use with Python 3.x and related syntax: A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. 
In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance, despite the inherent slowness of an interpreted language. These approaches include the following strategies or tools: Language Development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented: Many alpha, beta, and release-candidate versions are also released as previews and for testing before final releases. Although there is a rough schedule for releases, they are often delayed if the code is not ready yet. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. Also, there are special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. 
Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas". Languages influenced by Python See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-205] | [TOKENS: 8773] |
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees/other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity’s strategic direction with the Foundation’s charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement against authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. 
Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged the pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". 
The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that eventually surpasses human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. It also did not pay stock options which AI researchers typically get. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment. 
According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August. 
On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on 14 February 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use it to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan has been criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general. 
The letter argues that OpenAI's complex structure was deliberately designed to keep the company accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, regardless of how much equity it might receive in exchange. PBCs can choose how they balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the nonprofit was renamed the OpenAI Foundation. The OpenAI Foundation holds a 26% stake in the PBC, Microsoft holds a 27% stake, and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors are appointed by the OpenAI Foundation, which can remove them at any time, and members of the Foundation's board also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman said was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion multi-year investment in OpenAI Global, LLC, reportedly provided in part as credits for Microsoft's Azure cloud-computing service. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added Copilot to many Windows installations, and released Microsoft Copilot mobile apps. 
Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, a milestone that must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPU cores and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, with investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion, which the partners planned to fund over the following four years. In July 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for military AI applications, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations. 
In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion, up from $3.7 billion in 2024. The growth was driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models, and it projects an $8 billion operating loss in 2025. OpenAI has reported revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly: $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI and OpenAI's intent to maintain its position as an industry leader. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, valuing the company at $500 billion. 
The deal made OpenAI the world's most valuable privately held company, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when the board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board had initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced that Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him. 
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles, along with a reconstituted board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees had raised concerns to the board about how he handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an unnamed Microsoft representative had joined the board as a non-voting observer; Microsoft relinquished the seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine whether Altman's alleged lack of candor had misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. 
Alongside these other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded in 2024 by former Apple designer Jony Ive. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. In October 2025, OpenAI acquired the personal finance app Roi, as well as Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications; the Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired healthcare technology startup Torch for approximately $60 million; the acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare AI capabilities. OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably in ChatGPT's training data and outputs. However, the texts to be annotated often contained detailed descriptions of various types of violence, including sexual violence. 
A Time investigation found that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, while Sama paid its annotators the equivalent of between $1.32 and $2.00 per hour after tax. Sama's spokesperson said that the $12.50 also covered implicit costs such as infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. The initiative is intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and in high demand. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the following five years. The same month, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks, but as of January 2026 the deal had not been finalized and the two sides were rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD, committing to purchase six gigawatts' worth of AMD chips, starting with the MI450. 
OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform; more than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership, under which OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI also announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. 
GPT-3 is aimed at natural-language question answering, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, simply named "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, to establish Anthropic. In 2021, OpenAI introduced DALL-E, a deep learning model that generates complex digital images from textual descriptions, using a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand; access for new subscribers re-opened a month later, on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model internally codenamed "strawberry". 
Additionally, ChatGPT Pro, a $200/month subscription service offering unlimited o1 access and enhanced voice features, was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users; the feature was initially available only to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning. In July 2025, reports indicated that AI models from both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad; OpenAI's large language model achieved gold medal-level performance, reflecting significant progress in AI reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, which it said would be better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to streamline the drafting of scientific papers, with features for managing citations, formatting complex equations, and real-time collaborative editing. 
In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this reversal. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing ever more capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with only a minority of usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming to determine within four years how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team later said it never received anything close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google through an experimental "share with search engines" feature. The opt-in toggle, intended to let users make specific chats discoverable, resulted in some conversations, including personal details such as names, locations, and intimate topics, appearing in search results after users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and began coordinating with search providers to remove the exposed content, emphasizing that this was not a security breach but a design flaw that heightened privacy risks. 
CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data. Management In 2018, Musk resigned his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions totaled less than $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners and his co-founding of the AI startup Inflection AI; Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki; co-leader Jan Leike also departed, citing concerns over safety and trust. OpenAI then signed content deals with Reddit, News Corp, Axios, and Vox Media, and former NSA director Paul Nakasone joined the board. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could arrive within the next 10 years, allowing a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, while suggesting that relatively weak AI systems below that threshold should not be overly regulated. 
They also called for more technical safety research on superintelligence, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices in developing ChatGPT were unfair or harmed consumers (including through reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. Such demands are typically preliminary, nonpublic investigative matters, but the FTC's document was leaked. The investigation concerned allegations that the company had scraped public data and published false and defamatory information; the FTC asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements, for example Microsoft extending Azure credits to OpenAI while both companies shared engineering talent, and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company was interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. The shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. 
Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI has advocated for preempting state AI laws with federal legislation. According to Scott Kohler, OpenAI has opposed California's AI legislation and argued that the state bill encroaches on matters better handled at the federal level. Public Citizen opposed federal preemption of state AI laws, pointing to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the agreement's existence. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he had been unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity; however, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3 and which the Authors Guild believed to have contained over 100,000 copyrighted books. 
In 2021, OpenAI developed a speech recognition tool called Whisper, which it used to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription raised concerns among OpenAI employees about potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman, and the resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story and Alternate Media Inc. filed a copyright lawsuit against OpenAI, which was said to chart a new legal strategy for digital-only publishers suing OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications were The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit filed in San Francisco, California, by sixteen anonymous plaintiffs claimed that OpenAI had scraped 300 billion words online without consent and without registering as a data broker. The plaintiffs also claimed that OpenAI and its partner and customer Microsoft continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. 
Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In an October 2024 New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing the commercial LLMs he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced; California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis, known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker, sued OpenAI in Delaware federal court for copyright infringement. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities", based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation: a text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied. 
Additionally, OpenAI claimed that neither the recipients of ChatGPT's output nor the sources used could be made available. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Until January 10, 2024, its usage policies included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare"; its new policies prohibit "[using] our service to harm yourself or others" and to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI and CEO Sam Altman, alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections, including updated crisis response behavior and parental controls. Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot; the complaint was filed in California state court in San Francisco. In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. In December 2025, the estate of Suzanne Adams sued OpenAI over her death; Adams was allegedly murdered by her son, Stein-Erik Soelberg, then 56, who in the months prior had often discussed his paranoid, delusional ideas with ChatGPT. The suit claimed that the company shared responsibility due to the risk of "chatbot psychosis", although this is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users disconnected from reality. 
See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/William_F._Galvin] | [TOKENS: 1499] |
Contents William F. Galvin William Francis Galvin (born September 17, 1950) is an American politician who has served as the 27th Massachusetts secretary of the commonwealth since 1995. A member of the Democratic Party, he previously served in the Massachusetts House of Representatives from 1975 to 1991. Early life Galvin was born and raised in the Brighton neighborhood of Boston. He attended Saint Mary's High School in Waltham, Massachusetts and graduated in 1968. Galvin graduated cum laude from Boston College in 1972 and received a Juris Doctor from Suffolk University Law School in 1976. Career Galvin began his political career in 1972 as an aide to the Massachusetts Governor's Council after graduating from Boston College, thanks to his connection with councilor Herb Connolly, whom Galvin had campaigned for. Galvin worked part-time at the council while attending Suffolk Law School full-time. Galvin won a special election to the open seat in the Massachusetts General Court in 1975, after State Representative Michael Daly departed from office; the race had nine candidates. Galvin became the Massachusetts state representative from the Allston-Brighton district, the same year he graduated from law school. He was the Democratic nominee for Massachusetts State Treasurer in 1990, but was defeated by Republican Joe Malone. It was during this election that he was given the nickname "The Prince of Darkness", in reference to his habit of working late into the night and making legislative deals behind closed doors. He was first elected Secretary of the Commonwealth in 1994, and has retained this title longer than any other politician in Massachusetts history. Galvin has been an active participant in the National Association of Secretaries of State, serving first as Chairman of the Standing Committee on Securities, then as co-chairman of the Committee on Presidential Primaries. At one point during the administration of Gov. Mitt Romney and Lt. Gov. 
Kerry Healey, Galvin became the Acting Governor of Massachusetts when both Romney and Healey were out of the state. During the administration of former Acting Governor Jane Swift, Galvin automatically became Acting Governor whenever Swift left the state, since there was no lieutenant governor in office at the time. When Swift gave birth to twins in 2001, she chose to keep full executive authority and did not hand over the governorship at any point to Galvin. While it had been widely rumored that Galvin would run for Governor of Massachusetts in 2006 as a Democrat, he announced at the end of 2005 that he would instead seek re-election as Secretary of State. Voting rights advocate John Bonifaz had already declared that he would run for the office, and stayed in the race to challenge Galvin for re-election. However, Galvin defeated Bonifaz in the September 19 Democratic primary. Galvin defeated Green-Rainbow Party candidate Jill Stein, a medical doctor and environmental health advocate who ran for Governor in 2002, in the November general election. The Democratic primary race received relatively little attention or press coverage for most of 2006, but in the last few weeks before the election, a controversy over Galvin's refusal to debate his opponent broke into the news with a front-page story in The Boston Sunday Globe. This was the first time a front-page story had appeared about this race in any major Boston paper. In November 2017, Boston City Council member Josh Zakim announced that he would run for Secretary of the Commonwealth, challenging fellow-Democrat Galvin in the 2018 election. Amid the primary challenge, Galvin came out in favor of same-day voter registration and automatic voter registration. Previously, Galvin had expressed skepticism of automatic voter registration, and had appealed a Superior Court ruling which struck down a state law requiring that voters be registered 20 days prior to an election in order to vote in it. 
On June 2, 2018, Zakim won the endorsement of the Massachusetts Democratic Party at its state convention, defeating Galvin with 55% of the vote to Galvin's 45%. Galvin subsequently defeated Zakim in the Democratic primary on September 4 with 67% of the vote. On November 6, Galvin won re-election as Secretary of the Commonwealth, winning 71% of the vote against Republican Anthony Amore. In January 2022, NAACP Boston president Tanisha Sullivan announced a campaign for Secretary of the Commonwealth. Galvin campaigned on his voting rights record, having implemented no-excuse mail-in voting during the COVID-19 pandemic, which became a permanent change. On the other hand, Sullivan claimed that he had not gone far enough to further voting rights. She claimed that mail-in voting should have been implemented before the pandemic, and emphasized that Massachusetts still did not have same-day voter registration. Galvin claimed that while he supports same-day registration, the legislature is responsible for implementing it. Sullivan won the endorsement of the state Democratic Party, as well as endorsements from multiple Boston city councillors and mayors. 62% of Massachusetts Democratic Party Convention delegates voted to support her. During the campaign, Sullivan was more active, attending regular interviews and hosting rallies, while Galvin ran a quieter campaign. Galvin defeated Sullivan in the September 6 Democratic primary with 70% of the vote. In the general election, Galvin faced Republican Rayla Campbell, who opposed mail-in voting. On November 8, Galvin won re-election with 68% of the vote. On February 4, 2026, Galvin announced he would be running for a ninth term, adding that he had "no intention of running in 2030." Notable lawsuits An investigation by the US Justice Department found that Galvin, as Massachusetts Secretary of State, had violated the Uniformed and Overseas Citizens Absentee Voting Act. 
The Office of the Secretary of the Commonwealth was found to have failed to collect and report data on absentee ballots sent, returned, and cast by overseas citizens and military personnel registered to vote in Massachusetts, as required by the law since amendments in 2002. The lawsuit was settled out of court, requiring Galvin to comply with the law. On January 14, 2009, Galvin filed suit against Robert Jaffe to compel Jaffe to testify about his role in the Bernard Madoff investment scandal. Jaffe, who lives in Weston, Massachusetts and in Florida, countered that he is actually one of the victims of Madoff. Jaffe is married to Ellen Shapiro, daughter of Boston philanthropist Carl Shapiro. Jaffe reportedly convinced the elder Shapiro to invest $250 million with Madoff about 10 days before Madoff's arrest. In September 2021, Massachusetts regulators fined MassMutual $4 million for failing to supervise the trading activity of their employee Keith Gill, a leading player in the GameStop short squeeze which led to hedge funds losing billions. Galvin characterized Gill as a professional trader/dealer, citing his 1,700 trades on behalf of three other individuals. However, Galvin failed to disclose that the three individuals were all members of Gill's family and that less than 5% of the 1,700 trades were for GameStop. Following his pursuit of litigation against Gill, it was reported that Galvin had engaged in partisan politics and had opposed bilingual ballots in contravention of the Voting Rights Act. Electoral history References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Taub%E2%80%93NUT_space] | [TOKENS: 413] |
Contents Taub–NUT space The Taub–NUT metric (/tɔːb nʌt/, /- ˌɛn.juːˈtiː/) is an exact solution to Einstein's equations. It may be considered a first attempt at finding the metric of a spinning black hole. It is sometimes also used in homogeneous but anisotropic cosmological models formulated in the framework of general relativity. The underlying Taub space was found by Abraham Haskel Taub (1951), and extended to a larger manifold by Ezra T. Newman, Louis A. Tamburino, and Theodore W. J. Unti (1963), whose initials form the "NUT" of "Taub–NUT". Description Taub's solution is an empty space solution of Einstein's equations with topology R×S3 and metric (or equivalently line element) ds² = −dt²/U(t) + 4l²U(t)(dψ + cos θ dφ)² + (t² + l²)(dθ² + sin²θ dφ²), where U(t) = (2mt + l² − t²)/(t² + l²) and m and l are positive constants. Taub's metric has coordinate singularities where U = 0, that is, at t = m ± (m² + l²)^(1/2), and Newman, Tamburino and Unti showed how to extend the metric across these surfaces. Related work When Roy Kerr developed the Kerr metric for spinning black holes in 1963, he ended up with a four-parameter solution, one parameter of which was the mass and another the angular momentum of the central body. One of the two other parameters was the NUT parameter, which he threw out of his solution because he found it to be nonphysical, since it caused the metric not to be asymptotically flat; other sources interpret it either as a gravitomagnetic monopole parameter of the central mass, or as a twisting property of the surrounding spacetime. A simplified 1+1-dimensional version of the Taub–NUT spacetime is the Misner spacetime. References Notes |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/John_Kitto] | [TOKENS: 981] |
Contents John Kitto John Kitto (4 December 1804 – 25 November 1854) was an English biblical scholar of Cornish descent. Biography Born in Plymouth, John Kitto was a sickly child, son of a Cornish stonemason. The drunkenness of his father and the poverty of his family meant that much of his childhood was spent in the workhouse. He had no more than three years of erratic and interrupted education. At the age of twelve John Kitto fell on his head from a rooftop, and became totally and permanently deaf. As a young man he suffered further tragedies, disappointments and much loneliness. His height was 4 ft 8 in, and his accident left him with an impaired sense of balance. He found consolation in browsing at bookstalls and reading any books that came his way. From these hardships he was rescued by friends who became aware of his mental abilities and encouraged him to write topical articles for local newspapers, arranging eventually for him to work as an assistant in a local library. Here he continued to educate himself. One of his benefactors was the Exeter dentist Anthony Norris Groves, who in 1824 offered him employment as a dental assistant. Living with the Groves family, Kitto was profoundly influenced by the practical Christian faith of his employer. In 1829 he accompanied Groves on his pioneering mission to Baghdad and served as tutor to Groves's two sons. In 1833 Kitto returned to England via Constantinople, accompanied by another member of the Groves mission, Francis William Newman. Shortly afterwards he married, and in due course had several children. A London publisher asked Kitto to write up his travel journals for a series of articles in the Penny Magazine, a publication read at that time by a million people in Britain, reprinted in America and translated into French, German and Dutch. Other writing projects followed as readers enquired about his experiences in the East amidst people living in circumstances closely resembling those of Bible times. 
Scholarship Kitto had been a careful observer of physical detail – the topography, the animals, architecture, agricultural methods, the manner of interaction between people. His retelling of Bible stories in the light of what he had seen brought the narratives to life and confirmed the accuracy of the ancient texts. He showed how the activities described by the prophets and apostles accorded with the realities of Eastern culture. He supplemented his own observations with details from the journals of other travellers, and helped the Bible reader to understand many things previously obscure or contradictory to the Western mind. His careful research into the geography, biology and archaeology of Bible lands served to support and encourage confidence in the accuracy of the Bible. In his generation Dr Kitto was a most significant contributor to Christian scholarship, and he provided much help for Evangelicals defending the Bible against the attack of liberal critics. He eventually wrote a total of twenty-three books, of which Charles Spurgeon considered the Daily Bible Illustrations to be "more interesting than any novel that was ever written, and as instructive as the heaviest theology." Inevitably, Kitto's encyclopedic works have been superseded by the researches of later generations of scholars. Yet his Pictorial Bible and Cyclopaedia of Biblical Literature held, for almost a century, a unique and valued place on the academic library shelf, and his Daily Bible Illustrations encouraged the faith of readers of all ages and backgrounds, and stimulated the imagination of many a Sunday school teacher. 
The sensible style that made Kitto so popular can be seen in this brief passage, written at a time when attempts to reconstruct the design of the Temple of Solomon on paper, in scale models, and in the architecture of churches, synagogues and Masonic Halls were a serious scholarly pursuit: John Kitto summed up his life in the following words: Tributes In 1844 the University of Giessen conferred upon him the degree of D.D. In 1850 he received a pension for life from the British government. He died on 25 November 1854 at Cannstatt in Germany. Kitto Road in New Cross (South London), built in the late 19th century by the Worshipful Company of Haberdashers, is believed to have been named after John Kitto. In 1989 the Burrington Secondary Modern School in Plymouth was renamed the John Kitto Comprehensive School in his honour, and later, the John Kitto Community College. In September 2010 it became the All Saints Church of England Academy, Plymouth. Family John Kitto's son, John Fenwick Kitto, was an Anglican priest who served as Rector of Whitechapel, Rector of Stepney, and Vicar of St Martin-in-the-Fields. Selected works References External links Media related to John Kitto at Wikimedia Commons |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Frank_Drake] | [TOKENS: 1325] |
Contents Frank Drake Frank Donald Drake (May 28, 1930 – September 2, 2022) was an American astrophysicist and astrobiologist. He began his career as a radio astronomer, studying the planets of the Solar System and later pulsars. Drake expanded his interests to the search for extraterrestrial intelligence (SETI), beginning with Project Ozma in 1960, an attempt at extraterrestrial communication. He developed the Drake equation, which attempts to quantify the number of intelligent lifeforms that could potentially be discovered. Working with Carl Sagan, Drake helped to design the Pioneer plaque, the first physical message flown beyond the Solar System, and was part of the team that developed the Voyager Golden Record. Drake designed and implemented the Arecibo message in 1974, an extraterrestrial radio transmission of astronomical and biological information about Earth. He has been described as the father of SETI. Drake worked at the National Radio Astronomy Observatory, Jet Propulsion Laboratory, Cornell University, University of California at Santa Cruz, and the SETI Institute. Early life and education Born on May 28, 1930, in Chicago, Illinois, Drake showed an early interest in electronics and chemistry. His father was a chemical engineer, and his mother a music teacher. He had two younger siblings. He enrolled at Cornell University on a Navy Reserve Officer Training Corps scholarship. Once there he began studying astronomy. His ideas about the possibility of extraterrestrial life were reinforced by a lecture from astrophysicist Otto Struve in 1951. After receiving a B.A. in Engineering Physics, Drake served briefly as an electronics officer on the heavy cruiser USS Albany. He then went on to graduate school at Harvard University from 1952 to 1955, where he received an M.S. and Ph.D. in astronomy. His doctoral advisor was Cecilia Payne-Gaposchkin. 
Career Drake began his research career as a radio astronomer, working at the National Radio Astronomy Observatory (NRAO) in Green Bank, West Virginia from 1958 to 1963. At NRAO, he conducted research into radio emissions from the planets of the Solar System: using the radio telescope at Green Bank, Drake discovered the ionosphere and magnetosphere of Jupiter, and observed the atmosphere of Venus. He also mapped the radio emission from the Galactic Center. Drake extended the capabilities of the under-construction Arecibo Observatory to allow it to be used for radio astronomy (it was originally designed purely for ionospheric physics). In April 1959, Drake obtained approval from the director Otto Struve of NRAO to begin Project Ozma, a search for extraterrestrial radio communications. Initially, they agreed to keep the project secret, fearing public ridicule. However, Drake decided to publicize his project after Giuseppe Cocconi and Philip Morrison published a paper in Nature in September 1959, entitled "Searching for Interstellar Communications". Drake began his Project Ozma observations in 1960, using the NRAO 26-meter radio telescope, by searching for possible signals from the star systems Tau Ceti and Epsilon Eridani. No extraterrestrial signals were detected and the project was terminated in July 1960. After learning about Project Ozma, Carl Sagan (then a graduate student) contacted Drake, initiating a lifelong collaboration between them. In 1961, Drake devised the Drake equation, which attempted to estimate the number of extraterrestrial civilizations that might be detectable in the Milky Way. The Drake equation has been described as the "second most-famous equation in science", after E=mc2. In 1963, Drake served as section chief of Lunar and Planetary Science at the Jet Propulsion Laboratory. He returned to Cornell in 1964, this time as a member of the faculty, where he would spend the next two decades. 
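The Drake equation mentioned above is the product N = R* · fp · ne · fl · fi · fc · L: the rate of star formation, the fraction of stars with planets, the number of habitable planets per planetary system, the fractions of those developing life, intelligence, and detectable communication, and the average lifetime of a communicative civilization. A minimal sketch, with purely illustrative parameter values (every factor in the real equation is highly uncertain, and these numbers are not Drake's own estimates):

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L.
# Parameter values below are illustrative placeholders only.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimate N, the number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake_equation(
    r_star=1.0,    # rate of star formation (stars/year)
    f_p=0.2,       # fraction of stars with planetary systems
    n_e=1.0,       # habitable planets per system with planets
    f_l=0.1,       # fraction of those that develop life
    f_i=0.1,       # fraction of those that develop intelligence
    f_c=0.1,       # fraction that release detectable signals
    lifetime=1000, # years a civilization remains detectable
)
print(n)  # ~0.2 with these inputs
```

The equation's value lies less in the number it produces than in factoring one unknowable quantity into separately debatable ones.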
He was promoted to Goldwin Smith Professor of Astronomy in 1976. Drake served as associate director of the Cornell Center for Radiophysics and Space Research, as director of the Arecibo Observatory from 1966 to 1968, and as director of the National Astronomy and Ionosphere Center (NAIC, which includes the Arecibo facility), from its establishment in 1971 to 1981. In 1972, Drake co-designed the Pioneer plaque with Carl Sagan and Linda Salzman Sagan. The plaque was the first physical message sent into space and intended to be understandable by any sufficiently technologically advanced extraterrestrial lifeforms that might intercept it. In 1974, Drake wrote the Arecibo message, the first interstellar message transmitted deliberately from Earth. He later served as technical director, with Carl Sagan and Ann Druyan, in the development of the Voyager Golden Record, an improved version of the Pioneer plaque which also incorporated audio recordings. In 1984, Drake moved to the University of California at Santa Cruz (UCSC), becoming its Dean of Natural Science. The non-profit SETI Institute was founded the same year, with Drake as president of its board of trustees. Drake left his role as dean in 1988, but remained a professor at UCSC while also becoming director of the SETI Institute's Carl Sagan Center. Drake was President of the Astronomical Society of the Pacific from 1988 to 1990. From 1989 to 1992, he was chairman of the Board of Physics and Astronomy for the National Research Council. He retired from teaching in 1996 but remained emeritus professor of astronomy and astrophysics at UCSC. In 2010, Drake stepped down as director of The Carl Sagan Center but continued to serve on the SETI Institute's board of trustees. On the subject of the search for the existence of extraterrestrial life, Drake said: "[A]s far as I know, the most fascinating, interesting thing you could find in the universe is not another kind of star or galaxy … but another kind of life." 
Personal life Drake's hobbies included lapidary and the cultivation of orchids. Drake married musician Elizabeth Bell in 1953; they divorced in 1976. They had three sons. In 1978, Drake married Amahl Shakhashiri, with whom he had two daughters, including science journalist Nadia Drake. Drake died on September 2, 2022, at his home in Aptos, California, from natural causes at the age of 92. Honors See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/William_Lindsay_Alexander] | [TOKENS: 926] |
Contents William Lindsay Alexander William Lindsay Alexander FRSE LLD (24 August 1808 – 20 December 1884) was a Scottish church leader. Life He was born in Leith, the son of William Alexander, a wine merchant, and his wife, Elizabeth Lindsay. The only address given for his father appears in 1813 at 7 Blair Street off the Royal Mile in Edinburgh rather than Leith. He was educated at Leith High School then the universities of St Andrews and Edinburgh, where he gained a lasting reputation for classical scholarship. He entered Glasgow Theological Academy under Ralph Wardlaw in September 1827, but in December of the same year he left to become classical tutor at the Blackburn Theological Academy, afterwards the Lancashire Independent College, in north-west England. He stayed at Blackburn until 1831, lecturing on biblical literature, metaphysics, Greek and Latin. After short visits to Germany and London, he was invited back to Edinburgh in November 1834 to become minister of North College Street church (afterwards Argyle Square), an independent church which had arisen in 1802 out of the evangelical movement associated with the Haldane brothers, Robert and James. When the church sold its property to the government to make way for the National Museum of Scotland, Alexander's congregation worshipped in the Queen Street Hall until 1861 when the new church was completed on George IV Bridge, renamed Augustine Church because of Alexander's strong, albeit independent Augustinian influence in his sermons. He deliberately put aside the ambition to become a pulpit orator in favour of the practice of biblical exposition, which he invested with charm and impressiveness. Alexander took an active part in the "voluntary" controversy which ended in the Disruption of 1843, but he also maintained broad and catholic views of the spiritual relations between different sections of the Christian church. 
In 1845 he visited Switzerland with the special object of inquiring into the religious life of the churches there. In 1845 he received the degree of Doctor of Divinity (D.D.) from the University of St Andrews. In 1854 Alexander became Professor of Theology at Edinburgh (and Principal of the Edinburgh Theological Hall from 1877), a position which he held until 1881, in spite of many alternative offers. In 1867 he was elected a Fellow of the Royal Society of Edinburgh. His address is then given as Pinkie Burn in Musselburgh. He served as its vice president from 1873 to 1878 and from 1880 to 1884. He died at Pinkieburn House just south of Musselburgh and is buried nearby in Inveresk Churchyard. The grave lies in the south-east corner in the plot of Sir Alexander Hope. Works Alexander published, besides sermons and pamphlets: Published posthumously was A System of Biblical Theology, Edinburgh, 1888, 2 vols. (edited by James Ross). He published also: memoirs of John Watson, minister at Musselburgh (1846), Ralph Wardlaw (1856), and William Alexander, his father (1867); expositions of Deuteronomy (Pulpit Commentary, 1882) and Zechariah (1885); and translations of Gustav Billroth on Corinthians (1837), Heinrich Andreas Christoph Havernick's Introduction to the Old Testament (1852), and Isaak August Dorner's History of the Doctrine of the Person of Christ, vol. i. (1864). In 1861 Alexander undertook the editorship of the third edition of John Kitto's Biblical Encyclopaedia, with the understanding that the whole work should be revised and brought up to date. In January 1870 he became one of the committee of Old Testament revisers. He edited other theological works. His Hymns for Christian Worship reached a third edition in 1866. Alexander frequently contributed to the British Quarterly, the British and Foreign Evangelical Review, Good Words, and other periodicals; he edited the Scottish Congregational Magazine, 1835-1840 and 1847–51. 
To the Encyclopædia Britannica (Eighth edition) he contributed several articles on topics of theology and philosophy (the publisher Adam Black was a member of his congregation). His articles on "Calvin" and "Channing" raised some controversy, and were changed in the ninth edition. He also contributed to the Imperial Dictionary of Biography. References Attribution: Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Greylock_Partners] | [TOKENS: 423] |
Contents Greylock Partners Greylock Partners is one of the oldest venture capital firms, founded in 1965, with committed capital of over $3.5 billion under management. The firm focuses on early-stage companies in consumer and enterprise software. History Greylock was founded in 1965 in Cambridge, Massachusetts by Bill Elfers and Dan Gregory, joined shortly thereafter by Charlie Waite. Elfers and Waite had both worked at American Research and Development Corporation. The original capital of $10 million was committed by a group of six families. The company opened a second fund in 1973. The company opened its first office in Silicon Valley in 1999. Greylock closed its 12th fund in 2005 with $500 million. In 2009, Greylock relocated its headquarters from the original Boston location to Silicon Valley. Also in 2009, Greylock opened its 13th fund with $575 million. In 2011, the 13th fund was increased to $1 billion. The company organized a 14th fund in 2013 with $1 billion. The company organized a 15th fund in 2016 with $1 billion. In 2020, it organized a 16th fund with $1 billion, and in 2021, the company raised an additional $500 million for the 16th fund to be used exclusively on seed deals. In 2014, Greylock launched Communities, a series of networking events centered on areas like design, big data, infrastructure engineering, user growth, data science, and network security. The communities are composed of product managers, engineers, and technologists from Silicon Valley's largest and fastest growing companies who meet once a quarter. Their branch in Israel formerly known as Greylock IL was rebranded as 83north in January 2015. Greylock partners include David Sze, Reid Hoffman, and Mustafa Suleyman. References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Expert_system] | [TOKENS: 4363] |
Contents Expert system In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural programming code. Expert systems were among the first truly successful forms of AI software. They were created in the 1970s and proliferated in the 1980s, when they were widely regarded as the future of AI, before the advent of successful artificial neural networks. An expert system is divided into two subsystems: 1) a knowledge base, which represents facts and rules; and 2) an inference engine, which applies the rules to the known facts to deduce new facts, and can include explaining and debugging abilities. History Soon after the dawn of modern computers in the late 1940s and early 1950s, researchers started realizing the immense potential these machines had for modern society. One of the first challenges was to make such machines able to "think" like humans – in particular, enabling them to make important decisions the way humans do. The medical–healthcare field presented the tantalizing challenge of enabling these machines to make medical diagnostic decisions. Thus, in the late 1950s, right after the information age had fully arrived, researchers started experimenting with the prospect of using computer technology to emulate human decision making. For example, biomedical researchers started creating computer-aided systems for diagnostic applications in medicine and biology. These early diagnostic systems used patients' symptoms and laboratory test results as inputs to generate a diagnostic outcome. These systems were often described as early forms of expert systems. 
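The two-subsystem structure described above, a knowledge base of facts and if–then rules plus an inference engine, can be sketched in a few lines. This is a generic forward-chaining toy with invented fact and rule names, not a reconstruction of any particular historical system:

```python
# Minimal expert-system skeleton: a knowledge base (facts + if-then rules)
# and a forward-chaining inference engine that repeatedly applies rules to
# the known facts to deduce new facts until nothing more can be inferred.
# All fact/rule names are made up for illustration.

def forward_chain(facts, rules):
    """facts: set of fact names; rules: list of (premises, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)   # fire the rule: deduce a new fact
                changed = True
    return facts

RULES = [
    ({"fever", "rash"}, "suspect_measles"),
    ({"suspect_measles", "unvaccinated"}, "recommend_lab_test"),
]
derived = forward_chain({"fever", "rash", "unvaccinated"}, RULES)
print(sorted(derived))
```

Note how the second rule fires only after the first has deduced "suspect_measles"; chaining conclusions into further premises is what distinguishes this style from a flat lookup table.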
However, researchers realized that there were significant limits when using traditional methods such as flow charts, statistical pattern matching, or probability theory. This situation gradually led to the development of expert systems, which used knowledge-based approaches. Among these expert systems in medicine were MYCIN, Internist-I and later, in the mid-1980s, CADUCEUS. Expert systems were formally introduced around 1965 by the Stanford Heuristic Programming Project led by Edward Feigenbaum, who is sometimes termed the "father of expert systems"; other key early contributors were Bruce Buchanan and Randall Davis. The Stanford researchers tried to identify domains where expertise was highly valued and complex, such as diagnosing infectious diseases (Mycin) and identifying unknown organic molecules (Dendral). The idea that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use" – as Feigenbaum said – was at the time a significant step forward, since previous research had focused on heuristic computational methods, culminating in attempts to develop very general-purpose problem solvers (foremost among them the joint work of Allen Newell and Herbert Simon). Expert systems became some of the first truly successful forms of artificial intelligence (AI) software. Research on expert systems was also active in Europe. In the US, the focus tended to be on the use of production rule systems, first on systems hard-coded on top of Lisp programming environments and then on expert system shells developed by vendors such as Intellicorp. In Europe, research focused more on systems and expert system shells developed in Prolog. The advantage of Prolog systems was that they employed a form of rule-based programming that was based on formal logic. One such early expert system shell based on Prolog was APES. 
One of the first use cases of Prolog and APES was in the legal area, namely the encoding of a large portion of the British Nationality Act. Lance Elliot wrote: "The British Nationality Act was passed in 1981 and shortly thereafter was used as a means of showcasing the efficacy of using Artificial Intelligence (AI) techniques and technologies, doing so to explore how the at-the-time newly enacted statutory law might be encoded into a computerized logic-based formalization. A now oft-cited research paper entitled “The British Nationality Act as a Logic Program” was published in 1986 and subsequently became a hallmark for subsequent work in AI and the law." In the 1980s, expert systems proliferated. Universities offered expert system courses and two-thirds of the Fortune 500 companies applied the technology in daily business activities. Interest was international with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe. In 1981, the first IBM PC, with the PC DOS operating system, was introduced. The imbalance between the low cost of the relatively powerful chips in the PC, compared to the much more expensive cost of processing power in the mainframes that dominated the corporate IT world at the time, created a new type of architecture for corporate computing, termed the client–server model. Calculations and reasoning could be performed at a fraction of the price of a mainframe using a PC. This model also enabled business units to bypass corporate IT departments and directly build their own applications. As a result, client-server computing had a tremendous impact on the expert systems market. Expert systems were already outliers in much of the business world, requiring new skills that many IT departments did not have and were not eager to develop. They were a natural fit for new PC-based shells that promised to put application development into the hands of end users and experts. 
Until then, the main development environment for expert systems had been high-end Lisp machines from Xerox, Symbolics, and Texas Instruments. With the rise of the PC and client–server computing, vendors such as Intellicorp and Inference Corporation shifted their priorities to developing PC-based tools. Also, new vendors, often financed by venture capital, started appearing regularly. The first expert system to be used in a design capacity for a large-scale product was the Synthesis of Integral Design (SID) software program, developed in 1982. Written in Lisp, SID generated 93% of the VAX 9000 CPU logic gates. Input to the software was a set of rules created by several expert logic designers. SID expanded the rules and generated software logic synthesis routines many times the size of the rules themselves. Surprisingly, the combination of these rules resulted in an overall design that exceeded the capabilities of the experts themselves, and in many cases outperformed their human counterparts. While some rules contradicted others, top-level control parameters for speed and area provided the tie-breaker. The program was highly controversial but was used nevertheless due to project budget constraints. It was terminated by logic designers after the VAX 9000 project's completion. During the years before the mid-1970s, expectations of what expert systems could accomplish in many fields tended to be extremely optimistic. At the start of these early studies, researchers were hoping to develop entirely automatic (i.e., completely computerized) expert systems. People's expectations of what computers could do were frequently too idealistic. This situation changed radically after Richard M. Karp published his breakthrough paper "Reducibility among Combinatorial Problems" in the early 1970s. Thanks to Karp's work, together with that of other scholars such as Hubert L. Dreyfus, it became clear that there are certain limits to what can be achieved when designing computer algorithms.
His findings describe what computers can and cannot do. Many of the computational problems related to this type of expert system have certain pragmatic limits. These findings laid the groundwork for the next developments in the field. In the 1990s and beyond, the term expert system and the idea of a standalone AI system mostly dropped from the IT lexicon. There are two interpretations of this. One is that "expert systems failed": the IT world moved on because expert systems did not deliver on their overhyped promise. The other is the mirror opposite: that expert systems were simply victims of their own success; as IT professionals grasped concepts such as rule engines, such tools migrated from being standalone tools for developing special-purpose expert systems to being one of many standard tools. Other researchers suggest that expert systems caused inter-company power struggles when the IT organization lost its exclusivity in software modifications to users or knowledge engineers. In the first decade of the 2000s, there was a "resurrection" of the technology under the term rule-based systems, with significant success stories and adoption. Many of the leading major business application suite vendors (such as SAP, Siebel, and Oracle) integrated expert system abilities into their suites of products as a way to specify business logic. Rule engines are no longer simply for defining the rules an expert would use but for any type of complex, volatile, and critical business logic; they often go hand in hand with business process automation and integration environments. The limits of the prior type of expert systems prompted researchers to develop new types of approaches. They have developed more efficient, flexible, and powerful methods to simulate the human decision-making process.
Some of the approaches that researchers have developed are based on new methods of artificial intelligence (AI), and in particular on machine learning and data mining approaches with a feedback mechanism. Recurrent neural networks often take advantage of such mechanisms. A related discussion appears in the disadvantages section. Modern systems can incorporate new knowledge more easily and thus update themselves easily. Such systems can generalize from existing knowledge better and deal with vast amounts of complex data; big data is a related subject here. Sometimes these types of expert systems are called "intelligent systems". More recently, it can be argued that expert systems have moved into the area of business rules and business rules management systems. Software architecture An expert system is an example of a knowledge-based system. Expert systems were the first commercial systems to use a knowledge-based architecture. In general, an expert system includes the following components: a knowledge base, an inference engine, an explanation facility, a knowledge acquisition facility, and a user interface. The knowledge base represents facts about the world. In early expert systems such as Mycin and Dendral, these facts were represented mainly as flat assertions about variables. In later expert systems developed with commercial shells, the knowledge base took on more structure and used concepts from object-oriented programming. The world was represented as classes, subclasses, and instances, and assertions were replaced by values of object instances. The rules worked by querying and asserting values of the objects. The inference engine is an automated reasoning system that evaluates the current state of the knowledge base, applies relevant rules, and then asserts new knowledge into the knowledge base.
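The shift from flat assertions to object-structured knowledge bases described above can be illustrated with a small frame-style sketch. This is a minimal illustration, not the design of any actual shell; the class, instance, and slot names (Organism, Bacterium, gram_stain) are invented for the example.

```python
# Frame-style knowledge base sketch: the world is modeled as classes,
# instances, and slot values rather than flat assertions.
# All class and slot names here are invented for illustration.

class Frame:
    """A class or instance with named slots, inheriting values from a parent."""
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        # look up a slot locally, then fall back to the parent class
        if slot in self.slots:
            return self.slots[slot]
        return self.parent.get(slot) if self.parent else None

# class hierarchy: Organism -> Bacterium -> a concrete instance
organism  = Frame("Organism", gram_stain=None)
bacterium = Frame("Bacterium", parent=organism, unicellular=True)
culture_1 = Frame("culture_1", parent=bacterium, gram_stain="negative")

print(culture_1.get("gram_stain"))   # local value on the instance
print(culture_1.get("unicellular"))  # inherited from the Bacterium class
```

Rules in shells of this kind would query and assert slot values on such instances rather than manipulating flat variables.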
The inference engine may also include abilities for explanation, so that it can explain to a user the chain of reasoning used to arrive at a particular conclusion by tracing back over the firing of rules that resulted in the assertion. There are mainly two modes for an inference engine: forward chaining and backward chaining. The different approaches are dictated by whether the inference engine is being driven by the antecedent (left-hand side) or the consequent (right-hand side) of the rule. In forward chaining an antecedent fires and asserts the consequent. For example, consider the following rule: R1: Man(x) ⟹ Mortal(x). A simple example of forward chaining would be to assert Man(Socrates) to the system and then trigger the inference engine. It would match R1 and assert Mortal(Socrates) into the knowledge base. Backward chaining is a bit less straightforward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. So if the system was trying to determine if Mortal(Socrates) is true, it would find R1 and query the knowledge base to see if Man(Socrates) is true. One of the early innovations of expert system shells was to integrate inference engines with a user interface. This could be especially powerful with backward chaining. If the system needs to know a particular fact but does not, then it can simply generate an input screen and ask the user if the information is known. So in this example, it could use R1 to ask the user whether Socrates was a Man and then use that new information accordingly. The use of rules to explicitly represent knowledge also enabled explanation abilities.
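The two chaining modes, and the record of rule firings that an explanation facility traces back over, can be sketched for the R1 example. This is a minimal sketch of the general technique, not any particular shell's implementation; the rule table and fact encoding are assumptions made for the example.

```python
# Minimal sketch of an inference engine over the single rule
# R1: Man(x) => Mortal(x). The rule table and fact encoding are illustrative.

RULES = {"R1": ("Man", "Mortal")}  # rule name -> (antecedent, consequent)

def forward_chain(facts):
    """Fire rules whose antecedents match known facts, recording which rule
    asserted each new fact (the record an explanation facility traces)."""
    kb, why = set(facts), {}
    changed = True
    while changed:
        changed = False
        for name, (ante, cons) in RULES.items():
            for pred, arg in list(kb):
                if pred == ante and (cons, arg) not in kb:
                    kb.add((cons, arg))               # assert the consequent
                    why[(cons, arg)] = (name, (pred, arg))
                    changed = True
    return kb, why

def backward_chain(goal, facts):
    """Work backward from a goal: is it a known fact, or derivable via a
    rule whose antecedent can itself be established?"""
    pred, arg = goal
    if goal in facts:
        return True
    return any(backward_chain((ante, arg), facts)
               for ante, cons in RULES.values() if cons == pred)

facts = {("Man", "Socrates")}
kb, why = forward_chain(facts)
print(("Mortal", "Socrates") in kb)                   # forward chaining derives it
print(backward_chain(("Mortal", "Socrates"), facts))  # backward chaining confirms it
print(why[("Mortal", "Socrates")])                    # firing record kept for explanation
```

A shell integrating this with a user interface would, in the backward pass, prompt the user for Man(Socrates) instead of failing when the fact is absent from the knowledge base.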
In the simple example above, if the system had used R1 to assert that Socrates was Mortal and a user wished to understand why Socrates was mortal, they could query the system, and the system would look back at the rules which fired to cause the assertion and present those rules to the user as an explanation. In English, if the user asked "Why is Socrates Mortal?" the system would reply "Because all men are mortal and Socrates is a man". A significant area for research was the generation of explanations from the knowledge base in natural English rather than simply by showing the more formal but less intuitive rules. As expert systems evolved, many new techniques were incorporated into various types of inference engines. Advantages The goal of knowledge-based systems is to make the critical information required for the system to work explicit rather than implicit. In a traditional computer program, the logic is embedded in code that can typically only be reviewed by an IT specialist. With an expert system, the goal was to specify the rules in a format that was intuitive and easily understood, reviewed, and even edited by domain experts rather than IT experts. The benefits of this explicit knowledge representation were rapid development and ease of maintenance. Ease of maintenance is the most obvious benefit. This was achieved in two ways. First, by removing the need to write conventional code, many of the normal problems that can be caused by even small changes to a system could be avoided with expert systems. Essentially, the logical flow of the program (at least at the highest level) was simply a given for the system: one simply invoked the inference engine. This also was a reason for the second benefit: rapid prototyping. With an expert system shell it was possible to enter a few rules and have a prototype developed in days rather than the months or years typically associated with complex IT projects.
A claim often made for expert system shells was that they removed the need for trained programmers and that experts could develop systems themselves. In reality, this was seldom if ever true. While the rules for an expert system were more comprehensible than typical computer code, they still had a formal syntax where a misplaced comma or other character could cause havoc, as with any other computer language. Also, as expert systems moved from prototypes in the lab to deployment in the business world, issues of integration and maintenance became far more critical. Inevitably, demands to integrate with, and take advantage of, large legacy databases and systems arose. To accomplish this, integration required the same skills as any other type of system. Disadvantages The most common disadvantage cited for expert systems in the academic literature is the knowledge acquisition problem. Obtaining the time of domain experts for any software application is always difficult, but for expert systems it was especially difficult because the experts were by definition highly valued and in constant demand by the organization. As a result of this problem, a great deal of research in the later years of expert systems was focused on tools for knowledge acquisition, to help automate the process of designing, debugging, and maintaining rules defined by experts. However, when looking at the life-cycle of expert systems in actual use, other problems – essentially the same problems as those of any other large system – seem at least as critical as knowledge acquisition: integration, access to large databases, and performance. Performance could be especially problematic because early expert systems were built using tools (such as earlier Lisp versions) that interpreted code expressions without first compiling them.
This provided a powerful development environment, but with the drawback that it was virtually impossible to match the efficiency of the fastest compiled languages (such as C). System and database integration were difficult for early expert systems because the tools were mostly in languages and platforms that were neither familiar to nor welcome in most corporate IT environments – programming languages such as Lisp and Prolog, and hardware platforms such as Lisp machines and personal computers. As a result, much effort in the later stages of expert system tool development was focused on integrating with legacy environments such as COBOL and large database systems, and on porting to more standard platforms. These issues were resolved mainly by the client–server paradigm shift, as PCs were gradually accepted in the IT environment as a legitimate platform for serious business system development and as affordable minicomputer servers provided the processing power needed for AI applications. Another major challenge of expert systems emerges when the size of the knowledge base increases, causing the processing complexity to increase. For instance, when an expert system with 100 million rules was envisioned as the ultimate expert system, it became obvious that such a system would be too complex and would face too many computational problems. An inference engine would have to be able to process huge numbers of rules to reach a decision. How to verify that decision rules are consistent with each other is also a challenge when there are too many rules. Usually such a problem leads to a satisfiability (SAT) formulation; this is the well-known NP-complete Boolean satisfiability problem. If we assume only binary variables, say n of them, then the corresponding search space is of size 2^n. Thus, the search space can grow exponentially.
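The exponential search space mentioned above can be made concrete with a brute-force satisfiability check over all 2^n assignments. The CNF formula below is a toy example invented for illustration, not one drawn from any real rule base.

```python
from itertools import product

# Brute-force SAT check: with n binary variables the search space has 2**n
# assignments, so this approach is exponential in n. The clauses below are a
# toy CNF formula; each clause is a list of (variable_index, polarity) literals.
clauses = [[(0, True), (1, False)],   # x0 OR NOT x1
           [(1, True)],               # x1
           [(0, False), (2, True)]]   # NOT x0 OR x2

def satisfiable(clauses, n):
    """Try all 2**n assignments; True if one satisfies every clause."""
    for assignment in product([False, True], repeat=n):
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses):
            return True
    return False

print(satisfiable(clauses, 3))  # here x0=x1=x2=True satisfies all clauses
```

Practical consistency checkers avoid this exhaustive enumeration with pruning and heuristics, but the worst-case exponential blow-up is exactly the limit the text describes.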
There are also questions on how to prioritize the use of the rules to operate more efficiently, or how to resolve ambiguities (for instance, if there are too many else-if sub-structures within one rule), and so on. Other problems are related to overfitting and overgeneralization effects when using known facts and trying to generalize to other cases not described explicitly in the knowledge base. Such problems exist with methods that employ machine learning approaches too. Another problem related to the knowledge base is how to make updates of its knowledge quickly and effectively. Deciding how to add a new piece of knowledge (i.e., where to place it among many rules) is also challenging. Modern approaches that rely on machine learning methods are easier in this regard. Because of the above challenges, it became clear that new approaches to AI were required instead of rule-based technologies. These new approaches are based on the use of machine learning techniques, along with the use of feedback mechanisms. The key challenges faced by expert systems in medicine (if one considers computer-aided diagnostic systems as modern expert systems), and perhaps in other application domains, include issues related to aspects such as big data, existing regulations, healthcare practice, various algorithmic issues, and system assessment. Applications Hayes-Roth divides expert systems applications into 10 categories illustrated in the following table. The example applications were not in the original Hayes-Roth table, and some of them arose well afterward. Any application that is not footnoted is described in the Hayes-Roth book. Also, while these categories provide an intuitive framework to describe the space of expert systems applications, they are not rigid categories, and in some cases an application may show traits of more than one category.
Hearsay was an early attempt at solving voice recognition through an expert systems approach. For the most part, this category of expert systems was not all that successful. Hearsay and all interpretation systems are essentially pattern recognition systems – looking for patterns in noisy data. In the case of Hearsay, this meant recognizing phonemes in an audio stream. Other early examples were systems analyzing sonar data to detect Russian submarines. These kinds of systems proved much more amenable to a neural network AI solution than a rule-based approach. CADUCEUS and MYCIN were medical diagnosis systems. The user describes their symptoms to the computer as they would to a doctor and the computer returns a medical diagnosis. Dendral was a tool to study hypothesis formation in the identification of organic molecules. The general problem it solved – designing a solution given a set of constraints – was one of the most successful areas for early expert systems applied to business domains, such as salespeople configuring Digital Equipment Corporation (DEC) VAX computers and mortgage loan application development. SMH.PAL is an expert system for the assessment of students with multiple disabilities. GARVAN-ES1 was a medical expert system, developed at the Garvan Institute of Medical Research, that provided automated clinical diagnostic comments on endocrine reports from a pathology laboratory. It was one of the first medical expert systems to go into routine clinical use internationally and the first expert system to be used for diagnosis daily in Australia. The system was written in C and ran on a PDP-11 in 64K of memory. It had 661 rules that were compiled, not interpreted. Mistral is an expert system to monitor dam safety, developed in the 1990s by Ismes (Italy). It gets data from an automatic monitoring system and performs a diagnosis of the state of the dam. Its first copy, installed in 1992 on the Ridracoli Dam (Italy), is still operational 24/7/365.
It has been installed on several dams in Italy and abroad (e.g., the Itaipu Dam in Brazil), on landslide sites under the name Eydenet, and on monuments under the name Kaleidos. Mistral is a registered trademark of CESI. See also References External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Klagmuhme]
Contents Klagmuhme The Klagmuhme or Klagemuhme (both: wailing aunt; German: [ˈklaːɡˌmuːmə, ˈklaːɡəˌmuːmə]) is a female sprite from German folklore also known as Klagmutter or Klagemutter (both: wailing mother; German: [ˈklaːɡˌmʊtɐ, ˈklaːɡəˌmʊtɐ]). She heralds imminent death through wailing and whining and is thus the German equivalent of the banshee. Other name variants The terms Klagmuhme, Klage, Klag, Wehklage (all: wailing), Klageweib, Klagefrau (both: wailing woman), and Klagmütterle (wailing mother) refer both to the Klagmuhme and to the owl with which the Klagmuhme is often identified. The term Klagmutter also refers to the caterpillars of both the death's-head hawkmoth and of Arctiinae moths. Further terms are Klagmütterchen (wailing mother), Winselmutter (whining mother), Haulemutter (wailing mother), Klinselweib (wailing woman), and Klagweh (wailing). The Klagmuhme is first attested in 15th-century Middle High German as klagmuoter (wailing mother), denoting an owl. Appearance The Klagmuhme often appears as an animal. Thus she roves howling around the afflicted house in the shape of a long-haired black dog, or she sits whining in the corner as a white goose or in the eaves gutter as a dove. She also appears as a big gray cat wearing a scarf on its head or, during the ringing of the curfew bell, as a whimpering white or three-legged sheep near the afflicted house. She can further be a very eerie bird, a fiery toad, or a calf with red eyes. As a sheep, she will grow to gigantic proportions if pranked by humans. The Klagmuhme's human appearance is that of an old woman in a black dress with a white scarf. Otherwise, she is described as a small woman with a face covered with cobwebs who wears a little three-cornered hat. She also appears clad in linen, as tall as a church steeple, and with glowing eyes, or gigantic, hollow-eyed, deathly pale, and dressed in a wafting burial robe. Her appearance can also be neither human nor animal, though.
She then appears as a distorted black figure or a rolling tangled clew, particularly a misshapen blue clew spraying sparks. Activities The Klagmuhme's wailing being an omen of death and disaster, it can be downright deadly for those who hear her. If the Klagmuhme wails in front of a house where an ill person lies inside, clothing belonging to the sick person is thrown outside the door. If the Klagmuhme carries the clothing away, the sick person will undoubtedly die; if she leaves it behind, they will recover. The prophesied disaster can be averted by immediately telling the Klagmuhme an alternative. In the houses over which she stretches her long bony arm on stormy nights, there will be a corpse ere the moon has finished its cycle. She is usually invisible when she approaches the houses, never entering them but floating over them. Her whining (which sounds like "u-u-u!") can cause frightened people to fall ill with a nervous disease. Otherwise, she does nobody harm, though a cold shiver might be felt at the sight of her. She is usually active at midnight. Origin and identity Regarding its origin, the Klage in Austria is the soul of a deceased person. In Saxony, she is the soul of an unlucky mother looking for her drowned son. In the Allgäu, there is a midnight procession of Klagefrauen (wailing women) or of ghostly men carrying a coffin. In Carinthia and Switzerland, the Klagmuhme is part of the wild hunt (German Wildes Heer, meaning "wild host"). In the Fichtel Mountains, the Klagmütterlein is a female wood sprite, a Waldweibchen (forest woman). In the Harz, mythologists wrongly identified the Haulemutter with the similar-sounding Frau Holle. Literature References
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Phoenicia]
Contents Phoenicia Phoenicians were an ancient Semitic people who inhabited city-states in Canaan along the Levantine coast of the eastern Mediterranean, primarily in present-day Lebanon and parts of coastal Syria. Their maritime civilization expanded and contracted over time, with its cultural core stretching from Arwad to Mount Carmel. Through trade and colonization, the Phoenicians extended their influence across the Mediterranean, from Cyprus to the Iberian Peninsula, leaving behind thousands of inscriptions. The Phoenicians emerged directly from the Bronze Age Canaanites; their cultural traditions survived the Late Bronze Age collapse and continued into the Iron Age with little interruption. They referred to themselves as Canaanites and their land as Canaan, though the territory they occupied was smaller than that of earlier Bronze Age Canaan. The name Phoenicia is a Greek exonym that did not correspond to a unified native identity. Modern scholarship generally views the distinction between Canaanites and Phoenicians after c. 1200 BC as artificial. Renowned for seafaring and trade, the Phoenicians established one of antiquity's most extensive maritime networks, active for over a millennium. This network facilitated exchanges among cradles of civilization such as Mesopotamia, Egypt, and Greece. They founded colonies and trading posts throughout the Mediterranean; among these, Carthage in North Africa developed into a major power by the 7th century BC. Phoenician society was organized into independent city-states, notably Byblos, Sidon, and Tyre. Each retained political autonomy, and there is no evidence of a shared national identity. While kingship was common, powerful merchant families likely exercised influence through oligarchies. The Phoenician cities flourished most in the 9th century BC, but subsequently declined under the expansion of empires such as the Neo-Assyrian and Achaemenid.
Their influence nevertheless endured in the western Mediterranean until the Roman destruction of Carthage in the mid-2nd century BC. Long regarded as a "lost" civilization due to the absence of native historical accounts, the Phoenicians became better understood only after the discovery of inscriptions in the 17th and 18th centuries. Since the mid-20th century, archaeological research has revealed their significance in the ancient world. Their most enduring legacy is the development of the earliest verified alphabet, derived from the Proto-Sinaitic script, which spread across the Mediterranean and gave rise to the Greek alphabet, which in turn gave rise to the Latin and Cyrillic scripts, as well as influencing Syriac and Arabic writing systems. They also contributed innovations in shipbuilding, navigation, industry, agriculture, and governance. Their commercial networks played a foundational role in the economic and cultural development of classical Mediterranean civilization. Etymology Being a society of independent city-states, the Phoenicians apparently did not have a term to denote the land of Phoenicia as a whole; instead, demonyms were often derived from the name of the city a person hailed from (e.g., Sidonian for Sidon, Tyrian for Tyre, etc.) There is no evidence that the peoples living in the area denoted as Phoenicia identified as "Phoenicians" or shared a common identity, although they may have referred to themselves as "Canaanites". Krahmalkov reconstructs the Honeyman inscription (dated to c. 900 BC by William F. Albright) as containing a reference to the Phoenician homeland, calling it Pūt (Phoenician: 𐤐𐤕). Furthermore, as late as the first century BC, a distinction appears to have been made between 'Syrian' and 'Phoenician' people, as evidenced by the epitaph of Meleager of Gadara: 'If you are a Syrian, Salam! If you are a Phoenician, Naidius! If you are a Greek, Chaire! (Hail), and say the same yourself.' 
Obelisks at Karnak contain references to a "land of fnḫw", fnḫw being the plural form of fnḫ, the Ancient Egyptian word for 'carpenter'. This "land of carpenters" is generally identified as Phoenicia, given that Phoenicia played a central role in the lumber trade of the Levant. As an exonym, fnḫw was evidently loaned into Greek as φοῖνιξ, phoînix, which variously meant 'Phoenician person', 'Tyrian purple, crimson' or 'date palm'. Homer used it with each of these meanings. The word is already attested in Linear B script of Mycenaean Greek from the 2nd millennium BC, as po-ni-ki-jo. In those records, it means 'crimson' or 'palm tree' and does not denote a group of people. The name Phoenicians, like Latin Poenī (adj. poenicus, later pūnicus), comes from Greek Φοινίκη, Phoiníkē. According to Krahmalkov, Poenulus, a Latin comedic play written in the early 2nd century BC, appears to preserve a Punic term for the Phoenician/Punic language which may be reconstructed as Pōnnīm, a point disputed by Joseph Naveh, a professor of West Semitic epigraphy and palaeography at the Hebrew University. History Since little has survived of Phoenician records or literature, most of what is known about their origins and history comes from the accounts of other civilizations and inferences from their material culture excavated throughout the Mediterranean. The scholarly consensus is that the Phoenicians' period of greatest prominence was 1200 BC to the end of the Persian period (332 BC). It is debated among historians and archaeologists whether Phoenicians were actually distinct from the broader group of Semitic-speaking peoples known as Canaanites. Historian Robert Drews believes the term "Canaanites" corresponds to the ethnic group referred to as "Phoenicians" by the ancient Greeks; archaeologist Jonathan N.
Tubb argues that "Ammonites, Moabites, Israelites, and Phoenicians undoubtedly achieved their own cultural identities, and yet ethnically they were all Canaanites", "the same people who settled in farming villages in the region in the 8th millennium BC". Brian R. Doak states that scholars use "Phoenicians" as a short-hand for "Canaanites living in a set of cities along the northern Levantine coast who shared a language and material culture in the Iron I–II period and who also developed an organized system of colonies in the western Mediterranean world". The Phoenician Early Bronze Age is largely unknown. The two most important sites are Byblos and Sidon-Dakerman (near Sidon), although, as of 2021, well over a hundred sites remain to be excavated, while others that have been are yet to be fully analysed. The Middle Bronze Age was a generally peaceful time of increasing population, trade, and prosperity, though there was competition for natural resources. In the Late Bronze Age, rivalry between Egypt, the Mittani, the Hittites, and Assyria had a significant impact on Phoenician cities. The Canaanite culture that gave rise to the Phoenicians apparently developed in situ from the earlier Ghassulian chalcolithic culture. The Ghassulian culture itself developed from the Circum-Arabian Nomadic Pastoral Complex, which in turn developed from a fusion of their ancestral Natufian and Harifian cultures with Pre-Pottery Neolithic B (PPNB) farming cultures. These practiced the domestication of animals during the 8.2 kiloyear event, which led to the Neolithic Revolution in the Levant. The Late Bronze Age state of Ugarit is considered Canaanite, even though the Ugaritic language does not belong to the Canaanite languages proper, and some of the texts on clay tablets discovered there indicate that the inhabitants of Ugarit did not consider themselves Canaanites. 
The fifth-century BC Greek historian Herodotus claimed that the Phoenicians had migrated from the Erythraean Sea around 2750 BC and the first-century AD geographer Strabo reports a claim that they came from Tylos and Arad (Bahrain and Muharraq). Some archaeologists working on the Persian Gulf have accepted these traditions and suggest a migration connected with the collapse of the Dilmun civilization c. 1750 BC. However, most scholars reject the idea of a migration; archaeological and historical evidence alike indicate millennia of population continuity in the region, and recent genetic research indicates that present-day Lebanese derive most of their ancestry from a Canaanite-related population. The first known account of the Phoenicians relates the conquests of Pharaoh Thutmose III (1479–1425 BC), including the subjugation of those the Egyptians called Fenekhu ('carpenters'). The Egyptians targeted the coastal cities such as Byblos, Arwad, and Ullasa for their crucial geographic and commercial links with the interior (via the Nahr al-Kabir and the Orontes rivers). The cities provided Egypt with access to Mesopotamian trade and abundant stocks of the region's native cedarwood, of which there was no equivalent in the Egyptian homeland. Thutmose IV himself visited Sidon, where the purchase of lumber from Lebanon was arranged. By the mid-14th century BC, the Phoenician city-states were considered "favored cities" by the Egyptians. Tyre, Sidon, Beirut, and Byblos were regarded as the most important. The Phoenicians had considerable autonomy, and their cities were reasonably well developed and prosperous. Byblos was the leading city; it was a center for bronze-making and the primary terminus of trade routes for precious goods such as tin and lapis lazuli from as far east as Afghanistan. Sidon and Tyre also commanded the interest of Egyptian governmental officials, beginning a pattern of commercial rivalry that would span the next millennium.
The Amarna letters report that from 1350 to 1300 BC, neighboring Amorites and Hittites were capturing Phoenician cities, especially in the north. Egypt subsequently lost its coastal holdings from Ugarit in northern Syria to Byblos near central Lebanon. Sometime between 1200 and 1150 BC, the Late Bronze Age collapse severely weakened or destroyed most civilizations in the region, including those of the Egyptians and the Hittites. The Phoenicians were able to survive and navigate the challenges of the crisis, and by 1230 BC city-states such as Tyre, Sidon, and Byblos maintained political independence, asserted their maritime interests, and enjoyed economic prosperity. The period sometimes described as a "Phoenician renaissance" had begun, and by the end of the 11th century BC, an alliance formed between Tyre and Israel had created a new geopolitical status quo in the Levant. Commercial maritime activity now involved not just mercantilism, but colonization as well, and Phoenician expansion into the Mediterranean was well under way. The Phoenician city-states during this time were Tyre, Sidon, Byblos, Aradus, Beirut, and Tripoli. They filled the power vacuum caused by the Late Bronze Age collapse and created a vast mercantile network. The recovery of the Mediterranean economy can be credited to Phoenician mariners and merchants, who re-established long-distance trade between Egypt and Mesopotamia in the 10th century BC. Early in the Iron Age, the Phoenicians established ports, warehouses, markets, and settlements all across the Mediterranean and up to the southern Black Sea. Colonies were established on Cyprus, Sardinia, the Balearic Islands, Sicily, and Malta, as well as the coasts of North Africa and the Iberian Peninsula. Phoenician hacksilver dated to this period bears lead isotope ratios matching ores in Sardinia and Spain, indicating the extent of Phoenician trade networks. 
By the tenth century BC, Tyre rose to become the richest and most powerful Phoenician city-state, particularly during the reign of Hiram I (c. 969–936 BC). The Hebrew Bible alludes to the expertise of Phoenician artisans whom Hiram I of Tyre sent to work on major construction projects during the reign of Solomon, King of Israel, although some modern scholars dispute the reliability of this biblical account. During the rule of the priest Ithobaal (887–856 BC), Tyre expanded its territory as far north as Beirut and into part of Cyprus; this unusual act of aggression was the closest the Phoenicians ever came to forming a unitary territorial state. Once his realm reached its largest territorial extent, Ithobaal declared himself "King of the Sidonians", a title that would be used by his successors and mentioned in both Greek and Jewish accounts. The Late Iron Age saw the height of Phoenician shipping, mercantile, and cultural activity, particularly between 750 and 650 BC. The Phoenician influence was visible in the "orientalization" of Greek cultural and artistic conventions. Among their most popular goods were fine textiles, typically dyed with Tyrian purple. Homer's Iliad, which was composed during this period, references the quality of Phoenician clothing and metal goods. Carthage was founded by Phoenicians coming from Tyre, probably to provide an anchorage and supplies to the Tyrian merchants in their voyages. The city's name in Punic, Qart-Ḥadašt (𐤒𐤓𐤕 𐤇𐤃𐤔𐤕), means 'New City'. There is a tradition in some ancient sources, such as Philistos of Syracuse, for an "early" foundation date of around 1215 BC—before the fall of Troy in 1180 BC. However, Timaeus, a Greek historian from Sicily c. 300 BC, places the foundation of Carthage in 814 BC, which is the date generally accepted by modern historians. Legend, including Virgil's Aeneid, assigns the founding of the city to Queen Dido. 
Carthage would grow into a multi-ethnic empire spanning North Africa, Sardinia, Sicily, Malta, the Balearic Islands, and southern Iberia, but would ultimately be destroyed by Rome in the Punic Wars (264–146 BC). It was eventually rebuilt as a Roman city by Julius Caesar in the period from 49 to 44 BC, with the official name Colonia Iulia Concordia Carthago. As mercantile city-states concentrated along a narrow coastal strip of land, the Phoenicians lacked the size and population to support a large military. Thus, as neighboring empires began to rise, the Phoenicians increasingly fell under the sway of foreign rulers, who to varying degrees circumscribed their autonomy. The Assyrian domination of Phoenicia began with King Shalmaneser III. He rose to power in 858 BC and began a series of campaigns against neighboring states. Although he did not invade Phoenicia and maintained good relations with the Phoenician cities, he demanded tribute from the "kings of the seacoast", a group which probably included the Phoenician city-states. According to Aubet, Tyre, Sidon, Arwad and Byblos paid tribute in bronze and bronze vessels, tin, silver, gold, ebony and ivory. Initially, they were not annexed outright—they were allowed a certain degree of freedom. This changed in 744 BC with the ascension of Tiglath-Pileser III. By 738 BC, most of the Levant, including northern Phoenicia, was annexed; only Tyre and Byblos, the most powerful city-states, remained tributary states outside of direct Assyrian control. Tyre, Byblos, and Sidon all rebelled against Assyrian rule. In 721 BC, Sargon II besieged Tyre and crushed the rebellion. His successor Sennacherib suppressed further rebellions across the region. During the seventh century BC, Sidon rebelled and was destroyed by Esarhaddon, who enslaved its inhabitants and built a new city on its ruins. 
By the end of the century, the Assyrians had been weakened by successive revolts, which led to their destruction by the Median Empire. The Babylonians, formerly vassals of the Assyrians, took advantage of the empire's collapse and rebelled, quickly establishing the Neo-Babylonian Empire in its place. Phoenician cities revolted several times throughout the reigns of the first Babylonian kings: Nabopolassar (626–605 BC) and his son Nebuchadnezzar II (c. 605 – c. 562 BC). Nebuchadnezzar besieged Tyre; his siege is commonly thought to have lasted thirteen years, although the city was not destroyed and suffered little damage. The consensus opinion in contemporary Phoenician historiography is that the thirteen-year siege began soon after the conquest of Jerusalem in 587 BC, and lasted from 585 BC through 573 BC. Among the writings of ancient historians, this detail about the length of Nebuchadnezzar's supposed thirteen-year siege of Tyre in the early sixth century BC can be found only in Josephus' first century writings, recorded almost 700 years after the date of the purported event. Helen Dixon proposes that the putative 'thirteen-year' siege was more likely several small-scale interventions in the region, or a limited blockade between the land-side city and its port. In 539 BC, Cyrus the Great, king and founder of the Persian Achaemenid Empire, took Babylon. As Cyrus began consolidating territories across the Near East, the Phoenicians apparently made the pragmatic calculation of "[yielding] themselves to the Persians". Most of the Levant was consolidated by Cyrus into a single satrapy (province) and forced to pay a yearly tribute of 350 talents, which was roughly half the tribute that was required of Egypt and Libya. The Phoenician area was later divided into four vassal kingdoms—Sidon, Tyre, Arwad, and Byblos—which were allowed considerable autonomy. 
Unlike in other areas of the empire, there is no record of Persian administrators governing the Phoenician city-states. Local Phoenician kings were allowed to remain in power and given the same rights as Persian satraps (governors), such as hereditary offices and the minting of coinage. The Phoenicians remained a core asset to the Achaemenid Empire, particularly for their prowess in maritime technology and navigation; they furnished the bulk of the Persian fleet during the Greco-Persian Wars of the early fifth century BC. Phoenicians under Xerxes I built the Xerxes Canal and the pontoon bridges that allowed his forces to cross into mainland Greece. Nevertheless, they were harshly punished by Xerxes following his defeat at the Battle of Salamis, which he blamed on Phoenician cowardice and incompetence. In the mid-fourth century BC, King Tennes of Sidon led a failed rebellion against Artaxerxes III, enlisting the help of the Egyptians, who were subsequently drawn into a war with the Persians. The resulting destruction of Sidon led to the resurgence of Tyre, which remained the dominant Phoenician city for two decades until the arrival of Alexander the Great. Phoenicia was one of the first areas to be conquered by Alexander the Great during his military campaigns across western Asia. Alexander's main target in the Persian Levant was Tyre, now the region's largest and most important city. It capitulated after a roughly seven-month siege, during which some of its non-combatant citizens were sent to Carthage. Tyre's refusal to allow Alexander to visit its temple to Melqart, culminating in the killing of his envoys, led to a brutal reprisal: 2,000 of its leading citizens were crucified and a puppet ruler was installed. The rest of Phoenicia easily came under his control, with Sidon surrendering peacefully. Alexander's empire had a Hellenization policy, whereby Hellenic culture, religion, and sometimes language were spread or imposed across conquered peoples. 
However, Hellenization was rarely enforced, and Greek remained largely a language of administration until his death. This was typically implemented in other lands through the founding of new cities, the settlement of a Macedonian or Greek urban elite, and the alteration of native place names to Greek. However, there was no organized Hellenization in Phoenicia, and with one or two minor exceptions, all Phoenician city-states retained their native names, while Greek settlement and administration appears to have been very limited. The Phoenicians maintained cultural and commercial links with their western counterparts. Polybius recounts how the Seleucid King Demetrius I escaped from Rome by boarding a Carthaginian ship that was delivering goods to Tyre. The adaptation to Macedonian rule was probably aided by the Phoenicians' historical ties with the Greeks, with whom they shared some mythological stories and figures; the two peoples were even sometimes considered "relatives". When Alexander's empire collapsed after his death in 323 BC, the Phoenicians came under the control of the largest of its successors, the Seleucids. The Phoenician homeland was repeatedly contested by the Ptolemaic Kingdom of Egypt during the forty-year Syrian Wars, coming under Ptolemaic rule in the third century BC. The Seleucids reclaimed the area the following century, holding it until the mid-first century BC. Under their rule, the Phoenicians were allowed a considerable degree of autonomy and self-governance. During the Seleucid Dynastic Wars (157–63 BC), the Phoenician cities were mainly self-governed. Many of them were fought for or over by the warring factions of the Seleucid royal family. Some Phoenician regions were under Jewish influence, after the Jews revolted and succeeded in defeating the Seleucids in 164 BC. A significant portion of the Phoenician diaspora in North Africa thus converted to Judaism in the late first millennium BC. 
The Seleucid Kingdom was seized by Tigranes the Great of Armenia in 74/73 BC, ending the Hellenistic influence on the Levant. Demographics The people now known as Phoenicians were a group of ancient Semitic-speaking peoples that emerged in the Levant by at least the third millennium BC. The Phoenicians did not refer to themselves as "Phoenicians" but rather are thought to have broadly referred to themselves as "Kenaʿani", meaning 'Canaanites'. Phoenicians identified themselves specifically with the name of the city they hailed from (e.g., Sidonian for Sidon, Tyrian for Tyre, etc.). A 2008 study led by Pierre Zalloua found that six subclades of Haplogroup J-M172 (J2)—thought to have originated between the Caucasus Mountains, Mesopotamia and the Levant—were of a "Phoenician signature" and present amongst the male populations of coastal Lebanon as well as the wider Levant (the "Phoenician Periphery"), followed by other areas of historic Phoenician settlement, spanning Cyprus through to Morocco. This deliberate sequential sampling was an attempt to develop a methodology to link the documented historical expansion of a population with a particular geographic genetic pattern or patterns. The researchers suggested that the proposed genetic signature stemmed from "a common source of related lineages rooted in Lebanon". Another study in 2006 found evidence for the genetic persistence of Phoenicians in the Spanish island of Ibiza. In 2016, the rare U5b2c1 maternal haplogroup was identified in the DNA of a 2,500-year-old male skeleton excavated from a Punic tomb in Tunisia. The lineage of this "Young Man of Byrsa" is believed to represent early gene flow from Iberia to the Maghreb. According to a 2017 study published by the American Journal of Human Genetics, present-day Lebanese derive most of their ancestry from a Canaanite-related population, which therefore implies substantial genetic continuity in the Levant since at least the Bronze Age. 
More specifically, the research of geneticist Chris Tyler-Smith and his team at the Sanger Institute in Britain, who compared "sampled ancient DNA from five Canaanite people who lived between 3,750 and 3,650 years ago" to modern people, revealed that 93 percent of the genetic ancestry of people in Lebanon came from the Canaanites (the other 7 percent was of a Eurasian steppe population). One 2018 study of mitochondrial lineages in Sardinia concluded that the Phoenicians were "inclusive, multicultural and featured significant female mobility", with evidence of indigenous Sardinians integrating "peacefully and permanently" with Semitic Phoenician settlers. The study also found evidence suggesting that south Europeans may have likewise settled in the area of modern Lebanon. In a 2020 study published in the American Journal of Human Genetics, researchers showed that there is substantial genetic continuity in Lebanon since the Bronze Age, interrupted by three significant admixture events during the Iron Age, Hellenistic, and Ottoman periods. In particular, the Phoenicians can be modeled as a mixture of the local Bronze Age population (63–88%) and a population coming from the north, related to ancient Anatolians or ancient South-Eastern Europeans (12–37%). The results show that a Steppe-like ancestry, typically found in Europeans, appears in the region starting from the Iron Age. A 2022 analysis of maternal haplogroups from ancient samples from the Punic sites of Motya and Lilibeo in Sicily indicates that the Sicilian Phoenicians shared genetic similarities with Bronze Age samples from the Iberian Peninsula, Sardinia and Italy, and are not particularly close to other Phoenicians from Sardinia and the Iberian islands. The Phoenicians in Motya shared lesser genetic similarities with samples from the Bronze Age Levant. 
A genetic study published in Nature Communications in April 2025 examined the remains of 196 individuals from 14 sites traditionally identified as Phoenician and Punic in the central and western Mediterranean. The results suggest that during the earlier stages of the Phoenician colonization, the Punic demographic expansion was primarily driven by the spread of people with Sicilian-Aegean ancestry, while Levantine Phoenicians made little to no genetic contribution to Punic settlements in the central and western Mediterranean. North African ancestry became widespread in the Punic world only after 400 BC, suggesting that expanding Carthaginian influence facilitated this spread. However, this was a minority contributor of ancestry in all of the sampled sites, including in Carthage itself. Economy The Phoenicians served as intermediaries between the disparate civilizations that spanned the Mediterranean and Near East, facilitating the exchange of goods and knowledge, culture, and religious traditions. Their expansive and enduring trade network is credited with laying the foundations of an economically and culturally cohesive Mediterranean, which would be continued by the Greeks and especially the Romans. Phoenician ties with the Greeks ran deep. The earliest verified relationship appears to have begun with the Minoan civilization on Crete (1950–1450 BC), which together with the Mycenaean civilization (1600–1100 BC) is considered the progenitor of classical Greece. Archaeological research suggests that the Minoans gradually imported Near Eastern goods, artistic styles, and customs from other cultures via the Phoenicians. The Phoenicians were known for trading beer across their colonies around the Mediterranean, particularly along the North African coast. 
The trade expanded to regions beyond the Mediterranean, including the Basque Country, where it is believed that beer brewing was introduced by the Phoenicians. To Egypt the Phoenicians sold cedar logs for significant sums and, beginning in the eighth century BC, wine. The wine trade with Egypt is vividly documented by shipwrecks discovered in 1997 in the open sea 50 kilometres (30 mi) west of Ascalon, Israel. Pottery kilns at Tyre and Sarepta produced the large terracotta jars used for transporting wine. From Egypt, the Phoenicians bought Nubian gold. From elsewhere, they obtained other materials, perhaps the most crucial being silver, mostly from Sardinia and the Iberian Peninsula. Tin for making bronze "may have been acquired from Galicia by way of the Atlantic coast of southern Spain; alternatively, it may have come from northern Europe (Cornwall or Brittany) via the Rhone valley and coastal Massalia". Strabo states that there was a highly lucrative Phoenician trade with Britain for tin via the Cassiterides, whose location is unknown but may have been off the northwest coast of the Iberian Peninsula. Phoenicia lacked considerable natural resources other than its cedar wood. Timber was probably the earliest and most lucrative source of wealth; neither Egypt nor Mesopotamia had adequate wood sources. Unable to rely solely on this limited resource, the Phoenicians developed an industrial base manufacturing a variety of goods for both everyday and luxury use. The Phoenicians developed or mastered techniques such as glass-making, engraved and chased metalwork (including bronze, iron, and gold), ivory carving, and woodwork. The Phoenicians were early pioneers in mass production, and sold a variety of items in bulk. They set up trade networks to market their glassware and became its leading source in antiquity, shipping flasks, beads, and other glass objects across the Mediterranean in their vessels. 
Excavations of colonies in Spain suggest they also used the potter's wheel. Their exposure to a wide variety of cultures allowed them to manufacture goods for specific markets. The Iliad suggests Phoenician clothing and metal goods were highly prized by the Greeks. Specialized goods were designed specifically for wealthier clientele, including ivory reliefs and plaques, carved clam shells, sculpted amber, and finely detailed and painted ostrich eggs. The most prized Phoenician goods were fabrics dyed with Tyrian purple, which formed a major part of Phoenician wealth. The violet-purple dye was derived from the hypobranchial gland of the Murex marine snail, once profusely available in coastal waters of the eastern Mediterranean Sea but exploited to local extinction. The Phoenicians may have discovered the dye as early as 1750 BC. The Phoenicians established a second production center for the dye in Mogador, in present-day Morocco. The Phoenicians' exclusive command over the production and trade of the dye, combined with the labor-intensive extraction process, made it very expensive. Tyrian purple subsequently became associated with the upper classes. It soon became a status symbol in several civilizations, most notably among the Romans. Assyrian tribute records from the Phoenicians include "garments of brightly colored stuff" that most likely included Tyrian purple. While the designs, ornamentation, and embroidery used in Phoenician textiles were well-regarded, the techniques and specific descriptions are unknown. Mining operations in the Phoenician homeland were limited; iron was the only metal of any worth. The first large-scale mining operations by Phoenicians probably occurred in Cyprus, principally for copper. Sardinia may have been colonized almost exclusively for its mineral resources; Phoenician settlements were concentrated in the southern parts of the island, close to sources of copper and lead. 
Piles of scoria and copper ingots, which appear to predate Roman occupation, suggest the Phoenicians mined and processed metals on the island. The Iberian Peninsula was the richest source of numerous metals in antiquity, including gold, silver, copper, iron, tin, and lead. The output of silver during the Phoenician and Carthaginian occupation there was enormous. The Carthaginians relied on slave labor almost exclusively in their mining operations, and according to Rawlinson, because they likely continued the established practices of their predecessors in Iberia, the Phoenicians themselves probably also used slave labor. The most notable agricultural product was wine, which the Phoenicians helped propagate across the Mediterranean. The common grape vine may have been domesticated by the Phoenicians or Canaanites, although it most likely arrived from Transcaucasia via trade routes across Mesopotamia or the Black Sea. Vines grew readily in the coastal Levant, and wine was exported to Egypt as early as the Old Kingdom period (2686–2134 BC). Wine played an important part in Phoenician religion, serving as the principal beverage for offerings and sacrifice. An excavation of a small Phoenician town south of Sidon uncovered a wine factory used from at least the seventh century BC, which is believed to have been aimed at an overseas market. To prevent oxidation of their contents, amphorae were sealed with a disk plug made of pinewood and a mixture of resin and clay. The Phoenicians established vineyards and wineries in their colonies in North Africa, Sicily, France, and Spain, and may have taught winemaking to some of their trading partners. The ancient Iberians began producing wine from local grape varieties following their encounter with the Phoenicians. Iberian cultivars subsequently formed the basis of most western European wine. As early as 1200 BC, texts from Ugarit suggest that Canaanite merchant ships were capable of carrying cargoes weighing up to 450 tons. 
During the first millennium BC, the cargo capacity of Phoenician merchant ships ranged between 100 and 500 tons. The Phoenicians pioneered the use of locked mortise and tenon joints, known as Phoenician joints, to secure the planking of ship hulls underwater. This method involved cutting mortises into adjoining planks and inserting wooden tenons to join them, which were then secured with dowels. Examples of this technique include the Uluburun shipwreck (c. 1320 BC) and the Cape Gelidonya shipwreck (c. 1200 BC). The innovation spread across the Mediterranean and influenced Greek and Roman shipbuilding, with the Romans referring to it as coagmenta punicana. The Phoenicians were possibly the first to introduce the bireme. Fernand Braudel cites the bas-relief carvings on the walls of the palace of Nineveh which depict the Tyrian fleet fleeing the port of Tyre before the city was attacked by Sennacherib c. 700 BC. The Phoenicians sailed their biremes close to shore and only in fair weather. They have also been credited with developing the trireme by scholars such as Lucien Basch. Referring to archaeological evidence of ships depicted in the Nineveh relief, cylinder seals, and Phoenician coins, he argues that the trireme was invented in Sidon around 700 BC and later adopted by the Greeks. The classicist J. S. Morrison, a student of the trireme, quotes Thucydides' statement that triereis, or triremes, were said to have been built first at Corinth in Greece. Although he allows that the sculptor of the Nineveh relief credited the Phoenicians of 701 BC with one type of the vessel, which Morrison interprets as having three banks of oarsmen on each side in three tiers with the uppermost tier unmanned, he argues that there is no good reason why Thucydides' account should not be believed. The trireme was regarded as the most advanced vessel in the ancient Mediterranean world. The Phoenicians developed several other maritime inventions. 
The amphora, a type of container used for both dry and liquid goods, was an ancient Phoenician invention that became a standardized measurement of volume for close to two thousand years. The remnants of self-cleaning artificial harbors have been discovered in Sidon, Tyre, Atlit, and Acre. The first example of admiralty law also appears in the Levant. The Phoenicians continued to contribute to cartography into the Iron Age. In 2014, a 12 metres (39 ft) long Phoenician trading ship was found near Gozo island in Malta. Dated to around 700 BC, it is one of the oldest wrecks found in the Mediterranean. Fifty amphorae, used to contain wine and oil, were scattered nearby. Important cities and colonies The Phoenicians were not a nation in the political sense. However, they were organized into independent city-states that shared a common language and culture. The leading city-states were Tyre, Sidon, and Byblos. Rivalries were expected, but armed conflict was rare. Numerous other cities existed in the Levant alone, many probably unknown, including Beiruta (modern Beirut), Ampi, Amia, Arqa, Baalbek, Botrys, Sarepta, and Tripolis. From the late tenth century BC, the Phoenicians established commercial outposts throughout the Mediterranean, with Tyre founding colonies in Cyprus, Sardinia, Iberia, the Balearic Islands, Sicily, Malta, and North Africa. Later colonies were established beyond the Straits of Gibraltar, particularly on the Atlantic coast of Iberia. The Phoenicians may have explored the Canary Islands and the British Isles. Phoenician settlement was primarily concentrated in Cyprus, Sicily, Sardinia, Malta, northwest Africa, the Balearic Islands, and southern Iberia. To facilitate their commercial ventures, the Phoenicians established numerous colonies and trading posts along the coasts of the Mediterranean. Phoenician city states generally lacked the numbers or even the desire to expand their territory overseas. 
Few colonies had more than 1,000 inhabitants; only Carthage and some nearby settlements in the western Mediterranean would grow larger. A major motivating factor was competition with the Greeks, who began expanding across the Mediterranean during the same period. Though largely peaceful rivals, their respective settlements in Crete and Sicily did clash intermittently. The earliest Phoenician settlements outside the Levant were on Cyprus and Crete, gradually moving westward towards Corsica, the Balearic Islands, Sardinia, and Sicily, as well as on the European mainland in Cádiz and Málaga. The first Phoenician colonies in the western Mediterranean were along the northwest African coast and on Sicily, Sardinia and the Balearic Islands. Tyre led the way in settling or controlling coastal areas. Phoenician colonies were fairly autonomous. At most, they were expected to send annual tribute to their mother city, usually in the context of a religious offering. However, in the seventh century BC the western colonies came under the control of Carthage, which was exercised directly through appointed magistrates. Carthage continued to send annual tribute to Tyre for some time after its independence. Society and culture Since very little of the Phoenicians' writings has survived, much of what is known about their culture and society comes from accounts by contemporary civilizations or inferences from archaeological discoveries. The Phoenicians had much in common with other Canaanites, including language, religion, social customs, and a monarchical political system centered around city-states. Their culture, economy, and daily life were heavily centered on commerce and maritime trade. Their propensity for seafaring brought them into contact with many other civilizations. The Phoenician city-states were highly independent, competing with each other. Formal alliances between city-states were rare. 
The relative power and influence of city-states varied over time. Sidon was dominant between the 12th and 11th centuries BC and influenced its neighbors. However, by the tenth century BC, Tyre rose to become the most powerful city. At least in its earlier stages, Phoenician society was highly stratified and predominantly monarchical. Hereditary kings usually governed with absolute power over civic, commercial, and religious affairs. They often relied upon senior officials from the noble and merchant classes; the priesthood was a distinct class, usually of royal lineage or leading merchant families. The King was considered a representative of the gods and carried many obligations and duties concerning religious processions and rituals. Priests were thus highly influential and often became intertwined with the royal family. Phoenician kings did not commemorate their reign through sculptures or monuments. Their wealth, power, and accomplishments were usually conveyed through ornate sarcophagi, like that of Ahiram of Byblos. The Phoenicians kept records of their rulers in tomb inscriptions, which are among the few primary sources still available. Historians have determined a clear line of succession over centuries for some city-states, notably Byblos and Tyre. Starting as early as the 15th century BC, Phoenician leaders were "advised by councils or assemblies which gradually took greater power". In the sixth century BC, during the period of Babylonian rule, Tyre briefly adopted a system of government consisting of a pair of judges with authority roughly equivalent to that of the Roman consuls, known as sufetes (shophets), who were chosen from the most powerful noble families and served short terms. In the fourth century BC, when the armies of Alexander the Great approached Tyre, they were met not by its King but by representatives of the commonwealth of the city. Similarly, historians at the time describe the "inhabitants" or "the people" of Sidon making peace with Alexander. 
When the Macedonians sought to appoint a new king over Sidon, the citizens nominated their candidate. After the King and council, the two most important political positions in virtually every Phoenician city-state were governor and commander of the army. Details regarding the duties of these offices are sparse. However, it is known that the governor was responsible for collecting taxes, implementing decrees, supervising judges, and ensuring the administration of law and justice. As warfare was rare among the largely mercantile Phoenicians, the army's commander was generally responsible for ensuring the defense and security of the city-state and its hinterlands. The Phoenicians had a system of courts and judges that resolved disputes and punished crimes based on a semi-codified body of laws and traditions. Laws were implemented by the state and were the responsibility of the ruler and certain designated officials. Like other Levantine societies, laws were harsh and biased, reflecting the social stratification of society. The murder of a commoner was treated as less severe than that of a nobleman, and the upper classes had the most rights; the wealthy often escaped punishment by paying a fine. Free men of any class could represent themselves in court and had more rights than women and children, while slaves had no rights. Men could often deflect punishment to their wives, children, or slaves, even having them serve their sentence in their place. Lawyers eventually emerged as a profession, representing those who could not plead their own cases. As in neighboring societies at the time, penalties for crimes were often severe, usually reflecting the principle of reciprocity; for example, the killing of a slave would be punished by having the offender's slave killed. Imprisonment was rare, with fines, exile, corporal punishment, and execution the main remedies. As with most aspects of Phoenician civilization, there are few records of their military or approach to warfare. 
Compared to most of their neighbors, the Phoenicians generally had little interest in conquest and were relatively peaceful. The wealth and prosperity of all their city-states rested on foreign trade, which required good relations and a certain degree of mutual trust. They also lacked the territory and agricultural base to support a population large enough to raise an army of conquest. Instead, each city had an army commander in charge of a defensive garrison. However, the specifics of the role, or city defense, are unknown. The Phoenician language was a member of the Canaanite branch of the Northwest Semitic languages. Its descendant language spoken in the Carthaginian Empire is termed Punic. Punic was still spoken in the fifth century AD and known to St. Augustine of Hippo. Around 1050 BC, the Phoenicians developed a script for writing their own language. The Canaanite-Phoenician alphabet consists of 22 letters, all consonants (and is thus strictly an abjad). It is believed to be a continuation of the Proto-Sinaitic (or Proto-Canaanite) script attested in the Sinai and in Canaan in the Late Bronze Age. Through their maritime trade, the Phoenicians spread the use of the alphabet to Anatolia, North Africa, and Europe. The name Phoenician is by convention given to inscriptions beginning around 1050 BC, because Phoenician, Hebrew, and other Canaanite dialects were largely indistinguishable before that time. Phoenician inscriptions are found in Lebanon, Syria, Israel, Palestine, Cyprus and other locations, as late as the early centuries of the Christian era. The alphabet was adopted and modified by the Greeks probably in the eighth century BC. This most likely did not occur in a single instance but via a drawn-out process of long-term commercial exchange. According to Alessandro Pierattini, the Apollo sanctuary at Eretria is considered one of the places where the Greeks might have first adopted the Phoenician alphabet. 
The legendary Phoenician hero Cadmus is credited with bringing the alphabet to Greece, but it is more plausible that Phoenician immigrants brought it to Crete, whence it gradually diffused northwards. Phoenician art was largely centered on ornamental objects, particularly jewelry, pottery, glassware, and reliefs. Large sculptures were rare; figurines were more common. Phoenician goods have been found from Spain and Morocco to Russia and Iraq; much of what is known about Phoenician art is based on excavations outside Phoenicia proper. Phoenician art was highly influenced by many cultures, primarily Egypt, Greece, and Assyria. Greek inspiration was particularly pronounced in pottery, while Egyptian themes were most reflected in bronze and ivory work. Phoenician art also differed from its contemporaries in its continuance of Bronze Age conventions well into the Iron Age, such as terracotta masks. Phoenician artisans were known for their skill with wood, ivory, bronze, and textiles. In the Old Testament, a craftsman from Tyre is commissioned to build and decorate the legendary Solomon's Temple in Jerusalem, which "presupposes a well-developed and highly respected craft industry in Phoenicia by the mid-tenth century BC". The Iliad mentions the embroidered robes of Priam's wife, Hecabe, as "the work of Sidonian women" and describes a mixing bowl of chased silver as "a masterpiece of Sidonian craftsmanship". The Assyrians appear to have valued Phoenician ivory work in particular, collecting vast quantities in their palaces. Phoenician art appears to have been indelibly tied to Phoenician commercial interests. They crafted goods to appeal to particular trading partners, tailoring them not only to different cultures but even to different socioeconomic classes. Women in Phoenicia took part in public events and religious processions, with depictions of banquets showing them casually sitting or reclining with men, dancing, and playing music. 
In most contexts, women were expected to dress and behave more modestly than men; female figures are almost always portrayed as clothed from head to feet, with the arms sometimes covered as well. Although they rarely had political power, women took part in community affairs, including in the popular assemblies that emerged in some city-states. At least one woman, Unmiashtart, is recorded to have ruled Sidon in the fifth century BC. The two most famous Phoenician women are political figures: Jezebel, portrayed in the Bible as the wicked princess of Sidon, and Dido, the semi-legendary founder and first queen of Carthage. In Virgil's epic poem, the Aeneid, Dido is described as having been the co-ruler of Tyre, using cleverness to escape the tyranny of her brother Pygmalion and to secure an ideal site for Carthage. Religion The religious practices and beliefs of the Phoenicians were broadly shared with those of their neighbors in Canaan, which in turn shared characteristics common throughout the ancient Semitic world. Religious rites were primarily for city-state purposes; payment of taxes by citizens was considered in the category of religious sacrifices. The Phoenician sacred writings known to the ancients have been lost. Several Canaanite practices are alleged in ancient sources and mentioned by scholars, such as temple prostitution and child sacrifice. Special sites known as "Tophets" were allegedly used by the Phoenicians "to burn their sons and their daughters in the fire", and are condemned in the Hebrew Bible, particularly in Jeremiah 7:30–32, and in 2 Kings 23:10 and 17:17. Notwithstanding differences, cultural and religious similarities persisted between the ancient Hebrews and the Phoenicians. Biblical traditions state that the Tribe of Asher lived amongst local Phoenicians, and that David and Solomon gave Phoenicia full political autonomy due to their supremacy in shipping and trade. 
Canaanite religious mythology does not appear as elaborate as that of their Semitic cousins in Mesopotamia. In Canaan the supreme god was called El (𐤀𐤋, 'god'). The son of El was Baal (𐤁𐤏𐤋, 'master', 'lord'), a powerful dying-and-rising thunder god. Other gods were called by royal titles, such as Melqart, meaning 'king of the city', or Adonis for 'lord'. Such epithets may often have been merely local titles for the same deities. The Semitic pantheon was well-populated; which god became primary evidently depended on the exigencies of a particular city-state. Melqart was prominent throughout Phoenicia and overseas, as was Astarte, a fertility goddess with regal and matronly aspects. Religious institutions in Tyre called marzeh (𐤌𐤓𐤆𐤄, 'place of reunion') did much to foster social bonding and "kin" loyalty. Marzeh held banquets for their membership on festival days, and many developed into elite fraternities. Each marzeh nurtured congeniality and community through a series of ritual meals shared among trusted kin in honor of deified ancestors. In Carthage, which had developed a complex republican system of government, the marzeh may have played a role in forging social and political ties among citizens; Carthaginians were divided into different institutions that were solidified through communal feasts and banquets. Such festival groups may also have composed the voting cohort for selecting members of the city-state's Assembly. The Phoenicians made votive offerings to their gods, namely in the form of figurines and pottery vessels. Figurines and votive fragments have been found in ceremonial favissae, underground storage spaces for sacred objects, in the temple grounds of the Temple of the Obelisks in Byblos, the Phoenician sanctuary of Kharayeb in the hinterland of Tyre, and the Temple of Eshmun north of Sidon, among others. 
Votive gifts were also recovered all over the Mediterranean, often spanning centuries between them, suggesting they were cast into the sea to ensure safe travels. Since the Phoenicians were predominantly a seafaring people, some sources have speculated that many of their rituals were performed at sea or aboard ships. However, the specific nature of these practices is unknown. On land they were renowned temple builders, perhaps inspiring elements of the architecture of the First Temple, the Temple of Solomon. According to William G. Dever, an archaeologist and scholar of the Old Testament, features of the Solomonic Temple such as its longitudinal tripartite plan, fine furnishings, and elaborate decorative motifs were clearly inspired by Phoenician examples. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Prime_Minister%27s_Office_(Israel)] | [TOKENS: 216] |
Contents Prime Minister's Office (Israel) The Israeli Prime Minister's Office (Hebrew: מִשְׂרָד רֹאשׁ הַמֶּמְשָׁלָה, Misrad Rosh HaMemshala) is the Israeli cabinet department responsible for coordinating the work of all governmental ministry offices and assisting the Israeli prime minister in their daily work. The Prime Minister's Office is responsible for formulating the Israeli cabinet's policy, conducting cabinet meetings, managing foreign diplomatic relations with countries around the world, and supervising and overseeing the implementation of the cabinet's policy. In addition, it is in charge of other governmental bodies that fall directly under the prime minister's responsibilities. Unlike in many other countries, the Office of the Prime Minister of Israel does not serve as the prime minister's residence. The official residence of the prime minister of Israel is Beit Aghion, in Jerusalem's Rehavia neighborhood. Subdivisions See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-152] | [TOKENS: 4314] |
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and its stable release is expected in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language, which was inspired by SETL, capable of exception handling and interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. 
Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, featuring many new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language, and made a few (considered very minor) backward-incompatible changes. As of January 2026, Python 3.14.3 is the latest stable release. Older 3.x versions continued to receive security updates through Python 3.9.24 and then 3.9.25, the final release in the 3.9 series. Python 3.10 has been, since November 2025, the oldest supported branch. 
Python 3.15 has had an alpha release, and an official downloadable Python 3.14 executable is available for Android. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as these: However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict over adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. 
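The functional tools mentioned above can be sketched in a few lines (the variable names here are illustrative only):

```python
from functools import reduce
from itertools import count, islice

# Built-in functional tools: map and filter return lazy iterators,
# while functools.reduce folds a sequence down to a single value.
squares = list(map(lambda x: x * x, range(5)))          # [0, 1, 4, 9, 16]
evens = list(filter(lambda x: x % 2 == 0, range(10)))   # [0, 2, 4, 6, 8]
total = reduce(lambda a, b: a + b, range(10))           # 45

# The same ideas expressed with a list comprehension and a generator expression.
squares_again = [x * x for x in range(5)]
total_again = sum(x for x in range(10))

# itertools provides lazy building blocks in the same functional tradition.
first_odds = list(islice((n for n in count() if n % 2), 3))  # [1, 3, 5]
```

In idiomatic Python the comprehension forms are usually preferred over map and filter with lambdas, since they read as plain expressions.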
This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python claims to strive for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do .. while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli, a Fellow at the Python Software Foundation and Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance. For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. Also, it is possible to transpile to other languages. However, this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or only a restricted subset of Python is compiled (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. 
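The three common ways of formatting a string literal alluded to above all produce the same result:

```python
name, times = "world", 3

a = "hello %s, %d times" % (name, times)      # printf-style interpolation
b = "hello {}, {} times".format(name, times)  # the str.format method
c = f"hello {name}, {times} times"            # formatted string literal (3.6+)

assert a == b == c
```

Current style guidance generally favors f-strings for new code, but all three forms remain supported.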
For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way; but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Python's statements include the following: The assignment statement (=) binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing—in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. 
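A minimal sketch of the indentation-delimited blocks and dynamic rebinding described above (the function name classify is hypothetical):

```python
# Indentation alone delimits blocks; there are no braces or 'end' keywords.
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

# A name is a generic reference holder: it may be rebound at any time
# to an object of any type.
x = 42            # x refers to an int
x = "forty-two"   # now to a str
x = [4, 2]        # and now to a list
```

Dedenting back to the left margin ends the function body, so the visual structure matches the semantic structure.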
However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels. Python's expressions include the following: In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This distinction leads to duplicating some functionality. A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. 
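A small sketch of optional annotations combined with duck typing (the function name is illustrative):

```python
# Optional type annotations: hints for external tools, not runtime checks.
def double(x: int) -> int:
    return x * 2

assert double(3) == 6

# Duck typing means the annotation is not enforced at runtime: any object
# supporting the * operator works, so passing a str simply repeats it.
assert double("ab") == "abab"
```

A static checker such as mypy would flag the string call as a type error, but the interpreter itself runs it without complaint.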
These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a typing module that provides several type names for use in annotations. Also, mypy includes a compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Also, Python offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, and the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the infix operators + and - can also be unary, representing positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: In Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result must lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. 
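These arithmetic rules can be verified directly at the interpreter:

```python
# True division always yields a float; floor division rounds toward -infinity.
assert 7 / 2 == 3.5
assert 7 // 2 == 3
assert -7 // 2 == -4        # floor(-3.5), not truncation toward zero

# The remainder takes the sign of the divisor, so the identity
# b*(a//b) + a%b == a holds for positive and negative operands alike.
assert 4 % -3 == -2
for a in (7, -7):
    for b in (3, -3):
        assert b * (a // b) + a % b == a

# round() breaks ties to the nearest even integer ("banker's rounding").
assert round(1.5) == 2
assert round(2.5) == 2
```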
Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. Python allows Boolean expressions that contain multiple equality relations to be consistent with general usage in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. Here is an example of a function that prints its inputs: To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header. Code examples "Hello, World!" program: Program to calculate the factorial of a non-negative integer: Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. 
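Minimal versions of the examples described above (a function that prints its inputs with a default parameter value, the "Hello, World!" program, and a factorial program) might look like:

```python
# A function whose parameter has a default value, used when no argument is given.
def greet(name, greeting="Hello"):
    print(f"{greeting}, {name}!")

greet("World")             # prints "Hello, World!"

# "Hello, World!" program:
print("Hello, World!")

# Factorial of a non-negative integer:
def factorial(n):
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))        # prints 120
```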
Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. Development environments Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. Also, CPython is bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add additional capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are web browser-based IDEs, such as the following environments: Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions—e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python. CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). 
Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, with unofficial support for VMS. Platform portability was one of Python's earliest priorities. During development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading, and Python now supports far fewer operating systems than in the past, many outdated ones having been dropped. All alternative implementations have at least slightly different semantics. For example, an alternative may include unordered dictionaries, in contrast to other current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full C Python API. Creating an executable with Python often is done by bundling an entire Python interpreter into the executable, which causes binary sizes to be massive for small programs, yet there exist implementations that are capable of truly compiling Python. Alternative implementations include the following: Stackless Python is a significant fork of CPython that implements microthreads. This implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported: There are several compilers/transpilers to high-level object languages; the source language is unrestricted Python, a subset of Python, or a language similar to Python: There are also specialized compilers: Some older projects existed, as well as compilers not designed for use with Python 3.x and related syntax: A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. 
In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance, despite the inherent slowness of an interpreted language. These approaches include the following strategies or tools: Language development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented: Many alpha, beta, and release-candidate versions are also released as previews and for testing before final releases. Although there is a rough schedule for releases, they are often delayed if the code is not ready. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. Also, there are special Python mentoring programs, such as PyLadies. 
Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas". Languages influenced by Python See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-228] | [TOKENS: 10515] |
Contents Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026, Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated to Canada in 1989; he holds Canadian citizenship because his mother was born there. He received bachelor's degrees from the University of Pennsylvania in 1997 before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and its leadership in the AI boom of the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package worth $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. 
After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for spreading COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published in 2025 and 2026 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth. His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. 
Although both Elon and Errol previously stated that Errol was a part owner of a Zambian emerald mine, in 2023 Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol Musk had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies" where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten so severely that he was hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books and has attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. 
Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School. He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided to join the Internet boom of the 1990s, applying for a job at Netscape but reportedly never receiving a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H-1B. According to numerous former business associates and shareholders, however, Musk said at the time that he was on a student visa. 
Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture at a small rented office in Palo Alto. In an interview with Rolling Stone, Musk denied that they started the company with funds borrowed from Errol Musk, though in a tweet he acknowledged that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with online bank Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. 
Due to resulting technological issues and the lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000. Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk, the largest shareholder with 11.72% of shares, received $175.8 million (equivalent to $320,000,000 in 2025). In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth-chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180,000,000 in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and chief engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, SpaceX was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, in 2015 SpaceX successfully landed the first stage of a Falcon 9 on a land platform. 
Later landings were achieved on autonomous spaceport drone ships, an ocean-based recovery platform. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, the Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025, over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025). During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement. 
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm. Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass production all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several commercially successful electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over him tweeting that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second largest provider of solar power systems in the United States. 
In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials. Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials—which have caused the deaths of some monkeys—have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. 
Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink. In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021. Local officials have approved further expansions of the tunnel system. As early as 2017, Musk had expressed interest in buying Twitter and questioned the platform's commitment to freedom of speech. By April 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder. Musk initially agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, on April 14, 2022, Musk made a $43 billion offer to buy Twitter. By the end of April, Musk had reached an agreement to buy the company for approximately $44 billion, including approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase on October 27, 2022. 
Immediately after the acquisition, Musk fired several top Twitter executives, including CEO Parag Agrawal, and became CEO himself. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk loosened content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of Hunter Biden's laptop controversy in the lead-up to the 2020 presidential election. After a Twitter poll, Musk promised to step down as CEO; five months later, he did so, transitioning to the roles of executive chairman and chief technology officer (CTO). X has continued to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence some of his critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks (which hinders visibility and is considered a form of shadow banning) or by suspending their accounts without justification. Other activities In August 2013, Musk announced plans for a version of a vactrain and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence, intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. 
Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets and the consequent fossil fuel usage have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter, and temporarily banned the accounts of journalists who posted stories regarding the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content but framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example, the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. 
Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 Texas's 34th congressional district special election. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign. In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign, and hosted DeSantis's campaign announcement on a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it was a mistake on the Democrats' part not to invite Musk to a White House electric vehicle event organized in August 2021 and featuring executives from General Motors, Ford and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." 
Fortune remarked that this was a nod to United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and guessed that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he said Biden's was "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen, and other capitalists actually flourished under Biden, but that its leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying, "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally, and promoted conspiracy theories and falsehoods about Democrats, election fraud, and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because he found it boring; the organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. 
An NBC News analysis found he had boosted far-right political movements to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023. During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or a fascist Roman salute. He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided on partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, with Musk accepting the offer. In November and December 2024, Musk suggested that the organization could help to cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role was never clearly defined. 
In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE. A federal judge has ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly in response to DOGE, has attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He has prioritized secrecy within the organization and has accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by this time, most of them of children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when his 130-day term as a special government employee expired, with a White House official confirming that Musk's offboarding from the Trump administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025. After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. 
A feud began between Musk and Trump, most notably when Musk posted on X (formerly Twitter) on June 5, 2025, that Trump "is in the Epstein files. That is the real reason they have not been made public", alleging ties to sex offender Jeffrey Epstein. Trump responded on Truth Social stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars, repeatedly pushing for humanity to colonize Mars in order to become an interplanetary species and lower the risk of human extinction. 
Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While he has described himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and has been described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024. Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was subsequently attacked as being responsible for spreading misinformation and amplifying the far-right. He has also voiced his support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S. 
Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private. The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down as Tesla chairman for three years but was able to remain as CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the previous agreement details, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla stock price ("too high imo") violated the agreement. Freedom of Information Act (FOIA)-released records showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded. 
McCormick called the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked about his experience growing up with Asperger's syndrome in a TED2022 conference in Vancouver, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ... but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated he uses doctor-prescribed ketamine for occasional depression and that he doses "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs, and that if drugs somehow improved his productivity, "I would definitely take them!". 
The New York Times' investigations revealed Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concerns from close associates who have become troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has said have a "restoring effect" that helps his "mental calibration". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. Musk has justified the boosting by claiming that all top accounts do it, so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and specified the inclusion of the historical figure Yasuke in the Assassin's Creed game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that they were focused on producing a game, not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest that the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000.
In 2002, their first child Nevada Musk died of sudden infant death syndrome at the age of 10 weeks. After his death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year. After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations as it contained characters that are not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the pregnancy, Musk confirmed reports that the couple were "semi-separated" in September 2021; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Elon Musk has taken X Æ A-Xii to multiple official events in Washington, D.C. during Trump's second term in office. 
Also in July 2022, The Wall Street Journal reported that Musk allegedly had an affair with Nicole Shanahan, the wife of Google co-founder Sergey Brin, in 2021, leading to their divorce the following year. Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported Musk bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognized as the child's father. On March 31, 2025, Musk wrote that, while he was unsure if he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from the Wall Street Journal indicated that $1 million of these payments to St. Clair were structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island.
In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas Day in 2012, Musk emailed Epstein asking "Do you have any parties planned? I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans for coming to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house, to which Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded by stating "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 if he introduced Epstein to Mark Zuckerberg, Musk responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred.
Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; in November 2020, around 75% of his wealth was derived from Tesla stock, although he describes himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021; $400 billion in December 2024; $500 billion in October 2025; $600 billion in mid-December 2025; $700 billion later that month; and $800 billion in February 2026. In November 2025, a Tesla pay package worth potentially $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Public image Although his ventures have been highly influential within their separate industries starting in the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions, while also often making controversial statements, in contrast to other billionaires who prefer reclusiveness to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn condemnation for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like British survivors of grooming gangs.
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president" or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021. Musk was selected as Time's "Person of the Year" for 2021. Then Time editor-in-chief Edward Felsenthal wrote that, "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too." |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:Role-playing_game_terminology] | [TOKENS: 64] |
Category:Role-playing game terminology Terms used in role-playing games. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-119] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. 
The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. 
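The two principal name spaces described above can be seen in a single standard resolver call, which maps a DNS name to an IP address. A minimal sketch using Python's standard library; the loopback name "localhost" is used here (rather than any name from the article) so it works without network access:

```python
import socket

# Resolve a hostname (the DNS name space) to an IP address (the IP
# address name space) via the operating system's standard resolver.
hostname = "localhost"
ip_address = socket.gethostbyname(hostname)

# An IPv4 address is four decimal octets separated by dots.
octets = ip_address.split(".")
```

The same call works for public names such as "example.org" when a network connection and a configured resolver are available.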
History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. 
In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. 
Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use, one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic.
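The request/response model that Berners-Lee's browser and server established can be sketched with Python's standard library. The handler and page below are illustrative inventions, not the original CERN software; the sketch runs a throwaway server on the loopback interface and fetches one page from it, the same GET-a-document exchange a web browser performs:

```python
import http.server
import threading
import urllib.request

# A minimal HTTP server: answers every GET with one hard-coded HTML page.
class HelloHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello, Web</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

# Bind to an ephemeral port on the loopback interface and serve
# requests on a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser" side: issue one HTTP GET and read the response.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    status = resp.status
    page = resp.read()

server.shutdown()
```

Modern HTTP adds headers, status codes, and persistent connections, but the shape of the exchange is unchanged from HTTP 0.9: the client names a resource, the server returns a document.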
As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit growth characteristics similar to the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.
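The growth rates quoted above are easy to relate: a quantity that doubles every 18 months grows by roughly 59% per year, while the late-1990s traffic estimate of 100% per year is a doubling every 12 months. A quick check of that arithmetic:

```python
# Doubling every 18 months: the annual growth factor is 2**(12/18).
annual_factor = 2 ** (12 / 18)       # ~1.587, i.e. about 59% per year

# The same rate compounded over a decade (120 months):
ten_year_multiple = 2 ** (120 / 18)  # ~102x

# 100% growth per year (the late-1990s traffic estimate) is a
# doubling every 12 months, so ten years gives 2**10 = 1024x.
decade_at_100_percent = 2 ** 10
```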
Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018, 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas.
However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. 
Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly. Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications.
Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. 
Other video sharing websites include Vimeo, Instagram and TikTok. Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home. The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to easily form, cheaply communicate, and share ideas. 
One product of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice). Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. The Internet also enables cloud computing, virtual private networks, remote desktops, and remote work. The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online, such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Out of naivety, children may also post personal information about themselves online, which could put them or their families at risk unless they are warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites. Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. 
Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread. Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, such as improving scan-reading skills while interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet, such as maps and location-aware services, may serve to reinforce economic inequality and the digital divide. Electronic commerce may be responsible for consolidation and the decline of mom-and-pop brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. 
At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing to carry out their missions, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize during the Arab Spring by enabling activists to organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. 
E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners who may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards. In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq. Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. 
HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP). VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to a file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. 
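As an illustration of the request/response exchange that HTTP defines, the following sketch uses only the Python standard library: a throwaway local server stands in for a web server, and the handler class, served HTML, and loopback address are all illustrative assumptions rather than anything from the article.

```python
# Minimal sketch of an HTTP GET exchange, standard library only.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = b"<html><body>hello</body></html>"
        self.send_response(200)                        # HTTP status line
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, fmt, *args):                 # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), HelloHandler)    # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: a browser performs essentially this exchange for each page.
with urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    status = resp.status                               # 200 on success
    body = resp.read().decode("utf-8")

server.shutdown()
print(status)
print(body)
```

A real browser adds caching, cookies, and TLS on top, but the status-line/headers/body shape of the exchange is the same.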
The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. 
Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region. The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), the Internet Research Task Force (IRTF), and the Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues. Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. 
However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers. Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high-speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy. An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET. Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology. Grassroots efforts have led to wireless community networks. 
Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for the fiber-optic submarine communication cables that interconnect the Internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the link layer, the internet layer, the transport layer, and the application layer. The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements) and by technical specifications or protocols that describe the exchange of data over the network. For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol, or configured manually. The Domain Name System (DNS) converts user-entered domain names (e.g. 
"en.wikipedia.org") into IP addresses. Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10^9) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol. Network infrastructure, however, has been lagging in this development. A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. 
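The 32-bit and 128-bit address formats described above can be inspected with Python's standard ipaddress module. As a minimal sketch, the addresses below are drawn from the documentation-reserved example ranges, not from real hosts:

```python
import ipaddress

v4 = ipaddress.ip_address("198.51.100.7")  # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")   # documentation-range IPv6 address

# IPv4 addresses are 32-bit numbers; IPv6 addresses are 128-bit.
print(v4.max_prefixlen)  # 32
print(v6.max_prefixlen)  # 128

# Beneath the dotted/colon notation, both are fixed-length integers.
print(int(v4))           # 3325256711
print(2 ** 32)           # 4294967296, the "≈4.3 billion" IPv4 address space
```

The integer view is what routers actually operate on; the dotted-decimal and colon-hexadecimal forms are purely human-readable notations.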
The rest field is an identifier for a specific host or network interface. The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2^96 addresses, having a 32-bit routing prefix. For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24. Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet. The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. 
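The CIDR and netmask arithmetic described above can be checked with Python's standard ipaddress module, using the same documentation example networks (198.51.100.0/24 and 2001:db8::/32) as a quick sketch:

```python
import ipaddress

# A /24 IPv4 network: 24 prefix bits, 8 host bits.
net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)        # 255.255.255.0, the mask that yields the routing prefix
print(net.num_addresses)  # 256: addresses 198.51.100.0 through 198.51.100.255
print(ipaddress.ip_address("198.51.100.255") in net)  # True: inside the block

# A /32 IPv6 block leaves 128 - 32 = 96 bits for the rest field.
v6net = ipaddress.ip_network("2001:db8::/32")
print(v6net.num_addresses == 2 ** 96)  # True
```

The membership test is exactly the bitwise-AND comparison described in the text, just wrapped in a library call.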
Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare using similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. 
telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. 
Many free or commercially available software programs, called content-control software, are available to users to block offensive websites on individual computers or networks, in order to limit children's access to pornographic material or depictions of violence. Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: global Internet traffic volume in petabytes per month, 1990–2015] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for. An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. 
Estimates of the Internet's electricity usage have been the subject of controversy: a 2014 peer-reviewed research paper found claims published in the literature during the preceding decade differing by a factor of 20,000, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide, as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Wearable_computer] | [TOKENS: 5349] |
Contents Wearable computer A wearable computer, also known as a body-borne computer or wearable, is a computing device worn on the body. The definition of 'wearable computer' may be narrow or broad, extending to smartphones or even ordinary wristwatches. Wearables may be for general use, in which case they are just a particularly small example of mobile computing. Alternatively, they may be for specialized purposes such as fitness trackers. They may incorporate special sensors such as accelerometers, heart rate monitors, or, on the more advanced side, electrocardiogram (ECG) and blood oxygen saturation (SpO2) monitors. The definition of wearable computers also extends to novel user interfaces such as Google Glass, an optical head-mounted display controlled by gestures. It may be that specialized wearables will evolve into general all-in-one devices, as happened with the convergence of PDAs and mobile phones into smartphones. Wearables are typically worn on the wrist (e.g. fitness trackers), hung from the neck (like a necklace), strapped to the arm or leg (electronic tagging), or worn on the head (as glasses or a helmet), though some have been located elsewhere (e.g. on a finger or in a shoe). Devices carried in a pocket or bag, such as smartphones and, before them, pocket calculators and PDAs, may or may not be regarded as 'worn'. Wearable computers share technical issues with other forms of mobile computing, such as batteries, heat dissipation, software architectures, wireless and personal area networks, and data management. Many wearable computers are active all the time, e.g. processing or recording data continuously. Applications Wearable computers are not limited to devices such as fitness trackers that are worn on the wrist; they also include wearables such as cardiac pacemakers and other prosthetics. 
They are used most often in research that focuses on behavioral modeling, health monitoring systems, and IT and media development, where the person wearing the computer actually moves or is otherwise engaged with his or her surroundings. Wearable computers have been used in a wide range of applications. Wearable computing is the subject of active research, especially regarding form factor and location on the body, with areas of study including user interface design, augmented reality, and pattern recognition. The use of wearables for specific applications, such as compensating for disabilities or supporting elderly people, steadily increases. Operating systems A small number of operating systems dominate wearable computing. History Due to the varied definitions of wearable and computer, the first wearable computer could be as early as the first abacus on a necklace, a 16th-century abacus ring, a wristwatch and 'finger-watch' owned by Queen Elizabeth I of England, or the covert timing devices hidden in shoes to cheat at roulette by Thorp and Shannon in the 1960s and 1970s. However, a general-purpose computer is not merely a time-keeping or calculating device, but rather a user-programmable item for arbitrarily complex algorithms, interfacing, and data management. By this definition, the wearable computer was invented by Steve Mann in the late 1970s: Steve Mann, a professor at the University of Toronto, was hailed as the father of the wearable computer and the ISSCC's first virtual panelist, by moderator Woodward Yang of Harvard University (Cambridge Mass.). — IEEE ISSCC 8 Feb. 2000 The development of wearable items has taken several steps of miniaturization, from discrete electronics through hybrid designs to fully integrated designs, where just one processor chip, a battery, and some interface conditioning items make up the whole unit. Queen Elizabeth I of England received a watch from Robert Dudley in 1571, as a New Year's present; it may have been worn on the forearm rather than the wrist. 
She also possessed a 'finger-watch' set in a ring, with an alarm that prodded her finger. The Qing dynasty saw the introduction of a fully functional abacus on a ring, which could be used while it was being worn. In 1961, mathematicians Edward O. Thorp and Claude Shannon built some computerized timing devices to help them win a game of roulette. One such timer was concealed in a shoe and another in a pack of cigarettes. Various versions of this apparatus were built in the 1960s and 1970s. Thorp refers to himself as the inventor of the first "wearable computer". In other variations, the system was a concealed cigarette-pack-sized analog computer designed to predict the motion of roulette wheels. A data-taker would use microswitches hidden in his shoes to indicate the speed of the roulette wheel, and the computer would indicate an octant of the roulette wheel to bet on by sending musical tones via radio to a miniature speaker hidden in a collaborator's ear canal. The system was successfully tested in Las Vegas in June 1961, but hardware issues with the speaker wires prevented it from being used beyond test runs. This was not a wearable computer because it could not be re-purposed during use; rather it was an example of task-specific hardware. This work was kept secret until it was first mentioned in Thorp's book Beat the Dealer (revised ed.) in 1966 and later published in detail in 1969. Pocket calculators became mass-market devices in 1970, starting in Japan. Programmable calculators followed in the late 1970s, being somewhat more general-purpose computers. The HP-01 algebraic calculator watch by Hewlett-Packard was released in 1977. A camera-to-tactile vest for the blind, developed by C.C. Collins in 1977, converted images into a 1024-point, ten-inch square tactile grid. The 1980s saw the rise of more general-purpose wearable computers. 
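The timing scheme described for the roulette device can be sketched in code. This is a hypothetical illustration only, not the Thorp–Shannon design (theirs was an analog computer that also modeled the wheel's deceleration): assuming constant wheel speed, two clicks marking successive passes of a reference point give the rotation period, from which the octant under that point at any later moment follows.

```python
# Hypothetical sketch of octant prediction from two timing clicks,
# in the spirit of the roulette device described above (not the
# actual Thorp-Shannon analog design, which modeled deceleration).
def predict_octant(t_click1: float, t_click2: float, t_now: float) -> int:
    """Clicks mark two successive passes of a reference point on the
    wheel; assuming constant speed, return which of the 8 octants
    (0-7) is under that point at time t_now."""
    period = t_click2 - t_click1                     # one revolution
    phase = ((t_now - t_click2) % period) / period   # fraction of a turn
    return int(phase * 8)                            # map to an octant

print(predict_octant(0.0, 2.0, 5.5))  # → 6
```

The real difficulty, as the text notes, was not this arithmetic but the input and output hardware: covert microswitches for the clicks and a radio link to an earpiece for the result.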
In 1981, Steve Mann designed and built a backpack-mounted 6502-based wearable multimedia computer with text, graphics, and multimedia capability, as well as video capability (cameras and other photographic systems). Mann went on to be an early and active researcher in the wearables field, especially known for his 1994 creation of the Wearable Wireless Webcam, the first example of lifelogging. Seiko Epson released the RC-20 Wrist Computer in 1984. It was an early smartwatch, powered by a computer on a chip. In 1989, Reflection Technology marketed the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. This display gave rise to several hobbyist and research wearables, including Gerald "Chip" Maguire's IBM/Columbia University Student Electronic Notebook, Doug Platt's Hip-PC, and Carnegie Mellon University's VuMan 1 in 1991. The Student Electronic Notebook consisted of the Private Eye, Toshiba diskless AIX notebook computers (prototypes), a stylus-based input system and a virtual keyboard. It used direct-sequence spread spectrum radio links to provide all the usual TCP/IP-based services, including NFS-mounted file systems and X11, which all ran in the Andrew Project environment. The Hip-PC included an Agenda palmtop used as a chording keyboard attached to the belt and a 1.44 megabyte floppy drive. Later versions incorporated additional equipment from Park Engineering. The system debuted at "The Lap and Palmtop Expo" on 16 April 1991. VuMan 1 was developed as part of a summer-term course at Carnegie Mellon's Engineering Design Research Center, and was intended for viewing house blueprints. Input was through a three-button unit worn on the belt, and output was through Reflection Tech's Private Eye. The CPU was an 8 MHz 80188 processor with 0.5 MB ROM. In the 1990s, PDAs became widely used, and in 1999 were combined with mobile phones in Japan to produce the first mass-market smartphone. 
In 1993, the Private Eye was used in Thad Starner's wearable, based on Doug Platt's system and built from a kit from Park Enterprises, a Private Eye display on loan from Devon Sean McCullough, and the Twiddler chording keyboard made by Handykey. Many iterations later, this system became the MIT "Tin Lizzy" wearable computer design, and Starner went on to become one of the founders of MIT's wearable computing project. 1993 also saw Columbia University's augmented-reality system known as KARMA (Knowledge-based Augmented Reality for Maintenance Assistance). Users would wear a Private Eye display over one eye, giving an overlay effect when the real world was viewed with both eyes open. KARMA would overlay wireframe schematics and maintenance instructions on top of whatever was being repaired. For example, graphical wireframes on top of a laser printer would explain how to change the paper tray. The system used sensors attached to objects in the physical world to determine their locations, and the entire system ran tethered from a desktop computer. In 1994, Edgar Matias and Mike Ruicci of the University of Toronto debuted a "wrist computer." Their system presented an alternative approach to the emerging head-up display plus chord keyboard wearable. The system was built from a modified HP 95LX palmtop computer and a Half-QWERTY one-handed keyboard. With the keyboard and display modules strapped to the operator's forearms, text could be entered by bringing the wrists together and typing. The same technology was used by IBM researchers to create the half-keyboard "belt computer". Also in 1994, Mik Lamming and Mike Flynn at Xerox EuroPARC demonstrated the Forget-Me-Not, a wearable device that would record interactions with people and devices and store this information in a database for later query. 
It interacted via wireless transmitters in rooms and with equipment in the area to remember who was there, who was being talked to on the telephone, and what objects were in the room, allowing queries like "Who came by my office while I was on the phone to Mark?". As with the Toronto system, Forget-Me-Not was not based on a head-mounted display. Also in 1994, DARPA started the Smart Modules Program to develop a modular, humionic approach to wearable and carryable computers, with the goal of producing a variety of products including computers, radios, navigation systems and human-computer interfaces that have both military and commercial use. In July 1996, DARPA went on to host the "Wearables in 2005" workshop, bringing together industrial, university, and military visionaries to work on the common theme of delivering computing to the individual. A follow-up conference was hosted by Boeing in August 1996, where plans were finalized to create a new academic conference on wearable computing. In October 1997, Carnegie Mellon University, MIT, and Georgia Tech co-hosted the IEEE International Symposium on Wearable Computers (ISWC) in Cambridge, Massachusetts. The symposium was a full academic conference with published proceedings and papers ranging from sensors and new hardware to new applications for wearable computers, with 382 people registered for the event. In 1998, the Microelectronic and Computer Technology Corporation created the Wearable Electronics consortial program for industrial companies in the U.S. to rapidly develop wearable computers. The program preceded the MCC Heterogeneous Component Integration Study, an investigation of the technology, infrastructure, and business challenges surrounding the continued development and integration of micro-electro-mechanical systems (MEMS) with other system components. In 1998, Steve Mann invented and built the world's first smartwatch. 
It was featured on the cover of Linux Journal in 2000, and demonstrated at ISSCC 2000. Dr. Bruce H. Thomas and Dr. Wayne Piekarski developed the Tinmith wearable computer system to support augmented reality. This work was first published internationally in 2000 at the ISWC conference. The work was carried out at the Wearable Computer Lab at the University of South Australia. In 2002, as part of Kevin Warwick's Project Cyborg, Warwick's wife, Irena, wore a necklace which was electronically linked to Warwick's nervous system via an implanted electrode array. The color of the necklace changed between red and blue depending on the signals in Warwick's nervous system. Also in 2002, Xybernaut released a wearable computer called the Xybernaut Poma Wearable PC, Poma for short. Poma stood for Personal Media Appliance. The project failed for several reasons, chief among them that the equipment was expensive and clunky. The user would wear a head-mounted optical piece, a CPU that could be clipped onto clothing, and a mini keyboard that was attached to the user's arm. GoPro released their first product, the GoPro HERO 35mm, which began a successful franchise of wearable cameras. The cameras can be worn atop the head or around the wrist and are shockproof and waterproof. GoPro cameras are used by many athletes and extreme sports enthusiasts, a trend that became very apparent during the early 2010s. In the late 2000s, various Chinese companies began producing mobile phones in the form of wristwatches, the descendants of which as of 2013 include the i5 and i6, which are GSM phones with 1.8-inch displays, and the ZGPAX s5 Android wristwatch phone. Standardization through the IEEE, IETF, and several industry groups (e.g. Bluetooth) led to more varied interfacing under the WPAN (wireless personal area network). It also led to the WBAN (wireless body area network), which offered a new classification of designs for interfacing and networking. 
The 6th-generation iPod Nano, released in September 2010, has a wristband attachment available to convert it into a wearable wristwatch computer. The development of wearable computing spread to encompass rehabilitation engineering, ambulatory intervention treatment, lifeguard systems, and defense wearable systems.[clarification needed] Sony produced a wristwatch called Sony SmartWatch that must be paired with an Android phone. Once paired, it becomes an additional remote display and notification tool. Fitbit released several wearable fitness trackers and the Fitbit Surge, a full smartwatch that is compatible with Android and iOS. On 11 April 2012, Pebble launched a Kickstarter campaign to raise $100,000 for their initial smartwatch model. The campaign ended on 18 May with $10,266,844, over 100 times the fundraising target. Pebble released several smartwatches, including the Pebble Time and the Pebble Round. Google launched its Glass optical head-mounted display (OHMD) to a test group of users in 2013, before it became available to the public on 15 May 2014. Google's mission was to produce a mass-market ubiquitous computer that displays information in a smartphone-like hands-free format that can interact with the Internet via natural language voice commands. Google Glass received criticism over privacy and safety concerns. On 15 January 2015, Google announced that it would stop producing the Google Glass prototype but would continue to develop the product. According to Google, Project Glass was ready to "graduate" from Google X, the experimental phase of the project. Thync, a headset launched in 2014, is a wearable that stimulates the brain with mild electrical pulses, causing the wearer to feel energized or calm based on input into a phone app. The device is attached to the temple and to the back of the neck with an adhesive strip. 
Macrotellect launched two portable brainwave (EEG) sensing devices, BrainLink Pro and BrainLink Lite, in 2014, which allow families and meditation students to improve mental fitness and relieve stress with 20+ brain fitness apps on the Apple and Android app stores. In January 2015, Intel announced the sub-miniature Intel Curie for wearable applications, based on its Intel Quark platform. As small as a button, it features a six-axis accelerometer, a DSP sensor hub, a Bluetooth LE unit, and a battery charge controller. It was scheduled to ship in the second half of the year. On 24 April 2015, Apple released their take on the smartwatch, known as the Apple Watch. The Apple Watch features a touchscreen, many applications, and a heart-rate sensor. The Apple Watch would later become the most popular wristwatch in the world. Some advanced VR headsets require the user to wear a desktop-sized computer as a backpack to enable them to move around freely. On June 5, 2023, Apple unveiled the Vision Pro, an AR headset with a computer built in that has a screen on the front, allowing others to see the wearer's face. Commercialization The commercialization of general-purpose wearable computers, as led by companies such as Xybernaut, CDI and ViA, Inc., has thus far met with limited success. Publicly traded Xybernaut tried forging alliances with companies such as IBM and Sony in order to make wearable computing widely available, and managed to get their equipment seen on such shows as The X-Files, but in 2005 their stock was delisted and the company filed for Chapter 11 bankruptcy protection amid financial scandal and federal investigation. Xybernaut emerged from bankruptcy protection in January 2007. ViA, Inc. filed for bankruptcy in 2001 and subsequently ceased operations. In 1998, Seiko marketed the Ruputer, a computer in a (fairly large) wristwatch, to mediocre returns. 
In 2001, IBM developed and publicly displayed two prototypes for a wristwatch computer running Linux. The last message about them dates to 2004, saying the device would cost about $250, but that it was still under development. In 2002, Fossil, Inc. announced the Fossil Wrist PDA, which ran the Palm OS. Its release date was set for the summer of 2003, but it was delayed several times and finally made available on 5 January 2005. Timex Datalink is another example of a practical wearable computer. Hitachi launched a wearable computer called Poma in 2002. Eurotech offers the ZYPAD, a wrist-wearable touch screen computer with GPS, Wi-Fi and Bluetooth connectivity, which can run a number of custom applications. In 2013, a wrist-worn wearable computing device to control body temperature was developed at MIT. Evidence of weak market acceptance was demonstrated when Panasonic Computer Solutions Company's product failed. Panasonic has specialized in mobile computing with their Toughbook line since 1996 and has extensive market research into the field of portable, wearable computing products. In 2002, Panasonic introduced a wearable brick computer coupled with a handheld or a touchscreen worn on the arm. The "brick" computer was the CF-07 Toughbook. It had dual batteries (the screen used the same batteries as the base), 800 × 600 resolution, optional GPS and WWAN, and one M-PCI slot and one PCMCIA slot for expansion. The CPU was a 600 MHz Pentium III, factory-underclocked to 300 MHz so that it could stay cool passively, as it had no fan. The Micro-DIMM RAM was upgradeable. The screen could be used wirelessly on other computers. The brick would communicate wirelessly to the screen, and concurrently the brick would communicate wirelessly out to the internet or other networks. The wearable brick was quietly pulled from the market in 2005, while the screen evolved to a thin client touchscreen used with a handstrap. 
Google has announced that it has been working on a head-mounted display-based wearable "augmented reality" device called Google Glass. An early version of the device was available to the US public from April 2013 until January 2015. Despite ending sales of the device through their Explorer Program, Google has stated that they plan to continue developing the technology. LG and iriver produce earbud wearables measuring heart rate and other biometrics, as well as various activity metrics. Commercialization has met with greater success for devices with designated purposes than for all-purpose designs. One example is the WSS1000. The WSS1000 is a wearable computer designed to make the work of inventory employees easier and more efficient. The device allows workers to scan the barcode of items and immediately enter the information into the company system. This removed the need to carry a clipboard, eliminated the errors and confusion of handwritten notes, and allowed workers the freedom of both hands while working; the system improves accuracy as well as efficiency. Popular culture Many technologies for wearable computers derive their ideas from science fiction. There are many examples of ideas from popular movies that have become technologies or are technologies currently being developed. Wearable computers have advanced through continuous technological change. Wearable technologies are increasingly used in healthcare. For instance, portable sensors used as medical devices help patients with diabetes keep track of exercise-related data. A number of people think of wearable technology as a new trend;[citation needed] however, companies have been trying to develop or design wearable technologies for decades. The spotlight has more recently been focused on new types of technology that prioritize boosting efficiency in the wearer's daily life. 
Wearable technology comes with many challenges, such as data security, trust issues, and regulatory and ethical issues. Since 2010, wearable technologies have been seen mostly as fitness-focused, though they have the potential to improve operations in healthcare and many other professions. With the increase in wearable devices, privacy and security issues are very important, especially for health devices. Also, the FDA considers wearable devices to be "general wellness products". In the US, wearable devices are not covered by any specific federal laws, although Protected Health Information (PHI) is subject to regulation handled by the Office for Civil Rights (OCR). Devices with sensors can create security issues, and companies have to be alert to protect users' data. One cybersecurity issue with these devices is that regulations in the US are not strict.[citation needed] The National Institute of Standards and Technology (NIST) has developed the NIST Cybersecurity Framework, which provides guidelines for improving cybersecurity, although adherence to the framework is voluntary. Consequently, the lack of specific regulations for wearable devices, specifically medical devices, increases the risk of threats and other vulnerabilities. For instance, Google Glass raised major privacy risks with wearable computer technology; Congress investigated the privacy risks related to consumers using Google Glass and how they[clarification needed] use the data.[citation needed] The product can be used to track not only the users of the product but also others around them, often without their being aware. Furthermore, all the data captured with Google Glass was stored on Google's cloud servers, giving the company access to the data. 
The device also raised questions regarding women's safety, as it allowed stalkers or harassers to take intrusive pictures of women's bodies by wearing the Glass without any fear of getting caught. Wearable technologies like smart glasses can also raise cultural and social issues. While wearable technologies can enhance convenience, some devices, such as Bluetooth headphones, may contribute to increased reliance on technology over interpersonal interactions. Society considers these technologies luxury accessories, and there may be peer pressure within a group to own similar products. These products raise challenges of social and moral discipline. For instance, wearing a smartwatch can be a way to fit in with standards in male-dominated fields, where femininity may be perceived as unprofessional. Although demand for this technology is increasing, one of the biggest challenges is the price. For example, as of March 2023, the price of an Apple Watch ranges from $249 to $1,749, which for a normal consumer can be prohibitively expensive. Augmented reality enables a new generation of displays. As opposed to virtual reality, the user does not exist in a virtual world; instead, information is superimposed on the real world. Some of these displays are easily portable, such as the Vufine+; others are quite bulky, like the HoloLens 2. Some headsets are autonomous, such as the Oculus Quest 2; others are less a computer than a terminal module. Single-board computers (SBCs) are improving in performance and becoming cheaper. Some boards are cheap, such as the Raspberry Pi Zero and Pi 4, while others are more expensive but closer to a normal PC, like the Hackboard and LattePanda. One main domain of future research could be the method of control. Today computers are commonly controlled through the keyboard and the mouse, which could change in the future. 
For example, the words-per-minute rate on a keyboard could be improved with the BÉPO layout. Ergonomics could also change the results, with split keyboards and minimalist keyboards (which use one key for more than one letter or symbol). At the extreme are stenotype systems such as Plover, which use very few keys, pressing more than one at the same time per letter or syllable. Furthermore, the pointer could be improved from a basic mouse to an accelerator pointer. The system of gesture controls is evolving from image capture (the Leap Motion camera) to integrated capture (for example, the prototype AI data glove from Zack Freedman). For some, the main idea could be to build computers integrated with an AR system, controlled with ergonomic controllers. This would make a universal machine as portable as a mobile phone and as capable as a computer. Military use The wearable computer was introduced to the US Army in 1989 as a small computer that was meant to assist soldiers in battle. Since then, the concept has grown to include the Land Warrior program and proposals for future systems. The most extensive military program in the wearables arena is the US Army's Land Warrior system, which will eventually be merged into the Future Force Warrior system. There is also research into increasing the reliability of terrestrial navigation. F-INSAS is an Indian military project designed largely around wearable computing. The goal of F-INSAS is to equip soldiers with state-of-the-art technologies that improve their combat effectiveness, including wearable computers to aid in communication, navigation, and situational awareness. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Scale_AI] | [TOKENS: 1671] |
Contents Scale AI Scale AI, Inc. is an American data annotation company based in San Francisco, California. It provides data labeling, model evaluation, and software to develop applications for artificial intelligence. The company’s research arm, the Safety, Evaluation and Alignment Lab, focuses on evaluating and aligning large language models (LLMs), including through initiatives such as Humanity's Last Exam, a benchmark designed to assess advanced AI systems on alignment, reasoning, and safety. Scale AI outsources data labeling through its subsidiaries, Remotasks, which focuses on computer vision and autonomous vehicles, and Outlier, which focuses on data annotation for LLMs. Scale AI's customers in the commercial sector have included Google, Microsoft, Meta, General Motors, OpenAI, and Time. The company also directly works with world governments, including the United States on multiple military-related projects, and with Qatar to improve the efficiency of its social programs. History Scale was founded in 2016 by Alexandr Wang and Lucy Guo through Y Combinator. The pair previously worked together at Quora. Initial investors of Scale included Dragoneer Investment Group, Tiger Global Management and Index Ventures. Lucy Guo was fired two years later in 2018. In August 2019, after Peter Thiel’s Founders Fund made a $100 million investment in Scale, its valuation exceeded $1 billion and it acquired unicorn status. Scale contracted with the United States Department of Defense in 2020. In May 2021, Michael Kratsios, Chief Technology Officer of the United States under the Trump administration, joined as Scale AI's managing director and head of strategy. By July 2021, Scale had reached a valuation of $7 billion, after a financing led by Greenoaks, Dragoneer Investment Group and Tiger Global Management. There was an increased demand for data labelling from clients in different industries. 
In January 2022, Scale AI won a contract worth $250 million to give American federal agencies access to its suite of tools. In February 2022, Scale AI developed its Automated Damage Identification Service in response to the Russian invasion of Ukraine. Satellite imagery was analyzed to measure damage to buildings, which was then geotagged and reported to humanitarian groups. In November 2022, Scale AI was recognized by Time on its Best Inventions of 2022 list. The company also opened an office in St. Louis in that same year. In January 2023, Scale laid off 20% of its workforce. In May 2023, Scale AI signed a deal with the US Army’s XVIII Airborne Corps, becoming the first AI company to deploy its LLM (known as Donovan) on a classified network. In August 2023, Scale AI partnered with OpenAI, becoming the company’s "preferred partner" to fine-tune GPT-3.5. The company's services were used in the initial creation of ChatGPT. In that same month, Scale AI’s evaluation platform was used at DEF CON, a hacking convention, at its first generative AI red team event, testing models provided by various companies. In December 2023, Scale AI was among a list of companies that contributed to Meta Platforms’s Purple Llama initiative, a security framework for the development of open generative AI models. In February 2024, Scale AI was selected by the Department of Defense to test and evaluate its LLMs for military purposes under a one-year contract. In March 2024, Scale reached a valuation of almost $13 billion after Accel led another round of funding. In May 2024, Scale raised an additional $1 billion with new investors including Amazon and Meta Platforms. Its valuation reached $14 billion. In August 2024, Scale signed an agreement with the US AI Safety Institute, collaborating with the agency on research, testing, and evaluation of the company’s AI models. 
The US AI Safety Institute is controlled by the Department of Commerce’s National Institute of Standards and Technology. In December 2024, Scale was sued by a former employee, alleging that the company was committing wage theft and misclassifying workers. The following month, a second employee filed a similar suit. In January 2025, several contractors sued Scale alleging psychological harm from being exposed to disturbing content. In January 2025, it was reported in The Conversation that Scale AI and Meta had previously teamed up to create and sell Defense Llama, an LLM product with military-style defense purposes. The company also took out a full-page ad in The Washington Post, appealing to American President Donald Trump to "win the AI war". Later in the month, Scale AI and the Center for AI Safety partnered to release Humanity's Last Exam, a benchmark test for AI systems. The company has also assisted in the development of the benchmarks EnigmaEval, MultiChallenge, and MASK. In February 2025, Scale AI agreed to a five-year partnership with the Qatari government to improve government services via AI-based tools and training, including predictive analytics, automation, and advanced data analytics. The deal was signed at the Web Qatar 2025 Summit by Mohammed bin Ali bin Mohammed Al Mannai, the Qatari Minister of Communications and Information Technology. Also in February, the company became a third-party evaluator of AI models for the U.S. AI Safety Institute. In March 2025, Scale AI reached a deal with the United States Department of Defense to develop the Thunderforge project. The project aims to use AI to “plan and help execute movements of ships, planes, and other assets”, with the goal of speeding up military decisions in both peace and wartime. The contract was awarded to Scale AI and other companies (such as Anduril Industries and Microsoft) by the Defense Innovation Unit, and is intended to first be used with the USINDOPACOM and EUCOM. 
In April 2025, Scale AI released Scale Evaluation, a platform used to test LLMs against benchmarks to pinpoint weaknesses and flag where additional training data would improve the model. On June 10, 2025, it was reported that Meta Platforms had agreed to purchase a 49% stake in Scale AI for $14.8 billion, with the goal of accessing specialized datasets to improve Llama, a group of LLMs. The company will remain a standalone entity, independent from Meta. Former CEO Alexandr Wang took a top position inside Meta as a part of the deal and was replaced by the company's chief strategy officer and former Uber executive, Jason Droege. Additionally, Google (which was Scale AI's largest customer) stated its intentions to cut ties with the company as a result of the deal. Remotasks In 2017, Scale AI established Remotasks, a crowdworking platform to support the creation of labeled data for machine learning, particularly in areas such as computer vision and autonomous vehicles. The subsidiary has facilities in Southeast Asia and Africa. In 2019, Scale AI set up a company called Smart Ecosystem Philippines to operate Remotasks within the country. In the Philippines, many of Remotasks' hires are freelance contractors not covered under labor laws. The pay for some annotation tasks dropped to less than one cent due to "vicious competition" after Remotasks expanded to India as well as Venezuela. Late payments are reportedly "commonplace", and some workers received only a few percent of their promised compensation. In 2022, an Oxford Internet Institute study said Remotasks met the "minimum standards of fair work" in only one out of ten criteria. Remotasks has been criticized for obscuring its affiliation with Scale AI, opaque communications, and abrupt changes in worker access in some regions. In early 2024, the platform terminated operations in several countries, including Kenya, Nigeria, and Pakistan, citing administrative and operational considerations. 
Outlier Outlier is a separate contributor platform operated by Scale AI, designed for generative AI data work, particularly in the development and fine-tuning of LLMs. Contributors on Outlier typically include professionals with advanced degrees, industry expertise, and native fluency in various languages. Outlier tasks involve content evaluation and reinforcement learning from human feedback (RLHF). References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Tawny_(color)] | [TOKENS: 876] |
Contents Tawny (colour) Tawny (also called tenné) is a light brown to brownish-orange colour. Etymology The word means "tan-colored", from Anglo-Norman tauné "associated with the brownish-yellow of tanned leather", from Old French tané "to tan hides", from Medieval Latin tannare, from tannum "crushed oak bark", used in tanning leather, probably from a Celtic source (e.g. Breton tann, "oak tree"). Electronic definitions of tawny A digitised version of the 1912 book Color Standards And Color Nomenclature lists tawny as AE6938, tawny-olive as 826644 or 967117, ochraceous-tawny as BE8A3D or 996515, and vinaceous-tawny as B4745E. HP Labs' Online Color Thesaurus, which lists colours found through their Color Naming Experiment, gives tawny as CC7F3B, noting it is "rarely used", and lists its synonyms as: light chocolate, caramel, light brown, and camel. Dictionary of Color lists tawny as AE6938 or A67B5B, and tawny birch as A87C6D, A67B5B or 958070. It also lists "lion tawny" (which it also refers to as just "lion") as C19A6B or 826644. Orange tawny is listed as CB6D51. Resene RGB Values List includes "Resene Tawny Port" as 105, 37, 69 (#692545), while Resene-2007-rgb lists tawny port as 100, 58, 72 (#643A48). While tan is defined since HTML4 and elsewhere, the colour names tawny, tenné and fulvous do not appear in the standard web colours used by HTML, CSS, and SVG. Most standard X11 colour name files also do not have these names. However, many colour lists include "Tenné (Tawny)" as #CD5700. The proprietary Pantone TC colour system includes Tawny Olive, Tawny Birch, Tawny Brown, Tawny Orange, and Tawny Port. It also has several shades of tan: Apricot Tan, Copper Tan, Rose Tan, Tan, Pastel Rose Tan, and Indian Tan. The colour burnt orange, having the hex number CC5500, is sometimes considered to be a close approximation to tawny. The colour tan may also be considered synonymous with tawny, or a different shade: #D2B48C. 
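The hex codes in these listings simply pack the three 8-bit RGB channels as pairs of hexadecimal digits. As a minimal sketch in plain Python (no external libraries), they can be decoded like this:

```python
# Decode a colour code such as "#CD5700" (tawny/tenné) into its
# 8-bit red, green, and blue components.
def hex_to_rgb(code: str) -> tuple[int, int, int]:
    code = code.lstrip("#")
    r, g, b = (int(code[i:i + 2], 16) for i in range(0, 6, 2))
    return (r, g, b)

print(hex_to_rgb("#CD5700"))  # → (205, 87, 0)
print(hex_to_rgb("#AE6938"))  # → (174, 105, 56)
```

Comparing the decoded channels makes the differences between sources concrete: the common #CD5700 value has no blue component at all, while Ridgway's #AE6938 is darker and contains some blue.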
Fulvous, meaning tawny-coloured, may also be considered synonymous or its own shade.
Tawny: #CD5700
Tawny (Dictionary of Color; Ridgway): #AE6938
Tawny (Dictionary of Color): #A67B5B
Tawny (Online Color Thesaurus): #CC7F3B
Related colours
Tan: #D2B48C
Fulvous: #E48400
Burnt orange: #CC5500
Synonyms Colours listed as synonyms by HP Labs' Online Color Thesaurus:
Light chocolate: #CC7F33
Caramel: #BC7A3D
Light brown: #B5651D
Camel: #B77F4C
Variations
Tawny-olive (Ridgway): #826644
Tawny-olive (Ridgway): #967117
Ochraceous-tawny (Ridgway): #BE8A3D
Ochraceous-tawny (Ridgway): #996515
Vinaceous-tawny (Ridgway): #B4745E
Tawny birch (Dictionary of Color): #A87C6D
Tawny birch (Dictionary of Color): #A67B5B
Tawny birch (Dictionary of Color): #958070
Lion tawny (Dictionary of Color): #C19A6B
Lion tawny (Dictionary of Color): #826644
Orange tawny (Dictionary of Color): #CB6D51
Resene Tawny Port: #692545
Tawny port (Resene-2007): #643A48
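The article gives some of these colours in two notations, decimal RGB triples and six-digit hex codes (e.g. Resene Tawny Port as 105, 37, 69 and #692545). A small Python sketch can check that the two notations agree; the function name `hex_to_rgb` is illustrative, not part of any source cited above.

```python
def hex_to_rgb(hex_code: str) -> tuple:
    """Convert a hex colour code like '#692545' to an (R, G, B) tuple."""
    h = hex_code.lstrip('#')
    # Each pair of hex digits is one 8-bit channel: RR, GG, BB.
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

# A few tawny shades from the lists above.
tawny_shades = {
    "Tenné (Tawny)": "#CD5700",
    "Tawny (Ridgway)": "#AE6938",
    "Tawny (Online Color Thesaurus)": "#CC7F3B",
    "Resene Tawny Port": "#692545",
}

for name, code in tawny_shades.items():
    print(f"{name}: {code} -> RGB {hex_to_rgb(code)}")

# Resene Tawny Port: #692545 -> RGB (105, 37, 69), matching the decimal
# values the Resene RGB Values List gives for this colour.
```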
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Nate_Garrelts] | [TOKENS: 241] |
Contents Nate Garrelts Nate Garrelts is an American academic who studies digital games and other media. He has edited four collections of essays on digital games: Digital Gameplay (McFarland, 2005), The Meaning and Culture of Grand Theft Auto (McFarland, 2006), Understanding Minecraft (McFarland, 2014), and Responding to Call of Duty (McFarland, 2017). The Meaning and Culture of Grand Theft Auto was the first academic collection to focus on a single game series. He has also contributed essays to the websites Bad Subjects and Berfrois. In 2003, he founded the Video Game Studies area at the Popular Culture Association/American Culture Association National Conference in New Orleans and continued to coordinate it until 2007. This area, which has since been renamed Game Studies, is one of the longest continually run game studies events in the United States.
Biography Garrelts received his PhD in American Studies from Michigan State University (2003). His dissertation was titled The Official Strategy Guide for Video Game Studies: A Grammar and Rhetoric. He is currently Professor of English at Ferris State University.
======================================== |