[SOURCE: https://en.wikipedia.org/wiki/Internet]
The Internet (or internet) is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP) to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend their brick-and-mortar presence to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries.

The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF).

Terminology

The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common, reflecting the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases.
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs.

History

In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching, one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource-sharing network proposed by ARPA.

ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing.

ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675, and the Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks.

The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet protocol suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s.
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public.

In mid-1989, MCI Mail and CompuServe established connections to the Internet, delivering email and public access products to the Internet's half million users. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use, one of the networks that would form the core of the commercial Internet in later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members, in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic.

As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser lightwave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber-optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services.
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30% of the world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.

Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop usage worldwide for the first time in October 2016. As of 2018, 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individuals regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost.

Social impact

The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers, with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion, or 44 percent of the world population, but two-thirds of them came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, the China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, and the United States third with 275 million users. However, in terms of penetration, in 2022 China had a 70% penetration rate, compared to India's 60% and the United States's 90%.
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population.

Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation.

The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly. Educational material at all levels, from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar), is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in the reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population.

Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to the examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer.

Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions of videos daily and upload hundreds of thousands. Other video sharing websites include Vimeo, Instagram and TikTok.

Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services.

Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP, so that work may be performed from any location, such as the worker's home. The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to form easily, communicate cheaply, and share ideas.
A notable example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice). Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.

The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online, such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, children may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites. Social networking services for younger children, which claim to provide better levels of protection for children, also exist.

Internet usage has been correlated to users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread. Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity.

Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, worldwide e-commerce, combining global business-to-business and business-to-consumer transactions, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people.

Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation.

The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies.

E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens.

Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.

Applications and services

The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP to allow software systems to communicate with one another in order to transfer, share and exchange business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations.

Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP). VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances.

File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by a digital signature.
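As a small illustration of the file-distribution pattern just described, the sketch below fetches a file over HTTP(S) and checks its integrity against a digest published by the distributor; a full digital signature check would additionally prove the file's origin. This is a minimal sketch using only Python's standard library, and the URL and digest are placeholders rather than real values.

```python
# Minimal sketch: download a file over HTTP(S) and verify its integrity.
# The URL and expected digest are placeholders, not real values; a digital
# signature check would additionally authenticate the file's origin.
import hashlib
import urllib.request

url = "https://example.org/some-file.tar.gz"      # hypothetical mirror URL
expected_sha256 = "<digest published by the distributor>"

with urllib.request.urlopen(url) as response:
    data = response.read()                         # the file's raw bytes

digest = hashlib.sha256(data).hexdigest()          # recompute the checksum
print("OK" if digest == expected_sha256 else "MISMATCH")
```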
Governance

The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.

While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting working groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies.

To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region.

The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.

Infrastructure

The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se.
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.

Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy. An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.

Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology. Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet.

Internet Protocol Suite

The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123. The most prominent component of the Internet model is the Internet Protocol (IP). IP enables internetworking and, in essence, establishes the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements) and by technical specifications or protocols that describe the exchange of data over the network. For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations.
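Since both IP versions coexist, a single name lookup today can return addresses of either kind. The following minimal sketch uses only Python's standard library to print every address the resolver returns, along with its IP version; "example.org" is a placeholder host.

```python
# Sketch: a single host name may resolve to both IPv4 and IPv6 addresses.
# Uses only the Python standard library; "example.org" is a placeholder host.
import socket

for family, _type, _proto, _canonname, sockaddr in socket.getaddrinfo(
        "example.org", 80, type=socket.SOCK_STREAM):
    version = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(version, sockaddr[0])   # first element of sockaddr is the address
```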
IP addresses consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol (DHCP) or configured manually. The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.

Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently growing around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol. Network infrastructure, however, has lagged in this development.

A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface. The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix. For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24. Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
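The CIDR arithmetic above is easy to check mechanically. The sketch below uses Python's standard ipaddress module to reproduce the 198.51.100.0/24 example and adds a toy longest-prefix-match lookup of the kind routing tables perform; the three-entry table is purely illustrative, not a real router configuration.

```python
# Sketch of CIDR arithmetic and longest-prefix-match routing, using the
# documentation network 198.51.100.0/24 from the text. The tiny "routing
# table" is illustrative, not a real router configuration.
import ipaddress

net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)          # 255.255.255.0, the subnet mask
print(net.num_addresses)    # 256: hosts .0 through .255

addr = ipaddress.ip_address("198.51.100.14")
# The netmask ANDed with any address in the network yields the routing prefix:
print(int(addr) & int(net.netmask) == int(net.network_address))   # True

# Routers choose the most specific (longest) matching prefix; 0.0.0.0/0 is
# the default route, used when nothing more specific matches.
table = [ipaddress.ip_network(p)
         for p in ("198.51.100.0/24", "198.51.0.0/16", "0.0.0.0/0")]

def lookup(destination):
    dest = ipaddress.ip_address(destination)
    matches = [n for n in table if dest in n]
    return max(matches, key=lambda n: n.prefixlen)   # longest prefix wins

print(lookup("198.51.100.7"))   # 198.51.100.0/24 (most specific match)
print(lookup("203.0.113.9"))    # 0.0.0.0/0 (falls through to default route)
```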
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet. The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet.

Security

Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of cyber warfare being waged with similar methods on a large scale.

Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms.

The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic. The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, access to certain types of web sites, or communication via email or chat with certain parties. Agencies such as the Information Awareness Office, NSA, GCHQ and the FBI spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by the Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by Germany's Siemens AG and Finland's Nokia.

Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.

Performance

As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization.

[Figure: Global Internet traffic volume, 1990–2015, in petabytes per month]

The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.

An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.

Estimates of the Internet's electricity usage have been the subject of controversy; a 2014 peer-reviewed research paper found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide, as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Falcon_1]
Falcon 1 was a two-stage small-lift launch vehicle that was operated from 2006 to 2009 by SpaceX, an American aerospace manufacturer. On September 28, 2008, Falcon 1 became the first privately developed fully liquid-fueled launch vehicle to successfully reach orbit. The Falcon 1 used LOX/RP-1 for both stages: the first stage was powered by a single pump-fed Merlin engine, and the second stage by SpaceX's pressure-fed Kestrel vacuum engine. The vehicle was launched a total of five times. After three failed launch attempts, Falcon 1 achieved orbit on its fourth attempt in September 2008 with a mass simulator as a payload. On July 14, 2009, Falcon 1 made its second successful flight, delivering the Malaysian RazakSAT satellite to orbit on SpaceX's first commercial launch (fifth and final launch overall). While SpaceX had announced an enhanced variant, the Falcon 1e, following this flight, the Falcon 1 was retired in favor of the Falcon 9 v1.0, the first version of the company's successful and long-running Falcon 9 launch vehicle.

History

The Falcon 1 rocket was developed with private funding. The only other orbital launch vehicles to be privately funded and developed were the Conestoga in 1982 and Pegasus, first launched in 1990, which uses a large aircraft as its launch platform. The total development cost of Falcon 1 was approximately US$90 million to US$100 million. While the development of Falcon 1 was privately funded, the first two Falcon 1 launches were purchased by the United States Department of Defense under a program that evaluates new US launch vehicles suitable for use by DARPA. As part of a US$15 million contract, Falcon 1 was to carry the TacSat-1 in 2005. By late May 2005, SpaceX stated that Falcon 1 was ready to launch TacSat-1 from Vandenberg, but the Air Force did not want the launch of an untested rocket to occur until the final Titan IV flew from nearby SLC 4E. Subsequent and repeated delays due to Falcon 1 launch failures further postponed TacSat-1's launch. After TacSat-2 was launched on an Orbital Sciences Minotaur I on December 16, 2006, the Department of Defense re-evaluated the need for launching TacSat-1, and in August 2007 it canceled the planned launch because all of the TacSat objectives had been met. An August 2005 update on SpaceX's website showed six launches planned for Falcon 1, with customers including MDA Corp (CASSIOPE, which eventually launched in 2013 on a Falcon 9), the Swedish Space Corporation and the US Air Force.

Design

According to SpaceX, the Falcon 1 was designed to minimize price per launch for low-Earth-orbit satellites, increase reliability, and optimize flight environment and time to launch. It was also used to verify components and structural design concepts that would be reused in the Falcon 9. SpaceX started with the idea that the smallest useful orbital rocket was the minimum viable product (Falcon 1 with about 450 kg or 990 lb to orbit), instead of building something larger and more complicated, and then running out of money and going bankrupt.

The first stage was made from friction-stir-welded 2219 aluminum alloy. It employed a common bulkhead between the LOX and RP-1 tanks, as well as flight pressure stabilization. It could be transported safely without pressurization (like the heavier Delta II isogrid design) but gained additional strength when pressurized for flight (like the Atlas II, which could not be transported unpressurized).
The parachute system, built by Irvin Parachute Corporation, used a high-speed drogue chute and a main chute. For the first two launches, the Falcon 1 used a Merlin 1A engine. An improved version of the Merlin 1A, the Merlin 1B, was supposed to fly on later flights of the Falcon 1, but it was further improved to create the Merlin 1C, which first flew on the third Falcon 1 flight and on the first five flights of the Falcon 9. The Falcon 1 first stage was powered by a single pump-fed Merlin 1C engine burning RP-1 and liquid oxygen, providing 410 kilonewtons (92,000 lbf) of sea-level thrust and a specific impulse of 245 s (vacuum Isp 290 s). The first stage burned to depletion, taking around 169 seconds to do so.

The second-stage Falcon 1 tanks were built with a cryogenic-compatible 2014 aluminum alloy, with the plan to move to aluminum-lithium alloy on the Falcon 1e. The helium pressurization system pumps propellant to the engine, supplies heated pressurized gas for the attitude control thrusters, and is used for zero-g propellant accumulation prior to engine restart. The Kestrel engine includes a titanium heat exchanger to pass waste heat to the helium, thereby greatly extending its work capacity. The pressure tanks are composite overwrapped pressure vessels made by Arde Corporation with Inconel alloy and are the same as those used in the Delta III. The second stage was powered by a pressure-fed Kestrel engine with 31 kilonewtons (7,000 lbf) of vacuum thrust and a vacuum specific impulse of 330 s. The first stage was originally planned to return by parachute to a water landing and be recovered for reuse, but this capability was never demonstrated. The second stage was not designed to be reusable.

At launch, the first-stage engine (Merlin) is ignited and throttled to full power while the launcher is restrained and all systems are verified by the flight computer. If the systems are operating correctly, the rocket is released and clears the tower in about seven seconds. The first-stage burn lasts about 2 minutes and 49 seconds. Stage separation is accomplished with explosive bolts and a pneumatically actuated pusher system. The second-stage Kestrel engine burns for about six minutes, inserting the payload into a low Earth orbit; it is capable of multiple restarts.

Pricing

SpaceX quoted Falcon 1 launch prices as being the same for all customers. In 2005 Falcon 1 was advertised as costing $5.9 million ($9.5 million when adjusted for inflation in 2025). In 2006 and 2007 the quoted price of the rocket when operational was $6.7 million. In late 2009 SpaceX announced new prices for the Falcon 1 and 1e at $7 million and $8.5 million respectively, with small discounts available for multi-launch contracts, and in 2012 announced that payloads originally slated to fly on the Falcon 1 and 1e would fly as secondary payloads on the Falcon 9. The Falcon 1 was originally planned to launch about 600 kilograms (1,300 lb) to low Earth orbit for US$6,000,000, but payload capacity later declined to approximately 420 kilograms (930 lb) as the price increased to approximately US$9,000,000. It was SpaceX's offering intended to open up the smallsat launch market to competition. The final version of the Falcon 1, the Falcon 1e, was projected to provide approximately 1,000 kg (2,200 lb) for US$11 million.
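The quoted prices and payloads imply a rough cost per kilogram to low Earth orbit. The sketch below is a back-of-the-envelope calculation derived only from the figures above, not an official SpaceX metric.

```python
# Back-of-the-envelope cost per kilogram to LEO, computed from the advertised
# prices and payload figures quoted above; illustrative only.
quotes = {
    "Falcon 1 (original plan)": (6_000_000, 600),
    "Falcon 1 (as repriced)":   (9_000_000, 420),
    "Falcon 1e (projected)":    (11_000_000, 1000),
}
for vehicle, (price_usd, payload_kg) in quotes.items():
    print(f"{vehicle}: ${price_usd / payload_kg:,.0f} per kg")
# Falcon 1 (original plan): $10,000 per kg
# Falcon 1 (as repriced):   $21,429 per kg
# Falcon 1e (projected):    $11,000 per kg
```

The price increase thus roughly doubled the effective cost per kilogram, which the planned Falcon 1e would have brought back near the original target.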
SpaceX eventually withdrew the Falcon 1 and 1e from the market, citing limited demand.

Launch sites

All flights were launched from Kwajalein Atoll, using the SpaceX launch facility on Omelek Island and range facilities of the Reagan Test Site. Vandenberg AFB Space Launch Complex 3W was the original launch site for Falcon 1, but it was abandoned at the test-fire stage due to persistent schedule conflicts with adjacent launch pads. Cape Canaveral Air Force Station Space Launch Complex 40 (the Falcon 9 pad) was considered for Falcon 1 launches but never developed before Falcon 1 was retired.

Launches

Falcon 1 made five launches. The first three failed; however, the subsequent two flights were successful, the first of which made it the first privately funded and developed liquid-propellant rocket to reach orbit. The fifth launch was its first commercial flight, and placed RazakSAT into low Earth orbit.

The maiden flight of the Falcon 1 was postponed several times because of various technical issues with the new vehicle. Scheduling conflicts with a Titan IV launch at Vandenberg AFB also caused delays and resulted in the launch moving to the Reagan Test Site in the Kwajalein Atoll. The maiden launch was scheduled for October 31, 2005, but was held off, then rescheduled for November 25, which also did not occur. Another attempt was made on December 19, 2005, but was scrubbed when a faulty valve caused a vacuum in the first-stage fuel tank, causing the walls of the tank to buckle inward and resulting in structural damage. After the first stage was replaced, Falcon 1 launched on Saturday, March 25, 2006, at 09:30 local time (March 24, 22:30 UTC) from the SpaceX launch site on Omelek Island in the Marshall Islands. The DARPA payload was the United States Air Force Academy's FalconSAT-2, which would have measured space plasma phenomena. The flight ended in failure less than a minute after liftoff because of a fuel-line leak and subsequent fire. The vehicle had a noticeable rolling motion after liftoff, as shown on the launch video, rocking back and forth a bit, and then at T+26 seconds it rapidly pitched over. Impact occurred at T+41 seconds on a dead reef about 250 feet from the launch site. The FalconSAT-2 payload separated from the booster and landed on the island, with damage reports varying from slight to significant. SpaceX initially attributed the fire to an improperly tightened fuel-line nut. A later review by DARPA found that the nut was properly tightened, since its locking wire was still in place, but had failed because of corrosion from saltwater spray. SpaceX implemented numerous changes to the rocket design and software to prevent this type of failure from recurring, including replacing aluminum hardware with stainless steel and increasing the number of pre-liftoff computer checks by a factor of thirty.

The second test flight was originally scheduled for January 2007, but was delayed because of problems with the second stage. Before the January launch date, SpaceX had stated earlier potential launch dates, moving from September 2006 to November and December.
In December the launch was rescheduled for March 9, then delayed by range-availability issues caused by a Minuteman III test flight that would re-enter over Kwajalein. The launch attempt on March 19 was delayed 45 minutes from 23:00 GMT because of a data-relay issue, then scrubbed 1 minute 2 seconds before launch at 23:45 because of a computer issue: the safety computer incorrectly flagged a transmission failure that was in fact a hardware delay of a few milliseconds. The March 20 attempt was delayed 65 minutes from the planned 23:00 because of a communications problem between one of the NASA experiments in the payload and the TDRS system. The first launch attempt on March 21, 2007, was aborted at 00:05 GMT at the last second, after the engine had already ignited; SpaceX nevertheless decided to attempt another launch the same day. The rocket successfully left the launch pad at 01:10 GMT on March 21, 2007, with a DemoSat payload for DARPA and NASA. The rocket performed well during the first-stage burn. During staging, however, the interstage fairing at the top of the first stage bumped the second-stage engine bell. The bump occurred as the second-stage nozzle exited the interstage: the first stage was rotating much faster than expected (about 2.5°/s against an expected maximum of 0.5°/s) and made contact with the niobium nozzle of the second stage. Elon Musk reported that the bump did not appear to have caused damage, and that a niobium skirt had been chosen over carbon–carbon precisely so that such contact would not cause serious damage. Shortly after second-stage ignition, a stabilization ring detached from the engine bell as designed. At around T+4:20, a circular coning oscillation began and grew in amplitude until video was lost. At T+5:01 the vehicle began to roll, and telemetry ended. According to Musk, the second-stage engine shut down at T+7:30 because of a roll-control issue. Propellant slosh in the LOX tank amplified the oscillation. The slosh would normally have been damped by the second stage's thrust vector control (TVC) system, but the bump to the nozzle during separation caused the control corrections to overcompensate. The rocket continued flying to within one minute of its planned burn duration and managed to deploy the satellite mass-simulator ring. Although the webcast video ended prematurely, SpaceX was able to retrieve telemetry for the entire flight. The status of the first stage is unknown; it was not recovered because of a nonfunctioning GPS tracking device. The rocket reached a final altitude of 289 km (180 mi) and a final velocity of 5.1 km/s, against the 7.5 km/s needed for orbit. SpaceX characterized the test flight as a success, having flight-proven over 95% of Falcon 1's systems; its primary objectives for the launch had been to test responsive launch procedures and gather data. The SpaceX team planned a diagnosis and a solution, both vetted by third-party experts, believing that the slosh issue could be corrected by adding baffles to the second-stage LOX tank and adjusting the control logic. The Merlin shutdown transient was also to be addressed by initiating shutdown at a much lower thrust level, albeit at some risk to engine reusability. SpaceX wanted the problem fixed to avoid a recurrence as the Falcon 1 moved into its operational phase.
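The slosh-and-damping failure mode described above can be illustrated with a toy model: treat the LOX slosh mode as a second-order oscillator whose net damping ratio combines the tank's intrinsic damping (which baffles raise) with the control loop's contribution (which goes negative if the TVC overcompensates). This sketch is purely illustrative; the frequency and damping values are assumptions, not Falcon 1 data.

```python
import math

def slosh_envelope(zeta: float, freq_hz: float, t: float) -> float:
    """Amplitude envelope exp(-zeta * omega_n * t) of a lightly damped
    second-order mode; a negative net zeta means the motion grows."""
    omega_n = 2 * math.pi * freq_hz
    return math.exp(-zeta * omega_n * t)

FREQ_HZ = 0.8  # assumed slosh frequency, for illustration only

cases = [
    ("over-compensating TVC (net zeta = -0.01)", -0.01),
    ("baffles plus retuned control (net zeta = +0.03)", +0.03),
]
for label, zeta in cases:
    factor = slosh_envelope(zeta, FREQ_HZ, 60.0)
    print(f"{label}: amplitude x{factor:.1f} after 60 s")
# A slightly negative net damping multiplies the amplitude ~20x per minute,
# while a modestly positive one damps it to essentially zero.
```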
SpaceX attempted the third Falcon 1 launch on August 3, 2008 (GMT) from Kwajalein. The flight carried the Trailblazer (Jumpstart-1) satellite for the US Air Force, the NanoSail-D and PREsat nanosatellites for NASA, and a space burial payload for Celestis. The rocket did not reach orbit, although the first stage, with the new Merlin 1C engine, performed perfectly. During launch preparations, an earlier attempt was delayed by unexpectedly slow loading of helium onto the Falcon 1, which exposed the fuel and oxidizer to the cryogenic helium longer than intended and left the vehicle outside its planned launch condition. Still within the specified window, the launch attempt was recycled, but it was aborted half a second before liftoff because of a sensor misreading. The problem was resolved and the countdown recycled again. With 25 minutes left in the launch window, the Falcon 1 lifted off from Omelek Island at 03:35 UTC. Small roll oscillations of the vehicle were visible during the launch. Stage separation occurred as planned, but residual fuel in the new Merlin 1C engine evaporated and provided transient thrust, and the first stage recontacted the second stage, preventing successful completion of the mission. The SpaceX flight-3 mission summary indicated that flight 4 would take place as planned and that the flight-3 failure did not require any technological upgrades; a longer interval between first-stage engine shutdown and stage separation was judged sufficient. SpaceX made the full video of the third launch attempt public a few weeks after the launch. Musk blamed himself for the failure of this launch, as well as the two prior attempts, explaining at the 2017 International Astronautical Congress that his role as chief engineer in the early Falcon 1 launches was not by choice and almost bankrupted the company before it succeeded: "And the reason that I ended up being the chief engineer or chief designer, was not because I want to, it's because I couldn't hire anyone. Nobody good would join. So I ended up being that by default. And I messed up the first three launches. The first three launches failed. Fortunately the fourth launch which was – that was the last money that we had for Falcon 1 – the fourth launch worked, or that would have been it for SpaceX." Musk further explained the situation to Ars Technica journalist Eric Berger: "At the time I had to allocate a lot of capital to Tesla and SolarCity, so I was out of money. We had three failures under our belt. So it's pretty hard to go raise money. The recession is starting to hit. The Tesla financing round that we tried to raise that summer had failed. I got divorced. I didn't even have a house. My ex-wife had the house. So it was a shitty summer." Following the three failures, the SpaceX team assembled the fourth rocket from available parts in six weeks, as a last chance for the company. A Boeing C-17 Globemaster III was chartered to deliver the rocket quickly, but en route the rocket partially imploded when cabin repressurization exceeded what the SpaceX team had expected from the C-17's manual, and the rocket had to undergo emergency repairs to be saved. Despite the challenges, the fourth flight of the Falcon 1 flew successfully on September 28, 2008, delivering RatSat, a 165-kilogram (363 lb) non-functional boilerplate spacecraft, into low Earth orbit. It was Falcon 1's first successful launch and the first successful orbital launch of any privately funded and developed, fully liquid-propelled carrier rocket.
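The flight-4 fix (a longer pause between engine shutdown and separation) can be made concrete with rough kinematics: a spent first stage that still produces a small, exponentially decaying residual thrust keeps accelerating toward the coasting second stage once they separate. The sketch below uses invented, illustrative numbers for the residual thrust, its decay constant, and the stage mass; none are published Falcon 1 values.

```python
import math

# Illustrative assumptions, not Falcon 1 data:
F0 = 2000.0        # residual thrust at engine shutdown, N
TAU = 1.5          # decay time constant of that thrust, s
M_STAGE1 = 1500.0  # dry mass of the spent first stage, kg

def closing_distance(delay_s: float, horizon_s: float = 3.0, dt: float = 0.001) -> float:
    """Distance the first stage closes on the second stage when separation
    happens delay_s seconds after shutdown (simple Euler integration)."""
    v = x = 0.0
    t = delay_s
    while t < delay_s + horizon_s:
        a = (F0 * math.exp(-t / TAU)) / M_STAGE1  # residual acceleration
        v += a * dt
        x += v * dt
        t += dt
    return x

for delay in (0.0, 2.0, 4.0):
    print(f"separate {delay:.0f} s after shutdown -> closes ~{closing_distance(delay):.2f} m")
# With these numbers, separating immediately closes metres of gap within
# seconds, while waiting a few seconds lets the transient thrust decay away.
```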
The launch occurred from Omelek Island, part of the Kwajalein Atoll in the Marshall Islands. Liftoff occurred at 23:15 UTC on September 28, 15 minutes into a 5-hour launch window; had the launch been scrubbed, it could have been attempted during the same daily window until October 1. The second-stage engine shut down 9 minutes 31 seconds after launch, once the vehicle had reached orbit. The initial orbit was reported to be about 330 × 650 km. Following a coast period, the second stage restarted and performed a successful second burn, resulting in a final orbit of 621 × 643 km at 9.35° inclination. The rocket followed the same trajectory as the previous flight, which had failed to place the Trailblazer, NanoSail-D, PREsat, and Celestis spacecraft into orbit. No major changes were made to the rocket other than increasing the time between first-stage burnout and stage separation; this minor change addressed the previous flight's failure, recontact between the first and second stages, by allowing residual thrust in the first-stage engine to dissipate before separation. RatSat and the attached second stage were still in orbit as of 2021. SpaceX announced that it had completed construction of the fifth Falcon 1 rocket and was transporting the vehicle to the Kwajalein Atoll launch complex, where it was to be launched on April 21, 2009 (April 20, 2009, in the United States). Less than a week before the scheduled launch date, Malaysian news reported that unsafe vibration levels had been detected in the rocket and that repairs were expected to take about six weeks. On April 20, 2009, SpaceX announced in a press release that the launch had been postponed because of a potential compatibility issue between the RazakSAT spacecraft and the Falcon 1 launch vehicle: a concern had been identified regarding the potential impact of predicted vehicle environments on the satellite. On June 1, SpaceX announced that the next launch window would open Monday, July 13 and extend through Tuesday, July 14, with a daily window opening at 21:00 UTC (09:00 local time). The launch on Monday, July 13 was successful, placing RazakSAT into its initial parking orbit. Thirty-eight minutes later, the rocket's second-stage engine fired again to circularize the orbit, and the payload was then successfully deployed. After the launch, Elon Musk, founder and CEO of SpaceX, told a reporter the launch had been a success: "We nailed the orbit to well within target parameters ... pretty much a bullseye." The Falcon 1 upper stage remains in low Earth orbit as of 2024. Following the fifth flight, further Falcon 1 launches were postponed and eventually cancelled, and the vehicle was decommissioned from service, with SpaceX stating, "We could not make Falcon 1 work as a business." Launches that had been booked on the Falcon 1 were moved to other vehicles or rebooked as Falcon 9 rideshare payloads.
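As a closing note on the orbit figures quoted above, the period of flight 4's final 621 × 643 km orbit follows directly from Kepler's third law, T = 2π·sqrt(a³/μ). A quick check in Python, assuming a mean Earth radius of 6,371 km (the roughly 97-minute result is a derived illustration, not a figure from the source):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m (approximation)

def orbital_period_s(perigee_km: float, apogee_km: float) -> float:
    """Period of an elliptical orbit from its perigee and apogee altitudes."""
    semi_major_m = R_EARTH + (perigee_km + apogee_km) / 2 * 1000
    return 2 * math.pi * math.sqrt(semi_major_m ** 3 / MU_EARTH)

# Flight 4's final orbit as quoted above: 621 x 643 km
print(f"period ~ {orbital_period_s(621, 643) / 60:.1f} minutes")  # -> ~97.2 minutes
```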
========================================
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_note-77] | [TOKENS: 4733]
Contents Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9.5 mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. From the 5th century BCE until the late Roman period, it was a prominent center of Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th-century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; a few decades later, however, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church, and it remains a titular see to this day. Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; in July 1948, however, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. The main international airport, Ben Gurion Airport, is located 8 km (5 mi) north of the city, and the city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic ("to quarrel; withhold, hinder"). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture; occupation continued in the Levantine Chalcolithic. Pottery finds date the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze Age, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor, and Jaffa.
Two architectural phases belong to the late EB I in Area B: the first phase had a mudbrick wall, while the later phase included a circular stone structure. Later excavations produced a later occupation layer, Stratum IV, consisting of two phases: Stratum IVb, with a mudbrick wall on stone foundations and rounded exterior corners, and Stratum IVa, with a mudbrick wall lacking stone foundations, containing imported Egyptian pottery and local imitations. Another excavation revealed nine occupation strata. Strata VI–III belonged to Early Bronze IB, and the material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V–II), showing continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period Jonathan Maccabee and his brother Simon Maccabaeus enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish–Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BCE, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery; they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study had been outlawed by the Romans and was pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah; other rabbis disagreed with this ruling. Lydda was next taken, and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, Emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), Joshua ben Levi is said to have founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted.
In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian, who was born there between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba Map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod, referred to in Arabic as "al-Ludd", served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque, which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, the Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated that it was part of the Jerusalem District under Ayyubid rule. Sultan Baybars brought Lydda under Muslim control again by 1267–68. According to Qalqashandi, Lydda was an administrative centre of a wilaya in the Mamluk empire during the fourteenth and fifteenth centuries. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s the revenues of Lydda were designated for the new waqf of the Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596, Lydda was part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax rate of 33.3% on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special produce ("dawalib", spinning wheels), goats, and beehives, in addition to occasional revenues and a market toll; the total came to 45,000 akçe. All of the revenue went to the waqf. In 1051 AH (1641/2 CE), the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda, and Yāzūr, which belonged to the Waqf Haseki Sultan. The village appeared, though misplaced, as Lydda on the map of Pierre Jacotin compiled in 1799. Missionary William M.
Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as 55 Catholics, 1,940 "Greeks", 5 Protestants, and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. In the second half of the 19th century, Jewish merchants migrated to the city, but they left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, per a League of Nations decree that followed the First World War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, and also built an airport, later renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); of the Christians, 921 were Orthodox, 4 Roman Catholic, and 1 Melkite. By the 1931 census this had increased to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750; in 1945, 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews, and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000: 18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. In December 1947, thirteen Jewish passengers in a seven-car convoy to the Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village; their bodies were paraded in Lydda's principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed; other estimates are higher, with Arab historian Aref al-Aref estimating 400 and Nimr al-Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there.
A key event was the Palestinian expulsion from Lydda and Ramle, in which 50,000–70,000 Palestinians were expelled from the two towns by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command and forced to walk 17 km (10.5 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. According to a 2010 report in The Economist, a three-metre-high wall was built between Jewish and Arab neighbourhoods, and construction in Jewish areas was given priority over construction in Arab neighbourhoods. The report said that violent crime in the Arab sector revolved mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organisations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy the Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but that the "crackdown came for one side" only. Demographics In the 19th century and until the Lydda Death March, Lod was an exclusively Muslim-Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500. According to the 2019 census, the population was 77,223, of which 53,581 people (69.4%) were classified as "Jews and Others" and 23,642 (30.6%) as "Arab". Education According to the CBS, the city has 38 schools and 13,188 pupils: 26 elementary schools with 8,325 pupils and 13 high schools with 4,863 pupils. About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001. Economy The airport and related industries are a major source of employment for the residents of Lod. Other important employers in the city are the communication-equipment company Talard, Cafe-Co (a subsidiary of the Strauss Group), and Kashev (the computer center of Bank Leumi). A Jewish Agency absorption centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed.
The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009–2010, Dor Guez held Georgeopolis, an exhibition focusing on Lod, at the Petach Tikva Museum of Art. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to the widening of HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger in the mosaic and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site; it is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals, and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division); its home ground is the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s and won the State Cup in 1984; the club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy), was established soon after but folded in 2007.
========================================
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_ref-50] | [TOKENS: 1856]
Contents xAI (company) X.AI Corp., doing business as xAI, is an American artificial intelligence (AI), social media, and technology company that is a wholly owned subsidiary of the American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. As chief engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment"; by May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding, and later that month the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital, and Tribe Capital. As of August 2024, Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, and Sequoia Capital, among others, bringing its total funding to date to over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video-generation tools. On March 28, 2025, Musk announced that xAI had acquired its sister company X Corp., the developer of the social media platform X (formerly known as Twitter), which Musk had previously acquired in October 2022. The all-stock deal valued X at $33 billion, or $45 billion when factoring in $12 billion in debt, while xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that it had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans; Morgan Stanley took no stake in it. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government", and the United States Department of Defense announced that xAI had received a $200 million contract for military AI, along with Anthropic, Google, and OpenAI. On September 12, 2025, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot. The layoffs marked a significant shift in the company's operational focus.
In June 2024, the Greater Memphis Chamber announced that xAI was planning to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer became fully operational in December 2024. Local government in Memphis voiced concerns about its increased electricity usage, 150 megawatts at peak; while the agreement with the city was being worked out, the company deployed 14 portable VoltaGrid methane-gas-powered generators to temporarily supplement the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of polluting gases and that xAI had been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators produce about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. The Southern Environmental Law Center has stated that the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. On November 26, 2025, Elon Musk announced plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, which is 10% of the data center's estimated power use. xAI has continued to expand its infrastructure, purchasing a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power. The expansion is driven by xAI's commitment to compete with OpenAI's ChatGPT and Anthropic's Claude models. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that made xAI a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring reorganises xAI into four primary development teams, one for the Grok app and others for other features such as Grok Imagine; Grokipedia, X, and API features fall under smaller teams. Products In July 2023, Musk said that a politically correct AI would be "incredibly dangerous" and misleading, citing the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey as an example. Musk instead said that xAI would be "maximally truth-seeking", and that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot integrated with X. xAI stated that once the bot was out of beta, it would only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced. On August 14, 2024, Grok-2 was made available to X Premium subscribers. It was the first Grok model with image-generation capabilities.
On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature, and introduced a web-search function called DeepSearch. In March 2025, xAI added an image-editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4, along with a high-performance version of the model called Grok Heavy, access to which cost $300 per month at the time. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a "great" AI-generated game before the end of 2026.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-75] | [TOKENS: 10728]
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively called PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995 and Europe on 29 September 1995, with other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, attracting the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing, which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after its release and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative software sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home entertainment systems. Although Kutaragi was nearly fired because he had worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been one of the manufacturers of the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. Although the initial agreement between Nintendo and Sony concerned a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible, Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving it a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising that it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over its licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced its partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as it had broken an "unwritten law" that native companies did not turn against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted its research but ultimately decided to develop what it had begun with Nintendo and Sega into its own console based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed under which the Play Station would retain a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that it had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992 with Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite Ohga's enthusiasm, a majority of those present remained opposed, as did older Sony executives, who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded Ohga of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project while maintaining relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as manufacturing games on CD-ROM was similar to the process used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and with Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama said that Sony further wanted to emphasise the new console's ability to use the CD-ROM format's Red Book audio in its games, alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative "PSX" branding after focus groups responded negatively to the "PlayStation" name. Early advertising prior to the console's North American launch referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring its own games over others'. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco was particularly interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Signing these companies secured influential games such as Ridge Racer (1993), one of the most popular arcade games at the time, and Mortal Kombat 3 (1995); despite Namco being a longstanding Nintendo developer, it had already been confirmed behind closed doors by December 1993 that Ridge Racer would be the PlayStation's first game. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, and Namco subsequently based its System 11 arcade board on PlayStation hardware and developed Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of its own while the PlayStation was in development. This changed in 1993, when Sony acquired the Liverpool-based company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation, recalling that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as the studio played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the idea of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, the owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented a prototype of their condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems', securing a cheaper and more efficient method of designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems went on to produce development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers when needed. Unlike Nintendo, Sony did not favour its own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising its own games, while inexpensive compact disc manufacturing was available at dozens of locations around the world. The PlayStation's architecture and interoperability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded the compatibility of software should Sony decide to make further hardware revisions. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, found allocating RAM challenging given the 3.5-megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt that he and his team succeeded in this regard. The technical specifications were finalised in 1993 and the design during 1994. The PlayStation name and final design were confirmed at a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled realising how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million within six months, although the Saturn outsold it in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared with 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers paying up to £700 for such consoles. Before the North American release, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At its keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, who said simply "$299" and left to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer, and Tekken (1994). Sony also announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales; some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. One retail account of the period recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). Over 100,000 pre-orders had been placed, and 17 games were available on the market at the time of the PlayStation's American launch, compared with the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One form) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third company's registration of the trademark meant the console could not be released there officially, and the officially distributed Sega Saturn initially took over the market; as the Sega console withdrew, however, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was likewise the Sega Saturn, but after the Saturn left the market, the PlayStation's installed base grew to around 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised so that geometric shapes replaced certain letters, rendered in plain text as "Live in Your World. Play in Ours." and "U R NOT E" (with a red "E", reading as "you are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say, "Bullshit.
Let me show you how ready I am." As the console's appeal widened, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early-1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclub operators such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated and played. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. In 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry; Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in the new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the millennium: in mid-2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering around 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-maths coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers a sampling rate of up to 44.1 kHz, and provides music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. It can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can generate up to 4,000 sprites and 180,000 textured polygons per second, or 360,000 polygons per second flat-shaded (a rough per-frame reading of these figures is sketched below). The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models continued to lose connectors, including the parallel port, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software needed to program PlayStation games and applications in C.
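To give the GPU throughput figures above some intuition, here is a back-of-the-envelope sketch of the per-frame polygon budget they imply. The 30 frames-per-second target is an assumption chosen for illustration (a common figure for games of the era), not a stated specification:

    # Per-frame polygon budget implied by the per-second GPU figures above.
    FLAT_SHADED_PER_SEC = 360_000   # flat-shaded polygons per second (from the text)
    TEXTURED_PER_SEC = 180_000      # textured polygons per second (from the text)
    ASSUMED_FPS = 30                # assumed frame rate; not a stated spec

    print(FLAT_SHADED_PER_SEC // ASSUMED_FPS)  # -> 12000 flat-shaded polygons per frame
    print(TEXTURED_PER_SEC // ASSUMED_FPS)     # -> 6000 textured polygons per frame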
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling even the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette-lighter adaptor, adding a degree of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), two shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square. Rather than carrying the traditionally used letters or numbers, these shaped buttons became a trademark that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this mapping is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also features a thumb-operated digital hat switch, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that this haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak; a Nintendo spokesman denied that the company took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, and it has longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. It proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play audio CDs; the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, thereby entering a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between the PlayStation and PS One depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and it became the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on the original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of the PlayStation BIOS on a Sega console. Bleem!
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, given the growing popularity of CD-R media and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts); a hypothetical sketch of this boot check follows at the end of this passage. The signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and therefore produced duplicates that omitted it, since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process. Early PlayStations, particularly early 1000-series models, exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents that lead to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
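The wobble-based boot check described above can be made concrete with a short, entirely hypothetical sketch. The function name, return values, and region labels below are invented for illustration; the real check is performed in the console's drive hardware and firmware, not in game-facing software:

    # Schematic sketch (hypothetical) of the boot-time disc check described above.
    def disc_boots(read_pregap_wobble):
        """read_pregap_wobble() should return the symbol decoded from the wobble
        modulation in the disc's pregap, or None if no modulation is found (as
        with a burned copy, whose data is intact but whose physical wobble
        signature is absent)."""
        ALLOWED_REGIONS = {"JAPAN", "NORTH_AMERICA", "EUROPE"}  # placeholder labels
        symbol = read_pregap_wobble()
        return symbol is not None and symbol in ALLOWED_REGIONS

    # A pressed disc yields a valid region symbol; a CD-R copy yields none.
    print(disc_boots(lambda: "EUROPE"))  # -> True
    print(disc_boots(lambda: None))      # -> False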
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's best-selling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, cumulative software shipments stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (released in the West as Battle Arena Toshinden), and Kileak: The Blood. The first two games available at the later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games and the first to be reissued as part of Sony's Greatest Hits or Platinum ranges. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo.
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily because third-party developers almost unanimously favoured it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with profits from the video game division contributing 23% of the company's overall profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third-best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh-best console on its list, noting that its appeal to older audiences was a crucial factor in propelling the video game industry, as was its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh-best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to release a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the proprietary cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern for the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same net revenue (a worked example with hypothetical figures follows below). In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for audio CDs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get them onto the market, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
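As a worked illustration of that pricing arithmetic: the dollar figures below are entirely hypothetical (the article gives only the "about 40% lower" relationship), chosen to show how a steep drop in media cost can fund a much lower retail price at equal net revenue:

    # Hypothetical per-unit economics of a cartridge game versus a CD game.
    # Only the "about 40% lower retail price" relationship comes from the text;
    # every dollar figure here is invented for illustration.
    cartridge_retail, cartridge_media_cost = 75.00, 32.00
    cd_retail, cd_media_cost = 45.00, 2.00   # 45 = 75 * 0.6, i.e. 40% lower

    print(cartridge_retail - cartridge_media_cost)  # -> 43.0 left per cartridge
    print(cd_retail - cd_media_cost)                # -> 43.0 left per CD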
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo itself or by second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the original version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four ARM Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. It received negative reviews from critics and was compared unfavorably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Magnesium] | [TOKENS: 5841]
Magnesium Magnesium is a chemical element; it has symbol Mg and atomic number 12. It is a shiny gray metal with a low density, low melting point, and high chemical reactivity. Like the other alkaline earth metals (group 2 of the periodic table), it occurs naturally only in combination with other elements and almost always has an oxidation state of +2. It reacts readily with air to form a thin passivation coating of magnesium oxide that inhibits further corrosion of the metal. The free metal burns with a brilliant-white light. The metal is obtained mainly by electrolysis of magnesium salts obtained from brine. It is less dense than aluminium and is used primarily as a component in strong and lightweight alloys that contain aluminium. In the cosmos, magnesium is produced in large, aging stars by the sequential addition of three helium nuclei to a carbon nucleus. When such stars explode as supernovas, much of the magnesium is expelled into the interstellar medium, where it may recycle into new star systems. Magnesium is the eighth most abundant element in the Earth's crust and the fourth most common element in the Earth as a whole (after iron, oxygen and silicon), making up 13% of the planet's mass and a large fraction of the planet's mantle. It is the third most abundant element dissolved in seawater, after sodium and chlorine. It is also the eleventh most abundant element by mass in the human body and is essential to all cells and some 300 enzymes. Magnesium ions interact with polyphosphate compounds such as ATP, DNA, and RNA; hundreds of enzymes require magnesium ions to function. Magnesium compounds are used medicinally as common laxatives and antacids (such as milk of magnesia), and to stabilize abnormal nerve excitation or blood vessel spasm in such conditions as eclampsia. Characteristics Elemental magnesium is a gray-white lightweight metal, two-thirds the density of aluminium. Magnesium has the lowest melting point (923 K (650 °C)) and the lowest boiling point (1,363 K (1,090 °C)) of all the alkaline earth metals. Pure polycrystalline magnesium is brittle and easily fractures along shear bands. It becomes much more malleable when alloyed with small amounts of other metals, such as 1% aluminium. The malleability of polycrystalline magnesium can also be significantly improved by reducing its grain size to about 1 μm or less. Magnesium is widely used as a reducing agent. Although it oxidises in air, it does not need an inert atmosphere for storage; it forms a thin layer of magnesium oxide that protects the rest of the metal. Direct reaction of magnesium with air or oxygen at ambient pressure forms only the "normal" oxide MgO. However, this oxide may be combined with hydrogen peroxide to form magnesium peroxide, MgO2, and at low temperature the peroxide may be further reacted with ozone to form magnesium superoxide Mg(O2)2. Magnesium reacts with nitrogen in the solid state if it is powdered and heated to just below the melting point, forming magnesium nitride, Mg3N2. Magnesium reacts with water at room temperature, though much more slowly than calcium, a similar group 2 metal. When submerged in water, hydrogen bubbles form slowly on the surface of the metal; the reaction happens much more rapidly with powdered magnesium, and faster still at higher temperatures (see § Safety precautions). Magnesium's reversible reaction with water can be harnessed to store energy and run a magnesium-based engine.
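A quick stoichiometric sketch of this energy-storage idea, using the magnesium-water reaction given in the text below (Mg + 2 H2O → Mg(OH)2 + H2) and standard molar masses, shows how much hydrogen a given mass of magnesium can liberate:

    # Hydrogen yield of the magnesium-water reaction: Mg + 2 H2O -> Mg(OH)2 + H2.
    # One mole of Mg liberates one mole of H2; molar masses are standard values.
    M_MG = 24.305  # g/mol, magnesium
    M_H2 = 2.016   # g/mol, molecular hydrogen

    kg_h2_per_kg_mg = M_H2 / M_MG
    print(f"{kg_h2_per_kg_mg:.3f} kg of H2 per kg of Mg")  # -> about 0.083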
Magnesium also reacts exothermically with most acids, such as hydrochloric acid (HCl), producing magnesium chloride and hydrogen gas, similar to the HCl reaction with aluminium, zinc, and many other metals. Although it is difficult to ignite in mass or bulk, magnesium metal will ignite. Magnesium may also be used as an igniter for thermite, a mixture of aluminium and iron oxide powder that ignites only at a very high temperature. When finely powdered, magnesium reacts with water to produce magnesium hydroxide and hydrogen gas: Mg(s) + 2 H2O(l) → Mg(OH)2(s) + H2(g). However, this reaction is much less dramatic than the reactions of the alkali metals with water, because the magnesium hydroxide builds up on the surface of the magnesium metal and inhibits further reaction. In addition, when reacting with steam it produces magnesium oxide and hydrogen: Mg(s) + H2O(g) → MgO(s) + H2(g). Organomagnesium compounds are widespread in organic chemistry. They are commonly found as Grignard reagents, formed by reaction of magnesium with haloalkanes or aryl halides in diethyl ether. Examples of Grignard reagents are phenylmagnesium bromide and ethylmagnesium bromide. Grignard reagents function as common nucleophiles, attacking electrophilic groups such as the carbon atom within the polar bond of a carbonyl group. A prominent organomagnesium reagent beyond the Grignard reagents is magnesium anthracene, which is used as a source of highly active magnesium; another is magnesocene, a white to off-yellow pyrophoric powder that violently hydrolyses in water, first prepared in 1954 by independent groups, one led by Ernst Otto Fischer and the other by Geoffrey Wilkinson. The related butadiene-magnesium adduct serves as a source for the butadiene dianion. Complexes of dimagnesium(I) have also been observed. The presence of magnesium ions can be detected by the addition of ammonium chloride, ammonium hydroxide and monosodium phosphate to an aqueous or dilute HCl solution of the salt; the formation of a white precipitate indicates the presence of magnesium ions. Azo violet dye can also be used, turning deep blue in the presence of an alkaline solution of a magnesium salt. The color is due to the adsorption of azo violet by Mg(OH)2. Forms As of 2013, consumption of magnesium alloys was less than one million tonnes per year, compared with 50 million tonnes of aluminium alloys. Their use has been historically limited by the tendency of Mg alloys to corrode, creep at high temperatures, and combust. In magnesium alloys, the presence of iron, nickel, copper, or cobalt strongly activates corrosion. In more than trace amounts, these metals precipitate as intermetallic compounds, and the precipitate locales function as active cathodic sites that reduce water, causing the loss of magnesium. Controlling the quantity of these metals improves corrosion resistance; sufficient manganese, for example, overcomes the corrosive effects of iron, though this requires precise control over composition, increasing costs. Adding a cathodic poison captures atomic hydrogen within the structure of the metal, preventing the formation of free hydrogen gas, an essential factor in corrosive chemical processes. The addition of about one part in three hundred of arsenic reduces the corrosion rate of magnesium in a salt solution by a factor of nearly ten. Magnesium's tendency to creep (gradually deform) at high temperatures is greatly reduced by alloying with zinc and rare-earth elements, and flammability is significantly reduced by a small amount of calcium in the alloy.
By alloying with rare-earth elements, it may be possible to manufacture magnesium alloys that do not catch fire at temperatures above magnesium's liquidus, in some cases potentially approaching magnesium's boiling point. Magnesium forms a variety of compounds important to industry and biology, including magnesium carbonate, magnesium chloride, magnesium citrate, magnesium hydroxide (milk of magnesia), magnesium oxide, magnesium sulfate, and magnesium sulfate heptahydrate (Epsom salts). As recently as 2020, magnesium hydride was under investigation as a way to store hydrogen. Magnesium has three stable isotopes: 24Mg, 25Mg and 26Mg. All are present in significant amounts in nature (see table of isotopes above). About 79% of Mg is 24Mg. The isotope 28Mg is radioactive and from the 1950s to the 1970s was produced by several nuclear power plants for use in scientific experiments. This isotope has a relatively short half-life (21 hours) and its use was limited by shipping times. The nuclide 26Mg has found application in isotopic geology, similar to that of aluminium. 26Mg is a radiogenic daughter product of 26Al, which has a half-life of 717,000 years. Excess quantities of stable 26Mg have been observed in the Ca-Al-rich inclusions of some carbonaceous chondrite meteorites. This anomalous abundance is attributed to the decay of its parent 26Al in the inclusions, and researchers conclude that such meteorites were formed in the solar nebula before the 26Al had decayed. These are among the oldest objects in the Solar System and contain preserved information about its early history. It is conventional to plot 26Mg/24Mg against an Al/Mg ratio; in an isochron dating plot, the Al/Mg ratio plotted is 27Al/24Mg. The slope of the isochron has no age significance, but indicates the initial 26Al/27Al ratio in the sample at the time when the systems were separated from a common reservoir. Production Magnesium is the eighth-most-abundant element in the Earth's crust by mass and tied in seventh place with iron in molarity. It is found in large deposits of magnesite, dolomite, and other minerals, and in mineral waters, where the magnesium ion is soluble. Although magnesium is found in more than 60 minerals, only dolomite, magnesite, brucite, carnallite, talc, and olivine are of commercial importance. The Mg2+ cation is the second-most-abundant cation in seawater (about 1⁄8 the mass of sodium ions in a given sample), which makes seawater and sea salt attractive commercial sources for Mg. World production was approximately 1,100 kt in 2017, with the bulk being produced in China (930 kt) and Russia (60 kt). The United States was the major world supplier of this metal for much of the 20th century, supplying 45% of world production even as recently as 1995. Since China mastered the Pidgeon process, the US market share has fallen to 7%, with a single US producer left as of 2013: US Magnesium, a Renco Group company located on the shores of the Great Salt Lake. In September 2021, China took steps to reduce production of magnesium as a result of a government initiative to reduce energy availability for manufacturing industries, leading to a significant price increase. The Pidgeon process and the Bolzano process are similar. In both, magnesium oxide is the precursor to magnesium metal. The magnesium oxide is produced as a solid solution with calcium oxide by calcining the mineral dolomite, a solid solution of calcium and magnesium carbonates: CaCO3·MgCO3 → CaO·MgO + 2 CO2. Reduction then occurs at high temperatures with silicon.
A ferrosilicon alloy is used rather than pure silicon, as it is more economical. The iron component has no bearing on the reaction, which has the simplified equation:[citation needed] Si + 2 MgO → SiO2 + 2 Mg. The calcium oxide combines with the silicon, which acts as the oxygen scavenger, yielding the very stable calcium silicate. The Mg/Ca ratio of the precursors can be adjusted by the addition of MgO or CaO. The Pidgeon and Bolzano processes differ in the details of the heating and the configuration of the reactor; both generate gaseous Mg that is condensed and collected. The Pidgeon process dominates worldwide production. The Pidgeon method is less technologically complex and, because of the distillation/vapour-deposition conditions, a high-purity product is easily achievable. China is almost completely reliant on the silicothermic Pidgeon process. Besides the Pidgeon process, the second most used process for magnesium production is electrolysis. This is a two-step process: the first step is to prepare a feedstock containing magnesium chloride, and the second is to dissociate the compound in electrolytic cells into magnesium metal and chlorine gas. To extract the magnesium, calcium hydroxide is added to seawater to precipitate magnesium hydroxide. Magnesium hydroxide (brucite) is poorly soluble in water and can be collected by filtration; it then reacts with hydrochloric acid to give magnesium chloride. From magnesium chloride, electrolysis produces magnesium. The basic reaction is as follows: MgCl2 → Mg + Cl2. This reaction is operated at temperatures between 680 and 750 °C. The magnesium chloride can be obtained using the Dow process, which mixes sea water and dolomite in a flocculator, or by dehydration of magnesium chloride brines. The electrolytic cells are partially submerged in a molten salt electrolyte to which the produced magnesium chloride is added in concentrations of 6–18%. This process has its share of disadvantages, including the production of harmful chlorine gas and a very energy-intensive overall reaction, creating environmental risks. The Pidgeon process is more advantageous in its simplicity, shorter construction period, low power consumption and overall good magnesium quality compared to the electrolysis method. In the United States, magnesium was once obtained principally with the Dow process in Corpus Christi, Texas, by electrolysis of fused magnesium chloride from brine and sea water. A saline solution containing Mg2+ ions is first treated with lime (calcium oxide) and the precipitated magnesium hydroxide is collected: Mg2+ + CaO + H2O → Ca2+ + Mg(OH)2. The hydroxide is then converted to magnesium chloride by treatment with hydrochloric acid, with heating of the product to eliminate water: Mg(OH)2 + 2 HCl → MgCl2 + 2 H2O. The salt is then electrolyzed in the molten state. At the cathode, the Mg2+ ion is reduced by two electrons to magnesium metal: Mg2+ + 2 e− → Mg. At the anode, each pair of Cl− ions is oxidized to chlorine gas, releasing two electrons to complete the circuit: 2 Cl− → Cl2 + 2 e−. The carbothermic route to magnesium has been recognized as a low-energy yet high-productivity path to magnesium extraction. The chemistry is as follows: C + MgO → CO + Mg. A disadvantage of this method is that slow cooling of the vapour can cause the reaction to quickly revert. To prevent this, the magnesium can be dissolved directly in a suitable metal solvent before reversion starts, or the vapour can be rapidly quenched. A newer process, solid oxide membrane technology, involves the electrolytic reduction of MgO.
At the cathode, Mg2+ ion is reduced by two electrons to magnesium metal. The electrolyte is yttria-stabilized zirconia (YSZ). The anode is a liquid metal. At the YSZ/liquid metal anode O2− is oxidized. A layer of graphite borders the liquid metal anode, and at this interface carbon and oxygen react to form carbon monoxide. When silver is used as the liquid metal anode, there is no reductant carbon or hydrogen needed, and only oxygen gas is evolved at the anode. It was reported in 2011 that this method provides a 40% reduction in cost per pound over the electrolytic reduction method. Rieke et al. developed a "general approach for preparing highly reactive metal powders by reducing metal salts in ethereal or hydrocarbon solvents using alkali metals as reducing agents" now known as the Rieke process. Rieke finalized the identification of Rieke metals in 1989, one of which was Rieke-magnesium, first produced in 1974. History The name magnesium originates from the Greek word for locations related to the tribe of the Magnetes, either a district in Thessaly called Magnesia or Magnesia ad Sipylum, now in Turkey. It is related to magnetite and manganese, which also originated from this area, and required differentiation as separate substances. See manganese for this history. In 1618, a farmer at Epsom in England attempted to give his cows water from a local well. The cows refused to drink because of the water's bitter taste, but the farmer noticed that the water seemed to heal scratches and rashes. The substance obtained by evaporating the water became known as Epsom salts and its fame spread. It was eventually recognized as hydrated magnesium sulfate, MgSO4·7 H2O. The metal itself was first isolated by Sir Humphry Davy in England in 1808. He used electrolysis on a mixture of magnesia and mercuric oxide. Antoine Bussy prepared it in coherent form in 1831. Davy's first suggestion for a name was 'magnium', but the name magnesium is now used in most European languages. Further discoveries about magnesium were made by the father of physical chemistry in Imperial Russia, Nikolai Beketov (1827–1911), who established that magnesium and zinc displaced other metals from their salts under high temperatures. Uses Magnesium is the third-most-commonly-used structural metal, following iron and aluminium. The main applications of magnesium are, in order: aluminium alloys, die-casting (alloyed with zinc), removing sulfur in the production of iron and steel, and the production of titanium in the Kroll process. Magnesium is used in lightweight materials and alloys. For example, when infused with silicon carbide nanoparticles, it has extremely high specific strength. Historically, magnesium was one of the main aerospace construction metals and was used for German military aircraft as early as World War I and extensively for German aircraft in World War II. The Germans coined the name "Elektron" for magnesium alloy, a term which is still used today. In the commercial aerospace industry, magnesium was generally restricted to engine-related components, due to fire and corrosion hazards. Magnesium alloy use in aerospace is increasing in the 21st century, driven by the importance of fuel economy. Magnesium alloys can act as replacements for aluminium and steel alloys in structural applications. Both AJ62A and AE44 are recent developments in high-temperature low-creep magnesium alloys. 
The general strategy for such alloys is to form intermetallic precipitates at the grain boundaries, for example by adding mischmetal or calcium. Because of its low density and good mechanical and electrical properties, magnesium is used in the manufacture of mobile phones, laptop and tablet computers, cameras, and other electronic components. Because of its light weight, it was used as a premium feature in some 2020 laptops. Magnesium is flammable, burning at a temperature of approximately 3,100 °C (3,370 K; 5,610 °F), and the autoignition temperature of magnesium ribbon is approximately 473 °C (746 K; 883 °F). Magnesium's high combustion temperature makes it a useful tool for starting emergency fires. When burning in air, magnesium produces a brilliant white light that includes strong ultraviolet wavelengths. Magnesium powder (flash powder) was used for subject illumination in the early days of photography; this usage was eventually replaced by magnesium filament in electrically ignited single-use flashbulbs. Magnesium powder is used in fireworks and marine flares where a brilliant light is required, and in trick self-relighting birthday candles. It was also used for various theatrical effects, such as lightning, pistol flashes, and supernatural appearances. Magnesium is often used to ignite thermite or other materials that require a high ignition temperature, and it continues to be used as an incendiary element in warfare. Flame temperatures of magnesium and magnesium alloys can reach 3,100 °C (5,610 °F), although flame height above the burning metal is usually less than 300 mm (12 in). Once ignited, such fires are difficult to extinguish because they resist several substances commonly used to put out fires; combustion continues in nitrogen (forming magnesium nitride), in carbon dioxide (forming magnesium oxide and carbon), and in water (forming magnesium oxide and hydrogen, which also combusts due to heat in the presence of additional oxygen). In the form of turnings or ribbons, magnesium is used to prepare Grignard reagents, which are useful in organic synthesis. Magnesium compounds, primarily magnesium oxide (MgO), are used as a refractory material in furnace linings for producing iron, steel, nonferrous metals, glass, and cement. Magnesium oxide and other magnesium compounds are also used in the agricultural, chemical, and construction industries. Magnesium oxide from calcination is used as an electrical insulator in fire-resistant cables. Magnesium reacts with haloalkanes to give Grignard reagents, which are used for a wide variety of organic reactions forming carbon–carbon bonds. Magnesium salts are included in various foods, fertilizers (magnesium is a component of chlorophyll), and microbe culture media. Magnesium sulfite is used in the manufacture of paper (the sulfite process), magnesium phosphate to fireproof wood used in construction, and magnesium hexafluorosilicate for moth-proofing textiles. Biological roles The important interaction between phosphate and magnesium ions makes magnesium essential to the basic nucleic acid chemistry of all cells of all known living organisms. More than 300 enzymes require magnesium ions for their catalytic action, including all enzymes using or synthesizing ATP and those that use other nucleotides to synthesize DNA and RNA. The ATP molecule is normally found in a chelate with a magnesium ion.
Magnesium intake—especially from diet—may modestly lower blood pressure and reduce the risks of stroke and sudden cardiac death, but the evidence is mixed, the effects are small, and more robust clinical trials are needed to clarify its role in cardiovascular disease prevention. Spices, nuts, cereals, cocoa and vegetables are good sources of magnesium, and green leafy vegetables such as spinach are particularly rich in it. In the UK, the recommended daily values for magnesium are 300 mg for men and 270 mg for women. In the U.S., the Recommended Dietary Allowances (RDAs) are 400 mg for men aged 19–30 and 420 mg for older men; for women, 310 mg for ages 19–30 and 320 mg for older women. Numerous pharmaceutical preparations of magnesium and dietary supplements are available. Most people get enough magnesium through a healthy diet, though supplements may help in specific conditions such as magnesium deficiency, pregnancy complications, or certain chronic health issues. A 2014 literature review found limited evidence supporting oral magnesium supplementation for migraine prevention, suggesting that increasing dietary magnesium intake may be a more effective option for patients open to lifestyle changes. An adult body contains 22–26 grams of magnesium, with 60% in the skeleton, 39% intracellular (20% in skeletal muscle), and 1% extracellular. Serum levels are typically 0.7–1.0 mmol/L or 1.8–2.4 mEq/L. Serum magnesium levels may be normal even when intracellular magnesium is deficient. The mechanisms for maintaining the serum magnesium level are varying gastrointestinal absorption and renal excretion. Intracellular magnesium is correlated with intracellular potassium. Increased magnesium lowers calcium and can either prevent hypercalcemia or cause hypocalcemia, depending on the initial level. Both low and high protein intake inhibit magnesium absorption, as does the amount of phosphate, phytate, and fat in the gut. Unabsorbed dietary magnesium is excreted in feces; absorbed magnesium is excreted in urine and sweat. Magnesium status may be assessed by measuring serum and erythrocyte magnesium concentrations coupled with urinary and fecal magnesium content, but intravenous magnesium loading tests are more accurate and practical. A retention of 20% or more of the injected amount indicates deficiency (a minimal sketch of this calculation follows below). As of 2004, no biomarker had been established for magnesium. Magnesium concentrations in plasma or serum may be monitored for efficacy and safety in those receiving the drug therapeutically, or to confirm the diagnosis in potential poisoning victims. The newborn children of mothers who received parenteral magnesium sulfate during labor may exhibit toxicity even with normal serum magnesium levels. Low plasma magnesium (hypomagnesemia) is common: it is found in 2.5–15% of the general population. From 2005 to 2006, 48 percent of the United States population consumed less magnesium than recommended in the Dietary Reference Intake. Other causes are increased renal or gastrointestinal loss, an increased intracellular shift, and proton-pump-inhibitor antacid therapy. Most cases are asymptomatic, but symptoms referable to neuromuscular, cardiovascular, and metabolic dysfunction may occur. Alcoholism is often associated with magnesium deficiency. Chronically low serum magnesium levels are associated with metabolic syndrome, diabetes mellitus type 2, fasciculation, and hypertension.
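A minimal arithmetic sketch of the loading-test rule of thumb just described (retaining 20% or more of the injected dose suggests deficiency); the example quantities are hypothetical:

    # Interpreting an intravenous magnesium loading test, per the rule above:
    # retention of >= 20% of the injected dose indicates deficiency.
    def retention_fraction(injected_mmol, excreted_mmol):
        return (injected_mmol - excreted_mmol) / injected_mmol

    def indicates_deficiency(injected_mmol, excreted_mmol, threshold=0.20):
        return retention_fraction(injected_mmol, excreted_mmol) >= threshold

    # Hypothetical example: 30 mmol injected, 21 mmol recovered in urine.
    print(retention_fraction(30.0, 21.0))    # -> 0.3 (30% retained)
    print(indicates_deficiency(30.0, 21.0))  # -> True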
Various magnesium salts are used in further therapeutic applications. Overdose from dietary sources alone is unlikely because excess magnesium in the blood is promptly filtered by the kidneys; overdose is more likely in the presence of impaired renal function. Overdose can nevertheless occur with excessive intake of supplements: megavitamin therapy has caused death in a child, and severe hypermagnesemia in a woman and a young girl who had healthy kidneys. The most common symptoms of overdose are nausea, vomiting, and diarrhea; other symptoms include hypotension, confusion, slowed heart and respiratory rates, deficiencies of other minerals, coma, cardiac arrhythmia, and death from cardiac arrest. Plants require magnesium to synthesize chlorophyll, essential for photosynthesis. Magnesium in the center of the porphyrin ring in chlorophyll functions in a manner similar to the iron in the center of the porphyrin ring in heme. Magnesium deficiency in plants causes late-season yellowing between leaf veins, especially in older leaves, and can be corrected by applying either Epsom salts (which are rapidly leached) or crushed dolomitic limestone to the soil. Safety precautions Magnesium metal and its alloys can be explosive hazards; they are highly flammable in their pure form when molten or in powder or ribbon form. Burning or molten magnesium reacts violently with water. When working with powdered magnesium, safety glasses with UV filters (such as welders use) are employed, because burning magnesium produces ultraviolet light that can permanently damage the retina of a human eye. Magnesium is capable of reducing water and releasing highly flammable hydrogen gas:
Mg (s) + 2 H2O (l) → Mg(OH)2 (s) + H2 (g)
Therefore, water cannot extinguish magnesium fires; the hydrogen gas produced intensifies the fire. Dry sand is an effective smothering agent, but only on relatively level and flat surfaces. Magnesium also reacts with carbon dioxide exothermically to form magnesium oxide and carbon:
2 Mg (s) + CO2 (g) → 2 MgO (s) + C (s)
Hence, carbon dioxide fuels rather than extinguishes magnesium fires. Burning magnesium can be quenched by using a Class D dry chemical fire extinguisher, or by covering the fire with sand or magnesium foundry flux to remove its air source.
Comparative data for the alkaline earth metals:
Element    Symbol  Atomic number  Atomic weight  Melting point  Boiling point  Density (g/cm3)  Electronegativity
Beryllium  Be      4              9.012182       1560.15 K      2742 K         1.85             1.57
Magnesium  Mg      12             24.3050        923.15 K       1363 K         1.738            1.31
Calcium    Ca      20             40.078         1112.15 K      1757 K         1.54             1.00
Strontium  Sr      38             87.62          1042.15 K      1655 K         2.64             0.95
Barium     Ba      56             137.327        1002.15 K      2170 K         3.594            0.89
Radium     Ra      88             —              973.15 K       2010 K         5.5              0.9
========================================
[SOURCE: https://en.wikipedia.org/wiki/Lipid] | [TOKENS: 5919]
Lipid Lipids are a broad group of organic compounds that include fats, waxes, sterols, fat-soluble vitamins (such as vitamins A, D, E and K), monoglycerides, diglycerides, phospholipids, and others. The functions of lipids include storing energy, signaling, and acting as structural components of cell membranes. Lipids have applications in the cosmetic and food industries, and in nanotechnology. Lipids are broadly defined as hydrophobic or amphiphilic small molecules; the amphiphilic nature of some lipids allows them to form structures such as vesicles, multilamellar/unilamellar liposomes, or membranes in an aqueous environment. Biological lipids originate entirely or in part from two distinct types of biochemical subunits or "building-blocks": ketoacyl and isoprene groups. Using this approach, lipids may be divided into eight categories: fatty acyls, glycerolipids, glycerophospholipids, sphingolipids, saccharolipids, and polyketides (derived from condensation of ketoacyl subunits); and sterol lipids and prenol lipids (derived from condensation of isoprene subunits). Although the term lipid is sometimes used as a synonym for fats, fats are a subgroup of lipids called triglycerides. Lipids also encompass molecules such as fatty acids and their derivatives (including tri-, di-, and monoglycerides, and phospholipids), as well as other sterol-containing metabolites such as cholesterol. Although humans and other mammals use various biosynthetic pathways both to break down and to synthesize lipids, some essential lipids cannot be made this way and must be obtained from the diet. History In 1815, Henri Braconnot classified lipids (graisses) into two categories, suifs (solid greases or tallow) and huiles (fluid oils). In 1823, Michel Eugène Chevreul developed a more detailed classification, including oils, greases, tallow, waxes, resins, balsams and volatile oils (or essential oils). In 1827, William Prout recognized fat ("oily" alimentary matters), along with protein ("albuminous") and carbohydrate ("saccharine"), as important nutrients for humans and animals. The first synthetic triglyceride was reported by Théophile-Jules Pelouze in 1844, when he produced tributyrin by treating butyric acid with glycerin in the presence of concentrated sulfuric acid. Several years later, Marcellin Berthelot, one of Pelouze's students, synthesized tristearin and tripalmitin by reaction of the analogous fatty acids with glycerin in the presence of gaseous hydrogen chloride at high temperature. For a century, chemists regarded "fats" as only simple lipids made of fatty acids and glycerol (glycerides), but new forms were described later. Theodore Gobley (1847) discovered phospholipids in mammalian brain and hen egg, which he named "lecithins". Thudichum discovered phospholipids (cephalin), glycolipids (cerebroside) and sphingolipids (sphingomyelin) in the human brain. The terms lipoid, lipin, lipide and lipid have been used with varied meanings from author to author. In 1912, Rosenbloom and Gies proposed the substitution of "lipoid" by "lipin". In 1920, Bloor introduced a new classification for "lipoids": simple lipoids (greases and waxes), compound lipoids (phospholipoids and glycolipoids), and the derived lipoids (fatty acids, alcohols, sterols). The word lipide, which stems etymologically from Greek λίπος, lipos 'fat', was introduced in 1923 by the French pharmacologist Gabriel Bertrand.
Bertrand included in the concept not only the traditional fats (glycerides), but also the "lipoids", with a complex constitution. The word lipide was unanimously approved by the international commission of the Société de Chimie Biologique during the plenary session on July 3, 1923. The word lipide was later anglicized as lipid because of its pronunciation ('lɪpɪd). In French, the suffix -ide, from Ancient Greek -ίδης (meaning 'son of' or 'descendant of'), is always pronounced (ɪd). In 1947, T. P. Hilditch defined "simple lipids" as greases and waxes (true waxes, sterols, alcohols). Categories Lipids have been classified into eight categories by the Lipid MAPS consortium as follows: Fatty acyls, a generic term for describing fatty acids, their conjugates and derivatives, are a diverse group of molecules synthesized by chain-elongation of an acetyl-CoA primer with malonyl-CoA or methylmalonyl-CoA groups in a process called fatty acid synthesis. They are made of a hydrocarbon chain that terminates with a carboxylic acid group; this arrangement gives the molecule a polar, hydrophilic end and a nonpolar, hydrophobic end that is insoluble in water. The fatty acid structure is one of the most fundamental categories of biological lipids and is commonly used as a building-block of more structurally complex lipids. The carbon chain, typically between four and 24 carbons long, may be saturated or unsaturated, and may be attached to functional groups containing oxygen, halogens, nitrogen, and sulfur. If a fatty acid contains a double bond, there is the possibility of cis or trans geometric isomerism, which significantly affects the molecule's configuration. Cis-double bonds cause the fatty acid chain to bend, an effect that is compounded with more double bonds in the chain. Three double bonds in 18-carbon linolenic acid, the most abundant fatty-acyl chain of plant thylakoid membranes, render these membranes highly fluid despite low environmental temperatures, and also make linolenic acid give dominant sharp peaks in high-resolution 13C NMR spectra of chloroplasts. This in turn plays an important role in the structure and function of cell membranes.: 193–5 Most naturally occurring fatty acids are of the cis configuration, although the trans form does exist in some natural and partially hydrogenated fats and oils. Examples of biologically important fatty acids include the eicosanoids, derived primarily from arachidonic acid and eicosapentaenoic acid, that include prostaglandins, leukotrienes, and thromboxanes. Docosahexaenoic acid is also important in biological systems, particularly with respect to sight. Other major lipid classes in the fatty acid category are the fatty esters and fatty amides. Fatty esters include important biochemical intermediates such as wax esters, fatty acid thioester coenzyme A derivatives, fatty acid thioester ACP derivatives and fatty acid carnitines. The fatty amides include N-acyl ethanolamines, such as the cannabinoid neurotransmitter anandamide.
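The chain-length and unsaturation bookkeeping above can be made concrete: a straight-chain fatty acid with n carbons and d carbon–carbon double bonds has molecular formula CnH(2n−2d)O2, since each double bond removes two hydrogens. A minimal sketch (the example acids are standard textbook values, not data from this article):

```python
def fatty_acid_formula(n_carbons: int, n_double_bonds: int) -> str:
    """Molecular formula of a straight-chain fatty acid:
    CnH(2n-2d)O2; each C=C double bond removes two hydrogens."""
    hydrogens = 2 * n_carbons - 2 * n_double_bonds
    return f"C{n_carbons}H{hydrogens}O2"

print(fatty_acid_formula(18, 0))  # stearic acid    -> C18H36O2
print(fatty_acid_formula(18, 1))  # oleic acid      -> C18H34O2
print(fatty_acid_formula(18, 3))  # linolenic acid  -> C18H30O2
```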
The hydrolysis of the ester bonds of triglycerides and the release of glycerol and fatty acids from adipose tissue are the initial steps in metabolizing fat.: 630–1 Additional subclasses of glycerolipids are represented by glycosylglycerols, which are characterized by the presence of one or more sugar residues attached to glycerol via a glycosidic linkage. Examples of structures in this category are the digalactosyldiacylglycerols found in plant membranes and seminolipid from mammalian sperm cells. Glycerophospholipids, usually referred to as phospholipids (though sphingomyelins are also classified as phospholipids), are ubiquitous in nature and are key components of the lipid bilayer of cells, as well as being involved in metabolism and cell signaling. Neural tissue (including the brain) contains relatively high amounts of glycerophospholipids, and alterations in their composition have been implicated in various neurological disorders. Glycerophospholipids may be subdivided into distinct classes, based on the nature of the polar headgroup at the sn-3 position of the glycerol backbone in eukaryotes and eubacteria, or the sn-1 position in the case of archaebacteria. Examples of glycerophospholipids found in biological membranes are phosphatidylcholine (also known as PC, GPCho or lecithin), phosphatidylethanolamine (PE or GPEtn) and phosphatidylserine (PS or GPSer). In addition to serving as a primary component of cellular membranes and binding sites for intra- and intercellular proteins, some glycerophospholipids in eukaryotic cells, such as phosphatidylinositols and phosphatidic acids, are either precursors of, or are themselves, membrane-derived second messengers.: 844 Typically, one or both of the glycerol backbone's remaining hydroxyl groups are acylated with long-chain fatty acids, but there are also alkyl-linked and 1Z-alkenyl-linked (plasmalogen) glycerophospholipids, as well as dialkylether variants in archaebacteria. Sphingolipids are a complicated family of compounds that share a common structural feature, a sphingoid base backbone that is synthesized de novo from the amino acid serine and a long-chain fatty acyl CoA, then converted into ceramides, phosphosphingolipids, glycosphingolipids and other compounds. The major sphingoid base of mammals is commonly referred to as sphingosine. Ceramides (N-acyl-sphingoid bases) are a major subclass of sphingoid base derivatives with an amide-linked fatty acid. The fatty acids are typically saturated or mono-unsaturated with chain lengths from 16 to 26 carbon atoms.: 421–2 The major phosphosphingolipids of mammals are sphingomyelins (ceramide phosphocholines), whereas insects contain mainly ceramide phosphoethanolamines and fungi have phytoceramide phosphoinositols and mannose-containing headgroups. The glycosphingolipids are a diverse family of molecules composed of one or more sugar residues linked via a glycosidic bond to the sphingoid base. Examples of these are the simple and complex glycosphingolipids such as cerebrosides and gangliosides. Sterols, such as cholesterol and its derivatives, are an important component of membrane lipids, along with the glycerophospholipids and sphingomyelins. Other examples of sterols are the bile acids and their conjugates, which in mammals are oxidized derivatives of cholesterol and are synthesized in the liver. The plant equivalents are the phytosterols, such as β-sitosterol, stigmasterol, and brassicasterol; the latter compound is also used as a biomarker for algal growth. The predominant sterol in fungal cell membranes is ergosterol.
Sterols are steroids in which one of the hydrogen atoms is substituted with a hydroxyl group at position 3 in the carbon chain. They have in common with steroids the same fused four-ring core structure. Steroids have different biological roles as hormones and signaling molecules. The eighteen-carbon (C18) steroids include the estrogen family whereas the C19 steroids comprise the androgens such as testosterone and androsterone. The C21 subclass includes the progestogens as well as the glucocorticoids and mineralocorticoids.: 749 The secosteroids, comprising various forms of vitamin D, are characterized by cleavage of the B ring of the core structure. Prenol lipids are synthesized from the five-carbon-unit precursors isopentenyl diphosphate and dimethylallyl diphosphate, which are produced mainly via the mevalonic acid (MVA) pathway. The simple isoprenoids (linear alcohols, diphosphates, etc.) are formed by the successive addition of C5 units, and are classified according to the number of these terpene units. Structures containing more than 40 carbons are known as polyterpenes. Carotenoids are important simple isoprenoids that function as antioxidants and as precursors of vitamin A. Another biologically important class of molecules is exemplified by the quinones and hydroquinones, which contain an isoprenoid tail attached to a quinonoid core of non-isoprenoid origin. Vitamin E and vitamin K, as well as the ubiquinones, are examples of this class. Prokaryotes synthesize polyprenols (called bactoprenols) in which the terminal isoprenoid unit attached to oxygen remains unsaturated, whereas in animal polyprenols (dolichols) the terminal isoprenoid is reduced. Saccharolipids are compounds in which fatty acids are linked to a sugar backbone, forming structures that are compatible with membrane bilayers. In the saccharolipids, a monosaccharide substitutes for the glycerol backbone present in glycerolipids and glycerophospholipids. The most familiar saccharolipids are the acylated glucosamine precursors of the Lipid A component of the lipopolysaccharides in Gram-negative bacteria. Typical lipid A molecules are disaccharides of glucosamine, which are derivatized with as many as seven fatty-acyl chains. The minimal lipopolysaccharide required for growth in E. coli is Kdo2-Lipid A, a hexa-acylated disaccharide of glucosamine that is glycosylated with two 3-deoxy-D-manno-octulosonic acid (Kdo) residues. Polyketides are synthesized by polymerization of acetyl and propionyl subunits by classic enzymes as well as iterative and multimodular enzymes that share mechanistic features with the fatty acid synthases. They comprise many secondary metabolites and natural products from animal, plant, bacterial, fungal and marine sources, and have great structural diversity. Many polyketides are cyclic molecules whose backbones are often further modified by glycosylation, methylation, hydroxylation, oxidation, or other processes. Many commonly used antimicrobial, antiparasitic, and anticancer agents are polyketides or polyketide derivatives, such as erythromycins, tetracyclines, avermectins, and antitumor epothilones. Biological functions Eukaryotic cells feature the compartmentalized membrane-bound organelles that carry out different biological functions.
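Before the functional discussion continues, the eight-category classification completed above is compact enough to encode as a lookup table; each example below is named somewhere in the surrounding text (a sketch for orientation only):

```python
# The eight LIPID MAPS categories, each with one example mentioned in this article.
LIPID_CATEGORIES = {
    "fatty acyls": "arachidonic acid",
    "glycerolipids": "triglycerides",
    "glycerophospholipids": "phosphatidylcholine",
    "sphingolipids": "sphingomyelin",
    "sterol lipids": "cholesterol",
    "prenol lipids": "carotenoids",
    "saccharolipids": "lipid A",
    "polyketides": "erythromycins",
}

assert len(LIPID_CATEGORIES) == 8  # matches the count given above
for category, example in LIPID_CATEGORIES.items():
    print(f"{category}: e.g. {example}")
```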
The glycerophospholipids are the main structural component of biological membranes, such as the cellular plasma membrane and the intracellular membranes of organelles; in animal cells, the plasma membrane physically separates the intracellular components from the extracellular environment. The glycerophospholipids are amphipathic molecules (containing both hydrophobic and hydrophilic regions) that contain a glycerol core linked to two fatty acid-derived "tails" by ester linkages and to one "head" group by a phosphate ester linkage. While glycerophospholipids are the major component of biological membranes, other non-glyceride lipid components such as sphingomyelin and sterols (mainly cholesterol in animal cell membranes) are also found in biological membranes.: 329–331 In plants and algae, the galactosyldiacylglycerols and sulfoquinovosyldiacylglycerol, which lack a phosphate group, are important components of membranes of chloroplasts and related organelles and are among the most abundant lipids in photosynthetic tissues, including those of higher plants, algae and certain bacteria. In plant thylakoid membranes, the largest lipid component is the non-bilayer-forming monogalactosyl diglyceride (MGDG), and phospholipids are scarce; despite this unique lipid composition, chloroplast thylakoid membranes have been shown to contain a dynamic lipid-bilayer matrix, as revealed by magnetic resonance and electron microscope studies. A biological membrane is a form of lamellar phase lipid bilayer. The formation of lipid bilayers is an energetically preferred process when the glycerophospholipids described above are in an aqueous environment.: 333–4 This is known as the hydrophobic effect. In an aqueous system, the polar heads of lipids align towards the polar, aqueous environment, while the hydrophobic tails minimize their contact with water and tend to cluster together, forming a vesicle; depending on the concentration of the lipid, this biophysical interaction may result in the formation of micelles, liposomes, or lipid bilayers. Other aggregations are also observed and form part of the polymorphism of amphiphile (lipid) behavior. Phase behavior is an area of study within biophysics. Micelles and bilayers form in the polar medium by a process known as the hydrophobic effect. When dissolving a lipophilic or amphiphilic substance in a polar environment, the polar molecules (i.e., water in an aqueous solution) become more ordered around the dissolved lipophilic substance, since the polar molecules cannot form hydrogen bonds to the lipophilic areas of the amphiphile. So in an aqueous environment, the water molecules form an ordered "clathrate" cage around the dissolved lipophilic molecule. The formation of lipids into protocell membranes represents a key step in models of abiogenesis, the origin of life. Triglycerides, stored in adipose tissue, are a major form of energy storage both in animals and plants. They are a major source of energy in aerobic respiration. The complete oxidation of fatty acids releases about 38 kJ/g (9 kcal/g), compared with only 17 kJ/g (4 kcal/g) for the oxidative breakdown of carbohydrates and proteins. The adipocyte, or fat cell, is designed for continuous synthesis and breakdown of triglycerides in animals, with breakdown controlled mainly by the activation of hormone-sensitive lipase.
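The energy-density figures just quoted imply that fat stores roughly 2.2 times as much energy per gram as carbohydrate or protein; a quick illustrative calculation (the energy budget is an arbitrary example, not a figure from the article):

```python
FAT_KJ_PER_G = 38    # complete oxidation of fatty acids, as quoted above
CARB_KJ_PER_G = 17   # oxidative breakdown of carbohydrates and proteins

budget_kj = 10_000   # arbitrary example energy budget
fat_g = budget_kj / FAT_KJ_PER_G
carb_g = budget_kj / CARB_KJ_PER_G
print(f"fat needed:          {fat_g:.0f} g")          # ~263 g
print(f"carbohydrate needed: {carb_g:.0f} g")         # ~588 g
print(f"mass ratio:          {carb_g / fat_g:.2f}x")  # ~2.24x
```

This mass advantage is one reason triglycerides, rather than carbohydrates, serve as the long-term energy store in adipose tissue.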
Migratory birds that must fly long distances without eating use triglycerides to fuel their flights.: 619 Evidence has emerged showing that lipid signaling is a vital part of cell signaling. Lipid signaling may occur via activation of G protein-coupled or nuclear receptors, and members of several different lipid categories have been identified as signaling molecules and cellular messengers. These include sphingosine-1-phosphate, a sphingolipid derived from ceramide that is a potent messenger molecule involved in regulating calcium mobilization, cell growth, and apoptosis; diacylglycerol and the phosphatidylinositol phosphates (PIPs), involved in calcium-mediated activation of protein kinase C; the prostaglandins, which are one type of fatty-acid-derived eicosanoid involved in inflammation and immunity; the steroid hormones such as estrogen, testosterone and cortisol, which modulate a host of functions such as reproduction, metabolism and blood pressure; and the oxysterols such as 25-hydroxy-cholesterol, which are liver X receptor agonists. Phosphatidylserine lipids are known to be involved in signaling for the phagocytosis of apoptotic cells or pieces of cells. They accomplish this by being exposed to the extracellular face of the cell membrane after the inactivation of flippases, which place them exclusively on the cytosolic side, and the activation of scramblases, which scramble the orientation of the phospholipids. After this occurs, other cells recognize the phosphatidylserines and phagocytose the cells or cell fragments exposing them. The "fat-soluble" vitamins (A, D, E and K) – which are isoprene-based lipids – are essential nutrients stored in the liver and fatty tissues, with a diverse range of functions. Acyl-carnitines are involved in the transport and metabolism of fatty acids in and out of mitochondria, where they undergo beta oxidation. Polyprenols and their phosphorylated derivatives also play important transport roles, in this case the transport of oligosaccharides across membranes. Polyprenol phosphate sugars and polyprenol diphosphate sugars function in extra-cytoplasmic glycosylation reactions, in extracellular polysaccharide biosynthesis (for instance, peptidoglycan polymerization in bacteria), and in eukaryotic protein N-glycosylation. Cardiolipins are a subclass of glycerophospholipids containing four acyl chains and three glycerol groups that are particularly abundant in the inner mitochondrial membrane. They are believed to activate enzymes involved with oxidative phosphorylation. Lipids also form the basis of steroid hormones. Metabolism The major dietary lipids for humans and other animals are animal and plant triglycerides, sterols, and membrane phospholipids. The process of lipid metabolism synthesizes and degrades the lipid stores and produces the structural and functional lipids characteristic of individual tissues. In animals, when there is an oversupply of dietary carbohydrate, the excess carbohydrate is converted to triglycerides. This involves the synthesis of fatty acids from acetyl-CoA and the esterification of fatty acids in the production of triglycerides, a process called lipogenesis.: 634 Fatty acids are made by fatty acid synthases that polymerize and then reduce acetyl-CoA units. The acyl chains in the fatty acids are extended by a cycle of reactions that add a two-carbon acetyl group, reduce the resulting keto group to an alcohol, dehydrate it to an alkene and then reduce it again to an alkane.
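Each pass through the condense–reduce–dehydrate–reduce cycle just described lengthens the growing acyl chain by one two-carbon unit. A minimal sketch of that bookkeeping, assuming a two-carbon acetyl primer:

```python
def chain_length(rounds: int, primer_carbons: int = 2) -> int:
    """Carbons in the acyl chain after `rounds` elongation cycles;
    each cycle (condense, reduce, dehydrate, reduce) adds two carbons."""
    return primer_carbons + 2 * rounds

# Seven rounds starting from an acetyl primer yield the 16-carbon palmitate.
print(chain_length(7))  # -> 16
```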
The enzymes of fatty acid biosynthesis are divided into two groups: in animals and fungi, all these fatty acid synthase reactions are carried out by a single multifunctional protein, while in plant plastids and bacteria, separate enzymes perform each step in the pathway. The fatty acids may be subsequently converted to triglycerides that are packaged in lipoproteins and secreted from the liver. The synthesis of unsaturated fatty acids involves a desaturation reaction, whereby a double bond is introduced into the fatty acyl chain. For example, in humans, the desaturation of stearic acid by stearoyl-CoA desaturase-1 produces oleic acid. The doubly unsaturated fatty acid linoleic acid as well as the triply unsaturated α-linolenic acid cannot be synthesized in mammalian tissues, and are therefore essential fatty acids that must be obtained from the diet.: 643 Triglyceride synthesis takes place in the endoplasmic reticulum by metabolic pathways in which acyl groups in fatty acyl-CoAs are transferred to the hydroxyl groups of glycerol-3-phosphate and diacylglycerol.: 733–9 Terpenes and isoprenoids, including the carotenoids, are made by the assembly and modification of isoprene units donated from the reactive precursors isopentenyl pyrophosphate and dimethylallyl pyrophosphate. These precursors can be made in different ways. In animals and archaea, the mevalonate pathway produces these compounds from acetyl-CoA, while in plants and bacteria the non-mevalonate pathway uses pyruvate and glyceraldehyde 3-phosphate as substrates. One important reaction that uses these activated isoprene donors is steroid biosynthesis. Here, the isoprene units are joined together to make squalene and then folded up and formed into a set of rings to make lanosterol. Lanosterol can then be converted into other steroids such as cholesterol and ergosterol. Beta oxidation is the metabolic process by which fatty acids are broken down in the mitochondria or in peroxisomes to generate acetyl-CoA. For the most part, fatty acids are oxidized by a mechanism that is similar to, but not identical with, a reversal of the process of fatty acid synthesis. That is, two-carbon fragments are removed sequentially from the carboxyl end of the acid after steps of dehydrogenation, hydration, and oxidation to form a beta-keto acid, which is split by thiolysis. The acetyl-CoA is then ultimately converted into adenosine triphosphate (ATP), CO2, and H2O using the citric acid cycle and the electron transport chain. Hence the citric acid cycle can start at acetyl-CoA when fat is being broken down for energy if there is little or no glucose available. The energy yield of the complete oxidation of the fatty acid palmitate is 106 ATP (a worked tally follows at the end of this passage).: 625–6 Unsaturated and odd-chain fatty acids require additional enzymatic steps for degradation. Nutrition and health Most of the fat found in food is in the form of triglycerides, cholesterol, and phospholipids. Some dietary fat is necessary to facilitate absorption of fat-soluble vitamins (A, D, E, and K) and carotenoids.: 903 Humans and other mammals have a dietary requirement for certain essential fatty acids, such as linoleic acid (an omega-6 fatty acid) and alpha-linolenic acid (an omega-3 fatty acid) because they cannot be synthesized from simple precursors in the diet.: 643 Both of these fatty acids are 18-carbon polyunsaturated fatty acids differing in the number and position of the double bonds. Most vegetable oils are rich in linoleic acid (safflower, sunflower, and corn oils).
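Returning to the beta-oxidation passage above: the quoted 106 ATP yield for palmitate can be reproduced with standard bookkeeping, assuming the textbook yields of roughly 2.5 ATP per NADH, 1.5 ATP per FADH2, and 10 ATP per acetyl-CoA through the citric acid cycle and electron transport chain, minus two ATP equivalents for activating the fatty acid (these per-carrier yields are standard assumptions, not figures stated in this article):

```python
def beta_oxidation_atp(n_carbons: int) -> float:
    """Approximate ATP yield from complete oxidation of a saturated
    even-chain fatty acid, using textbook per-carrier yields."""
    acetyl_coa = n_carbons // 2   # two-carbon fragments produced
    rounds = acetyl_coa - 1       # beta-oxidation cycles needed
    return (
        rounds * (1.5 + 2.5)      # 1 FADH2 + 1 NADH per cycle
        + acetyl_coa * 10         # citric acid cycle + electron transport chain
        - 2                       # activation (ATP -> AMP) costs 2 equivalents
    )

print(beta_oxidation_atp(16))  # palmitate (C16) -> 106.0
```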
Alpha-linolenic acid is found in the green leaves of plants and in some seeds, nuts, and legumes (in particular flax, rapeseed, walnut, and soy). Fish oils are particularly rich in the longer-chain omega-3 fatty acids eicosapentaenoic acid and docosahexaenoic acid.: 388 Many studies have shown positive health benefits associated with consumption of omega-3 fatty acids on infant development, cancer, cardiovascular diseases, and various mental illnesses (such as depression, attention-deficit hyperactivity disorder, and dementia). In contrast, it is now well established that consumption of trans fats, such as those present in partially hydrogenated vegetable oils, is a risk factor for cardiovascular disease. Fats that are otherwise beneficial can be turned into trans fats by improper cooking methods that overheat the lipids. Results from recent clinical trials have linked long-term variation in lipids, including LDL and triglycerides, with risk of cardiovascular disease, diabetes, or heart failure. Excessive lipid variability has been linked to oxidative stress and endothelial dysfunction. A few studies have suggested that total dietary fat intake is linked to an increased risk of obesity and diabetes. Others, including the Women's Health Initiative Dietary Modification Trial, an eight-year study of 49,000 women, the Nurses' Health Study, and the Health Professionals Follow-up Study, revealed no such links. None of these studies suggested any connection between percentage of calories from fat and risk of cancer, heart disease, or weight gain. The Nutrition Source, a website maintained by the Department of Nutrition at the T. H. Chan School of Public Health at Harvard University, summarizes the current evidence on the effect of dietary fat: "Detailed research—much of it done at Harvard—shows that the total amount of fat in the diet isn't really linked with weight or disease."
========================================
[SOURCE: https://en.wikipedia.org/wiki/Iron] | [TOKENS: 13928]
Iron Iron is a chemical element; it has symbol Fe (from Latin ferrum 'iron') and atomic number 26. It is a metal that belongs to the first transition series and group 8 of the periodic table. It is, by mass, the most common element on Earth, forming much of Earth's outer and inner core. It is the fourth most abundant element in the Earth's crust. In its metallic state at the surface, it was mainly deposited by meteorites. Extracting usable metal from iron ores requires kilns or furnaces capable of reaching 1,500 °C (2,730 °F), about 500 °C (900 °F) higher than that required to smelt copper. Humans started to master that process in Eurasia during the 2nd millennium BC, and the use of iron tools and weapons began to displace copper alloys – in some regions, only around 1200 BC. That event is considered the transition from the Bronze Age to the Iron Age. In the modern world, iron alloys, such as steel, stainless steel, cast iron and special steels, are by far the most common industrial metals, due to their mechanical properties and low cost. The iron and steel industry is thus very important economically, and iron is the cheapest metal, with a price of a few dollars per kilogram or pound. Pristine and smooth pure iron surfaces are a mirror-like silvery-gray. Iron reacts readily with oxygen and water to produce brown-to-black hydrated iron oxides, commonly known as rust. Unlike the oxides of some other metals that form passivating layers, rust occupies more volume than the metal and thus flakes off, exposing fresh surfaces for further corrosion. Chemically, the most common oxidation states of iron are iron(II) and iron(III). Iron shares many properties with other transition metals, including the other group 8 elements, ruthenium and osmium. Iron forms compounds in a wide range of oxidation states, −2 to +7. Iron also forms many coordination complexes; some of them, such as ferrocene, ferrioxalate, and Prussian blue, have substantial industrial, medical, or research applications. The body of an adult human contains about 4 grams (0.005% body weight) of iron, mostly in hemoglobin and myoglobin. These two proteins play essential roles in oxygen transport by blood and oxygen storage in muscles. To maintain the necessary levels, human iron metabolism requires a minimum of iron in the diet. Iron is also the metal at the active site of many important redox enzymes dealing with cellular respiration and oxidation and reduction in plants and animals. Characteristics Four different arrangements of iron atoms in solid iron are known; these allotropes are conventionally denoted α, γ, δ, and ε, of which the first three occur at ordinary pressures. As molten iron cools past its freezing point of 1538 °C, it crystallizes into its δ allotrope, which has a body-centered cubic (bcc) crystal structure. As it cools further to 1394 °C, it changes to its γ-iron allotrope, a face-centered cubic (fcc) crystal structure, or austenite. At 912 °C and below, the crystal structure again becomes the bcc α-iron allotrope. At high pressures, above approximately 10 GPa and temperatures of a few hundred kelvin or less, α-iron changes into a hexagonal close-packed (hcp) structure, which is also known as ε-iron. The higher-temperature γ-phase also changes into ε-iron, but does so at higher pressure. The physical properties of iron at very high pressures and temperatures have been studied extensively because of their relevance to theories about the cores of the Earth and other planets.
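The ambient-pressure phase sequence just described (α below 912 °C, γ between 912 and 1394 °C, δ from 1394 °C up to the 1538 °C melting point) maps directly onto a lookup function; a minimal sketch using only the transition temperatures quoted above:

```python
def iron_allotrope(temp_c: float) -> str:
    """Stable allotrope of pure iron at ambient pressure, using the
    transition temperatures quoted in the text (912, 1394, 1538 °C)."""
    if temp_c >= 1538:
        return "liquid"
    if temp_c >= 1394:
        return "delta (bcc)"
    if temp_c >= 912:
        return "gamma (fcc, austenite)"
    return "alpha (bcc)"

for t in (25, 1000, 1450, 1600):
    print(f"{t} °C -> {iron_allotrope(t)}")
```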
The Earth's inner core is generally presumed to consist of an iron-nickel alloy with ε (or β) structure. Some controversial experimental evidence exists for a stable β phase at pressures above 50 GPa and temperatures of at least 1500 K. It is supposed to have an orthorhombic or a double hcp structure. (Confusingly, the term "β-iron" is sometimes also used to refer to α-iron above its Curie point, when it changes from being ferromagnetic to paramagnetic, even though its crystal structure has not changed.) The melting and boiling points of iron, along with its enthalpy of atomization, are lower than those of the earlier 3d elements from scandium to chromium, showing the lessened contribution of the 3d electrons to metallic bonding as they are attracted more and more into the inert core by the nucleus; however, they are higher than the values for the previous element manganese because that element has a half-filled 3d sub-shell and consequently its d-electrons are not easily delocalized. This same trend appears for ruthenium but not osmium. The melting point of iron is experimentally well defined for pressures less than 50 GPa. For greater pressures, published data (as of 2007) still varies by tens of gigapascals and over a thousand kelvin. Below its Curie point of 770 °C (1,420 °F; 1,040 K), α-iron changes from paramagnetic to ferromagnetic: the spins of the two unpaired electrons in each atom generally align with the spins of its neighbors, creating an overall magnetic field. This happens because the orbitals of those two electrons (dz2 and dx2 − y2) do not point toward neighboring atoms in the lattice, and therefore are not involved in metallic bonding. In the absence of an external magnetic field, the atoms get spontaneously partitioned into magnetic domains, about 10 micrometers across, such that the atoms in each domain have parallel spins, but some domains have other orientations. Thus a macroscopic piece of iron will have a nearly zero overall magnetic field. Application of an external magnetic field causes the domains that are magnetized in the same general direction to grow at the expense of adjacent ones that point in other directions, reinforcing the external field. This effect is exploited in devices that need to channel magnetic fields to fulfill their design function, such as electrical transformers, magnetic recording heads, and electric motors. Impurities, lattice defects, or grain and particle boundaries can "pin" the domains in the new positions, so that the effect persists even after the external field is removed – thus turning the iron object into a (permanent) magnet. Similar behavior is exhibited by some iron compounds, such as the ferrites, including the mineral magnetite, a crystalline form of the mixed iron(II,III) oxide Fe3O4 (although the atomic-scale mechanism, ferrimagnetism, is somewhat different). Pieces of magnetite with natural permanent magnetization (lodestones) provided the earliest compasses for navigation. Particles of magnetite were extensively used in magnetic recording media such as core memories, magnetic tapes, floppies, and disks, until they were replaced by cobalt-based materials. Iron has four stable isotopes: 54Fe (5.845% of natural iron), 56Fe (91.754%), 57Fe (2.119%) and 58Fe (0.282%). Twenty-four artificial isotopes have also been created. Of these stable isotopes, only 57Fe has a nuclear spin (−1⁄2).
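The natural abundances just quoted, combined with the isotopic masses (standard nuclide data, added here for illustration and not given in this article), recover iron's standard atomic weight as an abundance-weighted average:

```python
# (abundance, isotopic mass in u); abundances are from the text,
# isotopic masses are standard nuclide data (assumption, not from the text).
ISOTOPES = {
    "54Fe": (0.05845, 53.9396),
    "56Fe": (0.91754, 55.9349),
    "57Fe": (0.02119, 56.9354),
    "58Fe": (0.00282, 57.9333),
}

atomic_weight = sum(ab * mass for ab, mass in ISOTOPES.values())
print(f"{atomic_weight:.3f} u")  # -> 55.845, iron's standard atomic weight
```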
The nuclide 54Fe theoretically can undergo double electron capture to 54Cr, but the process has never been observed and only a lower limit on the half-life of 4.4×1020 years has been established. 60Fe is an extinct radionuclide of long half-life (2.6 million years). It is not found on Earth, but its ultimate decay product is its granddaughter, the stable nuclide 60Ni. Much of the past work on isotopic composition of iron has focused on the nucleosynthesis of 60Fe through studies of meteorites and ore formation. In the last decade, advances in mass spectrometry have allowed the detection and quantification of minute, naturally occurring variations in the ratios of the stable isotopes of iron. Much of this work is driven by the Earth and planetary science communities, although applications to biological and industrial systems are emerging. In phases of the meteorites Semarkona and Chervony Kut, a correlation between the concentration of 60Ni, the granddaughter of 60Fe, and the abundance of the stable iron isotopes provided evidence for the existence of 60Fe at the time of formation of the Solar System. Possibly the energy released by the decay of 60Fe, along with that released by 26Al, contributed to the remelting and differentiation of asteroids after their formation 4.6 billion years ago. The abundance of 60Ni present in extraterrestrial material may bring further insight into the origin and early history of the Solar System. The most abundant iron isotope 56Fe is of particular interest to nuclear scientists because it represents the most common endpoint of nucleosynthesis. Since 56Ni (14 alpha particles) is easily produced from lighter nuclei in the alpha process in nuclear reactions in supernovae (see silicon burning process), it is the endpoint of fusion chains inside extremely massive stars. Although adding more alpha particles is possible, the sequence effectively ends at 56Ni because conditions in stellar interiors cause the competition between photodisintegration and the alpha process to favor photodisintegration around 56Ni. This 56Ni, which has a half-life of about 6 days, is created in quantity in these stars, but soon decays by two successive positron emissions within supernova decay products in the supernova remnant gas cloud, first to radioactive 56Co, and then to stable 56Fe. As such, iron is the most abundant element in the core of red giants, and is the most abundant metal in iron meteorites and in the dense metal cores of planets such as Earth. It is also very common in the universe, relative to other stable metals of approximately the same atomic weight. Iron is the sixth most abundant element in the universe, and the most common refractory element. Although a further tiny energy gain could be extracted by synthesizing 62Ni, which has a marginally higher binding energy than 56Fe, conditions in stars are unsuitable for this process. Element production in supernovas greatly favors iron over nickel, and in any case, 56Fe still has a lower mass per nucleon than 62Ni due to its higher fraction of lighter protons. Hence, elements heavier than iron require a supernova for their formation, involving rapid neutron capture by starting 56Fe nuclei. In the far future of the universe, assuming that proton decay does not occur, cold fusion occurring via quantum tunnelling would cause the light nuclei in ordinary matter to fuse into 56Fe nuclei.
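The 56Ni → 56Co → 56Fe chain mentioned above follows simple exponential decay, N(t)/N0 = 2^(−t/t½). A sketch using the roughly 6-day 56Ni half-life given in the text and the standard ~77-day half-life of 56Co (the latter is an outside value, not stated in this article); note this treats each nuclide independently rather than solving the coupled Bateman equations for the full chain:

```python
def surviving_fraction(days: float, half_life_days: float) -> float:
    """Fraction of nuclei remaining after `days`: N/N0 = 2**(-t / t_half)."""
    return 2 ** (-days / half_life_days)

NI56_HALF_LIFE = 6.0   # days, as quoted in the text
CO56_HALF_LIFE = 77.0  # days, standard value (not stated in this article)

for day in (6, 30, 77):
    ni = surviving_fraction(day, NI56_HALF_LIFE)
    co = surviving_fraction(day, CO56_HALF_LIFE)
    print(f"day {day:>2}: 56Ni remaining {ni:.3f}, 56Co remaining {co:.3f}")
```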
Fission and alpha-particle emission would then make heavy nuclei decay into iron, converting all stellar-mass objects to cold spheres of pure iron. Origin and occurrence in nature Iron's abundance in rocky planets like Earth is due to its abundant production during the runaway fusion and explosion of type Ia supernovae, which scatters the iron into space. Metallic or native iron is rarely found on the surface of the Earth because it tends to oxidize. However, both the Earth's inner and outer core, which together account for 35% of the mass of the whole Earth, are believed to consist largely of an iron alloy, possibly with nickel. Electric currents in the liquid outer core are believed to be the origin of the Earth's magnetic field. The other terrestrial planets (Mercury, Venus, and Mars) as well as the Moon are believed to have a metallic core consisting mostly of iron. The M-type asteroids are also believed to be partly or mostly made of metallic iron alloy. The rare iron meteorites are the main form of natural metallic iron on the Earth's surface. Items made of cold-worked meteoritic iron have been found in various archaeological sites dating from a time when iron smelting had not yet been developed, and the Inuit in Greenland have been reported to use iron from the Cape York meteorite for tools and hunting weapons. About 1 in 20 meteorites consists of the unique iron-nickel minerals taenite (35–80% iron) and kamacite (90–95% iron). Native iron is also rarely found in basalts that have formed from magmas that have come into contact with carbon-rich sedimentary rocks, which have reduced the oxygen fugacity sufficiently for iron to crystallize. This is known as telluric iron and is described from a few localities, such as Disko Island in West Greenland, Yakutia in Russia and Bühl in Germany. Ferropericlase (Mg,Fe)O, a solid solution of periclase (MgO) and wüstite (FeO), makes up about 20% of the volume of the lower mantle of the Earth, which makes it the second most abundant mineral phase in that region after silicate perovskite (Mg,Fe)SiO3; it also is the major host for iron in the lower mantle. At the bottom of the transition zone of the mantle, the reaction γ-(Mg,Fe)2[SiO4] ↔ (Mg,Fe)[SiO3] + (Mg,Fe)O transforms γ-olivine into a mixture of silicate perovskite and ferropericlase and vice versa. In the literature, this mineral phase of the lower mantle is also often called magnesiowüstite. Silicate perovskite may form up to 93% of the lower mantle, and the magnesium iron form, (Mg,Fe)SiO3, is considered to be the most abundant mineral in the Earth, making up 38% of its volume. While iron is the most abundant element on Earth, most of this iron is concentrated in the inner and outer cores. The fraction of iron that is in Earth's crust only amounts to about 5% of the overall mass of the crust and is thus only the fourth most abundant element in that layer (after oxygen, silicon, and aluminium). Most of the iron in the crust is combined with various other elements to form many iron minerals. An important class is the iron oxide minerals such as hematite (Fe2O3), magnetite (Fe3O4), and siderite (FeCO3), which are the major ores of iron. Many igneous rocks also contain the sulfide minerals pyrrhotite and pentlandite. During weathering, iron tends to leach from sulfide deposits as the sulfate and from silicate deposits as the bicarbonate. Both of these are oxidized in aqueous solution and precipitate at even mildly elevated pH as iron(III) oxide.
Large deposits of iron are banded iron formations, a type of rock consisting of repeated thin layers of iron oxides alternating with bands of iron-poor shale and chert. The banded iron formations were laid down in the time between 3,700 million years ago and 1,800 million years ago. Materials containing finely ground iron(III) oxides or oxide-hydroxides, such as ochre, have been used as yellow, red, and brown pigments since prehistoric times. They contribute as well to the color of various rocks and clays, including entire geological formations like the Painted Hills in Oregon and the Buntsandstein ("colored sandstone", British Bunter). Through Eisensandstein (a Jurassic 'iron sandstone', e.g. from Donzdorf in Germany) and Bath stone in the UK, iron compounds are responsible for the yellowish color of many historical buildings and sculptures. The proverbial red color of the surface of Mars is derived from an iron oxide-rich regolith. Significant amounts of iron occur in the iron sulfide mineral pyrite (FeS2), but it is difficult to extract iron from it and it is therefore not exploited. In fact, iron is so common that production generally focuses only on ores with very high quantities of it. According to the International Resource Panel's Metal Stocks in Society report, the global stock of iron in use in society is 2,200 kg per capita. More-developed countries differ in this respect from less-developed countries (7,000–14,000 vs 2,000 kg per capita). Ocean science has demonstrated the role of iron in the ancient seas for both marine biota and climate.
Ruthenium exhibits an aqueous cationic chemistry in its low oxidation states similar to that of iron, but osmium does not, favoring high oxidation states in which it forms anionic complexes. In the second half of the 3d transition series, vertical similarities down the groups compete with the horizontal similarities of iron with its neighbors cobalt and nickel in the periodic table, which are also ferromagnetic at room temperature and share similar chemistry. As such, iron, cobalt, and nickel are sometimes grouped together as the iron triad. Unlike many other metals, iron does not form amalgams with mercury. As a result, mercury is traded in standardized 76 pound flasks (34 kg) made of iron. Iron is by far the most reactive element in its group; it is pyrophoric when finely divided and dissolves easily in dilute acids, giving Fe2+. However, it does not react with concentrated nitric acid and other oxidizing acids due to the formation of an impervious oxide layer, which can nevertheless react with hydrochloric acid. High-purity iron, called electrolytic iron, is considered to be resistant to rust, due to its oxide layer. Iron forms various oxide and hydroxide compounds; the most common are iron(II,III) oxide (Fe3O4), and iron(III) oxide (Fe2O3). Iron(II) oxide also exists, though it is unstable at room temperature. Despite their names, they are actually all non-stoichiometric compounds whose compositions may vary. These oxides are the principal ores for the production of iron (see bloomery and blast furnace). They are also used in the production of ferrites, useful magnetic storage media in computers, and pigments. The best known sulfide is iron pyrite (FeS2), also known as fool's gold owing to its golden luster. It is not an iron(IV) compound, but is actually an iron(II) polysulfide containing Fe2+ and S2−2 ions in a distorted sodium chloride structure. The binary ferrous and ferric halides are well-known. The ferrous halides typically arise from treating iron metal with the corresponding hydrohalic acid to give the corresponding hydrated salts. Iron reacts with fluorine, chlorine, and bromine to give the corresponding ferric halides, ferric chloride being the most common. Ferric iodide is an exception, being thermodynamically unstable due to the oxidizing power of Fe3+ and the high reducing power of I−:
2 Fe3+ + 2 I− → 2 Fe2+ + I2
Ferric iodide, a black solid, is not stable in ordinary conditions, but can be prepared through the reaction of iron pentacarbonyl with iodine and carbon monoxide in the presence of hexane and light at −20 °C, with oxygen and water excluded. Complexes of ferric iodide with some soft bases are known to be stable compounds. The standard reduction potentials in acidic aqueous solution for some common iron ions are:
Fe2+ + 2 e− ⇌ Fe(s), E° = −0.447 V
Fe3+ + e− ⇌ Fe2+, E° = +0.771 V
FeO42− + 8 H+ + 3 e− ⇌ Fe3+ + 4 H2O, E° ≈ +2.20 V
The red-purple tetrahedral ferrate(VI) anion is such a strong oxidizing agent that it oxidizes ammonia to nitrogen (N2) and water to oxygen. The pale-violet hexaquo complex [Fe(H2O)6]3+ is an acid such that above pH 0 it is fully hydrolyzed, losing protons stepwise:
[Fe(H2O)6]3+ ⇌ [Fe(H2O)5(OH)]2+ + H+
[Fe(H2O)5(OH)]2+ ⇌ [Fe(H2O)4(OH)2]+ + H+
As pH rises above 0 the above yellow hydrolyzed species form, and as it rises above 2–3, reddish-brown hydrous iron(III) oxide precipitates out of solution. Although Fe3+ has a d5 configuration, its absorption spectrum is not like that of Mn2+ with its weak, spin-forbidden d–d bands, because Fe3+ has higher positive charge and is more polarizing, lowering the energy of its ligand-to-metal charge transfer absorptions.
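The ferrate oxidation-state assignments earlier in this section follow from simple electron bookkeeping: counting each oxide ligand as −2, the iron center of an FeO4 anion with overall charge q must carry q + 8. A minimal sketch of that rule:

```python
def fe_oxidation_state(n_oxygens: int, overall_charge: int) -> int:
    """Oxidation state of Fe in an [FeOn]^q anion,
    counting each oxide ligand as -2."""
    return overall_charge + 2 * n_oxygens

print(fe_oxidation_state(4, -2))  # FeO4 2-, ferrate(VI) as in K2FeO4 -> 6
print(fe_oxidation_state(4, -1))  # [FeO4]-, detected at 4 K          -> 7
```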
Thus, all the above complexes are rather strongly colored, with the single exception of the hexaquo ion – and even that has a spectrum dominated by charge transfer in the near ultraviolet region. On the other hand, the pale green iron(II) hexaquo ion [Fe(H2O)6]2+ does not undergo appreciable hydrolysis. Carbon dioxide is not evolved when carbonate anions are added, which instead results in white iron(II) carbonate being precipitated out. In excess carbon dioxide this forms the slightly soluble bicarbonate, which occurs commonly in groundwater, but it oxidises quickly in air to form iron(III) oxide that accounts for the brown deposits present in a sizeable number of streams. Due to its electronic structure, iron has a very large coordination and organometallic chemistry. Many coordination compounds of iron are known. A typical six-coordinate anion is hexachloroferrate(III), [FeCl6]3−, found in the mixed salt tetrakis(methylammonium) hexachloroferrate(III) chloride. Complexes with multiple bidentate ligands have geometric isomers. For example, the trans-chlorohydridobis(bis-1,2-(diphenylphosphino)ethane)iron(II) complex is used as a starting material for compounds with the Fe(dppe)2 moiety. The ferrioxalate ion with three oxalate ligands displays helical chirality with its two non-superposable geometries labelled Λ (lambda) for the left-handed screw axis and Δ (delta) for the right-handed screw axis, in line with IUPAC conventions. Potassium ferrioxalate is used in chemical actinometry and along with its sodium salt undergoes photoreduction applied in old-style photographic processes. The dihydrate of iron(II) oxalate has a polymeric structure with co-planar oxalate ions bridging between iron centres, with the water of crystallisation forming the caps of each octahedron. Iron(III) complexes are quite similar to those of chromium(III) with the exception of iron(III)'s preference for O-donor instead of N-donor ligands; the N-donor complexes tend to be rather more unstable than their iron(II) counterparts and often dissociate in water. Many Fe–O complexes show intense colors and are used as tests for phenols or enols. For example, in the ferric chloride test, used to determine the presence of phenols, iron(III) chloride reacts with a phenol to form a deep violet complex. Among the halide and pseudohalide complexes, fluoro complexes of iron(III) are the most stable, with the colorless [FeF5(H2O)]2− being the most stable in aqueous solution. Chloro complexes are less stable and favor tetrahedral coordination as in [FeCl4]−; [FeBr4]− and [FeI4]− are reduced easily to iron(II). Thiocyanate is a common test for the presence of iron(III) as it forms the blood-red [Fe(SCN)(H2O)5]2+. Like manganese(II), most iron(III) complexes are high-spin, the exceptions being those with ligands that are high in the spectrochemical series such as cyanide. An example of a low-spin iron(III) complex is [Fe(CN)6]3−. Iron shows a great variety of electronic spin states, including every possible spin quantum number value for a d-block element from 0 (diamagnetic) to 5⁄2 (5 unpaired electrons). This value is always half the number of unpaired electrons. Complexes with zero to two unpaired electrons are considered low-spin and those with four or five are considered high-spin. Iron(II) complexes are less stable than iron(III) complexes but the preference for O-donor ligands is less marked, so that for example [Fe(NH3)6]2+ is known while [Fe(NH3)6]3+ is not.
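The spin bookkeeping above (S equals half the number of unpaired electrons) pairs naturally with the spin-only estimate of the magnetic moment, μ = √(n(n+2)) Bohr magnetons for n unpaired electrons; this formula is a standard approximation that neglects orbital contributions and is not stated in this article:

```python
from math import sqrt

def spin_quantum_number(n_unpaired: int) -> float:
    """S is half the number of unpaired electrons, as the text notes."""
    return n_unpaired / 2

def spin_only_moment(n_unpaired: int) -> float:
    """Spin-only magnetic moment in Bohr magnetons: sqrt(n(n+2)).
    Standard approximation; neglects orbital contributions."""
    return sqrt(n_unpaired * (n_unpaired + 2))

# High-spin d5 iron(III): 5 unpaired electrons; low-spin [Fe(CN)6]3-: 1.
for n in (5, 1):
    print(f"n={n}: S={spin_quantum_number(n)}, mu={spin_only_moment(n):.2f} BM")
# n=5: S=2.5, mu=5.92 BM; n=1: S=0.5, mu=1.73 BM
```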
They have a tendency to be oxidized to iron(III) but this can be moderated by low pH and the specific ligands used. Organoiron chemistry is the study of organometallic compounds of iron, where carbon atoms are covalently bound to the metal atom. They are many and varied, including cyanide complexes, carbonyl complexes, sandwich and half-sandwich compounds. Prussian blue or "ferric ferrocyanide", Fe4[Fe(CN)6]3, is an old and well-known iron-cyanide complex, extensively used as a pigment and in several other applications. Its formation can be used as a simple wet chemistry test to distinguish between aqueous solutions of Fe2+ and Fe3+ as they react (respectively) with potassium ferricyanide and potassium ferrocyanide to form Prussian blue. Another old example of an organoiron compound is iron pentacarbonyl, Fe(CO)5, in which a neutral iron atom is bound to the carbon atoms of five carbon monoxide molecules. The compound can be used to make carbonyl iron powder, a highly reactive form of metallic iron. Thermolysis of iron pentacarbonyl gives triiron dodecacarbonyl, Fe3(CO)12, a complex with a cluster of three iron atoms at its core. Collman's reagent, disodium tetracarbonylferrate, is a useful reagent for organic chemistry; it contains iron in the −2 oxidation state. Cyclopentadienyliron dicarbonyl dimer contains iron in the rare +1 oxidation state. A landmark in this field was the discovery in 1951 of the remarkably stable sandwich compound ferrocene, Fe(C5H5)2, by Pauson and Kealy and independently by Miller and colleagues, whose surprising molecular structure was determined only a year later by Woodward and Wilkinson and Fischer. Ferrocene is still one of the most important tools and models in this class. Iron-centered organometallic species are used as catalysts. The Knölker complex, for example, is a transfer hydrogenation catalyst for ketones. The iron compounds produced on the largest scale in industry are iron(II) sulfate (FeSO4·7H2O) and iron(III) chloride (FeCl3). The former is one of the most readily available sources of iron(II), but is less stable to aerial oxidation than Mohr's salt ((NH4)2Fe(SO4)2·6H2O). Iron(II) compounds tend to be oxidized to iron(III) compounds in the air. History Iron is one of the elements undoubtedly known to the ancient world. It has been worked, or wrought, for millennia. However, iron artefacts of great age are much rarer than objects made of gold or silver due to the ease with which iron corrodes. The technology developed slowly, and even after the discovery of smelting it took many centuries for iron to replace bronze as the metal of choice for tools and weapons. Beads made from meteoric iron in 3500 BC or earlier were found in Gerzeh, Egypt by G. A. Wainwright. The beads contain 7.5% nickel, which is a signature of meteoric origin since iron found in the Earth's crust generally has only minuscule nickel impurities. Meteoric iron was highly regarded due to its origin in the heavens and was often used to forge weapons and tools. For example, a dagger made of meteoric iron was found in the tomb of Tutankhamun, containing similar proportions of iron, cobalt, and nickel to a meteorite discovered in the area, deposited by an ancient meteor shower. Items that were likely made of iron by Egyptians date from 3000 to 2500 BC. Meteoritic iron is comparatively soft and ductile and easily cold forged, but may become brittle when heated because of the nickel content.
The first iron production started in the Middle Bronze Age, but it took several centuries before iron displaced bronze. Samples of smelted iron from Asmar, Mesopotamia and Tall Chagar Bazaar in northern Syria were made sometime between 3000 and 2700 BC. The Hittites established an empire in north-central Anatolia around 1600 BC. They appear to have been the first to understand the production of iron from its ores and to regard it highly in their society. The Hittites began to smelt iron between 1500 and 1200 BC, and the practice spread to the rest of the Near East after their empire fell in 1180 BC. The subsequent period is called the Iron Age. Artifacts of smelted iron are found in India dating from 1800 to 1200 BC, and in the Levant from about 1500 BC (suggesting smelting in Anatolia or the Caucasus). Alleged references to iron in the Indian Vedas (compare the history of metallurgy in South Asia) have been used both to claim very early usage of iron in India and to date the texts themselves. The Rigvedic term ayas (metal) refers to copper, while iron, called śyāma ayas (literally "black copper"), is first mentioned in the post-Rigvedic Atharvaveda. Some archaeological evidence suggests iron was smelted in Zimbabwe and southeast Africa as early as the eighth century BC. Iron working was introduced to Greece in the late 11th century BC, from where it spread quickly throughout Europe. The spread of ironworking in Central and Western Europe is associated with Celtic expansion. According to Pliny the Elder, iron use was common in the Roman era. In the lands of what is now China, iron appears approximately 700–500 BC. Iron smelting may have been introduced into China through Central Asia. The earliest evidence of the use of a blast furnace in China dates to the 1st century AD, and cupola furnaces were used as early as the Warring States period (403–221 BC). Usage of the blast and cupola furnace remained widespread during the Tang and Song dynasties. During the Industrial Revolution in Britain, Henry Cort began refining iron from pig iron to wrought iron (or bar iron) using innovative production systems. In 1783 he patented the puddling process for refining pig iron. It was later improved by others, including Joseph Hall. Cast iron was first produced in China during the 5th century BC, but was scarcely made in Europe until the medieval period. The earliest cast iron artifacts were discovered by archaeologists in what is now modern Luhe County, Jiangsu in China. Cast iron was used in ancient China for warfare, agriculture, and architecture. During the medieval period, means were found in Europe of producing wrought iron from cast iron (in this context known as pig iron) using finery forges. For all these processes, charcoal was required as fuel. Medieval blast furnaces were about 10 feet (3.0 m) tall and made of fireproof brick; forced air was usually provided by hand-operated bellows. Modern blast furnaces have grown much bigger, with hearths fourteen meters in diameter that allow them to produce thousands of tons of iron each day, but they essentially operate in much the same way as they did during medieval times. In 1709, Abraham Darby I established a blast furnace fired with coke rather than charcoal to produce cast iron. The ensuing availability of inexpensive iron was one of the factors leading to the Industrial Revolution. Toward the end of the 18th century, cast iron began to replace wrought iron for certain purposes, because it was cheaper.
Carbon content in iron was not implicated as the reason for the differences in properties of wrought iron, cast iron, and steel until the 18th century. Since iron was becoming cheaper and more plentiful, it also became a major structural material following the building of the innovative first iron bridge in 1778. This bridge still stands today as a monument to the role iron played in the Industrial Revolution. Following this, iron was used in rails, boats, ships, aqueducts, and buildings, as well as in iron cylinders in steam engines. Railways have been central to the formation of modernity and ideas of progress, and various languages refer to railways as the "iron road" (e.g. French chemin de fer, German Eisenbahn, Turkish demiryolu, Russian железная дорога, Chinese, Japanese, and Korean 鐵道, Vietnamese đường sắt). Steel (with a smaller carbon content than pig iron but more than wrought iron) was first produced in antiquity by using a bloomery. Blacksmiths in Luristan in western Persia were making good steel by 1000 BC. Improved versions, Wootz steel in India and Damascus steel, were developed around 300 BC and AD 500, respectively. These methods were specialized, and so steel did not become a major commodity until the 1850s. New methods of producing it by carburizing bars of iron in the cementation process were devised in the 17th century. In the Industrial Revolution, new methods of producing bar iron without charcoal were devised and these were later applied to produce steel. In the late 1850s, Henry Bessemer invented a new steelmaking process, involving blowing air through molten pig iron, to produce mild steel. This made steel much more economical, and as a result wrought iron ceased to be produced in large quantities. In 1774, Antoine Lavoisier used the reaction of steam with metallic iron inside an incandescent iron tube to produce hydrogen (3 Fe + 4 H2O → Fe3O4 + 4 H2) in his experiments leading to the demonstration of the conservation of mass, which was instrumental in changing chemistry from a qualitative science to a quantitative one. Symbolic role Iron plays a certain role in mythology and has found various usage as a metaphor and in folklore. The Greek poet Hesiod's Works and Days (lines 109–201) lists different ages of man named after metals like gold, silver, bronze and iron to account for successive ages of humanity. The Iron Age was closely related to Rome, and in Ovid's Metamorphoses: "The Virtues, in despair, quit the earth; and the depravity of man becomes universal and complete. Hard steel succeeded then." — Ovid, Metamorphoses, Book I, Iron Age, line 160 ff. An example of the importance of iron's symbolic role may be found in the German Campaign of 1813. Frederick William III then commissioned the first Iron Cross as a military decoration. Berlin iron jewellery reached its peak production between 1813 and 1815, when the Prussian royal family urged citizens to donate gold and silver jewellery for military funding. The inscription Ich gab Gold für Eisen (I gave gold for iron) was also used in later war efforts. Production of metallic iron For a few limited purposes, pure iron is produced in the laboratory in small quantities by reducing the pure oxide or hydroxide with hydrogen, or by forming iron pentacarbonyl and heating it to 250 °C so that it decomposes to form pure iron powder. Another method is electrolysis of ferrous chloride onto an iron cathode. The industrial production of iron or steel consists of two main stages.
In the first stage, iron ore is reduced with coke in a blast furnace, and the molten metal is separated from gross impurities such as silicate minerals. This stage yields an alloy – pig iron – that contains relatively large amounts of carbon. In the second stage, the amount of carbon in the pig iron is lowered by oxidation to yield wrought iron, steel, or cast iron. Other metals can be added at this stage to form alloy steels. The blast furnace is loaded with iron ores, usually hematite Fe2O3 or magnetite Fe3O4, along with coke (coal that has been separately baked to remove volatile components) and flux (limestone or dolomite). "Blasts" of air pre-heated to 900 °C (sometimes with oxygen enrichment) are blown through the mixture, in sufficient amount to turn the carbon into carbon monoxide: 2 C + O2 → 2 CO. This reaction raises the temperature to about 2000 °C. The carbon monoxide reduces the iron ore to metallic iron: Fe2O3 + 3 CO → 2 Fe + 3 CO2. Some iron in the high-temperature lower region of the furnace reacts directly with the coke: Fe2O3 + 3 C → 2 Fe + 3 CO. The flux removes siliceous minerals in the ore, which would otherwise clog the furnace: the heat of the furnace decomposes the carbonates to calcium oxide (CaCO3 → CaO + CO2), which reacts with any excess silica to form a slag composed of calcium silicate (CaO + SiO2 → CaSiO3) or other products. At the furnace's temperature, the metal and the slag are both molten. They collect at the bottom as two immiscible liquid layers (with the slag on top) that are then easily separated. The slag can be used as a material in road construction or to improve mineral-poor soils for agriculture. Because every tonne of iron reduced this way releases carbon dioxide, steelmaking remains one of the largest industrial contributors of CO2 emissions in the world. The pig iron produced by the blast furnace process contains up to 4–5% carbon (by mass), with small amounts of other impurities like sulfur, silicon, phosphorus, and manganese. This high level of carbon makes it relatively weak and brittle. Reducing the amount of carbon to 0.002–2.1% produces steel, which may be up to 1000 times harder than pure iron. A great variety of steel articles can then be made by cold working, hot rolling, forging, machining, etc. Removing the impurities from pig iron, but leaving 2–4% carbon, results in cast iron, which is cast by foundries into articles such as stoves, pipes, radiators, lamp-posts, and rails. Steel products often undergo various heat treatments after they are forged to shape. Annealing consists of heating them to 700–800 °C for several hours followed by gradual cooling; it makes the steel softer and more workable. Owing to environmental concerns, alternative methods of processing iron have been developed. "Direct iron reduction" reduces iron ore to a ferrous lump called "sponge" iron or "direct" iron that is suitable for steelmaking. Two main reactions comprise the direct reduction process. Natural gas is partially oxidized (with heat and a catalyst): 2 CH4 + O2 → 2 CO + 4 H2. Iron ore is then treated with these gases in a furnace, producing solid sponge iron: Fe2O3 + CO + 2 H2 → 2 Fe + CO2 + 2 H2O. Silica is removed by adding a limestone flux as described above. Ignition of a mixture of aluminium powder and iron oxide yields metallic iron via the thermite reaction: 2 Al + Fe2O3 → 2 Fe + Al2O3. Alternatively, pig iron may be made into steel (with up to about 2% carbon) or wrought iron (commercially pure iron). Various processes have been used for this, including finery forges, puddling furnaces, Bessemer converters, open hearth furnaces, basic oxygen furnaces, and electric arc furnaces. In all cases, the objective is to oxidize some or all of the carbon, together with other impurities.
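To make the reduction arithmetic above concrete, here is a minimal stoichiometry sketch (illustrative only: the function name is invented, the ore is assumed to be pure hematite, and the molar masses are rounded) for the overall reaction Fe2O3 + 3 CO → 2 Fe + 3 CO2:

```python
# Stoichiometry sketch for the overall blast-furnace reduction:
#   Fe2O3 + 3 CO -> 2 Fe + 3 CO2
# Assumes a pure hematite feed; names are illustrative, not from real smelting software.

M_FE = 55.845                    # molar mass of Fe (g/mol)
M_O = 15.999                     # molar mass of O (g/mol)
M_C = 12.011                     # molar mass of C (g/mol)
M_FE2O3 = 2 * M_FE + 3 * M_O     # ~159.7 g/mol
M_CO2 = M_C + 2 * M_O            # ~44.0 g/mol

def reduce_hematite(ore_kg: float) -> tuple[float, float]:
    """Return (iron_kg, co2_kg) for complete reduction of pure Fe2O3."""
    mol_ore = ore_kg * 1000.0 / M_FE2O3
    iron_kg = mol_ore * 2 * M_FE / 1000.0    # 2 mol Fe per mol Fe2O3
    co2_kg = mol_ore * 3 * M_CO2 / 1000.0    # 3 mol CO2 per mol Fe2O3
    return iron_kg, co2_kg

fe_kg, co2_kg = reduce_hematite(1000.0)      # one tonne of pure hematite
print(f"Fe: {fe_kg:.0f} kg, CO2: {co2_kg:.0f} kg")  # ~699 kg Fe, ~827 kg CO2
```

The roughly 0.8 tonne of CO2 per tonne of ore, before even counting the coke burned for heat, is one way to see why steelmaking is such a large CO2 emitter.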
Conversely, other metals may be added to make alloy steels. Molten oxide electrolysis (MOE) uses electrolysis of molten iron oxide to yield metallic iron. It has been studied in laboratory-scale experiments and is proposed as a method for industrial iron production with no direct emissions of carbon dioxide. It uses a liquid iron cathode, an anode formed from an alloy of chromium, aluminium and iron, and an electrolyte consisting of a mixture of molten metal oxides into which iron ore is dissolved. The current keeps the electrolyte molten and reduces the iron oxide. Oxygen gas is produced in addition to liquid iron. The only carbon dioxide emissions come from any fossil fuel-generated electricity used to heat and reduce the metal. Applications Iron is the most widely used of all the metals, accounting for over 90% of worldwide metal production. Its low cost and high strength often make it the material of choice to withstand stress or transmit forces, such as in the construction of machinery and machine tools, rails, automobiles, ship hulls, concrete reinforcing bars, and the load-carrying framework of buildings. Since pure iron is quite soft, it is most commonly combined with alloying elements to make steel. The mechanical properties of iron and its alloys are extremely relevant to their structural applications. Those properties can be evaluated in various ways, including the Brinell test, the Rockwell test, and the Vickers hardness test. The properties of pure iron are often used to calibrate measurements or to compare tests. However, the mechanical properties of iron are significantly affected by the sample's purity: pure, single crystals of iron are actually softer than aluminium, and the purest industrially produced iron (99.99%) has a hardness of 20–30 Brinell. Very pure iron (99.9%–99.999%), called electrolytic iron, is industrially produced by electrolytic refining. An increase in the carbon content causes a significant increase in the hardness and tensile strength of iron. A maximum hardness of 65 Rc is achieved with a 0.6% carbon content, although the alloy has low tensile strength. Because of its softness, iron is much easier to work with than its heavier congeners ruthenium and osmium. α-Iron is a fairly soft metal that can dissolve only a small concentration of carbon (no more than 0.021% by mass at 910 °C). Austenite (γ-iron) is similarly soft and metallic but can dissolve considerably more carbon (as much as 2.04% by mass at 1146 °C). This form of iron is used in the type of stainless steel used for making cutlery, and hospital and food-service equipment. Commercially available iron is classified based on purity and the abundance of additives. Pig iron has 3.5–4.5% carbon and contains varying amounts of contaminants such as sulfur, silicon and phosphorus. Pig iron is not a saleable product, but rather an intermediate step in the production of cast iron and steel. The reduction of contaminants in pig iron that negatively affect material properties, such as sulfur and phosphorus, yields cast iron containing 2–4% carbon, 1–6% silicon, and small amounts of manganese. Pig iron has a melting point in the range of 1420–1470 K, lower than either of its two main components, making it the first product to be melted when carbon and iron are heated together. Its mechanical properties vary greatly and depend on the form the carbon takes in the alloy. "White" cast irons contain their carbon in the form of cementite, or iron carbide (Fe3C).
This hard, brittle compound dominates the mechanical properties of white cast irons, rendering them hard but not resistant to shock. The broken surface of a white cast iron is full of fine facets of the broken iron carbide, a very pale, silvery, shiny material, hence the appellation. Cooling a mixture of iron with 0.8% carbon slowly below 723 °C to room temperature results in separate, alternating layers of cementite and α-iron; this layered structure is soft and malleable and is called pearlite for its appearance. Rapid cooling, on the other hand, does not allow time for this separation and creates hard and brittle martensite. The steel can then be tempered by reheating to a temperature in between, changing the proportions of pearlite and martensite. The end product below 0.8% carbon content is a pearlite-αFe mixture, and that above 0.8% carbon content is a pearlite-cementite mixture. In gray iron the carbon exists as separate, fine flakes of graphite, which render the material brittle because the sharp-edged flakes produce stress-concentration sites within it. A newer variant of gray iron, referred to as ductile iron, is specially treated with trace amounts of magnesium to alter the shape of the graphite to spheroids, or nodules, reducing the stress concentrations and vastly increasing the toughness and strength of the material. Wrought iron contains less than 0.25% carbon but large amounts of slag that give it a fibrous character. Wrought iron is more corrosion resistant than steel. It has been almost completely replaced by mild steel, which corrodes more readily than wrought iron but is cheaper and more widely available. Carbon steel contains 2.0% carbon or less, with small amounts of manganese, sulfur, phosphorus, and silicon. Alloy steels contain varying amounts of carbon as well as other metals, such as chromium, vanadium, molybdenum, nickel, tungsten, etc. Their alloy content raises their cost, and so they are usually only employed for specialist uses. One common alloy steel, though, is stainless steel. Recent developments in ferrous metallurgy have produced a growing range of microalloyed steels, also termed 'HSLA' or high-strength, low alloy steels, containing tiny additions to produce high strengths and often spectacular toughness at minimal cost. Alloys with high-purity elemental makeups (such as alloys of electrolytic iron) have specifically enhanced properties such as ductility, tensile strength, toughness, fatigue strength, heat resistance, and corrosion resistance. Apart from traditional applications, iron is also used for protection from ionizing radiation. Although it is lighter than another traditional protection material, lead, it is much stronger mechanically. The main disadvantage of iron and steel is that pure iron, and most of its alloys, suffer badly from rust if not protected in some way, a cost amounting to over 1% of the world's economy. Painting, galvanization, passivation, plastic coating and bluing are all used to protect iron from rust by excluding water and oxygen or by cathodic protection. Rusting is electrochemical: iron is oxidized at anodic sites (Fe → Fe2+ + 2 e−) while oxygen is reduced at cathodic sites (O2 + 2 H2O + 4 e− → 4 OH−), and the resulting iron(II) is further oxidized by air to the hydrated iron(III) oxides that constitute rust. The electrolyte is usually iron(II) sulfate in urban areas (formed when atmospheric sulfur dioxide attacks iron), and salt particles in the atmosphere in seaside areas. Because Fe is inexpensive and nontoxic, much effort has been devoted to the development of Fe-based catalysts and reagents. Iron is, however, less common as a catalyst in commercial processes than more expensive metals.
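Before moving on, the carbon-content classes running through the last few paragraphs lend themselves to a compact summary. The sketch below is a simplification (the ranges overlap in practice, real classification also weighs slag, silicon, and heat treatment, and the function name is invented for illustration):

```python
def classify_ferrous_alloy(carbon_pct: float) -> str:
    """Rough classification of a ferrous alloy by carbon content alone.

    Simplified: pig iron (3.5-4.5% C) overlaps cast iron (2-4% C), and real
    classification also depends on slag, silicon, and processing history.
    """
    if carbon_pct < 0.25:
        return "wrought iron (fibrous slag inclusions)"
    if carbon_pct <= 2.0:
        return "carbon steel"
    if carbon_pct <= 4.5:
        return "cast iron / pig iron (white, gray, or ductile when cast)"
    return "outside typical commercial ranges"

for c in (0.1, 0.8, 3.0, 4.2):
    print(f"{c:4.1f}% C -> {classify_ferrous_alloy(c)}")
```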
In biology, Fe-containing enzymes are pervasive. Iron catalysts are traditionally used in the Haber–Bosch process for the production of ammonia and the Fischer–Tropsch process for conversion of carbon monoxide to hydrocarbons for fuels and lubricants. Powdered iron in an acidic medium is used in the Béchamp reduction, the conversion of nitrobenzene to aniline. Iron(III) oxide mixed with aluminium powder can be ignited to create a thermite reaction, used in welding large iron parts (like rails) and purifying ores. Iron(III) oxide and oxyhydroxide are used as reddish and ocher pigments. Iron(III) chloride finds use in water purification and sewage treatment, in the dyeing of cloth, as a coloring agent in paints, as an additive in animal feed, and as an etchant for copper in the manufacture of printed circuit boards. It can also be dissolved in alcohol to form tincture of iron, which is used as a medicine to stop bleeding in canaries. Iron(II) sulfate is used as a precursor to other iron compounds. It is also used to reduce chromate in cement. It is used to fortify foods and treat iron deficiency anemia. Iron(III) sulfate is used in settling minute sewage particles in tank water. Iron(II) chloride is used as a reducing flocculating agent, in the formation of iron complexes and magnetic iron oxides, and as a reducing agent in organic synthesis. Sodium nitroprusside is a drug used as a vasodilator; it is on the World Health Organization's List of Essential Medicines. Biological and pathological role Iron is required for life. The iron–sulfur clusters are pervasive and include nitrogenase, the enzymes responsible for biological nitrogen fixation. Iron-containing proteins participate in transport, storage and use of oxygen. Iron proteins are involved in electron transfer. Examples of iron-containing proteins in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase. The average adult human contains about 0.005% body weight of iron, or about four grams, of which three quarters is in hemoglobin—a level that remains constant despite only about one milligram of iron being absorbed each day, because the human body recycles its hemoglobin for the iron content. Microbial growth may be assisted by oxidation of iron(II) or by reduction of iron(III). Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral pH. Thus, these organisms have developed means to absorb iron as complexes, sometimes taking up ferrous iron before oxidising it back to ferric iron. In particular, bacteria have evolved very high-affinity sequestering agents called siderophores. After uptake in human cells, iron storage is precisely regulated. A major component of this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries them in the blood to cells. Transferrin contains Fe3+ in the middle of a distorted octahedron, bonded to one nitrogen, three oxygens and a chelating carbonate anion that traps the Fe3+ ion: it has such a high stability constant that it is very effective at taking up Fe3+ ions even from the most stable complexes. At the bone marrow, the iron is reduced from Fe3+ to Fe2+ and stored as ferritin to be incorporated into hemoglobin. The most commonly known and studied bioinorganic iron compounds (biological iron molecules) are the heme proteins: examples are hemoglobin, myoglobin, and cytochrome P450. These compounds participate in transporting gases, building enzymes, and transferring electrons.
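As a back-of-the-envelope check on the body-iron figures quoted above (a sketch; the 70 kg body mass is an assumed example, not from the text):

```python
body_mass_kg = 70.0                  # assumed example adult
iron_fraction = 0.005 / 100          # about 0.005% of body weight
total_iron_g = body_mass_kg * 1000 * iron_fraction
hb_iron_g = total_iron_g * 0.75      # three quarters is in hemoglobin
print(f"total ~{total_iron_g:.1f} g, in hemoglobin ~{hb_iron_g:.1f} g")
# -> total ~3.5 g, in hemoglobin ~2.6 g, consistent with the "about four
# grams" figure; the pool is maintained on only ~1 mg/day of absorption
# because hemoglobin iron is recycled.
```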
Metalloproteins are a group of proteins with metal ion cofactors. Some examples of iron metalloproteins are ferritin and rubredoxin. Many enzymes vital to life contain iron, such as catalase, lipoxygenases, and IRE-BP. Hemoglobin is an oxygen carrier that occurs in red blood cells and contributes to their color, transporting oxygen in the arteries from the lungs to the muscles, where it is transferred to myoglobin, which stores it until it is needed for the metabolic oxidation of glucose, generating energy. There the hemoglobin binds carbon dioxide, produced when glucose is oxidized, which is transported back through the veins (predominantly as bicarbonate anions) to the lungs, where it is exhaled. In hemoglobin, the iron is in one of four heme groups and has six possible coordination sites; four are occupied by nitrogen atoms in a porphyrin ring, the fifth by an imidazole nitrogen in a histidine residue of one of the protein chains attached to the heme group, and the sixth is reserved for the oxygen molecule it can reversibly bind to. When hemoglobin is not attached to oxygen (and is then called deoxyhemoglobin), the Fe2+ ion at the center of the heme group (in the hydrophobic protein interior) is in a high-spin configuration. It is thus too large to fit inside the porphyrin ring, which bends instead into a dome with the Fe2+ ion about 55 picometers above it. In this configuration, the sixth coordination site reserved for the oxygen is blocked by another histidine residue. When deoxyhemoglobin picks up an oxygen molecule, this histidine residue moves away and returns once the oxygen is securely attached, forming a hydrogen bond with it. The Fe2+ ion then switches to a low-spin configuration, with a 20% decrease in ionic radius, so that it now fits into the porphyrin ring, which becomes planar. Additionally, this hydrogen bonding tilts the oxygen molecule, giving an Fe–O–O bond angle of around 120° that avoids the formation of Fe–O–Fe or Fe–O2–Fe bridges, which would lead to electron transfer, the oxidation of Fe2+ to Fe3+, and the destruction of hemoglobin. This results in a movement of all the protein chains that leads to the other subunits of hemoglobin changing shape to a form with higher oxygen affinity. Thus, when deoxyhemoglobin takes up oxygen, its affinity for more oxygen increases, and vice versa. Myoglobin, on the other hand, contains only one heme group, and hence this cooperative effect cannot occur. Thus, while hemoglobin is almost saturated with oxygen in the high partial pressures of oxygen found in the lungs, its affinity for oxygen is much lower than that of myoglobin, which oxygenates even at the low partial pressures of oxygen found in muscle tissue. As described by the Bohr effect (named after Christian Bohr, the father of Niels Bohr), the oxygen affinity of hemoglobin diminishes in the presence of carbon dioxide. Carbon monoxide and phosphorus trifluoride are poisonous to humans because they bind to hemoglobin similarly to oxygen, but much more strongly, so that oxygen can no longer be transported throughout the body. Hemoglobin bound to carbon monoxide is known as carboxyhemoglobin. This effect also plays a minor role in the toxicity of cyanide, but there the major effect is by far its interference with the proper functioning of the electron transport protein cytochrome a. The cytochrome proteins also involve heme groups and are involved in the metabolic oxidation of glucose by oxygen.
The sixth coordination site is then occupied by either another imidazole nitrogen or a methionine sulfur, so that these proteins are largely inert to oxygen—with the exception of cytochrome a, which bonds directly to oxygen and thus is very easily poisoned by cyanide. Here, the electron transfer takes place as the iron remains in low spin but changes between the +2 and +3 oxidation states. Since the reduction potential of each step is slightly greater than that of the previous one, the energy is released step-by-step and can thus be stored in adenosine triphosphate. Cytochrome a is slightly distinct, as it occurs at the mitochondrial membrane, binds directly to oxygen, and transports protons as well as electrons, as follows: 4 Cyt c2+ + O2 + 8 H+ (inside) → 4 Cyt c3+ + 2 H2O + 4 H+ (outside). Although the heme proteins are the most important class of iron-containing proteins, the iron–sulfur proteins are also very important, being involved in electron transfer, which is possible since iron can exist stably in either the +2 or +3 oxidation states. These have one, two, four, or eight iron atoms that are each approximately tetrahedrally coordinated to four sulfur atoms; because of this tetrahedral coordination, they always have high-spin iron. The simplest of such compounds is rubredoxin, which has only one iron atom coordinated to four sulfur atoms from cysteine residues in the surrounding peptide chains. Another important class of iron–sulfur proteins is the ferredoxins, which have multiple iron atoms. Transferrin does not belong to either of these classes. The ability of sea mussels to maintain their grip on rocks in the ocean is facilitated by their use of organometallic iron-based bonds in their protein-rich cuticles. Based on synthetic replicas, the presence of iron in these structures increased elastic modulus 770 times, tensile strength 58 times, and toughness 92 times. The amount of stress required to permanently damage them increased 76 times. Iron is pervasive, but particularly rich sources of dietary iron include red meat, oysters, beans, poultry, fish, leaf vegetables, watercress, tofu, and blackstrap molasses. Bread and breakfast cereals are sometimes specifically fortified with iron. Iron provided by dietary supplements is often found as iron(II) fumarate, although iron(II) sulfate is cheaper and is absorbed equally well. Elemental iron, or reduced iron, despite being absorbed at only one-third to two-thirds the efficiency (relative to iron sulfate), is often added to foods such as breakfast cereals or enriched wheat flour. Iron is most available to the body when chelated to amino acids, and such chelates are also used as iron supplements. Glycine, the least expensive amino acid, is most often used to produce iron glycinate supplements. The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for iron in 2001. The current EAR for iron for women ages 14–18 is 7.9 mg/day, 8.1 mg/day for ages 19–50, and 5.0 mg/day thereafter (postmenopause). For men, the EAR is 6.0 mg/day for ages 19 and up. The RDA is 15.0 mg/day for women ages 15–18, 18.0 mg/day for ages 19–50, and 8.0 mg/day thereafter; for men, it is 8.0 mg/day for ages 19 and up. RDAs are higher than EARs so as to identify amounts that will cover people with higher-than-average requirements. The RDA for pregnancy is 27 mg/day and, for lactation, 9 mg/day. For children, the RDA is 7 mg/day for ages 1–3, 10 mg/day for ages 4–8, and 8 mg/day for ages 9–13.
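Collected into one place, the U.S. RDA figures above amount to a small lookup table; a minimal sketch (the dictionary keys and function name are invented for illustration):

```python
# U.S. RDAs for iron in mg/day, as quoted in the text above (IOM, 2001).
RDA_IRON_MG = {
    ("female", "15-18"): 15.0,
    ("female", "19-50"): 18.0,
    ("female", "51+"): 8.0,
    ("male", "19+"): 8.0,
    ("any", "pregnancy"): 27.0,
    ("any", "lactation"): 9.0,
    ("child", "1-3"): 7.0,
    ("child", "4-8"): 10.0,
    ("child", "9-13"): 8.0,
}

def rda_iron_mg(sex: str, group: str) -> float:
    """Look up the iron RDA for a (sex, age-group) pair; raises KeyError if absent."""
    return RDA_IRON_MG[(sex, group)]

print(rda_iron_mg("female", "19-50"))  # 18.0
```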
As for safety, the IOM also sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of iron, the UL is set at 45 mg/day. Collectively the EARs, RDAs and ULs are referred to as Dietary Reference Intakes. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women, the PRI is 13 mg/day for ages 15–17, 16 mg/day for women ages 18 and up who are premenopausal, and 11 mg/day postmenopausal. For pregnancy and lactation, it is 16 mg/day. For men, the PRI is 11 mg/day for ages 15 and older. For children ages 1 to 14, the PRI increases from 7 to 11 mg/day. The PRIs are higher than the U.S. RDAs, with the exception of pregnancy. The EFSA reviewed the same safety question but did not establish a UL. Infants may require iron supplements if they are bottle-fed cow's milk. Frequent blood donors are at risk of low iron levels and are often advised to supplement their iron intake. For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For iron labeling purposes, 100% of the Daily Value was 18 mg, and as of May 27, 2016, it remained unchanged at 18 mg. A table of the old and new adult daily values is provided at Reference Daily Intake. Iron deficiency is the most common nutritional deficiency in the world. When iron loss is not adequately compensated by dietary iron intake, a state of latent iron deficiency occurs; left untreated, this over time leads to iron-deficiency anemia, characterised by an insufficient number of red blood cells and an insufficient amount of hemoglobin. Children, pre-menopausal women (women of child-bearing age), and people with poor diet are most susceptible to the disease. Most cases of iron-deficiency anemia are mild, but if not treated can cause problems like fast or irregular heartbeat, complications during pregnancy, and delayed growth in infants and children. The brain is resistant to acute iron deficiency due to the slow transport of iron through the blood–brain barrier. Acute fluctuations in iron status (marked by serum ferritin levels) do not reflect brain iron status, but prolonged nutritional iron deficiency is suspected to reduce brain iron concentrations over time. In the brain, iron plays a role in oxygen transport, myelin synthesis, mitochondrial respiration, and as a cofactor for neurotransmitter synthesis and metabolism. Animal models of nutritional iron deficiency report biomolecular changes resembling those seen in Parkinson's and Huntington's disease. However, age-related accumulation of iron in the brain has also been linked to the development of Parkinson's. Iron uptake is tightly regulated by the human body, which has no regulated physiological means of excreting iron. Only small amounts of iron are lost daily due to mucosal and skin epithelial cell sloughing, so control of iron levels is primarily accomplished by regulating uptake. Regulation of iron uptake is impaired in some people as a result of a genetic defect that maps to the HLA-H gene region on chromosome 6 and leads to abnormally low levels of hepcidin, a key regulator of the entry of iron into the circulatory system in mammals. In these people, excessive iron intake can result in iron overload disorders, known medically as hemochromatosis.
Many people have an undiagnosed genetic susceptibility to iron overload and are not aware of a family history of the problem. For this reason, people should not take iron supplements unless they suffer from iron deficiency and have consulted a doctor. Hemochromatosis is estimated to be the cause of 0.3–0.8% of all metabolic diseases of Caucasians. Overdoses of ingested iron can cause excessive levels of free iron in the blood. High blood levels of free ferrous iron react with peroxides to produce highly reactive free radicals that can damage DNA, proteins, lipids, and other cellular components. Iron toxicity occurs when the cell contains free iron, which generally occurs when iron levels exceed the availability of transferrin to bind the iron. Damage to the cells of the gastrointestinal tract can also prevent them from regulating iron absorption, leading to further increases in blood levels. Iron typically damages cells in the heart, liver and elsewhere, causing adverse effects that include coma, metabolic acidosis, shock, liver failure, coagulopathy, long-term organ damage, and even death. Humans experience iron toxicity when the iron exceeds 20 milligrams for every kilogram of body mass; 60 milligrams per kilogram is considered a lethal dose. Overconsumption of iron, often the result of children eating large quantities of ferrous sulfate tablets intended for adult consumption, is one of the most common toxicological causes of death in children under six. The Dietary Reference Intake (DRI) sets the Tolerable Upper Intake Level (UL) for adults at 45 mg/day; for children under fourteen years old the UL is 40 mg/day. The medical management of iron toxicity is complicated, and can include use of a specific chelating agent called deferoxamine to bind and expel excess iron from the body. Iron is vital for the function of the human brain. Iron is tightly regulated in the brain, limiting the impact of both iron deficiency and excess. If these systems are overwhelmed by nutritional deficiency, especially in pregnancy and infancy, there can be long-term health consequences, but direct connections are difficult to establish because nutritional deficiency is usually linked to other health-related problems. Excess iron is correlated with some neurodegenerative conditions, but the cause is not known. The role of iron in cancer defense can be described as a "double-edged sword" because of its pervasive presence in non-pathological processes. People having chemotherapy may develop iron deficiency and anemia, for which intravenous iron therapy is used to restore iron levels. Iron overload, which may occur from high consumption of red meat, may initiate tumor growth and increase susceptibility to cancer onset, particularly for colorectal cancer. Iron plays an essential role in marine systems and can act as a limiting nutrient for planktonic activity. Because of this, a large decrease in iron can reduce the growth rates of phytoplanktonic organisms such as diatoms. Iron can also be oxidized by marine microbes under conditions that are high in iron and low in oxygen. Iron can enter marine systems through adjoining rivers and directly from the atmosphere. Once iron enters the ocean, it can be distributed throughout the water column through ocean mixing and through recycling on the cellular level.
In the Arctic, sea ice plays a major role in the storage and distribution of iron in the ocean, depleting oceanic iron as it freezes in winter and releasing it back into the water when it thaws in summer. The iron cycle shifts iron between aqueous and particulate forms, altering its availability to primary producers. Increased light and warmth increase the amount of iron in forms usable by primary producers.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-FOOTNOTEBerger2021178–182-88] | [TOKENS: 10515]
Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026, Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated to Canada in 1989; he holds Canadian citizenship because his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and its leadership in the AI boom of the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, Tesla shareholders approved a pay package for Musk worth up to $1 trillion, to be received over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published in 2025–26 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth.
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at the Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023 Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid. After his parents divorced in 1979, Elon, aged around nine, chose to live with his father because Errol had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies", where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten severely, leading to him being hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books and has attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School.
He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at Pinnacle Research Institute, a startup that investigated electrolytic supercapacitors for energy storage, and another at the Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided instead to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H-1B. According to numerous former business associates and shareholders, however, Musk said he was on a student visa at the time. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture in a small rented office in Palo Alto. Replying to Rolling Stone, Musk disputed the notion that they started the company with funds borrowed from Errol Musk, but in a tweet he acknowledged that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to resulting technological issues and the lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk—the largest shareholder with 11.72% of shares—received $175.8 million (equivalent to $320,000,000 in 2025).
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth-chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from the Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180,000,000 in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and chief engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, the company was awarded a Commercial Orbital Transportation Services program contract by NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, in 2015 SpaceX successfully landed the first stage of a Falcon 9 on a land platform. Later landings were achieved on autonomous spaceport drone ships, an ocean-based recovery platform. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025, over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement.
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm. Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass-produced all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several strong-selling electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In late 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over his tweet that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials.
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials—which have caused the deaths of some monkeys—have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink. In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021. Local officials have approved further expansions of the tunnel system. As early as 2017, Musk expressed interest in buying Twitter and questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder.[d] Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter. By the end of April, Musk had successfully concluded his bid for approximately $44 billion. This included approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives, including CEO Parag Agrawal; Musk became CEO instead. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk scaled back content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. After a Twitter poll, Musk promised to step down as CEO; five months later he did so, transitioning to executive chairman and chief technology officer (CTO). Despite Musk stepping down as CEO, X continues to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence critics such as the Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks, which hinders visibility and is considered a form of shadow banning, or by suspending their accounts without justification.
Other activities In August 2013, Musk announced plans for a version of a vactrain and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets, and the consequent fossil fuel usage, has received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily suspended the accounts of journalists who posted stories regarding the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content but framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 Texas's 34th congressional district special election. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign.
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign and hosting DeSantis's campaign announcement on Twitter Spaces. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it had been a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021 and featuring executives from General Motors, Ford, and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that the snub was a nod to the United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and suggested that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he said Biden's was "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen, and other capitalists actually flourished under Biden, but that these tech leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally and promoted conspiracy theories and falsehoods about Democrats, election fraud, and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring; the organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain, and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found he had boosted far-right political movements seeking to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023.
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided along partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, which Musk accepted. In November and December 2024, Musk suggested that the organization could help cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role was never clear. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE, and a federal judge has ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly his work with DOGE, attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He prioritized secrecy within the organization and accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by that time, most of them children. By November 2025, the estimated death toll had risen to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart the Trump administration as planned when his 130-day term as a special government employee expired, with a White House official confirming that Musk's offboarding was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025.
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump; most notably, on June 5, 2025, Musk alleged on X (formerly Twitter) that Trump had ties to sex offender Jeffrey Epstein. Trump responded on Truth Social, stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have drawn criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates until 2022, when he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and he regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and he identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars, repeatedly arguing that humanity should become an interplanetary species in order to lower the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. Although he describes himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and has been described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024.
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was in turn accused of spreading misinformation and amplifying the far-right. He has also voiced support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) over a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC without admitting or denying the allegations. As a result, Musk and Tesla were each fined $20 million, and Musk was required to step down as Tesla chairman for three years, though he was able to remain CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC responded by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the settlement's details, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a Musk tweet about Tesla's stock price ("too high imo") violated the agreement. Records released under the Freedom of Information Act (FOIA) showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting about "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again, and in January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded, calling the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked about his experience growing up with Asperger's syndrome at a TED2022 conference in Vancouver, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ...
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated that he uses doctor-prescribed ketamine for occasional depression, dosing "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine, and other drugs. Musk at first refused to comment on his alleged drug use, then responded that he had not tested positive for drugs and that if drugs somehow improved his productivity, "I would definitely take them!" Investigations by The New York Times revealed Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concerns from close associates troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label, Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has said have a "restoring effect" that helps his "mental calibration". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. He has justified the boosting by claiming that all top accounts do it, so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and singled out the inclusion of the historical figure Yasuke in the game as offensive; he also called the game "terrible". Ubisoft responded that Musk's comments were "just feeding hatred" and that it was focused on producing a game, not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child, Nevada Musk, died of sudden infant death syndrome at the age of 10 weeks. After Nevada's death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008, and they married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year.
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations because it contained characters not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the ongoing pregnancy, Musk confirmed reports in September 2021 that the couple were "semi-separated"; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Musk has taken X Æ A-Xii to multiple official events in Washington, D.C., during Trump's second term in office. In July 2022, The Wall Street Journal reported that Musk had allegedly had an affair in 2021 with Nicole Shanahan, the wife of Google co-founder Sergey Brin, leading to their divorce the following year; Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported that Musk had bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy, and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognized as the child's father. On March 31, 2025, Musk wrote that, while he was unsure whether he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from The Wall Street Journal indicated that $1 million of these payments to St. Clair was structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas Day in 2012, Musk emailed Epstein asking "Do you have any parties planned?
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk whether he had any plans to come to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house; Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein replied, "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 whether he had introduced Epstein to Mark Zuckerberg, he responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026 according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; by November 2020, around 75% of his wealth derived from Tesla stock, although he has described himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021, followed by $400 billion in December 2024, $500 billion in October 2025, $600 billion in mid-December 2025, $700 billion later that month, and $800 billion in February 2026. In November 2025, a Tesla pay package for Musk worth potentially $1 trillion was approved, which he is to receive over 10 years if he meets specific goals. Public image Although his ventures had been highly influential within their separate industries since the 2000s, Musk became a public figure only in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions and often makes controversial statements, unlike other billionaires, who prefer reclusiveness to protect their businesses. Musk's actions and expressed views have made him a polarizing figure. Biographer Ashlee Vance attributed the polarized opinions of Musk to his "part philosopher, part troll" persona on Twitter. He has been denounced for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters such as the British survivors of grooming gangs.
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president", or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018,[m] and in 2022 he was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021, and selected him as its "Person of the Year" for 2021. Time's then editor-in-chief Edward Felsenthal wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too."
========================================
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Malawi] | [TOKENS: 668]
Contents History of the Jews in Malawi The history of the Jews in Malawi, a country formerly known as Nyasaland and once part of the Federation of Rhodesia and Nyasaland, dates chiefly to World War II, when Jewish refugees arrived in the territory. Background Malawi was once part of the Maravi Empire. In colonial times, the territory was ruled by the British, under whose control it was known first as the British Central Africa Protectorate (1889–1907) and later as Nyasaland (1907–1964). It became part of the Federation of Rhodesia and Nyasaland (1953–1963). The country achieved full independence, as Malawi, in 1964. After independence, Malawi was ruled as a one-party state until 1994. Jews in Nyasaland Press reports of the time state that during World War II (1939–1945), 60 Polish-Jewish families went to Nyasaland, then under British colonial rule, arriving via Iran to escape the Holocaust. After the war, however, most of these Polish Jews left the area. By 1959, only twelve Jews were living in the territory. In 1941, as German forces neared Cyprus, the British shipped 270 Jews to Nyasaland and Tanganyika. An eyewitness report states that "a hundred" Jewish refugees from Cyprus were shipped via Palestine to Nyasaland during World War II. Rhodesia and Nyasaland The most notable person of partial Jewish parentage to serve in a high position was Sir Roy Welensky (1907–1991), a Northern Rhodesian (now Zambian) politician and the second and last prime minister (1956–1963) of the Federation of Rhodesia and Nyasaland. Born in Southern Rhodesia (now Zimbabwe) to an Afrikaner mother and a Lithuanian Jewish father, he later moved to Northern Rhodesia. His father, Michael Welensky (b. c. 1843), was of Lithuanian Jewish origin, hailing from a village near Wilno (today Vilnius); a trader in Russia and a horse-smuggler during the Franco-Prussian War, he settled in Southern Rhodesia after first emigrating to the United States, where he was a saloon-keeper, and then to South Africa. His mother, Leah (born Aletta Ferreira; c. 1865–1918), was a ninth-generation Afrikaner of Dutch ancestry. Raphael, known as "Roy", was his parents' 13th child; they kept a "poor white" boarding house. Malawi Malawi and Israel established diplomatic relations in July 1964 and have maintained them since. Under prime minister Hastings Banda (1898–1997), Malawi was one of only three Sub-Saharan African countries (the others being Lesotho and Swaziland, renamed Eswatini in 2018) that continued to maintain full diplomatic relations with Israel after the Yom Kippur War in 1973. Israel has assisted Malawi with a few social and economic development programs. Since Banda, Israel has continued its relationship with Malawi on a non-residential basis.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_note-41] | [TOKENS: 4993]
Contents Orion (constellation) Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations and was among the 48 constellations listed by the 2nd-century AD astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making up the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m, while the declination coordinates are between +22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, though only its brightest stars) are then visible at twilight for a few hours around local noon, in the brightest section of the sky low in the north, where the Sun is just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), are also points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky.
Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large, roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. The hunter's head is marked by an additional eighth star, Meissa, which is fairly bright to the observer. Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of these stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or the Belt of Orion, is an asterism within the constellation consisting of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth and, at 100,000 times the luminosity of the Sun, shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth and shines with a magnitude of 1.70; counting its ultraviolet light, it is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two components orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it lies around the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars with a combined apparent magnitude of 3.7, lying at a distance of 1,150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars make up a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1,100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West of Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori), which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak; radiating from near the border with the constellation Gemini, it can produce as many as 20 meteors per hour. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium, and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years.
Named for the four bright stars that form a trapezoid, the Trapezium is largely illuminated by its brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star-forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1,600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time. Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1,500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6,400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is also used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter, a figure similar to other Western depictions; an arrow he has shot is represented by Aldebaran (Alpha Tauri). In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion; this is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote, which is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász) or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter and father of the twins Hunor and Magor. The π and o stars together form the reflex bow or the lifted scythe.
In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and who was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö), and the stars "hanging" from the Belt are known as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA,[note 1] "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion; Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally: fool): Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). This name is perhaps etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which in turn may derive from the Hebrew root K-S-L, as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. In ancient Aram, the constellation was known as Nephîlā′; the Nephilim are said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth-brightest star, Saiph, is named from the Arabic saif al-jabbar, meaning "sword of the giant". In China, Orion was one of the 28 lunar mansions Sieu (Xiù) (宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt.
The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion, representing the sound of the word, was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as a representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) of Hindu astrology. The Jain symbol carved in the Udayagiri and Khandagiri Caves in India in the 1st century BCE bears a striking resemblance to Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars of Orion's Belt Hapj (a name denoting a hunter): Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"); in Puerto Rico, they are known as "Los Tres Reyes Magos" (Spanish for "The Three Wise Men"). The Ojibwe (Chippewa) call this constellation Mesabi, meaning Big Man. To the Lakota, Tayamnicankhu (Orion's Belt) is the spine of a bison: the great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. Another Lakota myth holds that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as punishment from the gods for his selfishness. The chief's daughter offered to marry whoever could retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned the arm and married her, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki, which represents a child's string figure similar to a cat's cradle. Several precolonial Filipino peoples referred to the Belt region in particular as "balatik" (ballista), as it resembles a trap of the same name, which fires arrows by itself and is usually used for catching pigs in the bush. Spanish colonization later led some ethnic groups to refer to Orion's Belt as "Tres Marias" or "Tatlong Maria". In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki; the rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy-field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan.
The film distribution company Orion Pictures used the constellation as its logo. In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs, Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare, and sometimes holding a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or Drie Susters (Three Sisters) by Afrikaans speakers in South Africa, and is referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de mon moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th- and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction: the hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas: Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, Orion holds a club (Chi Orionis). His right leg is represented by Theta Orionis and his left leg by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not always be so located, due to the effects of precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000 it will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years.
========================================
[SOURCE: https://en.wikipedia.org/wiki/J_(programming_language)] | [TOKENS: 2116]
Contents J (programming language) The J programming language, developed in the early 1990s by Kenneth E. Iverson and Roger Hui, is an array programming language based primarily on APL (also by Iverson). To avoid repeating the APL special-character problem, J uses only the basic ASCII character set, resorting to the dot and colon as inflections to form short words similar to digraphs. Most such primary (or primitive) J words serve as mathematical symbols, with the dot or colon extending the meaning of the basic characters available. Also, many characters which in other languages often must be paired (such as [] {} "" `` or <>) are treated by J as stand-alone words or, when inflected, as single-character roots of multi-character words. J is a very terse array programming language, and is most suited to mathematical and statistical programming, especially when performing operations on matrices. It has also been used in extreme programming and network performance analysis. Like John Backus's languages FP and FL, J supports function-level programming via its tacit programming features. Unlike most languages that support object-oriented programming, J's flexible hierarchical namespace scheme (where every name exists in a specific locale) can be effectively used as a framework for both class-based and prototype-based object-oriented programming. Since March 2011, J has been free and open-source software under the GNU General Public License version 3 (GPLv3); source may also be purchased under a negotiated license. Examples J permits point-free style and function composition, so its programs can be very terse, and some programmers consider them difficult to read. The "Hello, World!" program in J is a one-line expression (see the sketch below); this implementation reflects the traditional use of J, in which programs are entered into an interpreter session and the results of expressions are displayed. It is also possible to arrange for J scripts to be executed as standalone programs, for example on a Unix system, as in the sketch below. (Note that current J implementations install either jconsole or, because the name jconsole is already used by Java, ijconsole, likely placed in /usr/bin or some other directory, perhaps the Applications directory on macOS; this system dependency is one the user must resolve.) Historically, APL used / to indicate the fold, so +/1 2 3 was equivalent to 1+2+3. Meanwhile, division was represented with the mathematical division symbol (÷). Because ASCII does not include a division symbol per se, J uses % to represent division, as a visual approximation or reminder. (This illustrates something of the mnemonic character of J's tokens, and some of the quandaries imposed by the use of ASCII.) A J function named avg to calculate the average of a list of numbers can be defined as a train of three verbs (+/, %, and #) termed a fork, shown with a test execution in the sketch below. Specifically, (V0 V1 V2) Ny is the same as (V0(Ny)) V1 (V2(Ny)), which shows some of the power of J. (Here V0, V1, and V2 denote verbs and Ny denotes a noun.) Rank is a crucial concept in J. The J Dictionary gives an implementation of quicksort, and a further implementation demonstrates tacit programming, which involves composing functions together without referring explicitly to any variables; J's support for forks and hooks dictates rules on how arguments applied to such a function will be applied to its component functions.
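These examples can be rendered as a brief interpreter session; the following is a minimal sketch (the transcript is illustrative, with three leading spaces marking user input and NB. beginning a comment):

   'Hello, world!'        NB. the value of an expression is displayed
Hello, world!
   avg =: +/ % #          NB. a fork: sum divided by count
   avg 1 2 3 4            NB. (1+2+3+4) % 4
2.5

A standalone script version, assuming the interpreter was installed as /usr/bin/ijconsole (the path is an assumption that varies by system, as noted above):

#!/usr/bin/ijconsole
avg =: +/ % #              NB. define the averaging fork
echo avg 1 2 3 4           NB. write the result to standard output
exit ''                    NB. terminate the interpreter

For quicksort, one widely reproduced tacit formulation (a sketch; the J Dictionary's exact version may differ) selects a random pivot with the hook ({~ ?@#) and uses $: to recurse on the items below and above it:

   quicksort =: (($:@(<#[) , (=#[) , $:@(>#[)) ({~ ?@#)) ^: (1<#)
   quicksort 3 1 4 1 5 9 2 6
1 1 2 3 4 5 6 9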
Sorting in J is usually accomplished using the built-in (primitive) verbs /: (sort up) and \: (sort down); user-defined sorts such as quicksort, above, typically are for illustration only. The self-reference verb $: can be used to calculate Fibonacci numbers recursively; the same recursion can also be accomplished by referring to the verb by name, although this is of course possible only if the verb is named. An expression exhibiting pi to n digits demonstrates the extended-precision abilities of J. Verbs and modifiers A program or routine – something that takes data as input and produces data as output – is called a verb. J has a rich set of predefined verbs, all of which work on multiple data types automatically: for example, the verb i. searches within arrays of any size to find matches (see the sketch below). User programs can be named and used wherever primitives are allowed. The power of J comes largely from its modifiers: symbols that take nouns and verbs as operands and apply the operands in a specified way. For example, the modifier / takes one operand, a verb to its left, and produces a verb that applies that verb between each item of its argument. That is, +/ is a verb, defined as "apply + between the items of your argument". Thus, the sentence +/ 1 2 3 produces the effect of 1 + 2 + 3. J has roughly two dozen of these modifiers. All of them can apply to any verb, even a user-written verb, and users may write their own modifiers. While modifiers are powerful individually, some of them control the order in which components are executed, allowing modifiers to be combined in any order to produce the unlimited variety of operations needed for practical programming. Data types and structures J supports three simple types: numeric, literal (character), and boxed. Of these, numeric has the most variants. One of J's numeric types is the bit. There are two bit values: 0 and 1. Also, bits can be formed into lists. For example, 1 0 1 0 1 1 0 0 is a list of eight bits. Syntactically, the J parser treats that as one word. (The space character is recognized as a word-forming character between what would otherwise be numeric words.) Lists of arbitrary length are supported. Further, J supports all the usual binary operations on these lists, such as and, or, exclusive or, rotate, shift, not, etc. J also supports higher-order arrays of bits: they can be formed into two-dimensional, three-dimensional, etc. arrays, and the above operations perform equally well on them. Other numeric types include integer (e.g., 3, 42), floating point (3.14, 8.8e22), complex (0j1, 2.5j3e88), extended precision integer (12345678901234567890x), and (extended precision) rational fraction (1r2, 3r4). As with bits, these can be formed into lists or arbitrarily dimensioned arrays, and operations are performed on all numbers in an array. Lists of bits can be converted to integer using the #. verb, and integers can be converted to lists of bits using the #: verb (see the sketch below). (When parsing J, . (period) and : (colon) are word-forming characters. They are never tokens alone, unless preceded by whitespace characters.) J also supports the literal (character) type. Literals are enclosed in quotes, for example, 'a' or 'b'. Lists of literals are also supported using the usual convention of putting multiple characters in quotes, such as 'abcdefg'. Typically, individual literals are 8 bits wide (ASCII), but J also supports other literals (Unicode).
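A short session illustrating these primitives (a minimal sketch; the transcript shown is illustrative):

   3 1 4 1 5 i. 4         NB. dyadic i. : index of the first match
2
   +/ 1 2 3               NB. / applies + between items: 1 + 2 + 3
6
   #. 1 0 1 0             NB. list of bits to integer (base-2 value)
10
   #: 10                  NB. integer back to a list of bits
1 0 1 0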
Numeric and Boolean operations are not supported on literals, but collection-oriented operations (such as rotate) are supported. Finally, there is a boxed data type. Typically, data is put in a box using the < operation (with no left argument; if there is a left argument, < is the less-than operation). This is analogous to C's & operation (with no left argument). However, where the result of C's & has reference semantics, the result of J's < has value semantics. In other words, < is a function and it produces a result. The result has 0 dimensions, regardless of the structure of the contained data. From the viewpoint of a J programmer, < puts the data into a box and allows working with an array of boxes (it can be assembled with other boxes, and more copies can be made of the box). The only collection type offered by J is the arbitrarily dimensioned array. Most algorithms can be expressed very concisely using operations on these arrays. J's arrays are homogeneously typed; for example, the list 1 2 3 is a list of integers despite 1 being a bit. For the most part, these sorts of type issues are transparent to programmers. Only certain specialized operations reveal differences in type. For example, the list 1.0 0.0 1.0 0.0 would be treated exactly the same, by most operations, as the list 1 0 1 0. J also supports sparse numeric arrays, where non-zero values are stored with their indices. This is an efficient mechanism where relatively few values are non-zero. J also supports objects and classes, but these are an artifact of the way things are named, and are not data types. Instead, boxed literals are used to refer to objects (and classes). J data has value semantics, but objects and classes need reference semantics. Another pseudo-type—associated with a name, rather than a value—is the memory-mapped file. Debugging J has the usual facilities for stopping on error or at specified places within verbs. It also has a unique visual debugger, called Dissect, that gives a 2-D interactive display of the execution of a single J sentence. Because a single sentence of J performs as much computation as an entire subroutine in lower-level languages, the visual display is quite helpful. Documentation J's documentation includes a dictionary, with words in J identified as nouns, verbs, modifiers, and so on. Primary words are listed in the vocabulary, in which their respective parts of speech are indicated using markup. Note that verbs have two forms: monadic (arguments only on the right) and dyadic (arguments on the left and on the right). For example, in '-1' the hyphen is a monadic verb, and in '3-2' the hyphen is a dyadic verb. The monadic definition is mostly independent of the dyadic definition, regardless of whether the verb is a primitive verb or a derived verb (see the sketch below). Control structures J provides control structures similar to other procedural languages.
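The boxing and monadic/dyadic distinctions described above can be illustrated in a brief session (a minimal sketch; the output shown is what an interpreter would be expected to print):

   b =: < 1 2 3           NB. monadic < boxes the list into a 0-dimensional atom
   # $ b                  NB. the box's rank (length of its shape) is 0
0
   2 < 3                  NB. dyadic < is less-than
1
   - 1                    NB. monadic - is negation; J writes negative one as _1
_1
   3 - 2                  NB. dyadic - is subtraction
1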
========================================
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_ref-:0_51-0] | [TOKENS: 1856]
Contents xAI (company) X.AI Corp., doing business as xAI, is an American company working in the areas of artificial intelligence (AI), social media, and technology that is a wholly owned subsidiary of American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot named Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. For Chief Engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a planned total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital, and Tribe Capital. As of August 2024, Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, and Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI had acquired sister company X Corp., the developer of social media platform X (formerly known as Twitter), which was previously acquired by Musk in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that it had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans. Morgan Stanley took no stake in the debt. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government", and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot designed to advance artificial intelligence capabilities. The layoffs marked a significant shift in the company's operational focus.
In June 2024, the Greater Memphis Chamber announced that xAI was planning to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer became fully operational in December 2024. Local government in Memphis has voiced concerns regarding the increased usage of electricity, 150 megawatts of power at peak, and while the agreement with the city was being worked out, the company deployed 14 VoltaGrid portable methane-gas powered generators to temporarily enhance the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of gases causing air pollution, and that xAI had been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators generate about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. The Southern Environmental Law Center has stated that the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. On November 26, 2025, Elon Musk announced plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, about 10% of the data center's estimated power use. xAI has continually expanded its infrastructure, purchasing a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power. The expansion is driven by xAI's push to compete with OpenAI's ChatGPT and Anthropic's Claude models. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that structured xAI as a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring organized xAI into four primary development teams: one for the Grok app and others for features such as Grok Imagine, with Grokipedia, X, and API features falling under smaller teams. Products According to Musk in July 2023, a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot integrated with X. xAI stated that when the bot was out of beta, it would only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced.[non-primary source needed] On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities.
On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a web-search function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4. A high-performance version of the model called Grok Heavy was also unveiled, with access at the time costing $300 per month. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a great AI-generated game before the end of 2026. Valuation See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Cellular_respiration] | [TOKENS: 3183]
Contents Cellular respiration Cellular respiration is the process of oxidizing biological fuels using an inorganic electron acceptor, such as oxygen, to drive production of adenosine triphosphate (ATP), which stores chemical energy in a biologically accessible form. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells to transfer chemical energy from nutrients to ATP, with the flow of electrons to an electron acceptor, and then release waste products. If the electron acceptor is oxygen, the process is more specifically known as aerobic cellular respiration. If the electron acceptor is a molecule other than oxygen, this is anaerobic cellular respiration – not to be confused with fermentation, which is also an anaerobic process, but it is not respiration, as no external electron acceptor is involved. The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing ATP. Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions. Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken, allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion, or transportation of molecules across cell membranes. Aerobic respiration Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats, and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate breakdown after glycolysis, and requires that pyruvate be transported into the mitochondria in order to be oxidized by the citric acid cycle. The products of this process are carbon dioxide and water, and the energy transferred is used to form bonds between ADP and a third phosphate group to make ATP (adenosine triphosphate) by substrate-level phosphorylation, along with the electron carriers NADH and FADH2.[citation needed] The simplified overall reaction is C6H12O6 + 6 O2 → 6 CO2 + 6 H2O, with a free-energy change of about −2880 kJ per mole of glucose. The negative ΔG indicates that the reaction is exothermic (exergonic) and can occur spontaneously. The potential of NADH and FADH2 is converted to more ATP through an electron transport chain with oxygen and protons (hydrogen ions) as the "terminal electron acceptors". Most of the ATP produced by aerobic cellular respiration is made by oxidative phosphorylation. The energy released is used to create a chemiosmotic potential by pumping protons across a membrane. This potential is then used to drive ATP synthase and produce ATP from ADP and a phosphate group. Biology textbooks often state that 38 ATP molecules can be made per oxidized glucose molecule during cellular respiration (2 from glycolysis, 2 from the Krebs cycle, and about 34 from the electron transport system). However, this maximum yield is never quite reached because of losses due to leaky membranes as well as the cost of moving pyruvate and ADP into the mitochondrial matrix, and current estimates range around 29 to 30 ATP per glucose.
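The ATP arithmetic running through this article (the textbook 38-ATP maximum above, plus the modern 30–32 accounting and the proton costs detailed in later paragraphs) can be collected into one runnable sketch. The numbers below are the ones quoted in this article; the function names and the layout are invented here for illustration.

```python
# Runnable collection of the ATP arithmetic quoted in this article.

# Textbook maximum: 2 (glycolysis) + 2 (Krebs) + ~34 (electron transport)
theoretical = 2 + 2 + 34
print(f"theoretical maximum: {theoretical} ATP per glucose")        # 38

# Modern estimate: ~2.5 ATP per NADH and ~1.5 per FADH2.
P_NADH, P_FADH2 = 2.5, 1.5
substrate_level = 2 + 2            # net glycolysis ATP + Krebs-cycle GTP
mito_nadh = 8 * P_NADH             # 2 (pyruvate -> acetyl-CoA) + 6 (Krebs)
fadh2 = 2 * P_FADH2                # Krebs cycle
for shuttle, glyc in [("glycerol phosphate shuttle", 2 * P_FADH2),
                      ("malate-aspartate shuttle",   2 * P_NADH)]:
    total = substrate_level + glyc + mito_nadh + fadh2
    print(f"{shuttle}: 4 + {glyc:.0f} + 20 + 3 = {total:.0f} ATP")  # 30 / 32

# H+ cost per ATP from the Fo c-ring size, plus 1 H+ for transporting
# ADP/Pi/ATP across the inner membrane (see the later paragraphs).
def h_per_atp(c_subunits, atp_per_rotation=3):
    return 1 + c_subunits / atp_per_rotation

for organism, c in [("yeast", 10), ("vertebrates", 8)]:
    cost = h_per_atp(c)
    print(f"{organism}: {cost:.2f} H+/ATP, "
          f"NADH -> {10 / cost:.2f} ATP, succinate -> {6 / cost:.2f} ATP")
# vertebrates: 3.67 H+/ATP, ~2.7 ATP per NADH (the text's 2.72 reflects
# a slightly different rounding), ~1.64 per succinate.
```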
Aerobic metabolism is up to 15 times more efficient than anaerobic metabolism (which yields 2 molecules of ATP per 1 molecule of glucose). However, some anaerobic organisms, such as methanogens, are able to continue with anaerobic respiration, yielding more ATP by using inorganic molecules other than oxygen as final electron acceptors in the electron transport chain. They share the initial pathway of glycolysis, but aerobic metabolism continues with the Krebs cycle and oxidative phosphorylation. The post-glycolytic reactions take place in the mitochondria in eukaryotic cells, and in the cytoplasm in prokaryotic cells. Although plants are net consumers of carbon dioxide and producers of oxygen via photosynthesis, plant respiration accounts for about half of the CO2 generated annually by terrestrial ecosystems. Glycolysis is a metabolic pathway that takes place in the cytosol of cells in all living organisms. Glycolysis can be literally translated as "sugar splitting", and occurs regardless of oxygen's presence or absence. The process converts one molecule of glucose into two molecules of pyruvate (pyruvic acid), generating energy in the form of two net molecules of ATP. Four molecules of ATP per glucose are actually produced, but two are consumed as part of the preparatory phase. The initial phosphorylation of glucose is required to increase the reactivity (decrease its stability) in order for the molecule to be cleaved by the enzyme aldolase into two triose phosphates, which are subsequently converted into pyruvate. During the pay-off phase of glycolysis, four phosphate groups are transferred to four ADP by substrate-level phosphorylation to make four ATP, and two NADH are also produced during the pay-off phase. The overall reaction can be expressed this way: glucose + 2 NAD+ + 2 Pi + 2 ADP → 2 pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O. Starting with glucose, 1 ATP is used to donate a phosphate to glucose to produce glucose 6-phosphate. Glycogen can be converted into glucose 6-phosphate as well, with the help of glycogen phosphorylase. During energy metabolism, glucose 6-phosphate becomes fructose 6-phosphate. An additional ATP is used to phosphorylate fructose 6-phosphate into fructose 1,6-bisphosphate with the help of phosphofructokinase. Fructose 1,6-bisphosphate then splits into two phosphorylated molecules with three-carbon chains, which are later converted into pyruvate. Pyruvate is oxidized to acetyl-CoA and CO2 by the pyruvate dehydrogenase complex (PDC). The PDC contains multiple copies of three enzymes and is located in the mitochondria of eukaryotic cells and in the cytosol of prokaryotes. In the conversion of pyruvate to acetyl-CoA, one molecule of NADH and one molecule of CO2 are formed. The citric acid cycle is also called the Krebs cycle or the tricarboxylic acid cycle. When oxygen is present, acetyl-CoA is produced from the pyruvate molecules created by glycolysis. Once acetyl-CoA is formed, aerobic or anaerobic respiration can occur. When oxygen is present, the mitochondria will undergo aerobic respiration, which leads to the Krebs cycle. However, if oxygen is not present, fermentation of the pyruvate molecule will occur. In the presence of oxygen, when acetyl-CoA is produced, the molecule then enters the citric acid cycle (Krebs cycle) inside the mitochondrial matrix and is oxidized to CO2, while at the same time reducing NAD+ to NADH. NADH can be used by the electron transport chain to create further ATP as part of oxidative phosphorylation. To fully oxidize the equivalent of one glucose molecule, two acetyl-CoA must be metabolized by the Krebs cycle (a summary follows below).
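In compact form, here is the pyruvate-oxidation step just described, together with the per-glucose Krebs-cycle tally; the per-turn yields are those given in the next paragraph, with two acetyl-CoA, and thus two turns of the cycle, per glucose.

```latex
% Pyruvate oxidation and the per-glucose Krebs-cycle tally
% (values from the surrounding text; requires amsmath).
\[
\text{pyruvate} + \text{CoA} + \mathrm{NAD^{+}} \longrightarrow
\text{acetyl-CoA} + \mathrm{CO_{2}} + \text{NADH}
\]
\[
2 \times \bigl(3\,\text{NADH} + 1\,\mathrm{FADH_{2}} + 1\,\text{GTP}\bigr)
= 6\,\text{NADH} + 2\,\mathrm{FADH_{2}} + 2\,\text{ATP (as GTP)}
\]
```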
Two low-energy waste products, H2O and CO2, are created during this cycle. The citric acid cycle is an 8-step process involving 18 different enzymes and co-enzymes. During the cycle, acetyl-CoA (2 carbons) + oxaloacetate (4 carbons) yields citrate (6 carbons), which is rearranged to a more reactive form called isocitrate (6 carbons). Isocitrate is modified to become α-ketoglutarate (5 carbons), succinyl-CoA, succinate, fumarate, malate and, finally, oxaloacetate. The net gain from one cycle is 3 NADH and 1 FADH2 as hydrogen (proton plus electron) carrying compounds and 1 high-energy GTP, which may subsequently be used to produce ATP. Thus, the total yield from 1 glucose molecule (2 pyruvate molecules) is 6 NADH, 2 FADH2, and 2 ATP. In eukaryotes, oxidative phosphorylation occurs in the mitochondrial cristae. It comprises the electron transport chain that establishes a proton gradient (chemiosmotic potential) across the boundary of the inner membrane by oxidizing the NADH produced from the Krebs cycle. ATP is synthesized by the ATP synthase enzyme when the chemiosmotic gradient is used to drive the phosphorylation of ADP. The electrons are finally transferred to exogenous oxygen and, with the addition of two protons, water is formed. Efficiency of ATP production The following describes the reactions involved when one glucose molecule is fully oxidized into carbon dioxide. It is assumed that all the reduced coenzymes are oxidized by the electron transport chain and used for oxidative phosphorylation. Although there is a theoretical yield of 38 ATP molecules per glucose during cellular respiration, such conditions are generally not realized because of losses such as the cost of moving pyruvate (from glycolysis), phosphate, and ADP (substrates for ATP synthesis) into the mitochondria. All are actively transported using carriers that utilize the stored energy in the proton electrochemical gradient.[citation needed] The outcome of these transport processes using the proton electrochemical gradient is that more than 3 H+ are needed to make 1 ATP. This reduces the theoretical efficiency of the whole process, and the likely maximum is closer to 28–30 ATP molecules. In practice the efficiency may be even lower because the inner membrane of the mitochondria is slightly leaky to protons. Other factors may also dissipate the proton gradient, creating apparently leaky mitochondria. An uncoupling protein known as thermogenin is expressed in some cell types and is a channel that can transport protons. When this protein is active in the inner membrane it short-circuits the coupling between the electron transport chain and ATP synthesis. The potential energy from the proton gradient is not used to make ATP but generates heat. This is particularly important in brown fat thermogenesis of newborn and hibernating mammals.[citation needed] According to some newer sources, the ATP yield during aerobic respiration is not 36–38 but only about 30–32 ATP molecules per molecule of glucose, because the yields per NADH and per FADH2 are closer to 2.5 and 1.5 ATP than to the classical 3 and 2. Per molecule of glucose, this gives: 4 ATP from substrate-level phosphorylation (2 net from glycolysis and 2 from the Krebs cycle); 3 (or 5) ATP from the 2 NADH of glycolysis, depending on which shuttle carries their electrons into the mitochondrion; 20 ATP from the 8 NADH produced by pyruvate oxidation and the Krebs cycle; and 3 ATP from the 2 FADH2 of the Krebs cycle. Altogether this gives 4 + 3 (or 5) + 20 + 3 = 30 (or 32) ATP per molecule of glucose. These figures may still require further tweaking as new structural details become available. The above value of 3 H+ / ATP for the synthase assumes that the synthase translocates 9 protons, and produces 3 ATP, per rotation. The number of protons depends on the number of c subunits in the Fo c-ring, and it is now known that this is 10 in yeast Fo and 8 for vertebrates.
Including one H+ for the transport reactions, this means that synthesis of one ATP requires 1 + 10/3 = 4.33 protons in yeast and 1 + 8/3 = 3.67 in vertebrates. This would imply that in human mitochondria the 10 protons from oxidizing NADH would produce 2.72 ATP (instead of 2.5) and the 6 protons from oxidizing succinate or ubiquinol would produce 1.64 ATP (instead of 1.5). This is consistent with experimental results within the margin of error described in a recent review. The total ATP yield in ethanol or lactic acid fermentation is only 2 molecules coming from glycolysis, because pyruvate is not transferred to the mitochondrion and finally oxidized to carbon dioxide (CO2), but reduced to ethanol or lactic acid in the cytoplasm. Fermentation Without oxygen, pyruvate (pyruvic acid) is not metabolized by cellular respiration but undergoes a process of fermentation. The pyruvate is not transported into the mitochondrion but remains in the cytoplasm, where it is converted to waste products that may be removed from the cell. This serves the purpose of oxidizing the electron carriers so that they can perform glycolysis again, and of removing the excess pyruvate. Fermentation oxidizes NADH to NAD+ so it can be re-used in glycolysis. In the absence of oxygen, fermentation prevents the buildup of NADH in the cytoplasm and provides NAD+ for glycolysis. This waste product varies depending on the organism. In skeletal muscles, the waste product is lactic acid. This type of fermentation is called lactic acid fermentation. In strenuous exercise, when energy demands exceed energy supply, the respiratory chain cannot process all of the hydrogen atoms joined by NADH. During anaerobic glycolysis, NAD+ regenerates when pairs of hydrogen combine with pyruvate to form lactate. Lactate formation is catalyzed by lactate dehydrogenase in a reversible reaction. Lactate can also be used as an indirect precursor for liver glycogen. During recovery, when oxygen becomes available, NAD+ attaches to hydrogen from lactate to form ATP. In yeast, the waste products are ethanol and carbon dioxide. This type of fermentation is known as alcoholic or ethanol fermentation. The ATP generated in this process is made by substrate-level phosphorylation, which does not require oxygen.[citation needed] Fermentation is less efficient at using the energy from glucose: only 2 ATP are produced per glucose, compared to the 38 ATP per glucose nominally produced by aerobic respiration. Glycolytic ATP, however, is produced more quickly. For prokaryotes to continue a rapid growth rate when they are shifted from an aerobic environment to an anaerobic environment, they must increase the rate of the glycolytic reactions. For multicellular organisms, during short bursts of strenuous activity, muscle cells use fermentation to supplement the ATP production from the slower aerobic respiration, so fermentation may be used by a cell even before the oxygen levels are depleted, as is the case in sports that do not require athletes to pace themselves, such as sprinting.[citation needed] Anaerobic respiration Anaerobic respiration is used by microorganisms, either bacteria or archaea, in which neither oxygen (aerobic respiration) nor pyruvate derivatives (fermentation) are the final electron acceptor. Rather, an inorganic acceptor such as sulfate (SO4^2−), nitrate (NO3^−), or sulfur (S) is used.
Such organisms can be found in unusual places, such as underwater caves or near hydrothermal vents at the bottom of the ocean, in digestive tracts, and in anoxic soils or sediment in wetland ecosystems. In July 2019, a scientific study of Kidd Mine in Canada discovered sulfur-breathing organisms which live 7,900 feet (2,400 meters) below the surface. These organisms are also remarkable because they consume minerals such as pyrite as their food source. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Aref_al_Aref] | [TOKENS: 1424]
Contents Aref al-Aref Aref al-Aref (Arabic: عارف العارف‎; 1892–1973) was a Palestinian journalist, historian and politician. Born in Jerusalem in 1892, he studied in Constantinople (Istanbul) and was a member of Al-Muntada al-Adabi (The Literary Club) until he joined the Ottoman army during World War I. He was captured and spent three years in a prisoner-of-war camp in Krasnoyarsk, Siberia, from where he escaped after the Russian Revolution and returned to Palestine. He served as mayor of East Jerusalem in the 1950s during the Jordanian annexation of the West Bank. Biography Aref al-Aref was born in 1892 as Aref Shehadeh in Jerusalem, then part of the Ottoman Empire. His father was a vegetable vendor. Excelling at his studies in primary school, he was sent to the Marjan Preparatory School and Mulkiyya College in Constantinople (Istanbul). During his college studies, he wrote for a Turkish newspaper. Later, he worked as a translator for the Ministry of Foreign Affairs. He served as an officer in the Ottoman Army in World War I. He was captured on the Caucasus front and spent three years in a prisoner-of-war camp in Krasnoyarsk, Siberia, where he learned Russian from the Russian officers and soldiers who were guarding the camp; he also learned German from the German and Austrian prisoners who were with him in captivity. In Krasnoyarsk, he edited a newspaper in handwritten Arabic called Nakatullah [Camel of God] and translated Ernst Haeckel's Die Weltraethsel ("The Riddles of the Universe") into Turkish. After the Russian Revolution and news spread of the Arab Revolt, he convinced twenty-one Arab prisoners to escape from the prison and join the ranks of the revolt. They escaped and took the route of Manchuria, Japan, China, and Egypt via the Red Sea. During this long journey, the armistice was declared and the war ended. He returned in 1918 to what had become British-occupied Mandatory Palestine. By 1919, al-Aref was involved in political activism in Palestine, agitating for unity of Palestine with Syria. In October 1919, he became editor of the recently established newspaper Suriya al-Janubiya (Southern Syria), which was the first Arab nationalist newspaper published in Jerusalem and was an organ of the al-Nadi al-'Arabi (The Arab Club). Initially the paper supported the British military authorities, but it soon became an opponent of the British Mandate. Al-Aref attended the Nebi Musa religious festival in Jerusalem in 1920 riding on his horse, and gave a speech at the Jaffa Gate. The nature of his speech is disputed. According to Benny Morris, he said "If we don't use force against the Zionists and against the Jews, we will never be rid of them", while Bernard Wasserstein wrote that "he seems to have cooperated with the police, and there is no evidence that he actively instigated violence". In fact, "Zionist intelligence reports of this period are unanimous in stressing that he spoke repeatedly against violence". Soon the festival became a riot involving attacks on the local Jews. Al-Aref was arrested for incitement, but when he was let out on bail he escaped to Syria together with co-accused Haj Amin al-Husseini. In another version, he was warned and escaped before being arrested. He advised Arabs against violence, urging them instead to adopt the "discipline, silence, and courage" of their opponents. In his absence, a military court sentenced him to 10 years' imprisonment.
In Damascus, al-Aref became a deputy to the General Syrian Congress and, with Hajj Amin and others, formed al-Jam'iyya al-'Arabiyya al-Filastiniyya (Palestinian Arab Society). He became its Secretary-General and campaigned against the decisions of the San Remo conference. After the French invasion of Syria in July 1920, he fled to Transjordan. He returned to Jerusalem late in 1920 after being pardoned by the new British High Commissioner for Palestine, Herbert Samuel, but the government refused to allow his newspaper to reopen. Al-Aref helped popularize the term "Nakba" in his encyclopaedic work Nakbat Filastin wa al-Firdaws al-Mafqud: 1947–1955 (The Palestinian Nakba and the Lost Paradise: 1947–1955). In it, he chronicled the events from the November 29, 1947, Partition Plan proposal through the intense battles of 1947/48 and their aftermath, up to 1955. In its introduction, he argued for the need to designate the events following the Partition Plan, what befell the Arabs in general and the Palestinians in particular, as the "Nakba" (catastrophe), stating: How can I not call it [the Nakba]? During this period we have been stricken by catastrophe, we, the society of Arabs in general, and the Palestinians in particular, as we have not been stricken for centuries and epochs: we have been deprived of our homeland, expelled from our homes, and have lost a great number of our people, our own flesh and blood, and, above all, have been struck at the very core of our dignity. In 1921, he was appointed as a district officer of the British administration by the Civil Secretary, Colonel Wyndham Deedes. He served in that capacity in Jenin, Nablus, Beisan, and Jaffa. In 1926 he was seconded to the Government of Transjordan as Chief Secretary, where he served for three years. However, he continued his political activities on the side, to the displeasure of his British superiors. He returned to Palestine in 1929, where he served as District Officer in Beersheba and later in Gaza. In 1933 he received a special commendation from the High Commissioner for keeping his district quiet during a time of disturbances elsewhere. In 1942 he was promoted and transferred to al-Bireh. He continued as a Mandate official until 1948. Upon Jordanian control of the West Bank, al-Aref was first appointed military governor of Ramallah governorate, and then, from 1949 to 1955, served as mayor of East Jerusalem. In 1967, he was appointed director of the Palestine Archaeological Museum (Rockefeller Museum) in Jerusalem. Aref al-Aref died on 30 July 1973, in al-Bireh.[citation needed] Published works All of the following books were published in Arabic unless mentioned otherwise; the English titles are literal translations of the Arabic ones. References Bibliography
========================================
[SOURCE: https://en.wikipedia.org/wiki/J_Sharp] | [TOKENS: 695]
Contents Visual J Sharp Visual J# (pronounced "jay-sharp") is a discontinued implementation of the J# programming language, a transitional language for programmers of the Java and Visual J++ languages, so they could use their existing knowledge and applications with the .NET Framework. It was introduced in 2002 and discontinued in 2007, with support for the final release of the product continuing until October 2017. J# worked with Java bytecode as well as source, so it could be used to transition applications that used third-party libraries even if their original source code was unavailable.[citation needed] It was developed by the Hyderabad-based Microsoft India Development Center at HITEC City in India. The implementation of Java in Visual J++, the MSJVM, did not pass Sun's compliance tests, leading to a lawsuit from Sun, Java's creator, and to the creation of J#. Microsoft ceased such support for the MSJVM on December 31, 2007 (later Oracle bought Sun, and with it Java and its trademarks). Microsoft, however, officially started distributing Java again in 2021 (though not bundled with Windows or its web browsers as before with J++), in the form of its own build of Oracle's OpenJDK, which Microsoft plans to support for at least six years for LTS versions, i.e. to September 2027 for Java 17. Fundamental differences between J# and Java Java and J# use the same general syntax, but there are non-Java conventions in J# to support the .NET environment. For example, to use .NET "properties" with a standard JavaBean class, it is necessary to prefix getter and setter methods with a Javadoc-like annotation and change the corresponding private variable name to be different from the suffix of the getXxx/setXxx names[citation needed]. J# does not compile Java-language source code to Java bytecode (.class files), and does not support Java applet development or the ability to host applets directly in a web browser, although it does provide a wrapper called Microsoft J# Browser Controls for hosting them as ActiveX objects. Finally, Java Native Interface (JNI) and raw native interface (RNI) are substituted with P/Invoke; J# does not support remote method invocation (RMI). InfoWorld said: "J#'s interface to the .NET framework is solid, but not as seamless as C#. In particular, J# code cannot define new .NET attributes, events, value types, or delegates. J# can make use of these language constructs if they are defined in an assembly written in another language, but its inability to define new ones limits J#'s reach and interoperability compared to other .NET languages." Contrariwise, Microsoft documentation for Visual Studio 2005 details the definition of .NET delegates, events, and value types directly in J#. Like C# and unlike Java, J# is able to use C preprocessor directives. History of J# In January 2007, Microsoft announced that J# would be retired and would not be included in future versions of Visual Studio. The download of Visual J# 2005 Express Edition is no longer available from Microsoft's website. Visual J# is out of support, including the Visual J# 2.0 Redistributable Second Edition released in 2007, which was supported through to 2017 "(5 years mainstream and 5 years extended support) on EN-US locales." Example The following is a simple example of Visual J#. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Template_talk:RPG_sidebar] | [TOKENS: 63]
Template talk:RPG sidebar There seems to be a busted image. I suggest that we spruce this template up a bit. It looks like a chart from a doctor's office. :P אמר Steve Caruso (poll) 21:43, 7 May 2006 (UTC)[reply]
========================================
[SOURCE: https://en.wikipedia.org/wiki/Muslim_astronomy] | [TOKENS: 6965]
Contents Astronomy in the medieval Islamic world Medieval Islamic astronomy comprises the astronomical developments made in the Islamic world, particularly during the Islamic Golden Age (9th–13th centuries), and mostly written in the Arabic language. These developments mostly took place in the Middle East, Central Asia, Al-Andalus, and North Africa, and later in the Far East and India. It closely parallels the genesis of other Islamic sciences in its assimilation of foreign material and the amalgamation of the disparate elements of that material to create a science with Islamic characteristics. These included Greek, Sassanid, and Indian works in particular, which were translated and built upon. Islamic astronomy played a significant role in the revival of ancient astronomy following the loss of knowledge during the early medieval period, notably with the production of Latin translations of Arabic works during the 12th century. A significant number of stars in the sky, such as Aldebaran, Altair and Deneb, and astronomical terms such as alidade, azimuth, and nadir, are still referred to by their Arabic names. A large corpus of literature from Islamic astronomy remains today, numbering approximately 10,000 manuscripts scattered throughout the world, many of which have not been read or catalogued. Even so, a reasonably accurate picture of Islamic activity in the field of astronomy can be reconstructed. History The Islamic historian Ahmad Dallal notes that, unlike the Babylonians, Greeks, and Indians, who had developed elaborate systems of mathematical astronomical study, the pre-Islamic Arabs relied upon empirical observations. These were based on the rising and setting of particular stars, and this indigenous constellation tradition was known as Anwā’. The study of Anwā’ was developed after Islamization when Arab astronomers introduced mathematics to their study of the night sky. The first astronomical texts that were translated into Arabic were of Indian and Persian origin. The most notable was Zij al-Sindhind, a zij produced by Muḥammad ibn Ibrāhīm al-Fazārī and Yaʿqūb ibn Ṭāriq, who translated an 8th-century Indian astronomical work after 770, with the assistance of Indian astronomers who were at the court of caliph Al-Mansur.[better source needed] Zij al-Shah was also based upon Indian astronomical tables, compiled in the Sasanian Empire over a period of two centuries. Fragments of texts during this period show that Arab astronomers adopted the sine function from India in place of the chords of arc used in Greek trigonometry. Ptolemy’s Almagest (a geocentric spherical Earth cosmic model) was translated at least five times in the late eighth and ninth centuries, which was the main authoritative work that informed the Arabic astronomical tradition. The rise of Islam, with its obligation to determine the five daily prayer times and the qibla (the direction towards the Kaaba in the Sacred Mosque in Mecca) inspired intellectual progress in astronomy. The philosopher Al-Farabi (d. 950) described astronomy in terms of mathematics, music, and optics. He showed how astronomy could be used to describe the Earth's motion, and the position and movement of celestial bodies, and separated mathematical astronomy from science, restricting astronomy to describing the position, shape, and size of distant objects. Al-Farabi used the writings of Ptolemy, as described in his Analemma, a way of calculating the Sun's position from any fixed location. 
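The qibla determination mentioned above is at heart a spherical-trigonometry problem. The sketch below solves it with the modern great-circle bearing formula; the Kaaba coordinates and the atan2 formulation are modern conveniences added here for illustration, not the medieval method, which relied on tables and instruments.

```python
# Minimal sketch of the qibla problem: the initial great-circle bearing
# from an observer to Mecca. Modern formula and modern Kaaba coordinates.
from math import radians, degrees, sin, cos, atan2

KAABA_LAT, KAABA_LON = 21.4225, 39.8262  # degrees (modern values)

def qibla_bearing(lat, lon):
    """Initial bearing (degrees clockwise from true north) toward Mecca."""
    phi1, phi2 = radians(lat), radians(KAABA_LAT)
    dlon = radians(KAABA_LON - lon)
    x = sin(dlon) * cos(phi2)
    y = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlon)
    return degrees(atan2(x, y)) % 360

# Example: Baghdad (~33.3 N, 44.4 E) faces roughly south-southwest (~200).
print(round(qibla_bearing(33.3, 44.4), 1))
```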
The House of Wisdom was an academy that was open to the public and financially supported during the reign of the Abbasid caliph al-Ma'mun in the early 9th century in Baghdad. Astronomical research was greatly supported by al-Ma'mun through the House of Wisdom; al-Ma'mun also built the first observatory in Baghdad, and subsequent observatories were built in regions of Iraq and Iran. The first major Muslim work of astronomy was Zij al-Sindhind, produced by the mathematician Muhammad ibn Musa al-Khwarizmi in 830. It contained tables for the movements of the Sun, the Moon, and the planets Mercury, Venus, Mars, Jupiter and Saturn. The work introduced Ptolemaic concepts into Islamic science, and marked a turning point in Islamic astronomy, which had previously concentrated on translating works, but which now began to develop new ideas. Another notable figure in al-Ma'mun's court was Sind ibn 'Alī, a Jewish convert to Islam, who contributed to the Zīj al-Sindhind and was credited with constructing astronomical instruments. Jewish scholars in the Islamic world also engaged with astronomical methods developed by their Muslim counterparts. In 931, Saadia Gaon (a leading rabbi and head of the Sura Academy in Iraq) used a zīj to calculate the positions of the sun, moon, and five visible planets at a specific time, as noted in his commentary on Sefer Yetzirah. He may have studied the zīj tradition in part to counter contemporaries who sought to use such calculations to determine and sanctify the new moon each month. In 850, the Abbasid astronomer Al-Farghani wrote Kitab fi Jawami ("A compendium of the science of stars"). The book gave a summary of Ptolemaic cosmography. However, it also corrected Ptolemy based on the findings of earlier Arab astronomers. Al-Farghani gave revised values for the obliquity of the ecliptic, the precession of the apogees of the Sun and the Moon, and the circumference of the Earth. The book was circulated through the Muslim world, and translated into Latin. By the 10th century, texts had appeared that doubted that Ptolemy's works were correct. Islamic scholars questioned the Earth's apparent immobility and its position at the centre of the universe, now that independent investigations into the Ptolemaic system were possible. The 10th-century Egyptian astronomer Ibn Yunus found errors in Ptolemy's calculations. Ptolemy calculated that the Earth's angle of axial precession varied by one degree every 100 years. Ibn Yunus calculated the rate of change to be one degree every 70¼ years.[citation needed] Between 1025 and 1028, the polymath Ibn al-Haytham wrote his Al-Shukuk ala Batlamyus ("Doubts on Ptolemy"). While not disputing the existence of the geocentric model, he criticized elements of Ptolemy's theories. Other astronomers took up the challenge posed in this work, and went on to develop alternate models that resolved the difficulties identified by Ibn al-Haytham. In 1070, Abu Ubayd al-Juzjani published the Tarik al-Aflak, in which he discussed the issues arising from Ptolemy's theory of equants, and proposed a solution. The anonymous work al-Istidrak ala Batlamyus ("Recapitulation regarding Ptolemy"), produced in Al-Andalus, included a list of objections to Ptolemaic astronomy.[citation needed] Nasir al-Din al-Tusi also exposed problems present in Ptolemy's work. In 1261, he published his Tadkhira, which contained 16 fundamental problems he found with Ptolemaic astronomy, and by doing so set off a succession of Islamic scholars who would attempt to solve these problems.
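As a quick aside on the precession figures just quoted for Ptolemy and Ibn Yunus: years-per-degree directly implies the length of a full precessional cycle. The modern comparison value below is added here as an assumption of roughly current literature and is not from the source text.

```python
# Years-per-degree of axial precession implies a full 360-degree cycle.
for name, years_per_degree in [("Ptolemy", 100.0), ("Ibn Yunus", 70.25)]:
    print(f"{name}: full cycle = {360 * years_per_degree:,.0f} years")
# Ptolemy   -> 36,000 years
# Ibn Yunus -> 25,290 years, far closer to the modern ~25,772-year period
```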
Scholars such as Qutb al-Din al-Shirazi, Ibn al-Shatir, and Shams al-Din al-Khafri all worked to produce new models that resolved Tusi's 16 problems, and the models they created would become widely adopted by astronomers for use in their own works. Nasir al-Din al-Tusi devised the Tusi couple to replace the "equant" of the Ptolemaic model: worked through mathematically, the equant-based lunar model makes the Moon's distance from the Earth change dramatically over each month, by at least a factor of two, whereas with the Tusi couple the Moon simply rotates around the Earth in a way that matches observation. Mu'ayyad al-Din al-Urdi was another engineer and scholar who tried to make sense of the motion of the planets. He came up with what is now called the Urdi lemma, a way of representing the epicyclic motion of planets without using Ptolemy's method; the lemma, too, was intended to replace the concept of the equant. Abu Rayhan Biruni (b. 973) discussed the possibility of whether the Earth rotated about its own axis and around the Sun, but in his Masudic Canon, he set forth the principles that the Earth is at the center of the universe and that it has no motion of its own. He was aware that if the Earth rotated on its axis, this would be consistent with his astronomical parameters, but he considered this a problem of natural philosophy rather than mathematics. His contemporary, Abu Sa'id al-Sijzi, accepted that the Earth rotates around its axis. Al-Biruni described an astrolabe invented by Sijzi based on the idea that the Earth rotates. The fact that some people did believe that the Earth is moving on its own axis is further confirmed by an Arabic reference work from the 13th century which states: According to the geometers [or engineers] (muhandisīn), the earth is in a constant circular motion, and what appears to be the motion of the heavens is actually due to the motion of the earth and not the stars. At the Maragha and Samarkand observatories, the Earth's rotation was discussed by Najm al-Din al-Qazwini al-Katibi (d. 1277), Tusi (b. 1201) and Qushji (b. 1403). The arguments and evidence used by Tusi and Qushji resemble those used by Copernicus to support the Earth's motion. However, it remains a fact that the Maragha school never made the great leap to heliocentrism. In the 12th century, non-heliocentric alternatives to the Ptolemaic system were developed by some Islamic astronomers in al-Andalus, following a tradition established by Ibn Bajjah, Ibn Tufail, and Ibn Rushd. A notable example is Nur ad-Din al-Bitruji, who considered the Ptolemaic model mathematical, and not physical. Al-Bitruji proposed a theory of planetary motion in which he wished to avoid both epicycles and eccentrics. He was unsuccessful in replacing Ptolemy's planetary model, as the numerical predictions of the planetary positions in his configuration were less accurate than those of the Ptolemaic model. One original aspect of al-Bitruji's system is his proposal of a physical cause of celestial motions. He contradicted the Aristotelian idea that there is a specific kind of dynamics for each world, applying instead the same dynamics to the sublunar and the celestial worlds. In the late 13th century, Nasir al-Din al-Tusi created the Tusi couple. Other notable astronomers from the later medieval period include Mu'ayyad al-Din al-Urdi (c. 1266), Qutb al-Din al-Shirazi (c. 1311), Sadr al-Sharia al-Bukhari (c. 1347), Ibn al-Shatir (c. 1375), and Ali Qushji (c. 1474).
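The geometry behind the Tusi couple is compact enough to verify directly: a circle of radius r rolling inside a circle of radius 2r carries a rim point back and forth along a straight diameter, which is what let Tusi build reciprocating linear motion out of uniform circular motions. The sketch below uses the standard 2:1 hypocycloid parametrization; nothing in it comes from a medieval source.

```python
# Tusi couple: a circle of radius r rolls inside a circle of radius 2r.
# A marked point on the small circle's rim oscillates along a straight
# diameter of the large circle (standard 2:1 hypocycloid degeneration).
from math import cos, sin, pi

def tusi_point(theta, r=1.0):
    """Marked-point position after the small circle's center has swept
    angle theta around the large circle's center."""
    cx, cy = r * cos(theta), r * sin(theta)   # small circle's center
    # Rolling without slipping rotates the small circle by -2*theta,
    # so the marked point sits at angle -theta from that center:
    return cx + r * cos(-theta), cy + r * sin(-theta)

for k in range(5):
    x, y = tusi_point(k * pi / 4)
    print(f"theta={k * pi / 4:.2f}  point=({x:+.3f}, {y:+.3f})")
# y is always 0: the point slides along the x-axis between -2r and +2r.
```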
In the 15th century, the Timurid ruler Ulugh Beg of Samarkand established his court as a center of patronage for astronomy. He studied it in his youth, and in 1420 ordered the construction of Ulugh Beg Observatory, which produced a new set of astronomical tables, as well as contributing to other scientific and mathematical advances. Several major astronomical works were produced in the early 16th century, including ones by Al-Birjandi (d. 1525 or 1526) and Shams al-Din al-Khafri (fl. 1525). However, the vast majority of works written in this and later periods in the history of Islamic sciences are yet to be studied. Influences Islamic astronomy influenced Malian astronomy. Several works of Islamic astronomy were translated to Latin starting from the 12th century. The work of al-Battani (d. 929), Kitāb az-Zīj ("Book of Astronomical Tables"), was frequently cited by European astronomers and received several reprints, including one with annotations by Regiomontanus. Nicolaus Copernicus, in his book that initiated the Copernican Revolution, the De revolutionibus orbium coelestium, mentioned al-Battani no fewer than 23 times, and also mentions him in the Commentariolus. Tycho Brahe, Giovanni Battista Riccioli, Johannes Kepler, Galileo Galilei, and others frequently cited him or his observations. His data is still used in geophysics. Around 1190, al-Bitruji published an alternative geocentric system to Ptolemy's model. His system spread through most of Europe during the 13th century, with debates and refutations of his ideas continuing into the 16th century. In 1217, Michael Scot finished a Latin translation of al-Bitruji's Book of Cosmology (Kitāb al-Hayʾah), which became a valid alternative to Ptolemy's Almagest in scholasticist circles. Several European writers, including Albertus Magnus and Roger Bacon, explained it in detail and compared it with Ptolemy's. Copernicus cited his system in the De revolutionibus while discussing theories of the order of the inferior planets. Some historians maintain that the thought of the Maragheh observatory, in particular the mathematical devices known as the Urdi lemma and the Tusi couple, influenced Renaissance-era European astronomy and thus Copernicus. Copernicus used such devices in the same planetary models as found in Arabic sources. Furthermore, the exact replacement of the equant by two epicycles used by Copernicus in the Commentariolus was found in an earlier work by Ibn al-Shatir (d. c. 1375) of Damascus. Copernicus' lunar and Mercury models are also identical to Ibn al-Shatir's. While the influence of the criticism of Ptolemy by Averroes on Renaissance thought is clear and explicit, the claim of direct influence of the Maragha school, postulated by Otto E. Neugebauer in 1957, remains an open question. Since the Tusi couple was used by Copernicus in his reformulation of mathematical astronomy, there is a growing consensus that he became aware of this idea in some way. It has been suggested that the idea of the Tusi couple may have arrived in Europe leaving few manuscript traces, since it could have occurred without the translation of any Arabic text into Latin. One possible route of transmission may have been through Byzantine science, which translated some of al-Tusi's works from Arabic into Byzantine Greek. Several Byzantine Greek manuscripts containing the Tusi couple are still extant in Italy. Other scholars have argued that Copernicus could well have developed these ideas independently of the late Islamic tradition.
Copernicus explicitly references several astronomers of the "Islamic Golden Age" (10th to 12th centuries) in De Revolutionibus: Albategnius (Al-Battani), Averroes (Ibn Rushd), Thebit (Thābit ibn Qurra), Arzachel (Al-Zarqali), and Alpetragius (Al-Bitruji), but he does not show awareness of the existence of any of the later astronomers of the Maragha school. It has been argued that Copernicus could have independently discovered the Tusi couple or taken the idea from Proclus's Commentary on the First Book of Euclid, which Copernicus cited. Another possible source for Copernicus's knowledge of this mathematical device is the Questiones de Spera of Nicole Oresme, who described how a reciprocating linear motion of a celestial body could be produced by a combination of circular motions similar to those proposed by al-Tusi. Islamic influence on Chinese astronomy was first recorded during the Song dynasty, when a Hui Muslim astronomer named Ma Yize introduced the concept of seven days in a week and made other contributions. Islamic astronomers were brought to China in order to work on calendar making and astronomy during the Mongol Empire and the succeeding Yuan dynasty. The Chinese scholar Yeh-lu Chu'tsai accompanied Genghis Khan to Persia in 1210 and studied their calendar for use in the Mongol Empire. Kublai Khan brought Iranians to Beijing to construct an observatory and an institution for astronomical studies. Several Chinese astronomers worked at the Maragheh observatory, founded by Nasir al-Din al-Tusi in 1259 under the patronage of Hulagu Khan in Persia. One of these Chinese astronomers was Fu Mengchi, or Fu Mezhai. In 1267, the Persian astronomer Jamal ad-Din, who previously worked at the Maragha observatory, presented Kublai Khan with seven Persian astronomical instruments, including a terrestrial globe and an armillary sphere, as well as an astronomical almanac, which was later known in China as the Wannian Li ("Ten Thousand Year Calendar" or "Eternal Calendar"). He was known as "Zhamaluding" in China, where, in 1271, he was appointed by the Khan as the first director of the Islamic observatory in Beijing, known as the Islamic Astronomical Bureau, which operated alongside the Chinese Astronomical Bureau for four centuries. Islamic astronomy gained a good reputation in China for its theory of planetary latitudes, which did not exist in Chinese astronomy at the time, and for its accurate prediction of eclipses. Some of the astronomical instruments constructed by the famous Chinese astronomer Guo Shoujing shortly afterwards resemble the style of instrumentation built at Maragheh. In particular, the "simplified instrument" (jianyi) and the large gnomon at the Gaocheng Astronomical Observatory show traces of Islamic influence. Guo Shoujing's work in spherical trigonometry while formulating the Shoushili calendar in 1281 may also have been partially influenced by Islamic mathematics, which was largely accepted at Kublai's court. These possible influences include a pseudo-geometrical method for converting between equatorial and ecliptic coordinates, the systematic use of decimals in the underlying parameters, and the application of cubic interpolation in the calculation of the irregularity in the planetary motions.
The Hongwu Emperor (1328–1398, r. 1368–1398) of the Ming dynasty, in the first year of his reign (1368), conscripted Han and non-Han astrology specialists from the astronomical institutions in Beijing of the former Mongolian Yuan to Nanjing to become officials of the newly established national observatory. That year, the Ming government summoned the astronomical officials to come south from the upper capital of the Yuan for the first time. There were fourteen of them. In order to enhance accuracy in methods of observation and computation, the Hongwu Emperor reinforced the adoption of parallel calendar systems, the Han and the Hui. In the following years, the Ming court appointed several Hui astrologers to hold high positions in the Imperial Observatory. They wrote many books on Islamic astronomy and also manufactured astronomical equipment based on the Islamic system. The translation of two important works into Chinese was completed in 1383: Zij (1366) and al-Madkhal fi Sina'at Ahkam al-Nujum, Introduction to Astrology (1004). In 1384, a Chinese astrolabe was made for observing stars, based on the instructions for making multi-purposed Islamic equipment. In 1385, the apparatus was installed on a hill in northern Nanjing. Around 1384, during the Ming dynasty, the Hongwu Emperor ordered the Chinese translation and compilation of Islamic astronomical tables, a task that was carried out by the scholars Mashayihei, a Muslim astronomer, and Wu Bozong, a Chinese scholar-official. These tables came to be known as the Huihui Lifa (Muslim System of Calendrical Astronomy), which was published in China a number of times until the early 18th century, though the Qing dynasty had officially abandoned the tradition of Chinese-Islamic astronomy in 1659. The Muslim astronomer Yang Guangxian was known for his attacks on the Jesuits' astronomical sciences. In the early Joseon period, the Islamic calendar served as a basis for calendar reform, being more accurate than the existing Chinese-based calendars. A Korean translation of the Huihui Lifa, a text combining Chinese astronomy with the Islamic astronomy works of Jamal ad-Din, was studied in Joseon Korea during the time of Sejong the Great in the 15th century. Observatories The first systematic observations in Islam are reported to have taken place under the patronage of al-Ma'mun. Here, and in many other private observatories from Damascus to Baghdad, meridian degree measurements were performed (al-Ma'mun's arc measurement), solar parameters were established, and detailed observations of the Sun, Moon, and planets were undertaken. During the 10th century, the Buwayhid dynasty encouraged the undertaking of extensive works in astronomy, such as the construction of large-scale instruments with which observations were made in the year 950. This is known through recordings made in the zij of astronomers such as Ibn al-A'lam. The great astronomer Abd al-Rahman al-Sufi, patronised by prince 'Adud al-Dawla, systematically revised Ptolemy's catalogue of stars. Sharaf al-Dawla also established a similar observatory in Baghdad. Reports by Ibn Yunus and al-Zarqali in Toledo and Cordoba indicate the use of sophisticated instruments for their time. It was Malik Shah I who established the first large observatory, probably in Isfahan. It was here that Omar Khayyám, with many other collaborators, constructed a zij and formulated the Persian solar calendar, also known as the Jalali calendar. A modern version of this calendar, the Solar Hijri calendar, is still in official use in Iran and Afghanistan today.
The most influential observatory, however, was founded by Hulagu Khan during the 13th century. Here, Nasir al-Din al-Tusi supervised its technical construction at Maragha. The facility contained resting quarters for Hulagu Khan, as well as a library and mosque. Some of the top astronomers of the day gathered there, and from their collaboration resulted important modifications to the Ptolemaic system over a period of 50 years. In 1420, prince Ulugh Beg, himself an astronomer and mathematician, founded another large observatory in Samarkand, the remains of which were excavated in 1908 by Russian teams. Finally, Taqi ad-Din Muhammad ibn Ma'ruf founded a large observatory in Ottoman Constantinople in 1577, which was on the same scale as those in Maragha and Samarkand. The observatory was short-lived, however, as opponents of the observatory and of prognostication from the heavens prevailed, and the observatory was destroyed in 1580. While the Ottoman clergy did not object to the science of astronomy, the observatory was primarily being used for astrology, which they did oppose, and they successfully sought its destruction. As observatory development continued, Islamicate scientists began to pioneer the planetarium. The major difference between a planetarium and an observatory is how the universe is presented: in an observatory, viewers look up into the night sky, while a planetarium projects the planets and stars at eye level in a room. The scientist Ibn Firnas created a planetarium in his home that included artificial storm noises and was made completely of glass. Instruments Our knowledge of the instruments used by Muslim astronomers primarily comes from two sources: first, the remaining instruments in private and museum collections today, and second, the treatises and manuscripts preserved from the Middle Ages. Muslim astronomers of the "Golden Period" made many improvements to instruments already in use before their time, such as adding new scales or details. Celestial globes were used primarily for solving problems in celestial astronomy. Today, 126 such instruments remain worldwide, the oldest from the 11th century. The altitude of the Sun, or the right ascension and declination of stars, could be calculated with these by inputting the observer's location on the meridian ring of the globe. The initial blueprint for a portable celestial globe to measure celestial coordinates came from the Spanish Muslim astronomer Jabir ibn Aflah (d. 1145). Another skillful Muslim astronomer working on celestial globes was Abd al-Rahman al-Sufi (b. 903), whose treatise the Book of Fixed Stars describes how to design the constellation images on the globe, as well as how to use the celestial globe. However, it was in Iraq in the 10th century that the astronomer Al-Battani was working on celestial globes to record celestial data. This was different because, up until then, the traditional use of a celestial globe was as an observational instrument. Al-Battani's treatise describes in detail the plotting coordinates for 1,022 stars, as well as how the stars should be marked. An armillary sphere had similar applications. No early Islamic armillary spheres survive, but several treatises on "the instrument with the rings" were written. In this context there is also an Islamic development, the spherical astrolabe, of which only one complete instrument, from the 14th century, has survived. Brass astrolabes were an invention of Late Antiquity.
The first Islamic astronomer reported as having built an astrolabe is Muhammad al-Fazari (late 8th century). Astrolabes were popular in the Islamic world during the "Golden Age", chiefly as an aid to finding the qibla. The earliest known example is dated to 927/8 (AH 315). The device was incredibly useful, and sometime during the 10th century it was brought to Europe from the Muslim world, where it inspired Latin scholars to take up an interest in both mathematics and astronomy. The astrolabe's chief function is to serve as a portable model of space that can calculate the approximate location of any heavenly body in the Solar System at any point in time, provided the latitude of the observer is accounted for. To adjust for latitude, astrolabes often had a second plate on top of the first, which the user could swap out for the plate matching their latitude. One of the most useful features of the device is that its projection allows users to solve graphically mathematical problems which could otherwise be handled only with complex spherical trigonometry. In addition, the astrolabe allowed ships at sea to calculate their position, provided the device was sighted on a star of known altitude. Standard astrolabes performed poorly on the ocean, as choppy water and strong winds made them difficult to use, so a new iteration of the device, known as the mariner's astrolabe, was developed to counteract the difficult conditions of the sea. The instruments were used to read the time from the rising of the Sun and of fixed stars. al-Zarqali of Andalusia constructed one such instrument which, unlike its predecessors, did not depend on the latitude of the observer and could be used anywhere. This instrument became known in Europe as the Saphea. The astrolabe was arguably the most important instrument created and used for astronomical purposes in the medieval period. Its invention in early medieval times required immense study and much trial and error before a method of construction was found that worked efficiently and consistently, and the problems that arose from using the instrument led to several mathematical advances. The astrolabe's original purpose was to allow one to find the altitudes of the Sun and of many visible stars, during the day and night, respectively; the relation it solves graphically is sketched below. It ultimately contributed greatly to the mapping of the globe, and thus to further exploration of the sea. The development of the instrument incorporated azimuth circles, which raised further mathematical questions. Because astrolabes found the altitude of the Sun, they also gave one the ability to find the direction of Muslim prayer (the direction of Mecca). Aside from these purposes, the astrolabe had a great influence on navigation, specifically in the marine world.
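A minimal sketch of the computation behind these uses, assuming only the standard spherical-astronomy altitude relation (the code and its sample numbers are illustrative, not from the source): given the observer's latitude, a star's declination, and its hour angle from the meridian, the altitude an astrolabe reads off graphically follows from one formula.

import math

def star_altitude_deg(latitude_deg, declination_deg, hour_angle_deg):
    # sin(alt) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(hour angle)
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    ha = math.radians(hour_angle_deg)
    sin_alt = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)
    return math.degrees(math.asin(sin_alt))

# A star of declination +10 degrees, seen from latitude 33 degrees (roughly
# Baghdad), two hours (30 degrees of hour angle) before it crosses the meridian:
print(round(star_altitude_deg(33.0, 10.0, 30.0), 1))   # about 54.1 degrees

Each plate of an astrolabe bakes the latitude-dependent parts of this relation into its engraved altitude circles, which is why users carried interchangeable plates for different latitudes.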
The astrolabe's usefulness to navigators made the calculation of latitude simpler, which led to an increase in sea exploration, and indirectly contributed to the Renaissance, an increase in global trade activity, and ultimately the discovery of several of the world's continents. Abu Rayhan Biruni designed an instrument he called the "Box of the Moon", a mechanical lunisolar calendar employing a gear train and eight gear-wheels. This was an early example of a fixed-wired knowledge processing machine. Al-Biruni's device used the same kind of gear train preserved in a 6th-century Byzantine portable sundial. Muslims made several important improvements to the theory and construction of sundials, which they inherited from their Indian and Greek predecessors. Khwarizmi made tables for these instruments which considerably shortened the time needed to make specific calculations. Sundials were frequently placed on mosques to determine the time of prayer. One of the most striking examples was built in the 14th century by the muwaqqit (timekeeper) of the Umayyad Mosque in Damascus, ibn al-Shatir. Several forms of quadrants were invented by Muslims. Among them was the sine quadrant used for astronomical calculations, and various forms of the horary quadrant used to determine the time (especially the times of prayer) by observations of the Sun or stars. A center of the development of quadrants was 9th-century Baghdad. Abu Bakr ibn al-Sarah al-Hamawi (d. 1329) was a Syrian astronomer who invented a quadrant called "al-muqantarat al-yusra". He devoted his time to writing several books on his accomplishments and advancements with quadrants and geometrical problems. His works on quadrants include Treatise on Operations with the Hidden Quadrant and Rare Pearls on Operations with the Circle for Finding Sines. These instruments could measure the altitude of a celestial object above the horizon. As Muslim astronomers used them, however, they found further applications: the mural quadrant for recording the angles of planets and celestial bodies; the universal quadrant for solving astronomical problems at any latitude; the horary quadrant for finding the time of day from the Sun; and the almucantar quadrant, which was developed from the astrolabe. Planetary equatoria were probably made by the ancient Greeks, although no instruments or descriptions have been preserved from that period. In his commentary on Ptolemy's Handy Tables, the 4th-century mathematician Theon of Alexandria introduced some diagrams to geometrically compute the position of the planets based on Ptolemy's epicyclical theory. The first description of the construction of a solar (as opposed to planetary) equatorium is contained in Proclus's 5th-century work Hypotyposis, where he gives instructions on how to construct one in wood or bronze. The earliest known description of a planetary equatorium is contained in an early 11th-century treatise by Ibn al-Samh, preserved only as a 13th-century Castilian translation contained in the Libros del saber de astronomia (Books of the knowledge of astronomy); the same book also contains a 1080/1081 treatise on the equatorium by Al-Zarqali. Astronomy in Islamic art Examples of cosmological imagery in Islamic art can be found in objects such as manuscripts, astrological tools, and palace frescoes, and the study of the heavens by Islamic astronomers has translated into artistic representations of the universe and astrological concepts.
The Islamic world drew inspiration from Greek, Iranian, and Indian traditions to represent the stars and the universe. The desert castle at Qasr Amra, which was used as an Umayyad palace, has a bath dome decorated with the Islamic zodiac and other celestial designs. The Islamic zodiac and astrological visuals can also be seen in metalwork. Ewers depicting the twelve zodiac symbols were made to emphasize elite craftsmanship and to carry blessings; one example is now at the Metropolitan Museum of Art. Coinage also carried zodiac imagery, whose sole purpose was to represent the month in which the coin was minted. Astrological symbols could thus serve both as decoration and as a means of communicating symbolic meanings or specific information.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Metaobject] | [TOKENS: 955]
Contents Metaobject In computer science, a metaobject is an object that manipulates, creates, describes, or implements objects (including itself). The object that the metaobject pertains to is called the base object. Some information that a metaobject might define includes the base object's type, interface, class, methods, attributes, parse tree, etc. Metaobjects are examples of the computer science concept of reflection, where a system has access (usually at run time) to its own internal structure. Reflection enables a system to essentially rewrite itself on the fly, to alter its own implementation as it executes. Metaobject protocol A metaobject protocol (MOP) provides the vocabulary (protocol) to access and manipulate the structure and behaviour of systems of objects. Typical functions of a metaobject protocol include inspecting and changing class definitions, method dispatch, inheritance relationships, and the structure of instances at run time. A metaobject protocol is contrary to Bertrand Meyer's open/closed principle, which holds that software object systems should be open for extension but closed for modification. This principle effectively draws a distinction between extending an object by adding to it, and modifying an object by redefining it, proposing that the former is a desirable quality ("objects should be extensible to meet the requirements of future use cases"), while the latter is undesirable ("objects should provide a stable interface not subject to summary revision"). A metaobject protocol, by contrast, transparently exposes the internal composition of objects and the entire object system in terms of the system itself. In practice, this means that programmers may use objects to redefine themselves, possibly in quite complex ways. Furthermore, a metaobject protocol is not merely an interface to an "underlying" implementation; rather, through the metaobject protocol the object system is recursively implemented in terms of a meta-object system, which itself is theoretically implemented in terms of a meta-metaobject system, and so on until an arbitrary base case (a consistent state of the object system) is determined, with the protocol as such being the recursive functional relationship between these implementation levels. Implementing object systems in such a way opens the possibility for radical discretionary redesign, providing deep flexibility but introducing possibly complex or difficult-to-understand metastability issues (for instance, the object system must not destructively update its own metaobject protocol, its internal self-representation, but the potential destructiveness of some updates is non-trivial to predict and may be hard to reason about), depending on the recursive depth to which the desired modifications are propagated. For this reason, a metaobject protocol, when present in a language, is usually used sparingly and for specialised purposes such as software that transforms other software or itself in sophisticated ways, for example in reverse engineering. When compilation is not available at run time, there are additional complications for the implementation of a metaobject protocol. For example, it is possible to change the type hierarchy with such a protocol, but doing so may cause problems for code compiled with an alternative class model definition. Some environments have found innovative solutions for this, e.g., by handling metaobject issues at compile time. A good example of this is OpenC++. The Semantic Web object-oriented model is more dynamic than most standard object systems, and is consistent with runtime metaobject protocols.
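Python's metaclass machinery is one widely available, MOP-like facility and makes the idea concrete. The sketch below is my own illustration (the class and method names are hypothetical, and this is Python's mechanism, not the CLOS protocol discussed later): a metaobject, here the metaclass Traced, intercepts the creation of classes and rewrites their methods.

class Traced(type):
    # Metaclass: a metaobject whose base objects are classes. It intercepts
    # class creation and wraps every plain method with call logging.
    def __new__(mcls, name, bases, namespace):
        for attr, value in list(namespace.items()):
            if callable(value) and not attr.startswith("__"):
                namespace[attr] = Traced._wrap(attr, value)
        return super().__new__(mcls, name, bases, namespace)

    @staticmethod
    def _wrap(attr, func):
        def wrapper(*args, **kwargs):
            print("calling", attr)
            return func(*args, **kwargs)
        return wrapper

class Account(metaclass=Traced):   # Account's metaobject is Traced
    def deposit(self, amount):
        return amount

Account().deposit(10)   # prints "calling deposit", then runs the method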
In the Semantic Web model, for example, classes are expected to change their relations to each other, and there is a special inference engine known as a classifier that can validate and analyze evolving class models. The first metaobject protocol was in the Smalltalk object-oriented programming language developed at Xerox PARC. The Common Lisp Object System (CLOS) came later and was influenced by the Smalltalk protocol as well as by Brian C. Smith's original studies on 3-Lisp as an infinite tower of evaluators. The CLOS model, unlike the Smalltalk model, allows a class to have more than one superclass; this raises additional complexity in issues such as resolving the precedence order of the class hierarchy for a given object instance, as illustrated in the sketch below. CLOS also allows for dynamic multimethod dispatch, which is handled via generic functions rather than message passing as in Smalltalk's single dispatch. The most influential book describing the semantics and implementation of the metaobject protocol in Common Lisp is The Art of the Metaobject Protocol by Gregor Kiczales et al. Metaobject protocols are also extensively used in software engineering applications. In virtually all commercial CASE, re-engineering, and integrated development environments there is some form of metaobject protocol to represent and manipulate the design artifacts. A metaobject protocol is one way to implement aspect-oriented programming. Many of the early founders of MOPs, including Gregor Kiczales, have since moved on to become the primary advocates for aspect-oriented programming. Kiczales et al. of PARC were hired to design AspectJ for Java, a language which does not possess a native metaobject protocol.
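The superclass-ordering problem mentioned above can be shown in a few lines. This sketch is my own illustration in Python, whose C3 linearization (the __mro__) is analogous in spirit to the CLOS class precedence list, although the two algorithms differ in detail; the class names are hypothetical.

class Stream: pass
class Buffered(Stream): pass
class Encrypted(Stream): pass

class SecureFile(Buffered, Encrypted):   # more than one superclass, as CLOS allows
    pass

# The runtime must fix a deterministic method lookup order through the hierarchy:
print([c.__name__ for c in SecureFile.__mro__])
# ['SecureFile', 'Buffered', 'Encrypted', 'Stream', 'object']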
========================================
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_note-42] | [TOKENS: 4993]
Contents Orion (constellation) Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making up the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m, while the declination coordinates are between +22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere, and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, but only the brightest stars) are then visible at twilight for a few hours around local noon, just in the brightest section of the sky low in the north where the Sun is just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is actually not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), also are points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky.
Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional eighth star called Meissa, which is fairly bright to the observer. Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or the Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, is 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth, shines with a magnitude of 1.70, and, with its ultraviolet light included, is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two components orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it is approximately at the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lie at a distance of 1,150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars form a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1,100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori) which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium, and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years.
Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star-forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1,600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time. Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1,500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6,400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter, with the same figure as in other Western depictions; an arrow he has shot is represented by Aldebaran (Alpha Tauri). In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász) or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on upper right) together form the reflex bow or the lifted scythe.
In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and who was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö), and the stars "hanging" from the Belt are known as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA,[note 1] "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally – fool). This name is perhaps etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. The three mentions are Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). In ancient Aram, the constellation was known as Nephîlā′; the Nephilim are said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth brightest star, Saiph, is named from the Arabic saif al-jabbar, meaning "sword of the giant". In China, Orion was one of the 28 lunar mansions (Sieu or Xiù, 宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt.
The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion representing the sound of the word was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as a representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) of Hindu astrology. The Jain symbol carved in the Udayagiri and Khandagiri Caves, India, in the 1st century BCE bears a striking resemblance to Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj, a name denoting a hunter, which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as "Los Tres Reyes Magos" (Spanish for The Three Wise Men). The Ojibwa/Chippewa Native Americans call this constellation Mesabi, meaning Big Man. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry whoever could retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned the arm and married the daughter, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki, which represents a child's string figure similar to a cat's cradle. Several precolonial Filipino peoples referred to the belt region in particular as "balatik" (ballista), as it resembles a trap of the same name, which fires arrows by itself and is usually used for catching pigs in the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria." In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy-field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan.
The film distribution company Orion Pictures used the constellation as its logo. In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare, and sometimes holding a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or the Drie Susters (Three Sisters) by Afrikaans speakers in South Africa, and is referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th- and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin was represented by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, he holds a club (Chi Orionis). His right leg is represented by Theta Orionis and his left leg is represented by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not always be so located due to the effects of precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-The_Colossus_Computer-57] | [TOKENS: 10628]
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record-keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigational use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd century BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division.
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, a machine designed to aid in navigational calculations, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". In 1833, he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values (a software rendering of this computation is sketched after this paragraph). The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
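Rendered in software, the fixed computation of Torres Quevedo's 1914 design looks as follows; only the formula itself comes from the text above, and the sample input values are my own illustration.

def torres_formula(a, x, y, z):
    # The fixed formula of the 1914 design: a^x * (y - z)^2
    return a ** x * (y - z) ** 2

# "For a sequence of sets of values", as the paper specifies:
for a, x, y, z in [(2, 3, 7, 4), (5, 2, 10, 1), (3, 0.5, 6, 2)]:
    print(a, x, y, z, "->", torres_formula(a, x, y, z))
# 72, then 2025, then about 27.71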
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2,000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Using a binary system, rather than the harder-to-implement decimal system used in Charles Babbage's earlier design, meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after an initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 in Berlin as the first company with the sole purpose of developing computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster and more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5,000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable (a minimal simulator is sketched after this paragraph). The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
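A Turing machine of the kind described in the 1936 paper is small enough to simulate directly. The sketch below is my own minimal illustration, not Turing's universal construction itself: a finite-state control reads and writes symbols on an unbounded tape, and the sample program, a hypothetical one-state bit flipper, halts on the first blank cell.

from collections import defaultdict

def run(program, tape_input, start="A", halt="HALT", max_steps=1000):
    tape = defaultdict(lambda: "_", enumerate(tape_input))  # "_" marks blank cells
    state, head = start, 0
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = program[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

flipper = {
    ("A", "0"): ("1", "R", "A"),    # flip 0 to 1, move right, stay in state A
    ("A", "1"): ("0", "R", "A"),    # flip 1 to 0, move right, stay in state A
    ("A", "_"): ("_", "R", "HALT"), # blank cell: halt
}

print(run(flipper, "10110"))   # prints 01001_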
The Baby was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit, half a year after Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. They are powered by the systems on a chip (SoCs) described above. Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, RAM, or even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and data is provided to it. Examples include keyboards and mice. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
Examples include displays and printers. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): it reads the instruction at the address held in the program counter, decodes it, fetches any operands from memory, directs the ALU or I/O hardware to carry out the operation, writes any result back to memory, and increments the program counter so that the cycle can repeat. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
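This model of memory, together with the control cycle just described, can be made concrete in a few lines of code. The sketch below is purely illustrative: its four-instruction machine, opcode numbers, and memory layout are invented for the example and correspond to no real CPU. It shows a program stored in memory as ordinary numbers, a program counter selecting the next instruction, and a jump that is nothing more than a calculated overwrite of that counter (C is used here for illustration):

    #include <stdio.h>

    enum { HALT = 0, LOAD = 1, ADD = 2, JUMP = 3 };   /* an invented instruction set */

    int main(void) {
        /* Program and data share one memory, stored as plain numbers.
           Each instruction occupies two cells: an opcode and an operand. */
        int mem[16] = {
            LOAD, 12,   /* cell 0:  acc = mem[12]               */
            ADD,  13,   /* cell 2:  acc = acc + mem[13]         */
            JUMP, 8,    /* cell 4:  continue at cell 8          */
            ADD,  13,   /* cell 6:  skipped because of the jump */
            HALT, 0,    /* cell 8:  stop                        */
            0, 0,       /* cells 10-11: unused                  */
            40, 2       /* cells 12-13: the data                */
        };
        int pc = 0;     /* program counter: address of the next instruction */
        int acc = 0;    /* a single accumulator register                    */

        for (;;) {
            int op  = mem[pc];       /* fetch the opcode ...         */
            int arg = mem[pc + 1];   /* ... and its operand          */
            pc += 2;                 /* advance past the instruction */
            switch (op) {            /* decode and execute           */
                case LOAD: acc = mem[arg];       break;
                case ADD:  acc = acc + mem[arg]; break;
                case JUMP: pc = arg;             break;   /* overwrite the program counter */
                case HALT: printf("acc = %d\n", acc); return 0;
            }
        }
    }

Running it prints acc = 42; the jump at cell 4 decides which instructions are reached at all, which is all that control flow amounts to.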
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally, computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
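The byte and two's-complement conventions described above are easy to check in code. A short, illustrative C sketch (the particular byte values and the little-endian ordering are chosen only for the example):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int8_t a = 127;        /* the largest value a signed byte can hold   */
        a = a + 1;             /* wraps to -128 on two's-complement machines */
        uint8_t b = 0xFF;      /* the same eight bits read as unsigned: 255  */

        /* Four consecutive one-byte cells assembled into one 32-bit number
           (least significant byte first, the common little-endian order). */
        uint8_t cells[4] = { 0x78, 0x56, 0x34, 0x12 };
        unsigned value = cells[0] | (cells[1] << 8) | (cells[2] << 16)
                       | ((unsigned)cells[3] << 24);

        printf("%d %u 0x%08X\n", a, (unsigned)b, value);  /* -128 255 0x12345678 */
        return 0;
    }

On two's-complement hardware this prints -128 255 0x12345678, showing the wraparound past 127 in a signed byte, the same bit pattern read as an unsigned value, and four one-byte cells assembled into one larger number.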
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
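The time-slicing described above can be sketched in miniature. In the toy C program below, an ordinary loop stands in for the interrupt generator and two counters stand in for running programs; every name in it is invented for the illustration:

    #include <stdio.h>

    struct task { const char *name; int progress; };

    int main(void) {
        struct task tasks[2] = { { "editor", 0 }, { "player", 0 } };
        int current = 0;

        for (int tick = 0; tick < 6; tick++) {  /* each tick stands for one interrupt */
            struct task *t = &tasks[current];
            t->progress++;                      /* run the chosen task for one slice  */
            printf("tick %d: %s has done %d units of work\n",
                   tick, t->name, t->progress);
            current = (current + 1) % 2;        /* switch to the other task           */
        }
        return 0;
    }

A real operating system performs the same switching in response to hardware interrupts, saving and restoring each program's registers and program counter rather than a single progress counter.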
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
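The following example is written in the MIPS assembly language. The listing is a minimal sketch of such a program rather than a canonical one; the register choices, the label name, and the use of assembler pseudo-instructions (li, ble, move) are illustrative:

            li   $t0, 0            # running total, initially 0
            li   $t1, 1            # current number, starting at 1
    loop:   add  $t0, $t0, $t1     # add the current number to the total
            addi $t1, $t1, 1       # move on to the next number
            ble  $t1, 1000, loop   # repeat while the number is at most 1,000
            move $v0, $t0          # leave the answer, 500500, in register $v0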
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
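Because machine and assembly languages are tied to one processor family, the same computation is usually expressed once in a high-level language and then compiled for each target. For comparison with the assembly sketch above, a C version might read (illustrative, like the listing it mirrors):

    #include <stdio.h>

    int main(void) {
        int total = 0;
        for (int n = 1; n <= 1000; n++)  /* the loop that the jump instructions encode */
            total += n;
        printf("%d\n", total);           /* prints 500500 */
        return 0;
    }

A compiler translates this single source program into the machine language of whichever CPU it targets.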
An ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame), for instance, cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Potassium] | [TOKENS: 7740]
Potassium Potassium is a chemical element; it has symbol K (from Neo-Latin kalium) and atomic number 19. It is a silvery white metal that is soft enough to cut easily with a knife. Potassium metal reacts rapidly with atmospheric oxygen to form flaky white potassium peroxide in only seconds of exposure. It was first isolated from potash, the ashes of plants, from which its name derives. In the periodic table, potassium is one of the alkali metals, all of which have a single valence electron in the outer electron shell, which is easily removed to create an ion with a positive charge (which combines with anions to form salts). In nature, potassium occurs only in ionic salts. Elemental potassium reacts vigorously with water, generating sufficient heat to ignite hydrogen emitted in the reaction, and burning with a lilac-colored flame. It is found dissolved in seawater (which is 0.04% potassium by weight), and occurs in many minerals such as orthoclase, a common constituent of granites and other igneous rocks. Potassium is chemically very similar to sodium, the previous element in group 1 of the periodic table. They have a similar first ionization energy, which allows for each atom to give up its sole outer electron. It was first suggested in 1702 that they were distinct elements that combine with the same anions to make similar salts, which was demonstrated in 1807 when elemental potassium was first isolated via electrolysis. Naturally occurring potassium is composed of three isotopes, of which 40K is radioactive. Traces of 40K are found in natural sources of potassium, and it is the most common radioisotope in the human body. Potassium ions are vital for the functioning of all living cells. The transfer of potassium ions across nerve cell membranes is necessary for normal nerve transmission; potassium deficiency and excess can each result in numerous signs and symptoms, including an abnormal heart rhythm and various electrocardiographic abnormalities. Fresh fruits and vegetables are good dietary sources of potassium. The body responds to the influx of dietary potassium by increasing potassium excretion by the kidneys and sequestering potassium in the liver and muscles to avoid changes in serum potassium levels. Most industrial applications of potassium exploit the high solubility of its compounds in water, such as saltwater soap. Heavy crop production rapidly depletes the soil of potassium, and this can be remedied with agricultural fertilizers containing potassium, accounting for 95% of global potassium chemical production. Etymology The English name for the element potassium comes from the word potash, which refers to an early method of extracting various potassium salts: placing in a pot the ash of burnt wood or tree leaves, adding water, heating, and evaporating the solution. Humphry Davy named the element potassium after isolating the metal itself. The symbol K stems from kali, itself from the root word alkali, which in turn comes from Arabic: القَلْيَه al-qalyah 'plant ashes'. In 1797, the German chemist Martin Klaproth discovered "potash" in the minerals leucite and lepidolite, and realized that "potash" was not a product of plant growth but actually contained a new element, which he proposed calling kali. In 1807, Humphry Davy produced the element via electrolysis; in 1809, Ludwig Wilhelm Gilbert proposed the name Kalium for Davy's "potassium". In 1814, the Swedish chemist Berzelius advocated the name kalium for potassium, with the chemical symbol K.
The English and French-speaking countries adopted the name potassium, which was favored by Davy and French chemists Joseph Louis Gay-Lussac and Louis Jacques Thénard, whereas the other Germanic countries adopted Gilbert and Klaproth's name Kalium. The "Gold Book" of the International Union of Pure and Applied Chemistry has designated the official chemical symbol as K. Discovery Potassium metal was first isolated in 1807 by Humphry Davy, who derived it by electrolysis of molten caustic potash (KOH) with the newly discovered voltaic pile. Potassium was the first metal that was isolated by electrolysis. Later in the same year, Davy reported extraction of the metal sodium from a mineral derivative (caustic soda, NaOH, or lye) rather than a plant salt, by a similar technique, demonstrating that the elements, and thus the salts, are different. Although the production of potassium and sodium metal should have shown that both are elements, it took some time before this view was universally accepted. Properties Potassium is a soft, silvery solid that is easily cut with a knife. Because of the sensitivity of potassium to water and air, air-free techniques are normally employed for handling the element. It is unreactive toward nitrogen and saturated hydrocarbons such as mineral oil or kerosene. It readily dissolves in liquid ammonia, up to 480 g per 1000 g of ammonia at 0 °C, to form the electride [K(NH3)6]+e−, which features an electron as an anion. Reflecting its low first ionization energy of 418.8 kJ/mol, potassium is a strong reducing agent, i.e., it readily releases an electron upon contact with other materials. With graphite, potassium metal forms graphite intercalation compounds. One such compound has the formula KC8, a gold-colored solid that is described as a K+ salt of negatively charged graphite. Potassium can reduce many salts to the metal, as illustrated by the Rieke method for making magnesium powder from magnesium chloride: MgCl2 + 2 K → Mg + 2 KCl. Most potassium compounds are ionic. Owing to the high hydration energy of the K+ ion, these salts often exhibit excellent water solubility. The main species in water solution are the aquo complexes [K(H2O)n]+ where n = 6 and 7. Although typically insoluble in organic solvents, potassium salts dissolve in polar organic solvents in the presence of crown ethers and cryptands. These organic ligands envelop K+ ions, giving lipophilic coordination complexes. Similar complexation phenomena are found for some ion-binding antibiotics. Potassium forms many binary compounds, i.e., compounds of potassium and one other element. The inventory is so extensive that one gap merits mention: no nitride of potassium is known. Potassium hydride forms directly from the elements: 2 K + H2 → 2 KH. It is a white, pyrophoric solid that finds some use as a base. All of the halide salts are well known: potassium fluoride (KF), potassium chloride (KCl), potassium bromide (KBr), and potassium iodide (KI). Four oxides of potassium are well studied: potassium oxide (K2O), potassium peroxide (K2O2), potassium superoxide (KO2) and potassium ozonide (KO3). These species all hydrolyze (react with water) to give potassium hydroxide. Similarly, an extensive array of sulfides, selenides, and tellurides are well characterized. Although such simple salts are typically white and diamagnetic, KO2 is something of an exception, being deep yellow and paramagnetic. Although rarely encountered in anhydrous form, KOH is one of the dominant compounds of potassium from the commercial perspective. It is a strong base and highly corrosive.
Illustrative of its hydrophilic character, as much as 1.21 kg of KOH can dissolve in a liter of water. KOH reacts readily with carbon dioxide (CO2) to produce potassium carbonate (K2CO3), and in principle could be used to remove traces of the gas from air. Like the closely related sodium hydroxide, KOH reacts with fats to produce soaps. Potassium-based soaps are used in soap dispensers because they are more soluble in water than sodium soaps. Nitrate, nitrite, sulfate, and various phosphates also form potassium salts, all white solids that are widely used. Illustrating the thermal stability typical for these materials, potassium nitrate, sodium nitrate, and sodium nitrite form a eutectic, which remains liquid from 142 to 600 °C. Sodium and potassium salts display virtually identical properties in aqueous solution, but their differing solubilities are of practical value. The distinctive solubility of potassium heptafluorotantalate (K2[TaF7]) allows the purification of tantalum from the otherwise persistent contaminant niobium. The solubility of the K+ compound differs strikingly from that for the Na+ compound in the pairs sodium tetraphenylborate/potassium tetraphenylborate, sodium cobaltinitrite/potassium cobaltinitrite, and sodium hexachloroplatinate/potassium hexachloroplatinate. These differences are the basis for gravimetric analysis of K+. Several potassium-containing reagents, so-called primary standards, have the advantage of being non-hygroscopic, in contrast to the corresponding sodium salts. Thus, the oxidant potassium dichromate, the acid potassium hydrogen phthalate, and the reductant potassium ferrocyanide can be handled in air without gaining weight by hydration. Potassium salts are often produced from the sodium salts, e.g., sodium chromate and sodium permanganate, which are more directly obtained from ores. Organopotassium compounds are mainly of academic interest. They feature highly polar covalent K–C bonds. An example is potassium diphenylmethyl, KCH(C6H5)2. There are 25 known isotopes of potassium, three of which occur naturally: 39K (93.3%), 40K (0.0117%), and 41K (6.7%) (by mole fraction). Naturally occurring 40K has a half-life of 1.250×10⁹ years. It decays to stable 40Ar by electron capture or positron emission (11.2%) or to stable 40Ca by beta decay (88.8%). This decay results in a relatively higher concentration of argon in the atmosphere. The decay of 40K to 40Ar is the basis of a common method for dating rocks. The conventional potassium–argon dating method depends on the assumption that the rocks contained no argon at the time of formation and that all the subsequent radiogenic argon (40Ar) was quantitatively retained. Minerals are dated by measurement of the concentration of potassium and the amount of radiogenic 40Ar that has accumulated. The minerals best suited for dating include biotite, muscovite, metamorphic hornblende, and volcanic feldspar; whole rock samples from volcanic flows and shallow intrusives can also be dated if they are unaltered. Apart from dating, potassium isotopes have been used as tracers in studies of weathering and nutrient cycling, because potassium is a macronutrient required for life on Earth. 40K occurs in natural potassium (and thus in some commercial salt substitutes) in sufficient quantity that large bags of those substitutes can be used as a radioactive source for classroom demonstrations. 40K is the radioisotope with the largest abundance in the human body.
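The decay figures quoted in the next passage follow directly from the half-life given above. A rough check in C, assuming Avogadro's constant, a molar mass of about 39.10 g/mol for natural potassium, the 0.0117% abundance of 40K, and roughly 140 g of potassium in a 70 kg adult:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double half_life_s = 1.250e9 * 3.156e7;  /* 1.250 billion years, in seconds */
        const double lambda = log(2.0) / half_life_s;  /* decay constant, per second      */
        const double atoms_40k_per_g =                 /* 40K atoms per gram of natural K */
            (1.0 / 39.10) * 6.022e23 * 1.17e-4;
        const double bq_per_g = lambda * atoms_40k_per_g;

        printf("specific activity: %.1f Bq per gram of potassium\n", bq_per_g);
        printf("70 kg body (~140 g K): %.0f decays per second\n", bq_per_g * 140.0);
        return 0;
    }

It prints a specific activity of about 31.7 Bq per gram of potassium and roughly 4,400 decays per second for the whole body, matching the figures cited below.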
In healthy animals and people, 40K represents the largest source of radioactivity, greater even than 14C. In a human body of 70 kg, about 4,400 nuclei of 40K decay per second. The activity of natural potassium is 31 Bq/g. History Potash is primarily a mixture of potassium salts because plants have little or no sodium content, and the rest of a plant's major mineral content consists of calcium salts of relatively low solubility in water. While potash has been used since ancient times, its composition was not understood. Georg Ernst Stahl obtained experimental evidence that led him to suggest the fundamental difference of sodium and potassium salts in 1702, and Henri Louis Duhamel du Monceau was able to prove this difference in 1736. The exact chemical composition of potassium and sodium compounds, and the status of potassium and sodium as chemical elements, was not known then, and thus Antoine Lavoisier did not include either alkali in his list of chemical elements in 1789. For a long time, the only significant applications for potash were the production of glass, bleach, soap and gunpowder as potassium nitrate. Potassium soaps from animal fats and vegetable oils were especially prized because they tend to be more water-soluble and of softer texture, and are therefore known as soft soaps. The discovery by Justus Liebig in 1840 that potassium is a necessary element for plants and that most types of soil lack potassium caused a steep rise in demand for potassium salts. Wood-ash from fir trees was initially used as a potassium salt source for fertilizer, but, with the discovery in 1868 of mineral deposits containing potassium chloride near Staßfurt, Germany, the production of potassium-containing fertilizers began at an industrial scale. Other potash deposits were discovered, and by the 1960s, Canada had become the dominant producer. Occurrence Potassium is formed in supernovae by nucleosynthesis from lighter atoms. Potassium is believed to be created in Type II supernovae via an oxygen-consuming nuclear reaction, probably during the explosive stage of the supernova. 40K is also formed in s-process nucleosynthesis and the neon burning process. Potassium is the 20th most abundant element in the Solar System and the 17th most abundant element by weight in the Earth. It makes up about 2.6% of the weight of the Earth's crust and is the seventh most abundant element in the crust. The potassium concentration in seawater is 0.39 g/L (0.039 wt/v%), about one twenty-seventh the concentration of sodium. Elemental potassium does not occur in nature because of its high reactivity with water and oxygen. Orthoclase (potassium feldspar) is a common rock-forming mineral. Granite, for example, contains 5% potassium, which is well above the average in the Earth's crust. Sylvite (KCl), carnallite (KCl·MgCl2·6H2O), kainite (MgSO4·KCl·3H2O), and langbeinite (MgSO4·K2SO4) are the minerals found in large evaporite deposits worldwide. The deposits often show layers starting with the least soluble at the bottom and the most soluble on top. Deposits of niter (potassium nitrate) are formed by decomposition of organic material in contact with the atmosphere, mostly in caves; because of the good water solubility of niter, the formation of larger deposits requires special environmental conditions. Commercial production Potassium salts such as carnallite, langbeinite, polyhalite, and sylvite form extensive evaporite deposits in ancient lake bottoms and seabeds, making extraction of potassium salts in these environments commercially viable.
The principal source of potassium – potash – is mined in Canada, Russia, Belarus, Kazakhstan, Germany, Israel, the U.S., Jordan, and other places around the world. The first mined deposits were located near Staßfurt, Germany, but the deposits stretch from Great Britain across Germany into Poland. They are located in the Zechstein and were deposited in the Middle to Late Permian. Canada leads world production of potash; the easiest deposits to mine lie 1,000 meters (3,300 feet) below the surface of the Canadian province of Saskatchewan. The water of the Dead Sea is used by Israel and Jordan as a source of potash, while the concentration in normal oceans is too low for commercial production at current prices. Several methods are used to separate potassium salts from sodium and magnesium compounds. The most-used method is fractional precipitation using the solubility differences of the salts. Electrostatic separation of the ground salt mixture is also used in some mines. The resulting sodium and magnesium waste is either stored underground or piled up in slag heaps. Most of the mined potassium mineral ends up as potassium chloride after processing. The mineral industry refers to potassium chloride either as potash, muriate of potash, or simply MOP. Potassium metal can be isolated by electrolysis of its hydroxide in a process that has changed little since it was first used by Humphry Davy in 1807. Although the electrolysis process was developed and used on an industrial scale in the 1920s, the thermal method, reacting sodium with potassium chloride in a chemical equilibrium reaction (Na + KCl ⇌ NaCl + K), became the dominant method in the 1950s. The production of sodium–potassium alloys is accomplished by changing the reaction time and the amount of sodium used in the reaction. The Griesheimer process, employing the reaction of potassium fluoride with calcium carbide, was also used to produce potassium. Potassium can be detected by a traditional flame test. Its compounds emit a lilac color with a peak emission wavelength of 766.5 nanometers. Potassium can be quantified by spectroscopic methods, including flame photometry and X-ray fluorescence. Traditional gravimetric analysis is still employed in the fertilizer industry (the dominant use of potassium). The main analytical reagent is hexachloroplatinic acid. Treatment of a solution containing K+ ions with an excess of this platinum compound quantitatively precipitates potassium hexachloroplatinate (2 K+ + [PtCl6]2− → K2[PtCl6]), which is easily weighed and is non-hygroscopic. A similar analytical procedure yields a precipitate of potassium tetraphenylborate from sodium tetraphenylborate. Commercial uses Potassium ions are an essential component of plant nutrition and are found in most soil types. They are used as a fertilizer in agriculture, horticulture, and hydroponic culture in the form of chloride (KCl), sulfate (K2SO4), or nitrate (KNO3), representing the 'K' in 'NPK'. Agricultural fertilizers consume 95% of global potassium chemical production, and about 90% of this potassium is supplied as KCl. The potassium content of most plants ranges from 0.5% to 2% of the harvested weight of crops, conventionally expressed as amount of K2O. Modern high-yield agriculture depends upon fertilizers to replace the potassium lost at harvest. Most agricultural fertilizers contain potassium chloride, while potassium sulfate is used for chloride-sensitive crops or crops needing higher sulfur content.
The sulfate is produced mostly by decomposition of the complex minerals kainite (MgSO4·KCl·3H2O) and langbeinite (MgSO4·K2SO4). Only a very few fertilizers contain potassium nitrate. In 2005, about 93% of world potassium production was consumed by the fertilizer industry. Furthermore, potassium can play a key role in nutrient cycling by controlling litter composition. Potassium citrate is used to treat a kidney stone condition called renal tubular acidosis. Potassium, in the form of potassium chloride, is used as a medication to treat and prevent low blood potassium. Low blood potassium may occur due to vomiting, diarrhea, or certain medications. It is given by slow injection into a vein or by mouth. Potassium sodium tartrate (KNaC4H4O6, Rochelle salt) is a main constituent of some varieties of baking powder; it is also used in the silvering of mirrors. Potassium bromate (KBrO3) is a strong oxidizer (E924), used to improve dough strength and rise height. Potassium bisulfite (KHSO3) is used as a food preservative, for example in wine and beer-making (but not in meats). It is also used to bleach textiles and straw, and in the tanning of leathers. Major potassium chemicals are potassium hydroxide, potassium carbonate, potassium sulfate, and potassium chloride. Megatons of these compounds are produced annually. KOH is a strong base, which is used in industry to neutralize strong and weak acids, to control pH and to manufacture potassium salts. It is also used to saponify fats and oils, in industrial cleaners, and in hydrolysis reactions, for example of esters. Potassium nitrate (KNO3) or saltpeter is obtained from natural sources such as guano and evaporites or manufactured via the Haber process; it is the oxidant in gunpowder (black powder) and an important agricultural fertilizer. Potassium cyanide (KCN) is used industrially to dissolve copper and precious metals, in particular silver and gold, by forming complexes. Its applications include gold mining, electroplating, and electroforming of these metals; it is also used in organic synthesis to make nitriles. Potassium carbonate (K2CO3 or potash) is used in the manufacture of glass, soap, color TV tubes, fluorescent lamps, textile dyes and pigments. Potassium permanganate (KMnO4) is an oxidizing, bleaching and purification substance and is used for the production of saccharin. Potassium chlorate (KClO3) is added to matches and explosives. Potassium bromide (KBr) was formerly used as a sedative and in photography. While potassium chromate (K2CrO4) is used in the manufacture of a host of different commercial products such as inks, dyes, wood stains (by reacting with the tannic acid in wood), explosives, fireworks, fly paper, and safety matches, as well as in the tanning of leather, all of these uses are due to the chemistry of the chromate ion rather than to that of the potassium ion. There are thousands of uses of various potassium compounds. One example is potassium superoxide, KO2, an orange solid that acts as a portable source of oxygen and a carbon dioxide absorber. It is widely used in respiration systems in mines, submarines and spacecraft, as it takes up less volume than gaseous oxygen. Another example is potassium cobaltinitrite, K3[Co(NO2)6], which is used as an artist's pigment under the name Aureolin or Cobalt Yellow. The stable isotopes of potassium can be laser-cooled and used to probe fundamental and technological problems in quantum physics.
The two bosonic isotopes possess convenient Feshbach resonances to enable studies requiring tunable interactions, while 40K is one of only two stable fermions amongst the alkali metals. An alloy of sodium and potassium, NaK, is a liquid used as a heat-transfer medium and a desiccant for producing dry and air-free solvents. It can also be used in reactive distillation. The ternary alloy of 12% Na, 47% K and 41% Cs has the lowest melting point of any metallic compound, −78 °C. Metallic potassium is used in several types of magnetometers. Biological role Potassium is crucial for diverse plant species. It has a vital role in maintaining cell sap and internal root pressure required for plant growth. Potassium increases plant metabolism and uptake of carbon dioxide. Potassium is the eighth or ninth most common element by mass (0.2%) in the human body, so that a 60 kg adult contains a total of about 120 g of potassium. The body has about as much potassium as sulfur and chlorine, and only calcium and phosphorus are more abundant (with the exception of the ubiquitous CHON elements). Potassium ions are present in a wide variety of proteins and enzymes. 98% of the potassium in a human body is inside individual cells. Potassium levels influence multiple physiological processes, including nerve transmission, heart rhythm, fluid balance, blood pressure, and acid-base equilibrium. Almost all cells have a sodium–potassium pump transporting sodium ions out and potassium ions in, maintaining a balance in a narrow range of concentrations essential to cell function. This internal homeostasis mechanism requires an external homeostasis mechanism to maintain the concentration of potassium ions in plasma in the intercellular space. External homeostasis is primarily provided by the kidneys. The ion transport system moves potassium across the cell membrane using two mechanisms. One is active and pumps sodium out of, and potassium into, the cell. The other is passive and allows potassium to leak out of the cell. Potassium and sodium cations influence fluid distribution between intracellular and extracellular compartments by osmotic forces. The movement of potassium and sodium through the cell membrane is mediated by the Na⁺/K⁺-ATPase pump. This ion pump uses ATP to pump three sodium ions out of the cell and two potassium ions into the cell, creating an electrochemical gradient and electromotive force across the cell membrane. The highly selective potassium ion channels (which are tetramers) are crucial for hyperpolarization inside neurons after an action potential is triggered, to cite one example. The most recently discovered potassium ion channel is KirBac3.1, which makes a total of five potassium ion channels (KcsA, KirBac1.1, KirBac3.1, KvAP, and MthK) with a determined structure. All five are from prokaryotic species. Potassium can be sequestered in the liver and muscle. This potassium can be released into the extra-cellular plasma between meals to maintain potassium levels. Plasma potassium is normally kept at 3.5 to 5.5 millimoles (mmol) [or milliequivalents (mEq)] per liter by multiple mechanisms. Even narrower ranges are required to reduce mortality for patients with acute myocardial infarction. An average meal of 40–50 mmol presents the body with more potassium than is present in all plasma (20–25 mmol). Renal and extrarenal external homeostasis mechanisms limit the rise in plasma potassium to less than 10%. Hypokalemia, a deficiency of potassium in the plasma, can be fatal if severe.
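The electrochemical gradient mentioned above can be put into numbers with the Nernst equation, which this article does not spell out; the sketch below applies it to the concentrations cited here, about 150 mmol/L of potassium inside cells and a mid-normal plasma level of about 4.5 mmol/L:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double R = 8.314;     /* gas constant, J/(mol*K)        */
        const double T = 310.0;     /* body temperature, in kelvin    */
        const double F = 96485.0;   /* Faraday constant, C/mol        */
        const double k_out = 4.5;   /* plasma K+, mmol/L (mid-normal) */
        const double k_in = 150.0;  /* intracellular K+, mmol/L       */

        /* E = (RT/zF) ln([K+]out/[K+]in), with z = +1 for K+ */
        double e_k = (R * T / F) * log(k_out / k_in);
        printf("K+ equilibrium potential: %.0f mV\n", e_k * 1000.0);  /* about -94 mV */
        return 0;
    }

The result, roughly −94 mV, is the membrane voltage at which potassium's outward diffusion and the inward electrical pull balance. Because that voltage depends on the concentration ratio, even modest changes in plasma potassium alter the excitability of nerve and heart cells, which is why the hypokalemia just mentioned can be dangerous.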
Common causes are increased gastrointestinal loss (vomiting, diarrhea) and increased renal loss (diuresis). Deficiency symptoms include muscle weakness, paralytic ileus, ECG abnormalities, decreased reflex response and, in severe cases, respiratory paralysis, alkalosis, and cardiac arrhythmia. Potassium content in the plasma is tightly controlled by three basic mechanisms: The reactive negative-feedback system refers to the system that induces renal secretion of potassium in response to a rise in the plasma potassium (potassium ingestion, shift out of cells, or intravenous infusion). The reactive feed-forward system refers to an incompletely understood system that induces renal potassium secretion in response to potassium ingestion prior to any rise in the plasma potassium. This is probably initiated by gut cell potassium receptors that detect ingested potassium and trigger vagal afferent signals to the pituitary gland. The predictive or circadian system increases renal secretion of potassium during mealtime hours (e.g. daytime for humans, nighttime for rodents) independent of the presence, amount, or absence of potassium ingestion. It is mediated by a circadian oscillator in the suprachiasmatic nucleus of the brain (central clock), which causes the kidney (peripheral clock) to secrete potassium in this rhythmic circadian fashion. Potassium ions in the blood plasma are filtered at the glomeruli into the renal tubules of the kidneys, where most are reabsorbed. Only a small amount of potassium reaches the distal nephron. Renal handling of potassium is closely connected to sodium handling. Potassium is the major cation (positive ion) inside animal cells (150 mmol/L, 4.8 g/L), while sodium is the major cation of extracellular fluid (150 mmol/L, 3.345 g/L). In the kidneys, about 180 liters of plasma is filtered through the glomeruli and into the renal tubules per day. Sodium is reabsorbed to maintain extracellular volume, osmotic pressure, and serum sodium concentration within narrow limits. Potassium is reabsorbed to maintain serum potassium concentration within narrow limits. Sodium pumps in the renal tubules operate to reabsorb sodium. Potassium must be conserved, but because the amount of potassium in the blood plasma is very small and the pool of potassium in the cells is about 30 times as large, the situation is not so critical for potassium. Since potassium is moved passively in counter flow to sodium in response to an apparent (but not actual) Donnan equilibrium, the urine can never sink below the concentration of potassium in serum except sometimes by actively excreting water at the end of the processing. Potassium is excreted twice and reabsorbed three times before the urine reaches the collecting tubules. With no potassium intake, it is excreted at about 200 mg per day until, in about a week, potassium in the serum declines to a mildly deficient level of 3.0–3.5 mmol/L. If potassium is still withheld, the concentration continues to fall until a severe deficiency causes eventual death. The potassium moves passively through pores in the cell membrane. When ions move through ion transporters (pumps), there is a gate in the pumps on both sides of the cell membrane, and only one gate can be open at once. As a result, approximately 100 ions are forced through per second. Ion channels have only one gate, and only one kind of ion can stream through them, at 10 million to 100 million ions per second. Calcium is required to open the pores, although calcium may work in reverse by blocking at least one of the pores.
Carbonyl groups on the amino acids lining the pore mimic, through their electrostatic charges, the hydration that the ion experiences in water solution: four carbonyl groups inside the pore take the place of the ion's water shell.
The U.S. National Academy of Medicine (NAM), on behalf of both the U.S. and Canada, sets Dietary Reference Intakes, including Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs), or Adequate Intakes (AIs) when there is not sufficient information to set EARs and RDAs. For both males and females under 9 years of age, the AIs for potassium are: 400 mg for infants 0 to 6 months old, 860 mg for infants 7 to 12 months old, 2,000 mg for children 1 to 3 years old, and 2,300 mg for children 4 to 8 years old. For males 9 years of age and older, the AIs are: 2,500 mg for ages 9 to 13, 3,000 mg for ages 14 to 18, and 3,400 mg for ages 19 and older. For females 9 years of age and older, the AIs are: 2,300 mg for ages 9 to 18 and 2,600 mg for ages 19 and older. For pregnant females, the AIs are 2,600 mg for ages 14 to 18 and 2,900 mg for ages 19 and older; for lactating females, they are 2,500 mg for ages 14 to 18 and 2,800 mg for ages 19 and older (these age bands are encoded in the sketch below). As for safety, the NAM also sets tolerable upper intake levels (ULs) for vitamins and minerals, but for potassium the evidence was insufficient, so no UL was established. As of 2004, most American adults consumed less than 3,000 mg per day. Likewise, in the European Union, particularly in Germany and Italy, insufficient potassium intake is somewhat common. The National Health Service in the United Kingdom recommends that "adults (19 to 64 years) need 3500 mg per day" and that excess amounts may cause health problems such as stomach pain and diarrhea.
Potassium is present in all fruits, vegetables, meat, and fish. Foods with high potassium concentrations include yam, parsley, dried apricots, milk, chocolate, all nuts (especially almonds and pistachios), potatoes, bamboo shoots, bananas, avocados, coconut water, soybeans, and bran. The United States Department of Agriculture also lists tomato paste, orange juice, beet greens, white beans, plantains, and many other dietary sources of potassium, ranked in descending order by potassium content. A day's worth of potassium is contained in 5 plantains or 11 bananas.
Although mild hypokalemia does not cause distinct symptoms, it is a risk factor for hypertension and cardiac arrhythmia. Severe hypokalemia usually presents with hypertension, arrhythmia, muscle cramps, fatigue, weakness, and constipation. Causes of hypokalemia include vomiting, diarrhea, medications such as furosemide and steroids, kidney dialysis, diabetes insipidus, hyperaldosteronism, and hypomagnesemia. Treating or preventing hypokalemia, a common electrolyte imbalance observed in 20% of hospitalized patients, requires potassium supplements. There are many causes of hypokalemia, but in a clinical setting, diuretics that block reabsorption of sodium and water are the most common drug-related cause. A variety of prescription and over-the-counter supplements are available. Potassium chloride may be dissolved in water, but the salty/bitter taste makes liquid supplements unpalatable.
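As a minimal sketch, the age-banded adequate intakes quoted above can be encoded as a simple lookup. The function name and structure are illustrative, not from any official source; the values are exactly those listed in the text:

import math

def potassium_ai_mg(age_years, sex="male", pregnant=False, lactating=False):
    """Return the NAM Adequate Intake for potassium, in mg/day, per the bands above."""
    if pregnant:
        return 2600 if age_years <= 18 else 2900
    if lactating:
        return 2500 if age_years <= 18 else 2800
    if age_years < 1:
        # Infants: 400 mg for 0-6 months, 860 mg for 7-12 months
        return 400 if age_years < 0.5 else 860
    if age_years <= 3:
        return 2000
    if age_years <= 8:
        return 2300
    if sex == "male":
        if age_years <= 13:
            return 2500
        if age_years <= 18:
            return 3000
        return 3400
    # Females 9 years and older
    return 2300 if age_years <= 18 else 2600

print(potassium_ai_mg(30, sex="female"))  # prints 2600, matching the AI stated above

For example, the call shown returns 2,600 mg for a 30-year-old female, in agreement with the prose figure.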
Potassium is also available in tablets or capsules, which are formulated to allow potassium to leach slowly out of a matrix. The mechanism of action of potassium involves various types of transporters and channels that facilitate its movement across cell membranes. This process can increase the pumping of hydrogen ions, which in turn can escalate the production of gastric acid, potentially contributing to the development of gastric ulcers. An FDA ruling in the US requires a warning about this issue on any non-prescription potassium pill containing more than 99 mg of potassium. Studies of dietary potassium intake show that higher intakes are correlated with lower blood pressure. Studies of potassium supplements to mitigate the impact of hypertension, and thereby reduce cardiovascular risk, give conflicting conclusions: some report "a modest but significant impact", while others find no effect. Potassium chloride and potassium bicarbonate may be useful to control mild hypertension. In 2020, potassium was the 33rd most commonly prescribed medication in the U.S., with more than 17 million prescriptions. Other uses of potassium supplements include preventing the formation of kidney stones, a condition that can lead to renal complications if left untreated. Potassium also has a role in bone health: it contributes to the acid-base equilibrium in the body and helps protect bone tissue. For individuals with type 2 diabetes, potassium supplementation may be necessary, since potassium is essential for the secretion of insulin by pancreatic beta cells, which helps regulate glucose levels. Excessive potassium intake can have adverse effects, such as gastrointestinal discomfort and disturbances in heart rhythm; potassium chloride tablets are specifically associated with pill esophagitis.
Potassium can be detected by taste because it triggers three of the five types of taste sensations, depending on concentration. Dilute solutions of potassium ions taste sweet, allowing moderate concentrations in milk and juices, while higher concentrations become increasingly bitter/alkaline and finally also salty. The combined bitterness and saltiness of high-potassium solutions makes high-dose potassium supplementation by liquid drinks a palatability challenge. As a food additive, potassium chloride has a salty taste. People wishing to increase their potassium intake or decrease their sodium intake, after checking with a health professional that it is safe to do so, can substitute potassium chloride for some or all of the sodium chloride (table salt) used in cooking and at the table.
Precautions Potassium metal can react violently with water, producing potassium hydroxide (KOH) and hydrogen gas. This reaction is exothermic and releases enough heat to ignite the resulting hydrogen in the presence of oxygen. Finely powdered potassium ignites in air at room temperature, and the bulk metal ignites in air if heated. Because its density is 0.89 g/cm3, burning potassium floats in water, which exposes it to atmospheric oxygen. Many common fire-extinguishing agents, including water, are either ineffective or make a potassium fire worse. Nitrogen, argon, sodium chloride (table salt), sodium carbonate (soda ash), and silicon dioxide (sand) are effective if they are dry, as are some Class D dry powder extinguishers designed for metal fires; these agents deprive the fire of oxygen and cool the potassium metal. During storage, potassium forms peroxides and superoxides.
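In standard notation, the hazardous reactions just described are (a sketch of the main processes, with states and conditions omitted):

2 K + 2 H2O → 2 KOH + H2 (violent reaction with water)
2 H2 + O2 → 2 H2O (ignition of the liberated hydrogen)
2 K + O2 → K2O2 (peroxide formation during storage)
K + O2 → KO2 (superoxide formation during storage)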
These peroxides may react violently with organic compounds such as oils, and both peroxides and superoxides may react explosively with metallic potassium. Because potassium reacts with water vapor in the air, it is usually stored under anhydrous mineral oil or kerosene. Unlike lithium and sodium, however, potassium should not be stored under oil for longer than six months unless it is kept in an inert (oxygen-free) atmosphere or under vacuum. After prolonged storage in air, dangerous shock-sensitive peroxides can form on the metal and under the lid of the container, and they can detonate when the container is opened. Hyperkalemia, which can result from ingestion of large amounts of potassium compounds, from certain drugs, or from homeostatic failure, leads to a variety of brady- and tachyarrhythmias that can be fatal. Potassium chloride is used in the U.S. for lethal injection executions.
========================================
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_ref-52] | [TOKENS: 1856]
Contents xAI (company) X.AI Corp., doing business as xAI, is an American artificial intelligence (AI), social media, and technology company that is a wholly owned subsidiary of the American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. As chief engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment"; by May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After that raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital, and Tribe Capital. As of August 2024[update], Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, and Sequoia Capital, among others, bringing its total funding to date to over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video-generation tools. On March 28, 2025, Musk announced that xAI had acquired sister company X Corp., the developer of social media platform X (formerly known as Twitter), which Musk had previously acquired in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt; xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that it had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans; Morgan Stanley took no stake in it. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government", and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot. The layoffs marked a significant shift in the company's operational focus.
In June 2024, the Greater Memphis Chamber announced that xAI was planning to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer became fully operational in December 2024. Local government in Memphis has voiced concerns about the facility's electricity usage, 150 megawatts of power at peak, and while the agreement with the city is being worked out, the company has deployed 14 portable VoltaGrid methane-gas-powered generators to temporarily supplement the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of air-polluting gases, and that xAI has been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators produce about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. The Southern Environmental Law Center has stated that the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. On November 26, 2025, Elon Musk announced plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, about 10% of the data center's estimated power use (implying a total estimated draw of roughly 300 megawatts). xAI has continually expanded its infrastructure, purchasing a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power; the expansion is driven by xAI's commitment to competing with OpenAI's ChatGPT and Anthropic's Claude models. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that made xAI a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring reorganized xAI into four primary development teams, one for the Grok app and others for features such as Grok Imagine, with Grokipedia, X, and API features falling under smaller teams. Products According to Musk in July 2023, a politically correct AI would be "incredibly dangerous" and misleading; he cited as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking", and that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot integrated with X. xAI stated that when the bot was out of beta, it would only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it had previously been available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced.[non-primary source needed] On August 14, 2024, Grok-2 was made available to X Premium subscribers; it is the first Grok model with image-generation capabilities.
On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature; xAI also introduced a web-search function called DeepSearch. In March 2025, xAI added an image-editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4; a high-performance version of the model called Grok Heavy was also unveiled, with access at the time costing $300 per month. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release what he described as a great AI-generated game before the end of 2026.
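Since the section above mentions the API released in October 2024, here is a hypothetical minimal sketch of calling a chat endpoint with it. The endpoint URL, model name, environment variable, and OpenAI-style request and response shapes are assumptions about how such an API is commonly exposed, not details confirmed by this article; consult xAI's documentation for the real values.

import json
import os
import urllib.request

API_KEY = os.environ["XAI_API_KEY"]           # assumed env var holding an API key
URL = "https://api.x.ai/v1/chat/completions"  # assumed endpoint path
payload = {
    "model": "grok-4",  # assumed model identifier
    "messages": [{"role": "user", "content": "Summarize what xAI is."}],
}
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
with urllib.request.urlopen(req) as resp:  # send the POST request
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])  # assumed response shape

The sketch uses only the Python standard library so it stays self-contained; a real client library, if xAI provides one, would be preferable.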
========================================
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-47] | [TOKENS: 17273]
Contents United States The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. As of 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890. 
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs. Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" usually does not refer to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America. History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million. 
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence.
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since its drafting in 1777, the Articles of Confederation was ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. His resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as the country's first president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march. 
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all the Thirteen colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. Dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1861, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War. 
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th, when the United States already outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917. 
The United States entered World War I alongside the Allies in 1917 helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, radio for mass communication and early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies of World War II in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies Italy and Germany until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. totally withdrawing in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The Fall of Communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology. 
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état. Geography The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones spanning approximately 4.5 million square miles (11.7 million km2) of ocean. 
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida and U.S. territories in the Caribbean and Pacific are tropical. The United States receives more high-impact extreme weather incidents than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change in the country, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered as the most attractive to the population are the most vulnerable. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks, and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environmental-related issues. The idea of wilderness has shaped the management of public lands since 1964, with the Wilderness Act. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index. Government and politics The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S. 
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared between three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, they developed independently in the 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform while the latter is perceived as relatively conservative in its platform. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024[update]. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression. 
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Free Trade Agreement. The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. had become a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid resumed later. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. Total strength of the entire military is about 1.3 million active duty with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, which is by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad, and maintains deployments greater than 100 active duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force. 
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 U.S. police agencies from local to national level in the United States. Law in the United States is mainly enforced by local police departments and sheriff departments in their municipal or county jurisdictions. The state police departments have authority in their respective state, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights, national security, enforcing U.S. federal courts' rulings and federal laws, and interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogenous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for purchasing power parities (PPP), and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S. 
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, the large market for U.S. Treasury securities, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. is party to free trade agreements with several countries, including its partners under the USMCA. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output, after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace, and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and in 2023 had the fourth-highest median household income, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, U.S. wage gains have become decoupled from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. About 771,480 people were homeless in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five children, or approximately 13 million, experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of the few countries in the world without federal paid family leave as a legal right. 
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and a lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and in scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth in R&D spending as a percentage of GDP. In 2022, the United States published the second-highest number of scientific papers of any country, after China. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025, the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, among them Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuels; its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population but consumed around 16% of the world's energy. The U.S. is the world's second-largest emitter of greenhouse gases, after China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity. 
It also has the highest number of nuclear power reactors of any country. As of 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' road network, at 4 million miles (6.4 million kilometers) and owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. In 2022, the U.S. was among the top ten countries in vehicle ownership per capita, with 850 vehicles per 1,000 people. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, counting both bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail line in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to those of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia and the Southeast. The United States has an extensive air transportation network, and U.S. civilian airlines are all privately owned. The three largest airlines in the world by total number of passengers carried are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Sixteen of the world's 50 busiest airports, including five of the top 10, are in the United States. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities; some airports are privately owned. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight, in contrast to the more passenger-centered rail networks of Europe. Because they are largely privately owned, U.S. railroads lag behind those of the rest of the world in electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km); they are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, the busiest being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S. 
population had a net gain of one person every 16 seconds, or about 5,400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate in the U.S. stood at 1.6 children per woman; at 23%, the country had the world's highest rate of children living in single-parent households in 2019. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group, at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group, at 18.7% of the population. African Americans constitute the country's third-largest ancestry group, at 12.1% of the population. Asian Americans are the country's fourth-largest group, composing 5.9% of the population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. English is the de facto national language of the United States, and in 2025 Executive Order 14224 declared it official. However, the U.S. has never had a de jure official language, as Congress has never passed a law designating English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken at home by 1 million people in 2010, fell to 857,000 speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants. 
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities (New York City, Los Angeles, Chicago, and Houston) had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level and an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been widening ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes than peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications. 
In 2010, then-President Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per public elementary and secondary school student in the 2020–2021 school year. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 laureates (having won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditure on higher education, the U.S. spends more per student than the OECD average, and in combined public and private spending Americans spend more than any other nation. Colleges and universities directly funded by the federal government, which include the U.S. service academies, the Naval Postgraduate School, and the military staff colleges, do not charge tuition and are limited to military personnel and government employees. Despite some student loan forgiveness programs being in place, student loan debt increased by 102% between 2010 and 2020 and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity, the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization. 
Nearly all present-day Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants, with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese-majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured, as well as the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality, and LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants; whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 with the purpose to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: the National Endowment for the Arts, the National Endowment for the Humanities, the Federal Council on the Arts and the Humanities, and the Institute of Museum and Library Services. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalist movement; Henry David Thoreau, author of Walden, was influenced by it. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851). 
Major American poets of the 19th-century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered on industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox); all four major broadcast television networks are commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In 2020, there were 15,460 licensed full-power radio stations in the U.S., according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers complementing the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT, all of them American-owned. Other popular platforms include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S. 
video game industry consisted of 2,457 companies that supported around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new, distinct dramatic forms in the Tom shows, showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals, and U.S. theater also has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances, and one is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to express themselves individually. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles would have arrived early enough to make a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but rural artisans there tended to continue their traditional forms longer than their urban counterparts did, and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks. 
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and some trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade; minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, first invented in the 1930s and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W. C. Handy and Jelly Roll Morton; Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars such as Frank Sinatra and Elvis Presley became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift, and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. One study found that proximity to Manhattan's Garment District has been synonymous with American fashion since the district's emergence in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford, and Calvin Klein, are headquartered in Manhattan, and some labels cater to niche markets such as preteens. New York Fashion Week is one of the most influential fashion shows in the world and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S. 
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also a metonym for the American filmmaking industry. The major film studios of the United States are the primary source of the world's most commercially successful and most-attended movies. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World crops, especially pumpkin, corn, and potatoes, along with turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth; it became the United States' most prestigious culinary school, where many of the most talented American chefs have studied prior to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020 and employed more than 15 million people, directly representing 10% of the nation's workforce. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin-starred restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in present-day New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine. 
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose numbers expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s and began replacing it with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts, and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League (NFL), the National Basketball Association (NBA), Major League Baseball (MLB), Major League Soccer (MLS), and the National Hockey League (NHL). All these leagues enjoy wide-ranging domestic media coverage, and all except MLS are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL has the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. At the collegiate level, earnings for member institutions exceed $1 billion annually, and college football and basketball attract large audiences; the NCAA March Madness tournament and the College Football Playoff are among the most watched national sporting events. In the U.S., intercollegiate sports serve as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first Olympic Games held outside Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country. 
In other international competition, the United States is home to a number of prestigious events, including the America's Cup, the World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup four times and the Olympic soccer tournament five times. The 1999 FIFA Women's World Cup was hosted by the United States; its final match was attended by 90,185 spectators, setting the world record at the time for the largest crowd at a women's sporting event. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup.