https://en.wikipedia.org/wiki/Internet
The Internet (or internet) is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP) to communicate between networks and devices. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, internet telephony, and file sharing. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching in the 1960s and the design of computer networks for data communication. The set of rules (communication protocols) to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, encouraged worldwide participation in the development of new networking technologies and the merger of many networks using DARPA's Internet protocol suite. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet, and generated sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the internetwork. Although the Internet was widely used by academia in the 1980s, the subsequent commercialization of the Internet in the 1990s and beyond incorporated its services and technologies into virtually every aspect of modern life. Most traditional communication media, including telephone, radio, television, paper mail, and newspapers, are reshaped, redefined, or even bypassed by the Internet, giving birth to new services such as email, Internet telephone, Internet television, online music, digital newspapers, and video streaming websites. Newspapers, books, and other print publishing have adapted to website technology or have been reshaped into blogging, web feeds, and online news aggregators. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has grown exponentially for major retailers, small businesses, and entrepreneurs, as it enables firms to extend their "brick and mortar" presence to serve a larger market or even sell goods and services entirely online. Business-to-business and financial services on the Internet affect supply chains across entire industries. The Internet has no single centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies. 
The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. When it came into common use, most publications treated the word Internet as a capitalized proper noun; this has become less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services, a collection of documents (web pages) and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office (IPTO) at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense (DoD). Research into packet switching, one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory (NPL) in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network and routing concepts proposed by Baran were incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles (UCLA) and the Stanford Research Institute (now SRI International) on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. In a sign of future growth, 15 sites were connected to the young ARPANET by the end of 1971. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States.
Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and NDRE), and to Peter Kirstein's research group at University College London (UCL), which provided a gateway to British academic networks, forming the first internetwork for resource sharing. ARPA projects, the International Network Working Group, and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network or "a network of networks". In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". They used the term internet as a shorthand for internetwork, and later RFCs repeated this use. Cerf and Kahn credit Louis Pouzin and others with important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers (ISPs) emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet.
Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to show growth characteristics similar to the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking services, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. The estimated total number of Internet users was 2.095 billion (30% of the world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. ICANN coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet.
Regional Internet registries (RIRs) were established for five regions of the world. The African Network Information Center (AfriNIC) for Africa, the American Registry for Internet Numbers (ARIN) for North America, the Asia–Pacific Network Information Centre (APNIC) for Asia and the Pacific region, the Latin American and Caribbean Internet Addresses Registry (LACNIC) for Latin America and the Caribbean region, and the Réseaux IP Européens – Network Coordination Centre (RIPE NCC) for Europe, the Middle East, and Central Asia were delegated to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals (anyone may join) as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the IETF, Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues. Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, modems etc. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. The internet packets are carried by other full-fledged networking protocols with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers. Service tiers Internet service providers (ISPs) establish the worldwide connectivity between individual networks at various levels of scope. End-users who only access the Internet when needed to perform a function or obtain information, represent the bottom of the routing hierarchy. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables and governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. 
Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET. Access Common methods of Internet access by users include dial-up with a computer modem via telephone circuits, broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology (e.g. 3G, 4G). The Internet may often be accessed from computers in libraries and Internet cafés. Internet access points exist in many public places such as airport halls and coffee shops. Various terms are used, such as public Internet kiosk, public access terminal, and Web payphone. Many hotels also have public terminals that are usually fee-based. These terminals are widely accessed for various uses, such as ticket booking, bank deposits, or online payments. Wi-Fi provides wireless access to the Internet via local computer networks. Hotspots providing such access include Wi-Fi cafés, where users need to bring their own wireless devices, such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh, where the Internet can then be accessed from places such as a park bench. Experiments have also been conducted with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular networks, and fixed wireless services. Modern smartphones can also access the Internet through the cellular carrier network. For Web browsing, these devices provide applications such as Google Chrome, Safari, and Firefox, and a wide variety of other Internet software may be installed from app stores. Internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. Mobile communication The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The number of subscriptions was predicted to rise to 5.7 billion users in 2020. Some 80% of the world's population were covered by a 4G network. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. Zero-rating, the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost, has offered opportunities to surmount economic hurdles but has also been accused by its critics of creating a two-tiered Internet.
To address the issues with zero-rating, an alternative model has emerged in the concept of 'equal rating' and is being tested in experiments by Mozilla and Orange in Africa. Equal rating prevents prioritization of one type of content and zero-rates all content up to a specified data cap. In a study published by Chatham House, 15 out of 19 countries researched in Latin America had some kind of hybrid or zero-rated product offered. Some countries in the region had a handful of plans to choose from (across all mobile network operators) while others, such as Colombia, offered as many as 30 pre-paid and 34 post-paid plans. A study of eight countries in the Global South found that zero-rated data plans exist in every country, although there is a great range in the frequency with which they are offered and actually used in each. The study looked at the top three to five carriers by market share in Bangladesh, Colombia, Ghana, India, Kenya, Nigeria, Peru, and the Philippines. Across the 181 plans examined, 13 percent were offering zero-rated services. Another study, covering Ghana, Kenya, Nigeria and South Africa, found Facebook's Free Basics and Wikipedia Zero to be the most commonly zero-rated content. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation. At the top is the application layer, where communication is described in terms of the objects or data structures most appropriate for each application. For example, a web browser operates in a client–server application model and exchanges information with the HyperText Transfer Protocol (HTTP) and an application-germane data structure, such as the HyperText Markup Language (HTML). Below this top layer, the transport layer connects applications on different hosts with a logical channel through the network. It provides this service with a variety of possible characteristics, such as ordered, reliable delivery (TCP), and an unreliable datagram service (UDP). Underlying these layers are the networking technologies that interconnect networks at their borders and exchange traffic across them. The Internet layer implements the Internet Protocol (IP) which enables computers to identify and locate each other by IP address and route their traffic via intermediate (transit) networks. The Internet Protocol layer code is independent of the type of network that it is physically running over. At the bottom of the architecture is the link layer, which connects nodes on the same physical link, and contains protocols that do not require routers for traversal to other links. The protocol suite does not explicitly specify hardware methods to transfer bits, or protocols to manage such hardware, but assumes that appropriate technology is available. Examples of that technology include Wi-Fi, Ethernet, and DSL. Internet protocol The most prominent component of the Internet model is the Internet Protocol (IP). IP enables internetworking and, in essence, establishes the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6. IP Addresses For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. They consist of fixed-length numbers, which are found within the packet.
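As a rough illustration of these fixed-length numbers, the sketch below (Python standard library only) converts between the familiar textual notation of an address and its underlying integer value; the addresses 203.0.113.7 and 2001:db8::1 are reserved documentation addresses chosen here purely as arbitrary examples.

```python
# Minimal sketch: IP addresses are fixed-length numbers behind their textual form.
import ipaddress

addr = ipaddress.IPv4Address("203.0.113.7")   # RFC 5737 documentation address, arbitrary example
as_int = int(addr)                             # the underlying fixed-length 32-bit value
print(as_int)                                  # 3405803783
print(ipaddress.IPv4Address(as_int))           # 203.0.113.7 again

addr6 = ipaddress.IPv6Address("2001:db8::1")   # RFC 3849 documentation address
print(int(addr6).bit_length() <= 128)          # True: IPv6 addresses are 128-bit numbers
```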
IP addresses are generally assigned to equipment either automatically via DHCP or are configured manually. However, the network also supports other addressing systems. Users generally enter domain names (e.g. "en.wikipedia.org") instead of IP addresses because they are easier to remember; they are converted by the Domain Name System (DNS) into IP addresses which are more efficient for routing purposes. IPv4 Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. IPv6 Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries (RIRs) began to urge all resource managers to plan rapid adoption and conversion. IPv6 is not directly interoperable by design with IPv4. In essence, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities must exist for internetworking or nodes must have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol. Network infrastructure, however, has been lagging in this development. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts, e.g., peering agreements, and by technical specifications or protocols that describe the exchange of data over the network. Indeed, the Internet is defined by its interconnections and routing policies. Subnetwork A subnetwork or subnet is a logical subdivision of an IP network. The practice of dividing a network into two or more networks is called subnetting. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface. The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, an Internet Protocol version 4 network written with a /24 suffix has 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing; the 256 addresses starting at the network address belong to this network. In IPv6, an address block with a 32-bit routing prefix is a large block containing 2⁹⁶ addresses. For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for a 24-bit routing prefix.
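A short sketch can make the CIDR prefix and netmask relationship concrete. It uses Python's standard ipaddress module with the reserved documentation network 192.0.2.0/24 as an arbitrary example; it is an illustration of the bitwise-AND rule described above, not a description of any particular deployment.

```python
# Minimal sketch: CIDR prefixes, netmasks, and the bitwise-AND relationship.
import ipaddress

net = ipaddress.IPv4Network("192.0.2.0/24")   # RFC 5737 documentation network
print(net.netmask)                            # 255.255.255.0 (the /24 subnet mask)
print(net.num_addresses)                      # 256 (8 host bits -> 2**8 addresses)

# ANDing the netmask with any address in the network yields the routing prefix.
host = ipaddress.IPv4Address("192.0.2.57")
prefix = ipaddress.IPv4Address(int(host) & int(net.netmask))
print(prefix)                                 # 192.0.2.0
print(host in net)                            # True
```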
Traffic is exchanged between subnetworks through routers when the routing prefixes of the source address and the destination address differ. A router serves as a logical or physical boundary between the subnets. The benefits of subnetting an existing network vary with each deployment scenario. In the address allocation architecture of the Internet using CIDR and in large organizations, it is necessary to allocate address space efficiently. Subnetting may also enhance routing efficiency or have advantages in network management when subnetworks are administratively controlled by different entities in a larger organization. Subnets may be arranged logically in a hierarchical architecture, partitioning an organization's network address space into a tree-like routing structure. Routing Computers and routers use routing tables in their operating system to direct IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet. The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. IETF While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the Internet Engineering Task Force (IETF). The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies. Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. Most servers that provide these services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. World Wide Web The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web.
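To make the role of HTTP concrete, the sketch below issues a bare-bones HTTP/1.1 GET request over a TCP socket, roughly what a browser does before rendering a page. It is a minimal illustration with no TLS, redirects, or caching, and the reserved domain example.com serves only as a placeholder; it also shows the layering described earlier, with application-layer text carried over a transport-layer connection.

```python
# Minimal sketch: an HTTP/1.1 GET issued directly over a TCP socket (Python 3.8+).
import socket

host = "example.com"                       # reserved example domain, placeholder only
request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    f"Connection: close\r\n"
    f"\r\n"
).encode("ascii")

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request)                  # application-layer message over TCP
    response = b""
    while chunk := sock.recv(4096):        # read until the server closes the connection
        response += chunk

status_line = response.split(b"\r\n", 1)[0]
print(status_line.decode())                # e.g. "HTTP/1.1 200 OK"
```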
Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; HTTP is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft's Internet Explorer/Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content that runs while the user is interacting with the page. Client-side software can include animations, games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo!, Bing and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed media, books, encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information on a large scale. The Web has enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page, a blog, or building a website involves little initial cost and many cost-free services are available. However, publishing and maintaining large, professional websites with attractive, diverse and up-to-date information is still a difficult and expensive proposition. Many individuals and some companies and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information and be attracted to the corporation as a result. Advertising on popular web pages can be lucrative, and e-commerce, which is the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. When the Web developed in the 1990s, a typical web page was stored in completed form on a web server, formatted in HTML, ready for transmission to a web browser in response to a request. Over time, the process of creating and serving web pages has become dynamic, creating a flexible design, layout, and content. Websites are often created using content management software with, initially, very little content. Contributors to these systems, who may be paid staff, members of an organization or the public, fill underlying databases with content using editing pages designed for that purpose while casual visitors view and read this content in HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.
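The shift from static files to dynamically generated pages can be sketched in a few lines. The example below is a hypothetical, minimal illustration using Python's built-in http.server: page content lives in a small in-memory store, standing in for the database behind a content management system, and is rendered into HTML at request time rather than served from a pre-written file.

```python
# Minimal sketch: dynamic page generation from stored content (all names hypothetical).
from http.server import BaseHTTPRequestHandler, HTTPServer

articles = {
    "/": "Welcome to a tiny dynamically generated site.",
    "/about": "Pages are rendered from stored content at request time.",
}

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = articles.get(self.path)
        if body is None:
            self.send_error(404, "No such page")
            return
        # Render HTML on each request instead of serving a static file.
        html = f"<html><body><h1>{self.path}</h1><p>{body}</p></body></html>"
        data = html.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), DynamicHandler).serve_forever()
```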
Communication Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Pictures, documents, and other files are sent as email attachments. Email messages can be cc-ed to multiple email addresses. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP). The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. VoIP systems now dominate many markets and are as easy to use and as convenient as a traditional telephone. The benefit has been substantial cost savings over traditional telephone calls, especially over long distances. Cable, ADSL, and mobile data networks provide Internet access in customer premises and inexpensive VoIP network adapters provide the connection for traditional analog telephone sets. The voice quality of VoIP often exceeds that of traditional calls. Remaining problems for VoIP include the situation that emergency services may not be universally available and that devices rely on a local power supply, while older traditional phones are powered from the local loop, and typically operate during a power failure. Data transfer File sharing is an example of transferring large amounts of data across the Internet. A computer file can be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or File Transfer Protocol (FTP) server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests. These simple features of the Internet, over a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Many radio and television broadcasters provide Internet feeds of their live audio and video productions. They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access online media in much the same way as was previously possible only with a television or radio receiver. 
The range of available types of content is much wider, from specialized technical webcasts to on-demand popular multimedia services. Podcasting is a variation on this theme, where—usually audio—material is downloaded and played back on a computer or shifted to a portable media player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide. Digital media streaming increases the demand for network bandwidth. For example, standard image quality needs 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the top-of-the-line HDX quality needs 4.5 Mbit/s for 1080p. Webcams are a low-cost extension of this phenomenon. While some webcams can give full-frame-rate video, the picture either is usually small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Video chat rooms and video conferencing are also popular with many uses being found for personal webcams, with and without two-way sound. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses an HTML5 based web player by default to stream and show video files. Registered users may upload an unlimited amount of video and build their own personal profile. YouTube claims that its users watch hundreds of millions, and upload hundreds of thousands of videos daily. Social impact The Internet has enabled new forms of social interaction, activities, and social associations. This phenomenon has given rise to the scholarly study of the sociology of the Internet. The early Internet left an impact on some writers who used symbolism to write about it, such as describing the Internet as a "means to connect individuals in a vast invisible net over all the earth." Users Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022 China had a 70% penetration rate compared to India's 60% and the United States's 90%. In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. 
As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. The prevalent language for communication via the Internet has always been English. This may be a result of the origin of the Internet, as well as the language's role as a lingua franca and as a world language. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). The Internet's technologies have developed enough in recent years, especially in the use of Unicode, that good facilities are available for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. In a US study in 2005, the percentage of men using the Internet was very slightly ahead of the percentage of women, although this difference reversed in those under 30. Men logged on more often, spent more time online, and were more likely to be broadband users, whereas women tended to make more use of opportunities to communicate (such as email). Men were more likely to use the Internet to pay bills, participate in auctions, and for recreation such as downloading music and videos. Men and women were equally likely to use the Internet for shopping and banking. In 2008, women significantly outnumbered men on most social networking services, such as Facebook and Myspace, although the ratios varied with age. Women watched more streaming content, whereas men downloaded more. Men were more likely to blog. Among those who blog, men were more likely to have a professional blog, whereas women were more likely to have a personal blog. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech, Internaut refers to operators or technically highly capable users of the Internet, digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. Usage The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly. Within the limitations imposed by small screens and other limited facilities of such pocket-sized devices, the services of the Internet, including email and the web, may be available. Service providers may restrict the services offered and mobile data charges may be significantly higher than other access methods. Educational material at all levels from pre-school to post-doctoral is available from websites. Examples range from CBeebies, through school and high-school revision guides and virtual universities, to access to top-end scholarly literature through the likes of Google Scholar. 
For distance education, help with homework and other assignments, self-guided learning, whiling away spare time or just looking up more detail on an interesting fact, it has never been easier for people to access educational information at any level from anywhere. The Internet in general and the World Wide Web in particular are important enablers of both formal and informal education. Further, the Internet allows researchers (especially those from the social and behavioral sciences) to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software. Not only can a group cheaply communicate and share ideas but the wide reach of the Internet allows such groups more easily to form. An example of this is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice). Internet chat, whether using an IRC chat room, an instant messaging system, or a social networking service, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via email. These systems may allow files to be exchanged, drawings and images to be shared, or voice and video contact between team members. Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing. Social and political collaboration is also becoming more widespread as both Internet access and computer literacy spread. The Internet allows computer users to remotely access other computers and information stores easily from any access point. Access may be with computer security; i.e., authentication and encryption technologies, depending on the requirements. This is encouraging new ways of remote work, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information emailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice. An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can access their emails, access their data using cloud computing, or open a remote desktop session into their office PC using a secure virtual private network (VPN) connection on the Internet. This can give the worker complete access to all of their normal files and data, including email and other applications, while away from the office. 
It has been referred to among system administrators as the Virtual Private Nightmare, because it extends the secure perimeter of a corporate network into remote locations and its employees' homes. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Social networking and entertainment Many people use the World Wide Web to access news, weather and sports reports, to plan and book vacations and to pursue their personal interests. People use chat, messaging and email to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. Social networking services such as Facebook have created new ways to socialize and interact. Users of these sites are able to add a wide variety of information to pages, pursue common interests, and connect with others. It is also possible to find existing acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster commercial and business connections. YouTube and Flickr specialize in users' videos and photographs. Social networking services are also widely used by businesses and other organizations to promote their brands, to market to their customers and to encourage posts to "go viral". "Black hat" social media techniques are also employed by some organizations, such as spam accounts and astroturfing. A risk for both individuals and organizations writing posts (especially public posts) on social networking services is that especially foolish or controversial posts occasionally lead to an unexpected and possibly large-scale backlash on social media from other Internet users. This is also a risk in relation to controversial offline behavior, if it is widely made known. The nature of this backlash can range widely from counter-arguments and public mockery, through insults and hate speech, to, in extreme cases, rape and death threats. The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment in response to posts they have made on social media, and Twitter in particular has been criticized in the past for not doing enough to aid victims of online abuse. For organizations, such a backlash can cause overall brand damage, especially if reported by the media. However, this is not always the case, as any brand damage in the eyes of people with an opposing opinion to that presented by the organization could sometimes be outweighed by strengthening the brand in the eyes of others. Furthermore, if an organization or individual gives in to demands that others perceive as wrong-headed, that can then provoke a counter-backlash. Some websites, such as Reddit, have rules forbidding the posting of personal information of individuals (also known as doxxing), due to concerns about such postings leading to mobs of large numbers of Internet users directing harassment at the specific individuals thereby identified. In particular, the Reddit rule forbidding the posting of personal information is widely understood to imply that all identifying photos and names must be censored in Facebook screenshots posted to Reddit.
However, the interpretation of this rule in relation to public Twitter posts is less clear, and in any case, like-minded people online have many other ways they can use to direct each other's attention to public social media posts they disagree with. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Children may also encounter material that they may find upsetting, or material that their parents consider to be not age-appropriate. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from inappropriate material on the Internet. The most popular social networking services, such as Facebook and Twitter, commonly forbid users under the age of 13. However, these policies are typically trivial to circumvent by registering an account with a false birth date, and a significant number of children aged under 13 join such sites anyway. Social networking services for younger children, which claim to provide better levels of protection for children, also exist. The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. The Internet pornography and online gambling industries have taken advantage of the World Wide Web. Although many governments have attempted to restrict both industries' use of the Internet, in general, this has failed to stop their widespread popularity. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Non-subscribers were limited to certain types of game play or certain games. Many people use the Internet to access and download music, movies and other works for their enjoyment and relaxation. Free and fee-based services exist for all of these activities, using centralized servers and distributed peer-to-peer technologies. Some of these sources exercise more care with respect to the original artists' copyrights than others. Internet usage has been correlated to users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread. A 2017 book claimed that the Internet consolidates most aspects of human endeavor into singular arenas of which all of humanity are potential members and competitors, with fundamentally negative impacts on mental health as a result. While successes in each field of activity are pervasively visible and trumpeted, they are reserved for an extremely thin sliver of the world's most exceptional, leaving everyone else behind. 
Whereas, before the Internet, expectations of success in any field were supported by reasonable probabilities of achievement at the village, suburb, city or even state level, the same expectations in the Internet world are virtually certain to bring disappointment today: there is always someone else, somewhere on the planet, who can do better and take the now one-and-only top spot. Cybersectarianism is a new organizational form that involves "highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, online chat rooms, and web-based message boards." In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq. Cyberslacking can become a drain on corporate resources; the average UK employee spent 57 minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business Services. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving scan-reading skills while interfering with the deep thinking that leads to true creativity. Electronic business Electronic business (e-business) encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationship. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion for 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting in increases in income inequality. Author Andrew Keen, a long-time critic of the social transformations caused by the Internet, has focused on the economic effects of consolidation from Internet businesses. Keen cites a 2013 Institute for Local Self-Reliance report saying brick-and-mortar retailers employ 47 people for every $10 million in sales while Amazon employs only 14.
Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Remote work Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, most conveniently the worker's home. It can be efficient and useful for companies as it allows workers to communicate over long distances, saving significant amounts of travel time and cost. More workers have adequate bandwidth at home to use these tools to link their home to their corporate intranet and internal communication networks. Collaborative publishing Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. Politics and political revolutions The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donation via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, having given rise to Internet activism. The New York Times suggested that social media websites, such as Facebook and Twitter, helped people organize the political revolutions in Egypt, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Philanthropy The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. 
Kiva raises funds for local intermediary microfinance organizations that post stories and updates on behalf of the borrowers. Lenders can contribute as little as $25 to loans of their choice and receive their money back as borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed before being funded by lenders and borrowers do not communicate with lenders themselves. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems for individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. Surveillance The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Packet capture is the monitoring of data traffic on a computer network. Computers communicate over the Internet by breaking up messages (emails, images, videos, web pages, files, etc.) into small chunks called "packets", which are routed through a network of computers until they reach their destination, where they are reassembled into a complete "message" again. A packet capture appliance intercepts these packets as they travel through the network so that their contents can be examined using other programs. A packet capture is an information-gathering tool, not an analysis tool: it gathers "messages" but does not analyze them or determine what they mean. Other programs are needed to perform traffic analysis and sift through intercepted data looking for important or useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic. 
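To make the packet-capture description above concrete, the following is a minimal sketch of a packet sniffer, not a depiction of any particular capture appliance. It assumes a Linux host, root privileges, and only Python's standard socket and struct modules (AF_PACKET raw sockets are Linux-specific); it prints Ethernet and IPv4 header fields and, as noted above, leaves any real traffic analysis to separate tools. Production systems typically rely on libpcap-based tooling rather than raw sockets.

```python
import socket
import struct

ETH_P_ALL = 0x0003  # capture frames of every Ethernet protocol (Linux constant)

def mac(raw: bytes) -> str:
    """Format a 6-byte hardware address as aa:bb:cc:dd:ee:ff."""
    return ":".join(f"{b:02x}" for b in raw)

def sniff(count: int = 5) -> None:
    # AF_PACKET raw sockets see whole link-layer frames; requires root on Linux.
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    try:
        for _ in range(count):
            frame, _addr = s.recvfrom(65535)
            dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
            line = f"{mac(src)} -> {mac(dst)} ethertype=0x{ethertype:04x}"
            if ethertype == 0x0800 and len(frame) >= 34:  # IPv4 payload
                proto = frame[23]                          # IP protocol field
                src_ip = socket.inet_ntoa(frame[26:30])
                dst_ip = socket.inet_ntoa(frame[30:34])
                line += f"  IPv4 {src_ip} -> {dst_ip} proto={proto}"
            print(line)
    finally:
        s.close()

if __name__ == "__main__":
    sniff()
```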
The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Censorship Some governments, such as those of Burma, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software are available to users to block offensive websites on individual computers or networks in order to limit access by children to pornographic material or depiction of violence. Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. Traffic volume The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for. Outages An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Energy use Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. 
whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. See also Crowdfunding Crowdsourcing Cyberspace Darknet Deep web Hyphanet Internet industry jargon Index of Internet-related articles Internet metaphors Internet video "Internets" Outline of the Internet Notes References Sources Further reading First Monday, a peer-reviewed journal on the Internet by the University Library of the University of Illinois at Chicago, The Internet Explained, Vincent Zegna & Mike Pepper, Sonet Digital, November 2005, pp. 1–7. External links The Internet Society Living Internet, Internet history and related information, including information from many creators of the Internet 1969 establishments in the United States American inventions Computer-related introductions in 1969 Cultural globalization Digital technology Mass media technology Telecommunications New media Promotion and marketing communications Public services Telegraphy Transport systems Virtual reality Main topic articles
Internet
[ "Physics", "Technology" ]
13,490
[ "Information and communications technology", "Internet", "New media", "Transport systems", "Mass media technology", "Telecommunications", "Physical systems", "Transport", "Digital technology", "Multimedia" ]
14,551
https://en.wikipedia.org/wiki/Tertiary%20sector%20of%20the%20economy
The tertiary sector of the economy, generally known as the service sector, is the third of the three economic sectors in the three-sector model (also known as the economic cycle). The others are the primary sector (raw materials) and the secondary sector (manufacturing). The tertiary sector consists of the provision of services instead of end products. Services (also known as "intangible goods") include attention, advice, access, experience and affective labour. The tertiary sector involves the provision of services to other businesses as well as to final consumers. Services may involve the transport, distribution and sale of goods from a producer to a consumer, as may happen in wholesaling and retailing, pest control or financial services. The goods may be transformed in the process of providing the service, as happens in the restaurant industry. However, the focus is on people by interacting with them and serving the customers rather than transforming the physical goods. The production of information has been long regarded as a service, but some economists now attribute it to a fourth sector, called the quaternary sector. Difficulty of definition It is sometimes hard to determine whether a given company is part of the secondary or the tertiary sector. It is not only companies that have been classified as part of a sector in some schemes, since governments and their services (such as the police or military), as well as nonprofit organizations (such as charities or research associations), can also be seen as part of that sector. To classify a business as a service, one can use classification systems such as the United Nations' International Standard Industrial Classification standard, the United States' Standard Industrial Classification (SIC) code system and its new replacement, the North American Industrial Classification System (NAICS), the Statistical Classification of Economic Activities in the European Community (NACE) in the EU and similar systems elsewhere. These governmental classification systems have a first-level of hierarchy that reflects whether the economic goods are tangible or intangible. For purposes of finance and market research, market-based classification systems such as the Global Industry Classification Standard and the Industry Classification Benchmark are used to classify businesses that participate in the service sector. Unlike governmental classification systems, the first level of market-based classification systems divides the economy into functionally related markets or industries. The second or third level of these hierarchies then reflects whether goods or services are produced. Theory of progression For the last 100 years, there has been a substantial shift from the primary and secondary sectors to the tertiary sector in industrialized countries. This shift is called tertiarisation. The tertiary sector is now the largest sector of the economy in the Western world, and is also the fastest-growing sector. In examining the growth of the service sector in the early nineties, the globalist Kenichi Ohmae noted that: Economies tend to follow a developmental progression that takes them from heavy reliance on agriculture and mining, toward the development of manufacturing (e.g. automobiles, textiles, shipbuilding, steel) and finally toward a more service-based structure. The first economy to follow this path in the modern world was the United Kingdom. 
The speed at which other economies have made the transition to service-based (or "post-industrial") economies has increased over time. Historically, manufacturing tended to be more open to international trade and competition than services. However, with dramatic cost reduction and speed and reliability improvements in the transportation of people and the communication of information, the service sector now includes some of the most intensive international competition, despite residual protectionism. Issues for service providers Service providers face obstacles selling services that goods-sellers rarely face. Services are intangible, making it difficult for potential customers to understand what they will receive and what value it will hold for them. Indeed, some, such as consultants and providers of investment services, offer no guarantees of the value for the price paid. Since the quality of most services depends largely on the quality of the individuals providing the services, "people costs" are usually a high fraction of service costs. Whereas a manufacturer may use technology, simplification, and other techniques to lower the cost of goods sold, the service provider often faces an unrelenting pattern of increasing costs. Product differentiation is often difficult. For example, how does one choose one investment adviser over another, since they are often seen to provide identical services. Charging a premium for services is usually an option only for the most established firms, who charge extra based upon brand recognition. List of countries by tertiary output See also Economic sector Indigo Era Post-industrial society Outline of consulting Quaternary sector of the economy Voluntary sector References External links +3 +3 de:Wirtschaftssektor#Terti.C3.A4rsektor_.28Dienstleistungssektor.29
Tertiary sector of the economy
[ "Technology" ]
973
[ "Economic sectors", "Components" ]
14,552
https://en.wikipedia.org/wiki/Primary%20sector%20of%20the%20economy
The primary sector of the economy includes any industry involved in the extraction and production of raw materials, such as farming, logging, fishing, forestry and mining. The primary sector tends to make up a larger portion of the economy in developing countries than it does in developed countries. For example, in 2018, agriculture, forestry, and fishing comprised more than 15% of GDP in sub-Saharan Africa but less than 1% of GDP in North America. In developed countries the primary sector has become more technologically advanced, enabling for example the mechanization of farming, as compared with lower-tech methods in poorer countries. More developed economies may invest additional capital in primary means of production: for example, in the United States corn belt, combine harvesters pick the corn, and sprayers spray large amounts of insecticides, herbicides and fungicides, producing a higher yield than is possible using less capital-intensive techniques. These technological advances and investment allow the primary sector to employ a smaller workforce, so developed countries tend to have a smaller percentage of their workforce involved in primary activities, instead having a higher percentage involved in the secondary and tertiary sectors. List of countries by agricultural output See also Resource curse Three-sector model Notes References Further reading Dwight H. Perkins: Proceedings of the Academy of Political Science, Vol. 31, No. 1, China's Developmental Experience (Mar., 1973) Cameron: General Economic and Social History Historia Económica y Social General, by Maria Inés Barbero, Rubén L. Berenblum, Fernando R. García Molina, Jorge Saborido External links Economy101.net: The Nature of Wealth +1 +1 National accounts Resource economics World economy
Primary sector of the economy
[ "Technology" ]
342
[ "Economic sectors", "Components" ]
14,553
https://en.wikipedia.org/wiki/Secondary%20sector%20of%20the%20economy
In macroeconomics, the secondary sector of the economy is an economic sector in the three-sector theory that describes the role of manufacturing. It encompasses industries that produce a finished, usable product or are involved in construction. This sector generally takes the output of the primary sector (i.e. raw materials like metals, wood) and creates finished goods suitable for sale to domestic businesses or consumers and for export (via distribution through the tertiary sector). Many of these industries consume large quantities of energy, require factories and use machinery; they are often classified as light or heavy based on such quantities. This also produces waste materials and waste heat that may cause environmental problems or pollution (see negative externalities). Examples include textile production, car manufacturing, and handicraft. Manufacturing is an important activity in promoting economic growth and development. Nations that export manufactured products tend to generate higher marginal GDP growth, which supports higher incomes and therefore marginal tax revenue needed to fund such government expenditures as health care and infrastructure. Among developed countries, it is an important source of well-paying jobs for the middle class (e.g., engineering) to facilitate greater social mobility for successive generations on the economy. Currently, an estimated 20% of the labor force in the United States is involved in the secondary industry. The secondary sector depends on the primary sector for the raw materials necessary for production. Countries that primarily produce agricultural and other raw materials (i.e., primary sector) tend to grow slowly and remain either under-developed or developing economies. The value added through the transformation of raw materials into finished goods reliably generates greater profitability, which underlies the faster growth of developed economies. 22nd References +2 +2
Secondary sector of the economy
[ "Technology" ]
346
[ "Economic sectors", "Components" ]
14,554
https://en.wikipedia.org/wiki/Imaginary%20number
An imaginary number is the product of a real number and the imaginary unit i, which is defined by its property i² = −1. The square of an imaginary number bi is −b². For example, 5i is an imaginary number, and its square is −25. The number zero is considered to be both real and imaginary. Originally coined in the 17th century by René Descartes as a derogatory term and regarded as fictitious or useless, the concept gained wide acceptance following the work of Leonhard Euler (in the 18th century) and Augustin-Louis Cauchy and Carl Friedrich Gauss (in the early 19th century). An imaginary number bi can be added to a real number a to form a complex number of the form a + bi, where the real numbers a and b are called, respectively, the real part and the imaginary part of the complex number. History Although the Greek mathematician and engineer Heron of Alexandria is noted as the first to present a calculation involving the square root of a negative number, it was Rafael Bombelli who first set down the rules for multiplication of complex numbers in 1572. The concept had appeared in print earlier, such as in work by Gerolamo Cardano. At the time, imaginary numbers and negative numbers were poorly understood and were regarded by some as fictitious or useless, much as zero once was. Many other mathematicians were slow to adopt the use of imaginary numbers, including René Descartes, who wrote about them in his La Géométrie, in which he coined the term imaginary and meant it to be derogatory. The use of imaginary numbers was not widely accepted until the work of Leonhard Euler (1707–1783) and Carl Friedrich Gauss (1777–1855). The geometric significance of complex numbers as points in a plane was first described by Caspar Wessel (1745–1818). In 1843, William Rowan Hamilton extended the idea of an axis of imaginary numbers in the plane to a four-dimensional space of quaternion imaginaries in which three of the dimensions are analogous to the imaginary numbers in the complex field. Geometric interpretation Geometrically, imaginary numbers are found on the vertical axis of the complex number plane, which allows them to be presented perpendicular to the real axis. One way of viewing imaginary numbers is to consider a standard number line positively increasing in magnitude to the right and negatively increasing in magnitude to the left. At 0 on the x-axis, a y-axis can be drawn with "positive" direction going up; "positive" imaginary numbers then increase in magnitude upwards, and "negative" imaginary numbers increase in magnitude downwards. This vertical axis is often called the "imaginary axis" and is denoted iℝ or ℑ. In this representation, multiplication by i corresponds to a counterclockwise rotation of 90 degrees about the origin, which is a quarter of a circle. Multiplication by −i corresponds to a clockwise rotation of 90 degrees about the origin. Similarly, multiplying by a purely imaginary number bi, with b a real number, both causes a counterclockwise rotation about the origin by 90 degrees and scales the answer by a factor of b. When b < 0, this can instead be described as a clockwise rotation by 90 degrees and a scaling by |b|. Square roots of negative numbers Care must be used when working with imaginary numbers that are expressed as the principal values of the square roots of negative numbers. For example, if x and y are both positive real numbers, the following chain of equalities appears reasonable at first glance: √(x·y) = √((−x)·(−y)) = √(−x)·√(−y) = i√x · i√y = −√(x·y). But the result is clearly nonsense: a positive quantity cannot equal its own negation. The step where the square root was broken apart was illegitimate. (See Mathematical fallacy.) 
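The rotation picture and the square-root pitfall described above can be checked numerically. The following is a small illustrative sketch (not drawn from the article's sources) using Python's built-in complex type and the standard cmath module: multiplying by 1j rotates a point 90 degrees counterclockwise, and the principal square root returned by cmath.sqrt does not split across products of negative numbers, which is precisely the illegitimate step in the chain of equalities above.

```python
import cmath

# Multiplication by i (written 1j in Python) rotates a point 90° counterclockwise.
z = 3 + 2j
print(z * 1j)          # (-2+3j): the point (3, 2) maps to (-2, 3)
print(z * 1j * 1j)     # (-3-2j): two quarter turns give a 180° rotation

# The principal square root cannot be broken apart over negative factors:
x, y = 4.0, 9.0
lhs = cmath.sqrt(-x) * cmath.sqrt(-y)   # (2j)·(3j) = -6
rhs = cmath.sqrt((-x) * (-y))           # sqrt(36)  = +6
print(lhs, rhs)                         # (-6+0j) versus (6+0j)
```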
See also −1 Dual number Split-complex number Notes References Bibliography , explains many applications of imaginary expressions. External links How can one show that imaginary numbers really do exist? – an article that discusses the existence of imaginary numbers. 5Numbers programme 4 BBC Radio 4 programme Why Use Imaginary Numbers? Basic Explanation and Uses of Imaginary Numbers
Imaginary number
[ "Mathematics" ]
774
[ "Complex numbers", "Mathematical objects", "Numbers" ]
14,563
https://en.wikipedia.org/wiki/Integer
An integer is the number zero (0), a positive natural number (1, 2, 3, . . .), or the negation of a positive natural number (−1, −2, −3, . . .). The negations or additive inverses of the positive natural numbers are referred to as negative integers. The set of all integers is often denoted by the boldface or blackboard bold The set of natural numbers is a subset of , which in turn is a subset of the set of all rational numbers , itself a subset of the real numbers . Like the set of natural numbers, the set of integers is countably infinite. An integer may be regarded as a real number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, , 5/4, and are not. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes qualified as rational integers to distinguish them from the more general algebraic integers. In fact, (rational) integers are algebraic integers that are also rational numbers. History The word integer comes from the Latin integer meaning "whole" or (literally) "untouched", from in ("not") plus tangere ("to touch"). "Entire" derives from the same origin via the French word entier, which means both entire and integer. Historically the term was used for a number that was a multiple of 1, or to the whole part of a mixed number. Only positive integers were considered, making the term synonymous with the natural numbers. The definition of integer expanded over time to include negative numbers as their usefulness was recognized. For example Leonhard Euler in his 1765 Elements of Algebra defined integers to include both positive and negative numbers. The phrase the set of the integers was not used before the end of the 19th century, when Georg Cantor introduced the concept of infinite sets and set theory. The use of the letter Z to denote the set of integers comes from the German word Zahlen ("numbers") and has been attributed to David Hilbert. The earliest known use of the notation in a textbook occurs in Algèbre written by the collective Nicolas Bourbaki, dating to 1947. The notation was not adopted immediately. For example, another textbook used the letter J, and a 1960 paper used Z to denote the non-negative integers. But by 1961, Z was generally used by modern algebra texts to denote the positive and negative integers. The symbol is often annotated to denote various sets, with varying usage amongst different authors: , , or for the positive integers, or for non-negative integers, and for non-zero integers. Some authors use for non-zero integers, while others use it for non-negative integers, or for {–1,1} (the group of units of ). Additionally, is used to denote either the set of integers modulo (i.e., the set of congruence classes of integers), or the set of -adic integers. The whole numbers were synonymous with the integers up until the early 1950s. In the late 1950s, as part of the New Math movement, American elementary school teachers began teaching that whole numbers referred to the natural numbers, excluding negative numbers, while integer included the negative numbers. The whole numbers remain ambiguous to the present day. Algebraic properties Like the natural numbers, is closed under the operations of addition and multiplication, that is, the sum and product of any two integers is an integer. 
However, with the inclusion of the negative natural numbers (and importantly, 0), ℤ, unlike the natural numbers, is also closed under subtraction. The integers form a ring which is the most basic one, in the following sense: for any ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring ℤ. ℤ is not closed under division, since the quotient of two integers (e.g., 1 divided by 2) need not be an integer. Although the natural numbers are closed under exponentiation, the integers are not (since the result can be a fraction when the exponent is negative). The following table lists some of the basic properties of addition and multiplication for any integers a, b, and c: The first five properties listed above for addition say that ℤ, under addition, is an abelian group. It is also a cyclic group, since every non-zero integer can be written as a finite sum 1 + 1 + ... + 1 or (−1) + (−1) + ... + (−1). In fact, ℤ under addition is the only infinite cyclic group—in the sense that any infinite cyclic group is isomorphic to ℤ. The first four properties listed above for multiplication say that ℤ under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse (as is the case for the number 2), which means that ℤ under multiplication is not a group. All the rules from the above property table (except for the last), when taken together, say that ℤ together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. The only equalities of expressions that are true in ℤ for all values of variables are those that are true in any unital commutative ring. Certain non-zero integers map to zero in certain rings. The lack of zero divisors in the integers (last property in the table) means that the commutative ring ℤ is an integral domain. The lack of multiplicative inverses, which is equivalent to the fact that ℤ is not closed under division, means that ℤ is not a field. The smallest field containing the integers as a subring is the field of rational numbers. The process of constructing the rationals from the integers can be mimicked to form the field of fractions of any integral domain. And back, starting from an algebraic number field (an extension of rational numbers), its ring of integers can be extracted, which includes ℤ as its subring. Although ordinary division is not defined on ℤ, the division "with remainder" is defined on them. It is called Euclidean division, and possesses the following important property: given two integers a and b with b ≠ 0, there exist unique integers q and r such that a = qb + r and 0 ≤ r < |b|, where |b| denotes the absolute value of b. The integer q is called the quotient and r is called the remainder of the division of a by b. The Euclidean algorithm for computing greatest common divisors works by a sequence of Euclidean divisions. The above says that ℤ is a Euclidean domain. This implies that ℤ is a principal ideal domain, and any positive integer can be written as a product of primes in an essentially unique way. This is the fundamental theorem of arithmetic. Order-theoretic properties ℤ is a totally ordered set without upper or lower bound. The ordering of ℤ is given by: ... −3 < −2 < −1 < 0 < 1 < 2 < 3 < ... An integer is positive if it is greater than zero, and negative if it is less than zero. Zero is defined as neither negative nor positive. 
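The Euclidean division property stated above, that for integers a and b with b ≠ 0 there are unique q and r with a = qb + r and 0 ≤ r < |b|, is easy to exercise in code. The sketch below is an illustration only (the helper names are invented for this example); note that Python's divmod gives a remainder with the sign of the divisor, so a small adjustment is needed when b is negative, and repeated Euclidean division yields the Euclidean algorithm for greatest common divisors mentioned in the text.

```python
def euclidean_division(a: int, b: int) -> tuple[int, int]:
    """Return (q, r) with a == q*b + r and 0 <= r < |b| (b must be nonzero)."""
    if b == 0:
        raise ZeroDivisionError("b must be nonzero")
    q, r = divmod(a, b)      # Python's remainder takes the sign of b
    if r < 0:                # only possible when b < 0; shift r into [0, |b|)
        q, r = q + 1, r - b
    return q, r

def gcd(a: int, b: int) -> int:
    """Greatest common divisor via a sequence of Euclidean divisions."""
    while b != 0:
        _, r = euclidean_division(a, b)
        a, b = b, r
    return abs(a)

assert euclidean_division(7, -2) == (-3, 1)    # 7 == (-3)*(-2) + 1, 0 <= 1 < 2
assert euclidean_division(-7, 2) == (-4, 1)    # -7 == (-4)*2 + 1
assert gcd(252, 105) == 21
```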
The ordering of integers is compatible with the algebraic operations in the following way: If and , then If and , then Thus it follows that together with the above ordering is an ordered ring. The integers are the only nontrivial totally ordered abelian group whose positive elements are well-ordered. This is equivalent to the statement that any Noetherian valuation ring is either a field—or a discrete valuation ring. Construction Traditional development In elementary school teaching, integers are often intuitively defined as the union of the (positive) natural numbers, zero, and the negations of the natural numbers. This can be formalized as follows. First construct the set of natural numbers according to the Peano axioms, call this . Then construct a set which is disjoint from and in one-to-one correspondence with via a function . For example, take to be the ordered pairs with the mapping . Finally let 0 be some object not in or , for example the ordered pair (0,0). Then the integers are defined to be the union . The traditional arithmetic operations can then be defined on the integers in a piecewise fashion, for each of positive numbers, negative numbers, and zero. For example negation is defined as follows: The traditional style of definition leads to many different cases (each arithmetic operation needs to be defined on each combination of types of integer) and makes it tedious to prove that integers obey the various laws of arithmetic. Equivalence classes of ordered pairs In modern set-theoretic mathematics, a more abstract construction allowing one to define arithmetical operations without any case distinction is often used instead. The integers can thus be formally constructed as the equivalence classes of ordered pairs of natural numbers . The intuition is that stands for the result of subtracting from . To confirm our expectation that and denote the same number, we define an equivalence relation on these pairs with the following rule: precisely when . Addition and multiplication of integers can be defined in terms of the equivalent operations on the natural numbers; by using to denote the equivalence class having as a member, one has: . . The negation (or additive inverse) of an integer is obtained by reversing the order of the pair: . Hence subtraction can be defined as the addition of the additive inverse: . The standard ordering on the integers is given by: if and only if . It is easily verified that these definitions are independent of the choice of representatives of the equivalence classes. Every equivalence class has a unique member that is of the form or (or both at once). The natural number is identified with the class (i.e., the natural numbers are embedded into the integers by map sending to ), and the class is denoted (this covers all remaining classes, and gives the class a second time since –0 = 0. Thus, is denoted by If the natural numbers are identified with the corresponding integers (using the embedding mentioned above), this convention creates no ambiguity. This notation recovers the familiar representation of the integers as . Some examples are: Other approaches In theoretical computer science, other approaches for the construction of integers are used by automated theorem provers and term rewrite engines. Integers are represented as algebraic terms built using a few basic operations (e.g., zero, succ, pred) and using natural numbers, which are assumed to be already constructed (using the Peano approach). 
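The construction of the integers as equivalence classes of ordered pairs of natural numbers, described above, can be mirrored directly in code. The sketch below is illustrative only (the class and function names are invented for this example, not standard library API): it stores the class of (a, b), intuitively a − b, by the canonical representative in which at least one component is 0, and it defines addition, multiplication, negation, and the equivalence test exactly as in the text.

```python
from typing import NamedTuple

class Int(NamedTuple):
    """The integer 'a - b', represented by a canonical pair of naturals (a, b)."""
    a: int  # natural number
    b: int  # natural number

def norm(a: int, b: int) -> Int:
    """Reduce (a, b) to the unique representative with min(a, b) == 0."""
    m = min(a, b)
    return Int(a - m, b - m)

def add(x: Int, y: Int) -> Int:          # [(a,b)] + [(c,d)] = [(a+c, b+d)]
    return norm(x.a + y.a, x.b + y.b)

def mul(x: Int, y: Int) -> Int:          # [(a,b)] * [(c,d)] = [(ac+bd, ad+bc)]
    return norm(x.a * y.a + x.b * y.b, x.a * y.b + x.b * y.a)

def neg(x: Int) -> Int:                  # -[(a,b)] = [(b,a)]
    return norm(x.b, x.a)

def equivalent(x: Int, y: Int) -> bool:  # (a,b) ~ (c,d) precisely when a+d == b+c
    return x.a + y.b == x.b + y.a

three, five = norm(3, 0), norm(7, 2)              # 3 and 5, built only from naturals
assert add(three, neg(five)) == norm(0, 2)        # 3 + (-5) = -2
assert mul(neg(three), neg(five)) == norm(15, 0)  # (-3) * (-5) = 15
assert equivalent(Int(4, 1), Int(9, 6))           # both pairs represent 3
```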
There exist at least ten such constructions of signed integers. These constructions differ in several ways: the number of basic operations used for the construction, the number (usually, between 0 and 2), and the types of arguments accepted by these operations; the presence or absence of natural numbers as arguments of some of these operations, and the fact that these operations are free constructors or not, i.e., that the same integer can be represented using only one or many algebraic terms. The technique for the construction of integers presented in the previous section corresponds to the particular case where there is a single basic operation pair that takes as arguments two natural numbers and , and returns an integer (equal to ). This operation is not free since the integer 0 can be written pair(0,0), or pair(1,1), or pair(2,2), etc.. This technique of construction is used by the proof assistant Isabelle; however, many other tools use alternative construction techniques, notable those based upon free constructors, which are simpler and can be implemented more efficiently in computers. Computer science An integer is often a primitive data type in computer languages. However, integer data types can only represent a subset of all integers, since practical computers are of finite capacity. Also, in the common two's complement representation, the inherent definition of sign distinguishes between "negative" and "non-negative" rather than "negative, positive, and 0". (It is, however, certainly possible for a computer to determine whether an integer value is truly positive.) Fixed length integer approximation data types (or subsets) are denoted int or Integer in several programming languages (such as Algol68, C, Java, Delphi, etc.). Variable-length representations of integers, such as bignums, can store any integer that fits in the computer's memory. Other integer data types are implemented with a fixed size, usually a number of bits which is a power of 2 (4, 8, 16, etc.) or a memorable number of decimal digits (e.g., 9 or 10). Cardinality The set of integers is countably infinite, meaning it is possible to pair each integer with a unique natural number. An example of such a pairing is More technically, the cardinality of is said to equal (aleph-null). The pairing between elements of and is called a bijection. See also Canonical factorization of a positive integer Complex integer Hyperinteger Integer complexity Integer lattice Integer part Integer sequence Integer-valued function Mathematical symbols Parity (mathematics) Profinite integer Footnotes References Sources ) External links The Positive Integers – divisor tables and numeral representation tools On-Line Encyclopedia of Integer Sequences cf OEIS Elementary mathematics Abelian group theory Ring theory Elementary number theory Algebraic number theory Sets of real numbers
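The cardinality section above appeals to a pairing of each integer with a unique natural number. One standard such pairing, used here purely for illustration (it need not be the exact one the article tabulates), interleaves the non-negative and negative integers: 0↔0, 1↔1, −1↔2, 2↔3, −2↔4, and so on. A short sketch of this bijection and its inverse:

```python
def int_to_nat(z: int) -> int:
    """Map ..., -2, -1, 0, 1, 2, ... to 4, 2, 0, 1, 3, ... (a bijection Z -> N)."""
    return 2 * z - 1 if z > 0 else -2 * z

def nat_to_int(n: int) -> int:
    """Inverse mapping N -> Z: odd naturals become positives, evens become non-positives."""
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

# Every integer receives exactly one natural number, and vice versa:
assert [int_to_nat(z) for z in (0, 1, -1, 2, -2, 3)] == [0, 1, 2, 3, 4, 5]
assert all(nat_to_int(int_to_nat(z)) == z for z in range(-1000, 1001))
```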
Integer
[ "Mathematics" ]
2,697
[ "Elementary number theory", "Mathematical objects", "Ring theory", "Elementary mathematics", "Fields of abstract algebra", "Algebraic number theory", "Integers", "Numbers", "Number theory" ]
14,573
https://en.wikipedia.org/wiki/Isaac%20Asimov
Isaac Asimov ( ;  – April 6, 1992) was an American writer and professor of biochemistry at Boston University. During his lifetime, Asimov was considered one of the "Big Three" science fiction writers, along with Robert A. Heinlein and Arthur C. Clarke. A prolific writer, he wrote or edited more than 500 books. He also wrote an estimated 90,000 letters and postcards. Best known for his hard science fiction, Asimov also wrote mysteries and fantasy, as well as popular science and other non-fiction. Asimov's most famous work is the Foundation series, the first three books of which won the one-time Hugo Award for "Best All-Time Series" in 1966. His other major series are the Galactic Empire series and the Robot series. The Galactic Empire novels are set in the much earlier history of the same fictional universe as the Foundation series. Later, with Foundation and Earth (1986), he linked this distant future to the Robot series, creating a unified "future history" for his works. He also wrote more than 380 short stories, including the social science fiction novelette "Nightfall", which in 1964 was voted the best short science fiction story of all time by the Science Fiction Writers of America. Asimov wrote the Lucky Starr series of juvenile science-fiction novels using the pen name Paul French. Most of his popular science books explain concepts in a historical way, going as far back as possible to a time when the science in question was at its simplest stage. Examples include Guide to Science, the three-volume Understanding Physics, and Asimov's Chronology of Science and Discovery. He wrote on numerous other scientific and non-scientific topics, such as chemistry, astronomy, mathematics, history, biblical exegesis, and literary criticism. He was the president of the American Humanist Association. Several entities have been named in his honor, including the asteroid (5020) Asimov, a crater on Mars, a Brooklyn elementary school, Honda's humanoid robot ASIMO, and four literary awards. Surname Asimov's family name derives from the first part of (), meaning 'winter grain' (specifically rye) in which his great-great-great-grandfather dealt, with the Russian surname ending -ov added. Azimov is spelled in the Cyrillic alphabet. When the family arrived in the United States in 1923 and their name had to be spelled in the Latin alphabet, Asimov's father spelled it with an S, believing this letter to be pronounced like Z (as in German), and so it became Asimov. This later inspired one of Asimov's short stories, "Spell My Name with an S". Asimov refused early suggestions of using a more common name as a pseudonym, believing that its recognizability helped his career. After becoming famous, he often met readers who believed that "Isaac Asimov" was a distinctive pseudonym created by an author with a common name. Life Early life Asimov was born in Petrovichi, Russian SFSR, on an unknown date between October 4, 1919, and January 2, 1920, inclusive. Asimov celebrated his birthday on January 2. Asimov's parents were Russian Jews, Anna Rachel (née Berman) and Judah Asimov, the son of a miller. He was named Isaac after his mother's father, Isaac Berman. Asimov wrote of his father, "My father, for all his education as an Orthodox Jew, was not Orthodox in his heart", noting that "he didn't recite the myriad prayers prescribed for every action, and he never made any attempt to teach them to me." In 1921, Asimov and 16 other children in Petrovichi developed double pneumonia. Only Asimov survived. 
He had two younger siblings: a sister, Marcia (born Manya; June 17, 1922 – April 2, 2011), and a brother, Stanley (July 25, 1929 – August 16, 1995), who would become vice-president of Newsday. Asimov's family travelled to the United States via Liverpool on the RMS Baltic, arriving on February 3, 1923 when he was three years old. His parents spoke Yiddish and English to him; he never learned Russian, his parents using it as a secret language "when they wanted to discuss something privately that my big ears were not to hear". Growing up in Brooklyn, New York, Asimov taught himself to read at the age of five (and later taught his sister to read as well, enabling her to enter school in the second grade). His mother got him into first grade a year early by claiming he was born on September 7, 1919. In third grade he learned about the "error" and insisted on an official correction of the date to January 2. He became a naturalized U.S. citizen in 1928 at the age of eight. After becoming established in the U.S., his parents owned a succession of candy stores in which everyone in the family was expected to work. The candy stores sold newspapers and magazines, which Asimov credited as a major influence in his lifelong love of the written word, as it presented him as a child with an unending supply of new reading material (including pulp science fiction magazines) that he could not have otherwise afforded. Asimov began reading science fiction at age nine, at the time that the genre was becoming more science-centered. Asimov was also a frequent patron of the Brooklyn Public Library during his formative years. Education and career Asimov attended New York City public schools from age five, including Boys High School in Brooklyn. Graduating at 15, he attended the City College of New York for several days before accepting a scholarship at Seth Low Junior College. This was a branch of Columbia University in Downtown Brooklyn designed to absorb some of the academically qualified Jewish and Italian-American students who applied to the more prestigious Columbia College but exceeded the unwritten ethnic admission quotas which were common at the time. Originally a zoology major, Asimov switched to chemistry after his first semester because he disapproved of "dissecting an alley cat". After Seth Low Junior College closed in 1936, Asimov finished his Bachelor of Science degree at Columbia's Morningside Heights campus (later the Columbia University School of General Studies) in 1939. (In 1983, Dr. Robert Pollack (dean of Columbia College, 1982–1989) granted Asimov an honorary doctorate from Columbia College after requiring that Asimov place his foot in a bucket of water to pass the college's swimming requirement.) After two rounds of rejections by medical schools, Asimov applied to the graduate program in chemistry at Columbia in 1939; initially he was rejected and then only accepted on a probationary basis. He completed his Master of Arts degree in chemistry in 1941 and earned a Doctor of Philosophy degree in chemistry in 1948. During his chemistry studies, he also learned French and German. From 1942 to 1945 during World War II, between his masters and doctoral studies, Asimov worked as a civilian chemist at the Philadelphia Navy Yard's Naval Air Experimental Station and lived in the Walnut Hill section of West Philadelphia. In September 1945, he was conscripted into the post-war U.S. Army; if he had not had his birth date corrected while at school, he would have been officially 26 years old and ineligible. 
In 1946, a bureaucratic error caused his military allotment to be stopped, and he was removed from a task force days before it sailed to participate in Operation Crossroads nuclear weapons tests at Bikini Atoll. He was promoted to corporal on July 11 before receiving an honorable discharge on July 26, 1946. After completing his doctorate and a postdoctoral year with Robert Elderfield, Asimov was offered the position of associate professor of biochemistry at the Boston University School of Medicine. This was in large part due to his years-long correspondence with William Boyd, a former associate professor of biochemistry at Boston University, who initially contacted Asimov to compliment him on his story Nightfall. Upon receiving a promotion to professor of immunochemistry, Boyd reached out to Asimov, requesting him to be his replacement. The initial offer of professorship was withdrawn and Asimov was offered the position of instructor of biochemistry instead, which he accepted. He began work in 1949 with a $5,000 salary (), maintaining this position for several years. By 1952, however, he was making more money as a writer than from the university, and he eventually stopped doing research, confining his university role to lecturing students. In 1955, he was promoted to tenured associate professor. In December 1957, Asimov was dismissed from his teaching post, with effect from June 30, 1958, due to his lack of research. After a struggle over two years, he reached an agreement with the university that he would keep his title and give the opening lecture each year for a biochemistry class. On October 18, 1979, the university honored his writing by promoting him to full professor of biochemistry. Asimov's personal papers from 1965 onward are archived at the university's Mugar Memorial Library, to which he donated them at the request of curator Howard Gotlieb. In 1959, after a recommendation from Arthur Obermayer, Asimov's friend and a scientist on the U.S. missile defense project, Asimov was approached by DARPA to join Obermayer's team. Asimov declined on the grounds that his ability to write freely would be impaired should he receive classified information, but submitted a paper to DARPA titled "On Creativity" containing ideas on how government-based science projects could encourage team members to think more creatively. Personal life Asimov met his first wife, Gertrude Blugerman (May 16, 1917, Toronto, Canada – October 17, 1990, Boston, U.S.), on a blind date on February 14, 1942, and married her on July 26. The couple lived in an apartment in West Philadelphia while Asimov was employed at the Philadelphia Navy Yard (where two of his co-workers were L. Sprague de Camp and Robert A. Heinlein). Gertrude returned to Brooklyn while he was in the army, and they both lived there from July 1946 before moving to Stuyvesant Town, Manhattan, in July 1948. They moved to Boston in May 1949, then to nearby suburbs Somerville in July 1949, Waltham in May 1951, and, finally, West Newton in 1956. They had two children, David (born 1951) and Robyn Joan (born 1955). In 1970, they separated and Asimov moved back to New York, this time to the Upper West Side of Manhattan where he lived for the rest of his life. He began seeing Janet O. Jeppson, a psychiatrist and science-fiction writer, and married her on November 30, 1973, two weeks after his divorce from Gertrude. Asimov was a claustrophile: he enjoyed small, enclosed spaces. 
In the third volume of his autobiography, he recalls a childhood desire to own a magazine stand in a New York City Subway station, within which he could enclose himself and listen to the rumble of passing trains while reading. Asimov was afraid of flying, doing so only twice: once in the course of his work at the Naval Air Experimental Station and once returning home from Oʻahu in 1946. Consequently, he seldom traveled great distances. This phobia influenced several of his fiction works, such as the Wendell Urth mystery stories and the Robot novels featuring Elijah Baley. In his later years, Asimov found enjoyment traveling on cruise ships, beginning in 1972 when he viewed the Apollo 17 launch from a cruise ship. On several cruises, he was part of the entertainment program, giving science-themed talks aboard ships such as the Queen Elizabeth 2. He sailed to England in June 1974 on the for a trip mostly devoted to lectures in London and Birmingham, though he also found time to visit Stonehenge and Shakespeare's birthplace. Asimov was a teetotaler. He was an able public speaker and was regularly invited to give talks about science in his distinct New York accent. He participated in many science fiction conventions, where he was friendly and approachable. He patiently answered tens of thousands of questions and other mail with postcards and was pleased to give autographs. He was of medium height, and stocky build. In his later years, he adopted a signature style of "mutton-chop" sideburns. He took to wearing bolo ties after his wife Janet objected to his clip-on bow ties. He never learned to swim or ride a bicycle, but did learn to drive a car after he moved to Boston. In his humor book Asimov Laughs Again, he describes Boston driving as "anarchy on wheels". Asimov's wide interests included his participation in later years in organizations devoted to the comic operas of Gilbert and Sullivan. Many of his short stories mention or quote Gilbert and Sullivan. He was a prominent member of The Baker Street Irregulars, the leading Sherlock Holmes society, for whom he wrote an essay arguing that Professor Moriarty's work "The Dynamics of An Asteroid" involved the willful destruction of an ancient, civilized planet. He was also a member of the male-only literary banqueting club the Trap Door Spiders, which served as the basis of his fictional group of mystery solvers, the Black Widowers. He later used his essay on Moriarty's work as the basis for a Black Widowers story, "The Ultimate Crime", which appeared in More Tales of the Black Widowers. In 1984, the American Humanist Association (AHA) named him the Humanist of the Year. He was one of the signers of the Humanist Manifesto. From 1985 until his death in 1992, he served as honorary president of the AHA, and was succeeded by his friend and fellow writer Kurt Vonnegut. He was also a close friend of Star Trek creator Gene Roddenberry, and earned a screen credit as "special science consultant" on Star Trek: The Motion Picture for his advice during production. Asimov was a founding member of the Committee for the Scientific Investigation of Claims of the Paranormal, CSICOP (now the Committee for Skeptical Inquiry) and is listed in its Pantheon of Skeptics. In a discussion with James Randi at CSICon 2016 regarding the founding of CSICOP, Kendrick Frazier said that Asimov was "a key figure in the Skeptical movement who is less well known and appreciated today, but was very much in the public eye back then." 
He said that Asimov's being associated with CSICOP "gave it immense status and authority" in his eyes. Asimov described Carl Sagan as one of only two people he ever met whose intellect surpassed his own. The other, he claimed, was the computer scientist and artificial intelligence expert Marvin Minsky. Asimov was an on-and-off member and honorary vice president of Mensa International, albeit reluctantly; he described some members of that organization as "brain-proud and aggressive about their IQs". After his father died in 1969, Asimov annually contributed to a Judah Asimov Scholarship Fund at Brandeis University. In 2006, he was named by Carnegie Corporation of New York to the inaugural class of winners of the Great Immigrants Award. Illness and death In 1977, Asimov had a heart attack. In December 1983, he had triple bypass surgery at NYU Medical Center, during which he contracted HIV from a blood transfusion. His HIV status was kept secret out of concern that the anti-AIDS prejudice might extend to his family members. He died in Manhattan on April 6, 1992, and was cremated. The cause of death was reported as heart and kidney failure. Ten years following Asimov's death, Janet and Robyn Asimov agreed that the HIV story should be made public; Janet revealed it in her edition of his autobiography, It's Been a Good Life. Writings Overview Asimov's career can be divided into several periods. His early career, dominated by science fiction, began with short stories in 1939 and novels in 1950. This lasted until about 1958, all but ending after publication of The Naked Sun (1957). He began publishing nonfiction as co-author of a college-level textbook called Biochemistry and Human Metabolism. Following the brief orbit of the first human-made satellite Sputnik I by the USSR in 1957, he wrote more nonfiction, particularly popular science books, and less science fiction. Over the next quarter-century, he wrote only four science fiction novels, and 120 nonfiction books. Starting in 1982, the second half of his science fiction career began with the publication of Foundation's Edge. From then until his death, Asimov published several more sequels and prequels to his existing novels, tying them together in a way he had not originally anticipated, making a unified series. There are many inconsistencies in this unification, especially in his earlier stories. Doubleday and Houghton Mifflin published about 60% of his work up to 1969, Asimov stating that "both represent a father image". Asimov believed his most enduring contributions would be his "Three Laws of Robotics" and the Foundation series. The Oxford English Dictionary credits his science fiction for introducing into the English language the words "robotics", "positronic" (an entirely fictional technology), and "psychohistory" (which is also used for a different study on historical motivations). Asimov coined the term "robotics" without suspecting that it might be an original word; at the time, he believed it was simply the natural analogue of words such as mechanics and hydraulics, but for robots. Unlike his word "psychohistory", the word "robotics" continues in mainstream technical use with Asimov's original definition. Star Trek: The Next Generation featured androids with "positronic brains" and the first-season episode "Datalore" called the positronic brain "Asimov's dream". 
Asimov was so prolific and diverse in his writing that his books span all major categories of the Dewey Decimal Classification except for category 100, philosophy and psychology. However, he wrote several essays about psychology, and forewords for the books The Humanist Way (1988) and In Pursuit of Truth (1982), which were classified in the 100s category, but none of his own books were classified in that category. According to UNESCO's Index Translationum database, Asimov is the world's 24th-most-translated author. Science fiction Asimov became a science fiction fan in 1929, when he began reading the pulp magazines sold in his family's candy store. At first his father forbade reading pulps until Asimov persuaded him that because the science fiction magazines had "Science" in the title, they must be educational. At age 18 he joined the Futurians science fiction fan club, where he made friends who went on to become science fiction writers or editors. Asimov began writing at the age of 11, imitating The Rover Boys with eight chapters of The Greenville Chums at College. His father bought him a used typewriter at age 16. His first published work was a humorous item on the birth of his brother for Boys High School's literary journal in 1934. In May 1937 he first thought of writing professionally, and began writing his first science fiction story, "Cosmic Corkscrew" (now lost), that year. On May 17, 1938, puzzled by a change in the schedule of Astounding Science Fiction, Asimov visited its publisher Street & Smith Publications. Inspired by the visit, he finished the story on June 19, 1938, and personally submitted it to Astounding editor John W. Campbell two days later. Campbell met with Asimov for more than an hour and promised to read the story himself. Two days later he received a detailed rejection letter. This was the first of what became almost weekly meetings with the editor while Asimov lived in New York, until he moved to Boston in 1949; Campbell had a strong formative influence on Asimov and became a personal friend. By the end of the month, Asimov had completed a second story, "Stowaway". Campbell rejected it on July 22 but—in "the nicest possible letter you could imagine"—encouraged him to continue writing, promising that Asimov might sell his work after another year and a dozen stories of practice. On October 21, 1938, he sold the third story he finished, "Marooned Off Vesta", to Amazing Stories, edited by Raymond A. Palmer, and it appeared in the March 1939 issue. Asimov was paid $64, or one cent a word. Two more stories appeared that year, "The Weapon Too Dreadful to Use" in the May Amazing and "Trends" in the July Astounding, the issue fans later selected as the start of the Golden Age of Science Fiction. For 1940, ISFDB catalogs seven stories in four different pulp magazines, including one in Astounding. His earnings became enough to pay for his education, but not yet enough for him to become a full-time writer. He later said that unlike other Golden Age writers Heinlein and A. E. van Vogt—also first published in 1939, and whose talent and stardom were immediately obvious—Asimov "(this is not false modesty) came up only gradually". Through July 29, 1940, Asimov wrote 22 stories in 25 months, of which 13 were published; he wrote in 1972 that from that date he never wrote a science fiction story that was not published (except for two "special cases").
By 1941 Asimov was famous enough that Donald Wollheim told him that he purchased "The Secret Sense" for a new magazine only because of his name, and the December 1940 issue of Astonishing—featuring Asimov's name in bold—was the first magazine to base cover art on his work, but Asimov later said that neither he nor anyone else—except perhaps Campbell—considered him better than an often published "third rater". Based on a conversation with Campbell, Asimov wrote "Nightfall", his 32nd story, in March and April 1941, and Astounding published it in September 1941. In 1968 the Science Fiction Writers of America voted "Nightfall" the best science fiction short story ever written. In Nightfall and Other Stories Asimov wrote, "The writing of 'Nightfall' was a watershed in my professional career ... I was suddenly taken seriously and the world of science fiction became aware that I existed. As the years passed, in fact, it became evident that I had written a 'classic'." "Nightfall" is an archetypal example of social science fiction, a term he created to describe a new trend in the 1940s, led by authors including him and Heinlein, away from gadgets and space opera and toward speculation about the human condition. After writing "Victory Unintentional" in January and February 1942, Asimov did not write another story for a year. He expected to make chemistry his career, and was paid $2,600 annually at the Philadelphia Navy Yard, enough to marry his girlfriend; he did not expect to make much more from writing than the $1,788.50 he had earned from the 28 stories he had already sold over four years. Asimov left science fiction fandom and no longer read new magazines, and might have left the writing profession had Heinlein and de Camp not been his coworkers at the Navy Yard, and had his previously sold stories not continued to appear in print. In 1942, Asimov published the first of his Foundation stories—later collected in the Foundation trilogy: Foundation (1951), Foundation and Empire (1952), and Second Foundation (1953). The books describe the fall of a vast interstellar empire and the establishment of its eventual successor. They feature his fictional science of psychohistory, whose theories could predict the future course of history according to dynamical laws regarding the statistical analysis of mass human actions. Campbell raised his rate per word, Orson Welles purchased rights to "Evidence", and anthologies reprinted his stories. By the end of the war Asimov was earning as a writer an amount equal to half of his Navy Yard salary, even after a raise, but Asimov still did not believe that writing could support him, his wife, and future children. His "positronic" robot stories—many of which were collected in I, Robot (1950)—were begun at about the same time. They promulgated a set of rules of ethics for robots (see Three Laws of Robotics) and intelligent machines that greatly influenced other writers and thinkers in their treatment of the subject. Asimov notes in his introduction to the short story collection The Complete Robot (1982) that he was largely inspired by the tendency of robots up to that time to fall consistently into a Frankenstein plot in which they destroyed their creators. The Robot series has led to film adaptations. With Asimov's collaboration, in about 1977, Harlan Ellison wrote a screenplay of I, Robot that Asimov hoped would lead to "the first really adult, complex, worthwhile science fiction film ever made". The screenplay has never been filmed and was eventually published in book form in 1994.
The 2004 movie I, Robot, starring Will Smith, was based on an unrelated script by Jeff Vintar titled Hardwired, with Asimov's ideas incorporated later after the rights to Asimov's title were acquired. (The title was not original to Asimov but had previously been used for a story by Eando Binder.) Also, one of Asimov's robot short stories, "The Bicentennial Man", was expanded into the novel The Positronic Man by Asimov and Robert Silverberg, and this was adapted into the 1999 movie Bicentennial Man, starring Robin Williams. In 1966 the Foundation trilogy won the Hugo Award for the all-time best series of science fiction and fantasy novels, and it, along with the Robot series, is his most famous science fiction. Besides movies, his Foundation and Robot stories have inspired other derivative works of science fiction literature, many by well-known and established authors such as Roger MacBride Allen, Greg Bear, Gregory Benford, David Brin, and Donald Kingsbury. At least some of these appear to have been done with the blessing of, or at the request of, Asimov's widow, Janet Asimov. In 1948, he also wrote a spoof chemistry article, "The Endochronic Properties of Resublimated Thiotimoline". At the time, Asimov was preparing his own doctoral dissertation, which would include an oral examination. Fearing a prejudicial reaction from his graduate school evaluation board at Columbia University, Asimov asked his editor that it be released under a pseudonym. When it nevertheless appeared under his own name, Asimov grew concerned that his doctoral examiners might think he wasn't taking science seriously. At the end of the examination, one evaluator turned to him, smiling, and said, "What can you tell us, Mr. Asimov, about the thermodynamic properties of the compound known as thiotimoline?" Laughing hysterically with relief, Asimov had to be led out of the room. After a five-minute wait, he was summoned back into the room and congratulated as "Dr. Asimov". Demand for science fiction greatly increased during the 1950s, making it possible for a genre author to write full-time. In 1949, book publisher Doubleday's science fiction editor Walter I. Bradbury accepted Asimov's unpublished "Grow Old with Me" (40,000 words), but requested that it be extended to a full novel of 70,000 words. The book appeared under the Doubleday imprint in January 1950 with the title Pebble in the Sky. Doubleday published five more original science fiction novels by Asimov in the 1950s, along with the six juvenile Lucky Starr novels, the latter under the pseudonym "Paul French". Doubleday also published collections of Asimov's short stories, beginning with The Martian Way and Other Stories in 1955. The early 1950s also saw Gnome Press publish one collection of Asimov's positronic robot stories as I, Robot and his Foundation stories and novelettes as the three books of the Foundation trilogy. More positronic robot stories were republished in book form as The Rest of the Robots. Book publishers and the magazines Galaxy and Fantasy & Science Fiction ended Asimov's dependence on Astounding. He later described the era as his "'mature' period". Asimov's "The Last Question" (1956), on the ability of humankind to cope with and potentially reverse the process of entropy, was his personal favorite story. In 1972, his stand-alone novel The Gods Themselves was published to general acclaim, winning Best Novel in the Hugo, Nebula, and Locus Awards.
In December 1974, former Beatle Paul McCartney approached Asimov and asked him to write the screenplay for a science-fiction movie musical. McCartney had a vague idea for the plot and a small scrap of dialogue, about a rock band whose members discover they are being impersonated by extraterrestrials. The band and their impostors would likely be played by McCartney's group Wings, then at the height of their career. Though not generally a fan of rock music, Asimov was intrigued by the idea and quickly produced a treatment outline of the story adhering to McCartney's overall idea but omitting McCartney's scrap of dialogue. McCartney rejected it, and the treatment now exists only in the Boston University archives. Asimov said in 1969 that he had "the happiest of all my associations with science fiction magazines" with Fantasy & Science Fiction; "I have no complaints about Astounding, Galaxy, or any of the rest, heaven knows, but F&SF has become something special to me". Beginning in 1977, Asimov lent his name to Isaac Asimov's Science Fiction Magazine (now Asimov's Science Fiction) and wrote an editorial for each issue. There was also a short-lived Asimov's SF Adventure Magazine and a companion Asimov's Science Fiction Anthology reprint series, published as magazines (in the same manner as the stablemates Ellery Queen's Mystery Magazine and Alfred Hitchcock's Mystery Magazine "anthologies"). Under pressure from fans to write another book in his Foundation series, Asimov did so with Foundation's Edge (1982) and Foundation and Earth (1986), and then went back to before the original trilogy with Prelude to Foundation (1988) and Forward the Foundation (1992), his last novel. Popular science Asimov and two colleagues published a textbook in 1949, with two more editions by 1969. During the late 1950s and 1960s, Asimov substantially decreased his fiction output (he published only four adult novels between 1957's The Naked Sun and 1982's Foundation's Edge, two of which were mysteries). He greatly increased his nonfiction production, writing mostly on science topics; the launch of Sputnik in 1957 engendered public concern over a "science gap". Asimov explained in The Rest of the Robots that he had been unable to write substantial fiction since the summer of 1958, and observers understood him as saying that his fiction career had ended, or was permanently interrupted. Asimov recalled in 1969 that "the United States went into a kind of tizzy, and so did I. I was overcome by the ardent desire to write popular science for an America that might be in great danger through its neglect of science, and a number of publishers got an equally ardent desire to publish popular science for the same reason". Fantasy and Science Fiction invited Asimov to continue his regular nonfiction column, begun in the now-folded bimonthly companion magazine Venture Science Fiction Magazine. The first of 399 monthly F&SF columns appeared in November 1958, and they continued until his terminal illness. These columns, periodically collected into books by Doubleday, gave Asimov a reputation as a "Great Explainer" of science; he described them as his only popular science writing in which he never had to assume complete ignorance of the subjects on the part of his readers. The column was ostensibly dedicated to popular science, but Asimov had complete editorial freedom and wrote about contemporary social issues in essays such as "Thinking About Thinking" and "Knock Plastic!".
In 1975 he wrote of these essays: "I get more pleasure out of them than out of any other writing assignment." Asimov's first wide-ranging reference work, The Intelligent Man's Guide to Science (1960), was nominated for a National Book Award, and in 1963 he won a Hugo Award—his first—for his essays for F&SF. The popularity of his science books and the income he derived from them allowed him to give up most academic responsibilities and become a full-time freelance writer. He encouraged other science fiction writers to write popular science, stating in 1967 that "the knowledgeable, skillful science writer is worth his weight in contracts", with "twice as much work as he can possibly handle". The great variety of information covered in Asimov's writings prompted Kurt Vonnegut to ask, "How does it feel to know everything?" Asimov replied that he only knew how it felt to have the 'reputation' of omniscience: "Uneasy". Floyd C. Gale said that "Asimov has a rare talent. He can make your mental mouth water over dry facts", and "science fiction's loss has been science popularization's gain". Asimov said that "Of all the writing I do, fiction, non-fiction, adult, or juvenile, these F & SF articles are by far the most fun". He regretted, however, that he had less time for fiction—causing dissatisfied readers to send him letters of complaint—stating in 1969 that "In the last ten years, I've done a couple of novels, some collections, a dozen or so stories, but that's nothing". In his essay "To Tell a Chemist" (1965), Asimov proposed a simple shibboleth for distinguishing chemists from non-chemists: ask the person to read the word "unionized". Chemists, he noted, will read un-ionized (electrically neutral), while non-chemists will read union-ized (belonging to a trade union). Coined terms Asimov coined the term "robotics" in his 1941 story "Liar!", though he later remarked that he believed then that he was merely using an existing word, as he stated in Gold ("The Robot Chronicles"). While acknowledging the Oxford English Dictionary citation, he incorrectly states that the word was first printed about one third of the way down the first column of page 100 in the March 1942 issue of Astounding Science Fiction – the printing of his short story "Runaround". In the same story, Asimov also coined the term "positronic" (the counterpart to "electronic" for positrons). Asimov coined the term "psychohistory" in his Foundation stories to name a fictional branch of science which combines history, sociology, and mathematical statistics to make general predictions about the future behavior of very large groups of people, such as the Galactic Empire. Asimov said later that he should have called it psychosociology. It was first introduced in the five short stories (1942–1944) which would later be collected as the 1951 fix-up novel Foundation. Somewhat later, the term "psychohistory" was applied by others to research of the effects of psychology on history. Other writings In addition to his interest in science, Asimov was interested in history. Starting in the 1960s, he wrote 14 popular history books, including The Greeks: A Great Adventure (1965), The Roman Republic (1966), The Roman Empire (1967), The Egyptians (1967), The Near East: 10,000 Years of History (1968), and Asimov's Chronology of the World (1991). He published Asimov's Guide to the Bible in two volumes—covering the Old Testament in 1967 and the New Testament in 1969—and then combined them into one 1,300-page volume in 1981.
Complete with maps and tables, the guide goes through the books of the Bible in order, explaining the history of each one and the political influences that affected it, as well as biographical information about the important characters. His interest in literature manifested itself in several annotations of literary works, including Asimov's Guide to Shakespeare (1970), Asimov's Annotated Don Juan (1972), Asimov's Annotated Paradise Lost (1974), and The Annotated Gulliver's Travels (1980). Asimov was also a noted mystery author and a frequent contributor to Ellery Queen's Mystery Magazine. He began by writing science fiction mysteries such as his Wendell Urth stories, but soon moved on to writing "pure" mysteries. He published two full-length mystery novels, and wrote 66 stories about the Black Widowers, a group of men who met monthly for dinner, conversation, and a puzzle. He got the idea for the Widowers from his own association in a stag group called the Trap Door Spiders, and all of the main characters (with the exception of the waiter, Henry, who he admitted resembled Wodehouse's Jeeves) were modeled after his closest friends. A parody of the Black Widowers, "An Evening with the White Divorcés," was written by author, critic, and librarian Jon L. Breen. Asimov joked, "all I can do ... is to wait until I catch him in a dark alley, someday." Toward the end of his life, Asimov published a series of collections of limericks, mostly written by himself, starting with Lecherous Limericks, which appeared in 1975. Limericks: Too Gross, whose title displays Asimov's love of puns, contains 144 limericks by Asimov and an equal number by John Ciardi. He even created a slim volume of Sherlockian limericks. Asimov featured Yiddish humor in Azazel, The Two Centimeter Demon. The two main characters, both Jewish, talk over dinner, or lunch, or breakfast, about anecdotes of "George" and his friend Azazel. Asimov's Treasury of Humor is both a working joke book and a treatise propounding his views on humor theory. According to Asimov, the most essential element of humor is an abrupt change in point of view, one that suddenly shifts focus from the important to the trivial, or from the sublime to the ridiculous. Particularly in his later years, Asimov to some extent cultivated an image of himself as an amiable lecher. In 1971, as a response to the popularity of sexual guidebooks such as The Sensuous Woman (by "J") and The Sensuous Man (by "M"), Asimov published The Sensuous Dirty Old Man under the byline "Dr. 'A'" (although his full name was printed on the paperback edition, first published 1972). By 2016, however, Asimov's habit of groping women had come to be seen as sexual harassment; it came under criticism and was cited as an early example of inappropriate behavior at science fiction conventions. Asimov published three volumes of autobiography. In Memory Yet Green (1979) and In Joy Still Felt (1980) cover his life up to 1978. The third volume, I. Asimov: A Memoir (1994), covered his whole life (rather than following on from where the second volume left off). The epilogue was written by his widow Janet Asimov after his death. The book won a Hugo Award in 1995. Janet Asimov edited It's Been a Good Life (2002), a condensed version of his three autobiographies. He also published three volumes of retrospectives of his writing, Opus 100 (1969), Opus 200 (1979), and Opus 300 (1984). In 1987, the Asimovs co-wrote How to Enjoy Writing: A Book of Aid and Comfort.
In it they offer advice on how to maintain a positive attitude and stay productive when dealing with discouragement, distractions, rejection, and thick-headed editors. The book includes many quotations, essays, anecdotes, and husband-wife dialogues about the ups and downs of being an author. Asimov and Star Trek creator Gene Roddenberry developed a unique relationship during Star Trek's initial launch in the late 1960s. Asimov wrote a critical essay on Star Trek's scientific accuracy for TV Guide magazine. Roddenberry retorted respectfully with a personal letter explaining the limitations of accuracy when writing a weekly series. Asimov corrected himself with a follow-up essay to TV Guide claiming that despite its inaccuracies, Star Trek was a fresh and intellectually challenging science fiction television show. The two remained friends to the point where Asimov even served as an advisor on a number of Star Trek projects. In 1973, Asimov published a proposal for calendar reform, called the World Season Calendar. It divides the year into four seasons (named A–D) of 13 weeks (91 days) each. This allows days to be named, e.g., "D-73" instead of December 1 (due to December 1 being the 73rd day of the 4th quarter). An extra 'year day' is added for a total of 365 days.
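The arithmetic behind such day names is simple enough to sketch in a few lines of code. The following is a minimal illustrative example, not part of Asimov's proposal itself; it assumes season A begins on the first day of the year and sets aside the question of how the calendar aligns with Gregorian dates and leap years:

```python
def world_season_name(day_of_year: int) -> str:
    """Map a day of the year (1-365) to a World Season Calendar name.

    Four seasons (A-D) of 91 days (13 weeks) each cover days 1-364;
    day 365 is the extra 'Year Day' outside any season.
    Assumption: season A starts on day 1 of the year.
    """
    if not 1 <= day_of_year <= 365:
        raise ValueError("day_of_year must be in 1..365")
    if day_of_year == 365:
        return "Year Day"
    season = "ABCD"[(day_of_year - 1) // 91]      # which 91-day season
    day_in_season = (day_of_year - 1) % 91 + 1    # position within it
    return f"{season}-{day_in_season}"

print(world_season_name(346))  # -> "D-73"
print(world_season_name(365))  # -> "Year Day"
```

Under these assumptions, day 346 of the year comes out as "D-73".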
Awards and recognition
Asimov won more than a dozen annual awards for particular works of science fiction and a half-dozen lifetime awards. He also received 14 honorary doctorate degrees from universities.
1955 – Guest of Honor at the 13th World Science Fiction Convention
1957 – Thomas Alva Edison Foundation Award for best science book for youth, for Building Blocks of the Universe
1960 – Howard W. Blakeslee Award from the American Heart Association for The Living River
1962 – Boston University's Publication Merit Award
1963 – A special Hugo Award for "adding science to science fiction," for essays published in The Magazine of Fantasy and Science Fiction
1963 – Fellow of the American Academy of Arts and Sciences
1964 – The Science Fiction Writers of America voted "Nightfall" (1941) the all-time best science fiction short story
1965 – James T. Grady Award of the American Chemical Society (now called the James T. Grady-James H. Stack Award for Interpreting Chemistry)
1966 – Best All-time Novel Series Hugo Award for the Foundation trilogy
1967 – Edward E. Smith Memorial Award
1967 – AAAS-Westinghouse Science Writing Award for Magazine Writing, for the essay "Over the Edge of the Universe" (in the March 1967 Harper's Magazine)
1972 – Nebula Award for Best Novel for The Gods Themselves
1973 – Hugo Award for Best Novel for The Gods Themselves
1973 – Locus Award for Best Novel for The Gods Themselves
1975 – Golden Plate Award of the American Academy of Achievement
1975 – Klumpke-Roberts Award "for outstanding contributions to the public understanding and appreciation of astronomy"
1975 – Locus Award for Best Reprint Anthology for Before the Golden Age
1977 – Hugo Award for Best Novelette for "The Bicentennial Man"
1977 – Nebula Award for Best Novelette for "The Bicentennial Man"
1977 – Locus Award for Best Novelette for "The Bicentennial Man"
1981 – An asteroid, 5020 Asimov, was named in his honor
1981 – Locus Award for Best Non-Fiction Book for In Joy Still Felt: The Autobiography of Isaac Asimov, 1954–1978
1983 – Hugo Award for Best Novel for Foundation's Edge
1983 – Locus Award for Best Science Fiction Novel for Foundation's Edge
1984 – Humanist of the Year
1986 – The Science Fiction and Fantasy Writers of America named him its 8th SFWA Grand Master (presented in 1987)
1987 – Locus Award for Best Short Story for "Robot Dreams"
1992 – Hugo Award for Best Novelette for "Gold"
1995 – Hugo Award for Best Non-Fiction Book for I. Asimov: A Memoir
1995 – Locus Award for Best Non-Fiction Book for I. Asimov: A Memoir
1996 – A 1946 Retro-Hugo for Best Novel of 1945 was given at the 1996 WorldCon for "The Mule", the 7th Foundation story, published in Astounding Science Fiction
1997 – The Science Fiction and Fantasy Hall of Fame inducted Asimov in its second class of two deceased and two living persons, along with H. G. Wells
2000 – Asimov was featured on a stamp in Israel
2001 – The Isaac Asimov Memorial Debates at the Hayden Planetarium in New York were inaugurated
2009 – A crater on the planet Mars, Asimov, was named in his honor
2010 – In the US Congress bill about the designation of National Robotics Week as an annual event, a tribute to Isaac Asimov reads: "Whereas the second week in April each year is designated as 'National Robotics Week', recognizing the accomplishments of Isaac Asimov, who immigrated to America, taught science, wrote science books for children and adults, first used the term robotics, developed the Three Laws of Robotics, and died in April 1992: Now, therefore, be it resolved ..."
2015 – Selected as a member of the New York State Writers Hall of Fame
2016 – A 1941 Retro-Hugo for Best Short Story of 1940 was given at the 2016 WorldCon for "Robbie", his first positronic robot story, published in Super Science Stories, September 1940
2018 – A 1943 Retro-Hugo for Best Short Story of 1942 was given at the 2018 WorldCon for "Foundation", published in Astounding Science-Fiction, May 1942
Writing style
Asimov was his own secretary, typist, indexer, proofreader, and literary agent. He composed his first drafts directly at the keyboard, typing at 90 words per minute; he imagined an ending first, then a beginning, then "let everything in-between work itself out as I come to it". (Asimov used an outline only once, later describing it as "like trying to play the piano from inside a straitjacket".) After correcting a draft by hand, he retyped the document as the final copy, making only one revision to incorporate minor editor-requested changes; a word processor did not save him much time, Asimov said, because 95% of the first draft was unchanged.
Having disliked making multiple revisions of "Black Friar of the Flame", Asimov refused to make major, second, or non-editorial revisions ("like chewing used gum"), stating that "too large a revision, or too many revisions, indicate that the piece of writing is a failure. In the time it would take to salvage such a failure, I could write a new piece altogether and have infinitely more fun in the process". He submitted "failures" to another editor. Asimov's fiction style is extremely unornamented. In 1980, science fiction scholar James Gunn commented critically on the plain style of I, Robot; Asimov addressed such criticism in 1989 at the beginning of Nemesis, defending his deliberately plain approach. Gunn cited examples of a more complex style, such as the climax of "Liar!". Sharply drawn characters occur at key junctures of his storylines: Susan Calvin in "Liar!" and "Evidence", Arkady Darell in Second Foundation, Elijah Baley in The Caves of Steel, and Hari Seldon in the Foundation prequels. Other than books by Gunn and Joseph Patrouch, there is relatively little literary criticism on Asimov (particularly when compared to the sheer volume of his output); Cowart and Wymer's Dictionary of Literary Biography (1981) offers a possible reason for this neglect. Gunn's and Patrouch's studies of Asimov both state that a clear, direct prose style is still a style. Gunn's 1982 book comments in detail on each of Asimov's novels. He does not praise all of Asimov's fiction (nor does Patrouch), but calls some passages in The Caves of Steel "reminiscent of Proust". When discussing how that novel depicts night falling over futuristic New York City, Gunn says that Asimov's prose "need not be ashamed anywhere in literary society". Although he prided himself on his unornamented prose style (for which he credited Clifford D. Simak as an early influence), and said in 1973 that his style had not changed, Asimov also enjoyed giving his longer stories complicated narrative structures, often by arranging chapters in nonchronological ways. Some readers have been put off by this, complaining that the nonlinearity is not worth the trouble and adversely affects the clarity of the story. For example, the first third of The Gods Themselves begins with Chapter 6, then backtracks to fill in earlier material. (John Campbell advised Asimov to begin his stories as late in the plot as possible. This advice helped Asimov create "Reason", one of the early Robot stories.) Patrouch found that the interwoven and nested flashbacks of The Currents of Space did serious harm to that novel, to such an extent that only a "dyed-in-the-kyrt Asimov fan" could enjoy it. In his later novel Nemesis, one group of characters lives in the "present" and another group starts in the "past", beginning 15 years earlier and gradually moving toward the time of the first group. Alien life Asimov once explained that his reluctance to write about aliens came from an incident early in his career when Astounding's editor John Campbell rejected one of his science fiction stories because the alien characters were portrayed as superior to the humans. The nature of the rejection led him to believe that Campbell may have based his bias towards humans in stories on a real-world racial bias. Unwilling to write only weak alien races, and concerned that a confrontation would jeopardize his and Campbell's friendship, he decided he would not write about aliens at all. Nevertheless, in response to later criticism of this absence, he wrote The Gods Themselves, which contains aliens and alien sex.
The book won the Nebula Award for Best Novel in 1972, and the Hugo Award for Best Novel in 1973. Asimov said that of all his writings, he was most proud of the middle section of The Gods Themselves, the part that deals with those themes. In the Hugo Award–winning novelette "Gold", Asimov describes an author, based on himself, who has one of his books (The Gods Themselves) adapted into a "compu-drama", essentially photo-realistic computer animation. The director criticizes the fictionalized Asimov ("Gregory Laborian") for having an extremely nonvisual style, making it difficult to adapt his work, and the author explains that he relies on ideas and dialogue rather than description to get his points across. Romance and women In the early days of science fiction, some authors and critics felt that romantic elements were inappropriate in science fiction stories, which were supposed to focus on science and technology. Asimov supported this point of view, expressing it in his 1938–1939 letters to Astounding, where he described such elements as "mush" and "slop". To his dismay, these letters were met with strong opposition. Asimov attributed the lack of romance and sex in his fiction to the "early imprinting" from starting his writing career when he had never been on a date and "didn't know anything about girls". He was sometimes criticized for the general absence of sex (and of extraterrestrial life) in his science fiction. He claimed he wrote The Gods Themselves (1972) to respond to these criticisms, which often came from New Wave science fiction (and often British) writers. The second part (of three) of the novel is set on an alien world with three sexes, and the sexual behavior of these creatures is extensively depicted. Views Religion Asimov was an atheist and a humanist. He did not oppose religious conviction in others, but he frequently railed against superstitious and pseudoscientific beliefs that tried to pass themselves off as genuine science. During his childhood, his parents observed the traditions of Orthodox Judaism less stringently than they had in Petrovichi; they did not force their beliefs upon young Isaac, and he grew up without strong religious influences, coming to believe that the Torah represented Hebrew mythology in the same way that the Iliad recorded Greek mythology. When he was 13, he chose not to have a bar mitzvah. As his books Treasury of Humor and Asimov Laughs Again record, Asimov was willing to tell jokes involving God, Satan, the Garden of Eden, Jerusalem, and other religious topics, expressing the viewpoint that a good joke can do more to provoke thought than hours of philosophical discussion. For a brief while, his father worked in the local synagogue to enjoy the familiar surroundings and, as Isaac put it, "shine as a learned scholar" versed in the sacred writings. This scholarship was a seed for his later authorship and publication of Asimov's Guide to the Bible, an analysis of the historic foundations for the Old and New Testaments. For many years, Asimov called himself an atheist; he considered the term somewhat inadequate, as it described what he did not believe rather than what he did. Eventually, he described himself as a "humanist" and considered that term more practical.
Asimov continued to identify himself as a secular Jew, as stated in his introduction to Jack Dann's anthology of Jewish science fiction, Wandering Stars: "I attend no services and follow no ritual and have never undergone that curious puberty rite, the Bar Mitzvah. It doesn't matter. I am Jewish." When asked in an interview in 1982 if he was an atheist, Asimov replied, "I am an atheist, out and out. It took me a long time to say it." Likewise, he said about religious education: "I would not be satisfied to have my kids choose to be religious without trying to argue them out of it, just as I would not be satisfied to have them decide to smoke regularly or engage in any other practice I consider detrimental to mind or body." In his last volume of autobiography, Asimov restated these views. The same memoir states his belief that Hell is "the drooling dream of a sadist" crudely affixed to an all-merciful God; if even human governments were willing to curtail cruel and unusual punishments, wondered Asimov, why would punishment in the afterlife not be restricted to a limited term? Asimov rejected the idea that a human belief or action could merit infinite punishment. If an afterlife existed, he claimed, the longest and most severe punishment would be reserved for those who "slandered God by inventing Hell". Asimov was nonetheless willing to draw on religious motifs in his writing. Politics Asimov became a staunch supporter of the Democratic Party during the New Deal, and thereafter remained a political liberal. He was a vocal opponent of the Vietnam War in the 1960s, and in a television interview during the early 1970s he publicly endorsed George McGovern. He was unhappy about what he considered an "irrationalist" viewpoint taken by many radical political activists from the late 1960s onwards. In his second volume of autobiography, In Joy Still Felt, Asimov recalled meeting the counterculture figure Abbie Hoffman. Asimov's impression was that the 1960s' counterculture heroes had ridden an emotional wave which, in the end, left them stranded in a "no-man's land of the spirit" from which he wondered if they would ever return. Asimov vehemently opposed Richard Nixon, considering him "a crook and a liar". He closely followed Watergate, and was pleased when the president was forced to resign. Asimov was dismayed over the pardon extended to Nixon by his successor Gerald Ford: "I was not impressed by the argument that it has spared the nation an ordeal. To my way of thinking, the ordeal was necessary to make certain it would never happen again." After Asimov's name appeared in the mid-1960s on a list of people the Communist Party USA "considered amenable" to its goals, the FBI investigated him. Because of his academic background, the bureau briefly considered Asimov as a possible candidate for known Soviet spy ROBPROF, but found nothing suspicious in his life or background. Asimov appeared to hold an equivocal attitude towards Israel. In his first autobiography, he indicates his support for the safety of Israel, though insisting that he was not a Zionist. In his third autobiography, Asimov stated his opposition to the creation of a Jewish state, on the grounds that he was opposed to having nation-states in general, and supported the notion of a single humanity. Asimov especially worried about the safety of Israel given that it had been created among Muslim neighbors "who will never forgive, never forget and never go away", and said that Jews had merely created for themselves another "Jewish ghetto". Social issues Asimov believed that "science fiction ... serve[s] the good of humanity".
He considered himself a feminist even before women's liberation became a widespread movement; he argued that the issue of women's rights was closely connected to that of population control. Furthermore, he believed that homosexuality must be considered a "moral right" on population grounds, as must all consenting adult sexual activity that does not lead to reproduction. He issued many appeals for population control, reflecting a perspective articulated by people from Thomas Malthus through Paul R. Ehrlich. In a 1988 interview by Bill Moyers, Asimov proposed computer-aided learning, where people would use computers to find information on subjects in which they were interested. He thought this would make learning more interesting, since people would have the freedom to choose what to learn, and would help spread knowledge around the world. Also, the one-to-one model would let students learn at their own pace. Asimov thought that people would live in space by 2019, a prediction he set out in a 1983 essay that also presented his views on the future of education. Sexual harassment Asimov would often fondle, kiss and pinch women at conventions and elsewhere without regard for their consent. According to Alec Nevala-Lee, author of an Asimov biography and writer on the history of science fiction, he often defended himself by saying that far from showing objections, these women cooperated. In a 1971 satirical piece, The Sensuous Dirty Old Man, Asimov wrote: "The question then is not whether or not a girl should be touched. The question is merely where, when, and how she should be touched." According to Nevala-Lee, however, "many of these encounters were clearly nonconsensual." He wrote that Asimov's behaviour, as a leading science-fiction author and personality, contributed to an undesirable atmosphere for women in the male-dominated science fiction community. In support of this, he quoted some of Asimov's contemporary fellow authors such as Judith Merril, Harlan Ellison and Frederik Pohl, as well as editors such as Timothy Seldes. Additional specific incidents were reported by other people, including Edward L. Ferman, long-time editor of The Magazine of Fantasy & Science Fiction, who wrote "... instead of shaking my date's hand, he shook her left breast." Environment and population Asimov's defense of civil applications of nuclear power, even after the Three Mile Island nuclear power plant incident, damaged his relations with some of his fellow liberals. In a letter reprinted in Yours, Isaac Asimov, he states that although he would prefer living in "no danger whatsoever" to living near a nuclear reactor, he would still prefer a home near a nuclear power plant to a slum on Love Canal or near "a Union Carbide plant producing methyl isocyanate", the latter being a reference to the Bhopal disaster. In the closing years of his life, Asimov blamed the deterioration of the quality of life that he perceived in New York City on the shrinking tax base caused by the middle-class flight to the suburbs, though he continued to support high taxes on the middle class to pay for social programs. His last nonfiction book, Our Angry Earth (1991, co-written with his long-time friend, science fiction author Frederik Pohl), deals with elements of the environmental crisis such as overpopulation, oil dependence, war, global warming, and the destruction of the ozone layer.
In response to being presented by Bill Moyers with the question "What do you see happening to the idea of dignity to human species if this population growth continues at its present rate?", Asimov responded that it would be destroyed, illustrating the point with what he called his "bathroom metaphor" of personal freedoms vanishing as numbers grow. Other authors Asimov enjoyed the writings of J. R. R. Tolkien, and used The Lord of the Rings as a plot point in a Black Widowers story titled "Nothing like Murder". In the essay "All or Nothing" (for The Magazine of Fantasy and Science Fiction, Jan 1981), Asimov said that he admired Tolkien and that he had read The Lord of the Rings five times. (The feelings were mutual, with Tolkien saying that he had enjoyed Asimov's science fiction. This would make Asimov an exception to Tolkien's earlier claim that he rarely found "any modern books" that were interesting to him.) He acknowledged other writers as superior to himself in talent, saying of Harlan Ellison, "He is (in my opinion) one of the best writers in the world, far more skilled at the art than I am." Asimov disapproved of the New Wave's growing influence, stating in 1967: "I want science fiction. I think science fiction isn't really science fiction if it lacks science. And I think the better and truer the science, the better and truer the science fiction". The feelings of friendship and respect between Asimov and Arthur C. Clarke were demonstrated by the so-called "Clarke–Asimov Treaty of Park Avenue", negotiated as they shared a cab in New York. This stated that Asimov was required to insist that Clarke was the best science fiction writer in the world (reserving second-best for himself), while Clarke was required to insist that Asimov was the best science writer in the world (reserving second-best for himself). Thus, the dedication in Clarke's book Report on Planet Three (1972) reads: "In accordance with the terms of the Clarke–Asimov treaty, the second-best science writer dedicates this book to the second-best science-fiction writer." In 1980, Asimov wrote a highly critical review of George Orwell's 1984. Though dismissive of Asimov's attacks, James Machell has stated that they "are easier to understand when you consider that Asimov viewed 1984 as dangerous literature. He opines that if communism were to spread across the globe, it would come in a completely different form to the one in 1984, and by looking to Orwell as an authority on totalitarianism, 'we will be defending ourselves against assaults from the wrong direction and we will lose'." Asimov became a fan of mystery stories at the same time as science fiction. He preferred to read the former because "I read every [science fiction] story keenly aware that it might be worse than mine, in which case I had no patience with it, or that it might be better, in which case I felt miserable". Asimov wrote, "I make no secret of the fact that in my mysteries I use Agatha Christie as my model. In my opinion, her mysteries are the best ever written, far better than the Sherlock Holmes stories, and Hercule Poirot is the best detective fiction has seen. Why should I not use as my model what I consider the best?" He enjoyed Sherlock Holmes, but considered Arthur Conan Doyle to be "a slapdash and sloppy writer." Asimov also enjoyed humorous stories, particularly those of P. G. Wodehouse. In non-fiction writing, Asimov particularly admired the writing style of Martin Gardner, and tried to emulate it in his own science books. On meeting Gardner for the first time in 1965, Asimov told him this, to which Gardner answered that he had based his own style on Asimov's.
Influence Paul Krugman, holder of a Nobel Prize in Economics, stated that Asimov's concept of psychohistory inspired him to become an economist. John Jenkins, who has reviewed the vast majority of Asimov's written output, once observed, "It has been pointed out that most science fiction writers since the 1950s have been affected by Asimov, either modeling their style on his or deliberately avoiding anything like his style." Along with such figures as Bertrand Russell and Karl Popper, Asimov left his mark as one of the most distinguished interdisciplinarians of the 20th century. "Few individuals", writes James L. Christian, "understood better than Isaac Asimov what synoptic thinking is all about. His almost 500 books—which he wrote as a specialist, a knowledgeable authority, or just an excited layman—range over almost all conceivable subjects: the sciences, history, literature, religion, and of course, science fiction." Bibliography Depending on the counting convention used, and including all titles, charts, and edited collections, there may currently be over 500 books in Asimov's bibliography—as well as his individual short stories, individual essays, and criticism. For his 100th, 200th, and 300th books (based on his personal count), Asimov published Opus 100 (1969), Opus 200 (1979), and Opus 300 (1984), celebrating his writing. An extensive bibliography of Isaac Asimov's works has been compiled by Ed Seiler. Analysis of his book-writing rate shows that he wrote faster as he wrote more. An online exhibit in West Virginia University Libraries' virtually complete Asimov Collection displays features, visuals, and descriptions of some of his more than 600 books, games, audio recordings, videos, and wall charts. Many first, rare, and autographed editions are in the Libraries' Rare Book Room. Book jackets and autographs are presented online along with descriptions and images of children's books, science fiction art, multimedia, and other materials in the collection. Science fiction "Greater Foundation" series The Robot series was originally separate from the Foundation series. The Galactic Empire novels were published as independent stories, set earlier in the same future as Foundation. Later in life, Asimov synthesized the Robot series into a single coherent "history" that appeared in the extension of the Foundation series. All of these books were published by Doubleday & Co, except the original Foundation trilogy, which was originally published by Gnome Press before being bought and republished by Doubleday.
The Robot series:
The Caves of Steel (first Elijah Baley SF-crime novel)
The Naked Sun (second Elijah Baley SF-crime novel)
The Robots of Dawn (third Elijah Baley SF-crime novel)
Robots and Empire (sequel to the Elijah Baley trilogy)
Galactic Empire novels:
Pebble in the Sky (early Galactic Empire)
The Stars, Like Dust (long before the Empire)
The Currents of Space (Republic of Trantor still expanding)
Foundation prequels:
Prelude to Foundation (1988)
Forward the Foundation (1992)
Original Foundation trilogy:
Foundation (1951)
Foundation and Empire (1952) (also published with the title 'The Man Who Upset the Universe' as a 35¢ Ace paperback, D-125, in about 1952)
Second Foundation (1953)
Extended Foundation series:
Foundation's Edge (1982)
Foundation and Earth (1986)
Lucky Starr series (as Paul French), all published by Doubleday & Co:
David Starr, Space Ranger (1952)
Lucky Starr and the Pirates of the Asteroids (1953)
Lucky Starr and the Oceans of Venus (1954)
Lucky Starr and the Big Sun of Mercury (1956)
Lucky Starr and the Moons of Jupiter (1957)
Lucky Starr and the Rings of Saturn (1958)
Norby Chronicles (with Janet Asimov), all published by Walker & Company:
Norby, the Mixed-Up Robot (1983)
Norby's Other Secret (1984)
Norby and the Lost Princess (1985)
Norby and the Invaders (1985)
Norby and the Queen's Necklace (1986)
Norby Finds a Villain (1987)
Norby Down to Earth (1988)
Norby and Yobo's Great Adventure (1989)
Norby and the Oldest Dragon (1990)
Norby and the Court Jester (1991)
Novels not part of a series; novels marked with an asterisk (*) have minor connections to the Foundation universe:
The End of Eternity (1955), Doubleday (*)
Fantastic Voyage (1966), Bantam Books (paperback) and Houghton Mifflin (hardback) (a novelization of the movie)
The Gods Themselves (1972), Doubleday
Fantastic Voyage II: Destination Brain (1987), Doubleday (not a sequel to Fantastic Voyage, but a similar, independent story)
Nemesis (1989), Bantam Doubleday Dell (*)
Nightfall (1990), Doubleday, with Robert Silverberg (based on "Nightfall", a 1941 short story written by Asimov)
Child of Time (1992), Bantam Doubleday Dell, with Robert Silverberg (based on "The Ugly Little Boy", a 1958 short story written by Asimov)
The Positronic Man (1992), Bantam Doubleday Dell, with Robert Silverberg (*) (based on "The Bicentennial Man", a 1976 novella written by Asimov)
Short-story collections
Mysteries
Novels:
The Death Dealers (1958), Avon Books, republished as A Whiff of Death by Walker & Company
Murder at the ABA (1976), Doubleday, also published as Authorized Murder
Short-story collections
Black Widowers series:
Tales of the Black Widowers (1974), Doubleday
More Tales of the Black Widowers (1976), Doubleday
Casebook of the Black Widowers (1980), Doubleday
Banquets of the Black Widowers (1984), Doubleday
Puzzles of the Black Widowers (1990), Doubleday
The Return of the Black Widowers (2003), Carroll & Graf
Other mysteries:
Asimov's Mysteries (1968), Doubleday
The Key Word and Other Mysteries (1977), Walker
The Union Club Mysteries (1983), Doubleday
The Disappearing Man and Other Mysteries (1985), Walker
The Best Mysteries of Isaac Asimov (1986), Doubleday
Nonfiction
Popular science
Collections of Asimov's essays for F&SF. The following books collected essays which were originally published as monthly columns in The Magazine of Fantasy and Science Fiction and collected by Doubleday & Co:
Fact and Fancy (1962)
View from a Height (1963)
Adding a Dimension (1964)
Of Time and Space and Other Things (1965)
From Earth to Heaven (1966)
Science, Numbers, and I (1968)
The Solar System and Back (1970)
The Stars in Their Courses (1971)
The Left Hand of the Electron (1972)
The Tragedy of the Moon (1973)
Asimov On Astronomy (updated version of essays in previous collections) (1974)
Asimov On Chemistry (updated version of essays in previous collections) (1974)
Of Matters Great and Small (1975)
Asimov On Physics (updated version of essays in previous collections) (1976)
The Planet That Wasn't (1976)
Asimov On Numbers (updated version of essays in previous collections) (1976)
Quasar, Quasar, Burning Bright (1977)
The Road to Infinity (1979)
The Sun Shines Bright (1981)
Counting the Eons (1983)
X Stands for Unknown (1984)
The Subatomic Monster (1985)
Far as Human Eye Could See (1987)
The Relativity of Wrong (1988)
Asimov on Science: A 30 Year Retrospective 1959–1989 (1989) (features the first essay in the introduction)
Out of the Everywhere (1990)
The Secret of the Universe (1991)
Other general science essay collections:
Only a Trillion (1957), Abelard-Schuman; revised and updated edition 1976
Is Anyone There? (1967), Doubleday (which includes the article in which he coined the term "spome")
Today and Tomorrow and— (1973), Doubleday
Science Past, Science Future (1975), Doubleday
Please Explain (1975), Houghton Mifflin
Life and Time (1978), Doubleday
The Roving Mind (1983), Prometheus Books; new edition 1997
The Dangers of Intelligence (1986), Houghton Mifflin
Past, Present and Future (1987), Prometheus Books
The Tyrannosaurus Prescription (1989), Prometheus Books
Frontiers (1990), Dutton
Frontiers II (1993), Dutton
Other science books by Asimov:
The Chemicals of Life (1954), Abelard-Schuman
Inside the Atom (1956), Abelard-Schuman
Building Blocks of the Universe (1957; revised 1974), Abelard-Schuman
The World of Carbon (1958), Abelard-Schuman
The World of Nitrogen (1958), Abelard-Schuman
Words of Science and the History Behind Them (1959), Houghton Mifflin
The Clock We Live On (1959), Abelard-Schuman
Breakthroughs in Science (1959), Houghton Mifflin
Realm of Numbers (1959), Houghton Mifflin
Realm of Measure (1960), Houghton Mifflin
The Wellsprings of Life (1960), Abelard-Schuman
Life and Energy (1962), Doubleday
The Genetic Code (1962), The Orion Press
The Human Body: Its Structure and Operation (1963; revised), Houghton Mifflin
The Human Brain: Its Capacities and Functions (1963), Houghton Mifflin
Planets for Man (with Stephen H. Dole) (1964), Random House; reprinted by RAND in 2007
An Easy Introduction to the Slide Rule (1965), Houghton Mifflin
The Intelligent Man's Guide to Science (1965), Basic Books; the title varied with each of the four editions, the last being Asimov's New Guide to Science (1984)
The Universe: From Flat Earth to Quasar (1966), Walker
The Neutrino (1966), Doubleday, ASIN B002JK525W
Understanding Physics Vol. I: Motion, Sound, and Heat (1966), Walker
Understanding Physics Vol. II: Light, Magnetism, and Electricity (1966), Walker
Understanding Physics Vol. III: The Electron, Proton, and Neutron (1966), Walker
Photosynthesis (1969), Basic Books
Our World in Space (1974), New York Graphic
Eyes on the Universe: A History of the Telescope (1976), Andre Deutsch Limited
The Collapsing Universe (1977), Walker
Extraterrestrial Civilizations (1979), Crown
A Choice of Catastrophes (1979), Simon & Schuster
Visions of the Universe, with illustrations by Kazuaki Iwasaki (1981), Cosmos Store
Exploring the Earth and the Cosmos (1982), Crown
The Measure of the Universe (1983), Harper & Row
Think About Space: Where Have We Been and Where Are We Going?, with co-author Frank White (1989), Walker
Asimov's Chronology of Science and Discovery (1989), Harper & Row; second edition adds content through 1993
Beginnings: The Story of Origins (1989), Walker
Isaac Asimov's Guide to Earth and Space (1991), Random House
Atom: Journey Across the Subatomic Cosmos (1991), Dutton
Mysteries of Deep Space: Quasars, Pulsars and Black Holes (1994)
Earth's Moon (1988), Gareth Stevens; revised in 2003 by Richard Hantula
The Sun (1988), Gareth Stevens; revised in 2003 by Richard Hantula
The Earth (1988), Gareth Stevens; revised in 2004 by Richard Hantula
Jupiter (1989), Gareth Stevens; revised in 2004 by Richard Hantula
Venus (1990), Gareth Stevens; revised in 2004 by Richard Hantula
Literary works, all published by Doubleday:
Asimov's Guide to Shakespeare, vols I and II (1970)
Asimov's Annotated "Don Juan" (1972)
Asimov's Annotated "Paradise Lost" (1974)
Familiar Poems, Annotated (1976)
Asimov's The Annotated "Gulliver's Travels" (1980)
Asimov's Annotated "Gilbert and Sullivan" (1988)
The Bible:
Words from Genesis (1962), Houghton Mifflin
Words from the Exodus (1963), Houghton Mifflin
Asimov's Guide to the Bible, vols I and II (1967 and 1969; one-volume ed. 1981), Doubleday
The Story of Ruth (1972), Doubleday
In the Beginning (1981), Crown
Autobiography:
In Memory Yet Green: The Autobiography of Isaac Asimov, 1920–1954 (1979), Doubleday
In Joy Still Felt: The Autobiography of Isaac Asimov, 1954–1978 (1980), Doubleday
I. Asimov: A Memoir (1994), Doubleday
It's Been a Good Life (2002), Prometheus Books; condensation of Asimov's three volumes of autobiography, edited by his widow, Janet Jeppson Asimov
History, all published by Houghton Mifflin except where otherwise stated:
The Kite That Won the Revolution (1963)
The Greeks: A Great Adventure (1965)
The Roman Republic (1966)
The Roman Empire (1967)
The Egyptians (1967)
The Near East (1968)
The Dark Ages (1968)
Words from History (1968)
The Shaping of England (1969)
Constantinople: The Forgotten Empire (1970)
The Land of Canaan (1971)
The Shaping of France (1972)
The Shaping of North America: From Earliest Times to 1763 (1973)
The Birth of the United States: 1763–1816 (1974)
Our Federal Union: The United States from 1816 to 1865 (1975)
The Golden Door: The United States from 1865 to 1918 (1977)
Asimov's Chronology of the World (1991), HarperCollins
The March of the Millennia (1991), with co-author Frank White, Walker & Company
Humor:
The Sensuous Dirty Old Man (1971) (as Dr. A), Walker & Company
Isaac Asimov's Treasury of Humor (1971), Houghton Mifflin
Lecherous Limericks (1975), Walker
More Lecherous Limericks (1976), Walker
Still More Lecherous Limericks (1977), Walker
Limericks: Too Gross, with John Ciardi (1978), Norton
A Grossery of Limericks, with John Ciardi (1981), Norton
Limericks for Children (1984), Caedmon
Asimov Laughs Again (1992), HarperCollins
On writing science fiction:
Asimov on Science Fiction (1981), Doubleday
Asimov's Galaxy (1989), Doubleday
Other nonfiction:
Opus 100 (1969), Houghton Mifflin
Asimov's Biographical Encyclopedia of Science and Technology (1964), Doubleday; revised edition 1972
Opus 200 (1979), Houghton Mifflin
Isaac Asimov's Book of Facts (1979), Grosset & Dunlap
Opus 300 (1984), Houghton Mifflin
Our Angry Earth: A Ticking Ecological Bomb (1991), with co-author Frederik Pohl, Tor
Television, music, and film appearances I Robot, a concept album by the Alan Parsons Project that examined some of Asimov's work The Last Word (1959) The Dick Cavett Show, four appearances 1968–71 The Nature of Things (1969) ABC News coverage of Apollo 11, 1969, with Fred Pohl, interviewed by Rod Serling David Frost interview program, August 1969. Frost asked Asimov if he had ever tried to find God and, after some initial evasion, Asimov answered, "God is much more intelligent than I am—let him try to find me." BBC Horizon "It's About Time" (1979), show hosted by Dudley Moore Target ... Earth? (1980) The David Letterman Show (1980) NBC TV Speaking Freely, interviewed by Edwin Newman (1982) ARTS Network talk show hosted by Studs Terkel and Calvin Trillin, approximately (1982) Oltre New York (1986) Voyage to the Outer Planets and Beyond (1986) Gandahar (1987), a French animated science-fiction film by René Laloux. Asimov wrote the English translation for the film. Bill Moyers interview (1988) Stranieri in America (1988) Adaptations Several of his stories ("The Dead Past", "Sucker Bait", "Satisfaction Guaranteed", "Reason", "Liar!", and "The Naked Sun") were adapted as television plays for the first three series of the science-fiction (later horror) anthology series Out of the Unknown between 1965 and 1969. Only "The Dead Past" and "Sucker Bait" are known to still exist entirely as 16mm telerecordings. Tele-snaps, brief audio recordings and video clips exist for "Satisfaction Guaranteed" and "The Prophet" (adapted from "Reason"), while only production stills, brief audio recordings and video clips exist for "Liar!". Production stills and an almost complete audio recording exist for "The Naked Sun". El robot embustero (1966), short film directed by Antonio Lara de Gavilán, based on short story "Liar!" A halhatatlanság halála (1977), TV movie directed by András Rajnai, based on novel The End of Eternity The Ugly Little Boy (1977), short film directed by Barry Morse and Donald W. Thompson, based on novelette The Ugly Little Boy The End of Eternity (1987), film directed by Andrei Yermash, based on novel The End of Eternity Nightfall (1988), film directed by Paul Mayersberg, based on novelette "Nightfall" Robots (1988), film directed by Doug Smith and Kim Takal, based on the Robot series Robot City (1995), an adventure game released for Windows and Mac OS, based on the book series of the same name that consists of science fiction novels written by multiple authors, inspired by the Robot series. Bicentennial Man (1999), film directed by Chris Columbus, based on novelette "The Bicentennial Man" and on novel The Positronic Man Nightfall (2000), film directed by Gwyneth Gibby, based on novelette "Nightfall" I, Robot (2004), film directed by Alex Proyas, with very tenuous connections with the short stories of the Robot series Eagle Eye (2008), film directed by D. J. Caruso, loosely based on short story "All the Troubles of the World" Formula of Death (2012), TV movie directed by Behdad Avand Amini, based on novel The Death Dealers Spell My Name with an S (2014), short film directed by Samuel Ali, based on short story "Spell My Name with an S" Foundation (2021), series created by David S. Goyer and Josh Friedman, based on the Foundation series References Explanatory footnotes Citations General and cited sources Asimov, Isaac. Isaac Asimov's Treasury of Humor (1971), Boston: Houghton Mifflin, . In Memory Yet Green (1979), New York: Avon, . In Joy Still Felt (1980), New York: Avon . I. 
Asimov: A Memoir (1994). Yours, Isaac Asimov (1996), edited by Stanley Asimov. New York: Doubleday. It's Been a Good Life (2002), edited by Janet Asimov. Goldman, Stephen H., "Isaac Asimov", in Dictionary of Literary Biography, Vol. 8, Cowart and Wymer eds. (Gale Research, 1981), pp. 15–29. Gunn, James. "On Variations on a Robot", IASFM, July 1980, pp. 56–81. Isaac Asimov: The Foundations of Science Fiction (1982). The Science of Science-Fiction Writing (2000). External links Asimov Online, a vast repository of information about Asimov, maintained by Asimov enthusiast Edward Seiler. Jenkins' Spoiler-Laden Guide to Isaac Asimov, reviews of all of Asimov's books.
Isaac Asimov
[ "Astronomy" ]
17,619
[ "People associated with astronomy", "Historians of astronomy", "History of astronomy" ]
14,594
https://en.wikipedia.org/wiki/Troll%20%28slang%29
In slang, a troll is a person who posts deliberately offensive or provocative messages online (such as in social media, a newsgroup, a forum, a chat room, an online video game) or who performs similar behaviors in real life. The methods and motivations of trolls can range from benign to sadistic. These messages can be inflammatory, insincere, digressive, extraneous, or off-topic, and may have the intent of provoking others into displaying emotional responses, or manipulating others' perception, thus acting as a bully or a provocateur. The behavior is typically for the troll's amusement, or to achieve a specific result such as disrupting a rival's online activities or purposefully causing confusion or harm to other people. Trolling behaviors involve tactical aggression to incite emotional responses, which can adversely affect the target's well-being. In this context, the noun and the verb forms of "troll" are frequently associated with Internet discourse. Recently, media attention has equated trolling with online harassment. The Courier-Mail and The Today Show have used "troll" to mean "a person who defaces Internet tribute sites with the aim of causing grief to families". In addition, depictions of trolling have been included in popular fictional works, such as the HBO television program The Newsroom, in which a main character encounters harassing persons online and tries to infiltrate their circles by posting negative sexual comments. Usage Application of the term troll is subjective. Some readers may characterize a post as trolling, while others may regard the same post as a legitimate contribution to the discussion, even if controversial. More potent acts of trolling are blatant harassment or off-topic banter. However, the term Internet troll has also been applied to information warfare, hate speech, and even political activism. The "Trollface" is an image occasionally used to indicate trolling in Internet culture. The word is sometimes incorrectly used to refer to anyone with controversial or differing opinions. Such usage goes against the ordinary meaning of troll in multiple ways. While psychologists have determined that psychopathological sadism, dark triad, and dark tetrad personality traits are common among Internet trolls, some observers claim that trolls do not believe the controversial views they claim. Farhad Manjoo criticises this view, noting that if the person is trolling, they are more intelligent than their critics would believe. Responses One common strategy for dealing with online trolls is to ignore them. This approach, known as "don't feed the trolls," is based on the idea that trolls feed on attention and reactions. By withholding these, the troll may lose interest and stop their disruptive behavior. However, ignoring trolls is not always effective. Trolls may interpret a lack of response as a weakness and escalate their harassment. Reporting the troll to the platform administrators may be necessary in such cases. Most online platforms have guidelines against harassment and abuse, and reporting the troll can lead to their account being suspended or banned. Origin and etymology There are competing theories of where and when "troll" was first used in Internet slang, with numerous unattested accounts of BBS and Usenet origins in the early 1980s or before. The English noun "troll" in the standard sense of ugly dwarf or giant dates to 1610 and originates from the Old Norse word "troll" meaning giant or demon. 
The word evokes the trolls of Scandinavian folklore and children's tales: antisocial, quarrelsome and slow-witted creatures which make life difficult for travelers. Trolls have existed in folklore and fantasy literature for centuries, and online trolling has been around for as long as the Internet has existed. In modern English usage, "trolling" may describe the fishing technique of slowly dragging a lure or baited hook from a moving boat, whereas trawling describes the generally commercial act of dragging a fishing net. Early non-Internet slang use of "trolling" can be found in the military: by 1972 the term "trolling for MiGs" was documented in use by US Navy pilots in Vietnam. It referred to use of "...decoys, with the mission of drawing...fire away..." The contemporary use of the term is said to have appeared on the Internet in the late 1980s, but the earliest known attestation according to the Oxford English Dictionary is in 1992. The context of the quote cited in the Oxford English Dictionary sets the origin in Usenet in the early 1990s as in the phrase "trolling for newbies", as used in alt.folklore.urban (AFU). Commonly, what is meant is a relatively gentle inside joke by veteran users, presenting questions or topics that had been so overdone that only a new user would respond to them earnestly. For example, a veteran of the group might make a post on the common misconception that glass flows over time. Long-time readers would both recognize the poster's name and know that the topic had been discussed repeatedly, but new subscribers to the group would not realize, and would thus respond. These types of trolls served as a practice to identify group insiders. This definition of trolling, considerably narrower than the modern understanding of the term, was considered a positive contribution. One of the most notorious AFU trollers, David Mikkelson, went on to create the urban folklore website Snopes.com. By the late 1990s, alt.folklore.urban had such heavy traffic and participation that trolling of this sort was frowned upon. Others expanded the term to include the practice of playing a seriously misinformed user, even in newsgroups where one was not a regular; these were often attempts at humor rather than provocation. The noun troll usually referred to an act of trolling – or to the resulting discussion – rather than to the author, though some posts punned on the dual meaning of troll. The August 26, 1997 strip of webcomic Kevin and Kell used the word troll to describe those that deliberately harass or provoke other Internet users, similar to the modern sense of the word. In other languages In Chinese, trolling is referred to as bái mù (), which can be straightforwardly explained as "eyes without pupils", in the sense that while the pupil of the eye is used for vision, the white section of the eye cannot see, and trolling involves blindly talking nonsense over the Internet, having total disregard to sensitivities or being oblivious to the situation at hand, akin to having eyes without pupils. An alternative term is bái làn (), which describes a post completely nonsensical and full of folly made to upset others, and derives from a Taiwanese slang term for the male genitalia, where genitalia that is pale white in color represents that someone is young, and thus foolish. Both terms originate from Taiwan, and are also used in Hong Kong and mainland China. Another term, xiǎo bái (), is a derogatory term for both bái mù and bái làn that is used on anonymous posting Internet forums. 
Another common term for a troll used in mainland China is pēn zi (). In Hebrew the word refers both to internet trolls, who engage in disruptive behavior on social media and online platforms, or to the mythical creatures similar to trolls found in European mythology. The word is also inflected into a verb form, , which means to engage in trolling behavior on the internet or social media. In Icelandic, þurs (a thurs) or tröll (a troll) may refer to trolls, the verbs þursa (to troll) or þursast (to be trolling, to troll about) may be used. In Japanese, means "fishing" and refers to intentionally misleading posts whose only purpose is to get the readers to react, i.e. get trolled. means "laying waste" and can also be used to refer to simple spamming. In Korean, nak-si (낚시) means "fishing" and refers to Internet trolling attempts, as well as purposely misleading post titles. A person who recognizes the troll after having responded (or, in case of a post title, nak-si, having read the actual post) would often refer to themselves as a caught fish. In Portuguese, more commonly in its Brazilian variant, troll (pronounced in most of Brazil as spelling pronunciation) is the usual term to denote Internet trolls (examples of common derivate terms are trollismo or trollagem, "trolling", and the verb trollar, "to troll", which entered popular use), but an older expression, used by those which want to avoid anglicisms or slangs, is complexo do pombo enxadrista to denote trolling behavior, and pombos enxadristas (literally, "chessplayer pigeons") or simply pombos are the terms used to name the trolls. The terms are explained by an adage or popular saying: "Arguing with fulano (i.e., John Doe) is the same as playing chess with a pigeon: it defecates on the table, drops the pieces and simply flies off, claiming victory." In Thai, the term krian (เกรียน) has been adopted to address Internet trolls. According to the Royal Institute of Thailand, the term, which literally refers to a closely cropped hairstyle worn by schoolboys in Thailand, is from the behaviour of these schoolboys who usually gather to play online games and, during which, make annoying, disruptive, impolite, or unreasonable expressions. Trolling, identity, and anonymity Early incidents of trolling were considered to be the same as flaming, but this has changed with modern usage by the news media to refer to the creation of any content that targets another person. The Internet dictionary, NetLingo, suggests there are four grades of trolling: playtime trolling, tactical trolling, strategic trolling, and domination trolling. The relationship between trolling and flaming was observed in open-access forums in California, on a series of modem-linked computers. CommuniTree was begun in 1978 but was closed in 1982 when accessed by high school teenagers, becoming a ground for trashing and abuse. Some psychologists have suggested that flaming would be caused by deindividuation or decreased self-evaluation: the anonymity of online postings would lead to disinhibition amongst individuals. Others have suggested that although flaming and trolling is often unpleasant, it may be a form of normative behavior that expresses the social identity of a certain user group. 
According to Tom Postmes, a professor of social and organisational psychology at the universities of Exeter, England, and Groningen, The Netherlands, and the author of Individuality and the Group, who has studied online behavior for 20 years, "Trolls aspire to violence, to the level of trouble they can cause in an environment. They want it to kick off. They want to promote antipathetic emotions of disgust and outrage, which morbidly gives them a sense of pleasure." Someone who brings off-topic material into a conversation in order to anger another participant is trolling. The practice of trolling has been documented by a number of academics since the 1990s. This included Steven Johnson in 1997 in the book Interface Culture, and a paper by Judith Donath in 1999. Donath's paper outlines the ambiguity of identity in a disembodied "virtual community" such as Usenet, and provides a concise overview of identity deception games which trade on the confusion between physical and epistemic community. Whitney Phillips observes in This is Why We Can't Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture that certain behaviors are consistent among different types of trolls. First, trolls of the subcultural variety self-identify as trolls. Trolls are also motivated by what is known as lulz, a type of unsympathetic, ambiguous laughter. The final behavior is the insistent need for anonymity. According to Phillips, anonymity allows trolls to engage in behaviors they would not replicate in professional or public settings, with the effectiveness of trolling often being dependent upon the target's lack of anonymity. This can include the disclosure of real-life attachments, interests, and vulnerabilities of the target. A troll can disrupt the discussion on a newsgroup or online forum, disseminate bad advice, and damage the feeling of trust in the online community. In a group that has become sensitized to trolling (where the rate of deception is high), many honestly naïve questions may be quickly rejected as trolling. This can be quite off-putting to the new user who, upon first posting, is immediately bombarded with angry accusations. Even if the accusations are unfounded, being branded a troll may be damaging to one's online reputation. Susan Herring and colleagues, in "Searching for Safety Online: Managing 'Trolling' in a Feminist Forum", point out the difficulty inherent in monitoring trolling and maintaining freedom of speech in online communities: "harassment often arises in spaces known for their freedom, lack of censure, and experimental nature". Free speech may lead to tolerance of trolling behavior, complicating the members' efforts to maintain an open, yet supportive discussion area, especially for sensitive topics such as race, gender, and sexuality. Cyberbullying laws vary by state, as trolling is not a crime under U.S. federal law. In an effort to reduce uncivil behavior by increasing accountability, many web sites (e.g. Reuters, Facebook, and Gizmodo) now require commenters to register their names and e-mail addresses. Trolling itself has become its own form of Internet subculture and has developed its own set of rituals, rules, specialized language, and dedicated spaces of practice. The appeal of trolling primarily comes from the thrill of how long one can keep the ruse going before getting caught and exposed as a troll. 
When understood this way, Internet trolls are less like vulgar, indiscriminate bullies, and closer to countercultural respondents to a (so called) overly sensitive public. The main elements of why people troll are interactions; trolling exists in the interactive communications between Internet users, influencing people's views both from objective and emotional standpoints. Further, trolling does not target a single individual, but rather targets multiple members of a discussion. Trolling can be easily identified by its offensive content, intended to provoke an emotional reaction from an audience. Corporate, political, and special-interest sponsored trolls Organizations and countries may utilize trolls to manipulate public opinion as part and parcel of an astroturfing initiative. When trolling is sponsored by the government, it is often called state-sponsored Internet propaganda or state-sponsored trolling. Teams of sponsored trolls are sometimes referred to as sockpuppet armies. A 2016 study by Harvard political scientist Gary King reported that the Chinese government's 50 Cent Party creates 440 million pro-government social media posts per year. The report said that government employees were paid to create pro-government posts around the time of national holidays to avoid mass political protests. The Chinese Government ran an editorial in the state-funded Global Times defending censorship and 50 Cent Party trolls. A 2016 study for the NATO Strategic Communications Centre of Excellence on hybrid warfare notes that the Russo-Ukrainian War "demonstrated how fake identities and accounts were used to disseminate narratives through social media, blogs, and web commentaries in order to manipulate, harass, or deceive opponents." The NATO report describes that a "Wikipedia troll" uses a type of message design where a troll does not add "emotional value" to reliable "essentially true" information in re-posts, but presents it "in the wrong context, intending the audience to draw false conclusions." For example, information, without context, from Wikipedia about the military history of the United States "becomes value-laden if it is posted in the comment section of an article criticizing Russia for its military actions and interests in Ukraine. The Wikipedia troll is 'tricky', because in terms of actual text, the information is true, but the way it is expressed gives it a completely different meaning to its readers." Unlike "classic trolls", Wikipedia trolls "have no emotional input, they just supply misinformation" and are one of "the most dangerous" as well as one of "the most effective trolling message designs." Even among people who are "emotionally immune to aggressive messages" and apolitical, "training in critical thinking" is needed, according to the NATO report, because "they have relatively blind trust in Wikipedia sources and are not able to filter information that comes from platforms they consider authoritative." While Russian-language hybrid trolls use the Wikipedia troll message design to promote anti-Western sentiment in comments, they "mostly attack aggressively to maintain emotional attachment to issues covered in articles." Discussions about topics other than international sanctions during the Ukrainian crisis "attracted very aggressive trolling" and became polarized, according to the NATO report, which "suggests that in subjects in which there is little potential for re-educating audiences, emotional harm is considered more effective" for pro-Russian Latvian-language trolls. 
A 2016 study on fluoridation decision-making in Israel coined the term "Uncertainty Bias" to describe the efforts of those in power in government, public health, and the media to aggressively advance agendas by misrepresentation of historical and scientific fact. The authors noted that authorities tended to overlook or to deny situations that involve uncertainty while making unscientific arguments and disparaging comments in order to undermine opposing positions. The New York Times reported in late October 2018 that Saudi Arabia used an online army of Twitter trolls to harass the late Saudi dissident journalist Jamal Khashoggi and other critics of the Saudi government. In October 2018, The Daily Telegraph reported that Facebook "banned hundreds of pages and accounts which it says were fraudulently flooding its site with partisan political content – although they came from the US instead of being associated with Russia." While the corporate networking site LinkedIn is considered a platform of good taste and professionalism, companies harvesting personal information by promoting jobs that were not real, along with fake accounts posting political messages, have caught the company off guard. Psychological characteristics Researcher Ben Radford wrote about the phenomenon of clowns in history and the modern day in his book Bad Clowns, and found that "bad clowns" have evolved into Internet trolls. They do not dress up as traditional clowns but, for their own amusement, they tease and exploit "human foibles" in order to speak the "truth" and gain a reaction. Like clowns in make-up, Internet trolls hide behind "anonymous accounts and fake usernames". In their eyes, they are the trickster and are performing for a nameless audience via the Internet. Studies conducted in the fields of human–computer interaction and cyberpsychology by other researchers have corroborated Radford's analysis of the phenomenon of Internet trolling as a form of deception-serving entertainment and its correlations with aggressive behaviour, katagelasticism, black humor, and the Dark tetrad. Trolling correlates positively with sadism, trait psychopathy, and Machiavellianism (see dark triad). Trolls take pleasure from causing pain and emotional suffering. Their ability to upset or harm gives them a feeling of power. Psychological research conducted in the fields of personality psychology and cyberpsychology reports that trolling behaviour qualifies as anti-social behaviour and is strongly correlated with sadistic personality disorder (SPD). Research has shown that men, compared with women, are more likely to perpetrate trolling behaviour; these gender differences in online anti-social behaviour may be a reflection of gender stereotypes, where agentic characteristics such as competitiveness and dominance are encouraged in men. The results corroborated that gender (male) is a significant predictor of trolling behaviour, alongside trait psychopathy and sadism, which are also significant positive predictors. Moreover, these studies have shown that people who enjoy trolling online tend to also enjoy hurting other people in everyday life, therefore corroborating a longstanding and persistent pattern of psychopathological sadism. 
A psychoanalytic and sexologic study on the phenomenon of Internet trolling asserts that anonymity increases the incidence of the trolling behaviour, and that "the internet is becoming a medium to invest our anxieties and not thinking about the repercussions of trolling and affecting the victims mentally and incite a sense of guilt and shame within them". Concern troll Concern trolls pretend to be sympathetic to a certain point of view of which they are actually critical. A concern troll will often declare an interest in joining or allying with a certain cause, while subtly ridiculing it. The concern troll posts in web forums devoted to their declared point of view and attempts to sway the group's actions or opinions while claiming to share their goals, but with professed "concerns". The goal is to sow fear, uncertainty, and doubt within the group, sometimes by appealing to outrage culture. For example, a person who wishes to shame obese people, but disguises this impulse as concern for the health of overweight people, could be considered a concern troll. A verifiable example of concern trolling within politics occurred in 2006 when Tad Furtado, a member of staff for then-Congressman Charles Bass (R-N.H.), was caught posing as a "concerned" supporter of Bass's opponent, Democrat Paul Hodes, on several liberal New Hampshire blogs, using the pseudonyms "IndieNH" or "IndyNH". "IndyNH" expressed concern that Democrats might just be wasting their time or money on Hodes, because Bass was unbeatable. Hodes eventually won the election. Although the term "concern troll" originated in discussions of online behavior, it now sees increasing use to describe similar offline behaviors. For example, James Wolcott of Vanity Fair accused a conservative New York Daily News columnist of "concern troll" behavior in his efforts to downplay the Mark Foley scandal. Wolcott links what he calls concern trolls to what Saul Alinsky calls "Do-Nothings", giving a long quote from Alinsky on the Do-Nothings' method and effects. The Hill published an op-ed piece by Markos Moulitsas of the liberal blog Daily Kos titled "Dems: Ignore 'Concern Trolls'". The concern trolls in question were not Internet participants but rather Republicans offering public advice and warnings to the Democrats that could be considered deceptive. Troll sites The online forum TOTSE, created in 1997, is considered one of the earliest trolling communities, predating 4chan by several years. A New York Times article discussed troll activity at 4chan and at Encyclopedia Dramatica, which it described as "an online compendium of troll humor and troll lore". 4chan's /b/ board is recognized as "one of the Internet's most infamous and active trolling hotspots". This site and others are often used as a base to troll against sites that their members cannot normally post on. These trolls feed off the reactions of their victims because "their agenda is to take delight in causing trouble". Places like Reddit, 4chan, and other anonymous message boards are prime real estate for online trolls. Because there is no easy way of tracing who someone is, trolls can post very inflammatory content without repercussion. The online French group Ligue du LOL has been accused of organized harassment and described as a troll group. Media coverage and controversy Mainstream media outlets have focused their attention on the willingness of some Internet users to go to extreme lengths to participate in organized psychological harassment. 
Australia In February 2010, the Australian government became involved after users defaced the Facebook tribute pages of murdered children Trinity Bates and Elliott Fletcher. Australian communications minister Stephen Conroy decried the attacks, committed mainly by 4chan users, as evidence of the need for greater Internet regulation, stating, "This argument that the Internet is some mystical creation that no laws should apply to, that is a recipe for anarchy and the wild west." Facebook responded by strongly urging administrators to be aware of ways to ban users and remove inappropriate content from Facebook pages. In 2012, the Daily Telegraph started a campaign to take action against "Twitter trolls", who abuse and threaten users. Several high-profile Australians including Charlotte Dawson, Robbie Farah, Laura Dundovic, and Ray Hadley have been victims of this phenomenon. India According to journalist Swati Chaturvedi and others, the ruling Bharatiya Janata Party (BJP) runs networks of social media trolls tasked with intimidating political opponents. Bollywood celebrities can face strong social media backlash for their political comments. When actor Shah Rukh Khan criticized the country's intolerance and called for secularism, many promoted a boycott of his upcoming movie, including several right-wing politicians, one of whom compared Khan to a terrorist. In 2015, when the Maharashtra state government banned the sale and consumption of cattle meat (reflecting Hindu beliefs), online trolls attacked stars who criticized the law; actor Rishi Kapoor received insults and had his Hindu faith questioned. Though the death sentence of convicted terrorist Yakub Memon was criticized by "many", including human rights activists and a former Supreme Court chief justice, Bollywood star Salman Khan received "overwhelming" online anger for expressing the same views; the trolling spilled over into real life, with some protestors burning his effigy. Newslaundry covered the phenomenon of "Twitter trolling" in its "Criticles", also characterizing Twitter trolls in its weekly podcasts. The troll community of Kerala has birthed some troll slang in Malayalam due to the use of such new words in trolling events that have become viral; some examples are Kummanadi ("using public transportation without a ticket"), OMKV ("GTFO"), and kiduve or kidu ("cool"; "awesome"). Japan In July 2022, Japanese law banned "online insults", punishable by up to one year of imprisonment. Under this law, an "insult" () is defined as "publicly demeaning someone's social standing without referring to specific facts about them or a specific action." United Kingdom In the United Kingdom, contributions made to the Internet are covered by the Malicious Communications Act 1988 as well as Section 127 of the Communications Act 2003, under which jail sentences were, until 2015, limited to a maximum of six months. In October 2014, the UK's Justice Secretary, Chris Grayling, said that "Internet trolls" would face up to two years in jail, under measures in the Criminal Justice and Courts Bill that extend the maximum sentence and time limits for bringing prosecutions. The House of Lords Select Committee on Communications had earlier recommended against creating a specific offence of trolling. Sending messages which are "grossly offensive or of an indecent, obscene or menacing character" is an offence whether they are received by the intended recipient or not. Several people have been imprisoned in the UK for online harassment. 
Trolls of the testimonial page of Georgia Varley faced no prosecution due to misunderstandings of the legal system in the wake of the term trolling being popularized. In October 2012, a twenty-year-old man was jailed for twelve weeks for posting offensive jokes to a support group for friends and family of April Jones. Between 2008 and 2017, 5,332 people in London were arrested and charged for behavior on social media deemed in violation of Communications Act 2003. United States On 31 March 2010, NBC's Today ran a segment detailing the deaths of three separate adolescent girls and trolls' subsequent reactions to their deaths. Shortly after the suicide of high school student Alexis Pilkington, anonymous posters began performing organized psychological harassment across various message boards, referring to Pilkington as a "suicidal slut", and posting graphic images on her Facebook memorial page. The segment also included an exposé of a 2006 accident, in which an eighteen-year-old fatally crashed her father's car into a highway pylon; trolls emailed her grieving family the leaked pictures of her mutilated corpse (see Nikki Catsouras photographs controversy). In 2007, the media was fooled by trollers into believing that students were consuming a drug called Jenkem, purportedly made of human waste. A user named Pickwick on TOTSE posted pictures implying that he was inhaling this drug. Major news corporations such as Fox News Channel reported the story and urged parents to warn their children about this drug. Pickwick's pictures of Jenkem were fake and the pictures did not actually feature human waste. In August 2012, the subject of trolling was featured on the HBO television series The Newsroom. The character Neal Sampat encounters harassing individuals online, particularly looking at 4chan, and he ends up choosing to post negative comments himself on an economics-related forum. The attempt by the character to infiltrate trolls' inner circles attracted debate from media reviewers critiquing the series. In 2019, it was alleged that progressive Democrats had created a fake Facebook page which mis-represented the political stance of Roy Moore, a Republican candidate, in the attempt to alienate him from pro-business Republicans. It was also alleged that a "false flag" experiment attempted to link Moore to the use of Russian Twitter bots. The New York Times, when exposing the scam, quoted a New Knowledge report that boasted of its fabrications: "We orchestrated an elaborate 'false flag' operation that planted the idea that the [Roy] Moore campaign was amplified on social media by a Russian botnet. The 2020 Democratic presidential candidate Bernie Sanders has faced criticism for the behavior of some of his supporters online, but has deflected such criticism, suggesting that "Russians" were impersonating people claiming to be "Bernie Bro" supporters. Twitter rejected Sanders' suggestion that Russia could be responsible for the bad reputation of his supporters. A Twitter spokesperson told CNBC: "Using technology and human review in concert, we proactively monitor Twitter to identify attempts at platform manipulation and mitigate them. As is standard, if we have reasonable evidence of state-backed information operations, we'll disclose them following our thorough investigation to our public archive — the largest of its kind in the industry." Twitter had suspended 70 troll accounts that posted content in support of Michael Bloomberg's presidential campaign. The 45th U.S. 
president Donald Trump infamously used Twitter to denigrate his political opponents and spread misinformation for which he earned the moniker "Troll-In-Chief" by The New Yorker. Examples So-called Gold Membership trolling originated in 2007 on 4chan boards, when users posted fake images claiming to offer upgraded 4chan account privileges; without a "Gold" account, one could not view certain content. This turned out to be a hoax designed to fool board members, especially newcomers. It was copied and became an Internet meme. In some cases, this type of troll has been used as a scam, most notably on Facebook, where fake Facebook Gold Account upgrade ads have proliferated in order to link users to dubious websites and other content. The case of Zeran v. America Online, Inc. resulted primarily from trolling. Six days after the Oklahoma City bombing, anonymous users posted advertisements for shirts celebrating the bombing on AOL message boards, claiming that the shirts could be obtained by contacting Mr. Kenneth Zeran. The posts listed Zeran's address and home phone number. Zeran was subsequently harassed. Anti-scientology protests by Anonymous, commonly known as Project Chanology, are sometimes labeled as "trolling" by media such as Wired, and the participants sometimes explicitly self-identify as "trolls". Neo-Nazi website The Daily Stormer orchestrates what it calls a "Troll Army", and has encouraged trolling of Jewish MP Luciana Berger and Muslim activist Mariam Veiszadeh. Ken McCarthy, going by the online pseudonym "Ken M", is considered one of the greatest internet trolls of all time. Ken M is known for trolling forums and comment sections by playing a "well-meaning moron" online. McCarthy compared his trolling to a comedy routine, where strangers who responded to his comments became unwitting "straight men". Ken M would reply with increasingly absurd statements until his ruse was discovered. In 2020, the official Discord server and Twitch channel for the U.S. Army Esports team became a target of trolling, as people sent anti-U.S. Army messages, memes, and references to war crimes committed by the United States to both. When the team started banning users from their Twitch channel for trolling, they were accused of violating the First Amendment to the United States Constitution by the ACLU and Knight First Amendment Institute at Columbia University. The team has since denied these allegations. In 2021, the Salon columnist Amanda Marcotte, author of Troll Nation: How the Right Became Trump-Worshipping Monsters Set on Rat-F*cking Liberals, America, and Truth Itself (2018), described the American far-right exclusively male organization Proud Boys, the conservative pundit Tucker Carlson, and podcast host Joe Rogan as political commentators who have mastered "the art of trolling as a far-right recruitment strategy" by preying upon the American male insecurities, mediocrity, and fragility. In particular, regarding their respective discriminatory comments about transgender people, she remarks "how crucial gender anxiety is to far-right recruitment". Elon Musk calls himself Chief Troll, has trolled world leaders, and saluted the crowd in what The Atlantic described as a deliberately offensive and provocative way at Donald Trump's second inauguration. See also References Further reading Walter, T.; Hourizi, R.; Moncur, W.; Pitsillides (2012). "Does the Internet Change How We Die And Mourn?" — An overview Online. 
External links Trolling advocacy and safety The Trolling Academy – trolling advice, comment, and training Get Safe Online – free expert advice on online safety Background and definitions NetLingo definition Academic and debate Searching for Safety Online: Managing "Trolling" in a Feminist Forum How to Respond to Internet Rage Malwebolence – The World of Web Trolling; New York Times Magazine, by Mattathias Schwartz; 3 August 2008. Internet Trolls Are Narcissists, Psychopaths, and Sadists. Jennifer Golbeck for Psychology Today. 18 September 2014.
Troll (slang)
[ "Technology" ]
7,143
[ "Computing terminology", "Internet terminology" ]
14,617
https://en.wikipedia.org/wiki/Intel
Intel Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, and incorporated in Delaware. Intel designs, manufactures, and sells computer components such as CPUs and related products for business and consumer markets. It is considered one of the world's largest semiconductor chip manufacturers by revenue and ranked in the Fortune 500 list of the largest United States corporations by revenue for nearly a decade, from 2007 to 2016 fiscal years, until it was removed from the ranking in 2018. In 2020, it was reinstated and ranked 45th, being the 7th-largest technology company in the ranking. Intel supplies microprocessors for most manufacturers of computer systems, and is one of the developers of the x86 series of instruction sets found in most personal computers (PCs). It also manufactures chipsets, network interface controllers, flash memory, graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and other devices related to communications and computing. Intel has a strong presence in the high-performance general-purpose and gaming PC market with its Intel Core line of CPUs, whose high-end models are among the fastest consumer CPUs, as well as its Intel Arc series of GPUs. The Open Source Technology Center at Intel hosts PowerTOP and LatencyTOP, and supports other open source projects such as Wayland, Mesa, Threading Building Blocks (TBB), and Xen. Intel was founded on July 18, 1968, by semiconductor pioneers Gordon Moore (of Moore's law) and Robert Noyce, along with investor Arthur Rock, and is associated with the executive leadership and vision of Andrew Grove. The company was a key component of the rise of Silicon Valley as a high-tech center, as well as being an early developer of SRAM and DRAM memory chips, which represented the majority of its business until 1981. Although Intel created the world's first commercial microprocessor chip—the Intel 4004—in 1971, it was not until the success of the PC in the early 1990s that this became its primary business. During the 1990s, the partnership between Microsoft Windows and Intel, known as "Wintel", became instrumental in shaping the PC landscape and solidified Intel's position on the market. As a result, Intel invested heavily in new microprocessor designs in the mid to late 1990s, fostering the rapid growth of the computer industry. During this period, it became the dominant supplier of PC microprocessors, with a market share of 90%, and was known for aggressive and anti-competitive tactics in defense of its market position, particularly against AMD, as well as a struggle with Microsoft for control over the direction of the PC industry. Since the 2000s and especially since the late 2010s, Intel has faced increasing competition, which has led to a reduction in Intel's dominance and market share in the PC market. Nevertheless, with a 68.4% market share as of 2023, Intel still leads the x86 market by a wide margin. In addition, Intel's ability to design and manufacture its own chips is considered a rarity in the semiconductor industry, as most chip designers do not have their own production facilities and instead rely on contract manufacturers (e.g. TSMC, Foxconn and Samsung), as AMD and Nvidia do. Industries Operating segments Client Computing Group (51.8% of 2020 revenues) produces PC processors and related components. Data Center Group (33.7% of 2020 revenues) produces hardware components used in server, network, and storage platforms. 
Internet of Things Group (5.2% of 2020 revenues) offers platforms designed for retail, transportation, industrial, buildings and home use. Programmable Solutions Group (2.4% of 2020 revenues) manufactures programmable semiconductors (primarily FPGAs). Customers In 2023, Dell accounted for about 19% of Intel's total revenues, Lenovo accounted for 11% of total revenues, and HP Inc. accounted for 10% of total revenues. As of May 2024, the U.S. Department of Defense is another large customer for Intel. In September 2024, Intel reportedly qualified for as much as $3.5 billion in federal grants to make semiconductors for the Defense Department. Market share According to IDC, while Intel enjoyed the biggest market share in both the overall worldwide PC microprocessor market (73.3%) and the mobile PC microprocessor market (80.4%) in the second quarter of 2011, the numbers decreased by 1.5% and 1.9% compared to the first quarter of 2011. Intel's market share decreased significantly in the enthusiast market as of 2019, and they have faced delays for their 10 nm products. According to former Intel CEO Bob Swan, the delay was caused by the company's overly aggressive strategy for moving to its next node. Historical market share In the 1980s, Intel was among the world's top ten sellers of semiconductors (10th in 1987). Along with Microsoft Windows, it was part of the "Wintel" personal computer domination in the 1990s and early 2000s. In 1992, Intel became the biggest semiconductor chip maker by revenue and held the position until 2018 when Samsung Electronics surpassed it, but Intel returned to its former position the year after. Other major semiconductor companies include TSMC, GlobalFoundries, Texas Instruments, ASML, STMicroelectronics, United Microelectronics Corporation (UMC), Micron, SK Hynix, Kioxia, and SMIC. Major competitors Intel's competitors in PC chipsets included AMD, VIA Technologies, Silicon Integrated Systems, and Nvidia. Intel's competitors in networking include NXP Semiconductors, Infineon, Broadcom Limited, Marvell Technology Group and Applied Micro Circuits Corporation, and competitors in flash memory included Spansion, Samsung Electronics, Qimonda, Kioxia, STMicroelectronics, Micron, SK Hynix, and IBM. The only major competitor in the x86 processor market is AMD, with which Intel has had full cross-licensing agreements since 1976: each partner can use the other's patented technological innovations without charge after a certain time. However, the cross-licensing agreement is canceled in the event of an AMD bankruptcy or takeover. Some smaller competitors, such as VIA Technologies, produce low-power x86 processors for small form factor computers and portable equipment. However, the advent of such mobile computing devices, in particular, smartphones, has led to a decline in PC sales. Since over 95% of the world's smartphones currently use processor cores designed by Arm, using the Arm instruction set, Arm has become a major competitor for Intel's processor market. Arm is also attempting to enter the PC and server market, with Ampere and IBM each individually designing CPUs for servers and supercomputers. The only other major competitor in processor instruction sets is RISC-V, which is an open source CPU instruction set. The major Chinese phone and telecommunications manufacturer Huawei has released chips based on the RISC-V instruction set due to US sanctions against China. 
Intel has been involved in several disputes regarding the violation of antitrust laws, which are noted below. Carbon footprint Intel reported total CO2e emissions (direct + indirect) for the twelve months ending December 31, 2020, at 2,882 Kt (+94/+3.4% y-o-y). Intel plans to reduce carbon emissions 10% by 2030 from a 2020 base year. Manufacturing locations Intel has self-reported that they have Wafer fabrication plants in the United States, Ireland, and Israel. They have also self-reported that they have assembly and testing sites mostly in China, Costa Rica, Malaysia, and Vietnam, and one site in the United States. Corporate history Origins Intel was incorporated in Mountain View, California, on July 18, 1968, by Gordon E. Moore (known for "Moore's law"), a chemist; Robert Noyce, a physicist and co-inventor of the integrated circuit; and Arthur Rock, an investor and venture capitalist. Moore and Noyce had left Fairchild Semiconductor, where they were part of the "traitorous eight" who founded it. There were originally 500,000 shares outstanding of which Dr. Noyce bought 245,000 shares, Dr. Moore 245,000 shares, and Mr. Rock 10,000 shares; all at $1 per share. Rock offered $2,500,000 of convertible debentures to a limited group of private investors (equivalent to $21 million in 2022), convertible at $5 per share. Just 2 years later, Intel became a public company via an initial public offering (IPO), raising $6.8 million ($23.50 per share). Intel was one of the very first companies to be listed on the then-newly established National Association of Securities Dealers Automated Quotations (NASDAQ) stock exchange. Intel's third employee was Andy Grove, a chemical engineer, who later ran the company through much of the 1980s and the high-growth 1990s. In deciding on a name, Moore and Noyce quickly rejected "Moore Noyce", near homophone for "more noise" – an ill-suited name for an electronics company, since noise in electronics is usually undesirable and typically associated with bad interference. Instead, they founded the company as NM Electronics on July 18, 1968, but by the end of the month had changed the name to Intel, which stood for Integrated Electronics. Since "Intel" was already trademarked by the hotel chain Intelco, they had to buy the rights for the name. Early history At its founding, Intel was distinguished by its ability to make logic circuits using semiconductor devices. The founders' goal was the semiconductor memory market, widely predicted to replace magnetic-core memory. Its first product, a quick entry into the small, high-speed memory market in 1969, was the 3101 Schottky TTL bipolar 64-bit static random-access memory (SRAM), which was nearly twice as fast as earlier Schottky diode implementations by Fairchild and the Electrotechnical Laboratory in Tsukuba, Japan. In the same year, Intel also produced the 3301 Schottky bipolar 1024-bit read-only memory (ROM) and the first commercial metal–oxide–semiconductor field-effect transistor (MOSFET) silicon gate SRAM chip, the 256-bit 1101. While the 1101 was a significant advance, its complex static cell structure made it too slow and costly for mainframe memories. The three-transistor cell implemented in the first commercially available dynamic random-access memory (DRAM), the 1103 released in 1970, solved these issues. The 1103 was the bestselling semiconductor memory chip in the world by 1972, as it replaced core memory in many applications. 
Intel's business grew during the 1970s as it expanded and improved its manufacturing processes and produced a wider range of products, still dominated by various memory devices. Intel created the first commercially available microprocessor, the Intel 4004, in 1971. The microprocessor represented a notable advance in the technology of integrated circuitry, as it miniaturized the central processing unit of a computer, which then made it possible for small machines to perform calculations that in the past only very large machines could do. Considerable technological innovation was needed before the microprocessor could actually become the basis of what was first known as a "mini computer" and then known as a "personal computer". Intel also created one of the first microcomputers in 1973. Intel opened its first international manufacturing facility in 1972, in Malaysia, which would host multiple Intel operations, before opening assembly facilities and semiconductor plants in Singapore and Jerusalem in the early 1980s, and manufacturing and development centers in China, India, and Costa Rica in the 1990s. By the early 1980s, its business was dominated by DRAM chips. However, increased competition from Japanese semiconductor manufacturers had, by 1983, dramatically reduced the profitability of this market. The growing success of the IBM personal computer, based on an Intel microprocessor, was among factors that convinced Gordon Moore (CEO since 1975) to shift the company's focus to microprocessors and to change fundamental aspects of that business model. Moore's decision to sole-source Intel's 386 chip played into the company's continuing success. By the end of the 1980s, buoyed by its fortuitous position as microprocessor supplier to IBM and IBM's competitors within the rapidly growing personal computer market, Intel embarked on a 10-year period of unprecedented growth as the primary and most profitable hardware supplier to the PC industry, part of the winning 'Wintel' combination. Moore handed over his position as CEO to Andy Grove in 1987. By launching its Intel Inside marketing campaign in 1991, Intel was able to associate brand loyalty with consumer selection, so that by the end of the 1990s, its line of Pentium processors had become a household name. Challenges to dominance (2000s) After 2000, growth in demand for high-end microprocessors slowed. Competitors, most notably AMD (Intel's largest competitor in its primary x86 architecture market), garnered significant market share, initially in low-end and mid-range processors but ultimately across the product range, and Intel's dominant position in its core market was greatly reduced, mostly due to the controversial NetBurst microarchitecture. In the early 2000s, then-CEO Craig Barrett attempted to diversify the company's business beyond semiconductors, but few of these activities were ultimately successful. Litigation Intel had also for a number of years been embroiled in litigation. U.S. law did not initially recognize intellectual property rights related to microprocessor topology (circuit layouts), until the Semiconductor Chip Protection Act of 1984, a law sought by Intel and the Semiconductor Industry Association (SIA). During the late 1980s and 1990s (after this law was passed), Intel also sued companies that tried to develop competing chips to the 80386 CPU. The lawsuits were noted to significantly burden the competition with legal bills, even if Intel lost the suits. 
Antitrust allegations had been simmering since the early 1990s and had been the cause of one lawsuit against Intel in 1991. In 2004 and 2005, AMD brought further claims against Intel related to unfair competition. Reorganization and success with Intel Core (2005–2015) In 2005, CEO Paul Otellini reorganized the company to refocus its core processor and chipset business on platforms (enterprise, digital home, digital health, and mobility). On June 6, 2005, Steve Jobs, then CEO of Apple, announced that Apple would be using Intel's x86 processors for its Macintosh computers, switching from the PowerPC architecture developed by the AIM alliance. This was seen as a win for Intel; an analyst called the move "risky" and "foolish", as Intel's current offerings at the time were considered to be behind those of AMD and IBM. In 2006, Intel unveiled its Core microarchitecture to widespread critical acclaim; the product range was perceived as an exceptional leap in processor performance that at a stroke regained much of its leadership of the field. In 2008, Intel had another "tick" when it introduced the Penryn microarchitecture, fabricated using the 45 nm process node. Later that year, Intel released a processor with the Nehalem architecture to positive reception. On June 27, 2006, the sale of Intel's XScale assets was announced. Intel agreed to sell the XScale processor business to Marvell Technology Group for an estimated $600 million and the assumption of unspecified liabilities. The move was intended to permit Intel to focus its resources on its core x86 and server businesses, and the acquisition completed on November 9, 2006. In 2008, Intel spun off key assets of a solar startup business effort to form an independent company, SpectraWatt Inc. In 2011, SpectraWatt filed for bankruptcy. In February 2011, Intel began to build a new microprocessor manufacturing facility in Chandler, Arizona, completed in 2013 at a cost of $5 billion. The building is now the 10 nm-certified Fab 42 and is connected to the other Fabs (12, 22, 32) on Ocotillo Campus via an enclosed bridge known as the Link. The company produces three-quarters of its products in the United States, although three-quarters of its revenue come from overseas. The Alliance for Affordable Internet (A4AI) was launched in October 2013 and Intel is part of the coalition of public and private organizations that also includes Facebook, Google, and Microsoft. Led by Sir Tim Berners-Lee, the A4AI seeks to make Internet access more affordable so that access is broadened in the developing world, where only 31% of people are online. Google will help to decrease Internet access prices so that they fall below the UN Broadband Commission's worldwide target of 5% of monthly income. Attempts at entering the smartphone market In April 2011, Intel began a pilot project with ZTE Corporation to produce smartphones using the Intel Atom processor for China's domestic market. In December 2011, Intel announced that it reorganized several of its business units into a new mobile and communications group that would be responsible for the company's smartphone, tablet, and wireless efforts. Intel planned to introduce Medfield – a processor for tablets and smartphones – to the market in 2012, as an effort to compete with Arm. As a 32-nanometer processor, Medfield is designed to be energy-efficient, which is one of the core features in Arm's chips. At the Intel Developers Forum (IDF) 2011 in San Francisco, Intel's partnership with Google was announced. 
In January 2012, Google announced Android 2.3, supporting Intel's Atom microprocessor. In 2013, Intel's Kirk Skaugen said that Intel's exclusive focus on Microsoft platforms was a thing of the past and that they would now support all "tier-one operating systems" such as Linux, Android, iOS, and Chrome. In 2014, Intel cut thousands of employees in response to "evolving market trends", and offered to subsidize manufacturers for the extra costs involved in using Intel chips in their tablets. In April 2016, Intel cancelled the SoFIA platform and the Broxton Atom SoC for smartphones, effectively leaving the smartphone market. Intel custom foundry Finding itself with excess fab capacity after the failure of the Ultrabook to gain market traction and with PC sales declining, in 2013 Intel reached a foundry agreement to produce chips for Altera using a 14 nm process. General Manager of Intel's custom foundry division Sunit Rikhi indicated that Intel would pursue further such deals in the future. This was after poor sales of Windows 8 hardware caused a major retrenchment for most of the major semiconductor manufacturers, except for Qualcomm, which continued to see healthy purchases from its largest customer, Apple. As of July 2013, five companies were using Intel's fabs via the Intel Custom Foundry division: Achronix, Tabula, Netronome, Microsemi, and Panasonic. Most are field-programmable gate array (FPGA) makers, but Netronome designs network processors. Only Achronix began shipping chips made by Intel using the 22 nm Tri-Gate process. Several other customers also exist but were not announced at the time. The foundry business was closed in 2018 due to Intel's issues with its manufacturing. Security and manufacturing challenges (2016–2021) Intel continued its tick-tock model of a microarchitecture change followed by a die shrink until the 6th-generation Core family based on the Skylake microarchitecture. This model was deprecated in 2016, with the release of the 7th-generation Core family (codenamed Kaby Lake), ushering in the process–architecture–optimization model. As Intel struggled to shrink their process node from 14 nm to 10 nm, processor development slowed down and the company continued to use the Skylake microarchitecture until 2020, albeit with optimizations. 10 nm process node issues While Intel originally planned to introduce 10 nm products in 2016, it later became apparent that there were manufacturing issues with the node. The first microprocessor under that node, Cannon Lake (marketed as 8th-generation Core), was released in small quantities in 2018. The company first delayed the mass production of their 10 nm products to 2017. They later delayed mass production to 2018, and then to 2019. Despite rumors of the process being cancelled, Intel finally introduced mass-produced 10 nm 10th-generation Intel Core mobile processors (codenamed "Ice Lake") in September 2019. Intel later acknowledged that their strategy to shrink to 10 nm was too aggressive. While other foundries used up to four steps in 10 nm or 7 nm processes, the company's 10 nm process required up to five or six multi-pattern steps. In addition, Intel's 10 nm process is denser than its counterpart processes from other foundries. Since Intel's microarchitecture and process node development were coupled, processor development stagnated. 
Security flaws

In early January 2018, it was reported that all Intel processors made since 1995 (besides Intel Itanium and pre-2013 Intel Atom) had been subject to two security flaws dubbed Meltdown and Spectre.

Renewed competition and other developments (2018–present)

Due to Intel's issues with its 10 nm process node and the company's slow processor development, Intel now found itself in a market with intense competition. Its main competitor, AMD, introduced the Zen microarchitecture and a new chiplet-based design to critical acclaim. Since its introduction, AMD, once unable to compete with Intel in the high-end CPU market, has undergone a resurgence, and Intel's dominance and market share have decreased considerably. In addition, Apple began to transition away from the x86 architecture and Intel processors to its own Apple silicon for its Macintosh computers in 2020. The transition was expected to affect Intel minimally; however, it might prompt other PC manufacturers to reevaluate their reliance on Intel and the x86 architecture.

'IDM 2.0' strategy

On March 23, 2021, CEO Pat Gelsinger laid out new plans for the company. These include a new strategy, called IDM 2.0, that encompasses investments in manufacturing facilities, use of both internal and external foundries, and a new foundry business called Intel Foundry Services (IFS), a standalone business unit. Unlike Intel Custom Foundry, IFS would offer a combination of packaging and process technology, and Intel's IP portfolio, including x86 cores. Other plans for the company include a partnership with IBM and a new event for developers and engineers, called "Intel ON". Gelsinger also confirmed that Intel's 7 nm process was on track and that the first products using that process (also known as Intel 4) would be Ponte Vecchio and Meteor Lake.

In January 2022, Intel reportedly selected New Albany, Ohio, near Columbus, as the site for a major new manufacturing facility. The facility will cost at least $20 billion, and the company expects it to begin producing chips by 2025. The same year, Intel also chose Magdeburg, Germany, as the site for two new chip mega-factories costing €17 billion (topping Tesla's investment in Brandenburg). The start of construction was initially planned for 2023 but has been postponed to late 2024, while the start of production is planned for 2027. Including subcontractors, this would create 10,000 new jobs. In August 2022, Intel signed a $30 billion partnership with Brookfield Asset Management to fund its recent factory expansions. As part of the deal, Intel would have a controlling stake by funding 51% of the cost of building new chip-making facilities in Chandler, with Brookfield owning the remaining 49% stake, allowing the companies to split the revenue from those facilities.

On January 31, 2023, as part of $3 billion in cost reductions, Intel announced pay cuts affecting employees above midlevel, ranging from 5% upwards. It also suspended bonuses and merit pay increases, while reducing retirement plan matching. These cost reductions followed layoffs announced in the fall of 2022. In October 2023, Intel confirmed it would be the first commercial user of a high-NA EUV lithography tool, as part of its plan to regain process leadership from TSMC. In August 2024, following a below-expectations Q2 earnings announcement, Intel announced "significant actions to reduce our costs.
We plan to deliver $10 billion in cost savings in 2025, and this includes reducing our head count by roughly 15,000 roles, or 15% of our workforce." The announcement followed a $1.6 billion loss posted for the quarter; to reach the savings goal, the company said it would offer early retirement and voluntary departure options.

In December 2023, Intel unveiled Gaudi 3, an artificial intelligence (AI) chip for generative AI software, intended to launch in 2024 and compete with rival chips from Nvidia and AMD. On June 4, 2024, Intel announced AI chips for data centers, the Xeon 6 processor, aiming for better performance and power efficiency compared with its predecessor. Intel also said its Gaudi 2 and Gaudi 3 AI accelerators were more cost-effective than competitors' offerings. Additionally, Intel disclosed architecture details for its Lunar Lake processors for AI PCs, which were released on September 24, 2024.

On November 1, 2024, it was announced that Intel would drop out of the Dow Jones Industrial Average on November 8, prior to the stock market open, with Nvidia taking its place.

In December 2024, Intel's CEO Pat Gelsinger was ousted amid ongoing struggles to revitalize the company, which had seen a significant decline in stock value during his tenure. Gelsinger's resignation, effective December 1, followed a board meeting where directors expressed dissatisfaction with the slow progress of his ambitious turnaround strategy. Despite efforts to enhance Intel's manufacturing capabilities and compete with rivals like AMD and Nvidia, the company faced mounting challenges, including a $16.6 billion loss and a 60% drop in share price since Gelsinger's appointment in 2021. Following his departure, Intel appointed David Zinsner and Michelle Johnston Holthaus as interim co-CEOs while searching for a permanent successor. Gelsinger's exit underscored the turmoil at Intel as it grappled with an identity crisis and sought to regain its position in the semiconductor industry.

Product and market history

SRAMs, DRAMs, and the microprocessor

Intel's first products were shift register memory and random-access memory integrated circuits, and Intel grew to be a leader in the fiercely competitive DRAM, SRAM, and ROM markets throughout the 1970s. Concurrently, Intel engineers Marcian Hoff, Federico Faggin, Stanley Mazor, and Masatoshi Shima invented Intel's first microprocessor. Originally developed for the Japanese company Busicom to replace a number of ASICs in a calculator already produced by Busicom, the Intel 4004 was introduced to the mass market on November 15, 1971, though the microprocessor did not become the core of Intel's business until the mid-1980s. (Intel is usually given credit, with Texas Instruments, for the almost-simultaneous invention of the microprocessor.)

In 1983, at the dawn of the personal computer era, Intel's profits came under increased pressure from Japanese memory-chip manufacturers, and then-president Andy Grove focused the company on microprocessors. Grove described this transition in the book Only the Paranoid Survive. A key element of his plan was the notion, then considered radical, of becoming the single source for successors to the popular 8086 microprocessor.
Until then, the manufacture of complex integrated circuits was not reliable enough for customers to depend on a single supplier, but Grove began producing processors in three geographically distinct factories and ceased licensing the chip designs to competitors such as AMD. When the PC industry boomed in the late 1980s and 1990s, Intel was one of the primary beneficiaries.

Early x86 processors and the IBM PC

Despite the ultimate importance of the microprocessor, the 4004 and its successors the 8008 and the 8080 were never major revenue contributors at Intel. When the next processor, the 8086 (and its variant the 8088), was completed in 1978, Intel embarked on a major marketing and sales campaign for that chip nicknamed "Operation Crush", intended to win as many customers for the processor as possible. One design win was the newly created IBM PC division, though the importance of this was not fully realized at the time.

IBM introduced its personal computer in 1981, and it was rapidly successful. In 1982, Intel created the 80286 microprocessor, which, two years later, was used in the IBM PC/AT. Compaq, the first IBM PC "clone" manufacturer, produced a desktop system based on the faster 80286 processor in 1985 and in 1986 quickly followed with the first 80386-based system, beating IBM, establishing a competitive market for PC-compatible systems, and setting up Intel as a key component supplier.

In 1975, the company had started a project to develop a highly advanced 32-bit microprocessor, finally released in 1981 as the Intel iAPX 432. The project was too ambitious, the processor was never able to meet its performance objectives, and it failed in the marketplace. Intel extended the x86 architecture to 32 bits instead.

386 microprocessor

During this period Andrew Grove dramatically redirected the company, closing much of its DRAM business and directing resources to the microprocessor business. Of perhaps greater importance was his decision to "single-source" the 386 microprocessor. Prior to this, microprocessor manufacturing was in its infancy, and manufacturing problems frequently reduced or stopped production, interrupting supplies to customers. To mitigate this risk, these customers typically insisted that multiple manufacturers produce chips they could use to ensure a consistent supply. The 8080 and 8086-series microprocessors were produced by several companies, notably AMD, with which Intel had a technology-sharing contract. Grove made the decision not to license the 386 design to other manufacturers, instead producing it in three geographically distinct factories: Santa Clara, California; Hillsboro, Oregon; and Chandler, a suburb of Phoenix, Arizona. He convinced customers that this would ensure consistent delivery. In doing this, Intel breached its contract with AMD, which sued and was paid millions of dollars in damages but could no longer manufacture new Intel CPU designs. (Instead, AMD started to develop and manufacture its own competing x86 designs.)

As the success of Compaq's Deskpro 386 established the 386 as the dominant CPU choice, Intel achieved a position of near-exclusive dominance as its supplier. Profits from this funded rapid development of both higher-performance chip designs and higher-performance manufacturing capabilities, propelling Intel to a position of unquestioned leadership by the early 1990s.
486, Pentium, and Itanium

Intel introduced the 486 microprocessor in 1989, and in 1990 established a second design team, designing the processors code-named "P5" and "P6" in parallel and committing to a major new processor every two years, versus the four or more years such designs had previously taken. The P5 project was earlier known as "Operation Bicycle", referring to the cycles of the processor through two parallel execution pipelines. The P5 was introduced in 1993 as the Intel Pentium, substituting a registered trademark name for the former part number. (Numbers, such as 486, cannot be legally registered as trademarks in the United States.) The P6 followed in 1995 as the Pentium Pro and was improved into the Pentium II in 1997. New architectures were developed alternately in Santa Clara, California, and Hillsboro, Oregon.

The Santa Clara design team embarked in 1993 on a successor to the x86 architecture, codenamed "P7". The first attempt was dropped a year later but quickly revived in a cooperative program with Hewlett-Packard engineers, though Intel soon took over primary design responsibility. The resulting implementation of the IA-64 64-bit architecture was the Itanium, finally introduced in June 2001. The Itanium's performance running legacy x86 code did not meet expectations, and it failed to compete effectively with x86-64, AMD's 64-bit extension of the 32-bit x86 architecture (for which Intel uses the name Intel 64, previously EM64T). In 2017, Intel announced that the Itanium 9700 series (Kittson) would be the last Itanium chips produced. The Hillsboro team designed the Willamette processors (initially code-named P68), which were marketed as the Pentium 4.

During this period, Intel undertook two major supporting advertising campaigns. The first campaign, the 1991 "Intel Inside" marketing and branding campaign, is widely known and has become synonymous with Intel itself. The idea of "ingredient branding" was new at the time, with only NutraSweet and a few others making attempts to do so. One of the key architects of the marketing team was David House, head of the microprocessor division, who coined the slogan "Intel Inside". This campaign established Intel, which had been a component supplier little known outside the PC industry, as a household name.

The second campaign, by Intel's Systems Group, began in the early 1990s and showcased manufacturing of PC motherboards, the main board component of a personal computer and the one into which the processor (CPU) and memory (RAM) chips are plugged. The Systems Group campaign was less well known than the Intel Inside campaign. Shortly afterwards, Intel began manufacturing fully configured "white box" systems for the dozens of PC clone companies that rapidly sprang up. At its peak in the mid-1990s, Intel manufactured over 15% of all PCs, making it the third-largest supplier at the time.

During the 1990s, Intel Architecture Labs (IAL) was responsible for many of the hardware innovations for the PC, including the PCI bus, the PCI Express (PCIe) bus, and Universal Serial Bus (USB). IAL's software efforts met with a more mixed fate; its video and graphics software was important in the development of software digital video, but its efforts were later largely overshadowed by competition from Microsoft. The competition between Intel and Microsoft was revealed in testimony by then-IAL vice-president Steven McGeady at the Microsoft antitrust trial (United States v. Microsoft Corp.).
Pentium flaw

In June 1994, Intel engineers discovered a flaw in the floating-point math subsection of the P5 Pentium microprocessor. Under certain data-dependent conditions, the low-order bits of the result of a floating-point division would be incorrect, and the error could compound in subsequent calculations. Intel corrected the error in a future chip revision and, under public pressure, issued a total recall and replaced the defective Pentium CPUs (which were limited to some 60, 66, 75, 90, and 100 MHz models) on customer request.

The bug was discovered independently in October 1994 by Thomas Nicely, a professor of mathematics at Lynchburg College. He contacted Intel but received no response. On October 30, he posted a message about his finding on the Internet. Word of the bug spread quickly and reached the industry press. The bug was easy to replicate; a user could enter specific numbers into the calculator on the operating system (an illustrative check using such numbers is sketched further below). Consequently, many users did not accept Intel's statements that the error was minor and "not even an erratum". During Thanksgiving 1994, The New York Times ran a piece by journalist John Markoff spotlighting the error. Intel changed its position and offered to replace every chip, quickly putting in place a large end-user support organization. This resulted in a $475 million charge against Intel's 1994 revenue. Nicely later learned that Intel had discovered the FDIV bug in its own testing a few months before him but had decided not to inform customers.

The "Pentium flaw" incident, Intel's response to it, and the surrounding media coverage propelled Intel from being a technology supplier generally unknown to most computer users to a household name. Dovetailing with an uptick in the "Intel Inside" campaign, the episode is considered to have been a positive event for Intel, changing some of its business practices to be more end-user focused and generating substantial public awareness, while avoiding a lasting negative impression.

Intel Core

The Intel Core line originated from the original Core brand, with the release of the 32-bit Yonah CPU, Intel's first dual-core mobile (low-power) processor. Derived from the Pentium M, the processor family used an enhanced version of the P6 microarchitecture. Its successor, the Core 2 family, was released on July 27, 2006. This was based on the Intel Core microarchitecture and was a 64-bit design. Instead of focusing on higher clock rates, the Core microarchitecture emphasized power efficiency and a return to lower clock speeds. It also provided more efficient decoding stages, execution units, caches, and buses, reducing the power consumption of Core 2-branded CPUs while increasing their processing capacity.

In November 2008, Intel released the 1st-generation Core processors based on the Nehalem microarchitecture. Intel also introduced a new naming scheme, with the three variants now named Core i3, i5, and i7 (as well as i9 from the 7th generation onwards). Unlike the previous naming scheme, these names no longer correspond to specific technical features. Nehalem was succeeded by the Westmere microarchitecture in 2010, with a die shrink to 32 nm and the inclusion of Intel HD Graphics.

In 2011, Intel released the Sandy Bridge-based 2nd-generation Core processor family, which featured an 11% performance increase over Nehalem. It was succeeded by the Ivy Bridge-based 3rd generation, introduced at the 2012 Intel Developer Forum. Ivy Bridge featured a die shrink to 22 nm and supported both DDR3 memory and DDR3L chips.
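Returning to the Pentium FDIV flaw described above, the check referenced there can be sketched as follows, using the widely circulated operands from Nicely's report. The Python formulation is illustrative only and is not code from Intel or from the original reports; on a correct divider the residual is exactly zero, while flawed Pentiums were reported to return 256.

    # Pentium FDIV illustration: divide, multiply back, and compare.
    # A correct FPU gives a quotient of about 1.3338204491, so the
    # residual below is exactly 0.0; flawed P5 Pentiums computed
    # roughly 1.3337390689, and the same expression returned 256.
    x, y = 4195835.0, 3145727.0
    quotient = x / y
    residual = x - quotient * y
    print(f"quotient = {quotient:.10f}")
    print(f"residual = {residual}")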
Intel continued its tick-tock model of a microarchitecture change followed by a die shrink until the 6th-generation Core family based on the Skylake microarchitecture. The model was deprecated in 2016, with the release of the 7th-generation Core family based on Kaby Lake, ushering in the process–architecture–optimization model. From 2016 until 2021, Intel released further optimizations of the Skylake microarchitecture: Kaby Lake R, Amber Lake, Whiskey Lake, Coffee Lake, Coffee Lake R, and Comet Lake. Intel struggled to shrink its process node from 14 nm to 10 nm, with the first microarchitecture under that node, Cannon Lake (marketed as 8th-generation Core), only being released in small quantities in 2018.

In 2019, Intel released the 10th generation of Core processors, codenamed "Amber Lake", "Comet Lake", and "Ice Lake". Ice Lake, based on the Sunny Cove microarchitecture, was produced on the 10 nm process and was limited to low-power mobile processors. Both Amber Lake and Comet Lake were based on a refined 14 nm node, with the latter used for desktop and high-performance mobile products and the former for low-power mobile products.

In September 2020, 11th-generation Core mobile processors, codenamed Tiger Lake, were launched. Tiger Lake is based on the Willow Cove microarchitecture and a refined 10 nm node. Intel later released 11th-generation Core desktop processors (codenamed "Rocket Lake"), fabricated using Intel's 14 nm process and based on the Cypress Cove microarchitecture, on March 30, 2021; they replaced Comet Lake desktop processors. All 11th-generation Core processors feature new integrated graphics based on the Intel Xe microarchitecture.

Both desktop and mobile products were unified under a single process node with the release of 12th-generation Intel Core processors (codenamed "Alder Lake") in late 2021. This generation is fabricated using Intel's 10 nm process, called Intel 7, for both desktop and mobile processors, and is based on a hybrid architecture utilizing high-performance Golden Cove cores and high-efficiency Gracemont (Atom) cores.

Transient execution CPU vulnerability

Use of Intel products by Apple Inc. (2005–2019)

On June 6, 2005, Steve Jobs, then CEO of Apple, announced that Apple would be transitioning the Macintosh from its long-favored PowerPC architecture to the Intel x86 architecture, because the future PowerPC road map was unable to satisfy Apple's needs. This was seen as a win for Intel, although an analyst called the move "risky" and "foolish", as Intel's offerings at the time were considered to be behind those of AMD and IBM. The first Mac computers containing Intel CPUs were announced on January 10, 2006, and Apple had its entire line of consumer Macs running on Intel processors by early August 2006. The Apple Xserve server was updated to Intel Xeon processors from November 2006 and was offered in a configuration similar to Apple's Mac Pro.

Despite Apple's use of Intel products, relations between the two companies were strained at times. Rumors of Apple switching from Intel processors to its own designs began circulating as early as 2011. On June 22, 2020, during Apple's annual WWDC, Tim Cook, Apple's CEO, announced that Apple would be transitioning the company's entire Mac line from Intel CPUs to custom Apple-designed processors based on the Arm architecture over the course of the next two years.
In the short term, this transition was estimated to have minimal effects on Intel, as Apple only accounted for 2% to 4% of its revenue. However, at the time it was believed that Apple's shift to its own chips might prompt other PC manufacturers to reassess their reliance on Intel and the x86 architecture. By November 2020, Apple unveiled the M1, its processor custom-designed for the Mac.

Solid-state drives (SSDs)

In 2008, Intel began shipping mainstream solid-state drives (SSDs) with up to 160 GB storage capacities. As with its CPUs, Intel developed its SSD chips using ever-smaller nanometer processes. These SSDs make use of industry standards such as NAND flash, mSATA, PCIe, and NVMe. In 2017, Intel introduced SSDs based on 3D XPoint technology under the Optane brand name. In 2021, SK Hynix acquired most of Intel's NAND memory business for $7 billion, with a remaining transaction worth $2 billion expected in 2025. Intel also discontinued its consumer Optane products in 2021. In July 2022, Intel disclosed in its Q2 earnings report that it would cease future product development within its Optane business, which in turn effectively discontinued the development of 3D XPoint as a whole.

Supercomputers

The Intel Scientific Computers division was founded in 1984 by Justin Rattner to design and produce parallel computers based on Intel microprocessors connected in a hypercube internetwork topology. In 1992, the name was changed to the Intel Supercomputing Systems Division, and development of the iWarp architecture was also subsumed. The division designed several supercomputer systems, including the Intel iPSC/1, iPSC/2, iPSC/860, Paragon, and ASCI Red. In November 2014, Intel stated that it was planning to use optical fibers to improve networking within supercomputers.

Fog computing

On November 19, 2015, Intel, alongside Arm, Dell, Cisco Systems, Microsoft, and Princeton University, founded the OpenFog Consortium to promote interests and development in fog computing. Intel's chief strategist for the IoT Strategy and Technology Office, Jeff Fedders, became the consortium's first president.

Self-driving cars

Intel is one of the biggest stakeholders in the self-driving car industry, having entered the race in mid-2017 after joining forces with Mobileye. The company is also one of the first in the sector to research consumer acceptance, after an AAA report quoted a 78% nonacceptance rate of the technology in the U.S. The safety of autonomous driving technology, the thought of abandoning control to a machine, and the psychological comfort of passengers in such situations were the major discussion topics initially. Participants also stated that they did not want to see everything the car was doing, referring primarily to the steering wheel turning with no one in the driver's seat. Intel also learned that voice control is vital, and that the interface between humans and the machine eases the discomfort and restores some sense of control. Intel included only 10 people in the study, which limits how much can be concluded from it; in a video posted on YouTube, Intel acknowledged this and called for further testing.

Programmable devices

Intel formed a new business unit called the Programmable Solutions Group (PSG) on completion of its Altera acquisition. Intel has since sold Stratix, Arria, and Cyclone FPGAs. In 2019, Intel released Agilex FPGAs: chips aimed at data centers, 5G applications, and other uses.
In October 2023, Intel announced it would be spinning off PSG into a separate company at the start of 2024, while maintaining majority ownership.

Competition, antitrust, and espionage

By the end of the 1990s, microprocessor performance had outstripped software demand for that CPU power. Aside from high-end server systems and software, whose demand dropped with the end of the "dot-com bubble", consumer systems ran effectively on increasingly low-cost hardware after 2000. Intel's strategy of releasing ever-faster processors in quick succession, as with the Pentium II in May 1997, the Pentium III in February 1999, and the Pentium 4 in the fall of 2000, became less effective because consumers no longer saw each performance step as essential, leaving an opportunity for rapid gains by competitors, notably AMD. This, in turn, lowered the profitability of the processor line and ended an era of unprecedented dominance of PC hardware by Intel.

Intel's dominance in the x86 microprocessor market led to numerous charges of antitrust violations over the years, including FTC investigations in both the late 1980s and in 1999, and civil actions such as the 1997 suit by Digital Equipment Corporation (DEC) and a patent suit by Intergraph. Intel's market dominance (at one time it controlled over 85% of the market for 32-bit x86 microprocessors), combined with Intel's own hardball legal tactics (such as its infamous 338 patent suit versus PC manufacturers), made it an attractive target for litigation, culminating in Intel agreeing in 2009 to pay AMD $1.25 billion and grant it a perpetual patent cross-license, as well as several antitrust judgements in Europe, Korea, and Japan.

A case of industrial espionage arose in 1995 that involved both Intel and AMD. Bill Gaede, an Argentine formerly employed both at AMD and at Intel's Arizona plant, was arrested for attempting in 1993 to sell the i486 and P5 Pentium designs to AMD and to certain foreign powers. Gaede videotaped data from his computer screen at Intel and mailed it to AMD, which immediately alerted Intel and the authorities, resulting in Gaede's arrest. Gaede was convicted and sentenced to 33 months in prison in June 1996.

Corporate affairs

Business trends

Leadership and corporate structure

Robert Noyce was Intel's CEO at its founding in 1968, followed by co-founder Gordon Moore in 1975. Andy Grove became the company's president in 1979 and added the CEO title in 1987 when Moore became chairman. In 1998, Grove succeeded Moore as chairman, and Craig Barrett, already company president, took over as CEO. On May 18, 2005, Barrett handed the reins of the company over to Paul Otellini, who had been the company president and COO and who had been responsible for Intel's design win in the original IBM PC. The board of directors elected Otellini as president and CEO, and Barrett replaced Grove as chairman of the board. Grove stepped down as chairman but was retained as a special adviser. In May 2009, Barrett stepped down as chairman of the board and was succeeded by Jane Shaw. In May 2012, Intel vice chairman Andy Bryant, who had held the posts of CFO (1994) and chief administrative officer (2007) at Intel, succeeded Shaw as executive chairman.

In November 2012, president and CEO Paul Otellini announced that he would step down in May 2013 at the age of 62, three years before the company's mandatory retirement age.
During a six-month transition period, Intel's board of directors commenced a search for the next CEO, in which it considered both internal managers and external candidates such as Sanjay Jha and Patrick Gelsinger. Financial results revealed that, under Otellini, Intel's revenue increased by 55.8% (from US$34.2 billion to $53.3 billion), while its net income increased by 46.7% (from US$7.5 billion to $11 billion).

On May 2, 2013, executive vice president and COO Brian Krzanich was elected as Intel's sixth CEO, a selection that became effective on May 16, 2013, at the company's annual meeting. Reportedly, the board concluded that an insider could take up the role and make an impact more quickly, without needing to learn Intel's processes, and Krzanich was selected on that basis. Intel's software head Renée James was selected as president of the company, a role second to the CEO position.

As of May 2013, Intel's board of directors consisted of Andy Bryant, John Donahoe, Frank Yeary, Ambassador Charlene Barshefsky, Susan Decker, Reed Hundt, Paul Otellini, James Plummer, David Pottruck, David Yoffie, and creative director will.i.am. The board was described by former Financial Times journalist Tom Foremski as "an exemplary example of corporate governance of the highest order" and received a rating of ten from GovernanceMetrics International, a form of recognition that has only been awarded to twenty-one other corporate boards worldwide.

On June 21, 2018, Intel announced the resignation of Brian Krzanich as CEO, following the disclosure of a relationship he had had with an employee. Bob Swan was named interim CEO as the board began a search for a permanent CEO. On January 31, 2019, Swan transitioned from his role as CFO and interim CEO and was named by the board as the seventh CEO to lead the company. On January 13, 2021, Intel announced that Swan would be replaced as CEO by Pat Gelsinger, effective February 15. Gelsinger is a former Intel chief technology officer who had previously been head of VMware. In March 2021, Intel removed the mandatory retirement age for its corporate officers. In October 2023, Intel announced it would be spinning off its Programmable Solutions Group business unit into a separate company at the start of 2024, while maintaining majority ownership and intending to seek an IPO within three years to raise funds.

On December 1, 2024, Pat Gelsinger retired from the position of Intel CEO and stepped down from the company's board of directors. David Zinsner and Michelle Johnston Holthaus were named interim co-CEOs.

Ownership

The 10 largest shareholders of Intel as of December 2023 were:

Vanguard Group (9.12% of shares)
BlackRock (8.04%)
State Street (4.45%)
Capital International (2.29%)
Geode Capital Management (2.01%)
Primecap (1.78%)
Capital Research Global Investors (1.63%)
Morgan Stanley (1.18%)
Norges Bank (1.14%)
Northern Trust (1.05%)

Board of directors:

Frank D. Yeary (chairman), managing member of Darwin Capital
James Goetz, managing director of Sequoia Capital
Andrea Goldsmith, dean of engineering and applied science at Princeton University
Alyssa Henry, Square, Inc. executive
Omar Ishrak, chairman and former CEO of Medtronic
Risa Lavizzo-Mourey, former president and CEO of the Robert Wood Johnson Foundation
Tsu-Jae King Liu, professor at the UC Berkeley College of Engineering
Barbara G. Novick, co-founder of BlackRock
Gregory Smith, CFO of Boeing
Dion Weisler, former president and CEO of HP Inc.
Lip-Bu Tan, executive chairman of Cadence Design Systems

Employment

Prior to March 2021, Intel had a mandatory retirement policy for its CEOs when they reached age 65. Andy Grove retired at 62, while both Robert Noyce and Gordon Moore retired at 58. Grove retired as chairman and as a member of the board of directors in 2005 at age 68.

Intel's headquarters are located in Santa Clara, California, and the company has operations around the world. Its largest workforce concentration anywhere is in Washington County, Oregon (in the Portland metropolitan area's "Silicon Forest"), with 18,600 employees at several facilities. Outside the United States, the company has facilities in China, Costa Rica, Malaysia, Israel, Ireland, India, Russia, Argentina, and Vietnam, with a presence in 63 countries and regions internationally. In March 2022, Intel stopped supplying the Russian market because of international sanctions during the Russo-Ukrainian War. In the U.S., Intel employs significant numbers of people in California, Colorado, Massachusetts, Arizona, New Mexico, Oregon, Texas, Washington, and Utah. In Oregon, Intel is the state's largest private employer. The company is the largest industrial employer in New Mexico, while in Arizona it had 12,000 employees as of January 2020.

Intel invests heavily in research in China, and about 100 researchers, or 10% of the total number of researchers at Intel, are located in Beijing.

In 2011, the Israeli government offered Intel $290 million to expand in the country. As a condition, Intel would employ 1,500 more workers in Kiryat Gat and between 600 and 1,000 workers in the north. In January 2014, it was reported that Intel would cut about 5,000 jobs from its workforce of 107,000. The announcement was made a day after it reported earnings that missed analyst targets. In March 2014, it was reported that Intel would embark upon a $6 billion plan to expand its activities in Israel. The plan calls for continued investment in existing and new Intel plants until 2030. Intel employs 10,000 workers at four development centers and two production plants in Israel.

Due to declining PC sales, in 2016 Intel cut 12,000 jobs. In 2021, Intel reversed course under new CEO Pat Gelsinger and started hiring thousands of engineers.

Diversity

Intel has a Diversity Initiative, including employee diversity groups as well as a supplier diversity program. Like many companies with employee diversity groups, they include groups based on race and nationality as well as sexual identity and religion. In 1994, Intel sanctioned one of the earliest corporate Gay, Lesbian, Bisexual, and Transgender employee groups, and it supports a Muslim employees group, a Jewish employees group, and a Bible-based Christian group.

Intel has received a 100% rating on numerous Corporate Equality Indices released by the Human Rights Campaign, including the first one released in 2002. In addition, the company is frequently named one of the 100 Best Companies for Working Mothers by Working Mother magazine.

In January 2015, Intel announced an investment of $300 million over the next five years to enhance gender and racial diversity in the company and in the technology industry as a whole. In February 2016, Intel released its Global Diversity & Inclusion 2015 Annual Report. The male-female mix of US employees was reported as 75.2% men and 24.8% women. For US employees in technical roles, the mix was reported as 79.8% male and 20.1% female.
NPR reports that Intel is facing a retention problem (particularly for African Americans), not just a pipeline problem.

Economic impact in Oregon in 2009

In 2011, ECONorthwest conducted an economic impact analysis of Intel's economic contribution to the state of Oregon. The report found that in 2009 "the total economic impacts attributed to Intel's operations, capital spending, contributions and taxes amounted to almost $14.6 billion in activity, including $4.3 billion in personal income and 59,990 jobs". Through multiplier effects, every 10 Intel jobs were found to support, on average, 31 jobs in other sectors of the economy.

Supply chain

Intel has been addressing supply base reduction as an issue since the mid-1980s, adopting an "n + 1" rule of thumb: the number of suppliers required to maintain production levels for each component is determined, and at most one additional supplier is engaged for that component.

Intel Israel

Intel has been operating in the State of Israel since Dov Frohman founded the Israeli branch of the company in 1974 in a small office in Haifa. Intel Israel currently has development centers in Haifa, Jerusalem, and Petah Tikva, and has a manufacturing plant in the Kiryat Gat industrial park that develops and manufactures microprocessors and communications products. Intel employed about 10,000 people in Israel in 2013. Maxine Fesberg was the CEO of Intel Israel from 2007 and a vice president of Intel. In December 2016, Fesberg announced her resignation, and Yaniv Gerti has filled the position of chief executive officer (CEO) since January 2017.

In June 2024, the company announced that it was stopping development of a Kiryat Gat-based factory in Israel. The site was expected to cost $25 billion, with $3.2 billion provided by the Israeli government in the form of a grant.

Key acquisitions and investments (2010–present)

In 2010, Intel purchased McAfee, a manufacturer of computer security technology, for $7.68 billion. As a condition for regulatory approval of the transaction, Intel agreed to provide rival security firms with all necessary information that would allow their products to use Intel's chips and personal computers. After the acquisition, Intel had about 90,000 employees, including about 12,000 software engineers. In September 2016, Intel sold a majority stake in its computer-security unit to TPG Capital, reversing the five-year-old McAfee acquisition.

In August 2010, Intel and Infineon Technologies announced that Intel would acquire Infineon's Wireless Solutions business. Intel planned to use Infineon's technology in laptops, smartphones, netbooks, tablets, and embedded computers in consumer products, eventually integrating its wireless modem into Intel's silicon chips.

In March 2011, Intel bought most of the assets of Cairo-based SySDSoft. In July 2011, Intel announced that it had agreed to acquire Fulcrum Microsystems Inc., a company specializing in network switches; the company had been included on the EE Times list of 60 Emerging Startups. In October 2011, Intel reached a deal to acquire Telmap, an Israeli-based navigation software company. The purchase price was not disclosed, but Israeli media reported values around $300 million to $350 million.
In July 2012, Intel agreed to buy 10% of the shares of ASML Holding NV for $2.1 billion, with another $1 billion for a further 5% of shares subject to shareholder approval, to fund relevant research and development efforts as part of a €3.3 billion ($4.1 billion) deal to accelerate the development of 450-millimeter wafer technology and extreme ultraviolet lithography by as much as two years.

In July 2013, Intel confirmed the acquisition of Omek Interactive, an Israeli company that makes technology for gesture-based interfaces, without disclosing the monetary value of the deal. An official statement from Intel read: "The acquisition of Omek Interactive will help increase Intel's capabilities in the delivery of more immersive perceptual computing experiences." One report estimated the value of the acquisition at between US$30 million and $50 million.

The acquisition of Indisys, a Spanish natural language recognition startup, was announced in September 2013. The terms of the deal were not disclosed, but an email from an Intel representative stated: "Intel has acquired Indisys, a privately held company based in Seville, Spain. The majority of Indisys employees joined Intel. We signed the agreement to acquire the company on May 31 and the deal has been completed." Indisys explains that its artificial intelligence (AI) technology "is a human image, which converses fluently and with common sense in multiple languages and also works in different platforms".

In December 2014, Intel bought PasswordBox. In January 2015, Intel purchased a 30% stake in Vuzix, a smart glasses manufacturer; the deal was worth $24.8 million. In February 2015, Intel announced its agreement to purchase German network chipmaker Lantiq, to aid in expanding its range of chips for devices with Internet connection capability. In June 2015, Intel announced its agreement to purchase FPGA design company Altera for $16.7 billion, its largest acquisition to date; the acquisition was completed in December 2015. In October 2015, Intel bought cognitive computing company Saffron Technology for an undisclosed price.

In August 2016, Intel purchased deep-learning startup Nervana Systems for over $400 million. In December 2016, Intel acquired computer vision startup Movidius for an undisclosed price. In March 2017, Intel announced that it had agreed to purchase Mobileye, an Israeli developer of "autonomous driving" systems, for US$15.3 billion. In June 2017, Intel Corporation announced an investment in its upcoming Research and Development (R&D) centre in Bangalore, India. In January 2019, Intel announced an investment of over $11 billion in a new Israeli chip plant, as told by the Israeli finance minister.

In November 2021, Intel recruited some of the employees of the Centaur Technology division from VIA Technologies, in a deal worth $125 million, effectively acquiring the talent and know-how of their x86 division. VIA retained the x86 licence and associated patents, and its Zhaoxin CPU joint venture continues.

In December 2021, Intel said it would invest $7.1 billion to build a new chip-packaging and testing factory in Malaysia. The new investment will expand the operations of its Malaysian subsidiary across Penang and Kulim, creating more than 4,000 new Intel jobs and more than 5,000 local construction jobs. The same month, Intel announced its plan to take its Mobileye automotive unit public via an IPO of newly issued stock in 2022, maintaining its majority ownership of the company.
In February 2022, Intel agreed to acquire Israeli chip manufacturer Tower Semiconductor for $5.4 billion. In August 2023, Intel terminated the acquisition after it failed to obtain approval from Chinese regulators within the 18-month transaction deadline.

In May 2022, Intel announced that it had acquired Finnish graphics technology firm Siru Innovations. The firm, founded by former AMD and Qualcomm mobile GPU engineers, focuses on developing software and silicon building blocks for GPUs made by other companies, and joined Intel's fledgling Accelerated Computing Systems and Graphics Group. In May 2022, it was also announced that Ericsson and Intel were pooling resources to launch a tech hub in California focused on the research and development of cloud RAN technology. The hub focuses on improving Ericsson Cloud RAN and Intel technology, including improving energy efficiency and network performance, reducing time to market, and monetizing new business opportunities such as enterprise applications.

Ultrabook fund (2011)

In 2011, Intel Capital announced a new fund to support startups working on technologies in line with the company's concept for next-generation notebooks. The company set aside a $300 million fund to be spent over the following three to four years in areas related to ultrabooks. Intel announced the ultrabook concept at Computex in 2011. The ultrabook is defined as a thin (less than 0.8 inches [~2 cm] thick) notebook that utilizes Intel processors and also incorporates tablet features such as a touch screen and long battery life. At the Intel Developer Forum in 2011, four Taiwanese ODMs showed prototype ultrabooks that used Intel's Ivy Bridge chips. Intel planned to improve the power consumption of its chips for ultrabooks, such as the new Ivy Bridge processors in 2013, which would have a default thermal design power of only 10 W. Intel's price goal for ultrabooks was below $1,000; however, according to two presidents from Acer and Compal, this goal would not be achieved if Intel did not lower the price of its chips.

Open source support

Intel has participated significantly in open source communities since 1999. For example, in 2006 Intel released MIT-licensed X.org drivers for the integrated graphics of its i965 family of chipsets. Intel released FreeBSD drivers for some networking cards, available under a BSD-compatible license, which were also ported to OpenBSD. Binary firmware files for non-wireless Ethernet devices were also released under a BSD licence allowing free redistribution. Intel ran the Moblin project until April 23, 2009, when it handed the project over to the Linux Foundation. Intel also ran the LessWatts.org campaign.

However, after the release of the wireless products called Intel PRO/Wireless 2100, 2200BG/2225BG/2915ABG, and 3945ABG in 2005, Intel was criticized for not granting free redistribution rights for the firmware that must be included in the operating system for the wireless devices to operate. As a result, Intel became a target of campaigns to allow free operating systems to include binary firmware on terms acceptable to the open source community. Linspire-Linux creator Michael Robertson outlined the difficult position Intel was in when releasing to open source, as Intel did not want to upset its large customer Microsoft. Theo de Raadt of OpenBSD also claimed that Intel was being "an Open Source fraud" after an Intel employee presented a distorted view of the situation at an open source conference.
In spite of the significant negative attention Intel received as a result of the wireless dealings, the binary firmware still has not gained a license compatible with free software principles. Intel has also supported other open source projects such as Blender and Open 3D Engine.

Corporate identity

Logo

Throughout its history, Intel has had three logos. The first Intel logo, introduced in April 1969, featured the company's name stylized in all lowercase, with the letter "e" dropped below the other letters. The second logo, introduced on January 3, 2006, was inspired by the "Intel Inside" campaign, featuring a swirl around the Intel brand name. The third logo, introduced on September 2, 2020, was inspired by the previous logos. It removes the swirl as well as the classic blue color in almost all parts of the logo, except for the dot in the "i".

Intel Inside

Intel has become one of the world's most recognizable computer brands following its long-running Intel Inside campaign. The idea for "Intel Inside" came out of a meeting between Intel and one of the major computer resellers, MicroAge. In the late 1980s, Intel's market share was being seriously eroded by upstart competitors such as AMD, Zilog, and others who had started to sell their less expensive microprocessors to computer manufacturers. This was because, by using cheaper processors, manufacturers could make cheaper computers and gain more market share in an increasingly price-sensitive market. In 1989, Intel's Dennis Carter visited MicroAge's headquarters in Tempe, Arizona, to meet with MicroAge's VP of marketing, Ron Mion. MicroAge had become one of the largest distributors of Compaq, IBM, HP, and others and thus was a primary, although indirect, driver of demand for microprocessors. Intel wanted MicroAge to petition its computer suppliers to favor Intel chips. However, Mion felt that the marketplace should decide which processors it wanted. Intel's counterargument was that it would be too difficult to educate PC buyers on why Intel microprocessors were worth paying more for. Mion felt that the public did not really need to fully understand why Intel chips were better; they just needed to feel they were better. So Mion proposed a market test: Intel would pay for a MicroAge billboard somewhere saying, "If you're buying a personal computer, make sure it has Intel inside." In turn, MicroAge would put "Intel Inside" stickers on the Intel-based computers in its stores in that area. To make the test easier to monitor, Mion decided to run it in Boulder, Colorado, where MicroAge had a single store. Virtually overnight, the sales of personal computers in that store shifted dramatically to Intel-based PCs. Intel very quickly adopted "Intel Inside" as its primary branding and rolled it out worldwide.

As is often the case with computer lore, other tidbits have been combined to explain how things evolved, and "Intel Inside" has not escaped that tendency; other "explanations" have circulated. Intel's branding campaign started with "The Computer Inside" tagline in 1990 in the U.S. and Europe. The Japan chapter of Intel proposed an "Intel in it" tagline and kicked off the Japanese campaign by hosting EKI-KON (meaning "Station Concert" in Japanese) at the Tokyo railway station dome on Christmas Day, December 25, 1990. Several months later, "The Computer Inside" incorporated the Japanese idea to become "Intel Inside", which was eventually elevated to the worldwide branding campaign in 1991 by Intel marketing manager Dennis Carter.
A case study, "Inside Intel Inside", was put together by Harvard Business School. The five-note jingle was introduced in 1994, and by its tenth anniversary it was being heard in 130 countries around the world. The initial branding agency for the "Intel Inside" campaign was DahlinSmithWhite Advertising of Salt Lake City. The Intel swirl logo was the work of DahlinSmithWhite art director Steve Grigg under the direction of Intel president and CEO Andy Grove.

The Intel Inside advertising campaign sought public brand loyalty and awareness of Intel processors in consumer computers. Intel paid some of the advertiser's costs for an ad that used the Intel Inside logo and xylo-marimba jingle. In 2008, Intel planned to shift the emphasis of its Intel Inside campaign from traditional media such as television and print to newer media such as the Internet. Intel required that a minimum of 35% of the money it provided to the companies in its co-op program be used for online marketing. The Intel 2010 annual financial report indicated that $1.8 billion (6% of the gross margin and nearly 16% of the total net income) was allocated to all advertising, with Intel Inside being part of that.

Intel jingle

The D–D–G–D–A xylophone/marimba jingle, known as the "Intel bong" and used in Intel advertising, was produced by Musikvergnuegen and written by Walter Werzowa, once a member of the Austrian 1980s sampling band Edelweiss. The Intel jingle was made in 1994 to coincide with the launch of the Pentium. It was modified in 1999 to coincide with the launch of the Pentium III, although it overlapped with the 1994 version, which was phased out in 2004. Advertisements for products featuring Intel processors with prominent MMX branding featured a version of the jingle with an embellishment (a shining sound) after the final note.

The jingle was remade a second time in 2004 to coincide with the new logo change. Again, it overlapped with the 1999 version and was not mainstreamed until the launch of the Core processors in 2006, with the melody unchanged. Another remake of the jingle debuted with Intel's new visual identity; the company has made use of numerous variants since its rebranding in 2020 (while retaining the mainstream 2006 version).

Processor naming strategy

In 2006, Intel expanded its promotion of open specification platforms beyond Centrino to include the Viiv media center PC and the business desktop Intel vPro. In mid-January 2006, Intel announced that it was dropping the long-running Pentium name from its processors. The Pentium name was first used to refer to the P5-core Intel processors; it was adopted to comply with court rulings that prevent the trademarking of a string of numbers, so that competitors could not simply call their processors by the same name, as had been done with the prior 386 and 486 processors (both of which had copies manufactured by IBM and AMD). Intel phased out the Pentium name from mobile processors first, when the new Yonah chips, branded Core Solo and Core Duo, were released. The desktop processors changed when the Core 2 line of processors was released. By 2009, Intel was using a good–better–best strategy, with Celeron being good, Pentium better, and the Intel Core family representing the best the company had to offer. According to spokesman Bill Calder, Intel maintained only the Celeron brand, the Atom brand for netbooks, and the vPro lineup for businesses.
Since late 2009, Intel's mainstream processors have been called Celeron, Pentium, Core i3, Core i5, Core i7, and Core i9, in order of performance from lowest to highest. The 1st-generation Core products carry a three-digit name, such as i5-750, and the 2nd-generation products carry a four-digit name, such as i5-2500; from the 10th generation onwards, desktop processors carry a five-digit name, such as i9-10900K. In all cases, a 'K' suffix indicates an unlocked processor, enabling additional overclocking abilities (for instance, 2500K). vPro products carry the Intel Core i7 vPro processor or the Intel Core i5 vPro processor name. In October 2011, Intel started to sell its Core i7-2700K "Sandy Bridge" chip to customers worldwide. Since 2010, "Centrino" has been applied only to Intel's WiMAX and Wi-Fi technologies.

In 2022, Intel announced that it was dropping the Pentium and Celeron naming schemes for its desktop and laptop entry-level processors; the "Intel Processor" branding replaced the old Pentium and Celeron names starting in 2023. In 2023, Intel announced that it would drop the 'i' in its future processor markings: products such as Core i9 are now called Core 9, and higher-end processors carry an "Ultra" designation, such as Core Ultra 9.

Typography

Neo Sans Intel is a customized version of Neo Sans based on Neo Sans and Neo Tech, designed by Sebastian Lester in 2004. It was introduced alongside Intel's rebranding in 2006. Previously, Intel used Helvetica as its standard typeface in corporate marketing.

Intel Clear is a global font announced in 2014, designed to be used across all communications. The font family was designed by Red Peek Branding and Dalton Maag. Initially available in Latin, Greek, and Cyrillic scripts, it replaced Neo Sans Intel as the company's corporate typeface; Intel Clear Hebrew and Intel Clear Arabic were later added by Dalton Maag Ltd. Neo Sans Intel remained in the logo and continued to be used to mark processor type and socket on the packaging of Intel's processors.

In 2020, as part of a new visual identity, a new typeface, Intel One, was designed. It replaced Intel Clear as the font used by the company in most of its branding, although it is used alongside the Intel Clear typeface, and in the logo it replaced Neo Sans Intel. Neo Sans Intel is, however, still used to mark processor type and socket on the packaging of Intel's processors.

Intel Brand Book

The Intel Brand Book is a book produced by Red Peak Branding as part of Intel's new brand identity campaign, celebrating the company's achievements while setting the new standard for what Intel looks, feels, and sounds like.

Charity

In November 2014, Intel designed a Paddington Bear statue, themed "Little Bear Blue", one of fifty statues created by various celebrities and companies that were located around London. Created prior to the release of the film Paddington, the Intel-designed statue was located outside Framestore in Chancery Lane, London, a British visual-effects company which uses Intel technology for films including Paddington. The statues were then auctioned to raise funds for the National Society for the Prevention of Cruelty to Children (NSPCC).

Sponsorships

Intel sponsors the Intel Extreme Masters, a series of international esports tournaments. It was also a sponsor of the Formula One teams BMW Sauber and Scuderia Ferrari, together with AMD, AT&T, Pernod Ricard, Diageo, and Vodafone. In 2013, Intel became a sponsor of FC Barcelona.
In 2017, Intel became a sponsor of the Olympic Games, lasting from the 2018 Winter Olympics to the 2024 Summer Olympics. In 2024, Intel and Riot Games had an annual sponsorship valued at US$5 million, and one with JD Gaming for US$3.3 million. The company also had a sponsorship with Global Esports.

Litigations and regulatory disputes

Patent infringement litigation (2006–2007)

In October 2006, a Transmeta lawsuit was filed against Intel for patent infringement covering computer architecture and power efficiency technologies. The lawsuit was settled in October 2007, with Intel agreeing to pay US$150 million initially and US$20 million per year for the next five years. Both companies agreed to drop lawsuits against each other, while Intel was granted a perpetual non-exclusive license to use current and future patented Transmeta technologies in its chips for 10 years.

Antitrust allegations and litigation (2005–2023)

In September 2005, Intel filed a response to an AMD lawsuit, disputing AMD's claims and asserting that Intel's business practices were fair and lawful. In a rebuttal, Intel deconstructed AMD's offensive strategy and argued that AMD struggled largely as a result of its own bad business decisions, including underinvestment in essential manufacturing capacity and excessive reliance on contracting out chip foundries. Legal analysts predicted the lawsuit would drag on for a number of years, since Intel's initial response indicated its unwillingness to settle with AMD. In 2008, a court date was finally set.

On November 4, 2009, New York's attorney general filed an antitrust lawsuit against Intel Corp, claiming the company used "illegal threats and collusion" to dominate the market for computer microprocessors. On November 12, 2009, AMD agreed to drop its antitrust lawsuit against Intel in exchange for $1.25 billion. A joint press release published by the two chip makers stated: "While the relationship between the two companies has been difficult in the past, this agreement ends the legal disputes and enables the companies to focus all of our efforts on product innovation and development." An antitrust lawsuit and a class-action suit relating to cold-calling employees of other companies have been settled.

Allegations by Japan Fair Trade Commission (2005)

In 2005, the Japan Fair Trade Commission found that Intel had violated the Japanese Antimonopoly Act. The commission ordered Intel to eliminate discounts that had discriminated against AMD. To avoid a trial, Intel agreed to comply with the order.

Allegations by regulators in South Korea (2007)

In September 2007, South Korean regulators accused Intel of breaking antitrust law. The investigation began in February 2006, when officials raided Intel's South Korean offices. The company risked a penalty of up to 3% of its annual sales if found guilty. In June 2008, the Fair Trade Commission ordered Intel to pay a fine of US$25.5 million for taking advantage of its dominant position to offer incentives to major Korean PC manufacturers on the condition that they not buy products from AMD.

Allegations by regulators in the United States (2008–2010)

New York started an investigation of Intel in January 2008 into whether the company violated antitrust laws in the pricing and sales of its microprocessors. In June 2008, the Federal Trade Commission also began an antitrust investigation of the case. In December 2009, the FTC announced it would initiate an administrative proceeding against Intel in September 2010.
In November 2009, following a two-year investigation, New York Attorney General Andrew Cuomo sued Intel, accusing it of bribery and coercion: the suit claimed that Intel bribed computer makers to buy more of its chips than those of its rivals and threatened to withdraw these payments if the computer makers were perceived as working too closely with its competitors. Intel denied these claims. On July 22, 2010, Dell agreed to a settlement with the U.S. Securities and Exchange Commission (SEC) to pay $100 million in penalties resulting from charges that Dell did not accurately disclose accounting information to investors. In particular, the SEC charged that from 2002 to 2006, Dell had an agreement with Intel to receive rebates in exchange for not using chips manufactured by AMD. These substantial rebates were not disclosed to investors, but were used to help meet investor expectations regarding the company's financial performance; "These exclusivity payments grew from 10% of Dell's operating income in FY 2003 to 38% in FY 2006, and peaked at 76% in the first quarter of FY 2007." Dell eventually did adopt AMD as a secondary supplier in 2006, and Intel subsequently stopped its rebates, causing Dell's financial performance to fall. Allegations by the European Union (2007–2023) In July 2007, the European Commission accused Intel of anti-competitive practices, mostly against AMD. The allegations, going back to 2003, included giving preferential prices to computer makers buying most or all of their chips from Intel, paying computer makers to delay or cancel the launch of products using AMD chips, and providing chips at below standard cost to governments and educational institutions. Intel responded that the allegations were unfounded and instead qualified its market behavior as consumer-friendly. General counsel Bruce Sewell responded that the commission had misunderstood some factual assumptions regarding pricing and manufacturing costs. In February 2008, Intel announced that its office in Munich had been raided by European Union regulators. Intel reported that it was cooperating with investigators. Intel faced a fine of up to 10% of its annual revenue if found guilty of stifling competition. AMD subsequently launched a website promoting these allegations. In June 2008, the EU filed new charges against Intel. In May 2009, the EU found that Intel had engaged in anti-competitive practices and subsequently fined Intel €1.06 billion (US$1.44 billion), a record amount. Intel was found to have paid companies, including Acer, Dell, HP, Lenovo and NEC, to exclusively use Intel chips in their products, thereby harming other, less successful companies, including AMD. The European Commission said that Intel had deliberately acted to keep competitors out of the computer chip market and in doing so had made a "serious and sustained violation of the EU's antitrust rules". In addition to the fine, Intel was ordered by the commission to immediately cease all illegal practices. Intel said that it would appeal against the commission's verdict. In June 2014, the General Court, which sits below the European Court of Justice, rejected the appeal. In 2022, the €1.06 billion fine was annulled, but a reduced fine of €376.36 million was subsequently imposed in September 2023. Corporate responsibility record Intel has been accused by some residents of Rio Rancho, New Mexico of allowing volatile organic compounds (VOCs) to be released in excess of its pollution permit.
One resident claimed that a release of 1.4 tons of carbon tetrachloride was measured from one acid scrubber during the fourth quarter of 2003, but that an emission factor allowed Intel to report no carbon tetrachloride emissions for all of 2003. Another resident alleged that Intel was responsible for the release of other VOCs from its Rio Rancho site and that a necropsy of lung tissue from two deceased dogs in the area indicated trace amounts of toluene, hexane, ethylbenzene, and xylene isomers, all of which are solvents used in industrial settings but also commonly found in gasoline, retail paint thinners and retail solvents. During a sub-committee meeting of the New Mexico Environment Improvement Board, a resident claimed that Intel's own reports documented substantial releases of VOCs in June and July 2006. Intel's environmental performance is published annually in its corporate responsibility report. Conflict-free production In 2009, Intel announced that it planned to undertake an effort to remove conflict resources—materials sourced from mines whose profits are used to fund armed militant groups, particularly within the Democratic Republic of the Congo—from its supply chain. Intel sought conflict-free sources of the precious metals common to electronics from within the country, using a system of first- and third-party audits, as well as input from the Enough Project and other organizations. During a keynote address at the 2014 Consumer Electronics Show, Intel's CEO at the time, Brian Krzanich, announced that the company's microprocessors would henceforth be conflict-free. In 2016, Intel stated that it expected its entire supply chain to be conflict-free by the end of the year. In its 2012 rankings on the progress of consumer electronics companies relating to conflict minerals, the Enough Project rated Intel the best of 24 companies, calling it a "Pioneer of progress". In 2014, chief executive Brian Krzanich urged the rest of the industry to follow Intel's lead by also shunning conflict minerals. Age discrimination complaints Intel has faced complaints of age discrimination in firing and layoffs. Intel was sued in 1993 by nine former employees over allegations that they were laid off because they were over the age of 40. A group called FACE Intel (Former and Current Employees of Intel) claims that Intel weeds out older employees. FACE Intel claims that more than 90% of people who have been laid off or fired from Intel are over the age of 40. Upside magazine requested data from Intel breaking out its hiring and firing by age, but the company declined to provide any. Intel has denied that age plays any role in Intel's employment practices. FACE Intel was founded by Ken Hamidi, who was fired from Intel in 1995 at the age of 47. Hamidi was blocked in a 1999 court decision from using Intel's email system to distribute criticism of the company to employees; the decision was overturned in 2003 in Intel Corp. v. Hamidi. Tax dispute in India In August 2016, Indian officials of the Bruhat Bengaluru Mahanagara Palike (BBMP) parked garbage trucks on Intel's campus and threatened to dump their contents there over the company's evasion of property taxes between 2007 and 2008. Intel had reportedly been paying taxes as a non-air-conditioned office, when the campus in fact had central air conditioning. Other factors, such as land acquisition and construction improvements, added to the tax burden.
Intel had previously appealed the demand in the Karnataka High Court in July; the court ordered Intel to pay BBMP half of the owed amount plus arrears by August 28 of that year. Product issues Recalls Pentium FDIV bug Security vulnerabilities Transient execution CPU vulnerability Instability issues Raptor Lake See also 5 nm process ASCI Red Bumpless Build-up Layer Comparison of ATI graphics processing units Comparison of Intel processors Comparison of Nvidia graphics processing units Cyrix Engineering sample (CPU) Graphics processing unit (GPU) Intel Developer Zone (Intel DZ) Intel Driver Update Utility Intel GMA (Graphics Media Accelerator) Intel HD and Iris Graphics Intel Level Up Intel Loihi Intel Museum Intel Science Talent Search List of Intel chipsets List of Intel CPU microarchitectures List of Intel manufacturing sites List of mergers and acquisitions by Intel List of semiconductor fabrication plants Intel Management Engine Intel-related biographical articles on Wikipedia Bill Gaede Bob Colwell Justin Rattner Sean Maloney Notes References External links 1968 establishments in California 1970s initial public offerings American companies established in 1968 Companies based in Santa Clara, California Companies in the Dow Jones Global Titans 50 Companies listed on the Nasdaq Computer companies established in 1968 Computer companies of the United States Computer hardware companies Computer memory companies Computer storage companies Computer systems companies Former components of the Dow Jones Industrial Average Foundry semiconductor companies Graphics hardware companies Linux companies Manufacturing companies based in the San Francisco Bay Area Manufacturing companies established in 1968 Mobile phone manufacturers Motherboard companies Multinational companies headquartered in the United States Semiconductor companies of the United States Software companies based in the San Francisco Bay Area Software companies established in 1968 Software companies of the United States Superfund sites in California Technology companies of the United States Technology companies based in the San Francisco Bay Area Technology companies established in 1968
Intel
[ "Technology" ]
18,516
[ "Computer systems companies", "Computer systems" ]
14,624
https://en.wikipedia.org/wiki/Inorganic%20chemistry
Inorganic chemistry deals with the synthesis and behavior of inorganic and organometallic compounds. This field covers chemical compounds that are not carbon-based; carbon-based compounds are the subject of organic chemistry. The distinction between the two disciplines is far from absolute, as there is much overlap in the subdiscipline of organometallic chemistry. It has applications in every aspect of the chemical industry, including catalysis, materials science, pigments, surfactants, coatings, medications, fuels, and agriculture. Occurrence Many inorganic compounds are found in nature as minerals. Soil may contain iron sulfide as pyrite or calcium sulfate as gypsum. Inorganic compounds also serve multiple roles as biomolecules: as electrolytes (sodium chloride), in energy storage (ATP) or in construction (the polyphosphate backbone in DNA). Bonding Inorganic compounds exhibit a range of bonding properties. Some are ionic compounds, consisting of very simple cations and anions joined by ionic bonding. Examples of salts (which are ionic compounds) are magnesium chloride MgCl2, which consists of magnesium cations Mg2+ and chloride anions Cl−; or sodium hydroxide NaOH, which consists of sodium cations Na+ and hydroxide anions OH−. Some inorganic compounds are highly covalent, such as sulfur dioxide and iron pentacarbonyl. Many inorganic compounds feature polar covalent bonding, which is a form of bonding intermediate between covalent and ionic bonding. This description applies to many oxides, carbonates, and halides. Many inorganic compounds are characterized by high melting points. Some salts (e.g., NaCl) are very soluble in water. When one reactant contains hydrogen atoms, a reaction can take place by exchanging protons in acid-base chemistry. In a more general definition, any chemical species capable of binding to electron pairs is called a Lewis acid; conversely, any molecule that tends to donate an electron pair is referred to as a Lewis base. As a refinement of acid-base interactions, the HSAB theory takes into account the polarizability and size of ions. Subdivisions of inorganic chemistry Subdivisions of inorganic chemistry are numerous, but include: organometallic chemistry, compounds with metal-carbon bonds. This area touches on organic synthesis, which employs many organometallic catalysts and reagents. cluster chemistry, compounds with several metals bound together with metal–metal bonds or bridging ligands. bioinorganic chemistry, biomolecules that contain metals. This area touches on medicinal chemistry. materials chemistry and solid state chemistry, extended (i.e. polymeric) solids exhibiting properties not seen for simple molecules. Many practical themes are associated with these areas, including ceramics. Industrial inorganic chemistry Inorganic chemistry is a highly practical area of science. Traditionally, the scale of a nation's economy could be evaluated by its productivity of sulfuric acid. An important man-made inorganic compound is ammonium nitrate, used as a fertilizer. The ammonia is produced through the Haber process. Nitric acid is prepared from the ammonia by oxidation. Another large-scale inorganic material is portland cement. Inorganic compounds are used as catalysts such as vanadium(V) oxide for the oxidation of sulfur dioxide and titanium(III) chloride for the polymerization of alkenes. Many inorganic compounds are used as reagents in organic chemistry such as lithium aluminium hydride.
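As a concrete illustration of the industrial steps just mentioned, the ammonia and nitric acid syntheses are conventionally written as follows (standard textbook equations for the Haber and Ostwald processes, not drawn from this article):

\[
\mathrm{N_2 + 3\,H_2 \;\rightleftharpoons\; 2\,NH_3}
\]
\[
\mathrm{4\,NH_3 + 5\,O_2 \;\rightarrow\; 4\,NO + 6\,H_2O}, \qquad
\mathrm{2\,NO + O_2 \;\rightarrow\; 2\,NO_2}, \qquad
\mathrm{3\,NO_2 + H_2O \;\rightarrow\; 2\,HNO_3 + NO}
\]

The first equilibrium is driven toward ammonia by high pressure and an iron catalyst; the subsequent oxidation steps, carried out over a platinum-rhodium catalyst, constitute the Ostwald route from ammonia to nitric acid.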
Descriptive inorganic chemistry Descriptive inorganic chemistry focuses on the classification of compounds based on their properties. The classification focuses partly on the position in the periodic table of the heaviest element (the element with the highest atomic weight) in the compound, and partly on grouping compounds by their structural similarities. Coordination compounds Classical coordination compounds feature metals bound to "lone pairs" of electrons residing on the main group atoms of ligands such as H2O, NH3, Cl−, and CN−. In modern coordination compounds, almost all organic and inorganic compounds can be used as ligands. The "metal" usually is a metal from groups 3–13, as well as the trans-lanthanides and trans-actinides, but from a certain perspective, all chemical compounds can be described as coordination complexes. The stereochemistry of coordination complexes can be quite rich, as hinted at by Werner's separation of two enantiomers of [Co((OH)2Co(NH3)4)3]6+, an early demonstration that chirality is not inherent to organic compounds. A topical theme within this specialization is supramolecular coordination chemistry. Examples: [Co(EDTA)]−, [Co(NH3)6]3+, TiCl4(THF)2. Coordination compounds show a rich diversity of structures, varying from tetrahedral for titanium (e.g., TiCl4) to square planar for some nickel complexes to octahedral for coordination complexes of cobalt. A range of transition metals can be found in biologically important compounds, such as iron in hemoglobin. Examples: iron pentacarbonyl, titanium tetrachloride, cisplatin Main group compounds These species feature elements from groups I, II, III, IV, V, VI, VII, 0 (excluding hydrogen) of the periodic table. Due to their often similar reactivity, the elements in group 3 (Sc, Y, and La) and group 12 (Zn, Cd, and Hg) are also generally included, and the lanthanides and actinides are sometimes included as well. Main group compounds have been known since the beginnings of chemistry, e.g., elemental sulfur and the distillable white phosphorus. Experiments on oxygen, O2, by Lavoisier and Priestley not only identified an important diatomic gas, but opened the way for describing compounds and reactions according to stoichiometric ratios. The discovery of a practical synthesis of ammonia using iron catalysts by Carl Bosch and Fritz Haber in the early 1900s deeply impacted mankind, demonstrating the significance of inorganic chemical synthesis. Typical main group compounds are SiO2, SnCl4, and N2O. Many main group compounds can also be classed as "organometallic", as they contain organic groups, e.g., B(CH3)3. Main group compounds also occur in nature, e.g., phosphate in DNA, and therefore may be classed as bioinorganic. Conversely, organic compounds lacking (many) hydrogen ligands can be classed as "inorganic", such as the fullerenes, buckytubes and binary carbon oxides. Examples: tetrasulfur tetranitride S4N4, diborane B2H6, silicones, buckminsterfullerene C60. Noble gas compounds include several derivatives of xenon and krypton. Examples: xenon hexafluoride XeF6, xenon trioxide XeO3, and krypton difluoride KrF2 Organometallic compounds Usually, organometallic compounds are considered to contain the M-C-H group. The metal (M) in these species can either be a main group element or a transition metal. Operationally, the definition of an organometallic compound is more relaxed to include also highly lipophilic complexes such as metal carbonyls and even metal alkoxides.
Organometallic compounds are mainly considered a special category because organic ligands are often sensitive to hydrolysis or oxidation, necessitating more specialized preparative methods than were traditional for Werner-type complexes. Synthetic methodology, especially the ability to manipulate complexes in solvents of low coordinating power, enabled the exploration of very weakly coordinating ligands such as hydrocarbons, H2, and N2. Because the ligands are petrochemicals in some sense, the area of organometallic chemistry has greatly benefited from its relevance to industry. Examples: cyclopentadienyliron dicarbonyl dimer [(C5H5)Fe(CO)2]2, ferrocene Fe(C5H5)2, molybdenum hexacarbonyl Mo(CO)6, triethylborane Et3B, Tris(dibenzylideneacetone)dipalladium(0) Pd2(dba)3 Cluster compounds Clusters can be found in all classes of chemical compounds. According to the commonly accepted definition, a cluster consists minimally of a triangular set of atoms that are directly bonded to each other. But metal–metal bonded dimetallic complexes are highly relevant to the area. Clusters occur in "pure" inorganic systems, organometallic chemistry, main group chemistry, and bioinorganic chemistry. The distinction between very large clusters and bulk solids is increasingly blurred. This interface is the chemical basis of nanoscience or nanotechnology and specifically arises from the study of quantum size effects in cadmium selenide clusters. Thus, large clusters can be described as an array of bound atoms intermediate in character between a molecule and a solid. Examples: Fe3(CO)12, B10H14, [Mo6Cl14]2−, 4Fe-4S Bioinorganic compounds By definition, these compounds occur in nature, but the subfield includes anthropogenic species, such as pollutants (e.g., methylmercury) and drugs (e.g., cisplatin). The field, which incorporates many aspects of biochemistry, includes many kinds of compounds, e.g., the phosphates in DNA, and also metal complexes containing ligands that range from biological macromolecules, commonly peptides, to ill-defined species such as humic acid, and to water (e.g., coordinated to gadolinium complexes employed for MRI). Traditionally, bioinorganic chemistry focuses on electron- and energy-transfer in proteins relevant to respiration. Medicinal inorganic chemistry includes the study of both non-essential and essential elements with applications to diagnosis and therapies. Examples: hemoglobin, methylmercury, carboxypeptidase Solid state compounds This important area focuses on structure, bonding, and the physical properties of materials. In practice, solid state inorganic chemistry uses techniques such as crystallography to gain an understanding of the properties that result from collective interactions between the subunits of the solid. Included in solid state chemistry are metals and their alloys or intermetallic derivatives. Related fields are condensed matter physics, mineralogy, and materials science. Examples: silicon chips, zeolites, YBa2Cu3O7 Spectroscopy and magnetism In contrast to most organic compounds, many inorganic compounds are magnetic and/or colored. These properties provide information on the bonding and structure. The magnetism of inorganic compounds can be complex. For example, most copper(II) compounds are paramagnetic but CuII2(OAc)4(H2O)2 is almost diamagnetic below room temperature. The explanation is magnetic coupling between pairs of Cu(II) sites in the acetate.
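A standard way to connect such magnetic measurements to electronic structure (a textbook relation, not specific to this article) is the spin-only estimate of the effective magnetic moment for a centre with n unpaired electrons:

\[
\mu_{\text{so}} = \sqrt{n(n+2)}\,\mu_{\mathrm{B}}
\]

A d9 Cu(II) ion (n = 1) is therefore expected to show roughly 1.73 Bohr magnetons, so a moment near zero below room temperature, as in the copper(II) acetate dimer, points to antiferromagnetic coupling of the two Cu(II) spins.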
Qualitative theories Inorganic chemistry has greatly benefited from qualitative theories. Such theories are easier to learn as they require little background in quantum theory. Within main group chemistry, VSEPR theory powerfully predicts, or at least rationalizes, the structures of main group compounds, such as an explanation for why NH3 is pyramidal whereas ClF3 is T-shaped. For the transition metals, crystal field theory allows one to understand the magnetism of many simple complexes, such as why [FeIII(CN)6]3− has only one unpaired electron, whereas [FeIII(H2O)6]3+ has five. A particularly powerful qualitative approach to assessing structure and reactivity begins with classifying molecules according to electron counting, focusing on the numbers of valence electrons, usually at the central atom in a molecule. Molecular symmetry group theory Molecular symmetry, as embodied in group theory, is a central construct in chemistry. Inorganic compounds display particularly diverse symmetries, so it is logical that group theory is intimately associated with inorganic chemistry. Group theory provides the language to describe the shapes of molecules according to their point group symmetry. Group theory also enables factoring and simplification of theoretical calculations. Spectroscopic features are analyzed and described with respect to the symmetry properties of, inter alia, the vibrational or electronic states. Knowledge of the symmetry properties of the ground and excited states allows one to predict the numbers and intensities of absorptions in vibrational and electronic spectra. A classic application of group theory is the prediction of the number of C–O vibrations in substituted metal carbonyl complexes. The most common applications of symmetry to spectroscopy involve vibrational and electronic spectra. Group theory highlights commonalities and differences in the bonding of otherwise disparate species. For example, the metal-based orbitals transform identically for WF6 and W(CO)6, but the energies and populations of these orbitals differ significantly. A similar relationship exists between CO2 and molecular beryllium difluoride. Thermodynamics and inorganic chemistry An alternative quantitative approach to inorganic chemistry focuses on energies of reactions. This approach is highly traditional and empirical, but it is also useful. Broad concepts that are couched in thermodynamic terms include redox potential, acidity, and phase changes. A classic concept in inorganic thermodynamics is the Born–Haber cycle, which is used for assessing the energies of elementary processes such as electron affinity, some of which cannot be observed directly. Mechanistic inorganic chemistry An important aspect of inorganic chemistry focuses on reaction pathways, i.e. reaction mechanisms. Main group elements and lanthanides The mechanisms of main group compounds of groups 13–18 are usually discussed in the context of organic chemistry (organic compounds are main group compounds, after all). Elements heavier than C, N, O, and F often form compounds with more electrons than predicted by the octet rule, as explained in the article on hypervalent molecules. The mechanisms of their reactions differ from those of organic compounds for this reason. Elements lighter than carbon (B, Be, Li) as well as Al and Mg often form electron-deficient structures that are electronically akin to carbocations. Such electron-deficient species tend to react via associative pathways.
The chemistry of the lanthanides mirrors many aspects of chemistry seen for aluminium. Transition metal complexes Transition metal and main group compounds often react differently. The important role of d-orbitals in bonding strongly influences the pathways and rates of ligand substitution and dissociation. These themes are covered in articles on coordination chemistry and ligand. Both associative and dissociative pathways are observed. An overarching aspect of mechanistic transition metal chemistry is the kinetic lability of the complex, illustrated by the exchange of free and bound water in the prototypical complexes [M(H2O)6]n+: [M(H2O)6]n+ + 6 H2O* → [M(H2O*)6]n+ + 6 H2O where H2O* denotes isotopically enriched water, e.g., H217O. The rates of water exchange vary by 20 orders of magnitude across the periodic table, with lanthanide complexes at one extreme and Ir(III) species being the slowest. Redox reactions Redox reactions are prevalent for the transition elements. Two classes of redox reaction are considered: atom-transfer reactions, such as oxidative addition/reductive elimination, and electron-transfer. A fundamental redox reaction is "self-exchange", which involves the degenerate reaction between an oxidant and a reductant. For example, permanganate and its one-electron reduced relative manganate exchange one electron: [MnO4]− + [Mn*O4]2− → [MnO4]2− + [Mn*O4]− Reactions at ligands Coordinated ligands display reactivity distinct from the free ligands. For example, the acidity of the ammonia ligands in [Co(NH3)6]3+ is elevated relative to NH3 itself. Alkenes bound to metal cations are reactive toward nucleophiles whereas alkenes normally are not. The large and industrially important area of catalysis hinges on the ability of metals to modify the reactivity of organic ligands. Homogeneous catalysis occurs in solution and heterogeneous catalysis occurs when gaseous or dissolved substrates interact with surfaces of solids. Traditionally, homogeneous catalysis is considered part of organometallic chemistry and heterogeneous catalysis is discussed in the context of surface science, a subfield of solid state chemistry. But the basic inorganic chemical principles are the same. Transition metals, almost uniquely, react with small molecules such as CO, H2, O2, and C2H4. The industrial significance of these feedstocks drives the active area of catalysis. Ligands can also undergo ligand transfer reactions such as transmetalation. Characterization of inorganic compounds Because of the diverse range of elements and the correspondingly diverse properties of the resulting derivatives, inorganic chemistry is closely associated with many methods of analysis. Older methods tended to examine bulk properties such as the electrical conductivity of solutions, melting points, solubility, and acidity. With the advent of quantum theory and the corresponding expansion of electronic apparatus, new tools have been introduced to probe the electronic properties of inorganic molecules and solids. Often these measurements provide insights relevant to theoretical models. Commonly encountered techniques are: X-ray crystallography: This technique allows for the 3D determination of molecular structures.
Various forms of spectroscopy: Ultraviolet-visible spectroscopy: Historically, this has been an important tool, since many inorganic compounds are strongly colored. NMR spectroscopy: Besides 1H and 13C many other NMR-active nuclei (e.g., 11B, 19F, 31P, and 195Pt) can give important information on compound properties and structure. The NMR of paramagnetic species can provide important structural information. Proton (1H) NMR is also important because the light hydrogen nucleus is not easily detected by X-ray crystallography. Infrared spectroscopy: Mostly for absorptions from carbonyl ligands. Electron nuclear double resonance (ENDOR) spectroscopy Mössbauer spectroscopy Electron-spin resonance: ESR (or EPR) allows for the measurement of the environment of paramagnetic metal centres. Electrochemistry: Cyclic voltammetry and related techniques probe the redox characteristics of compounds. Synthetic inorganic chemistry Although some inorganic species can be obtained in pure form from nature, most are synthesized in chemical plants and in the laboratory. Inorganic synthetic methods can be classified roughly according to the volatility or solubility of the component reactants. Soluble inorganic compounds are prepared using methods of organic synthesis. For metal-containing compounds that are reactive toward air, Schlenk line and glove box techniques are followed. Volatile compounds and gases are manipulated in "vacuum manifolds" consisting of glass piping interconnected through valves, the entirety of which can be evacuated to 0.001 mm Hg or less. Compounds are condensed using liquid nitrogen (b.p. 77 K) or other cryogens. Solids are typically prepared using tube furnaces, the reactants and products being sealed in containers, often made of fused silica (amorphous SiO2) but sometimes more specialized materials such as welded Ta tubes or Pt "boats". Products and reactants are transported between temperature zones to drive reactions. See also Important publications in inorganic chemistry References
Inorganic chemistry
[ "Chemistry" ]
4,079
[ "nan" ]
14,627
https://en.wikipedia.org/wiki/Isaac%20Newton
Sir Isaac Newton (25 December 1642 – 20 March 1726/27) was an English polymath active as a mathematician, physicist, astronomer, alchemist, theologian, and author who was described in his time as a natural philosopher. Newton was a key figure in the Scientific Revolution and the Enlightenment that followed. Newton's book Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687, achieved the first great unification in physics and established classical mechanics. Newton also made seminal contributions to optics, and shares credit with German mathematician Gottfried Wilhelm Leibniz for formulating infinitesimal calculus, though he developed calculus years before Leibniz. He contributed to and refined the scientific method, and his work is considered the most influential in bringing forth modern science. In the Principia, Newton formulated the laws of motion and universal gravitation that formed the dominant scientific viewpoint for centuries until it was superseded by the theory of relativity. He used his mathematical description of gravity to derive Kepler's laws of planetary motion, account for tides, the trajectories of comets, the precession of the equinoxes and other phenomena, eradicating doubt about the Solar System's heliocentricity. Newton solved the two-body problem, and introduced the three-body problem. He demonstrated that the motion of objects on Earth and celestial bodies could be accounted for by the same principles. Newton's inference that the Earth is an oblate spheroid was later confirmed by the geodetic measurements of Maupertuis, La Condamine, and others, thereby convincing most European scientists of the superiority of Newtonian mechanics over earlier systems. Newton built the first reflecting telescope and developed a sophisticated theory of colour based on the observation that a prism separates white light into the colours of the visible spectrum. His work on light was collected in his influential book Opticks, published in 1704. He formulated an empirical law of cooling, which was the first heat transfer formulation and serves as the formal basis of convective heat transfer, made the first theoretical calculation of the speed of sound, and introduced the notions of a Newtonian fluid and a black body. Furthermore, he made early investigations into electricity, with an idea from his book Opticks arguably the beginning of the field theory of the electric force. In addition to his creation of calculus, as a mathematician, he generalized the binomial theorem to any real number, contributed to the study of power series, developed a method for approximating the roots of a function, classified most of the cubic plane curves, and also originated the Newton–Cotes formulas for numerical integration. He further devised an early form of regression analysis. Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge; he was appointed at the age of 26. He was a devout but unorthodox Christian who privately rejected the doctrine of the Trinity. He refused to take holy orders in the Church of England, unlike most members of the Cambridge faculty of the day. Beyond his work on the mathematical sciences, Newton dedicated much of his time to the study of alchemy and biblical chronology, but most of his work in those areas remained unpublished until long after his death.
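The root-approximation method mentioned above survives as Newton's method (the Newton–Raphson iteration). The following is a minimal modern sketch in Python, with the function names and tolerance chosen here purely for illustration rather than taken from any historical source:

```python
def newtons_method(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Approximate a root of f via the iteration x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)  # Newton correction at the current estimate
        x -= step
        if abs(step) < tol:       # stop once updates become negligible
            break
    return x

# Example: the square root of 2 as the positive root of f(x) = x^2 - 2
print(newtons_method(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))  # ~1.4142135623730951
```

Near a simple root the iteration converges quadratically, which is why the method remains a standard numerical tool.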
Politically and personally tied to the Whig party, Newton served two brief terms as Member of Parliament for the University of Cambridge, in 1689–1690 and 1701–1702. He was knighted by Queen Anne in 1705 and spent the last three decades of his life in London, serving as Warden (1696–1699) and Master (1699–1727) of the Royal Mint, in which he increased the accuracy and security of British coinage, as well as president of the Royal Society (1703–1727). Early life Isaac Newton was born (according to the Julian calendar in use in England at the time) on Christmas Day, 25 December 1642 (NS 4 January 1643) at Woolsthorpe Manor in Woolsthorpe-by-Colsterworth, a hamlet in the county of Lincolnshire. His father, also named Isaac Newton, had died three months before. Born prematurely, Newton was a small child; his mother Hannah Ayscough reportedly said that he could have fit inside a quart mug. When Newton was three, his mother remarried and went to live with her new husband, the Reverend Barnabas Smith, leaving her son in the care of his maternal grandmother, Margery Ayscough (née Blythe). Newton disliked his stepfather and maintained some enmity towards his mother for marrying him, as revealed by this entry in a list of sins committed up to the age of 19: "Threatening my father and mother Smith to burn them and the house over them." Newton's mother had three children (Mary, Benjamin, and Hannah) from her second marriage. The King's School From the age of about twelve until he was seventeen, Newton was educated at The King's School in Grantham, which taught Latin and Ancient Greek and probably imparted a significant foundation of mathematics. He was removed from school by his mother and returned to Woolsthorpe-by-Colsterworth by October 1659. His mother, widowed for the second time, attempted to make him a farmer, an occupation he hated. Henry Stokes, master at The King's School, persuaded his mother to send him back to school. Motivated partly by a desire for revenge against a schoolyard bully, he became the top-ranked student, distinguishing himself mainly by building sundials and models of windmills. University of Cambridge In June 1661, Newton was admitted to Trinity College at the University of Cambridge. His uncle the Reverend William Ayscough, who had studied at Cambridge, recommended him to the university. At Cambridge, Newton started as a subsizar, paying his way by performing valet duties until he was awarded a scholarship in 1664, which covered his university costs for four more years until the completion of his MA. At the time, Cambridge's teachings were based on those of Aristotle, whom Newton read along with then more modern philosophers, including Descartes and astronomers such as Galileo Galilei and Thomas Street. He set down in his notebook a series of "Quaestiones" about mechanical philosophy as he found it. In 1665, he discovered the generalised binomial theorem and began to develop a mathematical theory that later became calculus. Soon after Newton obtained his BA degree at Cambridge in August 1665, the university temporarily closed as a precaution against the Great Plague. Although he had been undistinguished as a Cambridge student, his private studies and the years following his bachelor's degree have been described as "the richest and most productive ever experienced by a scientist". The next two years alone saw the development of theories on calculus, optics, and the law of gravitation, at his home in Woolsthorpe. 
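The law of gravitation referred to above is usually stated today in the following modern form (the gravitational constant G is a later convention and was not measured in Newton's lifetime):

\[
F \;=\; G\,\frac{m_1 m_2}{r^2}
\]

that is, an attractive force between two masses m1 and m2 separated by a distance r, acting along the line joining them.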
Newton has been described as an "exceptionally organized" person when it came to note-taking and his work, further dog-earing pages he saw as important. Furthermore, Newton's "indexes look like present-day indexes: They are alphabetical, by topic." His books showed his interests to be wide-ranging, with Newton himself described as a "Janusian thinker, someone who could mix and combine seemingly disparate fields to stimulate creative breakthroughs." In April 1667, Newton returned to the University of Cambridge, and in October he was elected as a fellow of Trinity. Fellows were required to take holy orders and be ordained as Anglican priests, although this was not enforced in the Restoration years, and an assertion of conformity to the Church of England was sufficient. He made the commitment that "I will either set Theology as the object of my studies and will take holy orders when the time prescribed by these statutes [7 years] arrives, or I will resign from the college." Up until this point he had not thought much about religion and had twice signed his agreement to the Thirty-nine Articles, the basis of Church of England doctrine. By 1675 the issue could not be avoided, and by then his unconventional views stood in the way. His academic work impressed the Lucasian professor Isaac Barrow, who was anxious to develop his own religious and administrative potential (he became master of Trinity College two years later); in 1669, Newton succeeded him, only one year after receiving his MA. Newton argued that this should exempt him from the ordination requirement, and King Charles II, whose permission was needed, accepted this argument; thus, a conflict between Newton's religious views and Anglican orthodoxy was averted. He was appointed at the age of 26. The Lucasian Professor of Mathematics at Cambridge position included the responsibility of instructing geography. In 1672, and again in 1681, Newton published a revised, corrected, and amended edition of the Geographia Generalis, a geography textbook first published in 1650 by the then-deceased Bernhardus Varenius. In the Geographia Generalis, Varenius attempted to create a theoretical foundation linking scientific principles to classical concepts in geography, and considered geography to be a mix between science and pure mathematics applied to quantifying features of the Earth. While it is unclear if Newton ever lectured in geography, the 1733 Dugdale and Shaw English translation of the book stated Newton published the book to be read by students while he lectured on the subject. The Geographia Generalis is viewed by some as the dividing line between ancient and modern traditions in the history of geography, and Newton's involvement in the subsequent editions is thought to be a large part of the reason for this enduring legacy. Newton was elected a Fellow of the Royal Society (FRS) in 1672. Mid-life Calculus Newton's work has been said "to distinctly advance every branch of mathematics then studied". His work on the subject, usually referred to as fluxions or calculus, seen in a manuscript of October 1666, is now published among Newton's mathematical papers. His work De analysi per aequationes numero terminorum infinitas, sent by Isaac Barrow to John Collins in June 1669, was identified by Barrow in a letter sent to Collins that August as the work "of an extraordinary genius and proficiency in these things". Newton later became involved in a dispute with Leibniz over priority in the development of calculus. 
Both are now credited with independently developing calculus, though with very different mathematical notations. However, it is established that Newton came to develop calculus much earlier than Leibniz. Leibniz's notation is recognized as the more convenient notation, being adopted by continental European mathematicians, and after 1820, by British mathematicians. Historian of science A. Rupert Hall notes that while Leibniz deserves credit for his independent formulation of calculus, Newton was undoubtedly the first to develop it. Hall further notes that in Principia, Newton was able to "formulate and resolve problems by the integration of differential equations" and "in fact, he anticipated in his book many results that later exponents of the calculus regarded as their own novel achievements." It has been noted that despite the convenience of Leibniz's notation, Newton's notation could still have been used to develop multivariate techniques, with his dot notation still widely used in physics. Some academics, such as physicist Roger Penrose, have noted the richness and depth of Newton's work; Penrose states that "in most cases Newton's geometrical methods are not only more concise and elegant, they reveal deeper principles than would become evident by the use of those formal methods of calculus that nowadays would seem more direct." Mathematician Vladimir Arnold states "Comparing the texts of Newton with the comments of his successors, it is striking how Newton's original presentation is more modern, more understandable and richer in ideas than the translation due to commentators of his geometrical ideas into the formal language of the calculus of Leibniz." His work extensively uses calculus in geometric form based on limiting values of the ratios of vanishingly small quantities: in the Principia itself, Newton gave demonstration of this under the name of "the method of first and last ratios" and explained why he put his expositions in this form, remarking also that "hereby the same thing is performed as by the method of indivisibles." Because of this, the Principia has been called "a book dense with the theory and application of the infinitesimal calculus" in modern times and in Newton's time "nearly all of it is of this calculus." His use of methods involving "one or more orders of the infinitesimally small" is present in his De motu corporum in gyrum of 1684 and in his papers on motion "during the two decades preceding 1684". Newton had been reluctant to publish his calculus because he feared controversy and criticism. He was close to the Swiss mathematician Nicolas Fatio de Duillier. In 1691, Duillier started to write a new version of Newton's Principia, and corresponded with Leibniz. In 1693, the relationship between Duillier and Newton deteriorated and the book was never completed. Starting in 1699, Duillier accused Leibniz of plagiarism. Mathematician John Keill accused Leibniz of plagiarism in 1708 in the Royal Society journal, thereby deteriorating the situation even more. The dispute then broke out in full force in 1711 when the Royal Society proclaimed in a study that it was Newton who was the true discoverer and labelled Leibniz a fraud; it was later found that Newton wrote the study's concluding remarks on Leibniz. Thus began the bitter controversy which marred the lives of both men until Leibniz's death in 1716. Newton is credited with the generalised binomial theorem, valid for any exponent.
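In modern notation, the generalised binomial theorem states that for any real exponent α and |x| < 1,

\[
(1+x)^{\alpha} \;=\; \sum_{k=0}^{\infty} \binom{\alpha}{k} x^{k},
\qquad
\binom{\alpha}{k} \;=\; \frac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!},
\]

which reduces to the familiar finite expansion when α is a non-negative integer.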
He discovered Newton's identities and Newton's method, classified cubic plane curves (polynomials of degree three in two variables), and made substantial contributions to the theory of finite differences; he is regarded as "the single most significant contributor to finite difference interpolation", with many formulas created by him. He was the first to state Bézout's theorem, and was also the first to use fractional indices and to employ coordinate geometry to derive solutions to Diophantine equations. He approximated partial sums of the harmonic series by logarithms (a precursor to Euler's summation formula) and was the first to use power series with confidence and to revert power series. His work on infinite series was inspired by Simon Stevin's decimals. Optics In 1666, Newton observed that the spectrum of colours exiting a prism in the position of minimum deviation is oblong, even when the light ray entering the prism is circular, which is to say, the prism refracts different colours by different angles. This led him to conclude that colour is a property intrinsic to light – a point which had, until then, been a matter of debate. From 1670 to 1672, Newton lectured on optics. During this period he investigated the refraction of light, demonstrating that the multicoloured image produced by a prism, which he named a spectrum, could be recomposed into white light by a lens and a second prism. Modern scholarship has revealed that Newton's analysis and resynthesis of white light owes a debt to corpuscular alchemy. In his work on Newton's rings in 1671, he used a method that was unprecedented in the 17th century, as "he averaged all of the differences, and he then calculated the difference between the average and the value for the first ring", in effect introducing a now standard method for reducing noise in measurements, one which did not appear elsewhere at the time. He extended his "error-slaying method" to studies of equinoxes in 1700, which was described as an "altogether unprecedented method" but differed in that here "Newton required good values for each of the original equinoctial times, and so he devised a method that allowed them to, as it were, self-correct." Newton is credited with introducing "an embryonic linear regression analysis. Not only did he perform the averaging of a set of data, 50 years before Tobias Mayer, but summing the residuals to zero he forced the regression line to pass through the average point". Newton also "distinguished between two inhomogeneous sets of data and might have thought of an optimal solution in terms of bias, though not in terms of effectiveness". He showed that coloured light does not change its properties by separating out a coloured beam and shining it on various objects, and that regardless of whether reflected, scattered, or transmitted, the light remains the same colour. Thus, he observed that colour is the result of objects interacting with already-coloured light rather than objects generating the colour themselves. This is known as Newton's theory of colour. From this work, he concluded that the lens of any refracting telescope would suffer from the dispersion of light into colours (chromatic aberration). As a proof of the concept, he constructed a telescope using reflective mirrors instead of lenses as the objective to bypass that problem. Building the design, the first known functional reflecting telescope, today known as a Newtonian telescope, involved solving the problem of a suitable mirror material and shaping technique.
He ground his own mirrors from a custom composition of highly reflective speculum metal, using Newton's rings to judge the quality of the optics for his telescopes. In late 1668, he was able to produce this first reflecting telescope. It was about eight inches long and it gave a clearer and larger image. In 1671, he was asked for a demonstration of his reflecting telescope by the Royal Society. Their interest encouraged him to publish his notes, Of Colours, which he later expanded into the work Opticks. When Robert Hooke criticised some of Newton's ideas, Newton was so offended that he withdrew from public debate. Newton and Hooke had brief exchanges in 1679–80, when Hooke, appointed to manage the Royal Society's correspondence, opened up a correspondence intended to elicit contributions from Newton to Royal Society transactions, which had the effect of stimulating Newton to work out a proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector. The two men remained generally on poor terms until Hooke's death. Newton argued that light is composed of particles or corpuscles, which were refracted by accelerating into a denser medium. He verged on soundlike waves to explain the repeated pattern of reflection and transmission by thin films (Opticks Bk. II, Props. 12), but still retained his theory of 'fits' that disposed corpuscles to be reflected or transmitted (Props. 13). Physicists later favoured a purely wavelike explanation of light to account for the interference patterns and the general phenomenon of diffraction. Despite his known preference for a particle theory, Newton in fact noted that light had both particle-like and wave-like properties in Opticks, and was the first to attempt to reconcile the two theories, thereby anticipating later developments of wave-particle duality, which is the modern understanding of light. In his Hypothesis of Light of 1675, Newton posited the existence of the ether to transmit forces between particles. Contact with the Cambridge Platonist philosopher Henry More revived his interest in alchemy. He replaced the ether with occult forces based on Hermetic ideas of attraction and repulsion between particles. His contributions to science cannot be isolated from his interest in alchemy. This was at a time when there was no clear distinction between alchemy and science. In 1704, Newton published Opticks, in which he expounded his corpuscular theory of light, and included a set of queries at the end. In line with his corpuscle theory, he thought that ordinary matter was made of grosser corpuscles and speculated that through a kind of alchemical transmutation "Are not gross Bodies and Light convertible into one another, ... and may not Bodies receive much of their Activity from the Particles of Light which enter their Composition?" He also constructed a primitive form of a frictional electrostatic generator, using a glass globe. In Opticks, he was the first to show a diagram using a prism as a beam expander, and also the use of multiple-prism arrays. Some 278 years after Newton's discussion, multiple-prism beam expanders became central to the development of narrow-linewidth tunable lasers. Also, the use of these prismatic beam expanders led to the multiple-prism dispersion theory. Subsequent to Newton, much has been amended.
Thomas Young and Augustin-Jean Fresnel discarded Newton's particle theory in favour of Huygens' wave theory to show that colour is the visible manifestation of light's wavelength. Science also slowly came to realise the difference between perception of colour and mathematisable optics. The German poet and scientist, Goethe, could not shake the Newtonian foundation but "one hole Goethe did find in Newton's armour, ... Newton had committed himself to the doctrine that refraction without colour was impossible. He, therefore, thought that the object-glasses of telescopes must forever remain imperfect, achromatism and refraction being incompatible. This inference was proved by Dollond to be wrong." Gravity Newton had been developing his theory of gravitation as far back as 1665. In 1679, Newton returned to his work on celestial mechanics by considering gravitation and its effect on the orbits of planets with reference to Kepler's laws of planetary motion. This followed stimulation by a brief exchange of letters in 1679–80 with Hooke, who had been appointed Secretary of the Royal Society, and who opened a correspondence intended to elicit contributions from Newton to Royal Society transactions. Newton's reawakening interest in astronomical matters received further stimulus by the appearance of a comet in the winter of 1680–1681, on which he corresponded with John Flamsteed. After the exchanges with Hooke, Newton worked out a proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector. Newton communicated his results to Edmond Halley and to the Royal Society in De motu corporum in gyrum, a tract written on about nine sheets which was copied into the Royal Society's Register Book in December 1684. This tract contained the nucleus that Newton developed and expanded to form the Principia. The Principia was published on 5 July 1687 with encouragement and financial help from Halley. In this work, Newton stated the three universal laws of motion. Together, these laws describe the relationship between any object, the forces acting upon it and the resulting motion, laying the foundation for classical mechanics. They contributed to many advances during the Industrial Revolution which soon followed and were not improved upon for more than 200 years. Many of these advances continue to be the underpinnings of non-relativistic technologies in the modern world. He used the Latin word gravitas (weight) for the effect that would become known as gravity, and defined the law of universal gravitation. In the same work, Newton presented a calculus-like method of geometrical analysis using 'first and last ratios', gave the first analytical determination (based on Boyle's law) of the speed of sound in air, inferred the oblateness of Earth's spheroidal figure, accounted for the precession of the equinoxes as a result of the Moon's gravitational attraction on the Earth's oblateness, initiated the gravitational study of the irregularities in the motion of the Moon, provided a theory for the determination of the orbits of comets, and much more. Newton's biographer David Brewster reported that the complexity of applying his theory of gravity to the motion of the moon was so great it affected Newton's health: "[H]e was deprived of his appetite and sleep" during his work on the problem in 1692–93, and told the astronomer John Machin that "his head never ached but when he was studying the subject".
According to Brewster, Edmund Halley also told John Conduitt that when pressed to complete his analysis Newton "always replied that it made his head ache, and kept him awake so often, that he would think of it no more". [Emphasis in original] Newton made clear his heliocentric view of the Solar System—developed in a somewhat modern way because already in the mid-1680s he recognised the "deviation of the Sun" from the centre of gravity of the Solar System. For Newton, it was not precisely the centre of the Sun or any other body that could be considered at rest, but rather "the common centre of gravity of the Earth, the Sun and all the Planets is to be esteem'd the Centre of the World", and this centre of gravity "either is at rest or moves uniformly forward in a right line". (Newton adopted the "at rest" alternative in view of common consent that the centre, wherever it was, was at rest.) Newton was criticised for introducing "occult agencies" into science because of his postulate of an invisible force able to act over vast distances. Later, in the second edition of the Principia (1713), Newton firmly rejected such criticisms in a concluding General Scholium, writing that it was enough that the phenomena implied a gravitational attraction, as they did; but they did not so far indicate its cause, and it was both unnecessary and improper to frame hypotheses of things that were not implied by the phenomena. (Here Newton used what became his famous expression "Hypotheses non fingo".) With the Principia, Newton became internationally recognised. He acquired a circle of admirers, including the Swiss-born mathematician Nicolas Fatio de Duillier. In 1710, Newton found 72 of the 78 "species" of cubic curves and categorised them into four types. In 1717, and probably with Newton's help, James Stirling proved that every cubic was one of these four types. Newton also claimed that the four types could be obtained by plane projection from one of them, and this was proved in 1731, four years after his death. Philosophy of Science Starting with the second edition of his Principia, Newton included a final section on the philosophy or method of science. It was here that he wrote his famous line, in Latin, "hypotheses non fingo", which can be translated as "I don't make hypotheses" (the direct translation of "fingo" is "frame", but in context he was advocating against the use of hypotheses in science). He went on to posit that if there is no data to explain a finding, one should simply wait for that data, rather than guessing at an explanation. The full quote as translated is, "Hitherto I have not been able to discover the cause of those properties of gravity from phenomena, and I frame no hypotheses, for whatever is not deduced from the phenomena is to be called an hypothesis; and hypotheses, whether metaphysical or physical, whether of occult qualities or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction. Thus it was that the impenetrability, the mobility, and the impulsive force of bodies, and the laws of motion and of gravitation, were discovered. And to us it is enough that gravity does really exist, and act according to the laws which we have explained, and abundantly serves to account for all the motions of the celestial bodies, and of our sea." This idea that Newton became anti-hypothesis has been disputed, since earlier editions of the Principia were in fact divided in sections headed by hypotheses.
However, he seems to have gone away from that, as evidenced from his famous line in his "Opticks", where he wrote, in English, "Hypotheses have no place in experimental science." Later life Royal Mint In the 1690s, Newton wrote a number of religious tracts dealing with the literal and symbolic interpretation of the Bible. A manuscript Newton sent to John Locke, in which he disputed the fidelity of 1 John 5:7—the Johannine Comma—to the original manuscripts of the New Testament, remained unpublished until 1785. Newton was also a member of the Parliament of England for Cambridge University in 1689 and 1701, but according to some accounts his only comments were to complain about a cold draught in the chamber and request that the window be closed. He was, however, noted by Cambridge diarist Abraham de la Pryme to have rebuked students who were frightening locals by claiming that a house was haunted. Newton moved to London to take up the post of warden of the Royal Mint during the reign of King William III in 1696, a position that he had obtained through the patronage of Charles Montagu, 1st Earl of Halifax, then Chancellor of the Exchequer. He took charge of England's great recoining, trod on the toes of Lord Lucas, Governor of the Tower, and secured the job of deputy comptroller of the temporary Chester branch for Edmond Halley. Newton became perhaps the best-known Master of the Mint upon the death of Thomas Neale in 1699, a position he held for the last 30 years of his life. These appointments were intended as sinecures, but Newton took them seriously. He retired from his Cambridge duties in 1701, and exercised his authority to reform the currency and punish clippers and counterfeiters. As Warden, and afterwards as Master, of the Royal Mint, Newton estimated that 20 percent of the coins taken in during the Great Recoinage of 1696 were counterfeit. Counterfeiting was high treason, punishable by the felon being hanged, drawn and quartered. Despite this, convicting even the most flagrant criminals could be extremely difficult, but Newton proved equal to the task. Disguised as a habitué of bars and taverns, he gathered much of that evidence himself. For all the barriers placed to prosecution, and separating the branches of government, English law still had ancient and formidable customs of authority. Newton had himself made a justice of the peace in all the home counties. A draft letter regarding the matter is included in Newton's personal first edition of Philosophiæ Naturalis Principia Mathematica, which he must have been amending at the time. Then he conducted more than 100 cross-examinations of witnesses, informers, and suspects between June 1698 and Christmas 1699. He successfully prosecuted 28 coiners, including serial counterfeiter William Chaloner, who was subsequently hanged. Newton was made president of the Royal Society in 1703 and an associate of the French Académie des Sciences. In his position at the Royal Society, Newton made an enemy of John Flamsteed, the Astronomer Royal, by prematurely publishing Flamsteed's Historia Coelestis Britannica, which Newton had used in his studies. Knighthood In April 1705, Queen Anne knighted Newton during a royal visit to Trinity College, Cambridge. The knighthood is likely to have been motivated by political considerations connected with the parliamentary election in May 1705, rather than any recognition of Newton's scientific work or services as Master of the Mint.
Newton was the second scientist to be knighted, after Francis Bacon. As a result of a report written by Newton on 21 September 1717 to the Lords Commissioners of His Majesty's Treasury, the bimetallic relationship between gold coins and silver coins was changed by royal proclamation on 22 December 1717, forbidding the exchange of gold guineas for more than 21 silver shillings. This inadvertently resulted in a silver shortage as silver coins were used to pay for imports, while exports were paid for in gold, effectively moving Britain from the silver standard to its first gold standard. It is a matter of debate as to whether he intended to do this or not. It has been argued that Newton conceived of his work at the Mint as a continuation of his alchemical work. Newton was invested in the South Sea Company and lost some £20,000 (£4.4 million in 2020) when it collapsed in around 1720. Toward the end of his life, Newton took up residence at Cranbury Park, near Winchester, with his niece and her husband, until his death. His half-niece, Catherine Barton, served as his hostess in social affairs at his house on Jermyn Street in London; he was her "very loving Uncle", according to his letter to her when she was recovering from smallpox. Death Newton died in his sleep in London on 20 March 1727 (OS 20 March 1726; NS 31 March 1727). He was given a ceremonial funeral, attended by nobles, scientists, and philosophers, and was buried in Westminster Abbey among kings and queens. He was the first scientist to be buried in the abbey. Voltaire may have been present at his funeral. A bachelor, he had divested much of his estate to relatives during his last years, and died intestate. His papers went to John Conduitt and Catherine Barton. Shortly after his death, a plaster death mask was moulded of Newton. It was used by Flemish sculptor John Michael Rysbrack in making a sculpture of Newton. It is now held by the Royal Society, who created a 3D scan of it in 2012. Newton's hair was posthumously examined and found to contain mercury, probably resulting from his alchemical pursuits. Mercury poisoning could explain Newton's eccentricity in late life. Personality Although it was claimed that he was once engaged, Newton never married. The French writer and philosopher Voltaire, who was in London at the time of Newton's funeral, said that he "was never sensible to any passion, was not subject to the common frailties of mankind, nor had any commerce with women—a circumstance which was assured me by the physician and surgeon who attended him in his last moments.” There exists a widespread belief that Newton died a virgin, and writers as diverse as mathematician Charles Hutton, economist John Maynard Keynes, and physicist Carl Sagan have commented on it. Newton had a close friendship with the Swiss mathematician Nicolas Fatio de Duillier, whom he met in London around 1689; some of their correspondence has survived. Their relationship came to an abrupt and unexplained end in 1693, and at the same time Newton suffered a nervous breakdown, which included sending wild accusatory letters to his friends Samuel Pepys and John Locke. His note to the latter included the charge that Locke had endeavoured to "embroil" him with "woemen & by other means". 
Newton appeared to be relatively modest about his achievements, writing in a later memoir, "I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me." Nonetheless, he could be fiercely competitive and did on occasion hold grudges against his intellectual rivals, not abstaining from personal attacks when it suited him—a common trait found in many of his contemporaries. In a letter to Robert Hooke in February 1676, for instance, he confessed "If I have seen further it is by standing on the shoulders of giants." Some historians have argued that this, written at a time when Newton and Hooke were disputing over optical discoveries, was an oblique attack on Hooke, who was presumably short and hunchbacked, rather than (or in addition to) a statement of modesty. On the other hand, the widely known proverb about standing on the shoulders of giants, found in the work of the 17th-century poet George Herbert (1651) among others, had as its main point that "a dwarf on a giant's shoulders sees farther of the two", and so in effect placed Newton himself, rather than Hooke, as the 'dwarf' who saw farther. Theology Religious views Although born into an Anglican family, by his thirties Newton held a Christian faith that, had it been made public, would not have been considered orthodox by mainstream Christianity, with historian Stephen Snobelen labelling him a heretic. By 1672, he had started to record his theological researches in notebooks which he showed to no one and which have only been available for public examination since 1972. Over half of what Newton wrote concerned theology and alchemy, and most has never been printed. His writings demonstrate an extensive knowledge of early Church writings and show that in the conflict between Athanasius and Arius which defined the Creed, he took the side of Arius, the loser, who rejected the conventional view of the Trinity. Newton "recognized Christ as a divine mediator between God and man, who was subordinate to the Father who created him." He was especially interested in prophecy, but for him, "the great apostasy was trinitarianism." Newton tried unsuccessfully to obtain one of the two fellowships that exempted the holder from the ordination requirement. At the last moment in 1675 he received a dispensation from the government that excused him and all future holders of the Lucasian chair. Worshipping Jesus Christ as God was, in Newton's eyes, idolatry, an act he believed to be the fundamental sin. In 1999, Snobelen wrote, "Isaac Newton was a heretic. But ... he never made a public declaration of his private faith—which the orthodox would have deemed extremely radical. He hid his faith so well that scholars are still unraveling his personal beliefs." Snobelen concludes that Newton was at least a Socinian sympathiser (he owned and had thoroughly read at least eight Socinian books), possibly an Arian and almost certainly an anti-trinitarian. Although the laws of motion and universal gravitation became Newton's best-known discoveries, he warned against using them to view the Universe as a mere machine, as if akin to a great clock. He said, "So then gravity may put the planets into motion, but without the Divine Power it could never put them into such a circulating motion, as they have about the sun". 
Along with his scientific fame, Newton's studies of the Bible and of the early Church Fathers were also noteworthy. Newton wrote works on textual criticism, most notably An Historical Account of Two Notable Corruptions of Scripture and Observations upon the Prophecies of Daniel, and the Apocalypse of St. John. He placed the crucifixion of Jesus Christ at 3 April, AD 33, which agrees with one traditionally accepted date. He believed in a rationally immanent world, but he rejected the hylozoism implicit in Leibniz and Baruch Spinoza. The ordered and dynamically informed Universe could be understood, and must be understood, by an active reason. In his correspondence, Newton claimed that in writing the Principia "I had an eye upon such Principles as might work with considering men for the belief of a Deity". He saw evidence of design in the system of the world: "Such a wonderful uniformity in the planetary system must be allowed the effect of choice". But Newton insisted that divine intervention would eventually be required to reform the system, due to the slow growth of instabilities. For this, Leibniz lampooned him: "God Almighty wants to wind up his watch from time to time: otherwise it would cease to move. He had not, it seems, sufficient foresight to make it a perpetual motion." Newton's position was vigorously defended by his follower Samuel Clarke in a famous correspondence. A century later, Pierre-Simon Laplace's work Celestial Mechanics offered a natural explanation for why the planetary orbits do not require periodic divine intervention. The contrast between Laplace's mechanistic worldview and Newton's is most striking in the famous answer which the French scientist gave Napoleon, who had criticised him for the absence of the Creator in the Mécanique céleste: "Sire, j'ai pu me passer de cette hypothèse" ("Sire, I have had no need of that hypothesis"). Scholars long debated whether Newton disputed the doctrine of the Trinity. His first biographer, David Brewster, who compiled his manuscripts, interpreted Newton as questioning the veracity of some passages used to support the Trinity, but never denying the doctrine of the Trinity as such. In the twentieth century, encrypted manuscripts written by Newton and bought by John Maynard Keynes (among others) were deciphered and it became known that Newton did indeed reject Trinitarianism. Religious thought Newton and Robert Boyle's approach to the mechanical philosophy was promoted by rationalist pamphleteers as a viable alternative to the pantheists and enthusiasts, and was accepted hesitantly by orthodox preachers as well as dissident preachers like the latitudinarians. The clarity and simplicity of science was seen as a way to combat the emotional and metaphysical superlatives of both superstitious enthusiasm and the threat of atheism, and at the same time, the second wave of English deists used Newton's discoveries to demonstrate the possibility of a "Natural Religion". The attacks made against pre-Enlightenment "magical thinking", and the mystical elements of Christianity, were given their foundation with Boyle's mechanical conception of the universe. Newton gave Boyle's ideas their completion through mathematical proofs and, perhaps more importantly, was very successful in popularising them. Alchemy Of an estimated ten million words of writing in Newton's papers, about one million deal with alchemy. Many of Newton's writings on alchemy are copies of other manuscripts, with his own annotations. 
Alchemical texts mix artisanal knowledge with philosophical speculation, often hidden behind layers of wordplay, allegory, and imagery to protect craft secrets. Some of the content contained in Newton's papers could have been considered heretical by the church. In 1888, after spending sixteen years cataloguing Newton's papers, Cambridge University kept a small number and returned the rest to the Earl of Portsmouth. In 1936, a descendant offered the papers for sale at Sotheby's. The collection was broken up and sold for a total of about £9,000. John Maynard Keynes was one of about three dozen bidders who obtained part of the collection at auction. Keynes went on to reassemble an estimated half of Newton's collection of papers on alchemy before donating his collection to Cambridge University in 1946. All of Newton's known writings on alchemy are currently being put online in a project undertaken by Indiana University, "The Chymistry of Isaac Newton", and have been summarised in a book. In June 2020, two unpublished pages of Newton's notes on Jan Baptist van Helmont's book on plague, De Peste, were being auctioned online by Bonhams. Newton's analysis of this book, which he made in Cambridge while protecting himself from London's 1665–1666 infection, is the most substantial written statement he is known to have made about the plague, according to Bonhams. As far as the therapy is concerned, Newton writes that "the best is a toad suspended by the legs in a chimney for three days, which at last vomited up earth with various insects in it, on to a dish of yellow wax, and shortly after died. Combining powdered toad with the excretions and serum made into lozenges and worn about the affected area drove away the contagion and drew out the poison". Legacy Recognition The mathematician and astronomer Joseph-Louis Lagrange frequently asserted that Newton was the greatest genius who ever lived, and once added that Newton was also "the most fortunate, for we cannot find more than once a system of the world to establish." English poet Alexander Pope wrote the famous epitaph "Nature and Nature's laws lay hid in night: / God said, Let Newton be! and all was light." This, however, was not allowed to be inscribed on Newton's monument at Westminster, which instead bears a Latin epitaph. Newton has been called "the most influential figure in the history of Western science", and has been regarded as "the central figure in the history of science", who "more than anyone else is the source of our great confidence in the power of science." New Scientist called Newton "the supreme genius and most enigmatic character in the history of science". The philosopher and historian David Hume also declared that Newton was "the greatest and rarest genius that ever arose for the ornament and instruction of the species". At his home, Monticello, Thomas Jefferson, a Founding Father and President of the United States, kept portraits of John Locke, Sir Francis Bacon, and Newton, whom he described as "the three greatest men that have ever lived, without any exception", and whom he credited with laying "the foundation of those superstructures which have been raised in the Physical and Moral sciences". Newton has further been called "the towering figure of the Scientific Revolution", and it has been said that "In a period rich with outstanding thinkers, Newton was simply the most outstanding." The polymath Johann Wolfgang von Goethe labelled Newton's birth as the "Christmas of the modern age". In the Italian polymath Vilfredo Pareto's estimation, Newton was the greatest human being who ever lived. 
On the bicentennial of Newton's death in 1927, astronomer James Jeans stated that he "was certainly the greatest man of science, and perhaps the greatest intellect, the human race has seen". Newton ultimately conceived four revolutions—in optics, mathematics, mechanics, and gravity—but also foresaw a fifth in electricity, though he lacked the time and energy in old age to fully accomplish it. The physicist Ludwig Boltzmann called Newton's Principia "the first and greatest work ever written about theoretical physics". Physicist Stephen Hawking similarly called Principia "probably the most important single work ever published in the physical sciences". Physicist Edward Andrade stated that Newton "was capable of greater sustained mental effort than any man, before or since", and also remarked on the place of Isaac Newton in history. The French physicist and mathematician Jean-Baptiste Biot likewise praised Newton's genius. Despite their rivalry, Gottfried Wilhelm Leibniz also praised the work of Newton, notably when responding to a question from Sophia Charlotte, the Queen of Prussia, about his view of Newton at a dinner in 1701. Mathematician E.T. Bell ranked Newton alongside Carl Friedrich Gauss and Archimedes as the three greatest mathematicians of all time. In The Cambridge Companion to Isaac Newton (2016), he is described as being "from a very young age, an extraordinary problem-solver, as good, it would appear, as humanity has ever produced". He is ultimately ranked among the top two or three greatest theoretical scientists ever, alongside James Clerk Maxwell and Albert Einstein, the greatest mathematician ever alongside Carl F. Gauss, and among the best experimentalists ever, thereby "putting Newton in a class by himself among empirical scientists, for one has trouble in thinking of any other candidate who was in the first rank of even two of these categories." It is also noted that "At least in comparison to subsequent scientists, Newton was also exceptional in his ability to put his scientific effort in much wider perspective". Gauss himself had Archimedes and Newton as his heroes, and used terms such as clarissimus or magnus to describe other intellectuals such as great mathematicians and philosophers, but reserved summus for Newton only, and once remarked that "Newton remains forever the master of all masters!" Albert Einstein kept a picture of Newton on his study wall alongside ones of Michael Faraday and of James Clerk Maxwell. Einstein stated that Newton's creation of calculus in relation to his laws of motion was "perhaps the greatest advance in thought that a single individual was ever privileged to make." He also noted the influence of Newton more broadly. In 1999, an opinion poll of 100 of the day's leading physicists voted Einstein the "greatest physicist ever," with Newton the runner-up, while a parallel survey of rank-and-file physicists ranked Newton as the greatest. In 2005, a dual survey of the public and of members of Britain's Royal Society (formerly headed by Newton), asking who had the greater effect on the history of science and on the history of mankind, Newton or Einstein, found that both the public and the Royal Society deemed Newton to have made the greater overall contribution. In 1999, Time named Newton the Person of the Century for the 17th century. Newton placed sixth in the 100 Greatest Britons poll conducted by the BBC in 2002. 
However, in 2003, he was voted as the greatest Briton in a poll conducted by BBC World, with Winston Churchill second. He was voted as the greatest Cantabrigian by University of Cambridge students in 2009. Physicist Lev Landau ranked physicists on a logarithmic scale of productivity and genius ranging from 0 to 5. The highest ranking, 0, was assigned to Newton. Einstein was ranked 0.5. A rank of 1 was awarded to the fathers of quantum mechanics, such as Werner Heisenberg and Paul Dirac. Landau, a Nobel prize winner and the discoverer of superfluidity, ranked himself as 2. The SI derived unit of force, the newton, is named in his honour. Apple incident Newton himself often told the story that he was inspired to formulate his theory of gravitation by watching the fall of an apple from a tree. The story is believed to have passed into popular knowledge after being related by Catherine Barton, Newton's niece, to Voltaire. Voltaire then wrote in his Essay on Epic Poetry (1727), "Sir Isaac Newton walking in his gardens, had the first thought of his system of gravitation, upon seeing an apple falling from a tree." Although it has been said that the apple story is a myth and that he did not arrive at his theory of gravity at any single moment, acquaintances of Newton (such as William Stukeley, whose manuscript account of 1752 has been made available by the Royal Society) do in fact confirm the incident, though not the apocryphal version that the apple actually hit Newton's head. Stukeley recorded in his Memoirs of Sir Isaac Newton's Life a conversation with Newton in Kensington on 15 April 1726, and John Conduitt, Newton's assistant at the Royal Mint and husband of Newton's niece, also described the event when he wrote about Newton's life. It is known from his notebooks that Newton was grappling in the late 1660s with the idea that terrestrial gravity extends, in an inverse-square proportion, to the Moon; however, it took him two decades to develop the full-fledged theory. The question was not whether gravity existed, but whether it extended so far from Earth that it could also be the force holding the Moon to its orbit. Newton showed that if the force decreased as the inverse square of the distance, one could indeed calculate the Moon's orbital period, and get good agreement. He guessed the same force was responsible for other orbital motions, and hence named it "universal gravitation". Various trees are claimed to be "the" apple tree which Newton describes. The King's School, Grantham claims that the tree was purchased by the school, uprooted and transported to the headmaster's garden some years later. The staff of the (now) National Trust-owned Woolsthorpe Manor dispute this, and claim that a tree present in their gardens is the one described by Newton. A descendant of the original tree can be seen growing outside the main gate of Trinity College, Cambridge, below the room Newton lived in when he studied there. The National Fruit Collection at Brogdale in Kent can supply grafts from their tree, which appears identical to Flower of Kent, a coarse-fleshed cooking variety. Commemorations Newton's monument (1731) can be seen in Westminster Abbey, at the north of the entrance to the choir against the choir screen, near his tomb. It was executed by the sculptor Michael Rysbrack (1694–1770) in white and grey marble with design by the architect William Kent. 
The monument features a figure of Newton reclining on top of a sarcophagus, his right elbow resting on several of his great books and his left hand pointing to a scroll with a mathematical design. Above him is a pyramid and a celestial globe showing the signs of the Zodiac and the path of the comet of 1680. A relief panel depicts putti using instruments such as a telescope and prism. From 1978 until 1988, an image of Newton designed by Harry Ecclestone appeared on Series D £1 banknotes issued by the Bank of England (the last £1 notes to be issued by the Bank of England). Newton was shown on the reverse of the notes holding a book and accompanied by a telescope, a prism and a map of the Solar System. A statue of Isaac Newton, looking at an apple at his feet, can be seen at the Oxford University Museum of Natural History. A large bronze statue, Newton, after William Blake, by Eduardo Paolozzi, dated 1995 and inspired by Blake's etching, dominates the piazza of the British Library in London. A bronze statue of Newton was erected in 1858 in the centre of Grantham, where he went to school, prominently standing in front of Grantham Guildhall. The still-surviving farmhouse at Woolsthorpe-by-Colsterworth has been designated a Grade I listed building by Historic England as his birthplace and the place "where he discovered gravity and developed his theories regarding the refraction of light". The Enlightenment Enlightenment philosophers chose a short history of scientific predecessors—Galileo, Boyle, and Newton principally—as the guides and guarantors of their applications of the singular concept of nature and natural law to every physical and social field of the day. In this respect, the lessons of history and the social structures built upon it could be discarded. European philosophers and historians of the Enlightenment have held that Newton's publication of the Principia was a turning point in the Scientific Revolution and started the Enlightenment. It was Newton's conception of the universe based upon natural and rationally understandable laws that became one of the seeds for Enlightenment ideology. Locke and Voltaire applied concepts of natural law to political systems advocating intrinsic rights; the physiocrats and Adam Smith applied natural conceptions of psychology and self-interest to economic systems; and sociologists criticised the current social order for trying to fit history into natural models of progress. Monboddo and Samuel Clarke resisted elements of Newton's work, but eventually rationalised it to conform with their strong religious views of nature. Works Published in his lifetime De analysi per aequationes numero terminorum infinitas (1669, published 1711) Of Natures Obvious Laws & Processes in Vegetation (unpublished, –75) De motu corporum in gyrum (1684) Philosophiæ Naturalis Principia Mathematica (1687) Scala graduum Caloris. Calorum Descriptiones & signa (1701) Opticks (1704) Reports as Master of the Mint (1701–1725) Arithmetica Universalis (1707) Published posthumously De mundi systemate (The System of the World) (1728) Optical Lectures (1728) The Chronology of Ancient Kingdoms Amended (1728) Observations on Daniel and The Apocalypse of St. 
John (1733) Method of Fluxions (1671, published 1736) An Historical Account of Two Notable Corruptions of Scripture (1754) See also Elements of the Philosophy of Newton, a book by Voltaire List of multiple discoveries: seventeenth century List of things named after Isaac Newton List of presidents of the Royal Society References Notes Citations Bibliography Further reading Primary Newton, Isaac. The Principia: Mathematical Principles of Natural Philosophy. University of California Press, (1999) Brackenridge, J. Bruce. The Key to Newton's Dynamics: The Kepler Problem and the Principia: Containing an English Translation of Sections 1, 2, and 3 of Book One from the First (1687) Edition of Newton's Mathematical Principles of Natural Philosophy, University of California Press (1996) Newton, Isaac. The Optical Papers of Isaac Newton. Vol. 1: The Optical Lectures, 1670–1672, Cambridge University Press (1984) Newton, Isaac. Opticks (4th ed. 1730) online edition Newton, I. (1952). Opticks, or A Treatise of the Reflections, Refractions, Inflections & Colours of Light. New York: Dover Publications. Newton, I. Sir Isaac Newton's Mathematical Principles of Natural Philosophy and His System of the World, tr. A. Motte, rev. Florian Cajori. Berkeley: University of California Press (1934)  – 8 volumes. Newton, Isaac. The correspondence of Isaac Newton, ed. H.W. Turnbull and others, 7 vols (1959–77) Newton's Philosophy of Nature: Selections from His Writings edited by H.S. Thayer (1953; online edition) Isaac Newton, Sir; J Edleston; Roger Cotes, Correspondence of Sir Isaac Newton and Professor Cotes, including letters of other eminent men, London, John W. Parker, West Strand; Cambridge, John Deighton (1850, Google Books) Maclaurin, C. (1748). An Account of Sir Isaac Newton's Philosophical Discoveries, in Four Books. London: A. Millar and J. Nourse Newton, I. (1958). Isaac Newton's Papers and Letters on Natural Philosophy and Related Documents, eds. I.B. Cohen and R.E. Schofield. Cambridge: Harvard University Press Newton, I. (1962). The Unpublished Scientific Papers of Isaac Newton: A Selection from the Portsmouth Collection in the University Library, Cambridge, ed. A.R. Hall and M.B. Hall. Cambridge: Cambridge University Press Newton, I. (1975). Isaac Newton's 'Theory of the Moon's Motion''' (1702). London: Dawson Alchemy  – Preface by Albert Einstein. Reprinted by Johnson Reprint Corporation, New York (1972) Keynes took a close interest in Newton and owned many of Newton's private papers. (edited by A.H. White; originally published in 1752) Trabue, J. "Ann and Arthur Storer of Calvert County, Maryland, Friends of Sir Isaac Newton," The American Genealogist 79 (2004): 13–27. Religion Dobbs, Betty Jo Tetter. The Janus Faces of Genius: The Role of Alchemy in Newton's Thought. (1991), links the alchemy to Arianism Force, James E., and Richard H. Popkin, eds. Newton and Religion: Context, Nature, and Influence. (1999), pp. xvii, 325.; 13 papers by scholars using newly opened manuscripts Science Berlinski, David. Newton's Gift: How Sir Isaac Newton Unlocked the System of the World. (2000); Cohen, I. Bernard and Smith, George E., ed. The Cambridge Companion to Newton. (2002). Focuses on philosophical issues only; excerpt and text search; complete edition online This well documented work provides, in particular, valuable information regarding Newton's knowledge of Patristics Hawking, Stephen, ed. On the Shoulders of Giants. 
Places selections from Newton's Principia in the context of selected writings by Copernicus, Kepler, Galileo and Einstein Newton, Isaac. Papers and Letters in Natural Philosophy'', edited by I. Bernard Cohen. Harvard University Press, 1958, 1978; . External links Enlightening Science digital project : Texts of his papers, "Popularisations" and podcasts at the Newton Project Writings by Newton Newton's works – full texts, at the Newton Project Newton's papers in the Royal Society's archives The Newton Manuscripts at the National Library of Israel – the collection of all his religious writings "Newton Papers"  – Cambridge Digital Library 1642 births 1727 deaths 17th-century alchemists 17th-century apocalypticists 17th-century English astronomers 17th-century English mathematicians 17th-century English male writers 17th-century English writers 17th-century writers in Latin 18th-century alchemists 18th-century apocalypticists 18th-century English astronomers 18th-century British scientists 18th-century English mathematicians 18th-century English male writers 18th-century English writers 18th-century writers in Latin Alumni of Trinity College, Cambridge Antitrinitarians Ballistics experts English scientific instrument makers British writers in Latin Burials at Westminster Abbey Color scientists Copernican Revolution Creators of temperature scales British critics of atheism English alchemists English Anglicans English Christians English inventors English justices of the peace English knights English mathematicians English MPs 1689–1690 English MPs 1701–1702 English physicists Enlightenment scientists Experimental physicists Fellows of the Royal Society Fellows of Trinity College, Cambridge Fluid dynamicists British geometers Linear algebraists Hermeticists History of calculus Knights Bachelor Lucasian Professors of Mathematics Masters of the Mint Members of the pre-1707 Parliament of England for the University of Cambridge Natural philosophers Nontrinitarian Christians Optical physicists People educated at The King's School, Grantham People from South Kesteven District Philosophers of science Post-Reformation Arian Christians Presidents of the Royal Society Theoretical physicists Writers about religion and science
Isaac Newton
[ "Physics", "Chemistry", "Astronomy", "Mathematics" ]
12,655
[ "Scales of temperature", "Physical quantities", "History of astronomy", "Calculus", "Theoretical physics", "Fluid dynamicists", "Copernican Revolution", "Mathematics of infinitesimals", "Creators of temperature scales", "Theoretical physicists", "History of calculus", "Fluid dynamics" ]
14,722
https://en.wikipedia.org/wiki/Irssi
Irssi is an Internet Relay Chat (IRC) client program for Linux, FreeBSD, macOS and Microsoft Windows. It was originally written by Timo Sirainen, and released under the terms of the GNU GPL-2.0-or-later in January 1999. The program has a text-based user interface and was written from scratch in C. It may be customized by editing its config files or by installing plugins and Perl scripts. Though initially developed for Unix-like operating systems, it has been successfully ported to both Windows and macOS. Features Irssi is written in the C programming language and in normal operation uses a text-mode user interface. According to the developers, Irssi was written from scratch, not based on ircII (like BitchX and epic). This freed the developers from having to deal with the constraints of an existing codebase, allowing them to maintain tighter control over issues such as security and customization. Numerous Perl scripts have been made available for Irssi to customise how it looks and operates. Plugins are available which add encryption and protocols such as ICQ and XMPP. Irssi may be configured by using its user interface or by manually editing its configuration files, which use a syntax resembling Perl data structures. Distributions Irssi was written primarily to run on Unix-like operating systems, and binaries and packages are available for Gentoo Linux, Debian, Slackware, SUSE (openSUSE), Frugalware, Fedora, FreeBSD, OpenBSD, NetBSD, DragonFly BSD, Solaris, Arch Linux, Ubuntu, NixOS, and others. Irssi builds and runs on Microsoft Windows under Cygwin, and in 2006, an official Windows standalone build was released. For the Unix-based macOS, text mode ports are available from the Homebrew, MacPorts, and Fink package managers, and two graphical clients based on Irssi have been written: IrssiX and MacIrssi. The Cocoa client Colloquy was previously based on Irssi, but it now uses its own IRC core implementation. See also Comparison of Internet Relay Chat clients Shell account WeeChat References External links irssi on GitHub on Libera Chat IRC clients Free IRC clients MacOS IRC clients Unix IRC clients Windows IRC clients Free software programmed in C Cross-platform software 1999 software Free software that uses ncurses Console applications Software developed in Finland
Irssi
[ "Technology" ]
521
[ "Software developed in Finland" ]
14,730
https://en.wikipedia.org/wiki/IRC
IRC (Internet Relay Chat) is a text-based chat system for instant messaging. IRC is designed for group communication in discussion forums, called channels, but also allows one-on-one communication via private messages as well as chat and data transfer, including file sharing. Internet Relay Chat is implemented as an application layer protocol to facilitate communication in the form of text. The chat process works on a client–server networking model. Users connect, using a client (which may be a web app, a standalone desktop program, or embedded into part of a larger program), to an IRC server, which may be part of a larger IRC network. Examples of programs used to connect include Mibbit, IRCCloud, KiwiIRC, and mIRC. IRC usage has been declining steadily since 2003, losing 60 percent of its users. In April 2011, the top 100 IRC networks served more than 200,000 users at a time. History IRC was created by Jarkko Oikarinen in August 1988 to replace a program called MUT (MultiUser Talk) on a BBS called OuluBox at the University of Oulu in Finland, where he was working at the Department of Information Processing Science. Jarkko intended to extend the BBS software he administered, to allow news in the Usenet style, real time discussions and similar BBS features. The first part he implemented was the chat part, which he did with borrowed parts written by his friends Jyrki Kuoppala and Jukka Pihl. The first IRC network was running on a single server named tolsun.oulu.fi. Oikarinen found inspiration in a chat system known as Bitnet Relay, which operated on the BITNET. Jyrki Kuoppala pushed Oikarinen to ask Oulu University to free the IRC code so that it also could be run outside of Oulu, and after they finally got it released, Jyrki Kuoppala immediately installed another server. This was the first "IRC network". Oikarinen got some friends at the Helsinki University of Technology and Tampere University of Technology to start running IRC servers when his number of users increased and other universities soon followed. At this time Oikarinen realized that the rest of the BBS features probably would not fit in his program. Oikarinen contacted people at the University of Denver and Oregon State University. They had their own IRC network running and wanted to connect to the Finnish network. They had obtained the program from one of Oikarinen's friends, Vijay Subramaniam—the first non-Finnish person to use IRC. IRC then grew larger and got used on the entire Finnish national network—FUNET—and then connected to Nordunet, the Scandinavian branch of the Internet. In November 1988, IRC had spread across the Internet and in the middle of 1989, there were some 40 servers worldwide. EFnet In August 1990, the first major disagreement took place in the IRC world. The "A-net" (Anarchy net) included a server named eris.berkeley.edu. It was all open, required no passwords and had no limit on the number of connects. As Greg "wumpus" Lindahl explains: "it had a wildcard server line, so people were hooking up servers and nick-colliding everyone". The "Eris Free Network", EFnet, made the eris machine the first to be Q-lined (Q for quarantine) from IRC. In wumpus' words again: "Eris refused to remove that line, so I formed EFnet. It wasn't much of a fight; I got all the hubs to join, and almost everyone else got carried along." A-net was formed with the eris servers, while EFnet was formed with the non-eris servers. History showed most servers and users went with EFnet. 
Once A-net disbanded, the name EFnet became meaningless, and once again it was the one and only IRC network. Around that time IRC was used to report on the 1991 Soviet coup d'état attempt throughout a media blackout. It was previously used in a similar fashion during the Gulf War. Chat logs of these and other events are kept in the ibiblio archive. Undernet fork Another fork effort, the first that made a lasting difference, was initiated by "Wildthang" in the United States in October 1992. (It forked off the EFnet ircd version 2.8.10). It was meant to be just a test network to develop bots on but it quickly grew to a network "for friends and their friends". In Europe and Canada a separate new network was being worked on and in December the French servers connected to the Canadian ones, and by the end of the month, the French and Canadian network was connected to the US one, forming the network that later came to be called "The Undernet". The "undernetters" wanted to take ircd further in an attempt to make it use less bandwidth and to try to sort out the channel chaos (netsplits and takeovers) that EFnet started to suffer from. For the latter purpose, the Undernet implemented timestamps, new routing and offered the CService—a program that allowed users to register channels and then attempted to protect them from troublemakers. The first server list presented, from 15 February 1993, includes servers from the U.S., Canada, France, Croatia and Japan. On 15 August, a new user count record was set at 57 users. In May 1993, RFC 1459 was published, detailing a simple protocol for client/server operation, channels, and one-to-one and one-to-many conversations. A significant number of extensions like CTCP, colors and formats are not included in the protocol specifications, nor is character encoding, which led various implementations of servers and clients to diverge. Software implementation varied significantly from one network to the other, each network implementing their own policies and standards in their own code bases. DALnet fork During the summer of 1994, the Undernet was itself forked. The new network was called DALnet (named after its founder: dalvenjah), formed for better user service and more user and channel protections. One of the more significant changes in DALnet was use of longer nicknames (the original ircd limit being 9 letters). DALnet ircd modifications were made by Alexei "Lefler" Kosut. DALnet was thus based on the Undernet ircd server, although the DALnet pioneers were EFnet abandoners. According to James Ng, the initial DALnet people were "ops in #StarTrek sick from the constant splits/lags/takeovers/etc". DALnet quickly offered global WallOps (IRCop messages that can be seen by users who are +w (/mode NickName +w)), longer nicknames, Q:Lined nicknames (nicknames that cannot be used, e.g. ChanServ, IRCop, NickServ), global K:Lines (ban of one person or an entire domain from a server or the entire network), IRCop only communications: GlobOps, +H mode showing that an IRCop is a "helpop" etc. Many of DALnet's new functions were written in early 1995 by Brian "Morpher" Smith and allow users to own nicknames, control channels, send memos, and more. IRCnet fork In July 1996, after months of flame wars and discussions on the mailing list, there was yet another split due to disagreement in how the development of the ircd should evolve. 
Most notably, the "European" (most of those servers were in Europe) side that later named itself IRCnet argued for nick and channel delays whereas the EFnet side argued for timestamps. There were also disagreements about policies: the European side had started to establish a set of rules directing what IRCops could and could not do, a point of view opposed by the US side. Most (not all) of the IRCnet servers were in Europe, while most of the EFnet servers were in the US. This event is also known as "The Great Split" in many IRC societies. EFnet has since (as of August 1998) grown and passed the number of users it had then. In the (northern) autumn of the year 2000, EFnet had some 50,000 users and IRCnet 70,000. Modern IRC IRC has changed much over its life on the Internet. New server software has added a multitude of new features. Services: Network-operated bots to facilitate registration of nicknames and channels, sending messages for offline users and network operator functions. Extra modes: While the original IRC system used a set of standard user and channel modes, new servers add many new modes for features such as removing color codes from text, or obscuring a user's hostmask ("cloaking") to protect from denial-of-service attacks. Proxy detection: Most modern servers support detection of users attempting to connect through an insecure (misconfigured or exploited) proxy server, which can then be denied a connection. This proxy detection software is used by several networks, although that real time list of proxies is defunct since early 2006. Additional commands: New commands can be such things as shorthand commands to issue commands to Services, to network-operator-only commands to manipulate a user's hostmask. Encryption: For the client-to-server leg of the connection TLS might be used (messages cease to be secure once they are relayed to other users on standard connections, but it makes eavesdropping on or wiretapping an individual's IRC sessions difficult). For client-to-client communication, SDCC (Secure DCC) can be used. Connection protocol: IRC can be connected to via IPv4, the old version of the Internet Protocol, or by IPv6, the current standard of the protocol. , a new standardization effort is under way under a working group called IRCv3, which focuses on more advanced client features such as instant notifications, better history support and improved security. , no major IRC networks have fully adopted the proposed standard. there are 481 different IRC networks known to be operating, of which the open source Libera Chat, founded in May 2021, has the most users, with 20,374 channels on 26 servers; between them, the top 100 IRC networks share over 100 thousand channels operating on about one thousand servers. After its golden era during the 1990s and early 2000s (240,000 users on QuakeNet in 2004), IRC has seen a significant decline, losing around 60% of users between 2003 and 2012, with users moving to social media platforms such as Facebook or Twitter, but also to open platforms such as XMPP which was developed in 1999. Certain networks such as Freenode have not followed the overall trend and have more than quadrupled in size during the same period. However, Freenode, which in 2016 had around 90,000 users, has since declined to about 9,300 users. The largest IRC networks have traditionally been grouped as the "Big Four"—a designation for networks that top the statistics. 
The Big Four networks change periodically, but due to the community nature of IRC there are a large number of other networks for users to choose from. Historically the "Big Four" were: EFnet IRCnet Undernet DALnet IRC reached 6 million simultaneous users in 2001 and 10 million users in 2004–2005, dropping to around 350k in 2021. The top 100 IRC networks have around 230k users connected at peak hours. Timeline Timeline of major networks: EFnet, 1990 to present Undernet, 1992 to present DALnet, 1994 to present freenode, 1995 to present IRCnet, 1996 to present QuakeNet, 1997 to present Open and Free Technology Community, 2001 to present Rizon, 2002 to present Libera Chat, 2021 to present Technical information IRC is an open protocol that uses TCP and, optionally, TLS. An IRC server can connect to other IRC servers to expand the IRC network. Users access IRC networks by connecting a client to a server. There are many client implementations, such as mIRC, HexChat and irssi, and server implementations, e.g. the original IRCd. Most IRC servers do not require users to register an account but a nickname is required before being connected. IRC was originally a plain text protocol (although later extended), which on request was assigned port 194/TCP by IANA. However, the de facto standard has always been to run IRC on 6667/TCP and nearby port numbers (for example TCP ports 6660–6669, 7000) to avoid having to run the IRCd software with root privileges. The protocol specified that characters were 8-bit but did not specify the character encoding the text was supposed to use. This can cause problems when users using different clients and/or different platforms want to converse. All client-to-server IRC protocols in use today are descended from the protocol implemented in the irc2.4.0 version of the IRC2 server, and documented in RFC 1459. Since RFC 1459 was published, the new features in the irc2.10 implementation led to the publication of several revised protocol documents (RFC 2810, RFC 2811, RFC 2812 and RFC 2813); however, these protocol changes have not been widely adopted among other implementations. Although many specifications on the IRC protocol have been published, there is no official specification, as the protocol remains dynamic. Virtually no clients and very few servers rely strictly on the above RFCs as a reference. Microsoft made an extension for IRC in 1998 via the proprietary IRCX. They later stopped distributing software supporting IRCX, instead developing the proprietary MSNP. The standard structure of a network of IRC servers is a tree. Messages are routed along only necessary branches of the tree but network state is sent to every server and there is generally a high degree of implicit trust between servers. However, this architecture has a number of problems. A misbehaving or malicious server can cause major damage to the network and any changes in structure, whether intentional or a result of conditions on the underlying network, require a net-split and net-join. This results in a lot of network traffic and spurious quit/join messages to users and temporary loss of communication to users on the splitting servers. Adding a server to a large network means a large background bandwidth load on the network and a large memory load on the server. Once established, however, each message to multiple recipients is delivered in a fashion similar to multicast, meaning each message travels a network link exactly once. 
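The following is a minimal, illustrative Python sketch of that delivery property. The topology, server names and channel membership are invented for the example, and a real IRC daemon additionally prunes branches whose subtree contains no members of the target channel:

    # Illustrative only: propagate a channel message through a spanning tree of
    # servers so that it crosses each server-to-server link at most once.
    TREE = {                        # adjacency list of the server tree
        "hub": ["leaf-a", "leaf-b"],
        "leaf-a": ["hub"],
        "leaf-b": ["hub", "leaf-c"],
        "leaf-c": ["leaf-b"],
    }
    MEMBERS = {"leaf-a", "leaf-c"}  # servers with local members of #channel

    def propagate(server, came_from, message, links_used):
        """Deliver locally if needed, then forward to every neighbour except
        the link the message arrived on, so each link is used at most once."""
        if server in MEMBERS:
            print(f"{server}: deliver to local members: {message}")
        for neighbour in TREE[server]:
            if neighbour != came_from:
                links_used.append((server, neighbour))
                propagate(neighbour, server, message, links_used)

    links = []
    propagate("hub", None, "PRIVMSG #channel :hello", links)
    print("links traversed:", links)  # every link appears exactly once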
This is a strength in comparison to non-multicasting protocols such as Simple Mail Transfer Protocol (SMTP) or Extensible Messaging and Presence Protocol (XMPP). An IRC daemon can be used on a local area network (LAN). IRC can thus be used to facilitate communication between people within the local area network (internal communication). Commands and replies IRC has a line-based structure. Clients send single-line messages to the server, receive replies to those messages and receive copies of some messages sent by other clients. In most clients, users can enter commands by prefixing them with a '/'. Depending on the command, these may either be handled entirely by the client, or (generally for commands the client does not recognize) passed directly to the server, possibly with some modification. Due to the nature of the protocol, automated systems cannot always reliably pair a sent command with its reply and must resort to guessing. Channels The basic means of communicating to a group of users in an established IRC session is through a channel. Channels on a network can be displayed using the IRC command LIST, which lists all currently available channels that do not have the modes +s or +p set, on that particular network. Users can join a channel using the JOIN command, in most clients available as /join #channelname. Messages sent to a joined channel are then relayed to all other users in it. Channels that are available across an entire IRC network are prefixed with a '#', while those local to a server use '&'. Other less common channel types include '+' channels—'modeless' channels without operators—and '!' channels, a form of timestamped channel on normally non-timestamped networks. Modes Users and channels may have modes that are represented by individual case-sensitive letters and are set using the MODE command. User modes and channel modes are separate and can use the same letter to mean different things (e.g. user mode "i" is invisible mode while channel mode "i" is invite-only). Modes are usually set and unset using the mode command that takes a target (user or channel), a set of modes to set (+) or unset (-), and any parameters the modes need. Some channel modes take parameters, and other channel modes apply to a user on a channel or add or remove a mask (e.g. a ban mask) from a list associated with the channel rather than applying to the channel as a whole. Modes that apply to users on a channel have an associated symbol that is used to represent the mode in names replies (sent to clients on first joining a channel and on use of the names command); in many clients the symbol is also used to represent the mode in the client's displayed list of users in a channel, or to display a separate indicator for the user's own modes. In order to correctly parse incoming mode messages and track channel state, the client must know which mode is of which type and, for the modes that apply to a user on a channel, which symbol goes with which letter. In early implementations of IRC this had to be hard-coded in the client, but there is now a de facto standard extension to the protocol called ISUPPORT that sends this information to the client at connect time using numeric 005; a sketch of how a client can use that information follows below. There is a small design fault in IRC regarding modes that apply to users on channels: the names message used to establish initial channel state can only send one such mode per user on the channel, but multiple such modes can be set on a single user. 
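As a minimal sketch in Python (the 005 tokens shown are illustrative; real servers advertise many more), a client might parse the PREFIX token from ISUPPORT and use it to map the status symbols in a names reply back to mode letters:

    # Parse the PREFIX token from an ISUPPORT (numeric 005) line and use it to
    # interpret the status symbols that precede nicknames in a names reply.
    def parse_prefix(isupport_tokens):
        """Return a mapping of status symbol to mode letter, e.g. {'@': 'o', '+': 'v'}."""
        for token in isupport_tokens:
            if token.startswith("PREFIX=("):           # e.g. "PREFIX=(ov)@+"
                letters, symbols = token[len("PREFIX=("):].split(")")
                return dict(zip(symbols, letters))
        return {"@": "o", "+": "v"}                     # RFC 1459 defaults

    def split_names_entry(entry, prefix_map):
        """Split a names entry such as '@alice' into (nick, set of mode letters)."""
        modes = set()
        while entry and entry[0] in prefix_map:
            modes.add(prefix_map[entry[0]])
            entry = entry[1:]
        return entry, modes

    isupport = ["CHANTYPES=#&", "PREFIX=(ov)@+", "NICKLEN=30"]  # illustrative tokens
    prefix_map = parse_prefix(isupport)
    print(split_names_entry("@alice", prefix_map))   # ('alice', {'o'})
    print(split_names_entry("+bob", prefix_map))     # ('bob', {'v'})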
For example, continuing the names limitation described above: if a user holds both operator status (+o) and voice status (+v) on a channel, a new client will be unable to see the mode with less priority (i.e. voice). Workarounds for this are possible on both the client and server side; a common solution is to use the IRCv3 "multi-prefix" extension. Standard (RFC 1459) modes Many daemons and networks have added extra modes or modified the behavior of the standard RFC 1459 modes. Channel operators A channel operator is a client on an IRC channel that manages the channel. IRC channel operators can be easily seen by the symbol or icon next to their name (this varies by client implementation; commonly a "@" symbol prefix, a green circle, or a Latin letter "+o"/"o"). On most networks, an operator can: Kick a user. Ban a user. Give another user IRC Channel Operator Status or IRC Channel Voice Status. Change the IRC Channel topic while channel mode +t is set. Change the IRC Channel Mode locks. Operators There are also users who maintain elevated rights on their local server, or the entire network; these are called IRC operators, sometimes shortened to IRCops or Opers (not to be confused with channel operators). As the implementation of the IRCd varies, so do the privileges of the IRC operator on the given IRCd. RFC 1459 claims that IRC operators are "a necessary evil" to keep a clean state of the network, and as such they need to be able to disconnect and reconnect servers. Additionally, to prevent malicious users or even harmful automated programs from entering IRC, IRC operators are usually allowed to disconnect clients and completely ban IP addresses or complete subnets. Networks that carry services (NickServ et al.) usually allow their IRC operators also to handle basic "ownership" matters. Further privileged rights may include overriding channel bans (being able to join channels they would not be allowed to join, if they were not opered), being able to op themselves on channels where they would not be able to without being opered, always being auto-opped on channels, and so forth. Hostmasks A hostmask is a unique identifier of an IRC client connected to an IRC server. IRC servers, services, and other clients, including bots, can use it to identify a specific IRC session. The format of a hostmask is nick!user@host. The hostmask looks similar to, but should not be confused with, an e-mail address. The nick part is the nickname chosen by the user and may be changed while connected. The user part is the username reported by ident on the client. If ident is not available on the client, the username specified when the client connected is used after being prefixed with a tilde. The host part is the hostname the client is connecting from. If the IP address of the client cannot be resolved to a valid hostname by the server, the IP address is used instead of the hostname. Because of the privacy implications of exposing the IP address or hostname of a client, some IRC daemons also provide privacy features, such as InspIRCd or UnrealIRCd's "+x" mode. This hashes a client IP address or masks part of a client's hostname, making it unreadable to users other than IRCops. Users may also have the option of requesting a "virtual host" (or "vhost"), to be displayed in the hostmask to allow further anonymity. Some IRC networks, such as Libera Chat or Freenode, use these as "cloaks" to indicate that a user is affiliated with a group or project. URI scheme There are three provisional recognized uniform resource identifier (URI) schemes for Internet Relay Chat: irc, ircs, and irc6. 
When supported, they allow hyperlinks of various forms, including irc://<host>[:<port>]/[<channel>[?<channel_keyword>]], ircs://<host>[:<port>]/[<channel>[?<channel_keyword>]], and irc6://<host>[:<port>]/[<channel>[?<channel_keyword>]] (where items enclosed within brackets ([,]) are optional), to be used to (if necessary) connect to the specified host (or network, if known to the IRC client) and join the specified channel. (This can be used within the client itself, or from another application such as a Web browser.) irc is the default URI scheme, irc6 specifies a connection to be made using IPv6, and ircs specifies a secure connection. Per the specification, the usual hash symbol (#) will be prepended to channel names that begin with an alphanumeric character—allowing it to be omitted. Some implementations (for example, mIRC) will do so unconditionally, resulting in a (usually unintended) extra hash (for example, ##channel) if a hash was already included in the URL. Some implementations allow multiple channels to be specified, separated by commas. Challenges Issues in the original design of IRC included the amount of shared state data, a limitation on its scalability; the absence of unique user identifications, leading to the nickname collision problem; lack of protection from netsplits by means of cyclic routing; the trade-off in scalability for the sake of real-time user presence information; protocol weaknesses providing a platform for abuse; no transparent and optimizable message passing; and no encryption. Some of these issues have been addressed in Modern IRC. Attacks Because IRC connections may be unencrypted and typically span long time periods, they are an attractive target for DoS/DDoS attackers and hackers. Because of this, a careful security policy is necessary to ensure that an IRC network is not susceptible to an attack such as a takeover war. IRC networks may also K-line or G-line users or servers that have a harming effect. Some IRC servers support SSL/TLS connections for security purposes. This helps stop the use of packet sniffer programs to obtain the passwords of IRC users, but has little use beyond this scope due to the public nature of IRC channels. SSL connections require both client and server support (which may require the user to install SSL binaries and IRC client specific patches or modules on their computers). Some networks also use SSL for server-to-server connections, and provide a special channel flag (such as +S) to only allow SSL-connected users on the channel, while disallowing operator identification in clear text, to better utilize the advantages that SSL provides. IRC served as an early laboratory for many kinds of Internet attacks, such as using fake ICMP unreachable messages to break TCP-based IRC connections (nuking) to annoy users or facilitate takeovers. Abuse prevention One of the most contentious technical issues surrounding IRC implementations, which survives to this day, is the merit of "Nick/Channel Delay" vs. "Timestamp" protocols. Both methods exist to solve the problem of denial-of-service attacks, but take very different approaches. The problem with the original IRC protocol as implemented was that when two servers split and rejoined, the two sides of the network would simply merge their channels. 
If a user could join on a "split" server, where a channel that existed on the other side of the network was empty, and gain operator status, they would become a channel operator of the "combined" channel after the netsplit ended; if a user took a nickname that existed on the other side of the network, the server would kill both users when rejoining (a "nick collision"). This was often abused to "mass-kill" all users on a channel, thus creating "opless" channels where no operators were present to deal with abuse. Apart from causing problems within IRC, this encouraged people to conduct denial-of-service attacks against IRC servers in order to cause netsplits, which they would then abuse. The nick delay (ND) and channel delay (CD) strategies aim to prevent abuse by delaying reconnections and renames. After a user signs off and the nickname becomes available, or a channel ceases to exist because all its users parted (as often happens during a netsplit), the server will not allow any user to use that nickname or join that channel, until a certain period of time (the delay) has passed. The idea behind this is that even if a netsplit occurs, it is useless to an abuser because they cannot take the nickname or gain operator status on a channel, and thus no collision of a nickname or "merging" of a channel can occur. To some extent, this inconveniences legitimate users, who might be forced to briefly use a different name after rejoining (appending an underscore is popular). The timestamp protocol is an alternative to nick/channel delays which resolves collisions using timestamped priority. Every nickname and channel on the network is assigned a timestamp: the date and time when it was created. When a netsplit occurs, two users on each side are free to use the same nickname or channel, but when the two sides are joined, only one can survive. In the case of nicknames, the newer user, according to their TS, is killed; when a channel collides, the members (users on the channel) are merged, but the channel operators on the "losing" side of the split lose their channel operator status. TS is a much more complicated protocol than ND/CD, both in design and implementation, and despite having gone through several revisions, some implementations still have problems with "desyncs" (where two servers on the same network disagree about the current state of the network), and allowing too much leniency in what was allowed by the "losing" side. Under the original TS protocols, for example, there was no protection against users setting bans or other modes in the losing channel that would then be merged when the split rejoined, even though the users who had set those modes lost their channel operator status. Some modern TS-based IRC servers have also incorporated some form of ND and/or CD in addition to timestamping in an attempt to further curb abuse. Most networks today use the timestamping approach. The timestamp versus ND/CD disagreements caused several servers to split away from EFnet and form the newer IRCnet. After the split, EFnet moved to a TS protocol, while IRCnet used ND/CD. In recent versions of the IRCnet ircd, as well as ircds using the TS6 protocol (including Charybdis), ND has been extended/replaced by a mechanism called SAVE. This mechanism assigns every client a UID upon connecting to an IRC server. This ID starts with a number, which is forbidden in nicks (although some ircds, namely IRCnet and InspIRCd, allow clients to switch to their own UID as the nickname). 
If two clients with the same nickname join from different sides of a netsplit ("nick collision"), the first server to see this collision will force both clients to change their nick to their UID, thus saving both clients from being disconnected. On IRCnet, the nickname will also be locked for some time (ND) to prevent both clients from changing back to the original nickname, thus colliding again. Clients Client software Client software exists for various operating systems and software packages, as well as in web-based form and embedded inside games. Many different clients are available for the various operating systems, including Windows, Unix and Linux, macOS and mobile operating systems (such as iOS and Android). On Windows, mIRC is one of the most popular clients. Some Linux distributions come with an IRC client preinstalled, such as Linux Mint which comes with HexChat preinstalled. Some programs which are extensible through plug-ins also serve as platforms for IRC clients. For instance, a client called ERC, written entirely in Emacs Lisp, is included in v.22.3 of Emacs. Therefore, any platform that can run Emacs can run ERC. A number of web browsers have had built-in or add-on IRC clients: Opera used to have a client but no longer supports IRC, and the ChatZilla add-on served Mozilla Firefox (for Firefox 56 and earlier; it is included as a built-in component of SeaMonkey). Web-based clients, such as Mibbit and the open source KiwiIRC, can run in most browsers. Games such as War§ow, Unreal Tournament (up to Unreal Tournament 2004), Uplink, Spring Engine-based games, 0 A.D. and ZDaemon have included IRC. Ustream's chat interface is IRC with custom authentication, as is Twitch's (formerly Justin.tv). Bots A typical use of bots in IRC is to provide IRC services or specific functionality within a channel such as to host a chat-based game or provide notifications of external events. However, some IRC bots are used to launch malicious attacks such as denial of service, spamming, or exploitation. Bouncer A program that runs as a daemon on a server and functions as a persistent proxy is known as a BNC or bouncer. The purpose is to maintain a connection to an IRC server, acting as a relay between the server and client, or simply to act as a proxy. Should the client lose network connectivity, the BNC may stay connected and archive all traffic for later delivery, allowing the user to resume their IRC session without disrupting their connection to the server. Furthermore, as a way of obtaining a bouncer-like effect, an IRC client (typically text-based, for example Irssi) may be run on an always-on server to which the user connects via ssh. This also allows devices that only have ssh functionality, but no actual IRC client installed themselves, to connect to IRC, and it allows sharing of IRC sessions. To keep the IRC client from quitting when the ssh connection closes, the client can be run inside a terminal multiplexer such as GNU Screen or tmux, thus staying connected to the IRC network(s) constantly and able to log conversation in channels that the user is interested in, or to maintain a channel's presence on the network. Modelled after this setup, in 2004 an IRC client following the client–server model, called Smuxi, was launched. Search engines There are numerous search engines available to aid the user in finding what they are looking for on IRC. Generally the search engine consists of two parts, a "back-end" (or "spider/crawler") and a front-end "search engine". The back-end (spider/webcrawler) is the workhorse of the search engine.
It is responsible for crawling IRC servers to index the information being sent across them. The information that is indexed usually consists solely of channel text (text that is publicly displayed in public channels). The storage method is usually some sort of relational database, like MySQL or Oracle. The front-end "search engine" is the user interface to the database. It supplies users with a way to search the database of indexed information to retrieve the data they are looking for. These front-end search engines can also be coded in numerous programming languages. Most search engines have their own spider that is a single application responsible for crawling IRC and indexing data itself; however, others are "user based" indexers. The latter rely on users to install their "add-on" to their IRC client; the add-on is what sends the database the channel information of whatever channels the user happens to be on. Many users have implemented their own ad hoc search engines using the logging features built into many IRC clients. These search engines are usually implemented as bots and dedicated to a particular channel or group of associated channels. Character encoding IRC still lacks a single globally accepted standard convention for how to transmit characters outside the 7-bit ASCII repertoire. IRC servers normally transfer messages from a client to another client just as byte sequences, without any interpretation or recoding of characters. The IRC protocol (unlike e.g. MIME or HTTP) lacks mechanisms for announcing and negotiating character encoding options. This has put the responsibility for choosing the appropriate character codec on the client. In practice, IRC channels have largely used the same character encodings that were also used by operating systems (in particular Unix derivatives) in the respective language communities: 7-bit era: In the early days of IRC, especially among Scandinavian and Finnish language users, national variants of ISO 646 were the dominant character encodings. These encode non-ASCII characters like Ä Ö Å ä ö å at code positions 0x5B 0x5C 0x5D 0x7B 0x7C 0x7D (US-ASCII: [ \ ] { | }). That is why these codes are always allowed in nicknames. According to RFC 1459, { | } in nicknames should be treated as lowercase equivalents of [ \ ] respectively. By the late 1990s, the use of 7-bit encodings had disappeared in favour of ISO 8859-1, and such equivalence mappings were dropped from some IRC daemons. 8-bit era: Since the early 1990s, 8-bit encodings such as ISO 8859-1 have become commonly used for European languages. Russian users had a choice of KOI8-R, ISO 8859-5 and CP1251, and since about 2000, modern Russian IRC networks convert between these different commonly used encodings of the Cyrillic script. Multi-byte era: For a long time, East Asian IRC channels with logographic scripts in China, Japan, and Korea have been using multi-byte encodings such as EUC or ISO-2022-JP. With the common migration from ISO 8859 to UTF-8 on Linux and Unix platforms since about 2002, UTF-8 has become an increasingly popular substitute for many of the previously used 8-bit encodings in European channels. Some IRC clients are now capable of reading messages both in ISO 8859-1 or UTF-8 in the same channel, heuristically autodetecting which encoding is used. The shift to UTF-8 began in particular on Finnish-speaking IRC (Merkistö (Finnish)). 
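Such heuristic autodetection usually amounts to attempting a strict UTF-8 decode and falling back to a legacy single-byte encoding when that fails. The following minimal sketch (Python, with a hypothetical helper name, and ISO 8859-1 assumed as the fallback; real clients may prefer other legacy encodings) illustrates the idea:

    def decode_irc_line(raw: bytes, fallback: str = "latin-1") -> str:
        """Decode one IRC message payload, preferring UTF-8.

        UTF-8 is tried first because invalid byte sequences make it
        self-detecting; legacy 8-bit text then falls through to `fallback`.
        """
        try:
            return raw.decode("utf-8", errors="strict")
        except UnicodeDecodeError:
            # Latin-1 maps every byte to a code point, so this cannot fail,
            # though characters outside ISO 8859-1 will be misrendered.
            return raw.decode(fallback)

    # Example: the same channel may carry messages in both encodings.
    print(decode_irc_line("hyvää päivää".encode("utf-8")))
    print(decode_irc_line("hyvää päivää".encode("latin-1")))

The same trick is why mixed ISO 8859-1/UTF-8 channels remain usable: a legacy byte sequence is very unlikely to be valid UTF-8, so the fallback path is taken only when it is actually needed.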
Today, the UTF-8 encoding of Unicode/ISO 10646 would be the most likely contender for a single future standard character encoding for all IRC communication, if such standard ever relaxed the 510-byte message size restriction. UTF-8 is ASCII compatible and covers the superset of all other commonly used coded character set standards. File sharing Much like conventional P2P file sharing, users can create file servers that allow them to share files with each other by using customised IRC bots or scripts for their IRC client. Often users will group together to distribute warez via a network of IRC bots. Technically, IRC provides no file transfer mechanisms itself; file sharing is implemented by IRC clients, typically using the Direct Client-to-Client (DCC) protocol, in which file transfers are negotiated through the exchange of private messages between clients. The vast majority of IRC clients feature support for DCC file transfers, hence the view that file sharing is an integral feature of IRC. The commonplace usage of this protocol, however, sometimes also causes DCC spam. DCC commands have also been used to exploit vulnerable clients into performing an action such as disconnecting from the server or exiting the client. See also Chat room Client-to-client protocol Comparison of instant messaging protocols Comparison of IRC clients The Hamnet Players Internet slang List of IRC commands Serving channel Matrix (protocol) and XMPP, alternative chat protocols Citations General bibliography Further reading External links IRC Numerics List History of IRC IRC.org – Technical and Historical IRC6 information; Articles on the history of IRC IRChelp.org – Internet Relay Chat (IRC) help archive; Large archive of IRC-related documents IRCv3 – Working group of developers, who add new features to the protocol and write specs for them IRC-Source – Internet Relay Chat (IRC) network and channel search engine with historical data irc.netsplit.de – Internet Relay Chat (IRC) network listing with historical data 1988 software Application layer protocols Internet properties established in 1988 Finnish inventions Internet terminology Virtual communities Software developed in Finland Fediverse
IRC
[ "Technology" ]
8,267
[ "Computing terminology", "Internet terminology", "Software developed in Finland" ]
14,731
https://en.wikipedia.org/wiki/Ideogram
An ideogram or ideograph (from Greek ἰδέα 'idea' + γράφω 'to write') is a symbol that represents an idea or concept independent of any particular language. Some ideograms are more arbitrary than others: some are only meaningful assuming preexisting familiarity with some convention; others more directly resemble their signifieds. Ideograms that represent physical objects by visually resembling them are called pictograms. Numerals and mathematical symbols are ideograms, for example ⟨1⟩ 'one', ⟨2⟩ 'two', ⟨+⟩ 'plus', and ⟨=⟩ 'equals'. The ampersand ⟨&⟩ is used in many languages to represent the word and, originally a stylized ligature of the Latin word et. Other typographical examples include ⟨§⟩ 'section', ⟨€⟩ 'euro', ⟨£⟩ 'pound sterling', and ⟨©⟩ 'copyright'. Ideograms are not to be equated with logograms, which represent specific morphemes in a language. In a broad sense, ideograms may form part of a writing system otherwise based on other principles, like the examples above in the phonetic English writing system, while also potentially representing the same idea across several languages, as they do not correspond to a specific spoken word. There may not always be a single way to read a given ideograph. While remaining logograms assigned to morphemes, specific Chinese characters like ⟨中⟩ 'middle' may be classified as ideographs in a narrower sense, given their origin and visual structure. Terminology Pictograms and indicatives Pictograms are ideograms that represent an idea through a direct graphical resemblance to what is being referenced. In proto-writing systems, pictograms generally comprised most of the available symbols. Their use could also be extended via the rebus principle: for example, the pictorial Dongba symbols without Geba annotation cannot represent the Naxi language, but are used as a mnemonic for the recitation of oral literature. Some systems also use indicatives, which denote abstract concepts. Sometimes, the word ideogram is used to refer exclusively to indicatives, contrasting them with pictograms. The word ideogram has historically often been used to describe Egyptian hieroglyphs, Sumerian cuneiform, and Chinese characters. However, these symbols represent semantic elements of a language, and not the underlying ideas directly; their use generally requires knowledge of a specific spoken language. Modern scholars refer to these symbols instead as logograms, and generally avoid calling them ideograms. Most logograms include some representation of the pronunciation of the corresponding word in the language, often using the rebus principle. Later systems used selected symbols to represent the sounds of the language, such as the adaptation of the logogram for 'ox' as the letter aleph representing the initial glottal stop. However, some logograms still meaningfully depict the meaning of the morpheme they represent visually. Pictograms are shaped like the object that the word refers to, such as an icon of a bull denoting the Semitic word for 'ox'. Other logograms may visually represent meaning via more abstract techniques. Many Egyptian hieroglyphs and cuneiform graphs could be used either logographically or phonetically. For example, the Sumerian dingir could represent the word 'deity', the god An or the word 'sky'. In Akkadian, the graph could represent the stem 'deity', the word 'sky', or the syllable an.
While Chinese characters generally function as logograms, three of the six classes in the traditional classification are ideographic (or semantographic) in origin, as they have no phonetic component: Pictograms (象形 xiàngxíng) are generally among the oldest characters, with forms dating to the 12th century BC. Generally, with the evolution of the script, the forms of pictographs became less directly representational, to the extent that their referents are no longer plausible to intuit. Examples include 田 'field' and 心 'heart'. Indicatives (指事 zhǐshì) include characters like 上 'up' and 下 'down', or numerals like 三 'three'. Ideographic compounds (會意 huìyì) have a meaning synthesized from several other characters, such as 明 'bright', a compound of 日 'Sun' and 月 'Moon', or 休 'rest', composed of 人 'person' and 木 'tree'. As the understanding of Old Chinese phonology developed during the second half of the 20th century, many researchers became convinced that the etymology of most characters originally thought to be ideographic compounds actually included some phonetic component. Examples of ideograms are the DOT pictograms, a collection of 50 symbols developed during the 1970s by the American Institute of Graphic Arts at the request of the United States Department of Transportation. Initially used to mark airports, the system gradually became more widespread. Pure signs Many ideograms only represent ideas by convention. For example, a red octagon only carries the meaning of 'stop' due to the public association and reification of that meaning over time. In the field of semiotics, these are a type of pure sign, a term which also includes symbols using non-graphical media. Modern analysis of Chinese characters reveals that pure signs are as old as the system itself, with prominent examples including the numerals representing numbers larger than four, including 五 'five' and 八 'eight'. These do not indicate anything about the quantities they represent visually or phonetically, only conventionally. Types Mathematical notation A mathematical symbol is a type of ideogram. History As true writing systems emerged from systems of pure ideograms, later societies with phonetic writing were often compelled by the intuitive connection between pictures, diagrams and logograms, though ultimately ignorant of the latter's necessary phonetic dimension. Greek speakers began regularly visiting Egypt during the 7th century BC. Ancient Greek writers generally mistook the Egyptian writing system to be purely ideographic. According to tradition, the Greeks had acquired the ability to write, among other things, from the Egyptians through Pythagoras, who had been directly taught their silent form of "symbolic teaching". Beginning with Plato (428–347 BC), the conception of hieroglyphs as ideograms was rooted in a broader metaphysical conception of most language as an imperfect and obfuscatory image of reality. The views of Plato involved an ontologically separate world of forms, but those of his student Aristotle (384–322 BC) instead saw the forms as parts identical within the soul of every person. For both, ideography was a more perfect representation of the forms possessed by the Egyptians. The Aristotelian framework would be the foundation for the conception of language in the Mediterranean world into the medieval era. According to the classical theory, because ideographs directly reflected the forms, they were the only "true language", and had the unique ability to communicate arcane wisdom to readers.
The ability to read Egyptian hieroglyphs had been lost during late antiquity, in the context of the country's Hellenization and Christianization. However, the traditional notion that the latter trends compelled the abandonment of hieroglyphic writing has been rejected by recent scholarship. Europe only became fully acquainted with written Chinese near the end of the 16th century, and initially related the system to their existing framework of ideography as partially informed by Egyptian hieroglyphs. Ultimately, Jean-François Champollion's successful decipherment of hieroglyphs in 1823 stemmed from an understanding that they did represent spoken Egyptian language, as opposed to being purely ideographic. Champollion's insight in part stemmed from his familiarity with the work of French sinologist Jean-Pierre Abel-Rémusat regarding fanqie, which demonstrated that Chinese characters were often used to write sounds, and not just ideas. Proposed universal languages Inspired by these conceptions of ideography, several attempts have been made to design a universal written language—i.e., an ideography whose interpretations are accessible to all people with no regard to the languages they speak. An early proposal was made in 1668 by John Wilkins in An Essay Towards a Real Character, and a Philosophical Language. More recently, Blissymbols was devised by Charles K. Bliss in 1949, and currently includes over 2,000 graphs. See also Epigraphy – the study of inscriptions List of symbols List of writing systems Character (symbol) Emoji Heterogram (linguistics) Lexigrams Logotype Traffic sign References Citations Works cited Further reading Communication design Graphic design Writing systems
Ideogram
[ "Engineering" ]
1,793
[ "Design", "Communication design" ]
14,734
https://en.wikipedia.org/wiki/Iron
Iron is a chemical element; it has the symbol Fe and atomic number 26. It is a metal that belongs to the first transition series and group 8 of the periodic table. It is, by mass, the most common element on Earth, forming much of Earth's outer and inner core. It is the fourth most abundant element in the Earth's crust, being mainly deposited by meteorites in its metallic state. Extracting usable metal from iron ores requires kilns or furnaces capable of reaching temperatures substantially higher than those required to smelt copper. Humans started to master that process in Eurasia during the 2nd millennium BC and the use of iron tools and weapons began to displace copper alloys – in some regions, only around 1200 BC. That event is considered the transition from the Bronze Age to the Iron Age. In the modern world, iron alloys, such as steel, stainless steel, cast iron and special steels, are by far the most common industrial metals, due to their mechanical properties and low cost. The iron and steel industry is thus very important economically, and iron is the cheapest metal, with a price of a few dollars per kilogram or pound. Pristine and smooth pure iron surfaces are a mirror-like silvery-gray. Iron reacts readily with oxygen and water to produce brown-to-black hydrated iron oxides, commonly known as rust. Unlike the oxides of some other metals that form passivating layers, rust occupies more volume than the metal and thus flakes off, exposing more fresh surfaces for corrosion. Chemically, the most common oxidation states of iron are iron(II) and iron(III). Iron shares many properties of other transition metals, including the other group 8 elements, ruthenium and osmium. Iron forms compounds in a wide range of oxidation states, −4 to +7. Iron also forms many coordination complexes; some of them, such as ferrocene, ferrioxalate, and Prussian blue, have substantial industrial, medical, or research applications. The body of an adult human contains about 4 grams (0.005% body weight) of iron, mostly in hemoglobin and myoglobin. These two proteins play essential roles in oxygen transport by blood and oxygen storage in muscles. To maintain the necessary levels, human iron metabolism requires a minimum amount of iron in the diet. Iron is also the metal at the active site of many important redox enzymes dealing with cellular respiration and oxidation and reduction in plants and animals. Characteristics Allotropes At least four allotropes of iron (differing atom arrangements in the solid) are known, conventionally denoted α, γ, δ, and ε. The first three forms are observed at ordinary pressures. As molten iron cools past its freezing point of 1538 °C, it crystallizes into its δ allotrope, which has a body-centered cubic (bcc) crystal structure. As it cools further to 1394 °C, it changes to its γ-iron allotrope, a face-centered cubic (fcc) crystal structure, or austenite. At 912 °C and below, the crystal structure again becomes the bcc α-iron allotrope. The physical properties of iron at very high pressures and temperatures have also been studied extensively, because of their relevance to theories about the cores of the Earth and other planets. Above approximately 10 GPa and temperatures of a few hundred kelvin or less, α-iron changes into a hexagonal close-packed (hcp) structure, which is also known as ε-iron. The higher-temperature γ-phase also changes into ε-iron, but does so at higher pressure.
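At ordinary pressure, the phase boundaries quoted above (912 °C, 1394 °C and the 1538 °C freezing point) are enough to look up the stable allotrope from temperature alone. The sketch below is a hypothetical helper in Python that merely encodes those figures; it ignores pressure and therefore the high-pressure ε and β phases:

    def iron_allotrope(temp_c: float) -> str:
        """Return the stable form of pure iron at roughly 1 atm for a temperature in °C."""
        if temp_c >= 1538:
            return "liquid"
        if temp_c >= 1394:
            return "delta (bcc)"
        if temp_c >= 912:
            return "gamma (fcc, austenite)"
        return "alpha (bcc)"

    for t in (25, 1000, 1450, 1600):
        print(t, "->", iron_allotrope(t))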
Some controversial experimental evidence exists for a stable β phase at pressures above 50 GPa and temperatures of at least 1500 K. It is supposed to have an orthorhombic or a double hcp structure. (Confusingly, the term "β-iron" is sometimes also used to refer to α-iron above its Curie point, when it changes from being ferromagnetic to paramagnetic, even though its crystal structure has not changed.) The Earth's inner core is generally presumed to consist of an iron-nickel alloy with ε (or β) structure. Melting and boiling points The melting and boiling points of iron, along with its enthalpy of atomization, are lower than those of the earlier 3d elements from scandium to chromium, showing the lessened contribution of the 3d electrons to metallic bonding as they are attracted more and more into the inert core by the nucleus; however, they are higher than the values for the previous element manganese because that element has a half-filled 3d sub-shell and consequently its d-electrons are not easily delocalized. This same trend appears for ruthenium but not osmium. The melting point of iron is experimentally well defined for pressures less than 50 GPa. For greater pressures, published data (as of 2007) still varies by tens of gigapascals and over a thousand kelvin. Magnetic properties Below its Curie point of 770 °C, α-iron changes from paramagnetic to ferromagnetic: the spins of the two unpaired electrons in each atom generally align with the spins of its neighbors, creating an overall magnetic field. This happens because the orbitals of those two electrons (dz2 and dx2 − y2) do not point toward neighboring atoms in the lattice, and therefore are not involved in metallic bonding. In the absence of an external source of magnetic field, the atoms get spontaneously partitioned into magnetic domains, about 10 micrometers across, such that the atoms in each domain have parallel spins, but some domains have other orientations. Thus a macroscopic piece of iron will have a nearly zero overall magnetic field. Application of an external magnetic field causes the domains that are magnetized in the same general direction to grow at the expense of adjacent ones that point in other directions, reinforcing the external field. This effect is exploited in devices that need to channel magnetic fields to fulfill their design function, such as electrical transformers, magnetic recording heads, and electric motors. Impurities, lattice defects, or grain and particle boundaries can "pin" the domains in the new positions, so that the effect persists even after the external field is removed – thus turning the iron object into a (permanent) magnet. Similar behavior is exhibited by some iron compounds, such as the ferrites including the mineral magnetite, a crystalline form of the mixed iron(II,III) oxide (although the atomic-scale mechanism, ferrimagnetism, is somewhat different). Pieces of magnetite with natural permanent magnetization (lodestones) provided the earliest compasses for navigation. Particles of magnetite were extensively used in magnetic recording media such as core memories, magnetic tapes, floppies, and disks, until they were replaced by cobalt-based materials. Isotopes Iron has four stable isotopes: 54Fe (5.845% of natural iron), 56Fe (91.754%), 57Fe (2.119%) and 58Fe (0.282%). Twenty-four artificial isotopes have also been created. Of these stable isotopes, only 57Fe has a nuclear spin (−1/2).
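The natural abundances listed above determine iron's standard atomic weight as a simple weighted mean. In the short calculation below, the abundances are taken from the text, while the isotopic masses (in daltons) are approximate literature values added here for illustration:

    # (mass in Da, natural abundance): abundances from the text,
    # masses are approximate literature values.
    isotopes = {
        "54Fe": (53.9396, 0.05845),
        "56Fe": (55.9349, 0.91754),
        "57Fe": (56.9354, 0.02119),
        "58Fe": (57.9333, 0.00282),
    }

    atomic_weight = sum(mass * abundance for mass, abundance in isotopes.values())
    print(round(atomic_weight, 3))  # ~55.845, matching the accepted standard atomic weight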
The nuclide 54Fe theoretically can undergo double electron capture to 54Cr, but the process has never been observed and only a lower limit on the half-life of 4.4×1020 years has been established. 60Fe is an extinct radionuclide of long half-life (2.6 million years). It is not found on Earth, but its ultimate decay product is its granddaughter, the stable nuclide 60Ni. Much of the past work on isotopic composition of iron has focused on the nucleosynthesis of 60Fe through studies of meteorites and ore formation. In the last decade, advances in mass spectrometry have allowed the detection and quantification of minute, naturally occurring variations in the ratios of the stable isotopes of iron. Much of this work is driven by the Earth and planetary science communities, although applications to biological and industrial systems are emerging. In phases of the meteorites Semarkona and Chervony Kut, a correlation between the concentration of 60Ni, the granddaughter of 60Fe, and the abundance of the stable iron isotopes provided evidence for the existence of 60Fe at the time of formation of the Solar System. Possibly the energy released by the decay of 60Fe, along with that released by 26Al, contributed to the remelting and differentiation of asteroids after their formation 4.6 billion years ago. The abundance of 60Ni present in extraterrestrial material may bring further insight into the origin and early history of the Solar System. The most abundant iron isotope 56Fe is of particular interest to nuclear scientists because it represents the most common endpoint of nucleosynthesis. Since 56Ni (14 alpha particles) is easily produced from lighter nuclei in the alpha process in nuclear reactions in supernovae (see silicon burning process), it is the endpoint of fusion chains inside extremely massive stars. Although adding more alpha particles is possible, the sequence effectively ends at 56Ni because conditions in stellar interiors cause the competition between photodisintegration and the alpha process to favor photodisintegration around 56Ni. This 56Ni, which has a half-life of about 6 days, is created in quantity in these stars, but soon decays by two successive positron emissions within supernova decay products in the supernova remnant gas cloud, first to radioactive 56Co, and then to stable 56Fe. As such, iron is the most abundant element in the core of red giants, and is the most abundant metal in iron meteorites and in the dense metal cores of planets such as Earth. It is also very common in the universe, relative to other stable metals of approximately the same atomic weight. Iron is the sixth most abundant element in the universe, and the most common refractory element. Although a further tiny energy gain could be extracted by synthesizing 62Ni, which has a marginally higher binding energy than 56Fe, conditions in stars are unsuitable for this process. Element production in supernovas greatly favors iron over nickel, and in any case, 56Fe still has a lower mass per nucleon than 62Ni due to its higher fraction of lighter protons. Hence, elements heavier than iron require a supernova for their formation, involving rapid neutron capture by starting 56Fe nuclei. In the far future of the universe, assuming that proton decay does not occur, cold fusion occurring via quantum tunnelling would cause the light nuclei in ordinary matter to fuse into 56Fe nuclei.
Fission and alpha-particle emission would then make heavy nuclei decay into iron, converting all stellar-mass objects to cold spheres of pure iron. Origin and occurrence in nature Cosmogenesis Iron's abundance in rocky planets like Earth is due to its abundant production during the runaway fusion and explosion of type Ia supernovae, which scatters the iron into space. Metallic iron Metallic or native iron is rarely found on the surface of the Earth because it tends to oxidize. However, both the Earth's inner and outer core, which together account for 35% of the mass of the whole Earth, are believed to consist largely of an iron alloy, possibly with nickel. Electric currents in the liquid outer core are believed to be the origin of the Earth's magnetic field. The other terrestrial planets (Mercury, Venus, and Mars) as well as the Moon are believed to have a metallic core consisting mostly of iron. The M-type asteroids are also believed to be partly or mostly made of metallic iron alloy. The rare iron meteorites are the main form of natural metallic iron on the Earth's surface. Items made of cold-worked meteoritic iron have been found in various archaeological sites dating from a time when iron smelting had not yet been developed; and the Inuit in Greenland have been reported to use iron from the Cape York meteorite for tools and hunting weapons. About 1 in 20 meteorites consist of the unique iron-nickel minerals taenite (35–80% iron) and kamacite (90–95% iron). Native iron is also rarely found in basalts that have formed from magmas that have come into contact with carbon-rich sedimentary rocks, which have reduced the oxygen fugacity sufficiently for iron to crystallize. This is known as telluric iron and is described from a few localities, such as Disko Island in West Greenland, Yakutia in Russia and Bühl in Germany. Mantle minerals Ferropericlase , a solid solution of periclase (MgO) and wüstite (FeO), makes up about 20% of the volume of the lower mantle of the Earth, which makes it the second most abundant mineral phase in that region after silicate perovskite ; it also is the major host for iron in the lower mantle. At the bottom of the transition zone of the mantle, the reaction γ- transforms γ-olivine into a mixture of silicate perovskite and ferropericlase and vice versa. In the literature, this mineral phase of the lower mantle is also often called magnesiowüstite. Silicate perovskite may form up to 93% of the lower mantle, and the magnesium iron form, , is considered to be the most abundant mineral in the Earth, making up 38% of its volume. Earth's crust While iron is the most abundant element on Earth, most of this iron is concentrated in the inner and outer cores. The fraction of iron that is in Earth's crust only amounts to about 5% of the overall mass of the crust and is thus only the fourth most abundant element in that layer (after oxygen, silicon, and aluminium). Most of the iron in the crust is combined with various other elements to form many iron minerals. An important class is the iron oxide minerals such as hematite (Fe2O3), magnetite (Fe3O4), and siderite (FeCO3), which are the major ores of iron. Many igneous rocks also contain the sulfide minerals pyrrhotite and pentlandite. During weathering, iron tends to leach from sulfide deposits as the sulfate and from silicate deposits as the bicarbonate. Both of these are oxidized in aqueous solution and precipitate in even mildly elevated pH as iron(III) oxide. 
Large deposits of iron are banded iron formations, a type of rock consisting of repeated thin layers of iron oxides alternating with bands of iron-poor shale and chert. The banded iron formations were laid down in Precambrian time. Materials containing finely ground iron(III) oxides or oxide-hydroxides, such as ochre, have been used as yellow, red, and brown pigments since prehistoric times. They contribute as well to the color of various rocks and clays, including entire geological formations like the Painted Hills in Oregon and the Buntsandstein ("colored sandstone", British Bunter). Through Eisensandstein (a Jurassic 'iron sandstone', e.g. from Donzdorf in Germany) and Bath stone in the UK, iron compounds are responsible for the yellowish color of many historical buildings and sculptures. The proverbial red color of the surface of Mars is derived from an iron oxide-rich regolith. Significant amounts of iron occur in the iron sulfide mineral pyrite (FeS2), but it is difficult to extract iron from it and it is therefore not exploited. In fact, iron is so common that production generally focuses only on ores with very high quantities of it. According to the International Resource Panel's Metal Stocks in Society report, the global stock of iron in use in society is 2,200 kg per capita. More-developed countries differ in this respect from less-developed countries (7,000–14,000 vs 2,000 kg per capita). Oceans Ocean science demonstrated the role of iron in the ancient seas in both marine biota and climate. Chemistry and compounds Iron shows the characteristic chemical properties of the transition metals, namely the ability to form variable oxidation states differing by steps of one and a very large coordination and organometallic chemistry: indeed, it was the discovery of an iron compound, ferrocene, that revolutionized the latter field in the 1950s. Iron is sometimes considered a prototype for the entire block of transition metals, due to its abundance and the immense role it has played in the technological progress of humanity. Its 26 electrons are arranged in the configuration [Ar]3d64s2, of which the 3d and 4s electrons are relatively close in energy, and thus a number of electrons can be ionized. Iron forms compounds mainly in the oxidation states +2 (iron(II), "ferrous") and +3 (iron(III), "ferric"). Iron also occurs in higher oxidation states, e.g., the purple potassium ferrate (K2FeO4), which contains iron in its +6 oxidation state. The anion [FeO4]– with iron in its +7 oxidation state, along with an iron(V)-peroxo isomer, has been detected by infrared spectroscopy at 4 K after cocondensation of laser-ablated Fe atoms with a mixture of O2/Ar. Iron(IV) is a common intermediate in many biochemical oxidation reactions. Numerous organoiron compounds contain formal oxidation states of +1, 0, −1, or even −2. The oxidation states and other bonding properties are often assessed using the technique of Mössbauer spectroscopy. Many mixed valence compounds contain both iron(II) and iron(III) centers, such as magnetite and Prussian blue (Fe4[Fe(CN)6]3). The latter is used as the traditional "blue" in blueprints. Iron is the first of the transition metals that cannot reach its group oxidation state of +8, although its heavier congeners ruthenium and osmium can, with ruthenium having more difficulty than osmium.
Ruthenium exhibits an aqueous cationic chemistry in its low oxidation states similar to that of iron, but osmium does not, favoring high oxidation states in which it forms anionic complexes. In the second half of the 3d transition series, vertical similarities down the groups compete with the horizontal similarities of iron with its neighbors cobalt and nickel in the periodic table, which are also ferromagnetic at room temperature and share similar chemistry. As such, iron, cobalt, and nickel are sometimes grouped together as the iron triad. Unlike many other metals, iron does not form amalgams with mercury. As a result, mercury is traded in standardized 76 pound flasks (34 kg) made of iron. Iron is by far the most reactive element in its group; it is pyrophoric when finely divided and dissolves easily in dilute acids, giving Fe2+. However, it does not react with concentrated nitric acid and other oxidizing acids due to the formation of an impervious oxide layer, which can nevertheless react with hydrochloric acid. High-purity iron, called electrolytic iron, is considered to be resistant to rust, due to its oxide layer. Binary compounds Oxides and sulfides Iron forms various oxide and hydroxide compounds; the most common are iron(II,III) oxide (Fe3O4), and iron(III) oxide (Fe2O3). Iron(II) oxide also exists, though it is unstable at room temperature. Despite their names, they are actually all non-stoichiometric compounds whose compositions may vary. These oxides are the principal ores for the production of iron (see bloomery and blast furnace). They are also used in the production of ferrites, useful magnetic storage media in computers, and pigments. The best known sulfide is iron pyrite (FeS2), also known as fool's gold owing to its golden luster. It is not an iron(IV) compound, but is actually an iron(II) polysulfide containing Fe2+ and ions in a distorted sodium chloride structure. Halides The binary ferrous and ferric halides are well-known. The ferrous halides typically arise from treating iron metal with the corresponding hydrohalic acid to give the corresponding hydrated salts. Fe + 2 HX → FeX2 + H2 (X = F, Cl, Br, I) Iron reacts with fluorine, chlorine, and bromine to give the corresponding ferric halides, ferric chloride being the most common. 2 Fe + 3 X2 → 2 FeX3 (X = F, Cl, Br) Ferric iodide is an exception, being thermodynamically unstable due to the oxidizing power of Fe3+ and the high reducing power of I−: 2 I− + 2 Fe3+ → I2 + 2 Fe2+ (E0 = +0.23 V) Ferric iodide, a black solid, is not stable in ordinary conditions, but can be prepared through the reaction of iron pentacarbonyl with iodine and carbon monoxide in the presence of hexane and light at the temperature of −20 °C, with oxygen and water excluded. Complexes of ferric iodide with some soft bases are known to be stable compounds. 
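The instability of ferric iodide can be made quantitative from the +0.23 V potential quoted above, since a positive cell potential corresponds to a negative Gibbs energy (ΔG = −nFE). A short illustrative calculation in Python, using only that figure and the Faraday constant:

    F = 96485        # Faraday constant, C per mole of electrons
    n = 2            # electrons transferred: 2 I− + 2 Fe3+ → I2 + 2 Fe2+
    E_cell = 0.23    # V, from the potential quoted in the text

    delta_G = -n * F * E_cell   # J per mole of reaction as written
    print(round(delta_G / 1000, 1), "kJ/mol")  # about -44.4 kJ/mol: Fe3+ oxidizes I− spontaneously

The negative value is why mixing iron(III) and iodide simply liberates iodine rather than giving a stable FeI3 salt under ordinary conditions.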
Solution chemistry The standard reduction potentials in acidic aqueous solution for some common iron ions are given below:
[Fe(H2O)6]2+ + 2 e− ⇌ Fe  E0 = −0.447 V
[Fe(H2O)6]3+ + e− ⇌ [Fe(H2O)6]2+  E0 = +0.77 V
FeO42− + 8 H3O+ + 3 e− ⇌ [Fe(H2O)6]3+ + 6 H2O  E0 = +2.20 V
The red-purple tetrahedral ferrate(VI) anion is such a strong oxidizing agent that it oxidizes ammonia to nitrogen (N2) and water to oxygen: 4 FeO42− + 34 H2O → 4 [Fe(H2O)6]3+ + 20 OH− + 3 O2. The pale-violet hexaquo complex [Fe(H2O)6]3+ is an acid such that above pH 0 it is fully hydrolyzed:
[Fe(H2O)6]3+ ⇌ [Fe(H2O)5(OH)]2+ + H+  K = 10−3.05 mol dm−3
[Fe(H2O)5(OH)]2+ ⇌ [Fe(H2O)4(OH)2]+ + H+  K = 10−3.26 mol dm−3
2 [Fe(H2O)6]3+ ⇌ [Fe2(H2O)8(OH)2]4+ + 2 H+ + 2 H2O  K = 10−2.91 mol dm−3
As pH rises above 0 the above yellow hydrolyzed species form and as it rises above 2–3, reddish-brown hydrous iron(III) oxide precipitates out of solution. Although Fe3+ has a d5 configuration, its absorption spectrum is not like that of Mn2+ with its weak, spin-forbidden d–d bands, because Fe3+ has higher positive charge and is more polarizing, lowering the energy of its ligand-to-metal charge transfer absorptions. Thus, all the above complexes are rather strongly colored, with the single exception of the hexaquo ion – and even that has a spectrum dominated by charge transfer in the near ultraviolet region. On the other hand, the pale green iron(II) hexaquo ion does not undergo appreciable hydrolysis. Carbon dioxide is not evolved when carbonate anions are added, which instead results in white iron(II) carbonate being precipitated out. In excess carbon dioxide this forms the slightly soluble bicarbonate, which occurs commonly in groundwater, but it oxidises quickly in air to form iron(III) oxide that accounts for the brown deposits present in a sizeable number of streams. Coordination compounds Due to its electronic structure, iron has a very large coordination and organometallic chemistry. Many coordination compounds of iron are known. A typical six-coordinate anion is hexachloroferrate(III), [FeCl6]3−, found in the mixed salt tetrakis(methylammonium) hexachloroferrate(III) chloride. Complexes with multiple bidentate ligands have geometric isomers. For example, the trans-chlorohydridobis(bis-1,2-(diphenylphosphino)ethane)iron(II) complex is used as a starting material for compounds with the Fe(dppe)2 moiety. The ferrioxalate ion with three oxalate ligands displays helical chirality with its two non-superposable geometries labelled Λ (lambda) for the left-handed screw axis and Δ (delta) for the right-handed screw axis, in line with IUPAC conventions. Potassium ferrioxalate is used in chemical actinometry and along with its sodium salt undergoes photoreduction applied in old-style photographic processes. The dihydrate of iron(II) oxalate has a polymeric structure with co-planar oxalate ions bridging between iron centres with the water of crystallisation located forming the caps of each octahedron. Iron(III) complexes are quite similar to those of chromium(III) with the exception of iron(III)'s preference for O-donor instead of N-donor ligands. The latter tend to be rather more unstable than iron(II) complexes and often dissociate in water. Many Fe–O complexes show intense colors and are used as tests for phenols or enols.
For example, in the ferric chloride test, used to determine the presence of phenols, iron(III) chloride reacts with a phenol to form a deep violet complex: 3 ArOH + FeCl3 → Fe(OAr)3 + 3 HCl (Ar = aryl). Among the halide and pseudohalide complexes, fluoro complexes of iron(III) are the most stable, with the colorless [FeF5(H2O)]2− being the most stable in aqueous solution. Chloro complexes are less stable and favor tetrahedral coordination as in [FeCl4]−; [FeBr4]− and [FeI4]− are reduced easily to iron(II). Thiocyanate is a common test for the presence of iron(III) as it forms the blood-red [Fe(SCN)(H2O)5]2+. Like manganese(II), most iron(III) complexes are high-spin, the exceptions being those with ligands that are high in the spectrochemical series such as cyanide. An example of a low-spin iron(III) complex is [Fe(CN)6]3−. Iron shows a great variety of electronic spin states, including every possible spin quantum number value for a d-block element from 0 (diamagnetic) to 5/2 (5 unpaired electrons). This value is always half the number of unpaired electrons. Complexes with zero to two unpaired electrons are considered low-spin and those with four or five are considered high-spin. Iron(II) complexes are less stable than iron(III) complexes but the preference for O-donor ligands is less marked, so that, for example, [Fe(NH3)6]2+ is known while [Fe(NH3)6]3+ is not. They have a tendency to be oxidized to iron(III) but this can be moderated by low pH and the specific ligands used. Organometallic compounds Organoiron chemistry is the study of organometallic compounds of iron, where carbon atoms are covalently bound to the metal atom. They are many and varied, including cyanide complexes, carbonyl complexes, sandwich and half-sandwich compounds. Prussian blue or "ferric ferrocyanide", Fe4[Fe(CN)6]3, is an old and well-known iron-cyanide complex, extensively used as a pigment and in several other applications. Its formation can be used as a simple wet chemistry test to distinguish between aqueous solutions of Fe2+ and Fe3+ as they react (respectively) with potassium ferricyanide and potassium ferrocyanide to form Prussian blue. Another old example of an organoiron compound is iron pentacarbonyl, Fe(CO)5, in which a neutral iron atom is bound to the carbon atoms of five carbon monoxide molecules. The compound can be used to make carbonyl iron powder, a highly reactive form of metallic iron. Thermolysis of iron pentacarbonyl gives triiron dodecacarbonyl, Fe3(CO)12, a complex with a cluster of three iron atoms at its core. Collman's reagent, disodium tetracarbonylferrate, is a useful reagent for organic chemistry; it contains iron in the −2 oxidation state. Cyclopentadienyliron dicarbonyl dimer contains iron in the rare +1 oxidation state. A landmark in this field was the discovery in 1951 of the remarkably stable sandwich compound ferrocene, Fe(C5H5)2, by Pauson and Kealy and independently by Miller and colleagues, whose surprising molecular structure was determined only a year later by Woodward and Wilkinson and by Fischer. Ferrocene is still one of the most important tools and models in this class. Iron-centered organometallic species are used as catalysts. The Knölker complex, for example, is a transfer hydrogenation catalyst for ketones. Industrial uses The iron compounds produced on the largest scale in industry are iron(II) sulfate (FeSO4·7H2O) and iron(III) chloride (FeCl3). The former is one of the most readily available sources of iron(II), but is less stable to aerial oxidation than Mohr's salt ((NH4)2Fe(SO4)2·6H2O).
Iron(II) compounds tend to be oxidized to iron(III) compounds in the air. History Development of iron metallurgy Iron is one of the elements undoubtedly known to the ancient world. It has been worked, or wrought, for millennia. However, iron artefacts of great age are much rarer than objects made of gold or silver due to the ease with which iron corrodes. The technology developed slowly, and even after the discovery of smelting it took many centuries for iron to replace bronze as the metal of choice for tools and weapons. Meteoritic iron Beads made from meteoric iron in 3500 BC or earlier were found in Gerzeh, Egypt by G. A. Wainwright. The beads contain 7.5% nickel, which is a signature of meteoric origin since iron found in the Earth's crust generally has only minuscule nickel impurities. Meteoric iron was highly regarded due to its origin in the heavens and was often used to forge weapons and tools. For example, a dagger made of meteoric iron was found in the tomb of Tutankhamun, containing similar proportions of iron, cobalt, and nickel to a meteorite discovered in the area, deposited by an ancient meteor shower. Items that were likely made of iron by Egyptians date from 3000 to 2500 BC. Meteoritic iron is comparably soft and ductile and easily cold forged but may get brittle when heated because of the nickel content. Wrought iron The first iron production started in the Middle Bronze Age, but it took several centuries before iron displaced bronze. Samples of smelted iron from Asmar, Mesopotamia and Tall Chagar Bazaar in northern Syria were made sometime between 3000 and 2700 BC. The Hittites established an empire in north-central Anatolia around 1600 BC. They appear to be the first to understand the production of iron from its ores and regard it highly in their society. The Hittites began to smelt iron between 1500 and 1200 BC and the practice spread to the rest of the Near East after their empire fell in 1180 BC. The subsequent period is called the Iron Age. Artifacts of smelted iron are found in India dating from 1800 to 1200 BC, and in the Levant from about 1500 BC (suggesting smelting in Anatolia or the Caucasus). Alleged references (compare history of metallurgy in South Asia) to iron in the Indian Vedas have been used for claims of a very early usage of iron in India respectively to date the texts as such. The rigveda term ayas (metal) refers to copper, while iron which is called as śyāma ayas, literally "black copper", first is mentioned in the post-rigvedic Atharvaveda. Some archaeological evidence suggests iron was smelted in Zimbabwe and southeast Africa as early as the eighth century BC. Iron working was introduced to Greece in the late 11th century BC, from which it spread quickly throughout Europe. The spread of ironworking in Central and Western Europe is associated with Celtic expansion. According to Pliny the Elder, iron use was common in the Roman era. In the lands of what is now considered China, iron appears approximately 700–500 BC. Iron smelting may have been introduced into China through Central Asia. The earliest evidence of the use of a blast furnace in China dates to the 1st century AD, and cupola furnaces were used as early as the Warring States period (403–221 BC). Usage of the blast and cupola furnace remained widespread during the Tang and Song dynasties. During the Industrial Revolution in Britain, Henry Cort began refining iron from pig iron to wrought iron (or bar iron) using innovative production systems. 
In 1783 he patented the puddling process for refining pig iron. It was later improved by others, including Joseph Hall. Cast iron Cast iron was first produced in China during the 5th century BC, but was hardly used in Europe until the medieval period. The earliest cast iron artifacts were discovered by archaeologists in what is now modern Luhe County, Jiangsu in China. Cast iron was used in ancient China for warfare, agriculture, and architecture. During the medieval period, means were found in Europe of producing wrought iron from cast iron (in this context known as pig iron) using finery forges. For all these processes, charcoal was required as fuel. Medieval blast furnaces were a few metres tall and made of fireproof brick; forced air was usually provided by hand-operated bellows. Modern blast furnaces have grown much bigger, with hearths fourteen meters in diameter that allow them to produce thousands of tons of iron each day, but essentially operate in much the same way as they did during medieval times. In 1709, Abraham Darby I established a coke-fired blast furnace to produce cast iron, replacing charcoal with coke while continuing to use blast furnaces. The ensuing availability of inexpensive iron was one of the factors leading to the Industrial Revolution. Toward the end of the 18th century, cast iron began to replace wrought iron for certain purposes, because it was cheaper. Carbon content in iron was not implicated as the reason for the differences in properties of wrought iron, cast iron, and steel until the 18th century. Since iron was becoming cheaper and more plentiful, it also became a major structural material following the building of the innovative first iron bridge in 1778. This bridge still stands today as a monument to the role iron played in the Industrial Revolution. Following this, iron was used in rails, boats, ships, aqueducts, and buildings, as well as in iron cylinders in steam engines. Railways have been central to the formation of modernity and ideas of progress, and various languages refer to railways as the iron road (e.g. French chemin de fer, German Eisenbahn, Turkish demiryolu, Russian железная дорога, Chinese, Japanese, and Korean 鐵道, Vietnamese đường sắt). Steel Steel (with smaller carbon content than pig iron but more than wrought iron) was first produced in antiquity by using a bloomery. Blacksmiths in Luristan in western Persia were making good steel by 1000 BC. Then improved versions, Wootz steel in India and Damascus steel, were developed around 300 BC and AD 500 respectively. These methods were specialized, and so steel did not become a major commodity until the 1850s. New methods of producing it by carburizing bars of iron in the cementation process were devised in the 17th century. In the Industrial Revolution, new methods of producing bar iron without charcoal were devised and these were later applied to produce steel. In the late 1850s, Henry Bessemer invented a new steelmaking process, involving blowing air through molten pig iron, to produce mild steel. This made steel much more economical, thereby leading to wrought iron no longer being produced in large quantities. Foundations of modern chemistry In 1774, Antoine Lavoisier used the reaction of steam with metallic iron inside an incandescent iron tube to produce hydrogen in his experiments leading to the demonstration of the conservation of mass, which was instrumental in changing chemistry from a qualitative science to a quantitative one. Symbolic role Iron plays a certain role in mythology and has found various uses as a metaphor and in folklore.
The Greek poet Hesiod's Works and Days (lines 109–201) lists different ages of man named after metals like gold, silver, bronze and iron to account for successive ages of humanity. The Iron Age was closely related with Rome; in Ovid's Metamorphoses, the last and worst of the ages of man is the age of iron. An example of the importance of iron's symbolic role may be found in the German Campaign of 1813. Frederick William III then commissioned the first Iron Cross as a military decoration. Berlin iron jewellery reached its peak production between 1813 and 1815, when the Prussian royal family urged citizens to donate gold and silver jewellery for military funding. The inscription Ich gab Gold für Eisen (I gave gold for iron) was also used in later war efforts. Laboratory routes For a few limited purposes when it is needed, pure iron is produced in the laboratory in small quantities by reducing the pure oxide or hydroxide with hydrogen, or forming iron pentacarbonyl and heating it to 250 °C so that it decomposes to form pure iron powder. Another method is electrolysis of ferrous chloride onto an iron cathode. Main industrial route Nowadays, the industrial production of iron or steel consists of two main stages. In the first stage, iron ore is reduced with coke in a blast furnace, and the molten metal is separated from gross impurities such as silicate minerals. This stage yields an alloy – pig iron – that contains relatively large amounts of carbon. In the second stage, the amount of carbon in the pig iron is lowered by oxidation to yield wrought iron, steel, or cast iron. Other metals can be added at this stage to form alloy steels. Blast furnace processing The blast furnace is loaded with iron ores, usually hematite (Fe2O3) or magnetite (Fe3O4), along with coke (coal that has been separately baked to remove volatile components) and flux (limestone or dolomite). "Blasts" of air pre-heated to 900 °C (sometimes with oxygen enrichment) are blown through the mixture, in sufficient amount to turn the carbon into carbon monoxide:
2 C + O2 → 2 CO
This reaction raises the temperature to about 2000 °C. The carbon monoxide reduces the iron ore to metallic iron:
Fe2O3 + 3 CO → 2 Fe + 3 CO2
Some iron in the high-temperature lower region of the furnace reacts directly with the coke:
Fe2O3 + 3 C → 2 Fe + 3 CO
The flux removes siliceous minerals in the ore, which would otherwise clog the furnace:
CaCO3 → CaO + CO2
CaO + SiO2 → CaSiO3
The heat of the furnace decomposes the carbonates to calcium oxide, which reacts with any excess silica to form a slag composed of calcium silicate or other products. At the furnace's temperature, the metal and the slag are both molten. They collect at the bottom as two immiscible liquid layers (with the slag on top), that are then easily separated. The slag can be used as a material in road construction or to improve mineral-poor soils for agriculture. Steelmaking thus remains one of the largest industrial contributors of CO2 emissions in the world. Steelmaking The pig iron produced by the blast furnace process contains up to 4–5% carbon (by mass), with small amounts of other impurities like sulfur, magnesium, phosphorus, and manganese. This high level of carbon makes it relatively weak and brittle. Reducing the amount of carbon to 0.002–2.1% produces steel, which may be up to 1000 times harder than pure iron. A great variety of steel articles can then be made by cold working, hot rolling, forging, machining, etc. Removing the impurities from pig iron, but leaving 2–4% carbon, results in cast iron, which is cast by foundries into articles such as stoves, pipes, radiators, lamp-posts, and rails.
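As a rough sense of scale for the blast-furnace chemistry described above, the reduction Fe2O3 + 3 CO → 2 Fe + 3 CO2 fixes the theoretical iron yield and the CO2 released per tonne of hematite. The sketch below uses approximate molar masses; the ore-purity argument is an illustrative assumption, not a figure from the text:

    M_FE, M_O, M_C = 55.845, 15.999, 12.011   # approximate molar masses, g/mol
    M_FE2O3 = 2 * M_FE + 3 * M_O              # about 159.7 g/mol
    M_CO2 = M_C + 2 * M_O                     # about 44.0 g/mol

    def blast_furnace_yield(ore_tonnes: float, hematite_fraction: float = 1.0):
        """Theoretical Fe and CO2 output (tonnes) for Fe2O3 + 3 CO -> 2 Fe + 3 CO2."""
        mol_fe2o3 = ore_tonnes * hematite_fraction * 1e6 / M_FE2O3  # grams -> moles
        iron_t = 2 * mol_fe2o3 * M_FE / 1e6
        co2_t = 3 * mol_fe2o3 * M_CO2 / 1e6
        return iron_t, co2_t

    print(blast_furnace_yield(1.0))   # ~0.70 t Fe and ~0.83 t CO2 per tonne of pure Fe2O3

Real furnaces also burn coke for heat and calcine the flux, so actual CO2 per tonne of iron is considerably higher than this reduction-only figure.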
Steel products often undergo various heat treatments after they are forged to shape. Annealing consists of heating them to 700–800 °C for several hours and then gradual cooling. It makes the steel softer and more workable. Direct iron reduction Owing to environmental concerns, alternative methods of processing iron have been developed. "Direct iron reduction" reduces iron ore to a ferrous lump called "sponge" iron or "direct" iron that is suitable for steelmaking. Two main reactions comprise the direct reduction process: Natural gas is partially oxidized (with heat and a catalyst):
2 CH4 + O2 → 2 CO + 4 H2
Iron ore is then treated with these gases in a furnace, producing solid sponge iron:
Fe2O3 + CO + 2 H2 → 2 Fe + CO2 + 2 H2O
Silica is removed by adding a limestone flux as described above. Thermite process Ignition of a mixture of aluminium powder and iron oxide yields metallic iron via the thermite reaction:
Fe2O3 + 2 Al → 2 Fe + Al2O3
Alternatively pig iron may be made into steel (with up to about 2% carbon) or wrought iron (commercially pure iron). Various processes have been used for this, including finery forges, puddling furnaces, Bessemer converters, open hearth furnaces, basic oxygen furnaces, and electric arc furnaces. In all cases, the objective is to oxidize some or all of the carbon, together with other impurities. On the other hand, other metals may be added to make alloy steels. Molten oxide electrolysis Molten oxide electrolysis (MOE) uses electrolysis of molten iron oxide to yield metallic iron. It is studied in laboratory-scale experiments and is proposed as a method for industrial iron production that has no direct emissions of carbon dioxide. It uses a liquid iron cathode, an anode formed from an alloy of chromium, aluminium and iron, and the electrolyte is a mixture of molten metal oxides into which iron ore is dissolved. The current keeps the electrolyte molten and reduces the iron oxide. Oxygen gas is produced in addition to liquid iron. The only carbon dioxide emissions come from any fossil fuel-generated electricity used to heat and reduce the metal. Applications As structural material Iron is the most widely used of all the metals, accounting for over 90% of worldwide metal production. Its low cost and high strength often make it the material of choice to withstand stress or transmit forces, such as in the construction of machinery and machine tools, rails, automobiles, ship hulls, concrete reinforcing bars, and the load-carrying framework of buildings. Since pure iron is quite soft, it is most commonly combined with alloying elements to make steel. Mechanical properties The mechanical properties of iron and its alloys are extremely relevant to their structural applications. Those properties can be evaluated in various ways, including the Brinell test, the Rockwell test and the Vickers hardness test. The properties of pure iron are often used to calibrate measurements or to compare tests. However, the mechanical properties of iron are significantly affected by the sample's purity: pure, single crystals of iron are actually softer than aluminium, and the purest industrially produced iron (99.99%) has a hardness of 20–30 Brinell. Pure iron (99.9%–99.999%), specifically called electrolytic iron, is produced industrially by electrolytic refining. An increase in the carbon content will cause a significant increase in the hardness and tensile strength of iron. A maximum hardness of 65 Rc is achieved with a 0.6% carbon content, although the alloy has low tensile strength.
Because of the softness of iron, it is much easier to work with than its heavier congeners ruthenium and osmium. Types of steels and alloys α-Iron is a fairly soft metal that can dissolve only a small concentration of carbon (no more than 0.021% by mass at 910 °C). Austenite (γ-iron) is similarly soft and metallic but can dissolve considerably more carbon (as much as 2.04% by mass at 1146 °C). This form of iron is used in the type of stainless steel used for making cutlery, and hospital and food-service equipment. Commercially available iron is classified based on purity and the abundance of additives. Pig iron has 3.5–4.5% carbon and contains varying amounts of contaminants such as sulfur, silicon and phosphorus. Pig iron is not a saleable product, but rather an intermediate step in the production of cast iron and steel. The reduction of contaminants in pig iron that negatively affect material properties, such as sulfur and phosphorus, yields cast iron containing 2–4% carbon, 1–6% silicon, and small amounts of manganese. Pig iron has a melting point in the range of 1420–1470 K, which is lower than either of its two main components, and makes it the first product to be melted when carbon and iron are heated together. Its mechanical properties vary greatly and depend on the form the carbon takes in the alloy. "White" cast irons contain their carbon in the form of cementite, or iron carbide (Fe3C). This hard, brittle compound dominates the mechanical properties of white cast irons, rendering them hard, but unresistant to shock. The broken surface of a white cast iron is full of fine facets of the broken iron carbide, a very pale, silvery, shiny material, hence the appellation. Cooling a mixture of iron with 0.8% carbon slowly below 723 °C to room temperature results in separate, alternating layers of cementite and α-iron, which is soft and malleable and is called pearlite for its appearance. Rapid cooling, on the other hand, does not allow time for this separation and creates hard and brittle martensite. The steel can then be tempered by reheating to a temperature in between, changing the proportions of pearlite and martensite. The end product below 0.8% carbon content is a pearlite-αFe mixture, and that above 0.8% carbon content is a pearlite-cementite mixture. In gray iron the carbon exists as separate, fine flakes of graphite, and also renders the material brittle due to the sharp edged flakes of graphite that produce stress concentration sites within the material. A newer variant of gray iron, referred to as ductile iron, is specially treated with trace amounts of magnesium to alter the shape of graphite to spheroids, or nodules, reducing the stress concentrations and vastly increasing the toughness and strength of the material. Wrought iron contains less than 0.25% carbon but large amounts of slag that give it a fibrous characteristic. Wrought iron is more corrosion resistant than steel. It has been almost completely replaced by mild steel, which corrodes more readily than wrought iron, but is cheaper and more widely available. Carbon steel contains 2.0% carbon or less, with small amounts of manganese, sulfur, phosphorus, and silicon. Alloy steels contain varying amounts of carbon as well as other metals, such as chromium, vanadium, molybdenum, nickel, tungsten, etc. Their alloy content raises their cost, and so they are usually only employed for specialist uses. One common alloy steel, though, is stainless steel. 
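The carbon-content ranges quoted above lend themselves to a simple classification. The sketch below encodes those ranges only as a rough guide; the category names and boundaries are taken from this text, and real classification also depends on silicon, slag content, and heat treatment.

```python
def classify_by_carbon(carbon_pct: float) -> str:
    """Rough classification of an iron-carbon alloy by carbon mass fraction,
    using the ranges quoted in the text (the ranges overlap in practice)."""
    if carbon_pct < 0.25:
        return "wrought iron / very low-carbon steel"
    if carbon_pct <= 2.1:
        return "steel"
    if carbon_pct <= 4.5:
        return "cast iron or pig iron"
    return "outside the usual commercial range"

for c in (0.1, 0.8, 1.5, 3.0, 4.2):
    print(f"{c:>4}% C -> {classify_by_carbon(c)}")
```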
Recent developments in ferrous metallurgy have produced a growing range of microalloyed steels, also termed 'HSLA' or high-strength, low alloy steels, containing tiny additions to produce high strengths and often spectacular toughness at minimal cost. Alloys with high purity elemental makeups (such as alloys of electrolytic iron) have specifically enhanced properties such as ductility, tensile strength, toughness, fatigue strength, heat resistance, and corrosion resistance. Apart from traditional applications, iron is also used for protection from ionizing radiation. Although it is lighter than another traditional protection material, lead, it is much stronger mechanically. The main disadvantage of iron and steel is that pure iron, and most of its alloys, suffer badly from rust if not protected in some way, a cost amounting to over 1% of the world's economy. Painting, galvanization, passivation, plastic coating and bluing are all used to protect iron from rust by excluding water and oxygen or by cathodic protection. The mechanism of the rusting of iron is as follows: Cathode: 3 O2 + 6 H2O + 12 e− → 12 OH− Anode: 4 Fe → 4 Fe2+ + 8 e−; 4 Fe2+ → 4 Fe3+ + 4 e− Overall: 4 Fe + 3 O2 + 6 H2O → 4 Fe3+ + 12 OH− → 4 Fe(OH)3 or 4 FeO(OH) + 4 H2O The electrolyte is usually iron(II) sulfate in urban areas (formed when atmospheric sulfur dioxide attacks iron), and salt particles in the atmosphere in seaside areas. Catalysts and reagents Because Fe is inexpensive and nontoxic, much effort has been devoted to the development of Fe-based catalysts and reagents. Iron is however less common as a catalyst in commercial processes than more expensive metals. In biology, Fe-containing enzymes are pervasive. Iron catalysts are traditionally used in the Haber–Bosch process for the production of ammonia and the Fischer–Tropsch process for conversion of carbon monoxide to hydrocarbons for fuels and lubricants. Powdered iron in an acidic medium is used in the Bechamp reduction, the conversion of nitrobenzene to aniline. Iron compounds Iron(III) oxide mixed with aluminium powder can be ignited to create a thermite reaction, used in welding large iron parts (like rails) and purifying ores. Iron(III) oxide and oxyhydroxide are used as reddish and ocher pigments. Iron(III) chloride finds use in water purification and sewage treatment, in the dyeing of cloth, as a coloring agent in paints, as an additive in animal feed, and as an etchant for copper in the manufacture of printed circuit boards. It can also be dissolved in alcohol to form tincture of iron, which is used as a medicine to stop bleeding in canaries. Iron(II) sulfate is used as a precursor to other iron compounds. It is also used to reduce chromate in cement. It is used to fortify foods and treat iron deficiency anemia. Iron(III) sulfate is used in settling minute sewage particles in tank water. Iron(II) chloride is used as a reducing flocculating agent, in the formation of iron complexes and magnetic iron oxides, and as a reducing agent in organic synthesis. Sodium nitroprusside is a drug used as a vasodilator. It is on the World Health Organization's List of Essential Medicines. Biological and pathological role Iron is required for life. The iron–sulfur clusters are pervasive and include nitrogenase, the enzymes responsible for biological nitrogen fixation. Iron-containing proteins participate in transport, storage and use of oxygen. Iron proteins are involved in electron transfer. 
Examples of iron-containing proteins in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase. The average adult human contains about 0.005% body weight of iron, or about four grams, of which three quarters is in hemoglobin—a level that remains constant despite only about one milligram of iron being absorbed each day, because the human body recycles its hemoglobin for the iron content. Microbial growth may be assisted by oxidation of iron(II) or by reduction of iron(III). Biochemistry Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral pH. Thus, these organisms have developed means to absorb iron as complexes, sometimes taking up ferrous iron before oxidising it back to ferric iron. In particular, bacteria have evolved very high-affinity sequestering agents called siderophores. After uptake in human cells, iron storage is precisely regulated. A major component of this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries it in the blood to cells. Transferrin contains Fe3+ in the middle of a distorted octahedron, bonded to one nitrogen, three oxygens and a chelating carbonate anion that traps the Fe3+ ion: it has such a high stability constant that it is very effective at taking up Fe3+ ions even from the most stable complexes. At the bone marrow, transferrin is reduced from Fe3+ to Fe2+ and stored as ferritin to be incorporated into hemoglobin. The most commonly known and studied bioinorganic iron compounds (biological iron molecules) are the heme proteins: examples are hemoglobin, myoglobin, and cytochrome P450. These compounds participate in transporting gases, building enzymes, and transferring electrons. Metalloproteins are a group of proteins with metal ion cofactors. Some examples of iron metalloproteins are ferritin and rubredoxin. Many enzymes vital to life contain iron, such as catalase, lipoxygenases, and IRE-BP. Hemoglobin is an oxygen carrier that occurs in red blood cells and contributes their color, transporting oxygen in the arteries from the lungs to the muscles where it is transferred to myoglobin, which stores it until it is needed for the metabolic oxidation of glucose, generating energy. Here the hemoglobin binds to carbon dioxide, produced when glucose is oxidized, which is transported through the veins by hemoglobin (predominantly as bicarbonate anions) back to the lungs where it is exhaled. In hemoglobin, the iron is in one of four heme groups and has six possible coordination sites; four are occupied by nitrogen atoms in a porphyrin ring, the fifth by an imidazole nitrogen in a histidine residue of one of the protein chains attached to the heme group, and the sixth is reserved for the oxygen molecule it can reversibly bind to. When hemoglobin is not attached to oxygen (and is then called deoxyhemoglobin), the Fe2+ ion at the center of the heme group (in the hydrophobic protein interior) is in a high-spin configuration. It is thus too large to fit inside the porphyrin ring, which bends instead into a dome with the Fe2+ ion about 55 picometers above it. In this configuration, the sixth coordination site reserved for the oxygen is blocked by another histidine residue. When deoxyhemoglobin picks up an oxygen molecule, this histidine residue moves away and returns once the oxygen is securely attached to form a hydrogen bond with it. 
This results in the Fe2+ ion switching to a low-spin configuration, resulting in a 20% decrease in ionic radius so that now it can fit into the porphyrin ring, which becomes planar. Additionally, this hydrogen bonding results in the tilting of the oxygen molecule, resulting in an Fe–O–O bond angle of around 120° that avoids the formation of Fe–O–Fe or Fe–O2–Fe bridges that would lead to electron transfer, the oxidation of Fe2+ to Fe3+, and the destruction of hemoglobin. This results in a movement of all the protein chains that leads to the other subunits of hemoglobin changing shape to a form with larger oxygen affinity. Thus, when deoxyhemoglobin takes up oxygen, its affinity for more oxygen increases, and vice versa. Myoglobin, on the other hand, contains only one heme group and hence this cooperative effect cannot occur. Thus, while hemoglobin is almost saturated with oxygen in the high partial pressures of oxygen found in the lungs, its affinity for oxygen is much lower than that of myoglobin, which oxygenates even at low partial pressures of oxygen found in muscle tissue. As described by the Bohr effect (named after Christian Bohr, the father of Niels Bohr), the oxygen affinity of hemoglobin diminishes in the presence of carbon dioxide. Carbon monoxide and phosphorus trifluoride are poisonous to humans because they bind to hemoglobin similarly to oxygen, but much more strongly, so that oxygen can no longer be transported throughout the body. Hemoglobin bound to carbon monoxide is known as carboxyhemoglobin. This effect also plays a minor role in the toxicity of cyanide, but there the major effect is by far its interference with the proper functioning of the electron transport protein cytochrome a. The cytochrome proteins also involve heme groups and are involved in the metabolic oxidation of glucose by oxygen. The sixth coordination site is then occupied by either another imidazole nitrogen or a methionine sulfur, so that these proteins are largely inert to oxygen—with the exception of cytochrome a, which bonds directly to oxygen and thus is very easily poisoned by cyanide. Here, the electron transfer takes place as the iron remains in low spin but changes between the +2 and +3 oxidation states. Since the reduction potential of each step is slightly greater than the previous one, the energy is released step-by-step and can thus be stored in adenosine triphosphate. Cytochrome a is slightly distinct, as it occurs at the mitochondrial membrane, binds directly to oxygen, and transports protons as well as electrons, as follows:
4 Cytc2+ + O2 + 8 H+ (inside) → 4 Cytc3+ + 2 H2O + 4 H+ (outside)
Although the heme proteins are the most important class of iron-containing proteins, the iron–sulfur proteins are also very important, being involved in electron transfer, which is possible since iron can exist stably in either the +2 or +3 oxidation states. These have one, two, four, or eight iron atoms that are each approximately tetrahedrally coordinated to four sulfur atoms; because of this tetrahedral coordination, they always have high-spin iron. The simplest of such compounds is rubredoxin, which has only one iron atom coordinated to four sulfur atoms from cysteine residues in the surrounding peptide chains. Another important class of iron–sulfur proteins is the ferredoxins, which have multiple iron atoms. Transferrin does not belong to either of these classes.
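The cooperative oxygen binding described above is often summarized with the Hill equation. The sketch below compares hemoglobin and myoglobin saturation using typical textbook values (Hill coefficient about 2.8 and P50 about 26 mmHg for hemoglobin; n = 1 and P50 about 2.8 mmHg for myoglobin); these constants are illustrative assumptions, not figures taken from this article.

```python
def hill_saturation(p_o2: float, p50: float, n: float) -> float:
    """Fractional O2 saturation from the Hill equation: S = p^n / (p50^n + p^n)."""
    return p_o2**n / (p50**n + p_o2**n)

# Assumed textbook parameters: cooperative hemoglobin vs. non-cooperative myoglobin
for p in (5, 26, 40, 100):   # partial pressures of O2 in mmHg (muscle -> lung)
    hb = hill_saturation(p, p50=26.0, n=2.8)
    mb = hill_saturation(p, p50=2.8, n=1.0)
    print(f"pO2={p:3d} mmHg  hemoglobin {hb:.2f}  myoglobin {mb:.2f}")
```

At low partial pressures the myoglobin curve stays near saturation while the hemoglobin curve drops steeply, which is the hand-off behaviour the text describes.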
The ability of sea mussels to maintain their grip on rocks in the ocean is facilitated by their use of organometallic iron-based bonds in their protein-rich cuticles. Based on synthetic replicas, the presence of iron in these structures increased elastic modulus 770 times, tensile strength 58 times, and toughness 92 times. The amount of stress required to permanently damage them increased 76 times.
Nutrition
Diet
Iron is pervasive, but particularly rich sources of dietary iron include red meat, oysters, beans, poultry, fish, leaf vegetables, watercress, tofu, and blackstrap molasses. Bread and breakfast cereals are sometimes specifically fortified with iron. Iron provided by dietary supplements is often found as iron(II) fumarate, although iron(II) sulfate is cheaper and is absorbed equally well. Elemental iron, or reduced iron, despite being absorbed at only one-third to two-thirds the efficiency (relative to iron sulfate), is often added to foods such as breakfast cereals or enriched wheat flour. Iron is most available to the body when chelated to amino acids and is also available for use as a common iron supplement. Glycine, the least expensive amino acid, is most often used to produce iron glycinate supplements.
Dietary recommendations
The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for iron in 2001. The current EAR for iron for women ages 14–18 is 7.9 mg/day, 8.1 mg/day for ages 19–50 and 5.0 mg/day thereafter (postmenopause). For men, the EAR is 6.0 mg/day for ages 19 and up. The RDA is 15.0 mg/day for women ages 15–18, 18.0 mg/day for ages 19–50 and 8.0 mg/day thereafter; for men, it is 8.0 mg/day for ages 19 and up. RDAs are higher than EARs so as to identify amounts that will cover people with higher-than-average requirements. The RDA for pregnancy is 27 mg/day and, for lactation, 9 mg/day. For children ages 1–3 years it is 7 mg/day, 10 mg/day for ages 4–8 and 8 mg/day for ages 9–13. As for safety, the IOM also sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of iron, the UL is set at 45 mg/day. Collectively the EARs, RDAs and ULs are referred to as Dietary Reference Intakes. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI (Adequate Intake) and UL are defined the same as in the United States. For women the PRI is 13 mg/day for ages 15–17 years, 16 mg/day for women ages 18 and up who are premenopausal, and 11 mg/day postmenopausal. For pregnancy and lactation, it is 16 mg/day. For men the PRI is 11 mg/day for ages 15 and older. For children ages 1 to 14, the PRI increases from 7 to 11 mg/day. The PRIs are higher than the U.S. RDAs, with the exception of pregnancy. The EFSA reviewed the same safety question but did not establish a UL. Infants may require iron supplements if they are bottle-fed cow's milk. Frequent blood donors are at risk of low iron levels and are often advised to supplement their iron intake. For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For iron labeling purposes, 100% of the Daily Value was set at 18 mg and remains 18 mg. A table of the old and new adult daily values is provided at Reference Daily Intake.
Deficiency
Iron deficiency is the most common nutritional deficiency in the world.
When loss of iron is not adequately compensated by adequate dietary iron intake, a state of latent iron deficiency occurs, which over time leads to iron-deficiency anemia if left untreated, which is characterised by an insufficient number of red blood cells and an insufficient amount of hemoglobin. Children, pre-menopausal women (women of child-bearing age), and people with poor diet are most susceptible to the disease. Most cases of iron-deficiency anemia are mild, but if not treated can cause problems like fast or irregular heartbeat, complications during pregnancy, and delayed growth in infants and children. The brain is resistant to acute iron deficiency due to the slow transport of iron through the blood brain barrier. Acute fluctuations in iron status (marked by serum ferritin levels) do not reflect brain iron status, but prolonged nutritional iron deficiency is suspected to reduce brain iron concentrations over time. In the brain, iron plays a role in oxygen transport, myelin synthesis, mitochondrial respiration, and as a cofactor for neurotransmitter synthesis and metabolism. Animal models of nutritional iron deficiency report biomolecular changes resembling those seen in Parkinson's and Huntington's disease. However, age-related accumulation of iron in the brain has also been linked to the development of Parkinson's. Excess Iron uptake is tightly regulated by the human body, which has no regulated physiological means of excreting iron. Only small amounts of iron are lost daily due to mucosal and skin epithelial cell sloughing, so control of iron levels is primarily accomplished by regulating uptake. Regulation of iron uptake is impaired in some people as a result of a genetic defect that maps to the HLA-H gene region on chromosome 6 and leads to abnormally low levels of hepcidin, a key regulator of the entry of iron into the circulatory system in mammals. In these people, excessive iron intake can result in iron overload disorders, known medically as hemochromatosis. Many people have an undiagnosed genetic susceptibility to iron overload, and are not aware of a family history of the problem. For this reason, people should not take iron supplements unless they suffer from iron deficiency and have consulted a doctor. Hemochromatosis is estimated to be the cause of 0.3–0.8% of all metabolic diseases of Caucasians. Overdoses of ingested iron can cause excessive levels of free iron in the blood. High blood levels of free ferrous iron react with peroxides to produce highly reactive free radicals that can damage DNA, proteins, lipids, and other cellular components. Iron toxicity occurs when the cell contains free iron, which generally occurs when iron levels exceed the availability of transferrin to bind the iron. Damage to the cells of the gastrointestinal tract can also prevent them from regulating iron absorption, leading to further increases in blood levels. Iron typically damages cells in the heart, liver and elsewhere, causing adverse effects that include coma, metabolic acidosis, shock, liver failure, coagulopathy, long-term organ damage, and even death. Humans experience iron toxicity when the iron exceeds 20 milligrams for every kilogram of body mass; 60 milligrams per kilogram is considered a lethal dose. Overconsumption of iron, often the result of children eating large quantities of ferrous sulfate tablets intended for adult consumption, is one of the most common toxicological causes of death in children under six. 
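The labeling and toxicity figures above are both simple per-quantity arithmetic. A minimal sketch using only numbers quoted in this article (the 18 mg Daily Value, and the 20 mg/kg and 60 mg/kg thresholds); it is an illustration, not dosing or medical guidance, and the example inputs are hypothetical.

```python
IRON_DAILY_VALUE_MG = 18.0   # 100% DV for iron on U.S. labels, per the text

def percent_dv(iron_mg_per_serving: float) -> float:
    """Percent of the iron Daily Value supplied by one serving."""
    return 100.0 * iron_mg_per_serving / IRON_DAILY_VALUE_MG

def acute_dose_category(elemental_iron_mg: float, body_mass_kg: float) -> str:
    """Classify an acute ingested dose against the 20 and 60 mg/kg figures quoted above."""
    dose = elemental_iron_mg / body_mass_kg   # mg of elemental iron per kg of body mass
    if dose < 20:
        return f"{dose:.0f} mg/kg: below the quoted toxicity threshold"
    if dose < 60:
        return f"{dose:.0f} mg/kg: potentially toxic"
    return f"{dose:.0f} mg/kg: at or above the quoted lethal dose"

print(f"{percent_dv(8.1):.0f}% DV")                 # hypothetical fortified-cereal serving
print(acute_dose_category(650, body_mass_kg=15))    # hypothetical accidental ingestion by a child
```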
The Dietary Reference Intake (DRI) sets the Tolerable Upper Intake Level (UL) for adults at 45 mg/day. For children under fourteen years old the UL is 40 mg/day. The medical management of iron toxicity is complicated, and can include use of a specific chelating agent called deferoxamine to bind and expel excess iron from the body.
ADHD
Some research has suggested that low thalamic iron levels may play a role in the pathophysiology of ADHD. Some researchers have found that iron supplementation can be effective, especially in the inattentive subtype of the disorder. Some researchers in the 2000s suggested a link between low levels of iron in the blood and ADHD; a 2012 study found no such correlation.
Cancer
The role of iron in cancer defense can be described as a "double-edged sword" because of its pervasive presence in non-pathological processes. People having chemotherapy may develop iron deficiency and anemia, for which intravenous iron therapy is used to restore iron levels. Iron overload, which may occur from high consumption of red meat, may initiate tumor growth and increase susceptibility to cancer onset, particularly for colorectal cancer.
Marine systems
Iron plays an essential role in marine systems and can act as a limiting nutrient for planktonic activity. Because of this, a large enough decrease in iron may lead to a decrease in growth rates in phytoplanktonic organisms such as diatoms. Iron can also be oxidized by marine microbes under conditions that are high in iron and low in oxygen. Iron can enter marine systems through adjoining rivers and directly from the atmosphere. Once iron enters the ocean, it can be distributed throughout the water column through ocean mixing and through recycling on the cellular level. In the Arctic, sea ice plays a major role in the storage and distribution of iron in the ocean, depleting oceanic iron as it freezes in the winter and releasing it back into the water when thawing occurs in the summer. The iron cycle shifts iron between aqueous and particulate forms, altering the availability of iron to primary producers. Increased light and warmth increase the amount of iron that is in forms usable by primary producers.
See also
Economically important iron deposits include:
Carajás Mine in the state of Pará, Brazil, thought to be the largest iron deposit in the world
El Mutún in Bolivia, where 10% of the world's accessible iron ore is located
Hamersley Basin, the largest iron ore deposit in Australia
Kiirunavaara in Sweden, where one of the world's largest deposits of iron ore is located
The Mesabi Iron Range, the chief iron ore mining district in the United States
Related topics: Iron and steel industry, Iron cycle, Iron nanoparticle, Iron–platinum nanoparticle, Iron fertilization (proposed fertilization of oceans to stimulate phytoplankton growth), Iron-oxidizing bacteria, List of countries by iron production, Pelletising (the process of creating iron ore pellets), Rustproof iron, Steel.
https://en.wikipedia.org/wiki/IEEE%20802.15
IEEE 802.15 is a working group of the Institute of Electrical and Electronics Engineers (IEEE) IEEE 802 standards committee which specifies Wireless Specialty Networks (WSN) standards. The working group was formerly known as the Working Group for Wireless Personal Area Networks. The number of Task Groups in IEEE 802.15 varies based on the number of active projects. The current list of active projects can be found on the IEEE 802.15 website.
IEEE 802.15.1: WPAN / Bluetooth
Task group one is based on Bluetooth technology. It defines physical layer (PHY) and medium access control (MAC) specifications for wireless connectivity with fixed, portable and moving devices within or entering personal operating space. Standards were issued in 2002 and 2005.
IEEE 802.15.2: Coexistence
Task group two addresses the coexistence of wireless personal area networks (WPAN) with other wireless devices operating in unlicensed frequency bands such as wireless local area networks (WLAN). The IEEE 802.15.2-2003 standard was published in 2003 and task group two went into "hibernation".
IEEE 802.15.3: High Rate WPAN
IEEE 802.15.3-2003
IEEE 802.15.3-2003 is a MAC and PHY standard for high-rate (11 to 55 Mbit/s) WPANs. The standard can be downloaded via the IEEE Get program, which is funded by IEEE 802 volunteers.
IEEE 802.15.3a
IEEE P802.15.3a was an attempt to provide a higher speed ultra-wideband PHY enhancement amendment to IEEE 802.15.3 for applications that involve imaging and multimedia. The members of the task group were not able to come to an agreement between the two technology proposals, Multi-band Orthogonal Frequency Division Multiplexing (MB-OFDM) and Direct Sequence UWB (DS-UWB), backed by two different industry alliances, and the project was withdrawn in January 2006. Documents related to the development of IEEE 802.15.3a are archived on the IEEE document server.
IEEE 802.15.3b-2006
The IEEE 802.15.3b-2005 amendment was released on May 5, 2006. It enhanced 802.15.3 to improve implementation and interoperability of the MAC. This amendment includes many optimizations, corrected errors, clarified ambiguities, and added editorial clarifications while preserving backward compatibility. Among other changes, the amendment defined the following new features:
a new MAC layer management entity (MLME) service access point (SAP)
an implied acknowledgment policy that allows polling
logical link control/subnetwork access protocol (LLC/SNAP) headers
multicast address assignment
multiple contention periods in a superframe
a method for relinquishing channel time to another device in the PAN
faster network recovery in the case when the piconet coordinator (PNC) abruptly disconnects
a method for a device to return information about the signal quality of a received packet.
IEEE 802.15.3c-2009
IEEE 802.15.3c-2009 was published on September 11, 2009. The task group TG3c developed a millimeter-wave-based alternative physical layer (PHY) for the existing 802.15.3 Wireless Personal Area Network (WPAN) Standard 802.15.3-2003. The IEEE 802.15.3 Task Group 3c (TG3c) was formed in March 2005. This mmWave WPAN is defined to operate in the 57–66 GHz range. Depending on the geographical region, anywhere from 2 to 9 GHz of bandwidth is available (for example, 57–64 GHz is available as an unlicensed band defined by FCC 47 CFR 15.255 in North America).
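To get a feel for why several gigahertz of spectrum at 60 GHz translates into multi-gigabit rates, the sketch below applies the Shannon capacity bound C = B log2(1 + SNR) to a single 2.16 GHz-wide channel, the channelization commonly used at 60 GHz. This is a generic information-theoretic illustration with assumed signal-to-noise ratios, not a rate defined by the 802.15.3c standard.

```python
import math

def shannon_capacity_gbps(bandwidth_hz: float, snr_db: float) -> float:
    """Upper bound on error-free data rate for an AWGN channel, in Gbit/s."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

# One 2.16 GHz millimeter-wave channel at a few assumed signal-to-noise ratios
for snr_db in (5, 10, 15):
    print(f"SNR {snr_db:2d} dB -> up to {shannon_capacity_gbps(2.16e9, snr_db):.1f} Gbit/s")
```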
The millimeter-wave WPAN allows very high data rates over short range (about 10 m) for applications including high-speed internet access, streaming content download (video on demand, HDTV, home theater, etc.), real-time streaming, and a wireless data bus for cable replacement. A total of three PHY modes were defined in the standard:
Single carrier (SC) mode (up to 5.3 Gbit/s)
High speed interface (HSI) mode (single carrier, up to 5 Gbit/s)
Audio/visual (AV) mode (OFDM, up to 3.8 Gbit/s).
IEEE 802.15.3d-2017
IEEE Std 802.15.3d-2017 defines an alternative physical layer (PHY) at the lower THz frequency range between 252 GHz and 325 GHz for switched point-to-point links. Two PHY modes are defined that enable data rates of up to 100 Gbit/s using eight different bandwidths between 2.16 GHz and 69.12 GHz.
IEEE 802.15.3e-2017
IEEE Std 802.15.3e-2017 defines an alternative physical layer (PHY) and a modified medium access control (MAC) layer. Two PHY modes have been defined that enable data rates up to 100 Gbit/s using the 60 GHz band. MIMO and aggregation methods have been defined to increase the maximum achievable communication speeds. Stack acknowledgment has been defined to improve the medium access control (MAC) efficiency when used in a point-to-point (P2P) topology between two devices.
IEEE 802.15.3f-2017
IEEE Std 802.15.3f-2017 extends the RF channelization of the millimeter wave PHYs to allow for use of the spectrum up to 71 GHz. 802.15.3f was initiated because several regulatory domains extended the license-exempt 60 GHz bands up to 71 GHz.
IEEE 802.15.4: Low Rate WPAN
IEEE 802.15.4-2003 (Low Rate WPAN) deals with low data rates but very long battery life (months or even years) and very low complexity. The standard defines both the physical (Layer 1) and data-link (Layer 2) layers of the OSI model. The first edition of the 802.15.4 standard was released in May 2003. Several standardized and proprietary network (or mesh) layer protocols run over 802.15.4-based networks, including IEEE 802.15.5, Zigbee, Thread, 6LoWPAN, WirelessHART, and ISA100.11a.
WPAN Low Rate Alternative PHY (4a)
IEEE 802.15.4a (formally called IEEE 802.15.4a-2007) is an amendment to IEEE 802.15.4 specifying additional physical layers (PHYs) to the original standard. The principal interest was in providing higher precision ranging and localization capability (1 meter accuracy and better), higher aggregate throughput, adding scalability to data rates, longer range, and lower power consumption and cost. The selected baselines are two optional PHYs consisting of a UWB Pulse Radio (operating in unlicensed UWB spectrum) and a Chirp Spread Spectrum (operating in unlicensed 2.4 GHz spectrum). The Pulsed UWB Radio is based on Continuous Pulsed UWB technology (see C-UWB) and will be able to deliver communications and high precision ranging.
Revision and Enhancement (4b)
IEEE 802.15.4b was approved in June 2006 and was published in September 2006 as IEEE 802.15.4-2006. The IEEE 802.15 task group 4b was chartered to create a project for specific enhancements and clarifications to the IEEE 802.15.4-2003 standard, such as resolving ambiguities, reducing unnecessary complexity, increasing flexibility in security key usage, considerations for newly available frequency allocations, and others.
PHY Amendment for China (4c)
IEEE 802.15.4c was approved in 2008 and was published in January 2009.
This defines a PHY amendment that adds new RF spectrum specifications to address the Chinese regulatory changes which have opened the 314-316 MHz, 430-434 MHz, and 779-787 MHz bands for Wireless PAN use within China. PHY and MAC Amendment for Japan (4d) The IEEE 802.15 Task Group 4d was chartered to define an amendment to the 802.15.4-2006 standard. The amendment defines a new PHY and such changes to the MAC as are necessary to support a new frequency allocation (950 MHz -956 MHz) in Japan while coexisting with passive tag systems in the band. MAC Amendment for Industrial Applications (4e) The IEEE 802.15 Task Group 4e is chartered to define a MAC amendment to the existing standard 802.15.4-2006. The intent of this amendment is to enhance and add functionality to the 802.15.4-2006 MAC to a) better support the industrial markets and b) permit compatibility with modifications being proposed within the Chinese WPAN. Specific enhancements were made to add channel hopping and a variable time slot option compatible with ISA100.11a. These changes were approved in 2011. PHY and MAC Amendment for Active RFID (4f) The IEEE 802.15.4f Active RFID System Task Group is chartered to define new wireless Physical (PHY) layer(s) and enhancements to the 802.15.4-2006 standard MAC layer which are required to support new PHY(s) for active RFID system bi-directional and location determination applications. PHY Amendment for Smart Utility Networks (4g) IEEE 802.15.4g Smart Utility Networks (SUN) Task Group is chartered to create a PHY amendment to 802.15.4 to provide a standard that facilitates very large-scale process control applications such as the utility smart grid network capable of supporting large, geographically diverse networks with minimal infrastructure, with potentially millions of fixed endpoints. In 2012 they released the 802.15.4g radio standard. The Telecommunications Industry Association TR-51 committee develops standards for similar applications. Enhanced Ultra Wideband (UWB) Physical Layers (PHYs) and Associated Ranging Techniques (4z) Approved in 2020, amendment to the UWB PHYs (e.g. with coding options) to increase accuracy and exchange ranging related information between the participating devices. IEEE 802.15.5: Mesh Networking IEEE 802.15.5 provides the architectural framework enabling WPAN devices to promote interoperable, stable, and scalable wireless mesh networking. This standard is composed of two parts: low-rate WPAN mesh and high-rate WPAN mesh networks. The low-rate mesh is built on IEEE 802.15.4-2006 MAC, while the high rate mesh utilizes IEEE 802.15.3/3b MAC. The common features of both meshes include network initialization, addressing, and multi-hop unicasting. In addition, the low-rate mesh supports multicasting, reliable broadcasting, portability support, trace route and energy saving function, and the high-rate mesh supports multihop time-guaranteed service. Mesh networking for IEEE 802.15.1 networks is beyond the scope of IEEE 802.15.5, and is instead carried out within the Bluetooth mesh working group. IEEE 802.15.6: Body Area Networks In December 2011, the IEEE 802.15.6 task group approved a draft of a standard for Body Area Network (BAN) technologies. The draft was approved on 22 July 2011 by Letter Ballot to start the Sponsor Ballot process. 
Task Group 6 was formed in November 2007 to focus on a low-power and short-range wireless standard to be optimized for devices and operation on, in, or around the human body (but not limited to humans) to serve a variety of applications including medical, consumer electronics, and personal entertainment.
IEEE 802.15.7: Visible Light Communication
The inaugural meeting for Task Group 7 was held during January 2009, where it was chartered to write standards for free-space optical communication using visible light. The 802.15.7-2011 standard was published in September 2011. In 2015, a new task group was launched to revise the 802.15.7 standard, with several new PHY layers and MAC routines to support optical camera communications (OCC) and light fidelity (LiFi). As the new draft became too large, in March 2017 the 802.15 Working Group decided to continue 802.15.7 with OCC only, which is broadcast only, and to create a new task group, 802.15.13, to work on a new standard for LiFi, which needed a significantly revised MAC layer besides new PHYs. The revision 802.15.7-2018 was published in April 2019. In September 2020, a new PAR was approved, and a new task group started to work on a first amendment, P802.15.7a, aiming at increased data rate and longer range for OCC.
IEEE P802.15.8: Peer Aware Communications
IEEE P802.15.8 received IEEE Standards Board approval on 29 March 2012 to form a Task Group to develop a standard for Peer Aware Communications (PAC) optimized for peer-to-peer and infrastructure-less communications with fully distributed coordination operating in bands below 11 GHz. The proposed standard targets data rates greater than 100 kbit/s with scalable data rates up to 10 Mbit/s. Features of the proposed standard include:
discovery of peer information without association
discovery of the number of devices in the network
group communications with simultaneous membership in multiple groups (typically up to 10)
relative positioning
multi-hop relay
security
The draft standard is under development; more information can be found on the IEEE 802.15 Task Group 8 web page.
IEEE P802.15.9: Key Management Protocol
IEEE P802.15.9 received IEEE Standards Board approval on 7 December 2011 to form a Task Group to develop a recommended practice for the transport of Key Management Protocol (KMP) datagrams. The recommended practice will define a message framework based on Information Elements as a transport method for key management protocol (KMP) datagrams and guidelines for the use of some existing KMPs with IEEE Std 802.15.4. The recommended practice will not create a new KMP. While IEEE Std 802.15.4 has always supported datagram security, it has not provided a mechanism for establishing the keys used by this feature. Lack of key management support in IEEE Std 802.15.4 can result in weak keys, which is a common avenue for attacking the security system. Adding KMP support is critical to a proper security framework. Some of the existing KMPs that it may address are IETF's PANA, HIP, IKEv2, IEEE Std 802.1X, and the 4-Way Handshake. The draft recommended practice is under development; more information can be found on the IEEE 802.15 web page.
IEEE P802.15.10: Layer 2 Routing
IEEE P802.15.10 received IEEE Standards Board approval on 23 August 2013 to form a Task Group to develop a recommended practice for routing packets in dynamically changing 802.15.4 wireless networks (changes on the order of a minute time frame), with minimal impact to route handling.
The goal is to extend the coverage area as the number of nodes increases. The route-related capabilities that the recommended practice will provide include the following:
route establishment
dynamic route reconfiguration
discovery and addition of new nodes
breaking of established routes
loss and recurrence of routes
real-time gathering of link status
allowing for single-hop appearance at the networking layer (not breaking standard L3 mechanisms)
support for broadcast
support for multicast
effective frame forwarding
The draft recommended practice is under development; more information can be found on the IEEE 802.15.10 web page.
IEEE 802.15.13: Multi-Gigabit/s Optical Wireless Communications
The first meeting of Task Group 13 was held during March 2017, aiming at a new standard on light fidelity (LiFi), i.e. mobile communications using light. The aim is to address industrial applications, i.e. ultra-reliable, low-latency connectivity with negligible jitter for next-generation IoT. Compared to 802.15.7, the group decided to rewrite the standard entirely, based on existing and new contributions, to meet those targets. The group first worked on a low-power pulsed modulation PHY (PM-PHY) using on-off keying (OOK) with frequency-domain equalization (FDE) and also a high-bandwidth PHY (HB-PHY) based on orthogonal frequency-division multiplexing (OFDM) adopted from ITU-T G.9991. The group also decided to implement mobility by considering access points in the infrastructure and mobile users in the service area as inputs and outputs of a distributed multiple-input multiple-output (D-MIMO) link. 802.15.13 supports D-MIMO natively with a minimalistic design, suitable for specialty applications. It is implementable on low-cost FPGAs and off-the-shelf computing hardware. The Working Group letter ballot and the IEEE SA ballot were started in November 2019 and November 2020, respectively. Publication is expected in mid-2022.
Wireless Next Generation Standing Committee
The IEEE P802.15 Wireless Next Generation Standing Committee (SCwng) is chartered to facilitate and stimulate presentations and discussions on new wireless-related technologies that may be the subject of new 802.15 standardization projects, or to address the whole 802.15 working group with issues or concerns about techniques or technologies.
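As a toy illustration of the on-off keying idea used by the 802.15.13 PM-PHY described above, the sketch below maps bits onto light-on/light-off samples with a few samples per symbol. The sample rate and symbol length are arbitrary assumptions; the real PM-PHY adds line coding and frequency-domain equalization on top of this basic idea.

```python
def ook_modulate(bits: str, samples_per_symbol: int = 4) -> list[int]:
    """On-off keying: emit 'light on' (1) samples for a 1 bit, 'light off' (0) for a 0 bit."""
    waveform = []
    for bit in bits:
        waveform.extend([1 if bit == "1" else 0] * samples_per_symbol)
    return waveform

def ook_demodulate(waveform: list[int], samples_per_symbol: int = 4) -> str:
    """Recover bits by averaging each symbol period and thresholding at 1/2."""
    bits = []
    for i in range(0, len(waveform), samples_per_symbol):
        symbol = waveform[i:i + samples_per_symbol]
        bits.append("1" if sum(symbol) / len(symbol) > 0.5 else "0")
    return "".join(bits)

tx = "101100"
assert ook_demodulate(ook_modulate(tx)) == tx
```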
https://en.wikipedia.org/wiki/IEEE%20802.11
IEEE 802.11 is part of the IEEE 802 set of local area network (LAN) technical standards, and specifies the set of medium access control (MAC) and physical layer (PHY) protocols for implementing wireless local area network (WLAN) computer communication. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand and are the world's most widely used wireless computer networking standards. IEEE 802.11 is used in most home and office networks to allow laptops, printers, smartphones, and other devices to communicate with each other and access the Internet without connecting wires. IEEE 802.11 is also a basis for vehicle-based communication networks with IEEE 802.11p. The standards are created and maintained by the Institute of Electrical and Electronics Engineers (IEEE) LAN/MAN Standards Committee (IEEE 802). The base version of the standard was released in 1997 and has had subsequent amendments. While each amendment is officially revoked when it is incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote the capabilities of their products. As a result, in the marketplace, each revision tends to become its own standard. 802.11x is a shorthand for "any version of 802.11", to avoid confusion with "802.11" used specifically for the original 1997 version. IEEE 802.11 uses various frequencies including, but not limited to, 2.4 GHz, 5 GHz, 6 GHz, and 60 GHz frequency bands. Although IEEE 802.11 specifications list channels that might be used, the allowed radio frequency spectrum availability varies significantly by regulatory domain. The protocols are typically used in conjunction with IEEE 802.2, and are designed to interwork seamlessly with Ethernet, and are very often used to carry Internet Protocol traffic. General description The 802.11 family consists of a series of half-duplex over-the-air modulation techniques that use the same basic protocol. The 802.11 protocol family employs carrier-sense multiple access with collision avoidance (CSMA/CA) whereby equipment listens to a channel for other users (including non 802.11 users) before transmitting each frame (some use the term "packet", which may be ambiguous: "frame" is more technically correct). 802.11-1997 was the first wireless networking standard in the family, but 802.11b was the first widely accepted one, followed by 802.11a, 802.11g, 802.11n, 802.11ac, and 802.11ax. Other standards in the family (c–f, h, j) are service amendments that are used to extend the current scope of the existing standard, which amendments may also include corrections to a previous specification. 802.11b and 802.11g use the 2.4-GHz ISM band, operating in the United States under Part 15 of the U.S. Federal Communications Commission Rules and Regulations. 802.11n can also use that 2.4-GHz band. Because of this choice of frequency band, 802.11b/g/n equipment may occasionally suffer interference in the 2.4-GHz band from microwave ovens, cordless telephones, and Bluetooth devices. 802.11b and 802.11g control their interference and susceptibility to interference by using direct-sequence spread spectrum (DSSS) and orthogonal frequency-division multiplexing (OFDM) signaling methods, respectively. 802.11a uses the 5 GHz U-NII band which, for much of the world, offers at least 23 non-overlapping, 20-MHz-wide channels. 
This is an advantage over the 2.4-GHz, ISM-frequency band, which offers only three non-overlapping, 20-MHz-wide channels where other adjacent channels overlap (see: list of WLAN channels). Better or worse performance with higher or lower frequencies (channels) may be realized, depending on the environment. 802.11n and 802.11ax can use either the 2.4 GHz or 5 GHz band; 802.11ac uses only the 5 GHz band. The segment of the radio frequency spectrum used by 802.11 varies between countries. In the US, 802.11a and 802.11g devices may be operated without a license, as allowed in Part 15 of the FCC Rules and Regulations. Frequencies used by channels one through six of 802.11b and 802.11g fall within the 2.4 GHz amateur radio band. Licensed amateur radio operators may operate 802.11b/g devices under Part 97 of the FCC Rules and Regulations, allowing increased power output but not commercial content or encryption. Generations In 2018, the Wi-Fi Alliance began using a consumer-friendly generation numbering scheme for the publicly used 802.11 protocols. Wi-Fi generations 1–8 use the 802.11b, 802.11a, 802.11g, 802.11n, 802.11ac, 802.11ax, 802.11be and 802.11bn protocols, in that order. History 802.11 technology has its origins in a 1985 ruling by the U.S. Federal Communications Commission that released the ISM band for unlicensed use. In 1991 NCR Corporation/AT&T (now Nokia Labs and LSI Corporation) invented a precursor to 802.11 in Nieuwegein, the Netherlands. The inventors initially intended to use the technology for cashier systems. The first wireless products were brought to the market under the name WaveLAN with raw data rates of 1 Mbit/s and 2 Mbit/s. Vic Hayes, who held the chair of IEEE 802.11 for 10 years, and has been called the "father of Wi-Fi", was involved in designing the initial 802.11b and 802.11a standards within the IEEE. He, along with Bell Labs Engineer Bruce Tuch, approached IEEE to create a standard. In 1999, the Wi-Fi Alliance was formed as a trade association to hold the Wi-Fi trademark under which most products are sold. The major commercial breakthrough came with Apple's adoption of Wi-Fi for their iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort. One year later IBM followed with its ThinkPad 1300 series in 2000. Protocol 802.11-1997 (802.11 legacy) The original version of the standard IEEE 802.11 was released in 1997 and clarified in 1999, but is now obsolete. It specified two net bit rates of 1 or 2 megabits per second (Mbit/s), plus forward error correction code. It specified three alternative physical layer technologies: diffuse infrared operating at 1 Mbit/s; frequency-hopping spread spectrum operating at 1 Mbit/s or 2 Mbit/s; and direct-sequence spread spectrum operating at 1 Mbit/s or 2 Mbit/s. The latter two radio technologies used microwave transmission over the Industrial Scientific Medical frequency band at 2.4 GHz. Some earlier WLAN technologies used lower frequencies, such as the U.S. 900 MHz ISM band. Legacy 802.11 with direct-sequence spread spectrum was rapidly supplanted and popularized by 802.11b. 802.11a (OFDM waveform) 802.11a, published in 1999, uses the same data link layer protocol and frame format as the original standard, but an OFDM based air interface (physical layer) was added. It operates in the 5 GHz band with a maximum net data rate of 54 Mbit/s, plus error correction code, which yields realistic net achievable throughput in the mid-20 Mbit/s. 
It has seen widespread worldwide implementation, particularly within the corporate workspace. Since the 2.4 GHz band is heavily used to the point of being crowded, using the relatively unused 5 GHz band gives 802.11a a significant advantage. However, this high carrier frequency also brings a disadvantage: the effective overall range of 802.11a is less than that of 802.11b/g. In theory, 802.11a signals are absorbed more readily by walls and other solid objects in their path due to their smaller wavelength, and, as a result, cannot penetrate as far as those of 802.11b. In practice, 802.11b typically has a higher range at low speeds (802.11b will reduce speed to 5.5 Mbit/s or even 1 Mbit/s at low signal strengths). 802.11a also suffers from interference, but locally there may be fewer signals to interfere with, resulting in less interference and better throughput. 802.11b The 802.11b standard has a maximum raw data rate of 11 Mbit/s (Megabits per second) and uses the same media access method defined in the original standard. 802.11b products appeared on the market in early 2000, since 802.11b is a direct extension of the modulation technique defined in the original standard. The dramatic increase in throughput of 802.11b (compared to the original standard) along with simultaneous substantial price reductions led to the rapid acceptance of 802.11b as the definitive wireless LAN technology. Devices using 802.11b experience interference from other products operating in the 2.4 GHz band. Devices operating in the 2.4 GHz range include microwave ovens, Bluetooth devices, baby monitors, cordless telephones, and some amateur radio equipment. As unlicensed intentional radiators in this ISM band, they must not interfere with and must tolerate interference from primary or secondary allocations (users) of this band, such as amateur radio. 802.11g In June 2003, a third modulation standard was ratified: 802.11g. This works in the 2.4 GHz band (like 802.11b), but uses the same OFDM based transmission scheme as 802.11a. It operates at a maximum physical layer bit rate of 54 Mbit/s exclusive of forward error correction codes, or about 22 Mbit/s average throughput. 802.11g hardware is fully backward compatible with 802.11b hardware, and therefore is encumbered with legacy issues that reduce throughput by ~21% when compared to 802.11a. The then-proposed 802.11g standard was rapidly adopted in the market starting in January 2003, well before ratification, due to the desire for higher data rates as well as reductions in manufacturing costs. By summer 2003, most dual-band 802.11a/b products became dual-band/tri-mode, supporting a and b/g in a single mobile adapter card or access point. Details of making b and g work well together occupied much of the lingering technical process; in an 802.11g network, however, the activity of an 802.11b participant will reduce the data rate of the overall 802.11g network. Like 802.11b, 802.11g devices also suffer interference from other products operating in the 2.4 GHz band, for example, wireless keyboards. 802.11-2007 In 2003, task group TGma was authorized to "roll up" many of the amendments to the 1999 version of the 802.11 standard. REVma or 802.11ma, as it was called, created a single document that merged 8 amendments (802.11a, b, d, e, g, h, i, j) with the base standard. Upon approval on 8 March 2007, 802.11REVma was renamed to the then-current base standard IEEE 802.11-2007. 
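The rate fallback behaviour mentioned above (an 802.11b station stepping down to 5.5, 2, or 1 Mbit/s as the signal weakens) can be pictured as a simple threshold rule. The RSSI thresholds below are invented for illustration; real devices use vendor-specific algorithms driven by error rates as well as signal strength.

```python
from typing import Optional

RATE_TABLE = [   # (minimum RSSI in dBm, data rate in Mbit/s); thresholds are assumed values
    (-70, 11.0),
    (-78, 5.5),
    (-84, 2.0),
    (-90, 1.0),
]

def select_rate(rssi_dbm: float) -> Optional[float]:
    """Pick the highest 802.11b-style rate whose (assumed) RSSI threshold is met; None = no link."""
    for threshold, rate in RATE_TABLE:
        if rssi_dbm >= threshold:
            return rate
    return None

for rssi in (-60, -75, -82, -88, -95):
    print(f"RSSI {rssi} dBm -> {select_rate(rssi)} Mbit/s")
```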
802.11n 802.11n is an amendment that improves upon the previous 802.11 standards; its first draft of certification was published in 2006. The 802.11n standard was retroactively labelled as Wi-Fi 4 by the Wi-Fi Alliance. The standard added support for multiple-input multiple-output antennas (MIMO). 802.11n operates on both the 2.4 GHz and the 5 GHz bands. Support for 5 GHz bands is optional. Its net data rate ranges from 54 Mbit/s to 600 Mbit/s. The IEEE has approved the amendment, and it was published in October 2009. Prior to the final ratification, enterprises were already migrating to 802.11n networks based on the Wi-Fi Alliance's certification of products conforming to a 2007 draft of the 802.11n proposal. Early Intel WiFi cards were not compatible with the final standard. Many rival access points and cards also did not support 5 GHz at all. 802.11-2012 In May 2007, task group TGmb was authorized to "roll up" many of the amendments to the 2007 version of the 802.11 standard. REVmb or 802.11mb, as it was called, created a single document that merged ten amendments (802.11k, r, y, n, w, p, z, v, u, s) with the 2007 base standard. In addition much cleanup was done, including a reordering of many of the clauses. Upon publication on 29 March 2012, the new standard was referred to as IEEE 802.11-2012. 802.11ac IEEE 802.11ac-2013 is an amendment to IEEE 802.11, published in December 2013, that builds on 802.11n. The 802.11ac standard was retroactively labelled as Wi-Fi 5 by the Wi-Fi Alliance. Changes compared to 802.11n include wider channels (80 or 160 MHz versus 40 MHz) in the 5 GHz band, more spatial streams (up to eight versus four), higher-order modulation (up to 256-QAM vs. 64-QAM), and the addition of Multi-user MIMO (MU-MIMO). The Wi-Fi Alliance separated the introduction of ac wireless products into two phases ("waves"), named "Wave 1" and "Wave 2". From mid-2013, the alliance started certifying Wave 1 802.11ac products shipped by manufacturers, based on the IEEE 802.11ac Draft 3.0 (the IEEE standard was not finalized until later that year). In 2016 Wi-Fi Alliance introduced the Wave 2 certification, to provide higher bandwidth and capacity than Wave 1 products. Wave 2 products include additional features like MU-MIMO, 160 MHz channel width support, support for more 5 GHz channels, and four spatial streams (with four antennas; compared to three in Wave 1 and 802.11n, and eight in IEEE's 802.11ax specification). 802.11ad IEEE 802.11ad is an amendment that defines a new physical layer for 802.11 networks to operate in the 60 GHz millimeter wave spectrum. This frequency band has significantly different propagation characteristics than the 2.4 GHz and 5 GHz bands where Wi-Fi networks operate. Products implementing the 802.11ad standard are sold under the WiGig brand name, with a certification program developed by the Wi-Fi Alliance. The peak transmission rate of 802.11ad is 7 Gbit/s. IEEE 802.11ad is a protocol used for very high data rates (about 8 Gbit/s) and for short range communication (about 1–10 meters). TP-Link announced the world's first 802.11ad router in January 2016. The WiGig standard as of 2021 has been published after being announced in 2009 and added to the IEEE 802.11 family in December 2012. 802.11af IEEE 802.11af, also referred to as "White-Fi" and "Super Wi-Fi", is an amendment, approved in February 2014, that allows WLAN operation in TV white space spectrum in the VHF and UHF bands between 54 and 790 MHz. 
It uses cognitive radio technology to transmit on unused TV channels, with the standard taking measures to limit interference for primary users, such as analog TV, digital TV, and wireless microphones. Access points and stations determine their position using a satellite positioning system such as GPS, and use the Internet to query a geolocation database (GDB) provided by a regional regulatory agency to discover what frequency channels are available for use at a given time and position. The physical layer uses OFDM and is based on 802.11ac. The propagation path loss as well as the attenuation by materials such as brick and concrete is lower in the UHF and VHF bands than in the 2.4 GHz and 5 GHz bands, which increases the possible range. The frequency channels are 6 to 8 MHz wide, depending on the regulatory domain. Up to four channels may be bonded in either one or two contiguous blocks. MIMO operation is possible with up to four streams used for either space–time block code (STBC) or multi-user (MU) operation. The achievable data rate per spatial stream is 26.7 Mbit/s for 6 and 7 MHz channels, and 35.6 Mbit/s for 8 MHz channels. With four spatial streams and four bonded channels, the maximum data rate is 426.7 Mbit/s for 6 and 7 MHz channels and 568.9 Mbit/s for 8 MHz channels.
802.11-2016
IEEE 802.11-2016, which was known as IEEE 802.11 REVmc, is a revision based on IEEE 802.11-2012, incorporating 5 amendments (11ae, 11aa, 11ad, 11ac, 11af). In addition, existing MAC and PHY functions have been enhanced and obsolete features were removed or marked for removal. Some clauses and annexes have been renumbered.
802.11ah
IEEE 802.11ah, published in 2017, defines a WLAN system operating in sub-1 GHz license-exempt bands. Due to the favorable propagation characteristics of the low-frequency spectra, 802.11ah can provide improved transmission range compared with the conventional 802.11 WLANs operating in the 2.4 GHz and 5 GHz bands. 802.11ah can be used for various purposes including large-scale sensor networks, extended-range hotspots, and outdoor Wi-Fi for cellular WAN carrier traffic offloading, although the available bandwidth is relatively narrow. The protocol aims for power consumption competitive with low-power Bluetooth, at a much wider range.
802.11ai
IEEE 802.11ai is an amendment to the 802.11 standard that added new mechanisms for a faster initial link setup time.
802.11aj
IEEE 802.11aj is a derivative of 802.11ad for use in the 45 GHz unlicensed spectrum available in some regions of the world (specifically China); it also provides additional capabilities for use in the 60 GHz band. It is alternatively known as China Millimeter Wave (CMMW).
802.11aq
IEEE 802.11aq is an amendment to the 802.11 standard that will enable pre-association discovery of services. It extends some of the mechanisms in 802.11u that enabled device discovery so as to also discover the services running on a device, or provided by a network.
802.11-2020
IEEE 802.11-2020, which was known as IEEE 802.11 REVmd, is a revision based on IEEE 802.11-2016 incorporating 5 amendments (11ai, 11ah, 11aj, 11ak, 11aq). In addition, existing MAC and PHY functions have been enhanced and obsolete features were removed or marked for removal. Some clauses and annexes have been added.
802.11ax
IEEE 802.11ax is the successor to 802.11ac, marketed as Wi-Fi 6 (2.4 GHz and 5 GHz) and Wi-Fi 6E (6 GHz) by the Wi-Fi Alliance. It is also known as High Efficiency Wi-Fi, for the overall improvements to clients in dense environments.
For an individual client, the maximum improvement in data rate (PHY speed) against the predecessor (802.11ac) is only 39% (for comparison, this improvement was nearly 500% for the predecessors). Yet, even with this comparatively minor 39% figure, the goal was to provide 4 times the throughput-per-area of 802.11ac (hence High Efficiency). The motivation behind this goal was the deployment of WLAN in dense environments such as corporate offices, shopping malls and dense residential apartments. This is achieved by means of a technique called OFDMA, which is basically multiplexing in the frequency domain (as opposed to spatial multiplexing, as in 802.11ac). This is equivalent to cellular technology applied to Wi-Fi. The IEEE 802.11ax-2021 standard was approved on February 9, 2021. 802.11ay IEEE 802.11ay is a standard that is being developed, also called EDMG: Enhanced Directional MultiGigabit PHY. It is an amendment that defines a new physical layer for 802.11 networks to operate in the 60 GHz millimeter wave spectrum. It will be an extension of the existing 11ad, aimed to extend the throughput, range, and use-cases. The main use-cases include indoor operation and short-range communications due to atmospheric oxygen absorption and inability to penetrate walls. The peak transmission rate of 802.11ay is 40 Gbit/s. The main extensions include: channel bonding (2, 3 and 4 channels), MIMO (up to 4 streams) and higher modulation schemes. The expected range is 300–500 m. 802.11ba IEEE 802.11ba Wake-up Radio (WUR) Operation is an amendment to the IEEE 802.11 standard that enables energy-efficient operation for data reception without increasing latency. The target active power consumption to receive a WUR packet is less than 1 milliwatt; supported data rates are 62.5 kbit/s and 250 kbit/s. The WUR PHY uses MC-OOK (multicarrier OOK) to achieve extremely low power consumption. 802.11bb IEEE 802.11bb is a networking protocol standard in the IEEE 802.11 set of protocols that uses infrared light for communications. 802.11be IEEE 802.11be Extremely High Throughput (EHT) is the potential next amendment to the 802.11 IEEE standard, and will likely be designated as Wi-Fi 7. It will build upon 802.11ax, focusing on WLAN indoor and outdoor operation with stationary and pedestrian speeds in the 2.4 GHz, 5 GHz, and 6 GHz frequency bands. Common misunderstandings about achievable throughput Across all variations of 802.11, maximum achievable throughputs are given either based on measurements under ideal conditions or in the layer-2 data rates. However, this does not apply to typical deployments in which data is being transferred between two endpoints, of which at least one is typically connected to a wired infrastructure and the other endpoint is connected to an infrastructure via a wireless link. This means that, typically, data frames pass over an 802.11 (WLAN) medium and are converted to 802.3 (Ethernet) or vice versa. Due to the difference in the frame (header) lengths of these two media, the application's packet size determines the speed of the data transfer. This means applications that use small packets (e.g., VoIP) create dataflows with high-overhead traffic (i.e., a low goodput). Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e., the data rate) and, of course, the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices. 
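To make the overhead argument above concrete, the following sketch computes what fraction of the bytes sent over the air is actual application payload for a small and a large packet. The per-frame overhead figure used here is an assumed, illustrative round number standing in for 802.11/802.3 headers, the FCS and acknowledgements; it is not taken from the standard, and real values depend on the PHY, security and QoS options in use.

```python
# Illustrative only: fixed per-frame overhead penalises small packets.
ASSUMED_OVERHEAD_BYTES = 90  # hypothetical per-frame cost (headers, FCS, ACK, ...)

def goodput_fraction(payload_bytes: int, overhead_bytes: int = ASSUMED_OVERHEAD_BYTES) -> float:
    """Fraction of transmitted bytes that is useful application payload."""
    return payload_bytes / (payload_bytes + overhead_bytes)

for payload in (160, 1460):  # e.g. a small VoIP packet vs. a near-full Ethernet payload
    print(f"{payload:>5}-byte payload -> {goodput_fraction(payload):.0%} goodput")
```

With these assumed numbers the small packet spends roughly a third of its airtime bytes on overhead, while the large packet loses only a few percent, which is the effect described above for VoIP-like traffic.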
The same references apply to the attached graphs that show measurements of UDP throughput. Each represents an average (UDP) throughput (please note that the error bars are there but barely visible due to the small variation) of 25 measurements. Each is with a specific packet size (small or large) and with a specific data rate (10 kbit/s – 100 Mbit/s). Markers for traffic profiles of common applications are included as well. These figures assume there are no packet errors, which, if occurring, will lower the transmission rate further. Channels and frequencies 802.11b, 802.11g, and 802.11n-2.4 utilize the 2.4 GHz spectrum, one of the ISM bands. 802.11a, 802.11n, and 802.11ac use the more heavily regulated 5 GHz band. These are commonly referred to as the "2.4 GHz and 5 GHz bands" in most sales literature. Each spectrum is sub-divided into channels with a center frequency and bandwidth, analogous to how radio and TV broadcast bands are sub-divided. The 2.4 GHz band is divided into 14 channels spaced 5 MHz apart, beginning with channel 1, which is centered on 2.412 GHz. The latter channels have additional restrictions or are unavailable for use in some regulatory domains. The channel numbering of the 5 GHz spectrum is less intuitive due to the differences in regulations between countries. These are discussed in greater detail on the list of WLAN channels. Channel spacing within the 2.4 GHz band In addition to specifying the channel center frequency, 802.11 also specifies (in Clause 17) a spectral mask defining the permitted power distribution across each channel. The mask requires the signal to be attenuated a minimum of 20 dB from its peak amplitude at ±11 MHz from the center frequency, the point at which a channel is effectively 22 MHz wide. One consequence is that stations can use only every fourth or fifth channel without overlap. Availability of channels is regulated by country, constrained in part by how each country allocates radio spectrum to various services. At one extreme, Japan permits the use of all 14 channels for 802.11b, and channels 1–13 for 802.11g/n-2.4. Other countries such as Spain initially allowed only channels 10 and 11, and France allowed only 10, 11, 12, and 13; however, Europe now allows channels 1 through 13. North America and some Central and South American countries allow only channels 1 through 11. Since the spectral mask defines only power output restrictions up to ±11 MHz from the center frequency to be attenuated by −50 dBr, it is often assumed that the energy of the channel extends no further than these limits. It is more correct to say that the overlapping signal on any channel should be sufficiently attenuated to interfere minimally with a transmitter on any other channel, given the separation between channels. Due to the near–far problem a transmitter can impact (desensitize) a receiver on a "non-overlapping" channel, but only if it is close to the victim receiver (within a meter) or operating above allowed power levels. Conversely, a sufficiently distant transmitter on an overlapping channel can have little to no significant effect. Confusion often arises over the amount of channel separation required between transmitting devices. 802.11b was based on direct-sequence spread spectrum (DSSS) modulation and utilized a channel bandwidth of 22 MHz, resulting in three "non-overlapping" channels (1, 6, and 11). 802.11g was based on OFDM modulation and utilized a channel bandwidth of 20 MHz. This occasionally leads to the belief that four "non-overlapping" channels (1, 5, 9, and 13) exist under 802.11g. 
However, this is not the case as per 17.4.6.3 Channel Numbering of operating channels of the IEEE Std 802.11 (2012), which states, "In a multiple cell network topology, overlapping and/or adjacent cells using different channels can operate simultaneously without interference if the distance between the center frequencies is at least 25 MHz." and section 18.3.9.3 and Figure 18-13. The fact that the channels technically overlap does not by itself mean that overlapping channels should not be used. The amount of inter-channel interference seen on a configuration using channels 1, 5, 9, and 13 (which is permitted in Europe, but not in North America) is barely different from a three-channel configuration, but with an entire extra channel. However, overlap between channels with narrower spacing (e.g. 1, 4, 7, 11 in North America) may cause unacceptable degradation of signal quality and throughput, particularly when users transmit near the boundaries of AP cells. Regulatory domains and legal compliance IEEE uses the phrase regdomain to refer to a legal regulatory region. Different countries define different levels of allowable transmitter power, time that a channel can be occupied, and different available channels. Domain codes are specified for the United States, Canada, ETSI (Europe), Spain, France, Japan, and China. Most Wi-Fi certified devices default to regdomain 0, which means least common denominator settings, i.e., the device will not transmit at a power above the allowable power in any nation, nor will it use frequencies that are not permitted in any nation. The regdomain setting is often made difficult or impossible to change so that the end-users do not conflict with local regulatory agencies such as the United States' Federal Communications Commission. Layer 2 – Datagrams The datagrams are called frames. Current 802.11 standards specify frame types for use in the transmission of data as well as management and control of wireless links. Frames are divided into very specific and standardized sections. Each frame consists of a MAC header, payload, and frame check sequence (FCS). Some frames do not have payloads. The first two bytes of the MAC header form a frame control field specifying the form and function of the frame. This frame control field is subdivided into the following sub-fields: Protocol Version: Two bits representing the protocol version. The currently used protocol version is zero. Other values are reserved for future use. Type: Two bits identifying the type of WLAN frame. Control, Data, and Management are various frame types defined in IEEE 802.11. Subtype: Four bits providing additional discrimination between frames. Type and Subtype are used together to identify the exact frame. ToDS and FromDS: Each is one bit in size. They indicate whether a data frame is headed for a distribution system or is leaving it. Control and management frames set these values to zero. All the data frames will have one of these bits set. ToDS = 0 and FromDS = 0: Communication within a basic service set or an independent basic service set (IBSS) network. ToDS = 0 and FromDS = 1: A frame exiting the distribution system for a station. ToDS = 1 and FromDS = 0: A frame sent by a station and directed to an AP, for delivery via the distribution system. ToDS = 1 and FromDS = 1: The only kind of frame that uses all four MAC addresses in a DATA frame. Address 1: access point address at which the frame exits the distribution system. 
Address 2: access point entrance to the distribution system (AP to which the source station is connected). Address 3: final station address. Address 4: address of the source station. More Fragments: The More Fragments bit is set when a packet is divided into multiple frames for transmission. Every frame except the last frame of a packet will have this bit set. Retry: Sometimes frames require retransmission, and for this, there is a Retry bit that is set to one when a frame is resent. This aids in the elimination of duplicate frames. Power Management: This bit indicates the power management state of the sender after the completion of a frame exchange. Access points are required to manage the connection and will never set the power-saver bit. More Data: The More Data bit is used to buffer frames received in a distributed system. The access point uses this bit to facilitate stations in power-saver mode. It indicates that at least one frame is available and addresses all stations connected. Protected Frame: The Protected Frame bit is set to the value of one if the frame body is encrypted by a protection mechanism such as Wired Equivalent Privacy (WEP), Wi-Fi Protected Access (WPA), or Wi-Fi Protected Access II (WPA2). Order: This bit is set only when the "strict ordering" delivery method is employed. Frames and fragments are not always sent in order as it causes a transmission performance penalty. The next two bytes are reserved for the Duration ID field, indicating how long the field's transmission will take so other devices know when the channel will be available again. This field can take one of three forms: Duration, Contention-Free Period (CFP), and Association ID (AID). An 802.11 frame can have up to four address fields. Each field can carry a MAC address. Address 1 is the receiver, Address 2 is the transmitter, Address 3 is used for filtering purposes by the receiver. Address 4 is only present in data frames transmitted between access points in an Extended Service Set or between intermediate nodes in a mesh network. The remaining fields of the header are: The Sequence Control field is a two-byte section used to identify message order and eliminate duplicate frames. The first 4 bits are used for the fragmentation number, and the last 12 bits are the sequence number. An optional two-byte Quality of Service control field, present in QoS Data frames; it was added with 802.11e. The payload or frame body field is variable in size, from 0 to 2304 bytes plus any overhead from security encapsulation, and contains information from higher layers. The Frame Check Sequence (FCS) is the last four bytes in the standard 802.11 frame. Often referred to as the Cyclic Redundancy Check (CRC), it allows for integrity checks of retrieved frames. As frames are about to be sent, the FCS is calculated and appended. When a station receives a frame, it can calculate the FCS of the frame and compare it to the one received. If they match, it is assumed that the frame was not distorted during transmission. Management frames Management frames are not always authenticated, and allow for the maintenance, or discontinuance, of communication. Some common 802.11 subtypes include: Authentication frame: 802.11 authentication begins with the wireless network interface controller (WNIC) sending an authentication frame to the access point containing its identity. 
When open system authentication is being used, the WNIC sends only a single authentication frame, and the access point responds with an authentication frame of its own indicating acceptance or rejection. When shared key authentication is being used, the WNIC sends an initial authentication request, and the access point responds with an authentication frame containing challenge text. The WNIC then sends an authentication frame containing the encrypted version of the challenge text to the access point. The access point confirms the text was encrypted with the correct key by decrypting it with its own key. The result of this process determines the WNIC's authentication status. Association request frame: Sent from a station, it enables the access point to allocate resources and synchronize. The frame carries information about the WNIC, including supported data rates and the SSID of the network the station wishes to associate with. If the request is accepted, the access point reserves memory and establishes an association ID for the WNIC. Association response frame: Sent from an access point to a station containing the acceptance or rejection of an association request. If it is an acceptance, the frame will contain information such as an association ID and supported data rates. Beacon frame: Sent periodically from an access point to announce its presence and provide the SSID and other parameters for WNICs within range. Deauthentication frame: Sent from a station wishing to terminate connection from another station. Disassociation frame: Sent from a station wishing to terminate the connection. It is an elegant way to allow the access point to relinquish memory allocation and remove the WNIC from the association table. Probe request frame: Sent from a station when it requires information from another station. Probe response frame: Sent from an access point containing capability information, supported data rates, etc., after receiving a probe request frame. Reassociation request frame: A WNIC sends a reassociation request when it drops out of range of the currently associated access point and finds another access point with a stronger signal. The new access point coordinates the forwarding of any information that may still be contained in the buffer of the previous access point. Reassociation response frame: Sent from an access point containing the acceptance or rejection of a WNIC reassociation request frame. The frame includes information required for association such as the association ID and supported data rates. Action frame: Extends the management frame to control a certain action. Some of the action categories are QoS, Block Ack, Public, Radio Measurement, Fast BSS Transition, Mesh Peering Management, etc. These frames are sent by a station when it needs to tell its peer that a certain action should be taken. For example, a station can tell another station to set up a block acknowledgement by sending an ADDBA Request action frame. The other station would then respond with an ADDBA Response action frame. The body of a management frame consists of frame-subtype-dependent fixed fields followed by a sequence of information elements (IEs). The common structure of an IE consists of a one-byte Element ID, a one-byte Length field, and a variable-length, element-specific information field. Control frames Control frames facilitate the exchange of data frames between stations. Some common 802.11 control frames include: Acknowledgement (ACK) frame: After receiving a data frame, the receiving station will send an ACK frame to the sending station if no errors are found. 
If the sending station does not receive an ACK frame within a predetermined period of time, the sending station will resend the frame. Request to Send (RTS) frame: The RTS and CTS frames provide an optional collision reduction scheme for access points with hidden stations. A station sends an RTS frame as the first step in a two-way handshake required before sending data frames. Clear to Send (CTS) frame: A station responds to an RTS frame with a CTS frame. It provides clearance for the requesting station to send a data frame. The CTS provides collision control management by including a time value for which all other stations are to hold off transmission while the requesting station transmits. Data frames Data frames carry packets from web pages, files, etc. within the body. The body begins with an IEEE 802.2 header, with the Destination Service Access Point (DSAP) specifying the protocol, followed by a Subnetwork Access Protocol (SNAP) header if the DSAP is hex AA, with the organizationally unique identifier (OUI) and protocol ID (PID) fields specifying the protocol. If the OUI is all zeroes, the protocol ID field is an EtherType value. Almost all 802.11 data frames use 802.2 and SNAP headers, and most use an OUI of 00:00:00 and an EtherType value. Similar to TCP congestion control on the internet, frame loss is built into the operation of 802.11. To select the correct transmission speed or Modulation and Coding Scheme, a rate control algorithm may test different speeds. The actual packet loss rate of Access points varies widely for different link conditions. There are variations in the loss rate experienced on production Access points, between 10% and 80%, with 30% being a common average. It is important to be aware that the link layer should recover these lost frames. If the sender does not receive an Acknowledgement (ACK) frame, then it will be resent. Standards and amendments Within the IEEE 802.11 Working Group, the following IEEE Standards Association Standard and Amendments exist: IEEE 802.11-1997: The WLAN standard was originally 1 Mbit/s and 2 Mbit/s, 2.4 GHz RF and infrared (IR) standard (1997), all the others listed below are Amendments to this standard, except for Recommended Practices 802.11F and 802.11T. IEEE 802.11a: 54 Mbit/s, 5 GHz standard (1999, shipping products in 2001) IEEE 802.11b: 5.5 Mbit/s and 11 Mbit/s, 2.4 GHz standard (1999) IEEE 802.11c: Bridge operation procedures; included in the IEEE 802.1D standard (2001) IEEE 802.11d: International (country-to-country) roaming extensions (2001) IEEE 802.11e: Enhancements: QoS, including packet bursting (2005) IEEE 802.11F: Inter-Access Point Protocol (2003) Withdrawn February 2006 IEEE 802.11g: 54 Mbit/s, 2.4 GHz standard (backwards compatible with b) (2003) IEEE 802.11h: Spectrum Managed 802.11a (5 GHz) for European compatibility (2004) IEEE 802.11i: Enhanced security (2004) IEEE 802.11j: Extensions for Japan (4.9-5.0 GHz) (2004) IEEE 802.11-2007: A new release of the standard that includes amendments a, b, d, e, g, h, i, and j. 
(July 2007) IEEE 802.11k: Radio resource measurement enhancements (2008) IEEE 802.11n: Higher Throughput WLAN at 2.4 and 5 GHz; 20 and 40 MHz channels; introduces MIMO (September 2009) IEEE 802.11p: WAVE—Wireless Access for the Vehicular Environment (such as ambulances and passenger cars) (July 2010) IEEE 802.11r: Fast BSS transition (FT) (2008) IEEE 802.11s: Mesh Networking, Extended Service Set (ESS) (July 2011) IEEE 802.11T: Wireless Performance Prediction (WPP)—test methods and metrics Recommendation cancelled IEEE 802.11u: Improvements related to HotSpots and 3rd-party authorization of clients, e.g., cellular network offload (February 2011) IEEE 802.11v: Wireless network management (February 2011) IEEE 802.11w: Protected Management Frames (September 2009) IEEE 802.11y: 3650–3700 MHz Operation in the U.S. (2008) IEEE 802.11z: Extensions to Direct Link Setup (DLS) (September 2010) IEEE 802.11-2012: A new release of the standard that includes amendments k, n, p, r, s, u, v, w, y, and z (March 2012) IEEE 802.11aa: Robust streaming of Audio Video Transport Streams (June 2012) - see Stream Reservation Protocol IEEE 802.11ac: Very High Throughput WLAN at 5 GHz; wider channels (80 and 160 MHz); Multi-user MIMO (down-link only) (December 2013) IEEE 802.11ad: Very High Throughput 60 GHz (December 2012) — see also WiGig IEEE 802.11ae: Prioritization of Management Frames (March 2012) IEEE 802.11af: TV Whitespace (February 2014) IEEE 802.11-2016: A new release of the standard that includes amendments aa, ac, ad, ae, and af (December 2016) IEEE 802.11ah: Sub-1 GHz license exempt operation (e.g., sensor network, smart metering) (December 2016) IEEE 802.11ai: Fast Initial Link Setup (December 2016) IEEE 802.11aj: China Millimeter Wave (February 2018) IEEE 802.11ak: Transit Links within Bridged Networks (June 2018) IEEE 802.11aq: Pre-association Discovery (July 2018) IEEE 802.11-2020: A new release of the standard that includes amendments ah, ai, aj, ak, and aq (December 2020) IEEE 802.11ax: High Efficiency WLAN at 2.4, 5 and 6 GHz; introduces OFDMA (February 2021) IEEE 802.11ay: Enhancements for Ultra High Throughput in and around the 60 GHz Band (March 2021) IEEE 802.11az: Next Generation Positioning (March 2023) IEEE 802.11ba: Wake Up Radio (March 2021) IEEE 802.11bb: Light Communications (November 2023) IEEE 802.11bc: Enhanced Broadcast Service (February 2024) IEEE 802.11bd: Enhancements for Next Generation V2X (see also IEEE 802.11p) (March 2023) In process IEEE 802.11be: Extremely High Throughput (see also IEEE 802.11ax) (May 2024) IEEE 802.11bf: WLAN Sensing IEEE 802.11bh: Randomized and Changing MAC Addresses IEEE 802.11bi: Enhanced Data Privacy IEEE 802.11bk: 320 MHz Positioning IEEE 802.11bn: Ultra High Reliability IEEE 802.11bp: Ambient Power Communication IEEE 802.11me: 802.11 Accumulated Maintenance Changes IEEE 802.11mf: 802.11 Accumulated Maintenance Changes 802.11F and 802.11T are recommended practices rather than standards and are capitalized as such. 802.11m is used for standard maintenance. 802.11ma was completed for 802.11-2007, 802.11mb for 802.11-2012, 802.11mc for 802.11-2016, and 802.11md for 802.11-2020. Standard vs. amendment Both the terms "standard" and "amendment" are used when referring to the different variants of IEEE standards. As far as the IEEE Standards Association is concerned, there is only one current standard; it is denoted by IEEE 802.11 followed by the date published. 
IEEE 802.11-2020 is the only version currently in publication, superseding previous releases. The standard is updated by means of amendments. Amendments are created by task groups (TG). Both the task group and their finished document are denoted by 802.11 followed by one or two lower case letters, for example, IEEE 802.11a or IEEE 802.11ax. Updating 802.11 is the responsibility of task group m. In order to create a new version, TGm combines the previous version of the standard and all published amendments. TGm also provides clarification and interpretation to industry on published documents. New versions of the IEEE 802.11 were published in 1999, 2007, 2012, 2016, and 2020. Nomenclature Various terms in 802.11 are used to specify aspects of wireless local-area networking operation and may be unfamiliar to some readers. For example, time unit (usually abbreviated TU) is used to indicate a unit of time equal to 1024 microseconds. Numerous time constants are defined in terms of TU (rather than the nearly equal millisecond). Also, the term portal is used to describe an entity that is similar to an 802.1H bridge. A portal provides access to the WLAN by non-802.11 LAN STAs. Security In 2001, a group from the University of California, Berkeley presented a paper describing weaknesses in the 802.11 Wired Equivalent Privacy (WEP) security mechanism defined in the original standard; they were followed by Fluhrer, Mantin, and Shamir's paper titled "Weaknesses in the Key Scheduling Algorithm of RC4". Not long after, Adam Stubblefield and AT&T publicly announced the first verification of the attack. In the attack, they were able to intercept transmissions and gain unauthorized access to wireless networks. The IEEE set up a dedicated task group to create a replacement security solution, 802.11i (previously, this work was handled as part of a broader 802.11e effort to enhance the MAC layer). The Wi-Fi Alliance announced an interim specification called Wi-Fi Protected Access (WPA) based on a subset of the then-current IEEE 802.11i draft. These started to appear in products in mid-2003. IEEE 802.11i (also known as WPA2) itself was ratified in June 2004, and uses the Advanced Encryption Standard (AES), instead of RC4, which was used in WEP. The modern recommended encryption for the home/consumer space is WPA2 (AES Pre-Shared Key), and for the enterprise space is WPA2 along with a RADIUS authentication server (or another type of authentication server) and a strong authentication method such as EAP-TLS. In January 2005, the IEEE set up yet another task group "w" to protect management and broadcast frames, which previously were sent unsecured. Its standard was published in 2009. In December 2011, a security flaw was revealed that affects some wireless routers with a specific implementation of the optional Wi-Fi Protected Setup (WPS) feature. While WPS is not a part of 802.11, the flaw allows an attacker within the range of the wireless router to recover the WPS PIN and, with it, the router's 802.11i password in a few hours. In late 2014, Apple announced that its iOS 8 mobile operating system would scramble MAC addresses during the pre-association stage to thwart retail footfall tracking made possible by the regular transmission of uniquely identifiable probe requests. Android 8.0 "Oreo" introduced a similar feature, named "MAC randomization". Wi-Fi users may be subjected to a Wi-Fi deauthentication attack to eavesdrop, attack passwords, or force the use of another, usually more expensive access point. 
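The MAC randomization mentioned in the preceding paragraph can be illustrated with a short sketch that generates a random, locally administered, unicast MAC address of the kind a client could place in probe requests instead of its hardware address. This is only a sketch of the general idea; it is not the algorithm used by iOS or Android, and the bit layout shown (multicast bit cleared, locally administered bit set) is the standard MAC address convention rather than anything specific to 802.11.

```python
import random

def random_locally_administered_mac() -> str:
    """Generate a random unicast MAC with the locally administered bit set."""
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] &= 0b11111110  # clear the individual/group (multicast) bit
    octets[0] |= 0b00000010  # set the locally administered bit
    return ":".join(f"{o:02x}" for o in octets)

print(random_locally_administered_mac())  # e.g. '2e:7a:91:0c:5d:b3'
```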
See also 802.11 frame types Comparison of wireless data standards Fujitsu Ltd. v. Netgear Inc. Gi-Fi, a term used by some trade press to refer to faster versions of the IEEE 802.11 standards LTE-WLAN Aggregation OFDM system comparison table Passive Wi-Fi Reference Broadcast Infrastructure Synchronization TU (time unit) TV White Space Database Ultra-wideband White spaces (radio) Wi-Fi operating system support Wibree or Bluetooth low energy WiGig Wireless USB – another wireless protocol primarily designed for shorter-range applications Notes Footnotes References External links IEEE 802.11 working group Official timelines of 802.11 standards from IEEE List of all Wi-Fi Chipset Vendors – Including historical timeline of mergers and acquisitions Computer-related introductions in 1997 Wireless networking standards Local area networks
IEEE 802.11
[ "Technology" ]
10,339
[ "Wireless networking", "Wireless networking standards" ]
14,749
https://en.wikipedia.org/wiki/Indium
Indium is a chemical element; it has symbol In and atomic number 49. It is a silvery-white post-transition metal and one of the softest elements. Chemically, indium is similar to gallium and thallium, and its properties are largely intermediate between the two. It was discovered in 1863 by Ferdinand Reich and Hieronymus Theodor Richter by spectroscopic methods and named for the indigo blue line in its spectrum. Indium is a technology-critical element used primarily in the production of flat-panel displays as indium tin oxide (ITO), a transparent and conductive coating applied to glass. Indium is also used in the semiconductor industry, in low-melting-point metal alloys such as solders and soft-metal high-vacuum seals. It is produced exclusively as a by-product during the processing of the ores of other metals, chiefly from sphalerite and other zinc sulfide ores. Indium has no biological role and its compounds are toxic when inhaled or injected into the bloodstream, although they are poorly absorbed following ingestion. Etymology The name comes from the Latin word indicum meaning violet or indigo. The word indicum means "Indian", as the naturally based dye indigo was originally exported to Europe from India. Properties Physical Indium is a shiny silvery-white, highly ductile post-transition metal with a bright luster. It is so soft (Mohs hardness 1.2) that it can be cut with a knife and leaves a visible line like a pencil when rubbed on paper. It is a member of group 13 on the periodic table and its properties are mostly intermediate between its vertical neighbors gallium and thallium. As with tin, a high-pitched cry is heard when indium is bent – a crackling sound due to crystal twinning. Like gallium, indium is able to wet glass. Like both, indium has a low melting point, 156.60 °C (313.88 °F); higher than its lighter homologue, gallium, but lower than its heavier homologue, thallium, and lower than tin. The boiling point is 2072 °C (3762 °F), higher than that of thallium, but lower than gallium, conversely to the general trend of melting points, but similarly to the trends down the other post-transition metal groups because of the weakness of the metallic bonding with few electrons delocalized. The density of indium, 7.31 g/cm3, is also greater than gallium, but lower than thallium. Below the critical temperature, 3.41 K, indium becomes a superconductor. Indium crystallizes in the body-centered tetragonal crystal system in the space group I4/mmm (lattice parameters: a = 325 pm, c = 495 pm): this is a slightly distorted face-centered cubic structure, where each indium atom has four neighbours at 324 pm distance and eight neighbours slightly further (336 pm). Indium has greater solubility in liquid mercury than any other metal (more than 50 mass percent of indium at 0 °C). Indium displays a ductile viscoplastic response, found to be size-independent in tension and compression. However it does have a size effect in bending and indentation, associated with a length scale of order 50–100 μm, significantly large when compared with other metals. Chemical Indium has 49 electrons, with an electronic configuration of [Kr]4d¹⁰5s²5p¹. In compounds, indium most commonly donates the three outermost electrons to become indium(III), In3+. In some cases, the pair of 5s-electrons are not donated, resulting in indium(I), In+. The stabilization of the monovalent state is attributed to the inert pair effect, in which relativistic effects stabilize the 5s-orbital, observed in heavier elements. 
Thallium (indium's heavier homolog) shows an even stronger effect, causing oxidation to thallium(I) to be more probable than to thallium(III), whereas gallium (indium's lighter homolog) commonly shows only the +3 oxidation state. Thus, although thallium(III) is a moderately strong oxidizing agent, indium(III) is not, and many indium(I) compounds are powerful reducing agents. While the energy required to include the s-electrons in chemical bonding is lowest for indium among the group 13 metals, bond energies decrease down the group so that by indium, the energy released in forming two additional bonds and attaining the +3 state is not always enough to outweigh the energy needed to involve the 5s-electrons. Indium(I) oxide and hydroxide are more basic and indium(III) oxide and hydroxide are more acidic. A number of standard electrode potentials, depending on the reaction under study, are reported for indium, reflecting the decreased stability of the +3 oxidation state: In2+ + e− ⇌ In+, E0 = −0.40 V; In3+ + e− ⇌ In2+, E0 = −0.49 V; In3+ + 2 e− ⇌ In+, E0 = −0.443 V; In3+ + 3 e− ⇌ In, E0 = −0.3382 V; In+ + e− ⇌ In, E0 = −0.14 V. Indium metal does not react with water, but it is oxidized by stronger oxidizing agents such as halogens to give indium(III) compounds. It does not form a boride, silicide, or carbide, and the hydride InH3 has at best a transitory existence in ethereal solutions at low temperatures, being unstable enough to spontaneously polymerize without coordination. Indium is rather basic in aqueous solution, showing only slight amphoteric characteristics, and unlike its lighter homologs aluminium and gallium, it is insoluble in aqueous alkaline solutions. Isotopes Indium has 39 known isotopes, ranging in mass number from 97 to 135. Only two isotopes occur naturally as primordial nuclides: indium-113, the only stable isotope, and indium-115, which has a half-life of 4.41 × 10¹⁴ years, four orders of magnitude greater than the age of the Universe and nearly 30,000 times greater than the half-life of thorium-232. The half-life of 115In is very long because the beta decay to 115Sn is spin-forbidden. Indium-115 makes up 95.7% of all indium. Indium is one of three known elements (the others being tellurium and rhenium) of which the stable isotope is less abundant in nature than the long-lived primordial radioisotopes. The stablest artificial isotope is indium-111, with a half-life of approximately 2.8 days. All other isotopes have half-lives shorter than 5 hours. Indium also has 47 meta states, among which indium-114m1 (half-life about 49.51 days) is the most stable, more stable than the ground state of any indium isotope other than the primordial. All decay by isomeric transition. The indium isotopes lighter than 113In predominantly decay through electron capture or positron emission to form cadmium isotopes, while the indium isotopes heavier than 113In predominantly decay through beta-minus decay to form tin isotopes. Compounds Indium(III) Indium(III) oxide, In2O3, forms when indium metal is burned in air or when the hydroxide or nitrate is heated. In2O3 adopts a structure like alumina and is amphoteric, that is, able to react with both acids and bases. Indium reacts with water to produce soluble indium(III) hydroxide, which is also amphoteric; with alkalis to produce indates(III); and with acids to produce indium(III) salts: In(OH)3 + 3 HCl → InCl3 + 3 H2O The analogous sesqui-chalcogenides with sulfur, selenium, and tellurium are also known. 
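A back-of-the-envelope check, using the standard potentials listed earlier in this section, shows why indium(I) is unstable toward disproportionation in aqueous solution (3 In+ → 2 In + In3+), which is consistent with indium(I) compounds acting as strong reducing agents. The sketch below is only the textbook cell-potential calculation, not a statement about any particular indium(I) compound.

```python
F = 96485.0              # Faraday constant, C/mol
E_InI_to_In = -0.14      # V, In+ + e-   -> In   (reduction half, cathode)
E_InIII_to_InI = -0.443  # V, In3+ + 2e- -> In+  (runs in reverse, anode)

E_cell = E_InI_to_In - E_InIII_to_InI  # cathode minus anode = +0.303 V
n = 2                                  # electrons transferred per In3+ formed
dG_kJ = -n * F * E_cell / 1000         # Gibbs energy of the disproportionation

print(f"E_cell = {E_cell:+.3f} V, dG ~ {dG_kJ:.0f} kJ/mol (negative, so spontaneous)")
```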
Indium forms the expected trihalides. Chlorination, bromination, and iodination of In produce colorless InCl3, InBr3, and yellow InI3. The compounds are Lewis acids, somewhat akin to the better known aluminium trihalides. Again like the related aluminium compound, InF3 is polymeric. Direct reaction of indium with the pnictogens produces the gray or semimetallic III–V semiconductors. Many of them slowly decompose in moist air, necessitating careful storage of semiconductor compounds to prevent contact with the atmosphere. Indium nitride is readily attacked by acids and alkalis. Indium(I) Indium(I) compounds are not common. The chloride, bromide, and iodide are deeply colored, unlike the parent trihalides from which they are prepared. The fluoride is known only as an unstable gas. Indium(I) oxide black powder is produced when indium(III) oxide decomposes upon heating to 700 °C. Other oxidation states Less frequently, indium forms compounds in oxidation state +2 and even fractional oxidation states. Usually such materials feature In–In bonding, most notably in the halides In2X4 and [In2X6]2−, and various subchalcogenides such as In4Se3. Several other compounds are known to combine indium(I) and indium(III), such as InI6(InIIICl6)Cl3, InI5(InIIIBr4)2(InIIIBr6), and InIInIIIBr4. Organoindium compounds Organoindium compounds feature In–C bonds. Most are In(III) derivatives, but cyclopentadienylindium(I) is an exception. It was the first known organoindium(I) compound, and is polymeric, consisting of zigzag chains of alternating indium atoms and cyclopentadienyl complexes. Perhaps the best-known organoindium compound is trimethylindium, In(CH3)3, used to prepare certain semiconducting materials. History In 1863, German chemists Ferdinand Reich and Hieronymus Theodor Richter were testing ores from the mines around Freiberg, Saxony. They dissolved the minerals pyrite, arsenopyrite, galena and sphalerite in hydrochloric acid and distilled raw zinc chloride. Reich, who was color-blind, employed Richter as an assistant for detecting the colored spectral lines. Knowing that ores from that region sometimes contain thallium, they searched for the green thallium emission spectrum lines. Instead, they found a bright blue line. Because that blue line did not match any known element, they hypothesized a new element was present in the minerals. They named the element indium, from the indigo color seen in its spectrum, after the Latin indicum, meaning 'of India'. Richter went on to isolate the metal in 1864. An ingot of was presented at the World Fair 1867. Reich and Richter later fell out when the latter claimed to be the sole discoverer. Occurrence Indium is created by the long-lasting (up to thousands of years) s-process (slow neutron capture) in low-to-medium-mass stars (range in mass between 0.6 and 10 solar masses). When a silver-109 atom captures a neutron, it transmutes into silver-110, which then undergoes beta decay to become cadmium-110. Capturing further neutrons, it becomes cadmium-115, which decays to indium-115 by another beta decay. This explains why the radioactive isotope is more abundant than the stable one. The stable indium isotope, indium-113, is one of the p-nuclei, the origin of which is not fully understood; although indium-113 is known to be made directly in the s- and r-processes (rapid neutron capture), and also as the daughter of very long-lived cadmium-113, which has a half-life of about eight quadrillion years, this cannot account for all indium-113. 
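A quick numerical check of the half-life comparison made in the isotope section above: because the half-life of indium-115 is some four orders of magnitude longer than the age of the Universe, essentially none of the primordial indium-115 has decayed, which is why the radioactive isotope can still dominate natural indium. The age of the Universe used below (about 13.8 billion years) is a standard value assumed for the calculation rather than a figure from this article.

```python
half_life_years = 4.41e14      # indium-115, from the isotope section above
age_universe_years = 1.38e10   # assumed standard value

fraction_remaining = 0.5 ** (age_universe_years / half_life_years)
print(f"fraction of primordial In-115 still present: {fraction_remaining:.5f}")
# prints ~0.99998, i.e. essentially all of it survives
```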
Indium is the 68th most abundant element in Earth's crust at approximately 50 ppb. This is similar to the crustal abundance of silver, bismuth and mercury. It very rarely forms its own minerals, or occurs in elemental form. Fewer than 10 indium minerals such as roquesite (CuInS2) are known, and none occur at sufficient concentrations for economic extraction. Instead, indium is usually a trace constituent of more common ore minerals, such as sphalerite and chalcopyrite. From these, it can be extracted as a by-product during smelting. While the enrichment of indium in these deposits is high relative to its crustal abundance, it is insufficient, at current prices, to support extraction of indium as the main product. Different estimates exist of the amounts of indium contained within the ores of other metals. However, these amounts are not extractable without mining of the host materials (see Production and availability). Thus, the availability of indium is fundamentally determined by the rate at which these ores are extracted, and not their absolute amount. This is an aspect that is often forgotten in the current debate, e.g. by the Graedel group at Yale in their criticality assessments, explaining the paradoxically low depletion times some studies cite. Production and availability Indium is produced exclusively as a by-product during the processing of the ores of other metals. Its main source material are sulfidic zinc ores, where it is mostly hosted by sphalerite. Minor amounts are also extracted from sulfidic copper ores. During the roast-leach-electrowinning process of zinc smelting, indium accumulates in the iron-rich residues. From these, it can be extracted in different ways. It may also be recovered directly from the process solutions. Further purification is done by electrolysis. The exact process varies with the mode of operation of the smelter. Its by-product status means that indium production is constrained by the amount of sulfidic zinc (and copper) ores extracted each year. Therefore, its availability needs to be discussed in terms of supply potential. The supply potential of a by-product is defined as that amount which is economically extractable from its host materials per year under current market conditions (i.e. technology and price). Reserves and resources are not relevant for by-products, since they cannot be extracted independently from the main-products. Recent estimates put the supply potential of indium at a minimum of 1,300 t/yr from sulfidic zinc ores and 20 t/yr from sulfidic copper ores. These figures are significantly greater than current production (655 t in 2016). Thus, major future increases in the by-product production of indium will be possible without significant increases in production costs or price. The average indium price in 2016 was 240/kg, down from 705/kg in 2014. China is a leading producer of indium (290 tonnes in 2016), followed by South Korea (195 t), Japan (70 t) and Canada (65 t). The Teck Resources refinery in Trail, British Columbia, is a large single-source indium producer, with an output of 32.5 tonnes in 2005, 41.8 tonnes in 2004 and 36.1 tonnes in 2003. The primary consumption of indium worldwide is LCD production. Demand rose rapidly from the late 1990s to 2010 with the popularity of LCD computer monitors and television sets, which now account for 50% of indium consumption. Increased manufacturing efficiency and recycling (especially in Japan) maintain a balance between demand and supply. 
According to the UNEP, indium's end-of-life recycling rate is less than 1%. Applications Industrial uses In 1924, indium was found to have a valued property of stabilizing non-ferrous metals, and that became the first significant use for the element. The first large-scale application for indium was coating bearings in high-performance aircraft engines during World War II, to protect against damage and corrosion; this is no longer a major use of the element. New uses were found in fusible alloys, solders, and electronics. In the 1950s, tiny beads of indium were used for the emitters and collectors of PNP alloy-junction transistors. In the middle and late 1980s, the development of indium phosphide semiconductors and indium tin oxide thin films for liquid-crystal displays (LCD) aroused much interest. By 1992, the thin-film application had become the largest end use. Indium(III) oxide and indium tin oxide (ITO) are used as a transparent conductive coating on glass substrates in electroluminescent panels. Indium tin oxide is used as a light filter in low-pressure sodium-vapor lamps. The infrared radiation is reflected back into the lamp, which increases the temperature within the tube and improves the performance of the lamp. Indium has many semiconductor-related applications. Some indium compounds, such as indium antimonide and indium phosphide, are semiconductors with useful properties: one precursor is usually trimethylindium (TMI), which is also used as the semiconductor dopant in II–VI compound semiconductors. InAs and InSb are used for low-temperature transistors and InP for high-temperature transistors. The compound semiconductors InGaN and InGaP are used in light-emitting diodes (LEDs) and laser diodes. Indium is used in photovoltaics as the semiconductor copper indium gallium selenide (CIGS), also called CIGS solar cells, a type of second-generation thin-film solar cell. Indium is used in PNP bipolar junction transistors with germanium: when soldered at low temperature, indium does not stress the germanium. Indium wire is used as a vacuum seal and a thermal conductor in cryogenics and ultra-high-vacuum applications, in such manufacturing applications as gaskets that deform to fill gaps. Owing to its great plasticity and adhesion to metals, Indium sheets are sometimes used for cold-soldering in microwave circuits and waveguide joints, where direct soldering is complicated. Indium is an ingredient in the gallium–indium–tin alloy galinstan, which is liquid at room temperature and replaces mercury in some thermometers. Other alloys of indium with bismuth, cadmium, lead, and tin, which have higher but still low melting points (between 50 and 100 °C), are used in fire sprinkler systems and heat regulators. Indium is one of many substitutes for mercury in alkaline batteries to prevent the zinc from corroding and releasing hydrogen gas. Indium is added to some dental amalgam alloys to decrease the surface tension of the mercury and allow for less mercury and easier amalgamation. Indium's high neutron-capture cross-section for thermal neutrons makes it suitable for use in control rods for nuclear reactors, typically in an alloy of 80% silver, 15% indium, and 5% cadmium. In nuclear engineering, the (n,n') reactions of 113In and 115In are used to determine magnitudes of neutron fluxes. 
In 2009, Professor Mas Subramanian and former graduate student Andrew Smith at Oregon State University discovered that indium can be combined with yttrium and manganese to form an intensely blue, non-toxic, inert, fade-resistant pigment, YInMn blue, the first new inorganic blue pigment discovered in 200 years. Medical applications Radioactive indium-111 (in very small amounts) is used in nuclear medicine tests, as a radiotracer to follow the movement of labeled proteins and white blood cells to diagnose different types of infection. Indium compounds are mostly not absorbed upon ingestion and are only moderately absorbed on inhalation; they tend to be stored temporarily in the muscles, skin, and bones before being excreted, and the biological half-life of indium is about two weeks in humans. It is also tagged to growth hormone analogues like octreotide to find growth hormone receptors in neuroendocrine tumors. Biological role and precautions Indium has no metabolic role in any organism. In a similar way to aluminium salts, indium(III) ions can be toxic to the kidney when given by injection. Indium tin oxide and indium phosphide harm the pulmonary and immune systems, predominantly through ionic indium, though hydrated indium oxide is more than forty times as toxic when injected, measured by the quantity of indium introduced. People can be exposed to indium in the workplace by inhalation, ingestion, skin contact, and eye contact. Indium lung is a lung disease characterized by pulmonary alveolar proteinosis and pulmonary fibrosis, first described by Japanese researchers in 2003. , 10 cases had been described, though more than 100 indium workers had documented respiratory abnormalities. The National Institute for Occupational Safety and Health has set a recommended exposure limit (REL) of 0.1 mg/m over an eight-hour workday. Notes References Sources External links Indium at The Periodic Table of Videos (University of Nottingham) Reducing Agents > Indium low valent NIOSH Pocket Guide to Chemical Hazards (Centers for Disease Control and Prevention) Chemical elements Post-transition metals Native element minerals Chemical elements with body-centered tetragonal structure
Indium
[ "Physics" ]
4,588
[ "Chemical elements", "Atoms", "Matter" ]
14,750
https://en.wikipedia.org/wiki/Iodine
Iodine is a chemical element; it has symbol I and atomic number 53. The heaviest of the stable halogens, it exists at standard conditions as a semi-lustrous, non-metallic solid that melts to form a deep violet liquid at 114 °C (237 °F), and boils to a violet gas at 184 °C (363 °F). The element was discovered by the French chemist Bernard Courtois in 1811 and was named two years later by Joseph Louis Gay-Lussac, after the Ancient Greek word meaning 'violet'. Iodine occurs in many oxidation states, including iodide (I−), iodate (IO3−), and the various periodate anions. As the heaviest essential mineral nutrient, iodine is required for the synthesis of thyroid hormones. Iodine deficiency affects about two billion people and is the leading preventable cause of intellectual disabilities. The dominant producers of iodine today are Chile and Japan. Due to its high atomic number and ease of attachment to organic compounds, it has also found favour as a non-toxic radiocontrast material. Because of the specificity of its uptake by the human body, radioactive isotopes of iodine can also be used to treat thyroid cancer. Iodine is also used as a catalyst in the industrial production of acetic acid and some polymers. It is on the World Health Organization's List of Essential Medicines. History In 1811, iodine was discovered by French chemist Bernard Courtois, who was born to a family of manufacturers of saltpetre (an essential component of gunpowder). At the time of the Napoleonic Wars, saltpetre was in great demand in France. Saltpetre produced from French nitre beds required sodium carbonate, which could be isolated from seaweed collected on the coasts of Normandy and Brittany. To isolate the sodium carbonate, seaweed was burned and the ash washed with water. The remaining waste was destroyed by adding sulfuric acid. Courtois once added excessive sulfuric acid and a cloud of violet vapour rose. He noted that the vapour crystallised on cold surfaces, making dark black crystals. Courtois suspected that this material was a new element but lacked funding to pursue it further. Courtois gave samples to his friends, Charles Bernard Desormes (1777–1838) and Nicolas Clément (1779–1841), to continue research. He also gave some of the substance to chemist Joseph Louis Gay-Lussac (1778–1850), and to physicist André-Marie Ampère (1775–1836). On 29 November 1813, Desormes and Clément made Courtois' discovery public by describing the substance to a meeting of the Imperial Institute of France. On 6 December 1813, Gay-Lussac announced that the new substance was either an element or a compound of oxygen. Gay-Lussac suggested the name "iode" (anglicised as "iodine"), from the Ancient Greek word for "violet", because of the colour of iodine vapour. Ampère had given some of his sample to British chemist Humphry Davy (1778–1829), who experimented on the substance and noted its similarity to chlorine. Davy sent a letter dated 10 December to the Royal Society of London stating that he had identified a new element called iodine. Arguments erupted between Davy and Gay-Lussac over who identified iodine first, but both scientists acknowledged that Courtois was the first to isolate the element. In 1873, the French medical researcher Casimir Davaine (1812–1882) discovered the antiseptic action of iodine. Antonio Grossich (1849–1926), an Istrian-born surgeon, was among the first to use sterilisation of the operative field. 
In 1908, he introduced tincture of iodine as a way to rapidly sterilise the human skin in the surgical field. In early periodic tables, iodine was often given the symbol J, for Jod, its name in German; in German texts, J is still frequently used in place of I. Properties Iodine is the fourth halogen, being a member of group 17 in the periodic table, below fluorine, chlorine, and bromine; since astatine and tennessine are radioactive, iodine is the heaviest stable halogen. Iodine has an electron configuration of [Kr]5s24d105p5, with the seven electrons in the fifth and outermost shell being its valence electrons. Like the other halogens, it is one electron short of a full octet and is hence an oxidising agent, reacting with many elements in order to complete its outer shell, although in keeping with periodic trends, it is the weakest oxidising agent among the stable halogens: it has the lowest electronegativity among them, just 2.66 on the Pauling scale (compare fluorine, chlorine, and bromine at 3.98, 3.16, and 2.96 respectively; astatine continues the trend with an electronegativity of 2.2). Elemental iodine hence forms diatomic molecules with chemical formula I2, where two iodine atoms share a pair of electrons in order to each achieve a stable octet for themselves; at high temperatures, these diatomic molecules reversibly dissociate a pair of iodine atoms. Similarly, the iodide anion, I−, is the strongest reducing agent among the stable halogens, being the most easily oxidised back to diatomic I2. (Astatine goes further, being indeed unstable as At− and readily oxidised to At0 or At+.) The halogens darken in colour as the group is descended: fluorine is a very pale yellow, chlorine is greenish-yellow, bromine is reddish-brown, and iodine is violet. Elemental iodine is slightly soluble in water, with one gram dissolving in 3450 mL at 20 °C and 1280 mL at 50 °C; potassium iodide may be added to increase solubility via formation of triiodide ions, among other polyiodides. Nonpolar solvents such as hexane and carbon tetrachloride provide a higher solubility. Polar solutions, such as aqueous solutions, are brown, reflecting the role of these solvents as Lewis bases; on the other hand, nonpolar solutions are violet, the color of iodine vapour. Charge-transfer complexes form when iodine is dissolved in polar solvents, hence changing the colour. Iodine is violet when dissolved in carbon tetrachloride and saturated hydrocarbons but deep brown in alcohols and amines, solvents that form charge-transfer adducts. The melting and boiling points of iodine are the highest among the halogens, conforming to the increasing trend down the group, since iodine has the largest electron cloud among them that is the most easily polarised, resulting in its molecules having the strongest Van der Waals interactions among the halogens. Similarly, iodine is the least volatile of the halogens, though the solid still can be observed to give off purple vapour. Due to this property iodine is commonly used to demonstrate sublimation directly from solid to gas, which gives rise to a misconception that it does not melt in atmospheric pressure. Because it has the largest atomic radius among the halogens, iodine has the lowest first ionisation energy, lowest electron affinity, lowest electronegativity and lowest reactivity of the halogens. The interhalogen bond in diiodine is the weakest of all the halogens. As such, 1% of a sample of gaseous iodine at atmospheric pressure is dissociated into iodine atoms at 575 °C. 
Temperatures greater than 750 °C are required for fluorine, chlorine, and bromine to dissociate to a similar extent. Most bonds to iodine are weaker than the analogous bonds to the lighter halogens. Gaseous iodine is composed of I2 molecules with an I–I bond length of 266.6 pm. The I–I bond is one of the longest single bonds known. It is even longer (271.5 pm) in solid orthorhombic crystalline iodine, which has the same crystal structure as chlorine and bromine. (The record is held by iodine's neighbour xenon: the Xe–Xe bond length is 308.71 pm.) As such, within the iodine molecule, significant electronic interactions occur with the two next-nearest neighbours of each atom, and these interactions give rise, in bulk iodine, to a shiny appearance and semiconducting properties. Iodine is a two-dimensional semiconductor with a band gap of 1.3 eV (125 kJ/mol): it is a semiconductor in the plane of its crystalline layers and an insulator in the perpendicular direction. Isotopes Of the forty known isotopes of iodine, only one occurs in nature, iodine-127. The others are radioactive and have half-lives too short to be primordial. As such, iodine is both monoisotopic and mononuclidic and its atomic weight is known to great precision, as it is a constant of nature. The longest-lived of the radioactive isotopes of iodine is iodine-129, which has a half-life of 15.7 million years, decaying via beta decay to stable xenon-129. Some iodine-129 was formed along with iodine-127 before the formation of the Solar System, but it has by now completely decayed away, making it an extinct radionuclide. Its former presence may be determined from an excess of its daughter xenon-129, but early attempts to use this characteristic to date the supernova source for elements in the Solar System are made difficult by alternative nuclear processes giving iodine-129 and by iodine's volatility at higher temperatures. Due to its mobility in the environment iodine-129 has been used to date very old groundwaters. Traces of iodine-129 still exist today, as it is also a cosmogenic nuclide, formed from cosmic ray spallation of atmospheric xenon: these traces make up 10−14 to 10−10 of all terrestrial iodine. It also occurs from open-air nuclear testing, and is not hazardous because of its very long half-life, the longest of all fission products. At the peak of thermonuclear testing in the 1960s and 1970s, iodine-129 still made up only about 10−7 of all terrestrial iodine. Excited states of iodine-127 and iodine-129 are often used in Mössbauer spectroscopy. The other iodine radioisotopes have much shorter half-lives, no longer than days. Some of them have medical applications involving the thyroid gland, where the iodine that enters the body is stored and concentrated. Iodine-123 has a half-life of thirteen hours and decays by electron capture to tellurium-123, emitting gamma radiation; it is used in nuclear medicine imaging, including single photon emission computed tomography (SPECT) and X-ray computed tomography (X-Ray CT) scans. Iodine-125 has a half-life of fifty-nine days, decaying by electron capture to tellurium-125 and emitting low-energy gamma radiation; the second-longest-lived iodine radioisotope, it has uses in biological assays, nuclear medicine imaging and in radiation therapy as brachytherapy to treat a number of conditions, including prostate cancer, uveal melanomas, and brain tumours. 
Finally, iodine-131, with a half-life of eight days, beta decays to an excited state of stable xenon-131 that then converts to the ground state by emitting gamma radiation. It is a common fission product and thus is present in high levels in radioactive fallout. It may then be absorbed through contaminated food, and will also accumulate in the thyroid. As it decays, it may cause damage to the thyroid. The primary risk from exposure to high levels of iodine-131 is the chance occurrence of radiogenic thyroid cancer in later life. Other risks include the possibility of non-cancerous growths and thyroiditis. The usual protection against the negative effects of iodine-131 is saturation of the thyroid gland with stable iodine-127 in the form of potassium iodide tablets, taken daily for optimal prophylaxis. However, iodine-131 may also be used for medicinal purposes in radiation therapy for this very reason, when tissue destruction is desired after iodine uptake by the tissue. Iodine-131 is also used as a radioactive tracer. Chemistry and compounds Iodine is quite reactive, but it is less so than the lighter halogens, and it is a weaker oxidant. For example, it does not halogenate carbon monoxide, nitric oxide, and sulfur dioxide, which chlorine does. Many metals react with iodine. By the same token, however, since iodine has the lowest ionisation energy among the halogens and is the most easily oxidised of them, it has a more significant cationic chemistry and its higher oxidation states are rather more stable than those of bromine and chlorine, for example in iodine heptafluoride. Charge-transfer complexes The iodine molecule, I2, dissolves in CCl4 and aliphatic hydrocarbons to give bright violet solutions. In these solvents the absorption band maximum occurs in the 520–540 nm region and is assigned to a π* to σ* transition. When I2 reacts with Lewis bases in these solvents, a blue shift of the I2 peak is seen and a new peak (230–330 nm) arises that is due to the formation of adducts, which are referred to as charge-transfer complexes. Hydrogen iodide The simplest compound of iodine is hydrogen iodide, HI. It is a colourless gas that reacts with oxygen to give water and iodine. Although it is useful in iodination reactions in the laboratory, it does not have large-scale industrial uses, unlike the other hydrogen halides. Commercially, it is usually made by reacting iodine with hydrogen sulfide or hydrazine: 2 I2 + N2H4 → 4 HI + N2. At room temperature, it is a colourless gas, like all of the hydrogen halides except hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the large and only mildly electronegative iodine atom. It melts at −50.8 °C and boils at −35.4 °C. It is an endothermic compound that can exothermically dissociate at room temperature, although the process is very slow unless a catalyst is present: the reaction between hydrogen and iodine at room temperature to give hydrogen iodide does not proceed to completion. The H–I bond dissociation energy is likewise the smallest of the hydrogen halides, at 295 kJ/mol. Aqueous hydrogen iodide is known as hydroiodic acid, which is a strong acid. Hydrogen iodide is exceptionally soluble in water: one litre of water will dissolve 425 litres of hydrogen iodide, and the saturated solution has only four water molecules per molecule of hydrogen iodide. Commercial so-called "concentrated" hydroiodic acid usually contains 48–57% HI by mass; the solution forms a constant-boiling azeotrope at 56.7 g HI per 100 g of solution. 
Hence hydroiodic acid cannot be concentrated past this point by evaporation of water. Unlike gaseous hydrogen iodide, hydroiodic acid has major industrial use in the manufacture of acetic acid by the Cativa process. Other binary iodine compounds With the exception of the noble gases, nearly all elements on the periodic table up to einsteinium (EsI3 is known) are known to form binary compounds with iodine. Until 1990, nitrogen triiodide was only known as an ammonia adduct. Ammonia-free NI3 was found to be isolable at –196 °C but spontaneously decomposes at 0 °C. For thermodynamic reasons related to electronegativity of the elements, neutral sulfur and selenium iodides that are stable at room temperature are also nonexistent, although S2I2 and SI2 are stable up to 183 and 9 K, respectively. As of 2022, no neutral binary selenium iodide has been unambiguously identified (at any temperature). Sulfur- and selenium-iodine polyatomic cations (e.g., [S2I42+][AsF6–]2 and [Se2I42+][Sb2F11–]2) have been prepared and characterised crystallographically. Given the large size of the iodide anion and iodine's weak oxidising power, high oxidation states are difficult to achieve in binary iodides, the maximum known being in the pentaiodides of niobium, tantalum, and protactinium. Iodides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydroiodic acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen iodide gas. These methods work best when the iodide product is stable to hydrolysis. Other syntheses include high-temperature oxidative iodination of the element with iodine or hydrogen iodide, high-temperature iodination of a metal oxide or other halide by iodine, a volatile metal halide, carbon tetraiodide, or an organic iodide. For example, molybdenum(IV) oxide reacts with aluminium(III) iodide at 230 °C to give molybdenum(II) iodide. An example involving halogen exchange is the reaction of tantalum(V) chloride with excess aluminium(III) iodide at 400 °C to give tantalum(V) iodide: 3 TaCl5 + 5 AlI3 (excess) → 3 TaI5 + 5 AlCl3. Lower iodides may be produced either through thermal decomposition or disproportionation, or by reducing the higher iodide with hydrogen or a metal, for example: TaI5 + Ta → Ta6I14 (in a thermal gradient from 630 °C to 575 °C). Most metal iodides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular iodides, as do metals in high oxidation states from +3 and above. Both ionic and covalent iodides are known for metals in oxidation state +3 (e.g. scandium iodide is mostly ionic, but aluminium iodide is not). Ionic iodides MIn tend to have the lowest melting and boiling points among the halides MXn of the same element, because the electrostatic forces of attraction between the cations and anions are weakest for the large iodide anion. In contrast, covalent iodides tend to instead have the highest melting and boiling points among the halides of the same element, since iodine is the most polarisable of the halogens and, having the most electrons among them, can contribute the most to van der Waals forces. Naturally, exceptions abound in intermediate iodides where one trend gives way to the other. Similarly, solubilities in water of predominantly ionic iodides (e.g. potassium and calcium) are the greatest among ionic halides of that element, while those of covalent iodides (e.g. 
silver) are the lowest of that element. In particular, silver iodide is very insoluble in water and its formation is often used as a qualitative test for iodine. Iodine halides The halogens form many binary, diamagnetic interhalogen compounds with stoichiometries XY, XY3, XY5, and XY7 (where X is heavier than Y), and iodine is no exception. Iodine forms all three possible diatomic interhalogens, a trifluoride and trichloride, as well as a pentafluoride and, exceptionally among the halogens, a heptafluoride. Numerous cationic and anionic derivatives are also characterised, such as the wine-red or bright orange compounds of ICl2+ and the dark brown or purplish black compounds of I2Cl+. Apart from these, some pseudohalides are also known, such as cyanogen iodide (ICN), iodine thiocyanate (ISCN), and iodine azide (IN3). Iodine monofluoride (IF) is unstable at room temperature and disproportionates very readily and irreversibly to iodine and iodine pentafluoride, and thus cannot be obtained pure. It can be synthesised from the reaction of iodine with fluorine gas in trichlorofluoromethane at −45 °C, with iodine trifluoride in trichlorofluoromethane at −78 °C, or with silver(I) fluoride at 0 °C. Iodine monochloride (ICl) and iodine monobromide (IBr), on the other hand, are moderately stable. The former, a volatile red-brown compound, was discovered independently by Joseph Louis Gay-Lussac and Humphry Davy in 1813–1814 not long after the discoveries of chlorine and iodine, and it mimics the intermediate halogen bromine so well that Justus von Liebig was misled into mistaking bromine (which he had found) for iodine monochloride. Iodine monochloride and iodine monobromide may be prepared simply by reacting iodine with chlorine or bromine at room temperature and purified by fractional crystallisation. Both are quite reactive and attack even platinum and gold, though not boron, carbon, cadmium, lead, zirconium, niobium, molybdenum, and tungsten. Their reaction with organic compounds depends on conditions. Iodine chloride vapour tends to chlorinate phenol and salicylic acid, since when iodine chloride undergoes homolytic fission, chlorine and iodine are produced and the former is more reactive. However, iodine chloride in carbon tetrachloride solution results in iodination being the main reaction, since now heterolytic fission of the I–Cl bond occurs and I+ attacks phenol as an electrophile. In contrast, iodine monobromide tends to brominate phenol even in carbon tetrachloride solution because it tends to dissociate into its elements in solution, and bromine is more reactive than iodine. When liquid, iodine monochloride and iodine monobromide dissociate into I2X+ and IX2− ions (X = Cl, Br); thus they are significant conductors of electricity and can be used as ionising solvents. Iodine trifluoride (IF3) is an unstable yellow solid that decomposes above −28 °C. It is thus little-known. It is difficult to produce because fluorine gas would tend to oxidise iodine all the way to the pentafluoride; reaction at low temperature with xenon difluoride is necessary. Iodine trichloride, which exists in the solid state as the planar dimer I2Cl6, is a bright yellow solid, synthesised by reacting iodine with liquid chlorine at −80 °C; caution is necessary during purification because it easily dissociates to iodine monochloride and chlorine and hence can act as a strong chlorinating agent. Liquid iodine trichloride conducts electricity, possibly indicating dissociation to ICl2+ and ICl4− ions. 
Iodine pentafluoride (IF5), a colourless, volatile liquid, is the most thermodynamically stable iodine fluoride, and can be made by reacting iodine with fluorine gas at room temperature. It is a fluorinating agent, but is mild enough to store in glass apparatus. Again, slight electrical conductivity is present in the liquid state because of dissociation to IF4+ and IF6− ions. The pentagonal bipyramidal iodine heptafluoride (IF7) is an extremely powerful fluorinating agent, behind only chlorine trifluoride, chlorine pentafluoride, and bromine pentafluoride among the interhalogens: it reacts with almost all the elements even at low temperatures, fluorinates Pyrex glass to form iodine(VII) oxyfluoride (IOF5), and sets carbon monoxide on fire. Iodine oxides and oxoacids Iodine oxides are the most stable of all the halogen oxides, because of the strong I–O bonds resulting from the large electronegativity difference between iodine and oxygen, and they have been known for the longest time. The stable, white, hygroscopic iodine pentoxide (I2O5) has been known since its formation in 1813 by Gay-Lussac and Davy. It is most easily made by the dehydration of iodic acid (HIO3), of which it is the anhydride. It will quickly oxidise carbon monoxide completely to carbon dioxide at room temperature, and is thus a useful reagent in determining carbon monoxide concentration. It also oxidises nitrogen oxide, ethylene, and hydrogen sulfide. It reacts with sulfur trioxide and peroxydisulfuryl difluoride (S2O6F2) to form salts of the iodyl cation, [IO2]+, and is reduced by concentrated sulfuric acid to iodosyl salts involving [IO]+. It may be fluorinated by fluorine, bromine trifluoride, sulfur tetrafluoride, or chloryl fluoride, resulting in iodine pentafluoride, which also reacts with iodine pentoxide, giving iodine(V) oxyfluoride, IOF3. A few other less stable oxides are known, notably I4O9 and I2O4; their structures have not been determined, but reasonable guesses are I(IO3)3 (iodine(III) iodate) and [IO]+[IO3]− respectively. More important are the four oxoacids: hypoiodous acid (HIO), iodous acid (HIO2), iodic acid (HIO3), and periodic acid (HIO4 or H5IO6). When iodine dissolves in aqueous solution, it is partially hydrolysed to hypoiodous acid: I2 + H2O ⇌ HIO + H+ + I−. Hypoiodous acid is unstable to disproportionation. The hypoiodite ions thus formed disproportionate immediately to give iodide and iodate: 3 IO− → 2 I− + IO3−. Iodous acid and iodite are even less stable and exist only as a fleeting intermediate in the oxidation of iodide to iodate, if at all. Iodates are by far the most important of these compounds, which can be made by oxidising alkali metal iodides with oxygen at 600 °C and high pressure, or by oxidising iodine with chlorates. Unlike chlorates, which disproportionate very slowly to form chloride and perchlorate, iodates are stable to disproportionation in both acidic and alkaline solutions. From these, salts of most metals can be obtained. Iodic acid is most easily made by oxidation of an aqueous iodine suspension by electrolysis or fuming nitric acid. Iodate has the weakest oxidising power of the halates, but reacts the quickest. Many periodates are known, including not only the expected tetrahedral periodate ion, IO4−, but also square-pyramidal and octahedral orthoperiodate (IO65−) species, as well as [IO3(OH)3]2− and [I2O8(OH2)]4−, among others. 
They are usually made by oxidising alkaline sodium iodate electrochemically (with lead(IV) oxide as the anode) or by chlorine gas. They are thermodynamically and kinetically powerful oxidising agents, quickly oxidising Mn2+ to permanganate (MnO4−), and cleaving glycols, α-diketones, α-ketols, α-aminoalcohols, and α-diamines. Orthoperiodate especially stabilises high oxidation states among metals because of its very high negative charge of −5. Orthoperiodic acid, H5IO6, is stable, and dehydrates at 100 °C in a vacuum to metaperiodic acid, HIO4. Attempting to go further does not result in the nonexistent iodine heptoxide (I2O7), but rather iodine pentoxide and oxygen. Periodic acid may be protonated by sulfuric acid to give the [I(OH)6]+ cation, isoelectronic to Te(OH)6, and giving salts with bisulfate and sulfate. Polyiodine compounds When iodine dissolves in strong acids, such as fuming sulfuric acid, a bright blue paramagnetic solution including I2+ cations is formed. A solid salt of the diiodine cation may be obtained by oxidising iodine with antimony pentafluoride: 2 I2 + 5 SbF5 → 2 I2Sb2F11 + SbF3. The salt I2Sb2F11 is dark blue, and the blue tantalum analogue I2Ta2F11 is also known. Whereas the I–I bond length in I2 is 267 pm, that in I2+ is only 256 pm as the missing electron in the latter has been removed from an antibonding orbital, making the bond stronger and hence shorter. In fluorosulfuric acid solution, deep-blue I2+ reversibly dimerises below −60 °C, forming red rectangular diamagnetic I42+. Other polyiodine cations are not as well-characterised, including the bent dark-brown or black I3+ and the centrosymmetric C2h green or black I5+, known in several salts. The only important polyiodide anion in aqueous solution is linear triiodide, I3−. Its formation explains why the solubility of iodine in water may be increased by the addition of potassium iodide solution: I2 + I− ⇌ I3−. Many other polyiodides may be found when solutions containing iodine and iodide crystallise, such as I5−, I9−, I42−, and I82−, whose salts with large, weakly polarising cations such as Cs+ may be isolated. Organoiodine compounds Organoiodine compounds have been fundamental in the development of organic synthesis, such as in the Hofmann elimination of amines, the Williamson ether synthesis, the Wurtz coupling reaction, and in Grignard reagents. The carbon–iodine bond is a common functional group that forms part of core organic chemistry; formally, these compounds may be thought of as organic derivatives of the iodide anion. The simplest organoiodine compounds, alkyl iodides, may be synthesised by the reaction of alcohols with phosphorus triiodide; these may then be used in nucleophilic substitution reactions, or for preparing Grignard reagents. The C–I bond is the weakest of all the carbon–halogen bonds due to the minuscule difference in electronegativity between carbon (2.55) and iodine (2.66). As such, iodide is the best leaving group among the halogens, to such an extent that many organoiodine compounds turn yellow when stored over time due to decomposition into elemental iodine; they are nevertheless commonly used in organic synthesis, because of the easy formation and cleavage of the C–I bond. They are also significantly denser than the other organohalogen compounds thanks to the high atomic weight of iodine. A few organic oxidising agents like the iodanes contain iodine in a higher oxidation state than −1, such as 2-iodoxybenzoic acid, a common reagent for the oxidation of alcohols to aldehydes, and iodobenzene dichloride (PhICl2), used for the selective chlorination of alkenes and alkynes. 
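A worked equation may help make the alcohol route to alkyl iodides mentioned above concrete. Ethanol is chosen here purely as an illustrative substrate (the text does not single out any particular alcohol), and the stoichiometry shown is the standard textbook one:
3 C2H5OH + PI3 → 3 C2H5I + H3PO3
Three equivalents of alcohol are converted per equivalent of phosphorus triiodide, with phosphorous acid as the by-product; the resulting iodoethane can then serve in the nucleophilic substitutions or Grignard preparations described above. 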
One of the more well-known uses of organoiodine compounds is the so-called iodoform test, where iodoform (CHI3) is produced by the exhaustive iodination of a methyl ketone (or another compound capable of being oxidised to a methyl ketone), as follows: RCOCH3 + 3 I2 + 4 OH− → RCOO− + CHI3 + 3 I− + 3 H2O. Some drawbacks of using organoiodine compounds as compared to organochlorine or organobromine compounds are the greater expense and toxicity of the iodine derivatives, since iodine is expensive and organoiodine compounds are stronger alkylating agents. For example, iodoacetamide and iodoacetic acid denature proteins by irreversibly alkylating cysteine residues and preventing the reformation of disulfide linkages. Halogen exchange to produce iodoalkanes by the Finkelstein reaction is slightly complicated by the fact that iodide is a better leaving group than chloride or bromide. The difference is nevertheless small enough that the reaction can be driven to completion by exploiting the differential solubility of halide salts, or by using a large excess of the halide salt. In the classic Finkelstein reaction, an alkyl chloride or an alkyl bromide is converted to an alkyl iodide by treatment with a solution of sodium iodide in acetone. Sodium iodide is soluble in acetone and sodium chloride and sodium bromide are not. The reaction is driven toward products by mass action due to the precipitation of the insoluble salt. Occurrence and production Iodine is the least abundant of the stable halogens, comprising only 0.46 parts per million of Earth's crustal rocks (compare: fluorine: 544 ppm, chlorine: 126 ppm, bromine: 2.5 ppm), making it the 60th most abundant element. Iodide minerals are rare, and most deposits that are concentrated enough for economical extraction are iodate minerals instead. Examples include lautarite, Ca(IO3)2, and dietzeite, 7Ca(IO3)2·8CaCrO4. These are the minerals that occur as trace impurities in the caliche, found in Chile, whose main product is sodium nitrate. In total, they can contain at least 0.02% and at most 1% iodine by mass. Sodium iodate is extracted from the caliche and reduced to iodide by sodium bisulfite. This solution is then reacted with freshly extracted iodate, resulting in comproportionation to iodine, which may be filtered off. The caliche was the main source of iodine in the 19th century and continues to be important today, replacing kelp (which is no longer an economically viable source), but in the late 20th century brines emerged as a comparable source. The Japanese Minami Kantō gas field east of Tokyo and the American Anadarko Basin gas field in northwest Oklahoma are the two largest such sources. Coming from the depth of the source, the brine is hotter than 60 °C. The brine is first purified and acidified using sulfuric acid, then the iodide present is oxidised to iodine with chlorine. An iodine solution is produced, but is dilute and must be concentrated. Air is blown into the solution to evaporate the iodine, which is passed into an absorbing tower, where sulfur dioxide reduces it to hydrogen iodide. The hydrogen iodide (HI) is then reacted with chlorine to precipitate the iodine. After filtering and purification the iodine is packed. These sources ensure that Chile and Japan are the largest producers of iodine today. Alternatively, the brine may be treated with silver nitrate to precipitate out iodine as silver iodide, which is then decomposed by reaction with iron to form metallic silver and a solution of iron(II) iodide. The iodine is then liberated by displacement with chlorine. 
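As a sketch of the comproportionation step in the caliche process described above, the overall ionic equation (standard stoichiometry; actual plant conditions vary and are not specified here) is:
IO3− + 5 I− + 6 H+ → 3 I2 + 3 H2O
One iodate ion oxidises five iodide ions, so recycling part of the extracted iodate back into the bisulfite-reduced iodide solution liberates the elemental iodine that is then filtered off. 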
Applications About half of all produced iodine goes into various organoiodine compounds, another 15% remains as the pure element, another 15% is used to form potassium iodide, and another 15% for other inorganic iodine compounds. Among the major uses of iodine compounds are catalysts, animal feed supplements, stabilisers, dyes, colourants and pigments, pharmaceuticals, sanitation (as tincture of iodine), and photography; minor uses include smog inhibition, cloud seeding, and various uses in analytical chemistry. X-ray imaging As an element with high electron density and atomic number, iodine efficiently absorbs X-rays. X-ray radiocontrast agents are the top application for iodine. In this application, organoiodine compounds are injected intravenously. This application is often in conjunction with advanced X-ray techniques such as angiography and CT scanning. At present, all water-soluble radiocontrast agents rely on iodine-containing compounds. Iodine absorbs X-rays with energies less than 33.3 keV due to the photoelectric effect of the innermost electrons. Biocide Use of iodine as a biocide represents a major application of the element, ranked second by weight. Elemental iodine (I2) is used as an antiseptic in medicine. A number of water-soluble compounds, from triiodide (I3−, generated in situ by adding iodide to poorly water-soluble elemental iodine) to various iodophors, slowly decompose to release I2 when applied. Optical polarising films Thin-film-transistor liquid crystal displays rely on polarisation. The liquid crystal layer is sandwiched between two polarising films and illuminated from behind. The two films prevent light transmission unless the liquid crystal in the middle of the sandwich rotates the polarisation of the light. Iodine-impregnated polymer films are used in polarising optical components with the highest transmission and degree of polarisation. Co-catalyst Another significant use of iodine is as a cocatalyst for the production of acetic acid by the Monsanto and Cativa processes. In these technologies, hydroiodic acid converts the methanol feedstock into methyl iodide, which undergoes carbonylation. Hydrolysis of the resulting acetyl iodide regenerates hydroiodic acid and gives acetic acid. The majority of acetic acid is produced by these approaches. Nutrition Salts of iodide and iodate are used extensively in human and animal nutrition. This application reflects the status of iodide as an essential element, being required for the thyroid hormones. The production of ethylenediamine dihydroiodide, provided as a nutritional supplement for livestock, consumes a large portion of available iodine. Iodine is a component of iodised salt. A saturated solution of potassium iodide is used to treat acute thyrotoxicosis. It is also used to block uptake of iodine-131 in the thyroid gland (see isotopes section above), when this isotope is used as part of radiopharmaceuticals (such as iobenguane) that are not targeted to the thyroid or thyroid-type tissues. Others Inorganic iodides find specialised uses. Titanium, zirconium, hafnium, and thorium are purified by the Van Arkel–de Boer process, which involves the reversible formation of the tetraiodides of these elements. Silver iodide is a major ingredient of traditional photographic film. Thousands of kilograms of silver iodide are used annually for cloud seeding to induce rain. The organoiodine compound erythrosine is an important food colouring agent. Perfluoroalkyl iodides are precursors to important surfactants, such as perfluorooctanesulfonic acid. 
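As a rough, purely illustrative sketch of why iodinated contrast shows up so strongly in the X-ray imaging described above, the Beer–Lambert law can be evaluated for a centimetre of material; the attenuation coefficients below are assumed, order-of-magnitude values chosen only for illustration, not measured data for any specific agent or beam energy:
import math

def transmitted_fraction(mass_attenuation_cm2_per_g, density_g_per_cm3, thickness_cm):
    # Beer-Lambert law: I/I0 = exp(-(mu/rho) * rho * x)
    return math.exp(-mass_attenuation_cm2_per_g * density_g_per_cm3 * thickness_cm)

# Assumed, illustrative mass attenuation coefficients near 30 keV (cm^2/g):
MU_SOFT_TISSUE = 0.4      # assumption, roughly water-like
MU_IODINE_CONTRAST = 4.0  # assumption for a dilute iodinated solution

print(transmitted_fraction(MU_SOFT_TISSUE, 1.0, 1.0))      # about 0.67 of the beam transmitted
print(transmitted_fraction(MU_IODINE_CONTRAST, 1.2, 1.0))  # under 0.01 transmitted
The large difference in transmitted intensity is what produces contrast between iodine-filled vessels and the surrounding tissue. 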
Radioiodine is used as the radiolabel in investigating which ligands bind to which plant pattern recognition receptors (PRRs). An iodine-based thermochemical cycle has been evaluated for hydrogen production using heat from nuclear power. The cycle has three steps. At about 120 °C, iodine reacts with sulfur dioxide and water to give hydrogen iodide and sulfuric acid: I2 + SO2 + 2 H2O → 2 HI + H2SO4. After a separation stage, at about 830 °C, sulfuric acid splits into sulfur dioxide, water, and oxygen: 2 H2SO4 → 2 SO2 + 2 H2O + O2. Hydrogen iodide, at about 450 °C, decomposes to give hydrogen and the initial element, iodine: 2 HI → H2 + I2. The yield of the cycle (the ratio between the lower heating value of the produced hydrogen and the energy consumed for its production) is approximately 38%. At present, the cycle is not a competitive means of producing hydrogen. Spectroscopy The spectrum of the iodine molecule, I2, consists (not exclusively) of tens of thousands of sharp spectral lines in the wavelength range 500–700 nm. It is therefore a commonly used wavelength reference (secondary standard). When one of these lines is measured with a Doppler-free spectroscopic technique, the hyperfine structure of the iodine molecule reveals itself. A line is then resolved such that either 15 components (from even rotational quantum numbers, Jeven) or 21 components (from odd rotational quantum numbers, Jodd) are measurable. Caesium iodide and thallium-doped sodium iodide are used in crystal scintillators for the detection of gamma rays. The efficiency is high and energy dispersive spectroscopy is possible, but the resolution is rather poor. Chemical analysis The iodide and iodate anions can be used for quantitative volumetric analysis, for example in iodometry. Iodine and starch form a blue complex, and this reaction is often used to test for either starch or iodine and as an indicator in iodometry. The iodine test for starch is still used to detect counterfeit banknotes printed on starch-containing paper. The iodine value is the mass of iodine in grams that is consumed by 100 grams of a chemical substance, typically fats or oils. Iodine numbers are often used to determine the amount of unsaturation in fatty acids. This unsaturation is in the form of double bonds, which react with iodine compounds. Potassium tetraiodomercurate(II), K2HgI4, is also known as Nessler's reagent. It was once used as a sensitive spot test for ammonia. Similarly, Mayer's reagent (potassium tetraiodomercurate(II) solution) is used as a precipitating reagent to test for alkaloids. Aqueous alkaline iodine solution is used in the iodoform test for methyl ketones. Biological role Iodine is an essential element for life and, at atomic number Z = 53, is the heaviest element commonly needed by living organisms. (Lanthanum and the other lanthanides, as well as tungsten with Z = 74 and uranium with Z = 92, are used by a few microorganisms.) It is required for the synthesis of the growth-regulating thyroid hormones tetraiodothyronine and triiodothyronine (T4 and T3 respectively, named after their number of iodine atoms). A deficiency of iodine leads to decreased production of T3 and T4 and a concomitant enlargement of the thyroid tissue in an attempt to obtain more iodine, causing the disease goitre. The major form of thyroid hormone in the blood is tetraiodothyronine (T4), which has a longer half-life than triiodothyronine (T3). In humans, the ratio of T4 to T3 released into the blood is between 14:1 and 20:1. 
T4 is converted to the active T3 (three to four times more potent than T4) within cells by deiodinases (5'-iodinase). These are further processed by decarboxylation and deiodination to produce iodothyronamine (T1a) and thyronamine (T0a'). All three isoforms of the deiodinases are selenium-containing enzymes; thus dietary selenium is needed to produce the active triiodothyronine. Iodine accounts for 65% of the molecular weight of T4 and 59% of T3. About 15 to 20 mg of iodine is concentrated in thyroid tissue and hormones, but 70% of all iodine in the body is found in other tissues, including the mammary glands, eyes, gastric mucosa, thymus, cerebrospinal fluid, choroid plexus, arteries, cervix, and salivary glands. During pregnancy, the placenta is able to store and accumulate iodine. In the cells of those tissues, iodine enters directly via the sodium-iodide symporter (NIS). The action of iodine in mammary tissue is related to fetal and neonatal development, but its role in the other tissues is only partly understood. Dietary recommendations and intake The daily levels of intake recommended by the United States National Academy of Medicine are between 110 and 130 μg for infants up to 12 months, 90 μg for children up to eight years, 130 μg for children up to 13 years, 150 μg for adults, 220 μg for pregnant women and 290 μg for lactating women. The Tolerable Upper Intake Level (TUIL) for adults is 1,100 μg/day. This upper limit was assessed by analysing the effect of supplementation on thyroid-stimulating hormone. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR; AI and UL are defined the same as in the United States. For women and men ages 18 and older, the PRI for iodine is set at 150 μg/day; the PRI during pregnancy and lactation is 200 μg/day. For children aged 1–17 years, the PRI increases with age from 90 to 130 μg/day. These PRIs are comparable to the U.S. RDAs with the exception of that for lactation. The thyroid gland needs 70 μg/day of iodine to synthesise the requisite daily amounts of T4 and T3. The higher recommended daily allowance levels of iodine seem necessary for optimal function of a number of body systems, including the mammary glands, gastric mucosa, salivary glands, brain cells, choroid plexus, thymus, and arteries. Natural food sources of iodine include seafood, such as fish, seaweeds (such as kelp) and shellfish, as well as dairy products, eggs, meat, and vegetables, so long as the animals received enough iodine and the plants were grown on iodine-rich soil. Iodised salt is fortified with potassium iodate, a salt of iodine, potassium, and oxygen. As of 2000, the median intake of iodine from food in the United States was 240 to 300 μg/day for men and 190 to 210 μg/day for women. The general US population has adequate iodine nutrition, with lactating women and pregnant women having a mild risk of deficiency. In Japan, consumption was considered much higher, ranging between 5,280 and 13,800 μg/day of dietary iodine from the seaweeds wakame and kombu, eaten both directly and as umami extracts for soup stock and potato chips. However, new studies suggest that Japan's consumption is closer to 1,000–3,000 μg/day. The adult UL in Japan was last revised to 3,000 μg/day in 2015. 
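A rough worked example may help connect the iodised salt mentioned above with these intake figures. The fortification level used here is an assumption chosen for illustration (levels of a few tens of milligrams of iodine per kilogram of salt are typical, but they vary by country):
5 g salt/day × 30 mg iodine per kg salt = 0.005 kg × 30 mg/kg = 0.15 mg = 150 μg of iodine per day
which matches the 150 μg/day adult recommendation quoted above, before any iodine from food itself is counted. 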
After iodine fortification programs such as the iodisation of salt have been introduced, some cases of iodine-induced hyperthyroidism have been observed (the so-called Jod-Basedow phenomenon). The condition occurs mainly in people above 40 years of age, and the risk is higher when the underlying iodine deficiency is severe and the initial rise in iodine intake is high. Deficiency In areas where there is little iodine in the diet, typically remote inland and mountainous areas where no iodine-rich foods are eaten, iodine deficiency gives rise to hypothyroidism, symptoms of which are extreme fatigue, goitre, mental slowing, depression, low weight gain, and low basal body temperature. Iodine deficiency is the leading cause of preventable intellectual disability, a result that occurs primarily when babies or small children are rendered hypothyroidic by a lack of iodine. The addition of iodine to salt has largely eliminated this problem in wealthier areas, but iodine deficiency remains a serious public health problem in poorer areas today. Iodine deficiency is also a problem in certain areas of every continent. Information processing, fine motor skills, and visual problem solving are improved by iodine repletion in iodine-deficient people. Precautions Toxicity Elemental iodine (I2) is toxic if taken orally undiluted. The lethal dose for an adult human is 30 mg/kg, which is about 2.1–2.4 grams for a human weighing 70 to 80 kg (even though experiments on rats demonstrated that these animals could survive after eating a 14,000 mg/kg dose). Excess iodine is more cytotoxic in the presence of selenium deficiency. Iodine supplementation in selenium-deficient populations is problematic for this reason. The toxicity derives from its oxidising properties, through which it denatures proteins (including enzymes). Elemental iodine is also a skin irritant. Solutions with high elemental iodine concentration, such as tincture of iodine and Lugol's solution, are capable of causing tissue damage if used in prolonged cleaning or antisepsis; similarly, liquid povidone-iodine (Betadine) trapped against the skin has resulted in chemical burns in some reported cases. Occupational exposure The U.S. Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for iodine exposure in the workplace at 0.1 ppm (1 mg/m3) during an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.1 ppm (1 mg/m3) during an 8-hour workday. At levels of 2 ppm, iodine is immediately dangerous to life and health. Allergic reactions Some people develop a hypersensitivity to products and foods containing iodine. Applications of tincture of iodine or Betadine can cause rashes, sometimes severe. Parenteral use of iodine-based contrast agents (see above) can cause reactions ranging from a mild rash to fatal anaphylaxis. Such reactions have led to the misconception (widely held, even among physicians) that some people are allergic to iodine itself; even allergies to iodine-rich foods have been so construed. In fact, there has never been a confirmed report of a true iodine allergy, as an allergy to iodine or iodine salts is biologically impossible. Hypersensitivity reactions to products and foods containing iodine are apparently related to their other molecular components; thus, a person who has demonstrated an allergy to one food or product containing iodine may not have an allergic reaction to another. 
Patients with various food allergies (fish, shellfish, eggs, milk, seaweeds such as kelp, kombu and wakame, meats, and vegetables) do not have an increased risk of contrast medium hypersensitivity. The patient's allergy history nevertheless remains relevant. US DEA List I status Phosphorus reduces iodine to hydroiodic acid, which is a reagent effective for reducing ephedrine and pseudoephedrine to methamphetamine. For this reason, iodine was designated by the United States Drug Enforcement Administration as a List I precursor chemical under 21 CFR 1310.02. 
Iridium is a chemical element; it has symbol Ir and atomic number 77. A very hard, brittle, silvery-white transition metal of the platinum group, it is considered the second-densest naturally occurring metal (after osmium) with a density of as defined by experimental X-ray crystallography. 191Ir and 193Ir are the only two naturally occurring isotopes of iridium, as well as the only stable isotopes; the latter is the more abundant. It is one of the most corrosion-resistant metals, even at temperatures as high as . Iridium was discovered in 1803 in the acid-insoluble residues of platinum ores by the English chemist Smithson Tennant. The name iridium, derived from the Greek word iris (rainbow), refers to the various colors of its compounds. Iridium is one of the rarest elements in Earth's crust, with an estimated annual production of only in 2023. The dominant uses of iridium are the metal itself and its alloys, as in high-performance spark plugs, crucibles for recrystallization of semiconductors at high temperatures, and electrodes for the production of chlorine in the chloralkali process. Important compounds of iridium are chlorides and iodides in industrial catalysis. Iridium is a component of some OLEDs. Iridium is found in meteorites in much higher abundance than in the Earth's crust. For this reason, the unusually high abundance of iridium in the clay layer at the Cretaceous–Paleogene boundary gave rise to the Alvarez hypothesis that the impact of a massive extraterrestrial object caused the extinction of non-avian dinosaurs and many other species 66 million years ago, now known to be produced by the impact that formed the Chicxulub crater. Similarly, an iridium anomaly in core samples from the Pacific Ocean suggested the Eltanin impact of about 2.5 million years ago. Characteristics Physical properties A member of the platinum group metals, iridium is white, resembling platinum, but with a slight yellowish cast. Because of its hardness, brittleness, and very high melting point, solid iridium is difficult to machine, form, or work; thus powder metallurgy is commonly employed instead. It is the only metal to maintain good mechanical properties in air at temperatures above . It has the 10th highest boiling point among all elements and becomes a superconductor at temperatures below . Iridium's modulus of elasticity is the second-highest among the metals, being surpassed only by osmium. This, together with a high shear modulus and a very low figure for Poisson's ratio (the relationship of longitudinal to lateral strain), indicate the high degree of stiffness and resistance to deformation that have rendered its fabrication into useful components a matter of great difficulty. Despite these limitations and iridium's high cost, a number of applications have developed where mechanical strength is an essential factor in some of the extremely severe conditions encountered in modern technology. The measured density of iridium is only slightly lower (by about 0.12%) than that of osmium, the densest metal known. Some ambiguity occurred regarding which of the two elements was denser, due to the small size of the difference in density and difficulties in measuring it accurately, but, with increased accuracy in factors used for calculating density, X-ray crystallographic data yielded densities of for iridium and for osmium. 
Iridium is extremely brittle, to the point of being hard to weld because the heat-affected zone cracks, but it can be made more ductile by addition of small quantities of titanium and zirconium (0.2% of each apparently works well). The Vickers hardness of pure platinum is 56 HV, whereas platinum with 50% of iridium can reach over 500 HV. Chemical properties Iridium is the most corrosion-resistant metal known. It is not attacked by acids, including aqua regia, but it can be dissolved in concentrated hydrochloric acid in the presence of sodium perchlorate. In the presence of oxygen, it reacts with cyanide salts. Traditional oxidants also react, including the halogens and oxygen at higher temperatures. Iridium also reacts directly with sulfur at atmospheric pressure to yield iridium disulfide. Isotopes Iridium has two naturally occurring stable isotopes, 191Ir and 193Ir, with natural abundances of 37.3% and 62.7%, respectively. At least 37 radioisotopes have also been synthesized, ranging in mass number from 164 to 202. 192Ir, which falls between the two stable isotopes, is the most stable radioisotope, with a half-life of 73.827 days, and finds application in brachytherapy and in industrial radiography, particularly for nondestructive testing of welds in steel in the oil and gas industries; iridium-192 sources have been involved in a number of radiological accidents. Three other isotopes have half-lives of at least a day—188Ir, 189Ir, and 190Ir. Isotopes with masses below 191 decay by some combination of β+ decay, α decay, and (rare) proton emission, with the exception of 189Ir, which decays by electron capture. Synthetic isotopes heavier than 191 decay by β− decay, although 192Ir also has a minor electron capture decay path. All known isotopes of iridium were discovered between 1934 and 2008, with the most recent discoveries being 200–202Ir. At least 32 metastable isomers have been characterized, ranging in mass number from 164 to 197. The most stable of these is 192m2Ir, which decays by isomeric transition with a half-life of 241 years, making it more stable than any of iridium's synthetic isotopes in their ground states. The least stable isomer is 190m3Ir with a half-life of only 2 μs. The isotope 191Ir was the first one of any element to be shown to present a Mössbauer effect. This renders it useful for Mössbauer spectroscopy for research in physics, chemistry, biochemistry, metallurgy, and mineralogy. Chemistry Oxidation states Iridium forms compounds in oxidation states between −3 and +9, but the most common oxidation states are +1, +2, +3, and +4. Well-characterized compounds containing iridium in the +6 oxidation state include and the oxides and . iridium(VIII) oxide () was generated under matrix isolation conditions at 6 K in argon. The highest oxidation state (+9), which is also the highest recorded for any element, is found in gaseous . Binary compounds Iridium does not form binary hydrides. Only one binary oxide is well-characterized: iridium dioxide, . It is a blue black solid that adopts the fluorite structure. A sesquioxide, , has been described as a blue-black powder, which is oxidized to by . The corresponding disulfides, diselenides, sesquisulfides, and sesquiselenides are known, as well as . Binary trihalides, , are known for all of the halogens. For oxidation states +4 and above, only the tetrafluoride, pentafluoride and hexafluoride are known. Iridium hexafluoride, , is a volatile yellow solid, composed of octahedral molecules. It decomposes in water and is reduced to . 
Iridium pentafluoride is also a strong oxidant, but it is a tetramer, , formed by four corner-sharing octahedra. Complexes Iridium has extensive coordination chemistry. Iridium in its complexes is always low-spin. Ir(III) and Ir(IV) generally form octahedral complexes. Polyhydride complexes are known for the +5 and +3 oxidation states. One example is (iPr = isopropyl). The ternary hydride is believed to contain both the and the 18-electron anion. Iridium also forms oxyanions with oxidation states +4 and +5. and can be prepared from the reaction of potassium oxide or potassium superoxide with iridium at high temperatures. Such solids are not soluble in conventional solvents. Just like many elements, iridium forms important chloride complexes. Hexachloroiridic (IV) acid, , and its ammonium salt are common iridium compounds from both industrial and preparative perspectives. They are intermediates in the purification of iridium and used as precursors for most other iridium compounds, as well as in the preparation of anode coatings. The ion has an intense dark brown color, and can be readily reduced to the lighter-colored and vice versa. Iridium trichloride, , which can be obtained in anhydrous form from direct oxidation of iridium powder by chlorine at 650 °C, or in hydrated form by dissolving in hydrochloric acid, is often used as a starting material for the synthesis of other Ir(III) compounds. Another compound used as a starting material is potassium hexachloroiridate(III), . Organoiridium chemistry Organoiridium compounds contain iridium–carbon bonds. Early studies identified the very stable tetrairidium dodecacarbonyl, . In this compound, each of the iridium atoms is bonded to the other three, forming a tetrahedral cluster. The discovery of Vaska's complex () opened the door for oxidative addition reactions, a process fundamental to useful reactions. For example, Crabtree's catalyst, a homogeneous catalyst for hydrogenation reactions. Iridium complexes played a pivotal role in the development of Carbon–hydrogen bond activation (C–H activation), which promises to allow functionalization of hydrocarbons, which are traditionally regarded as unreactive. History Platinum group The discovery of iridium is intertwined with that of platinum and the other metals of the platinum group. The first European reference to platinum appears in 1557 in the writings of the Italian humanist Julius Caesar Scaliger as a description of an unknown noble metal found between Darién and Mexico, "which no fire nor any Spanish artifice has yet been able to liquefy". From their first encounters with platinum, the Spanish generally saw the metal as a kind of impurity in gold, and it was treated as such. It was often simply thrown away, and there was an official decree forbidding the adulteration of gold with platinum impurities. In 1735, Antonio de Ulloa and Jorge Juan y Santacilia saw Native Americans mining platinum while the Spaniards were travelling through Colombia and Peru for eight years. Ulloa and Juan found mines with the whitish metal nuggets and took them home to Spain. Ulloa returned to Spain and established the first mineralogy lab in Spain and was the first to systematically study platinum, which was in 1748. His historical account of the expedition included a description of platinum as being neither separable nor calcinable. Ulloa also anticipated the discovery of platinum mines. After publishing the report in 1748, Ulloa did not continue to investigate the new metal. 
In 1758, he was sent to superintend mercury mining operations in Huancavelica. In 1741, Charles Wood, a British metallurgist, found various samples of Colombian platinum in Jamaica, which he sent to William Brownrigg for further investigation. In 1750, after studying the platinum sent to him by Wood, Brownrigg presented a detailed account of the metal to the Royal Society, stating that he had seen no mention of it in any previous accounts of known minerals. Brownrigg also made note of platinum's extremely high melting point and refractory metal-like behaviour toward borax. Other chemists across Europe soon began studying platinum, including Andreas Sigismund Marggraf, Torbern Bergman, Jöns Jakob Berzelius, William Lewis, and Pierre Macquer. In 1752, Henrik Scheffer published a detailed scientific description of the metal, which he referred to as "white gold", including an account of how he succeeded in fusing platinum ore with the aid of arsenic. Scheffer described platinum as being less pliable than gold, but with similar resistance to corrosion. Discovery Chemists who studied platinum dissolved it in aqua regia (a mixture of hydrochloric and nitric acids) to create soluble salts. They always observed a small amount of a dark, insoluble residue. Joseph Louis Proust thought that the residue was graphite. The French chemists Victor Collet-Descotils, Antoine François, comte de Fourcroy, and Louis Nicolas Vauquelin also observed the black residue in 1803, but did not obtain enough for further experiments. In 1803 British scientist Smithson Tennant (1761–1815) analyzed the insoluble residue and concluded that it must contain a new metal. Vauquelin treated the powder alternately with alkali and acids and obtained a volatile new oxide, which he believed to be of this new metal—which he named ptene, from the Greek word ptēnós, "winged". Tennant, who had the advantage of a much greater amount of residue, continued his research and identified the two previously undiscovered elements in the black residue, iridium and osmium. He obtained dark red crystals (probably of ]·n) by a sequence of reactions with sodium hydroxide and hydrochloric acid. He named iridium after Iris (), the Greek winged goddess of the rainbow and the messenger of the Olympian gods, because many of the salts he obtained were strongly colored. Discovery of the new elements was documented in a letter to the Royal Society on June 21, 1804. Metalworking and applications British scientist John George Children was the first to melt a sample of iridium in 1813 with the aid of "the greatest galvanic battery that has ever been constructed" (at that time). The first to obtain high-purity iridium was Robert Hare in 1842. He found it had a density of around and noted the metal is nearly immalleable and very hard. The first melting in appreciable quantity was done by Henri Sainte-Claire Deville and Jules Henri Debray in 1860. They required burning more than of pure and gas for each of iridium. These extreme difficulties in melting the metal limited the possibilities for handling iridium. John Isaac Hawkins was looking to obtain a fine and hard point for fountain pen nibs, and in 1834 managed to create an iridium-pointed gold pen. In 1880, John Holland and William Lofland Dudley were able to melt iridium by adding phosphorus and patented the process in the United States; British company Johnson Matthey later stated they had been using a similar process since 1837 and had already presented fused iridium at a number of World Fairs. 
The first use of an alloy of iridium with ruthenium in thermocouples was made by Otto Feussner in 1933. These allowed for the measurement of high temperatures in air up to . In Munich, Germany in 1957 Rudolf Mössbauer, in what has been called one of the "landmark experiments in twentieth-century physics", discovered the resonant and recoil-free emission and absorption of gamma rays by atoms in a solid metal sample containing only 191Ir. This phenomenon, known as the Mössbauer effect resulted in the awarding of the Nobel Prize in Physics in 1961, at the age 32, just three years after he published his discovery. Occurrence Along with many elements having atomic weights higher than that of iron, iridium is only naturally formed by the r-process (rapid neutron capture) in neutron star mergers and possibly rare types of supernovae. Iridium is one of the nine least abundant stable elements in Earth's crust, having an average mass fraction of 0.001 ppm in crustal rock; gold is 4 times more abundant, platinum is 10 times more abundant, silver and mercury are 80 times more abundant. Osmium, tellurium, ruthenium, rhodium and rhenium are about as abundant as iridium. In contrast to its low abundance in crustal rock, iridium is relatively common in meteorites, with concentrations of 0.5 ppm or more. The overall concentration of iridium on Earth is thought to be much higher than what is observed in crustal rocks, but because of the density and siderophilic ("iron-loving") character of iridium, it descended below the crust and into Earth's core when the planet was still molten. Iridium is found in nature as an uncombined element or in natural alloys, especially the iridium–osmium alloys osmiridium (osmium-rich) and iridosmium (iridium-rich). In nickel and copper deposits, the platinum group metals occur as sulfides, tellurides, antimonides, and arsenides. In all of these compounds, platinum can be exchanged with a small amount of iridium or osmium. As with all of the platinum group metals, iridium can be found naturally in alloys with raw nickel or raw copper. A number of iridium-dominant minerals, with iridium as the species-forming element, are known. They are exceedingly rare and often represent the iridium analogues of the above-given ones. The examples are irarsite and cuproiridsite, to mention some. Within Earth's crust, iridium is found at highest concentrations in three types of geologic structure: igneous deposits (crustal intrusions from below), impact craters, and deposits reworked from one of the former structures. The largest known primary reserves are in the Bushveld igneous complex in South Africa, (near the largest known impact structure, the Vredefort impact structure) though the large copper–nickel deposits near Norilsk in Russia, and the Sudbury Basin (also an impact crater) in Canada are also significant sources of iridium. Smaller reserves are found in the United States. Iridium is also found in secondary deposits, combined with platinum and other platinum group metals in alluvial deposits. The alluvial deposits used by pre-Columbian people in the Chocó Department of Colombia are still a source for platinum-group metals. As of 2003, world reserves have not been estimated. Marine oceanography Iridium is found within marine organisms, sediments, and the water column. The abundance of iridium in seawater and organisms is relatively low, as it does not readily form chloride complexes. 
The abundance in organisms is about 20 parts per trillion, or about five orders of magnitude less than in sedimentary rocks at the Cretaceous–Paleogene (K–T) boundary. The concentration of iridium in seawater and marine sediment is sensitive to marine oxygenation, seawater temperature, and various geological and biological processes. Iridium in sediments can come from cosmic dust, volcanoes, precipitation from seawater, microbial processes, or hydrothermal vents, and its abundance can be strongly indicative of the source. It tends to associate with other ferrous metals in manganese nodules. Iridium is one of the characteristic elements of extraterrestrial rocks, and, along with osmium, can be used as a tracer element for meteoritic material in sediment. For example, core samples from the Pacific Ocean with elevated iridium levels suggested the Eltanin impact of about 2.5 million years ago. Some of the mass extinctions, such as the Cretaceous extinction, can be identified by anomalously high concentrations of iridium in sediment, and these can be linked to major asteroid impacts. Cretaceous–Paleogene boundary presence The Cretaceous–Paleogene boundary of 66 million years ago, marking the temporal border between the Cretaceous and Paleogene periods of geological time, was identified by a thin stratum of iridium-rich clay. A team led by Luis Alvarez proposed in 1980 an extraterrestrial origin for this iridium, attributing it to an asteroid or comet impact. Their theory, known as the Alvarez hypothesis, is now widely accepted to explain the extinction of the non-avian dinosaurs. A large buried impact crater structure with an estimated age of about 66 million years was later identified under what is now the Yucatán Peninsula (the Chicxulub crater). Dewey M. McLean and others argue that the iridium may have been of volcanic origin instead, because Earth's core is rich in iridium, and active volcanoes such as Piton de la Fournaise, in the island of Réunion, are still releasing iridium. Production Worldwide production of iridium was about in 2018. The price is high and varying (see table). Illustrative factors that affect the price include oversupply of Ir crucibles and changes in LED technology. Platinum metals occur together as dilute ores. Iridium is one of the rarer platinum metals: for every 190 tonnes of platinum obtained from ores, only 7.5 tonnes of iridium is isolated. To separate the metals, they must first be brought into solution. Two methods for rendering Ir-containing ores soluble are (i) fusion of the solid with sodium peroxide followed by extraction of the resulting glass in aqua regia and (ii) extraction of the solid with a mixture of chlorine with hydrochloric acid. From soluble extracts, iridium is separated by precipitating solid ammonium hexachloroiridate () or by extracting with organic amines. The first method is similar to the procedure Tennant and Wollaston used for their original separation. The second method can be planned as continuous liquid–liquid extraction and is therefore more suitable for industrial scale production. In either case, the product, an iridium chloride salt, is reduced with hydrogen, yielding the metal as a powder or sponge, which is amenable to powder metallurgy techniques. Iridium is also obtained commercially as a by-product from nickel and copper mining and processing. 
During electrorefining of copper and nickel, noble metals such as silver, gold and the platinum group metals as well as selenium and tellurium settle to the bottom of the cell as anode mud, which forms the starting point for their extraction. Applications Due to iridium's resistance to corrosion it has industrial applications. The main areas of use are electrodes for producing chlorine and other corrosive products, OLEDs, crucibles, catalysts (e.g. acetic acid), and ignition tips for spark plugs. Metal and alloys Resistance to heat and corrosion are the bases for several uses of iridium and its alloys. Owing to its high melting point, hardness, and corrosion resistance, iridium is used to make crucibles. Such crucibles are used in the Czochralski process to produce oxide single-crystals (such as sapphires) for use in computer memory devices and in solid state lasers. The crystals, such as gadolinium gallium garnet and yttrium gallium garnet, are grown by melting pre-sintered charges of mixed oxides under oxidizing conditions at temperatures up to . Certain long-life aircraft engine parts are made of an iridium alloy, and an iridium–titanium alloy is used for deep-water pipes because of its corrosion resistance. Iridium is used for multi-pored spinnerets, through which a plastic polymer melt is extruded to form fibers, such as rayon. Osmium–iridium is used for compass bearings and for balances. Because of their resistance to arc erosion, iridium alloys are used by some manufacturers for the centre electrodes of spark plugs, and iridium-based spark plugs are particularly used in aviation. Catalysis Iridium compounds are used as catalysts in the Cativa process for carbonylation of methanol to produce acetic acid. Iridium complexes are often active for asymmetric hydrogenation both by traditional hydrogenation. and transfer hydrogenation. This property is the basis of the industrial route to the chiral herbicide (S)-metolachlor. As practiced by Syngenta on the scale of 10,000 tons/year, the complex [Ir(COD)Cl]2 in the presence of Josiphos ligands. Medical imaging The radioisotope iridium-192 is one of the two most important sources of energy for use in industrial γ-radiography for non-destructive testing of metals. Additionally, is used as a source of gamma radiation for the treatment of cancer using brachytherapy, a form of radiotherapy where a sealed radioactive source is placed inside or next to the area requiring treatment. Specific treatments include high-dose-rate prostate brachytherapy, biliary duct brachytherapy, and intracavitary cervix brachytherapy. Iridium-192 is normally produced by neutron activation of isotope iridium-191 in natural-abundance iridium metal. Photocatalysis and OLEDs Iridium complexes are key components of white OLEDs. Similar complexes are used in photocatalysis. Scientific An alloy of 90% platinum and 10% iridium was used in 1889 to construct the International Prototype Meter and kilogram mass, kept by the International Bureau of Weights and Measures near Paris. The meter bar was replaced as the definition of the fundamental unit of length in 1960 by a line in the atomic spectrum of krypton, but the kilogram prototype remained the international standard of mass until 20 May 2019, when the kilogram was redefined in terms of the Planck constant. Historical Iridium–osmium alloys were used in fountain pen nib tips. The first major use of iridium was in 1834 in nibs mounted on gold. 
Starting in 1944, the Parker 51 fountain pen was fitted with a nib tipped by a ruthenium and iridium alloy (with 3.8% iridium). The tip material in modern fountain pens is still conventionally called "iridium", although there is seldom any iridium in it; other metals such as ruthenium, osmium, and tungsten have taken its place. An iridium–platinum alloy was used for the touch holes or vent pieces of cannon. According to a report of the Paris Exhibition of 1867, one of the pieces being exhibited by Johnson and Matthey "has been used in a Whitworth gun for more than 3000 rounds, and scarcely shows signs of wear yet. Those who know the constant trouble and expense which are occasioned by the wearing of the vent-pieces of cannon when in active service, will appreciate this important adaptation". The pigment iridium black, which consists of very finely divided iridium, is used for painting porcelain an intense black; it was said that "all other porcelain black colors appear grey by the side of it". Precautions and hazards Iridium in bulk metallic form is not biologically important or hazardous to health due to its lack of reactivity with tissues; there are only about 20 parts per trillion of iridium in human tissue. Like most metals, finely divided iridium powder can be hazardous to handle, as it is an irritant and may ignite in air. Iridium is relatively unhazardous otherwise, with the only effect of Iridium ingestion being irritation of the digestive tract. However, soluble salts, such as the iridium halides, could be hazardous due to elements other than iridium or due to iridium itself. At the same time, most iridium compounds are insoluble, which makes absorption into the body difficult. A radioisotope of iridium, , is dangerous, like other radioactive isotopes. The only reported injuries related to iridium concern accidental exposure to radiation from used in brachytherapy. High-energy gamma radiation from can increase the risk of cancer. External exposure can cause burns, radiation poisoning, and death. Ingestion of 192Ir can burn the linings of the stomach and the intestines. 192Ir, 192mIr, and 194mIr tend to deposit in the liver, and can pose health hazards from both gamma and beta radiation. Notes References External links Iridium at The Periodic Table of Videos (University of Nottingham) Iridium in Encyclopædia Britannica Chemical elements Transition metals Precious metals Noble metals Impact event minerals Meteorite minerals Native element minerals Chemical elements with face-centered cubic structure Platinum-group metals
Iridium
[ "Physics" ]
5,864
[ "Chemical elements", "Atoms", "Matter" ]
14,773
https://en.wikipedia.org/wiki/Information%20theory
Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and formalized by Claude Shannon in the 1940s, though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It is at the intersection of electronic engineering, mathematics, statistics, computer science, neurobiology, physics, and electrical engineering. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security. Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files), and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones and the development of the Internet and artificial intelligence. The theory has also found applications in other areas, including statistical inference, cryptography, neurobiology, perception, signal processing, linguistics, the evolution and function of molecular codes (bioinformatics), thermal physics, molecular dynamics, black holes, quantum computing, information retrieval, intelligence gathering, plagiarism detection, pattern recognition, anomaly detection, the analysis of music, art creation, imaging system design, study of outer space, the dimensionality of space, and epistemology. Overview Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication, in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent. Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms (both codes and ciphers). 
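As a concrete check of the coin-versus-die comparison above, the following Python sketch computes the entropy of both distributions. The function name and the base-2 logarithm are illustrative choices, not part of any standard formulation.

```python
import math

def entropy(probabilities, base=2):
    """Shannon entropy of a discrete distribution, in units set by `base`."""
    return -sum(p * math.log(p, base) for p in probabilities if p > 0)

fair_coin = [0.5, 0.5]   # two equally likely outcomes
fair_die = [1 / 6] * 6   # six equally likely outcomes

print(entropy(fair_coin))  # 1.0 bit
print(entropy(fair_die))   # about 2.585 bits, i.e. log2(6)
```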
Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis, such as the unit ban. Historical background The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. Historian James Gleick rated the paper as the most important development of 1948, noting that the paper was "even more profound and more fundamental" than the transistor. He came to be known as the "father of information theory". Shannon outlined some of his initial ideas of information theory as early as 1939 in a letter to Vannevar Bush. Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation (recalling the Boltzmann constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as , where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which since has sometimes been called the hartley in his honor as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German second world war Enigma ciphers. Much of the mathematics behind information theory with events of different probabilities were developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory. In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion: "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point." With it came the ideas of: the information entropy and redundancy of a source, and its relevance through the source coding theorem; the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem; the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; as well as the bit—a new way of seeing the most fundamental unit of information. Quantities of information Information theory is based on probability theory and statistics, where quantified information is usually described in terms of bits. 
Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called entropy, which forms the building block of many other measures. Entropy allows quantification of measure of information in a single random variable. Another useful concept is mutual information defined on two random variables, which describes the measure of information in common between those variables, which can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit or shannon, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm. In what follows, an expression of the form is considered by convention to be equal to zero whenever . This is justified because for any logarithmic base. Entropy of an information source Based on the probability mass function of each source symbol to be communicated, the Shannon entropy , in units of bits (per symbol), is given by where is the probability of occurrence of the -th possible value of the source symbol. This equation gives the entropy in the units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base , where is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol. Intuitively, the entropy of a discrete random variable is a measure of the amount of uncertainty associated with the value of when only its distribution is known. The entropy of a source that emits a sequence of symbols that are independent and identically distributed (iid) is bits (per message of symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length will be less than . If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If is the set of all messages that could be, and is the probability of some , then the entropy, , of is defined: (Here, is the self-information, which is the entropy contribution of an individual message, and is the expected value.) 
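The effect of the logarithmic base on the numerical value of entropy can be seen directly. The sketch below evaluates a made-up four-symbol source in bits, nats, and hartleys; the example probabilities are invented purely for illustration.

```python
import math

def source_entropy(pmf, base=2):
    """H = -sum(p_i * log(p_i)); base 2 gives bits, e gives nats, 10 gives hartleys."""
    return -sum(p * math.log(p, base) for p in pmf if p > 0)

pmf = [0.5, 0.25, 0.125, 0.125]   # a hypothetical four-symbol source

print(source_entropy(pmf, base=2))       # 1.75 bits per symbol
print(source_entropy(pmf, base=math.e))  # about 1.213 nats per symbol
print(source_entropy(pmf, base=10))      # about 0.527 hartleys per symbol
```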
A property of entropy is that it is maximized when all the messages in the message space are equiprobable ; i.e., most unpredictable, in which case . The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit: Joint entropy The of two discrete random variables and is merely the entropy of their pairing: . This implies that if and are independent, then their joint entropy is the sum of their individual entropies. For example, if represents the position of a chess piece— the row and the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece. Despite similar notation, joint entropy should not be confused with . Conditional entropy (equivocation) The or conditional uncertainty of given random variable (also called the equivocation of about ) is the average conditional entropy over : Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that: Mutual information (transinformation) Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of relative to is given by: where (Specific mutual Information) is the pointwise mutual information. A basic property of the mutual information is that That is, knowing Y, we can save an average of bits in encoding X compared to not knowing Y. Mutual information is symmetric: Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X: In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution: Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution. Kullback–Leibler divergence (information gain) The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution , and an arbitrary probability distribution . If we compress data in a manner that assumes is the distribution underlying some data, when, in reality, is the correct distribution, the Kullback–Leibler divergence is the number of average additional bits per datum necessary for compression. It is thus defined Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric). 
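For illustration, the following sketch computes the Kullback–Leibler divergence just defined and also checks the identity quoted above that mutual information is the divergence from the product of the marginal distributions to the joint distribution. The particular distributions are invented for the example.

```python
import math

def kl_divergence(p, q):
    """D(P || Q) in bits; assumes q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# The divergence is not symmetric: D(P||Q) generally differs from D(Q||P).
P = [0.5, 0.25, 0.25]
Q = [1 / 3, 1 / 3, 1 / 3]
print(kl_divergence(P, Q), kl_divergence(Q, P))   # about 0.085 vs 0.082 bits

# Mutual information as the divergence from the product of the marginals
# to the actual joint distribution (the identity quoted above).
joint = [0.4, 0.1, 0.1, 0.4]              # p(x, y) over (0,0), (0,1), (1,0), (1,1)
marginal_x = [0.5, 0.5]
marginal_y = [0.5, 0.5]
product = [px * py for px in marginal_x for py in marginal_y]
print(kl_divergence(joint, product))      # about 0.278 bits of mutual information
```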
Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution . If Alice knows the true distribution , while Bob believes (has a prior) that the distribution is , then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him. Directed Information Directed information, , is an information theory measure that quantifies the information flow from the random process to the random process . The term directed information was coined by James Massey and is defined as , where is the conditional mutual information . In contrast to mutual information, directed information is not symmetric. The measures the information bits that are transmitted causally from to . The Directed information has many applications in problems where causality plays an important role such as capacity of channel with feedback, capacity of discrete memoryless networks with feedback, gambling with causal side information, compression with causal side information, real-time control communication settings, and in statistical physics. Other quantities Other important information theoretic quantities include the Rényi entropy and the Tsallis entropy (generalizations of the concept of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Also, pragmatic information has been proposed as a measure of how much information has been used in making a decision. Coding theory Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source. Data compression (source coding): There are two formulations for the compression problem: lossless data compression: the data must be reconstructed exactly; lossy data compression: allocates bits needed to reconstruct the data, within a specified fidelity level measured by a distortion function. This subset of information theory is called rate–distortion theory. Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel. This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. 
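The "additional bits per datum" reading of the divergence can be made concrete: encoding data drawn from P with a code optimized for Q costs, on average, the entropy of P plus D(P‖Q) extra bits per symbol. The sketch below checks this standard identity numerically on invented distributions.

```python
import math

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    """Average bits per symbol when data from P is coded optimally for Q."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def kl(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

P = [0.7, 0.2, 0.1]      # the "true" distribution
Q = [1 / 3, 1 / 3, 1 / 3]  # the assumed (wrong) distribution

print(cross_entropy(P, Q))     # about 1.585 bits per symbol
print(entropy(P) + kl(P, Q))   # the same value: H(P) + D(P || Q)
```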
Source theory Any process that generates successive messages can be considered a of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory. Rate Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is: that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is: that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result. The information rate is defined as: It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of . Channel capacity Communications over a channel is the primary motivation of information theory. However, channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality. Consider the communications process over a discrete channel. A simple model of the process is shown below: Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let be the conditional probability distribution function of Y given X. We will consider to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of , the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the and is given by: This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error. Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity. Capacity of particular channel models A continuous-time analog communications channel subject to Gaussian noise—see Shannon–Hartley theorem. A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of bits per channel use, where is the binary entropy function to the base-2 logarithm: A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel. 
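For the binary symmetric channel described above, the stated capacity C = 1 − Hb(p) can be evaluated directly; the crossover probabilities below are arbitrary examples.

```python
import math

def binary_entropy(p):
    """Hb(p) in bits; Hb(0) and Hb(1) are 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.11, 0.5):
    print(p, bsc_capacity(p))
# 0.0  -> 1.0 bit per use (a noiseless channel)
# 0.11 -> about 0.5 bit per use
# 0.5  -> 0.0 (the output is statistically independent of the input)
```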
The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is bits per channel use. Channels with memory and directed information In practice many channels have memory. Namely, at time the channel is given by the conditional probability. It is often more comfortable to use the notation and the channel become . In such a case the capacity is given by the mutual information rate when there is no feedback available and the Directed information rate in the case that either there is feedback or not (if there is no feedback the directed information equals the mutual information). Fungible information Fungible information is the information for which the means of encoding is not important. Classical information theorists and computer scientists are mainly concerned with information of this sort. It is sometimes referred as speakable information. Applications to other fields Intelligence uses and secrecy applications Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability. Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods comes from the assumption that no known attack can break them in a practical amount of time. Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material. Pseudorandom number generation Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. 
Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor and so for cryptography uses. Seismic exploration One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods. Semiotics Semioticians and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics. Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing." Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones. Integrated process organization of neural information Quantitative information theoretic methods have been applied in cognitive science to analyze the integrated process organization of neural information in the context of the binding problem in cognitive neuroscience. In this context, either an information-theoretical measure, such as (Gerald Edelman and Giulio Tononi's functional clustering model and dynamic core hypothesis (DCH)) or (Tononi's integrated information theory (IIT) of consciousness), is defined (on the basis of a reentrant process organization, i.e. the synchronization of neurophysiological activity between groups of neuronal populations), or the measure of the minimization of free energy on the basis of statistical methods (Karl J. Friston's free energy principle (FEP), an information-theoretical measure which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the Bayesian brain hypothesis). Miscellaneous applications Information theory also has applications in the search for extraterrestrial intelligence, black holes, bioinformatics, and gambling. See also Algorithmic probability Bayesian inference Communication theory Constructor theory – a generalization of information theory that includes quantum information Formal science Inductive probability Info-metrics Minimum message length Minimum description length Philosophy of information Applications Active networking Cryptanalysis Cryptography Cybernetics Entropy in thermodynamics and information theory Gambling Intelligence (information gathering) Seismic exploration History Hartley, R.V.L. History of information theory Shannon, C.E. Timeline of information theory Yockey, H.P. 
Andrey Kolmogorov Theory Coding theory Detection theory Estimation theory Fisher information Information algebra Information asymmetry Information field theory Information geometry Information theory and measure theory Kolmogorov complexity List of unsolved problems in information theory Logic of information Network coding Philosophy of information Quantum information science Source coding Concepts Ban (unit) Channel capacity Communication channel Communication source Conditional entropy Covert channel Data compression Decoder Differential entropy Fungible information Information fluctuation complexity Information entropy Joint entropy Kullback–Leibler divergence Mutual information Pointwise mutual information (PMI) Receiver (information theory) Redundancy Rényi entropy Self-information Unicity distance Variety Hamming distance Perplexity References Further reading The classic work Shannon, C.E. (1948), "A Mathematical Theory of Communication", Bell System Technical Journal, 27, pp. 379–423 & 623–656, July & October, 1948. PDF. Notes and other formats. R.V.L. Hartley, "Transmission of Information", Bell System Technical Journal, July 1928 Andrey Kolmogorov (1968), "Three approaches to the quantitative definition of information" in International Journal of Computer Mathematics, 2, pp. 157–168. Other journal articles J. L. Kelly Jr., Princeton, "A New Interpretation of Information Rate" Bell System Technical Journal, Vol. 35, July 1956, pp. 917–26. R. Landauer, IEEE.org, "Information is Physical" Proc. Workshop on Physics and Computation PhysComp'92 (IEEE Comp. Sci.Press, Los Alamitos, 1993) pp. 1–4. Textbooks on information theory Alajaji, F. and Chen, P.N. An Introduction to Single-User Information Theory. Singapore: Springer, 2018. Arndt, C. Information Measures, Information and its Description in Science and Engineering (Springer Series: Signals and Communication Technology), 2004, Gallager, R. Information Theory and Reliable Communication. New York: John Wiley and Sons, 1968. Goldman, S. Information Theory. New York: Prentice Hall, 1953. New York: Dover 1968 , 2005 Csiszar, I, Korner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems Akademiai Kiado: 2nd edition, 1997. MacKay, David J. C. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. Mansuripur, M. Introduction to Information Theory. New York: Prentice Hall, 1987. McEliece, R. The Theory of Information and Coding. Cambridge, 2002. Pierce, JR. "An introduction to information theory: symbols, signals and noise". Dover (2nd Edition). 1961 (reprinted by Dover 1980). Stone, JV. Chapter 1 of book "Information Theory: A Tutorial Introduction", University of Sheffield, England, 2014. . Yeung, RW. A First Course in Information Theory Kluwer Academic/Plenum Publishers, 2002. . Yeung, RW. Information Theory and Network Coding Springer 2008, 2002. Other books Leon Brillouin, Science and Information Theory, Mineola, N.Y.: Dover, [1956, 1962] 2004. A. I. Khinchin, Mathematical Foundations of Information Theory, New York: Dover, 1957. H. S. Leff and A. F. Rex, Editors, Maxwell's Demon: Entropy, Information, Computing, Princeton University Press, Princeton, New Jersey (1990). Robert K. Logan. What is Information? - Propagating Organization in the Biosphere, the Symbolosphere, the Technosphere and the Econosphere, Toronto: DEMO Publishing. Tom Siegfried, The Bit and the Pendulum, Wiley, 2000. Charles Seife, Decoding the Universe, Viking, 2006. 
Jeremy Campbell, Grammatical Man, Touchstone/Simon & Schuster, 1982, Henri Theil, Economics and Information Theory, Rand McNally & Company - Chicago, 1967. Escolano, Suau, Bonev, Information Theory in Computer Vision and Pattern Recognition, Springer, 2009. Vlatko Vedral, Decoding Reality: The Universe as Quantum Information, Oxford University Press 2010. External links Lambert F. L. (1999), "Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms - Examples of Entropy Increase? Nonsense!", Journal of Chemical Education IEEE Information Theory Society and ITSOC Monographs, Surveys, and Reviews Claude Shannon Computer-related introductions in 1948 Cybernetics Formal sciences History of logic History of mathematics Information Age Data compression
Information theory
[ "Mathematics", "Technology", "Engineering" ]
6,245
[ "Information Age", "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory", "Computing and society" ]
14,774
https://en.wikipedia.org/wiki/Information%20explosion
The information explosion is the rapid increase in the amount of published information or data and the effects of this abundance. As the amount of available data grows, the problem of managing the information becomes more difficult, which can lead to information overload. The Online Oxford English Dictionary indicates use of the phrase in a March 1964 New Statesman article. The New York Times first used the phrase in its editorial content in an article by Walter Sullivan on June 7, 1964, in which he described the phrase as "much discussed". The earliest known use of the phrase was in a speech about television by NBC president Pat Weaver at the Institute of Practitioners of Advertising in London on September 27, 1955. The speech was rebroadcast on radio station WSUI in Iowa City and excerpted in the Daily Iowan newspaper two months later. Many sectors are seeing this rapid increase in the amount of information available, such as healthcare, supermarkets, and governments. Another sector that is being affected by this phenomenon is journalism. Such a profession, which in the past was responsible for the dissemination of information, may be suppressed by the overabundance of information today. Techniques to gather knowledge from an overabundance of electronic information (e.g., data fusion may help in data mining) have existed since the 1970s. Another common technique for dealing with such an amount of information is qualitative research. Such approaches aim to organize the information by synthesizing, categorizing, and systematizing it so that it is more usable and easier to search. Growth patterns The world's technological capacity to store information grew from, optimally compressed, 2.6 exabytes in 1986 to 15.7 in 1993, over 54.5 in 2000, and to 295 exabytes in 2007. The world's technological capacity to receive information through one-way broadcast networks was 432 exabytes of (optimally compressed) information in 1986, 715 (optimally compressed) exabytes in 1993, 1,200 (optimally compressed) exabytes in 2000, and 1,900 in 2007. The world's effective capacity to exchange information through two-way telecommunications networks was 0.281 exabytes of (optimally compressed) information in 1986, 0.471 in 1993, 2.2 in 2000, and 65 (optimally compressed) exabytes in 2007. A new metric that is being used in an attempt to characterize the growth in person-specific information is the disk storage per person (DSP), which is measured in megabytes/person (where a megabyte is 10^6 bytes and is abbreviated MB). Global DSP (GDSP) is the total rigid disk drive space (in MB) of new units sold in a year divided by the world population in that year. The GDSP metric is a crude measure of how much disk storage could possibly be used to collect person-specific data on the world population. In 1983, one million fixed drives with an estimated total of 90 terabytes were sold worldwide; 30 MB drives had the largest market segment. In 1996, 105 million drives totaling 160,623 terabytes were sold, with 1 and 2 gigabyte drives leading the industry. By the year 2000, with 20 GB drives leading the industry, rigid drives sold for the year were projected to total 2,829,288 terabytes. Rigid disk drive sales were projected to top $34 billion in 1997. According to Latanya Sweeney, there are three trends in data gathering today: Type 1. Expansion of the number of fields being collected, known as the “collect more” trend. Type 2. Replace an existing aggregate data collection with a person-specific one, known as the “collect specifically” trend. Type 3. 
Gather information by starting a new person-specific data collection, known as the “collect it if you can” trend. Related terms Since "information" in electronic media is often used synonymously with "data", the term information explosion is closely related to the concept of data flood (also dubbed data deluge). Sometimes the term information flood is used as well. All of those basically boil down to the ever-increasing amount of electronic data exchanged per time unit. A term that covers the potential negative effects of information explosion is information inflation. The awareness about non-manageable amounts of data grew along with the advent of ever more powerful data processing since the mid-1960s. Challenges Even though the abundance of information can be beneficial in several levels, some problems may be of concern such as privacy, legal and ethical guidelines, filtering and data accuracy. Filtering refers to finding useful information in the middle of so much data, which relates to the job of data scientists. A typical example of a necessity of data filtering (data mining) is in healthcare since in the next years is due to have EHRs (Electronic Health Records) of patients available. With so much information available, the doctors will need to be able to identify patterns and select important data for the diagnosis of the patient. On the other hand, according to some experts, having so much public data available makes it difficult to provide data that is actually anonymous. Another point to take into account is the legal and ethical guidelines, which relates to who will be the owner of the data and how frequently he/she is obliged to the release this and for how long. With so many sources of data, another problem will be accuracy of such. An untrusted source may be challenged by others, by ordering a new set of data, causing a repetition in the information. According to Edward Huth, another concern is the accessibility and cost of such information. The accessibility rate could be improved by either reducing the costs or increasing the utility of the information. The reduction of costs according to the author, could be done by associations, which should assess which information was relevant and gather it in a more organized fashion. Web servers As of August 2005, there were over 70 million web servers. there were over 135 million web servers. Blogs According to Technorati, the number of blogs doubles about every 6 months with a total of 35.3 million blogs . This is an example of the early stages of logistic growth, where growth is approximately exponential, since blogs are a recent innovation. As the number of blogs approaches the number of possible producers (humans), saturation occurs, growth declines, and the number of blogs eventually stabilizes. See also References External links Conceptualizing Information Systems and Cognitive Sustainability in 21st Century 'Attention' Economies (Includes Syllabus) How Much Information? 2003 Surviving the Information Explosion: How People Find Their Electronic Information Why the Information Explosion Can Be Bad for Data Mining, and How Data Fusion Provides a Way Out Information Explosion, Largest databases Library science Information Age Information science
Information explosion
[ "Technology" ]
1,381
[ "Information Age", "Computing and society" ]
14,775
https://en.wikipedia.org/wiki/Inch
The inch (symbol: in or ) is a unit of length in the British Imperial and the United States customary systems of measurement. It is equal to yard or of a foot. Derived from the Roman uncia ("twelfth"), the word inch is also sometimes used to translate similar units in other measurement systems, usually understood as deriving from the width of the human thumb. Standards for the exact length of an inch have varied in the past, but since the adoption of the international yard during the 1950s and 1960s the inch has been based on the metric system and defined as exactly 25.4mm. Name The English word "inch" () was an early borrowing from Latin ("one-twelfth; Roman inch; Roman ounce"). The vowel change from Latin to Old English (which became Modern English ) is known as umlaut. The consonant change from the Latin (spelled c) to English is palatalisation. Both were features of Old English phonology; see and for more information. "Inch" is cognate with "ounce" (), whose separate pronunciation and spelling reflect its reborrowing in Middle English from Anglo-Norman unce and ounce. In many other European languages, the word for "inch" is the same as or derived from the word for "thumb", as a man's thumb is about an inch wide (and this was even sometimes used to define the inch). In the Dutch language a term for inch is engelse duim (english thumb). Examples include ("inch") and ("thumb"); ("thumb"); Danish and ("inch") ("thumb"); (whence and ); ; , ; ; ("inch") and ("thumb"); ("duim"); ("thumb"); ("inch") and ("thumb"); and ("inch") and tumme ("thumb"). Usage Imperial or hybrid countries The inch is a commonly used customary unit of length in the United States, Canada, and the United Kingdom. For the United Kingdom, guidance on public sector use states that, since 1 October 1995, without time limit, the inch (along with the foot) is to be used as a primary unit for road signs and related measurements of distance (with the possible exception of clearance heights and widths) and may continue to be used as a secondary or supplementary indication following a metric measurement for other purposes. Worldwide Inches are used for display screens (e.g. televisions and computer monitors) worldwide. It is the official Japanese standard for electronic parts, especially display screens, and is the industry standard throughout continental Europe for display screens (Germany being one of few countries to supplement it with centimetres in most stores). Inches are commonly used to specify the diameter of vehicle wheel rims, and the corresponding inner diameter of tyres in tyre codes. SI countries Both inch-based and millimeter-based hex keys are widely available for sale in Europe. Technical details The international standard symbol for inch is in (see ISO 31-1, Annex A) but traditionally the inch is denoted by a double prime, which is often approximated by a double quote symbol, and the foot by a prime, which is often approximated by an apostrophe. For example; can be written as 3 2. (This is akin to how the first and second "cuts" of the hour are likewise indicated by prime and double prime symbols, and also the first and second cuts of the degree.) Subdivisions of an inch are typically written using dyadic fractions with odd number numerators; for example, would be written as and not as 2.375 nor as . However, for engineering purposes fractions are commonly given to three or four places of decimals and have been for many years. 
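Because the inch is defined as exactly 25.4 mm, conversions reduce to a single multiplication or division. The following sketch is purely illustrative; the helper names are not standard, and 3 3/8 in is simply an example of the dyadic fractions mentioned above.

```python
MM_PER_INCH = 25.4   # exact, by definition

def inches_to_mm(inches):
    return inches * MM_PER_INCH

def mm_to_inches(mm):
    return mm / MM_PER_INCH

print(inches_to_mm(1))          # 25.4 mm
print(inches_to_mm(3 + 3 / 8))  # 3 3/8 in = 85.725 mm
print(mm_to_inches(100))        # about 3.937 in
```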
Equivalents international inch is equal to: centimeters (1 inch is exactly 2.54 cm) millimetres (1 inch is exactly 25.4 mm) or feet or yards 'tenths' thou or mil points or gries PostScript points , , or lines computer picas barleycorns US Survey inches or palms or hands History The earliest known reference to the inch in England is from the Laws of Æthelberht dating to the early 7th century, surviving in a single manuscript, the Textus Roffensis from 1120. Paragraph LXVII sets out the fine for wounds of various depths: one inch, one shilling; two inches, two shillings, etc. An Anglo-Saxon unit of length was the barleycorn. After 1066, 1 inch was equal to 3 barleycorns, which continued to be its legal definition for several centuries, with the barleycorn being the base unit. One of the earliest such definitions is that of 1324, where the legal definition of the inch was set out in a statute of Edward II of England, defining it as "three grains of barley, dry and round, placed end to end, lengthwise". Similar definitions are recorded in both English and Welsh medieval law tracts. One, dating from the first half of the 10th century, is contained in the Laws of Hywel Dda which superseded those of Dyfnwal, an even earlier definition of the inch in Wales. Both definitions, as recorded in Ancient Laws and Institutes of Wales (vol i., pp. 184, 187, 189), are that "three lengths of a barleycorn is the inch". King David I of Scotland in his Assize of Weights and Measures (c. 1150) is said to have defined the Scottish inch as the width of an average man's thumb at the base of the nail, even including the requirement to calculate the average of a small, a medium, and a large man's measures. However, the oldest surviving manuscripts date from the early 14th century and appear to have been altered with the inclusion of newer material. In 1814, Charles Butler, a mathematics teacher at Cheam School, recorded the old legal definition of the inch to be "three grains of sound ripe barley being taken out the middle of the ear, well dried, and laid end to end in a row", and placed the barleycorn, not the inch, as the base unit of the English Long Measure system, from which all other units were derived. John Bouvier similarly recorded in his 1843 law dictionary that the barleycorn was the fundamental measure. Butler observed, however, that "[a]s the length of the barley-corn cannot be fixed, so the inch according to this method will be uncertain", noting that a standard inch measure was now [i.e. by 1843] kept in the Exchequer chamber, Guildhall, and that was the legal definition of the inch. This was a point also made by George Long in his 1842 Penny Cyclopædia, observing that standard measures had since surpassed the barleycorn definition of the inch, and that to recover the inch measure from its original definition, in case the standard measure were destroyed, would involve the measurement of large numbers of barleycorns and taking their average lengths. He noted that this process would not perfectly recover the standard, since it might introduce errors of anywhere between one hundredth and one tenth of an inch in the definition of a yard. Before the adoption of the international yard and pound, various definitions were in use. In the United Kingdom and most countries of the British Commonwealth, the inch was defined in terms of the Imperial Standard Yard. The United States adopted the conversion factor 1 metre = 39.37 inches by an act in 1866. 
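The 1866 conversion factor (1 metre = 39.37 inches) implies an inch slightly longer than the modern 25.4 mm definition, which is the origin of the small discrepancy between the later US survey units and the international inch discussed below. A sketch using exact rational arithmetic (the variable names are illustrative):

```python
from fractions import Fraction

international_inch_mm = Fraction(254, 10)       # exactly 25.4 mm
inch_from_1866_act_mm = Fraction(100000, 3937)  # 1 m = 39.37 in, so 1 in = 100000/3937 mm

print(float(inch_from_1866_act_mm))                          # 25.4000508... mm
print(float(inch_from_1866_act_mm / international_inch_mm))  # about 1.000002, i.e. roughly 2 ppm longer
```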
In 1893, Mendenhall ordered the physical realization of the inch to be based on the international prototype metres numbers 21 and 27, which had been received from the CGPM, together with the previously adopted conversion factor. As a result of the definitions above, the U.S. inch was effectively defined as 25.4000508 mm (with a reference temperature of 68 degrees Fahrenheit) and the UK inch at 25.399977 mm (with a reference temperature of 62 degrees Fahrenheit). When Carl Edvard Johansson started manufacturing gauge blocks in inch sizes in 1912, Johansson's compromise was to manufacture gauge blocks with a nominal size of 25.4mm, with a reference temperature of 20 degrees Celsius, accurate to within a few parts per million of both official definitions. Because Johansson's blocks were so popular, his blocks became the de facto standard for manufacturers internationally, with other manufacturers of gauge blocks following Johansson's definition by producing blocks designed to be equivalent to his. In 1930, the British Standards Institution adopted an inch of exactly 25.4 mm. The American Standards Association followed suit in 1933. By 1935, industry in 16 countries had adopted the "industrial inch" as it came to be known, effectively endorsing Johansson's pragmatic choice of conversion ratio. In 1946, the Commonwealth Science Congress recommended a yard of exactly 0.9144 metres for adoption throughout the British Commonwealth. This was adopted by Canada in 1951; the United States on 1 July 1959; Australia in 1961, effective 1 January 1964; and the United Kingdom in 1963, effective on 1 January 1964. The new standards gave an inch of exactly 25.4 mm, 1.7 millionths of an inch longer than the old imperial inch and 2 millionths of an inch shorter than the old US inch. Related units US survey inches The United States retained the -metre definition for surveying, producing a 2 millionth part difference between standard and US survey inches. This is approximately  inch per mile; 12.7 kilometres is exactly standard inches and exactly survey inches. This difference is substantial when doing calculations in State Plane Coordinate Systems with coordinate values in the hundreds of thousands or millions of feet. In 2020, the National Institute of Standards and Technology announced that the U.S. survey foot would "be phased out" on 1 January 2023 and be superseded by the international foot (also known as the foot) equal to 0.3048 metres exactly, for all further applications. This implies that the survey inch was replaced by the international inch. Continental inches Before the adoption of the metric system, several European countries had customary units whose name translates into "inch". The French pouce measured roughly 27.0 mm, at least when applied to describe the calibre of artillery pieces. The Amsterdam foot (voet) consisted of 11 Amsterdam inches (duim). The Amsterdam foot is about 8% shorter than an English foot. Scottish inch The now obsolete Scottish inch (), of a Scottish foot, was about 1.0016 imperial inches (about ). See also English units Square inch and Cubic inch International yard and pound Anthropic units Pyramid inch Digit and Line Notes References Citations Bibliography Customary units of measurement in the United States Imperial units Units of length Obsolete Scottish units of measurement fy:Tomme (lingtemaat)
Inch
[ "Mathematics" ]
2,242
[ "Quantity", "Units of measurement", "Units of length" ]
14,794
https://en.wikipedia.org/wiki/Integer%20%28computer%20science%29
In computer science, an integer is a datum of integral data type, a data type that represents some range of mathematical integers. Integral data types may be of different sizes and may or may not be allowed to contain negative values. Integers are commonly represented in a computer as a group of binary digits (bits). The size of the grouping varies so the set of integer sizes available varies between different types of computers. Computer hardware nearly always provides a way to represent a processor register or memory address as an integer. Value and representation The value of an item with an integral type is the mathematical integer that it corresponds to. Integral types may be unsigned (capable of representing only non-negative integers) or signed (capable of representing negative integers as well). An integer value is typically specified in the source code of a program as a sequence of digits optionally prefixed with + or −. Some programming languages allow other notations, such as hexadecimal (base 16) or octal (base 8). Some programming languages also permit digit group separators. The internal representation of this datum is the way the value is stored in the computer's memory. Unlike mathematical integers, a typical datum in a computer has some minimal and maximum possible value. The most common representation of a positive integer is a string of bits, using the binary numeral system. The order of the memory bytes storing the bits varies; see endianness. The width, precision, or bitness of an integral type is the number of bits in its representation. An integral type with n bits can encode 2n numbers; for example an unsigned type typically represents the non-negative values 0 through . Other encodings of integer values to bit patterns are sometimes used, for example binary-coded decimal or Gray code, or as printed character codes such as ASCII. There are four well-known ways to represent signed numbers in a binary computing system. The most common is two's complement, which allows a signed integral type with n bits to represent numbers from through . Two's complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values (in particular, no separate +0 and −0), and because addition, subtraction and multiplication do not need to distinguish between signed and unsigned types. Other possibilities include offset binary, sign-magnitude, and ones' complement. Some computer languages define integer sizes in a machine-independent way; others have varying definitions depending on the underlying processor word size. Not all language implementations define variables of all integer sizes, and defined sizes may not even be distinct in a particular implementation. An integer in one programming language may be a different size in a different language, on a different processor, or in an execution context of different bitness; see . Some older computer architectures used decimal representations of integers, stored in binary-coded decimal (BCD) or other format. These values generally require data sizes of 4 bits per decimal digit (sometimes called a nibble), usually with additional bits for a sign. Many modern CPUs provide limited support for decimal integers as an extended datatype, providing instructions for converting such values to and from binary values. 
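As an illustration of the two's-complement representation described above, the following sketch encodes and decodes signed values at a fixed width. The helper functions and the 8-bit width are arbitrary choices for the example.

```python
def to_twos_complement(value, bits):
    """Encode a signed integer as an unsigned two's-complement bit pattern."""
    if not -(1 << (bits - 1)) <= value <= (1 << (bits - 1)) - 1:
        raise OverflowError("value does not fit in the given width")
    return value & ((1 << bits) - 1)

def from_twos_complement(pattern, bits):
    """Decode a two's-complement bit pattern back into a signed integer."""
    return pattern - (1 << bits) if pattern >= (1 << (bits - 1)) else pattern

print(format(to_twos_complement(-1, 8), "08b"))    # 11111111
print(format(to_twos_complement(-128, 8), "08b"))  # 10000000
print(from_twos_complement(0b11111110, 8))         # -2
```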
Depending on the architecture, decimal integers may have fixed sizes (e.g., 7 decimal digits plus a sign fit into a 32-bit word), or may be variable-length (up to some maximum digit size), typically occupying two digits per byte (octet). Common integral data types Different CPUs support different integral data types. Typically, hardware will support both signed and unsigned types, but only a small, fixed set of widths. The table above lists integral type widths that are supported in hardware by common processors. High-level programming languages provide more possibilities. It is common to have a 'double width' integral type that has twice as many bits as the biggest hardware-supported type. Many languages also have bit-field types (a specified number of bits, usually constrained to be less than the maximum hardware-supported width) and range types (that can represent only the integers in a specified range). Some languages, such as Lisp, Smalltalk, REXX, Haskell, Python, and Raku, support arbitrary precision integers (also known as infinite precision integers or bignums). Other languages that do not support this concept as a top-level construct may have libraries available to represent very large numbers using arrays of smaller variables, such as Java's class or Perl's "" package. These use as much of the computer's memory as is necessary to store the numbers; however, a computer has only a finite amount of storage, so they, too, can only represent a finite subset of the mathematical integers. These schemes support very large numbers; for example one kilobyte of memory could be used to store numbers up to 2466 decimal digits long. A Boolean type is a type that can represent only two values: 0 and 1, usually identified with false and true respectively. This type can be stored in memory using a single bit, but is often given a full byte for convenience of addressing and speed of access. A four-bit quantity is known as a nibble (when eating, being smaller than a bite) or nybble (being a pun on the form of the word byte). One nibble corresponds to one digit in hexadecimal and holds one digit or a sign code in binary-coded decimal. Bytes and octets The term byte initially meant 'the smallest addressable unit of memory'. In the past, 5-, 6-, 7-, 8-, and 9-bit bytes have all been used. There have also been computers that could address individual bits ('bit-addressed machine'), or that could only address 16- or 32-bit quantities ('word-addressed machine'). The term byte was usually not used at all in connection with bit- and word-addressed machines. The term octet always refers to an 8-bit quantity. It is mostly used in the field of computer networking, where computers with different byte widths might have to communicate. In modern usage byte almost invariably means eight bits, since all other sizes have fallen into disuse; thus byte has come to be synonymous with octet. Words The term 'word' is used for a small group of bits that are handled simultaneously by processors of a particular architecture. The size of a word is thus CPU-specific. Many different word sizes have been used, including 6-, 8-, 12-, 16-, 18-, 24-, 32-, 36-, 39-, 40-, 48-, 60-, and 64-bit. Since it is architectural, the size of a word is usually set by the first CPU in a family, rather than the characteristics of a later compatible CPU. The meanings of terms derived from word, such as longword, doubleword, quadword, and halfword, also vary with the CPU and OS. 
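Python's built-in integer is an example of the arbitrary-precision ("bignum") types mentioned above, and the kilobyte figure quoted there can be sanity-checked from the bit count; the snippet below is only a rough illustration.

```python
import math

# Python's int is an arbitrary-precision ("bignum") type, so very large
# values need no special library:
print(2 ** 1000)                   # a 302-digit number

# Rough check of the kilobyte figure quoted above: 1024 bytes = 8192 bits,
# and each bit contributes log10(2) decimal digits of range.
print(int(8192 * math.log10(2)))   # 2466
```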
Practically all new desktop processors are capable of using 64-bit words, though embedded processors with 8- and 16-bit word size are still common. The 36-bit word length was common in the early days of computers. One important cause of non-portability of software is the incorrect assumption that all computers have the same word size as the computer used by the programmer. For example, if a programmer using the C language incorrectly declares as int a variable that will be used to store values greater than 2^15 − 1, the program will fail on computers with 16-bit integers. That variable should have been declared as long, which has at least 32 bits on any computer. Programmers may also incorrectly assume that a pointer can be converted to an integer without loss of information, which may work on (some) 32-bit computers, but fail on 64-bit computers with 64-bit pointers and 32-bit integers. This issue is resolved by C99 in stdint.h in the form of intptr_t. The bitness of a program may refer to the word size (or bitness) of the processor on which it runs, or it may refer to the width of a memory address or pointer, which can differ between execution modes or contexts. For example, 64-bit versions of Microsoft Windows support existing 32-bit binaries, and programs compiled for Linux's x32 ABI run in 64-bit mode yet use 32-bit memory addresses. Standard integer The standard integer size is platform-dependent. In C, it is denoted by int and required to be at least 16 bits. Windows and Unix systems have 32-bit ints on both 32-bit and 64-bit architectures. Short integer A short integer can represent a whole number that may take less storage, while having a smaller range, compared with a standard integer on the same machine. In C, it is denoted by short. It is required to be at least 16 bits, and is often smaller than a standard integer, but this is not required. A conforming program can assume that it can safely store values between −(2^15 − 1) and 2^15 − 1, but it may not assume that the range is not larger. In Java, a short is always a 16-bit integer. In the Windows API, the datatype SHORT is defined as a 16-bit signed integer on all machines. Long integer A long integer can represent a whole integer whose range is greater than or equal to that of a standard integer on the same machine. In C, it is denoted by long. It is required to be at least 32 bits, and may or may not be larger than a standard integer. A conforming program can assume that it can safely store values between −(2^31 − 1) and 2^31 − 1, but it may not assume that the range is not larger. Long long In the C99 version of the C programming language and the C++11 version of C++, a long long type is supported that has double the minimum capacity of the standard long. This type is not supported by compilers that require C code to be compliant with the previous C++ standard, C++03, because the long long type did not exist in C++03. For an ANSI/ISO compliant compiler, the minimum requirements for the specified ranges, that is, −(2^63 − 1) to 2^63 − 1 for signed and 0 to 2^64 − 1 for unsigned, must be fulfilled; however, extending this range is permitted. This can be an issue when exchanging code and data between platforms, or doing direct hardware access. Thus, there are several sets of headers providing platform independent exact width types. The C standard library provides stdint.h; this was introduced in C99 and C++11. Syntax Integer literals can be written as regular Arabic numerals, consisting of a sequence of digits and with negation indicated by a minus sign before the value. 
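A short sketch of the two's-complement ranges behind the limits quoted above (the 16-, 32-, and 64-bit widths are the usual hardware sizes, assumed here only for illustration; note that the portable C guarantees quoted above are symmetric, one smaller in magnitude on the negative side):

    def twos_complement_range(bits):
        # Minimum and maximum values of a signed two's-complement integer of the given width.
        return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

    for bits in (16, 32, 64):
        low, high = twos_complement_range(bits)
        print(bits, low, high)
    # 16 bits: -32768 .. 32767; 32 bits: -2147483648 .. 2147483647;
    # 64 bits: -9223372036854775808 .. 9223372036854775807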
However, most programming languages disallow use of commas or spaces for digit grouping. Examples of integer literals are: 42 10000 -233000 There are several alternate methods for writing integer literals in many programming languages: Many programming languages, especially those influenced by C, prefix an integer literal with 0X or 0x to represent a hexadecimal value, e.g. 0xDEADBEEF. Other languages may use a different notation, e.g. some assembly languages append an H or h to the end of a hexadecimal value. Perl, Ruby, Java, Julia, D, Go, C#, Rust and Python (starting from version 3.6) allow embedded underscores for clarity, e.g. 10_000_000, and fixed-form Fortran ignores embedded spaces in integer literals. C (starting from C23) and C++ use single quotes for this purpose. In C and C++, a leading zero indicates an octal value, e.g. 0755. This was primarily intended to be used with Unix modes; however, it has been criticized because normal integers may also lead with zero. As such, Python, Ruby, Haskell, and OCaml prefix octal values with 0O or 0o, following the layout used by hexadecimal values. Several languages, including Java, C#, Scala, Python, Ruby, OCaml, C (starting from C23) and C++ can represent binary values by prefixing a number with 0B or 0b. Extreme values In many programming languages, there exist predefined constants representing the least and the greatest values representable with a given integer type. Names for these include SmallBASIC: MAXINT; Java: Integer.MIN_VALUE and Integer.MAX_VALUE (corresponding fields exist for the other integer classes in Java); C: INT_MIN, INT_MAX, etc., defined in limits.h; GLib: G_MININT, G_MAXINT, G_MAXUINT, ...; Pascal: MaxInt; Python 2: sys.maxint; Turing: maxint. See also Arbitrary-precision arithmetic Binary-coded decimal (BCD) C data types Integer overflow Signed number representations Notes References Data types Computer arithmetic Primitive types
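To make the literal notations and extreme-value constants listed above concrete, here is a small Python sketch (Python 3.6 or later for the underscore separators; the specific literal values are arbitrary examples, not taken from the text):

    import sys

    # Alternate integer literal notations (all denote ordinary integers).
    print(0xDEADBEEF)    # hexadecimal, prefix 0x
    print(0o755)         # octal, prefix 0o
    print(0b1010)        # binary, prefix 0b
    print(10_000_000)    # underscores as digit group separators

    # Extreme values: Python 3 ints are unbounded, so the closest analogue of
    # Python 2's sys.maxint is sys.maxsize (the largest container size the platform supports).
    print(sys.maxsize)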
Integer (computer science)
[ "Mathematics" ]
2,661
[ "Computer arithmetic", "Arithmetic" ]
14,822
https://en.wikipedia.org/wiki/Irreducible%20fraction
An irreducible fraction (or fraction in lowest terms, simplest form or reduced fraction) is a fraction in which the numerator and denominator are integers that have no other common divisors than 1 (and −1, when negative numbers are considered). In other words, a fraction a/b is irreducible if and only if a and b are coprime, that is, if a and b have a greatest common divisor of 1. In higher mathematics, "irreducible fraction" may also refer to rational fractions such that the numerator and the denominator are coprime polynomials. Every rational number can be represented as an irreducible fraction with positive denominator in exactly one way. An equivalent definition is sometimes useful: if a and b are integers, then the fraction a/b is irreducible if and only if there is no other equal fraction c/d such that |c| < |a| or |d| < |b|, where |a| means the absolute value of a. (Two fractions a/b and c/d are equal or equivalent if and only if ad = bc.) For example, 1/4, 5/6, and 101/100 are all irreducible fractions. On the other hand, 2/4 is reducible since it is equal in value to 1/2, and the numerator of 1/2 is less than the numerator of 2/4. A fraction that is reducible can be reduced by dividing both the numerator and denominator by a common factor. It can be fully reduced to lowest terms if both are divided by their greatest common divisor. In order to find the greatest common divisor, the Euclidean algorithm or prime factorization can be used. The Euclidean algorithm is commonly preferred because it allows one to reduce fractions with numerators and denominators too large to be easily factored. Examples 120/90 = 12/9 = 4/3. In the first step both numbers were divided by 10, which is a factor common to both 120 and 90. In the second step, they were divided by 3. The final result, 4/3, is an irreducible fraction because 4 and 3 have no common factors other than 1. The original fraction could have also been reduced in a single step by using the greatest common divisor of 90 and 120, which is 30. As 120/30 = 4 and 90/30 = 3, one gets 120/90 = 4/3. Which method is faster "by hand" depends on the fraction and the ease with which common factors are spotted. In case a denominator and numerator remain that are too large to ensure they are coprime by inspection, a greatest common divisor computation is needed anyway to ensure the fraction is actually irreducible. Uniqueness Every rational number has a unique representation as an irreducible fraction with a positive denominator (however 2/3 = −2/−3 although both are irreducible). Uniqueness is a consequence of the unique prime factorization of integers, since a/b = c/d implies ad = bc, and so both sides of the latter must share the same prime factorization, yet a and b share no prime factors so the set of prime factors of a (with multiplicity) is a subset of those of c and vice versa, meaning a = c and by the same argument b = d. Applications The fact that any rational number has a unique representation as an irreducible fraction is utilized in various proofs of the irrationality of the square root of 2 and of other irrational numbers. For example, one proof notes that if √2 could be represented as a ratio of integers, then it would have in particular the fully reduced representation a/b where a and b are the smallest possible; but given that a/b equals √2, so does (2b − a)/(a − b) (since cross-multiplying this with a/b shows that they are equal). Since a > b (because √2 is greater than 1), the latter is a ratio of two smaller integers. This is a contradiction, so the premise that the square root of two has a representation as the ratio of two integers is false. 
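The reduction procedure described above, dividing numerator and denominator by their greatest common divisor as found by the Euclidean algorithm, can be sketched in a few lines of Python; the helper name reduce_fraction is invented for this example, and 120/90 is the fraction worked in the text:

    from math import gcd       # math.gcd uses the Euclidean algorithm internally

    def reduce_fraction(a, b):
        # Return the irreducible fraction equal to a/b, with a positive denominator.
        g = gcd(a, b)
        a, b = a // g, b // g
        if b < 0:               # normalise the sign so the denominator is positive
            a, b = -a, -b
        return a, b

    print(reduce_fraction(120, 90))   # (4, 3), as in the worked example
    print(reduce_fraction(2, 4))      # (1, 2)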
Generalization The notion of irreducible fraction generalizes to the field of fractions of any unique factorization domain: any element of such a field can be written as a fraction in which denominator and numerator are coprime, by dividing both by their greatest common divisor. This applies notably to rational expressions over a field. The irreducible fraction for a given element is unique up to multiplication of denominator and numerator by the same invertible element. In the case of the rational numbers this means that any number has two irreducible fractions, related by a change of sign of both numerator and denominator; this ambiguity can be removed by requiring the denominator to be positive. In the case of rational functions the denominator could similarly be required to be a monic polynomial. See also Anomalous cancellation, an erroneous arithmetic procedure that produces the correct irreducible fraction by cancelling digits of the original unreduced form. Diophantine approximation, the approximation of real numbers by rational numbers. References External links Fractions (mathematics) Elementary arithmetic
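As a small sketch of the generalization above to fractions of coprime polynomials, the third-party SymPy library (assuming it is installed) can cancel the polynomial greatest common divisor of a rational expression; the particular polynomials are arbitrary examples:

    from sympy import symbols, cancel, gcd

    x = symbols("x")
    num = x**2 - 1            # (x - 1)(x + 1)
    den = x**2 + 2*x + 1      # (x + 1)**2

    print(gcd(num, den))      # x + 1, the common factor
    print(cancel(num / den))  # (x - 1)/(x + 1), the rational expression in lowest terms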
Irreducible fraction
[ "Mathematics" ]
1,004
[ "Fractions (mathematics)", "Elementary arithmetic", "Mathematical objects", "Elementary mathematics", "Arithmetic", "Numbers" ]
14,828
https://en.wikipedia.org/wiki/Isomorphism
In mathematics, an isomorphism is a structure-preserving mapping (a morphism) between two structures of the same type that can be reversed by an inverse mapping. Two mathematical structures are isomorphic if an isomorphism exists between them. The word is derived from Ancient Greek isos (equal) and morphe (form or shape). The interest in isomorphisms lies in the fact that two isomorphic objects have the same properties (excluding further information such as additional structure or names of objects). Thus isomorphic structures cannot be distinguished from the point of view of structure only, and may be identified. In mathematical jargon, one says that two objects are the same up to an isomorphism. An automorphism is an isomorphism from a structure to itself. An isomorphism between two structures is a canonical isomorphism (a canonical map that is an isomorphism) if there is only one isomorphism between the two structures (as is the case for solutions of a universal property), or if the isomorphism is much more natural (in some sense) than other isomorphisms. For example, for every prime number p, all fields with p elements are canonically isomorphic, with a unique isomorphism. The isomorphism theorems provide canonical isomorphisms that are not unique. The term is mainly used for algebraic structures. In this case, mappings are called homomorphisms, and a homomorphism is an isomorphism if and only if it is bijective. In various areas of mathematics, isomorphisms have received specialized names, depending on the type of structure under consideration. For example: An isometry is an isomorphism of metric spaces. A homeomorphism is an isomorphism of topological spaces. A diffeomorphism is an isomorphism of spaces equipped with a differential structure, typically differentiable manifolds. A symplectomorphism is an isomorphism of symplectic manifolds. A permutation is an automorphism of a set. In geometry, isomorphisms and automorphisms are often called transformations, for example rigid transformations, affine transformations, projective transformations. Category theory, which can be viewed as a formalization of the concept of mapping between structures, provides a language that may be used to unify the approach to these different aspects of the basic idea. Examples Logarithm and exponential Let R+ be the multiplicative group of positive real numbers, and let R be the additive group of real numbers. The logarithm function log: R+ → R satisfies log(xy) = log x + log y for all x, y in R+, so it is a group homomorphism. The exponential function exp: R → R+ satisfies exp(x + y) = exp(x) exp(y) for all x, y in R, so it too is a homomorphism. The identities log(exp x) = x and exp(log y) = y show that log and exp are inverses of each other. Since log is a homomorphism that has an inverse that is also a homomorphism, log is an isomorphism of groups, i.e., R+ and R are isomorphic via the isomorphism log. The function log is an isomorphism which translates multiplication of positive real numbers into addition of real numbers. This facility makes it possible to multiply real numbers using a ruler and a table of logarithms, or using a slide rule with a logarithmic scale. Integers modulo 6 Consider the group (Z6, +), the integers from 0 to 5 with addition modulo 6. Also consider the group (Z2 × Z3, +), the ordered pairs where the x coordinates can be 0 or 1, and the y coordinates can be 0, 1, or 2, where addition in the x-coordinate is modulo 2 and addition in the y-coordinate is modulo 3. These structures are isomorphic under addition, under the following scheme: (0,0) → 0, (1,1) → 1, (0,2) → 2, (1,0) → 3, (0,1) → 4, (1,2) → 5, or in general (a,b) → (3a + 4b) mod 6. For example, (1,1) + (1,0) = (0,1), which translates in the other system as 1 + 3 = 4. Even though these two groups "look" different in that the sets contain different elements, they are indeed isomorphic: their structures are exactly the same. 
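The modulo-6 example above can be checked mechanically. The following Python sketch uses the inverse map k → (k mod 2, k mod 3), which corresponds to the scheme given above, and verifies that it is a bijection that preserves addition:

    from itertools import product

    def phi(k):
        # Map an element of Z6 to the corresponding pair in Z2 x Z3.
        return (k % 2, k % 3)

    elements = range(6)

    # Bijection: the six images are pairwise distinct.
    assert len({phi(k) for k in elements}) == 6

    # Homomorphism: phi(a + b mod 6) equals componentwise addition of phi(a) and phi(b).
    for a, b in product(elements, repeat=2):
        s = phi((a + b) % 6)
        t = ((phi(a)[0] + phi(b)[0]) % 2, (phi(a)[1] + phi(b)[1]) % 3)
        assert s == t

    print("phi is a group isomorphism from Z6 to Z2 x Z3")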
More generally, the direct product of two cyclic groups Zm and Zn is isomorphic to Zmn if and only if m and n are coprime, per the Chinese remainder theorem. Relation-preserving isomorphism If one object consists of a set X with a binary relation R and the other object consists of a set Y with a binary relation S then an isomorphism from X to Y is a bijective function f: X → Y such that S(f(u), f(v)) if and only if R(u, v). S is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, well-order, strict weak order, total preorder (weak order), an equivalence relation, or a relation with any other special properties, if and only if R is. For example, if R is an ordering ≤ and S an ordering ⪯, then an isomorphism from X to Y is a bijective function f: X → Y such that f(u) ⪯ f(v) if and only if u ≤ v. Such an isomorphism is called an order isomorphism or (less commonly) an isotone isomorphism. If X = Y then this is a relation-preserving automorphism. Applications In algebra, isomorphisms are defined for all algebraic structures. Some are more specifically studied; for example: Linear isomorphisms between vector spaces; they are specified by invertible matrices. Group isomorphisms between groups; the classification of isomorphism classes of finite groups is an open problem. Ring isomorphism between rings. Field isomorphisms are the same as ring isomorphism between fields; their study, and more specifically the study of field automorphisms is an important part of Galois theory. Just as the automorphisms of an algebraic structure form a group, the isomorphisms between two algebras sharing a common structure form a heap. Letting a particular isomorphism identify the two structures turns this heap into a group. In mathematical analysis, the Laplace transform is an isomorphism mapping hard differential equations into easier algebraic equations. In graph theory, an isomorphism between two graphs G and H is a bijective map f from the vertices of G to the vertices of H that preserves the "edge structure" in the sense that there is an edge from vertex u to vertex v in G if and only if there is an edge from f(u) to f(v) in H. See graph isomorphism. In order theory, an isomorphism between two partially ordered sets P and Q is a bijective map f from P to Q that preserves the order structure in the sense that for any elements x and y of P we have x less than y in P if and only if f(x) is less than f(y) in Q. As an example, the set {1,2,3,6} of whole numbers ordered by the is-a-factor-of relation is isomorphic to the set {O, A, B, AB} of blood types ordered by the can-donate-to relation (see the short check below). See order isomorphism. In mathematical analysis, an isomorphism between two Hilbert spaces is a bijection preserving addition, scalar multiplication, and inner product. In early theories of logical atomism, the formal relationship between facts and true propositions was theorized by Bertrand Russell and Ludwig Wittgenstein to be isomorphic. An example of this line of thinking can be found in Russell's Introduction to Mathematical Philosophy. In cybernetics, the good regulator or Conant–Ashby theorem is stated "Every good regulator of a system must be a model of that system". Whether regulated or self-regulating, an isomorphism is required between the regulator and processing parts of the system. Category theoretic view In category theory, given a category C, an isomorphism is a morphism f: a → b that has an inverse morphism g: b → a, that is, fg = 1b and gf = 1a. Two categories C and D are isomorphic if there exist functors F: C → D and G: D → C which are mutually inverse to each other, that is, GF = 1C (the identity functor on C) and FG = 1D (the identity functor on D). 
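The order-theoretic example from the Applications list above, {1, 2, 3, 6} under divisibility versus the blood types {O, A, B, AB} under can-donate-to, can be checked by brute force. The dictionary encoding of the donation relation and the particular bijection chosen are assumptions made only for this sketch:

    from itertools import product

    divisors = [1, 2, 3, 6]
    blood = ["O", "A", "B", "AB"]

    # One bijection suggested by the ordering of the two sets: 1->O, 2->A, 3->B, 6->AB.
    f = dict(zip(divisors, blood))

    def divides(u, v):
        return v % u == 0

    # Can-donate-to relation for ABO blood types (ignoring the Rh factor).
    donates = {"O": {"O", "A", "B", "AB"}, "A": {"A", "AB"},
               "B": {"B", "AB"}, "AB": {"AB"}}

    def can_donate(u, v):
        return v in donates[u]

    # f is an order isomorphism: u divides v  iff  f(u) can donate to f(v).
    assert all(divides(u, v) == can_donate(f[u], f[v])
               for u, v in product(divisors, repeat=2))
    print("the two partial orders are isomorphic")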
Isomorphism vs. bijective morphism In a concrete category (roughly, a category whose objects are sets (perhaps with extra structure) and whose morphisms are structure-preserving functions), such as the category of topological spaces or categories of algebraic objects (like the category of groups, the category of rings, and the category of modules), an isomorphism must be bijective on the underlying sets. In algebraic categories (specifically, categories of varieties in the sense of universal algebra), an isomorphism is the same as a homomorphism which is bijective on underlying sets. However, there are concrete categories in which bijective morphisms are not necessarily isomorphisms (such as the category of topological spaces). Isomorphism class Since a composition of isomorphisms is an isomorphism, since the identity is an isomorphism and since the inverse of an isomorphism is an isomorphism, the relation that two mathematical objects are isomorphic is an equivalence relation. An equivalence class given by isomorphisms is commonly called an isomorphism class. Examples Examples of isomorphism classes are plentiful in mathematics. Two sets are isomorphic if there is a bijection between them. The isomorphism class of a finite set can be identified with the non-negative integer representing the number of elements it contains. The isomorphism class of a finite-dimensional vector space can be identified with the non-negative integer representing its dimension. The classification of finite simple groups enumerates the isomorphism classes of all finite simple groups. The classification of closed surfaces enumerates the isomorphism classes of all connected closed surfaces. Ordinals are essentially defined as isomorphism classes of well-ordered sets (though there are technical issues involved). However, there are circumstances in which the isomorphism class of an object conceals vital information about it. Given a mathematical structure, it is common that two substructures belong to the same isomorphism class. However, the way they are included in the whole structure can not be studied if they are identified. For example, in a finite-dimensional vector space, all subspaces of the same dimension are isomorphic, but must be distinguished to consider their intersection, sum, etc. The associative algebras consisting of coquaternions and 2 × 2 real matrices are isomorphic as rings. Yet they appear in different contexts for application (plane mapping and kinematics) so the isomorphism is insufficient to merge the concepts. In homotopy theory, the fundamental group of a space X at a point x0, though technically denoted π1(X, x0) to emphasize the dependence on the base point, is often written lazily as simply π1(X) if X is path connected. The reason for this is that the existence of a path between two points allows one to identify loops at one with loops at the other; however, unless π1(X, x0) is abelian this isomorphism is non-unique. Furthermore, the classification of covering spaces makes strict reference to particular subgroups of π1(X, x0), specifically distinguishing between isomorphic but conjugate subgroups, and therefore amalgamating the elements of an isomorphism class into a single featureless object seriously decreases the level of detail provided by the theory. Relation to equality Although there are cases where isomorphic objects can be considered equal, one must distinguish equality and isomorphism. Equality is when two objects are the same, and therefore everything that is true about one object is true about the other. 
On the other hand, isomorphisms are related to some structure, and two isomorphic objects share only the properties that are related to this structure. For example, the sets are ; they are merely different representations—the first an intensional one (in set builder notation), and the second extensional (by explicit enumeration)—of the same subset of the integers. By contrast, the sets and are not since they do not have the same elements. They are isomorphic as sets, but there are many choices (in fact 6) of an isomorphism between them: one isomorphism is while another is and no one isomorphism is intrinsically better than any other. On this view and in this sense, these two sets are not equal because one cannot consider them : one can choose an isomorphism between them, but that is a weaker claim than identity—and valid only in the context of the chosen isomorphism. Also, integers and even numbers are isomorphic as ordered sets and abelian groups (for addition), but cannot be considered equal sets, since one is a proper subset of the other. On the other hand, when sets (or other mathematical objects) are defined only by their properties, without considering the nature of their elements, one often considers them to be equal. This is generally the case with solutions of universal properties. For example, the rational numbers are usually defined as equivalence classes of pairs of integers, although nobody thinks of a rational number as a set (equivalence class). The universal property of the rational numbers is essentially that they form a field that contains the integers and does not contain any proper subfield. It results that given two fields with these properties, there is a unique field isomorphism between them. This allows identifying these two fields, since every property of one of them can be transferred to the other through the isomorphism. For example, the real numbers that are obtained by dividing two integers (inside the real numbers) form the smallest subfield of the real numbers. There is thus a unique isomorphism from the rational numbers (defined as equivalence classes of pairs) to the quotients of two real numbers that are integers. This allows identifying these two sorts of rational numbers. See also Bisimulation Equivalence relation Heap (mathematics) Isometry Isomorphism class Isomorphism theorem Universal property Coherent isomorphism Balanced category Notes References Further reading External links Morphisms Equivalence (mathematics)
Isomorphism
[ "Mathematics" ]
2,694
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Mathematical relations", "Category theory", "Morphisms" ]
14,838
https://en.wikipedia.org/wiki/Inertial%20frame%20of%20reference
In classical physics and special relativity, an inertial frame of reference (also called an inertial space or a Galilean reference frame) is a frame of reference in which objects exhibit inertia: they remain at rest or in uniform motion relative to the frame until acted upon by external forces. In such a frame, the laws of nature can be observed without the need to correct for acceleration. All frames of reference with zero acceleration are in a state of constant rectilinear motion (straight-line motion) with respect to one another. In such a frame, an object with zero net force acting on it, is perceived to move with a constant velocity, or, equivalently, Newton's first law of motion holds. Such frames are known as inertial. Some physicists, like Isaac Newton, originally thought that one of these frames was absolute — the one approximated by the fixed stars. However, this is not required for the definition, and it is now known that those stars are in fact moving. According to the principle of special relativity, all physical laws look the same in all inertial reference frames, and no inertial frame is privileged over another. Measurements of objects in one inertial frame can be converted to measurements in another by a simple transformation — the Galilean transformation in Newtonian physics or the Lorentz transformation (combined with a translation) in special relativity; these approximately match when the relative speed of the frames is low, but differ as it approaches the speed of light. By contrast, a non-inertial reference frame has non-zero acceleration. In such a frame, the interactions between physical objects vary depending on the acceleration of that frame with respect to an inertial frame. Viewed from the perspective of classical mechanics and special relativity, the usual physical forces caused by the interaction of objects have to be supplemented by fictitious forces caused by inertia. Viewed from the perspective of general relativity theory, the fictitious (i.e. inertial) forces are attributed to geodesic motion in spacetime. Due to Earth's rotation, its surface is not an inertial frame of reference. The Coriolis effect can deflect certain forms of motion as seen from Earth, and the centrifugal force will reduce the effective gravity at the equator. Nevertheless, for many applications the Earth is an adequate approximation of an inertial reference frame. Introduction The motion of a body can only be described relative to something else—other bodies, observers, or a set of spacetime coordinates. These are called frames of reference. According to the first postulate of special relativity, all physical laws take their simplest form in an inertial frame, and there exist multiple inertial frames interrelated by uniform translation: This simplicity manifests itself in that inertial frames have self-contained physics without the need for external causes, while physics in non-inertial frames has external causes. The principle of simplicity can be used within Newtonian physics as well as in special relativity: However, this definition of inertial frames is understood to apply in the Newtonian realm and ignores relativistic effects. In practical terms, the equivalence of inertial reference frames means that scientists within a box moving with a constant absolute velocity cannot determine this velocity by any experiment. Otherwise, the differences would set up an absolute standard reference frame. 
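As a numerical sketch of the statement in the introduction above that the Galilean and Lorentz transformations approximately match at low relative speed, the following Python snippet transforms one sample event into a moving frame both ways; the event coordinates and frame velocities are arbitrary choices made only for illustration:

    from math import sqrt

    C = 299_792_458.0  # speed of light, m/s

    def galilean(x, t, v):
        # Coordinates of the same event in a frame moving at velocity v along x.
        return x - v * t, t

    def lorentz(x, t, v):
        gamma = 1.0 / sqrt(1.0 - (v / C) ** 2)
        return gamma * (x - v * t), gamma * (t - v * x / C**2)

    event = (1000.0, 2.0)  # x in metres, t in seconds

    for v in (30.0, 0.5 * C):  # a car-like speed, then half the speed of light
        print(v, galilean(*event, v), lorentz(*event, v))
    # At 30 m/s the two transformations agree to many decimal places;
    # at 0.5c they differ substantially.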
According to this definition, supplemented with the constancy of the speed of light, inertial frames of reference transform among themselves according to the Poincaré group of symmetry transformations, of which the Lorentz transformations are a subgroup. In Newtonian mechanics, inertial frames of reference are related by the Galilean group of symmetries. Newton's inertial frame of reference Absolute space Newton posited an absolute space considered well-approximated by a frame of reference stationary relative to the fixed stars. An inertial frame was then one in uniform translation relative to absolute space. However, some "relativists", even at the time of Newton, felt that absolute space was a defect of the formulation, and should be replaced. The expression inertial frame of reference () was coined by Ludwig Lange in 1885, to replace Newton's definitions of "absolute space and time" with a more operational definition: The inadequacy of the notion of "absolute space" in Newtonian mechanics is spelled out by Blagojevich: The utility of operational definitions was carried much further in the special theory of relativity. Some historical background including Lange's definition is provided by DiSalle, who says in summary: Newtonian mechanics Classical theories that use the Galilean transformation postulate the equivalence of all inertial reference frames. The Galilean transformation transforms coordinates from one inertial reference frame, , to another, , by simple addition or subtraction of coordinates: where r0 and t0 represent shifts in the origin of space and time, and v is the relative velocity of the two inertial reference frames. Under Galilean transformations, the time t2 − t1 between two events is the same for all reference frames and the distance between two simultaneous events (or, equivalently, the length of any object, |r2 − r1|) is also the same. Within the realm of Newtonian mechanics, an inertial frame of reference, or inertial reference frame, is one in which Newton's first law of motion is valid. However, the principle of special relativity generalizes the notion of an inertial frame to include all physical laws, not simply Newton's first law. Newton viewed the first law as valid in any reference frame that is in uniform motion (neither rotating nor accelerating) relative to absolute space; as a practical matter, "absolute space" was considered to be the fixed stars In the theory of relativity the notion of absolute space or a privileged frame is abandoned, and an inertial frame in the field of classical mechanics is defined as: Hence, with respect to an inertial frame, an object or body accelerates only when a physical force is applied, and (following Newton's first law of motion), in the absence of a net force, a body at rest will remain at rest and a body in motion will continue to move uniformly—that is, in a straight line and at constant speed. Newtonian inertial frames transform among each other according to the Galilean group of symmetries. If this rule is interpreted as saying that straight-line motion is an indication of zero net force, the rule does not identify inertial reference frames because straight-line motion can be observed in a variety of frames. If the rule is interpreted as defining an inertial frame, then being able to determine when zero net force is applied is crucial. The problem was summarized by Einstein: There are several approaches to this issue. 
One approach is to argue that all real forces drop off with distance from their sources in a known manner, so it is only needed that a body is far enough away from all sources to ensure that no force is present. A possible issue with this approach is the historically long-lived view that the distant universe might affect matters (Mach's principle). Another approach is to identify all real sources for real forces and account for them. A possible issue with this approach is the possibility of missing something, or accounting inappropriately for their influence, perhaps, again, due to Mach's principle and an incomplete understanding of the universe. A third approach is to look at the way the forces transform when shifting reference frames. Fictitious forces, those that arise due to the acceleration of a frame, disappear in inertial frames and have complicated rules of transformation in general cases. Based on the universality of physical law and the request for frames where the laws are most simply expressed, inertial frames are distinguished by the absence of such fictitious forces. Newton enunciated a principle of relativity himself in one of his corollaries to the laws of motion: This principle differs from the special principle in two ways: first, it is restricted to mechanics, and second, it makes no mention of simplicity. It shares with the special principle the invariance of the form of the description among mutually translating reference frames. The role of fictitious forces in classifying reference frames is pursued further below. Special relativity Einstein's theory of special relativity, like Newtonian mechanics, postulates the equivalence of all inertial reference frames. However, because special relativity postulates that the speed of light in free space is invariant, the transformation between inertial frames is the Lorentz transformation, not the Galilean transformation which is used in Newtonian mechanics. The invariance of the speed of light leads to counter-intuitive phenomena, such as time dilation, length contraction, and the relativity of simultaneity. The predictions of special relativity have been extensively verified experimentally. The Lorentz transformation reduces to the Galilean transformation as the speed of light approaches infinity or as the relative velocity between frames approaches zero. Examples Simple example Consider a situation common in everyday life. Two cars travel along a road, both moving at constant velocities. See Figure 1. At some particular moment, they are separated by 200 meters. The car in front is traveling at 22 meters per second and the car behind is traveling at 30 meters per second. If we want to find out how long it will take the second car to catch up with the first, there are three obvious "frames of reference" that we could choose. First, we could observe the two cars from the side of the road. We define our "frame of reference" S as follows. We stand on the side of the road and start a stop-clock at the exact moment that the second car passes us, which happens to be when they are a distance of 200 m apart. Since neither of the cars is accelerating, we can determine their positions by the following formulas, where x1(t) is the position in meters of car one after time t in seconds and x2(t) is the position of car two after time t: x1(t) = 200 m + (22 m/s)·t and x2(t) = (30 m/s)·t. Notice that these formulas predict at t = 0 s the first car is 200 m down the road and the second car is right beside us, as expected. We want to find the time at which x1(t) = x2(t). 
Therefore, we set x1(t) = x2(t) and solve for t, that is: 200 m + (22 m/s)·t = (30 m/s)·t, so (8 m/s)·t = 200 m and t = 25 s. Alternatively, we could choose a frame of reference S′ situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of 30 m/s − 22 m/s = 8 m/s. To catch up to the first car, it will take a time of 200 m / (8 m/s), that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at 8 m/s. It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. It is also necessary to note that one can convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you can deduct five minutes from the time displayed on your watch to obtain the correct time. The measurements that an observer makes about a system depend therefore on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three). Additional example For a simple example involving only the orientation of two observers, consider two people standing, facing each other on either side of a north-south street. See Figure 2. A car drives past them heading south. For the person facing east, the car was moving to the right. However, for the person facing west, the car was moving to the left. This discrepancy is because the two people used two different frames of reference from which to investigate this system. For a more complex example involving observers in relative motion, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the x-axis, and the direction in front of him as the positive y-axis. To him, the car moves along the x axis with some velocity v in the positive x-direction. Alfred's frame of reference is considered an inertial frame because he is not accelerating, ignoring effects such as Earth's rotation and gravity. Now consider Betsy, the person driving the car. Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive x-axis, and the direction in front of her as the positive y-axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving – for instance, as she drives past Alfred, she observes him moving with velocity v in the negative y-direction. If she is driving north, then north is the positive y-direction; if she turns east, east becomes the positive y-direction. Finally, as an example of non-inertial observers, assume Candace is accelerating her car. As she passes by him, Alfred measures her acceleration and finds it to be a in the negative x-direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same as Alfred in her frame of reference, in the negative y-direction. 
However, if she is accelerating at rate in the negative -direction (in other words, slowing down), she will find Candace's acceleration to be in the negative -direction—a smaller value than Alfred has measured. Similarly, if she is accelerating at rate A in the positive -direction (speeding up), she will observe Candace's acceleration as in the negative -direction—a larger value than Alfred's measurement. Non-inertial frames Here the relation between inertial and non-inertial observational frames of reference is considered. The basic difference between these frames is the need in non-inertial frames for fictitious forces, as described below. General relativity General relativity is based upon the principle of equivalence: This idea was introduced in Einstein's 1907 article "Principle of Relativity and Gravitation" and later developed in 1911. Support for this principle is found in the Eötvös experiment, which determines whether the ratio of inertial to gravitational mass is the same for all bodies, regardless of size or composition. To date no difference has been found to a few parts in 1011. For some discussion of the subtleties of the Eötvös experiment, such as the local mass distribution around the experimental site (including a quip about the mass of Eötvös himself), see Franklin. Einstein's general theory modifies the distinction between nominally "inertial" and "non-inertial" effects by replacing special relativity's "flat" Minkowski Space with a metric that produces non-zero curvature. In general relativity, the principle of inertia is replaced with the principle of geodesic motion, whereby objects move in a way dictated by the curvature of spacetime. As a consequence of this curvature, it is not a given in general relativity that inertial objects moving at a particular rate with respect to each other will continue to do so. This phenomenon of geodesic deviation means that inertial frames of reference do not exist globally as they do in Newtonian mechanics and special relativity. However, the general theory reduces to the special theory over sufficiently small regions of spacetime, where curvature effects become less important and the earlier inertial frame arguments can come back into play. Consequently, modern special relativity is now sometimes described as only a "local theory". "Local" can encompass, for example, the entire Milky Way galaxy: The astronomer Karl Schwarzschild observed the motion of pairs of stars orbiting each other. He found that the two orbits of the stars of such a system lie in a plane, and the perihelion of the orbits of the two stars remains pointing in the same direction with respect to the Solar System. Schwarzschild pointed out that that was invariably seen: the direction of the angular momentum of all observed double star systems remains fixed with respect to the direction of the angular momentum of the Solar System. These observations allowed him to conclude that inertial frames inside the galaxy do not rotate with respect to one another, and that the space of the Milky Way is approximately Galilean or Minkowskian. Inertial frames and rotation In an inertial frame, Newton's first law, the law of inertia, is satisfied: Any free motion has a constant magnitude and direction. Newton's second law for a particle takes the form: with F the net force (a vector), m the mass of a particle and a the acceleration of the particle (also a vector) which would be measured by an observer at rest in the frame. 
The force F is the vector sum of all "real" forces on the particle, such as contact forces, electromagnetic, gravitational, and nuclear forces. In contrast, Newton's second law in a rotating frame of reference (a non-inertial frame of reference), rotating at angular rate Ω about an axis, takes the form: which looks the same as in an inertial frame, but now the force F′ is the resultant of not only F, but also additional terms (the paragraph following this equation presents the main points without detailed mathematics): where the angular rotation of the frame is expressed by the vector Ω pointing in the direction of the axis of rotation, and with magnitude equal to the angular rate of rotation Ω, symbol × denotes the vector cross product, vector xB locates the body and vector vB is the velocity of the body according to a rotating observer (different from the velocity seen by the inertial observer). The extra terms in the force F′ are the "fictitious" forces for this frame, whose causes are external to the system in the frame. The first extra term is the Coriolis force, the second the centrifugal force, and the third the Euler force. These terms all have these properties: they vanish when Ω = 0; that is, they are zero for an inertial frame (which, of course, does not rotate); they take on a different magnitude and direction in every rotating frame, depending upon its particular value of Ω; they are ubiquitous in the rotating frame (affect every particle, regardless of circumstance); and they have no apparent source in identifiable physical sources, in particular, matter. Also, fictitious forces do not drop off with distance (unlike, for example, nuclear forces or electrical forces). For example, the centrifugal force that appears to emanate from the axis of rotation in a rotating frame increases with distance from the axis. All observers agree on the real forces, F; only non-inertial observers need fictitious forces. The laws of physics in the inertial frame are simpler because unnecessary forces are not present. In Newton's time the fixed stars were invoked as a reference frame, supposedly at rest relative to absolute space. In reference frames that were either at rest with respect to the fixed stars or in uniform translation relative to these stars, Newton's laws of motion were supposed to hold. In contrast, in frames accelerating with respect to the fixed stars, an important case being frames rotating relative to the fixed stars, the laws of motion did not hold in their simplest form, but had to be supplemented by the addition of fictitious forces, for example, the Coriolis force and the centrifugal force. Two experiments were devised by Newton to demonstrate how these forces could be discovered, thereby revealing to an observer that they were not in an inertial frame: the example of the tension in the cord linking two spheres rotating about their center of gravity, and the example of the curvature of the surface of water in a rotating bucket. In both cases, application of Newton's second law would not work for the rotating observer without invoking centrifugal and Coriolis forces to account for their observations (tension in the case of the spheres; parabolic water surface in the case of the rotating bucket). As now known, the fixed stars are not fixed. Those that reside in the Milky Way turn with the galaxy, exhibiting proper motions. 
Those that are outside our galaxy (such as nebulae once mistaken to be stars) participate in their own motion as well, partly due to expansion of the universe, and partly due to peculiar velocities. For instance, the Andromeda Galaxy is on collision course with the Milky Way at a speed of 117 km/s. The concept of inertial frames of reference is no longer tied to either the fixed stars or to absolute space. Rather, the identification of an inertial frame is based on the simplicity of the laws of physics in the frame. The laws of nature take a simpler form in inertial frames of reference because in these frames one did not have to introduce inertial forces when writing down Newton's law of motion. In practice, using a frame of reference based upon the fixed stars as though it were an inertial frame of reference introduces little discrepancy. For example, the centrifugal acceleration of the Earth because of its rotation about the Sun is about thirty million times greater than that of the Sun about the galactic center. To illustrate further, consider the question: "Does the Universe rotate?" An answer might explain the shape of the Milky Way galaxy using the laws of physics, although other observations might be more definitive; that is, provide larger discrepancies or less measurement uncertainty, like the anisotropy of the microwave background radiation or Big Bang nucleosynthesis. The flatness of the Milky Way depends on its rate of rotation in an inertial frame of reference. If its apparent rate of rotation is attributed entirely to rotation in an inertial frame, a different "flatness" is predicted than if it is supposed that part of this rotation is actually due to rotation of the universe and should not be included in the rotation of the galaxy itself. Based upon the laws of physics, a model is set up in which one parameter is the rate of rotation of the Universe. If the laws of physics agree more accurately with observations in a model with rotation than without it, we are inclined to select the best-fit value for rotation, subject to all other pertinent experimental observations. If no value of the rotation parameter is successful and theory is not within observational error, a modification of physical law is considered, for example, dark matter is invoked to explain the galactic rotation curve. So far, observations show any rotation of the universe is very slow, no faster than once every years (10−13 rad/yr), and debate persists over whether there is any rotation. However, if rotation were found, interpretation of observations in a frame tied to the universe would have to be corrected for the fictitious forces inherent in such rotation in classical physics and special relativity, or interpreted as the curvature of spacetime and the motion of matter along the geodesics in general relativity. When quantum effects are important, there are additional conceptual complications that arise in quantum reference frames. Primed frames An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. x′, y′, a′. The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R. Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r, and the vector from the accelerated origin to the point is called r′. 
From the geometry of the situation Taking the first and second derivatives of this with respect to time where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame. These equations allow transformations between the two coordinate systems; for example, Newton's second law can be written as When there is accelerated motion due to a force being exerted there is manifestation of inertia. If an electric car designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical strength of manifestation of inertia. However, the manifestation of inertia does not prevent acceleration (or deceleration), for manifestation of inertia occurs in response to change in velocity due to a force. Seen from the perspective of a rotating frame of reference the manifestation of inertia appears to exert a force (either in centrifugal direction, or in a direction orthogonal to an object's motion, the Coriolis effect). A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried). This arrangement leads to the equation (see Fictitious force for a derivation): or, to solve for the acceleration in the accelerated frame, Multiplying through by the mass m gives where (Euler force), (Coriolis force), (centrifugal force). Separating non-inertial from inertial reference frames Theory Inertial and non-inertial reference frames can be distinguished by the absence or presence of fictitious forces. The presence of fictitious forces indicates the physical laws are not the simplest laws available, in terms of the special principle of relativity, a frame where fictitious forces are present is not an inertial frame: Bodies in non-inertial reference frames are subject to so-called fictitious forces (pseudo-forces); that is, forces that result from the acceleration of the reference frame itself and not from any physical force acting on the body. Examples of fictitious forces are the centrifugal force and the Coriolis force in rotating reference frames. To apply the Newtonian definition of an inertial frame, the understanding of separation between "fictitious" forces and "real" forces must be made clear. For example, consider a stationary object in an inertial frame. Being at rest, no net force is applied. But in a frame rotating about a fixed axis, the object appears to move in a circle, and is subject to centripetal force. How can it be decided that the rotating frame is a non-inertial frame? There are two approaches to this resolution: one approach is to look for the origin of the fictitious forces (the Coriolis force and the centrifugal force). It will be found there are no sources for these forces, no associated force carriers, no originating bodies. A second approach is to look at a variety of frames of reference. For any inertial frame, the Coriolis force and the centrifugal force disappear, so application of the principle of special relativity would identify these frames where the forces disappear as sharing the same and the simplest physical laws, and hence rule that the rotating frame is not an inertial frame. Newton examined this problem himself using rotating spheres, as shown in Figure 2 and Figure 3. 
He pointed out that if the spheres are not rotating, the tension in the tying string is measured as zero in every frame of reference. If the spheres only appear to rotate (that is, we are watching stationary spheres from a rotating frame), the zero tension in the string is accounted for by observing that the centripetal force is supplied by the centrifugal and Coriolis forces in combination, so no tension is needed. If the spheres really are rotating, the tension observed is exactly the centripetal force required by the circular motion. Thus, measurement of the tension in the string identifies the inertial frame: it is the one where the tension in the string provides exactly the centripetal force demanded by the motion as it is observed in that frame, and not a different value. That is, the inertial frame is the one where the fictitious forces vanish. For linear acceleration, Newton expressed the idea of undetectability of straight-line accelerations held in common: This principle generalizes the notion of an inertial frame. For example, an observer confined in a free-falling lift will assert that he himself is a valid inertial frame, even if he is accelerating under gravity, so long as he has no knowledge about anything outside the lift. So, strictly speaking, inertial frame is a relative concept. With this in mind, inertial frames can collectively be defined as a set of frames which are stationary or moving at constant velocity with respect to each other, so that a single inertial frame is defined as an element of this set. For these ideas to apply, everything observed in the frame has to be subject to a base-line, common acceleration shared by the frame itself. That situation would apply, for example, to the elevator example, where all objects are subject to the same gravitational acceleration, and the elevator itself accelerates at the same rate. Applications Inertial navigation systems used a cluster of gyroscopes and accelerometers to determine accelerations relative to inertial space. After a gyroscope is spun up in a particular orientation in inertial space, the law of conservation of angular momentum requires that it retain that orientation as long as no external forces are applied to it. Three orthogonal gyroscopes establish an inertial reference frame, and the accelerators measure acceleration relative to that frame. The accelerations, along with a clock, can then be used to calculate the change in position. Thus, inertial navigation is a form of dead reckoning that requires no external input, and therefore cannot be jammed by any external or internal signal source. A gyrocompass, employed for navigation of seagoing vessels, finds the geometric north. It does so, not by sensing the Earth's magnetic field, but by using inertial space as its reference. The outer casing of the gyrocompass device is held in such a way that it remains aligned with the local plumb line. When the gyroscope wheel inside the gyrocompass device is spun up, the way the gyroscope wheel is suspended causes the gyroscope wheel to gradually align its spinning axis with the Earth's axis. Alignment with the Earth's axis is the only direction for which the gyroscope's spinning axis can be stationary with respect to the Earth and not be required to change direction with respect to inertial space. After being spun up, a gyrocompass can reach the direction of alignment with the Earth's axis in as little as a quarter of an hour. 
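Since the Applications paragraph above notes that an inertial navigation system combines measured accelerations with a clock to track position, here is a minimal one-dimensional sketch of that dead-reckoning idea; the acceleration profile and time step are invented for illustration:

    # One-dimensional dead reckoning: integrate acceleration twice over time.
    dt = 0.1                        # seconds between accelerometer readings (assumed)
    accelerations = [1.0] * 50 + [0.0] * 50 + [-1.0] * 50   # m/s^2: speed up, coast, brake

    velocity = 0.0
    position = 0.0
    for a in accelerations:
        velocity += a * dt          # first integration: acceleration -> velocity
        position += velocity * dt   # second integration: velocity -> position

    print(round(velocity, 3), "m/s")   # ends near 0 m/s after braking
    print(round(position, 3), "m")     # displacement estimated without any external reference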
See also Absolute rotation Diffeomorphism Galilean invariance General covariance Local reference frame Lorentz covariance Newton's first law Quantum reference frame References Further reading Edwin F. Taylor and John Archibald Wheeler, Spacetime Physics, 2nd ed. (Freeman, NY, 1992) Albert Einstein, Relativity, the special and the general theories, 15th ed. (1954) Albert Einstein, On the Electrodynamics of Moving Bodies, included in The Principle of Relativity, page 38. Dover 1923 Rotation of the Universe B Ciobanu, I Radinchi Modeling the electric and magnetic fields in a rotating universe Rom. Journ. Phys., Vol. 53, Nos. 1–2, P. 405–415, Bucharest, 2008 Yuri N. Obukhov, Thoralf Chrobok, Mike Scherfner Shear-free rotating inflation Phys. Rev. D 66, 043518 (2002) [5 pages] Yuri N. Obukhov On physical foundations and observational effects of cosmic rotation (2000) Li-Xin Li Effect of the Global Rotation of the Universe on the Formation of Galaxies General Relativity and Gravitation, 30 (1998) P Birch Is the Universe rotating? Nature 298, 451 – 454 (29 July 1982) Kurt Gödel An example of a new type of cosmological solutions of Einstein's field equations of gravitation Rev. Mod. Phys., Vol. 21, p. 447, 1949. External links Stanford Encyclopedia of Philosophy entry showing scenes as viewed from both an inertial frame and a rotating frame of reference, visualizing the Coriolis and centrifugal forces. Classical mechanics Frames of reference Theory of relativity Orbits
Inertial frame of reference
[ "Physics", "Mathematics" ]
6,794
[ "Frames of reference", "Classical mechanics", "Theory of relativity", "Mechanics", "Coordinate systems" ]
14,843
https://en.wikipedia.org/wiki/Interstellar%20travel
Interstellar travel is the hypothetical travel of spacecraft between star systems. Due to the vast distances between the Solar System and nearby stars, interstellar travel is not practicable with current propulsion technologies. To travel between stars within a reasonable amount of time (decades or centuries), an interstellar spacecraft must reach a significant fraction of the speed of light, requiring enormous energy. Communication with such interstellar craft will experience years of delay due to the speed of light. Collisions with cosmic dust and gas at such speeds can be catastrophic for such spacecrafts. Crewed interstellar travel could possibly be conducted more slowly (far beyond the scale of a human lifetime) by making a generation ship. Hypothetical interstellar propulsion systems include nuclear pulse propulsion, fission-fragment rocket, fusion rocket, beamed solar sail, and antimatter rocket. The benefits of interstellar travel include detailed surveys of habitable exoplanets and distant stars, comprehensive search for extraterrestrial intelligence and space colonization. Even though five uncrewed spacecraft have left our Solar System, they are not "interstellar craft" because they are not purposefully designed to explore other star systems. Thus, as of the 2020s, interstellar spaceflight remains a popular trope in speculative future studies and science fiction. A civilization that has mastered interstellar travel is called an interstellar species. Challenges Interstellar distances Distances between the planets in the Solar System are often measured in astronomical units (AU), defined as the average distance between the Sun and Earth, some 150 million kilometres. Venus, the closest planet to Earth is (at closest approach) 0.28 AU away. Neptune, the farthest planet from the Sun, is 29.8 AU away. As of January 20, 2023, Voyager 1, the farthest human-made object from Earth, is 163 AU away, exiting the Solar System at a speed of 17 km/s (0.006% of the speed of light). The closest known star, Proxima Centauri, is approximately 40 trillion kilometres (about 268,000 AU) away, or over 9,000 times farther away than Neptune. Because of this, distances between stars are usually expressed in light-years (defined as the distance that light travels in vacuum in one Julian year) or in parsecs (one parsec is 3.26 ly, the distance at which stellar parallax is exactly one arcsecond, hence the name). Light in a vacuum travels around 300,000 kilometres per second, so 1 light-year is about 9.46 trillion kilometres, or about 63,200 AU. Hence, Proxima Centauri is approximately 4.243 light-years from Earth. Another way of understanding the vastness of interstellar distances is by scaling: One of the closest stars to the Sun, Alpha Centauri A (a Sun-like star that is one of two companions of Proxima Centauri), can be pictured by scaling down the Earth–Sun distance to one metre. On this scale, the distance to Alpha Centauri A would be roughly 270 kilometres. The fastest outward-bound spacecraft yet sent, Voyager 1, has covered 1/390 of a light-year in 46 years and is currently moving at 1/17,600 the speed of light. At this rate, a journey to Proxima Centauri would take 75,000 years. Required energy A significant factor contributing to the difficulty is the energy that must be supplied to obtain a reasonable travel time. A lower bound for the required energy is the kinetic energy K = ½ m v², where m is the final mass. If deceleration on arrival is desired and cannot be achieved by any means other than the engines of the ship, then the lower bound for the required energy is doubled to m v². 
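A rough numerical check of this kinetic-energy lower bound (a sketch only: the one-tonne payload and one-tenth of light speed are the figures discussed later in this section, and relativistic corrections, which are small at 0.1c, are ignored):

    C = 299_792_458.0        # speed of light, m/s
    m = 1000.0               # payload mass, kg (one tonne)
    v = 0.1 * C              # cruise speed: one-tenth of the speed of light

    ke = 0.5 * m * v**2      # classical kinetic energy, the lower bound discussed above
    print(f"{ke:.3e} J")                                   # about 4.5e17 J
    print(f"{ke / 3.6e15:.0f} TWh")                        # about 125 terawatt-hours
    print(f"{2 * ke / 3.6e15:.0f} TWh with deceleration")  # doubled if the ship must also brake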
The velocity for a crewed round trip of a few decades to even the nearest star is several thousand times greater than that of present space vehicles. This means that, due to the v² term in the kinetic energy formula, millions of times as much energy is required. Accelerating one ton to one-tenth of the speed of light requires at least 4.5 × 10¹⁷ joules, or about 125 terawatt-hours (world energy consumption in 2008 was 143,851 terawatt-hours), without factoring in the efficiency of the propulsion mechanism. This energy has to be generated onboard from stored fuel, harvested from the interstellar medium, or projected over immense distances. Interstellar medium Knowledge of the properties of the interstellar gas and dust through which the vehicle must pass is essential for the design of any interstellar space mission. A major issue with traveling at extremely high speeds is that, due to the requisite high relative speeds and large kinetic energies, collisions with interstellar dust could cause considerable damage to the craft. Various shielding methods to mitigate this problem have been proposed. Larger objects (such as macroscopic dust grains) are far less common, but would be much more destructive. The risks of impacting such objects, and mitigation methods, have been discussed in the literature, but many unknowns remain. An additional consideration is that, due to the non-homogeneous distribution of interstellar matter around the Sun, these risks would vary between different trajectories. Although a high-density interstellar medium may cause difficulties for many interstellar travel concepts, interstellar ramjets, and some proposed concepts for decelerating interstellar spacecraft, would actually benefit from a denser interstellar medium. Hazards The crew of an interstellar ship would face several significant hazards, including the psychological effects of long-term isolation, the physiological effects of extreme acceleration, the effects of exposure to ionising radiation, and the physiological effects of weightlessness on the muscles, joints, bones, immune system, and eyes. There also exists the risk of impact by micrometeoroids and other space debris. These risks represent challenges that have yet to be overcome. Wait calculation The speculative fiction writer and physicist Robert L. Forward has argued that an interstellar mission that cannot be completed within 50 years should not be started at all. Instead, assuming that a civilization is still on an increasing curve of propulsion system velocity and has not yet reached the limit, the resources should be invested in designing a better propulsion system. This is because a slow spacecraft would probably be passed by another mission sent later with more advanced propulsion (the incessant obsolescence postulate). In 2006, Andrew Kennedy calculated ideal departure dates for a trip to Barnard's Star using a more precise version of the wait calculation, in which, for a given destination and growth rate in propulsion capacity, there is a departure point that overtakes earlier launches and will not be overtaken by later ones. He concluded that "an interstellar journey of 6 light years can best be made in about 635 years from now if growth continues at about 1.4% per annum", or approximately 2641 AD. It may be the most significant calculation for competing cultures occupying the galaxy. Prime targets for interstellar travel There are 59 known stellar systems within 40 light years of the Sun, containing 81 visible stars.
The following could be considered prime targets for interstellar missions: Existing astronomical technology is capable of finding planetary systems around these objects, increasing their potential for exploration. Proposed methods Slow, uncrewed probes "Slow" interstellar missions (still fast by other standards) based on current and near-future propulsion technologies are associated with trip times starting from about several decades to thousands of years. These missions consist of sending a robotic probe to a nearby star for exploration, similar to interplanetary probes like those used in the Voyager program. By taking along no crew, the cost and complexity of the mission is significantly reduced, as is the mass that needs to be accelerated, although technology lifetime is still a significant issue next to obtaining a reasonable speed of travel. Proposed concepts include Project Daedalus, Project Icarus, Project Dragonfly, Project Longshot, and more recently Breakthrough Starshot. Fast, uncrewed probes Nanoprobes Near-lightspeed nano spacecraft might be possible within the near future built on existing microchip technology with a newly developed nanoscale thruster. Researchers at the University of Michigan are developing thrusters that use nanoparticles as propellant. Their technology is called "nanoparticle field extraction thruster", or nanoFET. These devices act like small particle accelerators shooting conductive nanoparticles out into space. Michio Kaku, a theoretical physicist, has suggested that clouds of "smart dust" be sent to the stars, which may become possible with advances in nanotechnology. Kaku also notes that a large number of nanoprobes would need to be sent due to the vulnerability of very small probes to be easily deflected by magnetic fields, micrometeorites and other dangers to ensure the chances that at least one nanoprobe will survive the journey and reach the destination. As a near-term solution, small, laser-propelled interstellar probes, based on current CubeSat technology were proposed in the context of Project Dragonfly. Slow, crewed missions In crewed missions, the duration of a slow interstellar journey presents a major obstacle and existing concepts deal with this problem in different ways. They can be distinguished by the "state" in which humans are transported on-board of the spacecraft. Generation ships A generation ship (or world ship) is a type of interstellar ark in which the crew that arrives at the destination is descended from those who started the journey. Generation ships are not currently feasible because of the difficulty of constructing a ship of the enormous required scale and the great biological and sociological problems that life aboard such a ship raises. Suspended animation Scientists and writers have postulated various techniques for suspended animation. These include human hibernation and cryonic preservation. Although neither is currently practical, they offer the possibility of sleeper ships in which the passengers lie inert for the long duration of the voyage. Frozen embryos A robotic interstellar mission carrying some number of frozen early stage human embryos is another theoretical possibility. This method of space colonization requires, among other things, the development of an artificial uterus, the prior detection of a habitable terrestrial planet, and advances in the field of fully autonomous mobile robots and educational robots that would replace human parents. 
Island hopping through interstellar space Interstellar space is not completely empty; it contains trillions of icy bodies ranging from small asteroids (Oort cloud) to possible rogue planets. There may be ways to take advantage of these resources for a good part of an interstellar trip, slowly hopping from body to body or setting up waystations along the way. Fast, crewed missions If a spaceship could average 10 percent of light speed (and decelerate at the destination, for human crewed missions), this would be enough to reach Proxima Centauri in forty years. Several propulsion concepts have been proposed that might be eventually developed to accomplish this (see § Propulsion below), but none of them are ready for near-term (few decades) developments at acceptable cost. Time dilation Physicists generally believe faster-than-light travel is impossible. Relativistic time dilation allows a traveler to experience time more slowly, the closer their speed is to the speed of light. This apparent slowing becomes noticeable when velocities above 80% of the speed of light are attained. Clocks aboard an interstellar ship would run slower than Earth clocks, so if a ship's engines were capable of continuously generating around 1 g of acceleration (which is comfortable for humans), the ship could reach almost anywhere in the galaxy and return to Earth within 40 years ship-time (see diagram). Upon return, there would be a difference between the time elapsed on the astronaut's ship and the time elapsed on Earth. For example, a spaceship could travel to a star 32 light-years away, initially accelerating at a constant 1.03g (i.e. 10.1 m/s2) for 1.32 years (ship time), then stopping its engines and coasting for the next 17.3 years (ship time) at a constant speed, then decelerating again for 1.32 ship-years, and coming to a stop at the destination. After a short visit, the astronaut could return to Earth the same way. After the full round-trip, the clocks on board the ship show that 40 years have passed, but according to those on Earth, the ship comes back 76 years after launch. From the viewpoint of the astronaut, onboard clocks seem to be running normally. The star ahead seems to be approaching at a speed of 0.87 light years per ship-year. The universe would appear contracted along the direction of travel to half the size it had when the ship was at rest; the distance between that star and the Sun would seem to be 16 light years as measured by the astronaut. At higher speeds, the time on board will run even slower, so the astronaut could travel to the center of the Milky Way (30,000 light years from Earth) and back in 40 years ship-time. But the speed according to Earth clocks will always be less than 1 light year per Earth year, so, when back home, the astronaut will find that more than 60 thousand years will have passed on Earth. Constant acceleration Regardless of how it is achieved, a propulsion system that could produce acceleration continuously from departure to arrival would be the fastest method of travel. A constant acceleration journey is one where the propulsion system accelerates the ship at a constant rate for the first half of the journey, and then decelerates for the second half, so that it arrives at the destination stationary relative to where it began. If this were performed with an acceleration similar to that experienced at the Earth's surface, it would have the added advantage of producing artificial "gravity" for the crew. 
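The 1 g itinerary sketched above can be related to the standard "relativistic rocket" relations for a leg flown at constant proper acceleration from rest: t = (c/a)·sinh(aτ/c) and x = (c²/a)·(cosh(aτ/c) − 1), where τ is ship (proper) time. The minimal Python sketch below is illustrative only; the function name and the worked example are assumptions, and because the figures quoted in the text are rounded the outputs will not match them exactly.

    # Illustrative sketch of the textbook constant-proper-acceleration relations
    # (not taken from the article's references).
    import math

    C = 299_792_458.0          # m/s
    LY = 9.4607e15             # metres per light-year
    YEAR = 3.156e7             # seconds per Julian year

    def accelerate_leg(a_ms2: float, ship_time_years: float):
        """Return (earth_time_years, distance_ly, final_speed_fraction_of_c)
        for a leg flown from rest at constant proper acceleration a_ms2."""
        tau = ship_time_years * YEAR
        phi = a_ms2 * tau / C                              # rapidity reached
        earth_time = (C / a_ms2) * math.sinh(phi) / YEAR
        distance = (C ** 2 / a_ms2) * (math.cosh(phi) - 1.0) / LY
        return earth_time, distance, math.tanh(phi)

    if __name__ == "__main__":
        t, d, beta = accelerate_leg(10.1, 1.32)            # ~1.03 g for 1.32 ship-years
        print(f"Earth time {t:.2f} yr, distance {d:.2f} ly, speed {beta:.2f} c")

For the 1.32 ship-year acceleration leg at 10.1 m/s² this yields roughly 1.8 Earth years, about 1.1 light-years covered, and a cruise speed near 0.89 c, broadly consistent with the approximate figures quoted above.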
Supplying the energy required, however, would be prohibitively expensive with current technology. From the perspective of a planetary observer, the ship will appear to accelerate steadily at first, but then more gradually as it approaches the speed of light (which it cannot exceed). It will undergo hyperbolic motion. The ship will be close to the speed of light after about a year of accelerating and remain at that speed until it brakes for the end of the journey. From the perspective of an onboard observer, the crew will feel a gravitational field opposite the engine's acceleration, and the universe ahead will appear to fall in that field, undergoing hyperbolic motion. As part of this, distances between objects in the direction of the ship's motion will gradually contract until the ship begins to decelerate, at which time an onboard observer's experience of the gravitational field will be reversed. When the ship reaches its destination, if it were to exchange a message with its origin planet, it would find that less time had elapsed on board than had elapsed for the planetary observer, due to time dilation and length contraction. The result is an impressively fast journey for the crew. Propulsion Rocket concepts All rocket concepts are limited by the rocket equation, which sets the characteristic velocity available as a function of exhaust velocity and mass ratio, the ratio of initial (M0, including fuel) to final (M1, fuel depleted) mass. Very high specific power, the ratio of power to total vehicle mass, is required to reach interstellar targets within sub-century time-frames. Some heat transfer is inevitable, resulting in an extreme thermal load. Thus, for interstellar rocket concepts of all technologies, a key engineering problem (seldom explicitly discussed) is limiting the heat transfer from the exhaust stream back into the vehicle. Ion engine Ion engines are a form of electric propulsion, used by spacecraft such as Dawn. In an ion engine, electric power is used to create charged particles of the propellant, usually the gas xenon, and accelerate them to extremely high velocities. The exhaust velocity of conventional rockets is limited to about 5 km/s by the chemical energy stored in the fuel's molecular bonds. They produce a high thrust (about 10⁶ N), but they have a low specific impulse, and that limits their top speed. By contrast, ion engines have low thrust, but the top speed in principle is limited only by the electrical power available on the spacecraft and by the gas ions being accelerated. The exhaust speed of the charged particles ranges from 15 km/s to 35 km/s. Nuclear fission powered Fission-electric Nuclear-electric or plasma engines, operating for long periods at low thrust and powered by fission reactors, have the potential to reach speeds much greater than chemically powered vehicles or nuclear-thermal rockets. Such vehicles probably have the potential to power solar system exploration with reasonable trip times within the current century. Because of their low-thrust propulsion, they would be limited to off-planet, deep-space operation. Electrically powered spacecraft propulsion driven by a portable power source, such as a nuclear reactor, produces only small accelerations and would take centuries to reach, for example, 15% of the speed of light, making it unsuitable for interstellar flight within a single human lifetime. Fission-fragment Fission-fragment rockets use nuclear fission to create high-speed jets of fission fragments, which are ejected at speeds of up to several percent of the speed of light.
With fission, the energy output is approximately 0.1% of the total mass-energy of the reactor fuel and limits the effective exhaust velocity to about 5% of the velocity of light. For maximum velocity, the reaction mass should optimally consist of fission products, the "ash" of the primary energy source, so no extra reaction mass need be bookkept in the mass ratio. Nuclear pulse Based on work in the late 1950s to the early 1960s, it has been technically possible to build spaceships with nuclear pulse propulsion engines, i.e. driven by a series of nuclear explosions. This propulsion system contains the prospect of very high specific impulse and high specific power. Project Orion team member Freeman Dyson proposed in 1968 an interstellar spacecraft using nuclear pulse propulsion that used pure deuterium fusion detonations with a very high fuel-burnup fraction. He computed an exhaust velocity of 15,000 km/s and a 100,000-tonne space vehicle able to achieve a 20,000 km/s delta-v allowing a flight-time to Alpha Centauri of 130 years. Later studies indicate that the top cruise velocity that can theoretically be achieved by a Teller-Ulam thermonuclear unit powered Orion starship, assuming no fuel is saved for slowing back down, is about 8% to 10% of the speed of light (0.08-0.1c). An atomic (fission) Orion can achieve perhaps 3%-5% of the speed of light. A nuclear pulse drive starship powered by fusion-antimatter catalyzed nuclear pulse propulsion units would be similarly in the 10% range and pure matter-antimatter annihilation rockets would be theoretically capable of obtaining a velocity between 50% and 80% of the speed of light. In each case saving fuel for slowing down halves the maximum speed. The concept of using a magnetic sail to decelerate the spacecraft as it approaches its destination has been discussed as an alternative to using propellant, this would allow the ship to travel near the maximum theoretical velocity. Alternative designs utilizing similar principles include Project Longshot, Project Daedalus, and Mini-Mag Orion. The principle of external nuclear pulse propulsion to maximize survivable power has remained common among serious concepts for interstellar flight without external power beaming and for very high-performance interplanetary flight. In the 1970s the Nuclear Pulse Propulsion concept further was refined by Project Daedalus by use of externally triggered inertial confinement fusion, in this case producing fusion explosions via compressing fusion fuel pellets with high-powered electron beams. Since then, lasers, ion beams, neutral particle beams and hyper-kinetic projectiles have been suggested to produce nuclear pulses for propulsion purposes. A current impediment to the development of any nuclear-explosion-powered spacecraft is the 1963 Partial Test Ban Treaty, which includes a prohibition on the detonation of any nuclear devices (even non-weapon based) in outer space. This treaty would, therefore, need to be renegotiated, although a project on the scale of an interstellar mission using currently foreseeable technology would probably require international cooperation on at least the scale of the International Space Station. Another issue to be considered, would be the g-forces imparted to a rapidly accelerated spacecraft, cargo, and passengers inside (see Inertia negation). 
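The rocket-equation limit mentioned earlier under "Rocket concepts" can be illustrated with a short sketch. It evaluates the classical Tsiolkovsky relation Δv = vₑ·ln(M0/M1), rearranged for the mass ratio; the first example uses the 15,000 km/s exhaust velocity and 20,000 km/s delta-v quoted above for Dyson's 1968 study, while the "fusion drive" exhaust velocity in the second example is a hypothetical value chosen purely for illustration.

    # Illustrative sketch (not from the article's references): the classical
    # Tsiolkovsky rocket equation delta_v = v_e * ln(M0/M1), rearranged to give
    # the initial-to-final mass ratio a mission delta-v demands.
    import math

    def mass_ratio(delta_v_km_s: float, exhaust_velocity_km_s: float) -> float:
        """Initial-to-final mass ratio M0/M1 required for the given delta-v."""
        return math.exp(delta_v_km_s / exhaust_velocity_km_s)

    if __name__ == "__main__":
        # Dyson's 1968 figures quoted above: 15,000 km/s exhaust, 20,000 km/s delta-v.
        print(f"Orion-style pulse unit:        M0/M1 = {mass_ratio(20_000, 15_000):.1f}")
        # Hypothetical fusion drive with 10,000 km/s exhaust, accelerating to 0.1 c.
        print(f"Hypothetical fusion drive 0.1c: M0/M1 = {mass_ratio(29_979, 10_000):.1f}")

The exponential dependence is the key point: when the required delta-v is only a modest multiple of the exhaust velocity the mass ratio stays in the single digits or tens, but it grows without practical bound as the ratio increases, which is why high exhaust velocities dominate interstellar rocket concepts.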
Nuclear fusion rockets Fusion rocket starships, powered by nuclear fusion reactions, should conceivably be able to reach speeds of the order of 10% of that of light, based on energy considerations alone. In theory, a large number of stages could push a vehicle arbitrarily close to the speed of light. These would "burn" such light element fuels as deuterium, tritium, ³He, ¹¹B, and ⁷Li. Because fusion yields about 0.3–0.9% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases <0.1% of the fuel's mass-energy. The maximum exhaust velocities potentially energetically available are correspondingly higher than for fission, typically 4–10% of the speed of light. However, the most easily achievable fusion reactions release a large fraction of their energy as high-energy neutrons, which are a significant source of energy loss. Thus, although these concepts seem to offer the best (nearest-term) prospects for travel to the nearest stars within a (long) human lifetime, they still involve massive technological and engineering difficulties, which may turn out to be intractable for decades or centuries. Early studies include Project Daedalus, performed by the British Interplanetary Society in 1973–1978, and Project Longshot, a student project sponsored by NASA and the US Naval Academy, completed in 1988. Another fairly detailed vehicle system, "Discovery II", designed and optimized for crewed Solar System exploration, based on the D–³He reaction but using hydrogen as reaction mass, has been described by a team from NASA's Glenn Research Center. It achieves characteristic velocities of >300 km/s with an acceleration of ~1.7 × 10⁻³ g, a ship initial mass of ~1,700 metric tons, and a payload fraction above 10%. Although these are still far short of the requirements for interstellar travel on human timescales, the study seems to represent a reasonable benchmark towards what may be approachable within several decades, which is not impossibly beyond the current state of the art. Based on the concept's 2.2% burnup fraction, it could achieve a pure fusion-product exhaust velocity of ~3,000 km/s. Antimatter rockets An antimatter rocket would have a far higher energy density and specific impulse than any other proposed class of rocket. If energy resources and efficient production methods are found to make antimatter in the quantities required and store it safely, it would be theoretically possible to reach speeds of several tens of percent of that of light. Whether antimatter propulsion could lead to the higher speeds (>90% that of light) at which relativistic time dilation would become more noticeable, thus making time pass at a slower rate for the travelers as perceived by an outside observer, is doubtful owing to the large quantity of antimatter that would be required. Assuming that production and storage of antimatter were to become feasible, two further issues would need to be considered. First, in the annihilation of antimatter, much of the energy is lost as high-energy gamma radiation, and especially also as neutrinos, so that only about 40% of mc² would actually be available if the antimatter were simply allowed to annihilate thermally into radiation. Even so, the energy available for propulsion would be substantially higher than the ~1% of mc² yield of nuclear fusion, the next-best rival candidate. Second, heat transfer from the exhaust to the vehicle seems likely to transfer enormous wasted energy into the ship (e.g.
for 0.1g ship acceleration, approaching 0.3 trillion watts per ton of ship mass), considering the large fraction of the energy that goes into penetrating gamma rays. Even assuming shielding was provided to protect the payload (and passengers on a crewed vehicle), some of the energy would inevitably heat the vehicle, and may thereby prove a limiting factor if useful accelerations are to be achieved. More recently, Friedwardt Winterberg proposed that a matter-antimatter GeV gamma ray laser photon rocket is possible by a relativistic proton-antiproton pinch discharge, where the recoil from the laser beam is transmitted by the Mössbauer effect to the spacecraft. Rockets with an external energy source Rockets deriving their power from external sources, such as a laser, could replace their internal energy source with an energy collector, potentially reducing the mass of the ship greatly and allowing much higher travel speeds. Geoffrey A. Landis proposed an interstellar probe propelled by an ion thruster powered by the energy beamed to it from a base station laser. Lenard and Andrews proposed using a base station laser to accelerate nuclear fuel pellets towards a Mini-Mag Orion spacecraft that ignites them for propulsion. Non-rocket concepts A problem with all traditional rocket propulsion methods is that the spacecraft would need to carry its fuel with it, thus making it very massive, in accordance with the rocket equation. Several concepts attempt to escape from this problem: RF resonant cavity thruster A radio frequency (RF) resonant cavity thruster is a device that is claimed to be a spacecraft thruster. In 2016, the Advanced Propulsion Physics Laboratory at NASA reported observing a small apparent thrust from one such test, a result not since replicated. One of the designs is called EMDrive. In December 2002, Satellite Propulsion Research Ltd described a working prototype with an alleged total thrust of about 0.02 newtons powered by an 850 W cavity magnetron. The device could operate for only a few dozen seconds before the magnetron failed, due to overheating. The latest test on the EMDrive concluded that it does not work. Helical engine Proposed in 2019 by NASA scientist Dr. David Burns, the helical engine concept would use a particle accelerator to accelerate particles to near the speed of light. Since particles traveling at such speeds acquire more mass, it is believed that this mass change could create acceleration. According to Burns, the spacecraft could theoretically reach 99% the speed of light. Interstellar ramjets In 1960, Robert W. Bussard proposed the Bussard ramjet, a fusion rocket in which a huge scoop would collect the diffuse hydrogen in interstellar space, "burn" it on the fly using a proton–proton chain reaction, and expel it out of the back. Later calculations with more accurate estimates suggest that the thrust generated would be less than the drag caused by any conceivable scoop design. Yet the idea is attractive because the fuel would be collected en route (commensurate with the concept of energy harvesting), so the craft could theoretically accelerate to near the speed of light. The limitation is due to the fact that the reaction can only accelerate the propellant to 0.12c. Thus the drag of catching interstellar dust and the thrust of accelerating that same dust to 0.12c would be the same when the speed is 0.12c, preventing further acceleration. 
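The drag-versus-thrust argument for the Bussard ramjet given above can be sketched with a simple, non-relativistic momentum balance. This is an illustrative simplification, not a reproduction of the cited calculations: per unit of interstellar mass scooped (at rest in the star's frame), the ship loses momentum proportional to its own speed and gains at most momentum proportional to the ~0.12 c that the proton–proton reaction energy can impart, so the net impulse vanishes as the ship approaches 0.12 c.

    # Illustrative momentum-balance sketch for the ramjet speed limit described
    # above (a Newtonian simplification, for intuition only).
    C = 299_792_458.0
    V_EXHAUST = 0.12 * C    # maximum propellant speed from fusion energy (from the text)

    def net_impulse_per_kg(ship_speed: float) -> float:
        """Net momentum gain (kg*m/s) per kg of interstellar gas collected."""
        return V_EXHAUST - ship_speed

    for fraction in (0.02, 0.06, 0.10, 0.12):
        print(f"v = {fraction:.2f} c  ->  net impulse per kg = "
              f"{net_impulse_per_kg(fraction * C):.3e}")

The printout shows the net impulse per kilogram collected shrinking toward zero as the ship's speed approaches 0.12 c, which is the balance point described in the paragraph above.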
Beamed propulsion A light sail or magnetic sail powered by a massive laser or particle accelerator in the home star system could potentially reach even greater speeds than rocket- or pulse-propulsion methods, because it would not need to carry its own reaction mass and therefore would only need to accelerate the craft's payload. Robert L. Forward proposed a means for decelerating an interstellar craft with a 100-kilometer light sail in the destination star system, without requiring a laser array to be present in that system. In this scheme, a 30-kilometer secondary sail is deployed to the rear of the spacecraft, while the large primary sail is detached from the craft to keep moving forward on its own. Light is reflected from the large primary sail to the secondary sail, which is used to decelerate the secondary sail and the spacecraft payload. In 2002, Geoffrey A. Landis of NASA's Glenn Research Center also proposed a laser-powered propulsion sail ship that would host a diamond sail (a few nanometers thick) powered by solar energy. With this proposal, the interstellar ship would, theoretically, be able to reach 10 percent of the speed of light. It has also been proposed to use beam-powered propulsion to accelerate a spacecraft and electromagnetic propulsion to decelerate it, thus eliminating the problem that the Bussard ramjet has with the drag produced during acceleration. A magnetic sail could also decelerate at its destination without depending on carried fuel or a driving beam in the destination system, by interacting with the plasma found in the solar wind of the destination star and the interstellar medium. The following table lists some example concepts using beamed laser propulsion as proposed by the physicist Robert L. Forward: Interstellar travel catalog to use photogravitational assists for a full stop The following table is based on work by Heller, Hippke and Kervella. Successive assists at α Cen A and B could allow travel times of about 75 yr to both stars. The lightsail has a nominal mass-to-surface ratio (σnom) of 8.6 × 10⁻⁴ gram m⁻² for a nominal graphene-class sail, a sail area of about 10⁵ m² = (316 m)², and a velocity of up to 37,300 km s⁻¹ (12.5% c). Pre-accelerated fuel Achieving start-stop interstellar trip times of less than a human lifetime requires mass ratios of between 1,000 and 1,000,000, even for the nearer stars. This could be achieved by multi-staged vehicles on a vast scale. Alternatively, large linear accelerators could propel fuel to fission-propelled space vehicles, avoiding the limitations of the rocket equation. Dynamic soaring Dynamic soaring as a way to travel across interstellar space has been proposed. Theoretical concepts Transmission of minds with light Uploaded human minds or AI could be transmitted with laser or radio signals at the speed of light. This requires a receiver at the destination, which would first have to be set up, e.g., by humans, probes, self-replicating machines (potentially along with AI or uploaded humans), or an alien civilization (which might also be in a different galaxy, perhaps a Kardashev type III civilization). Artificial black hole A theoretical idea for enabling interstellar travel is to propel a starship by creating an artificial black hole and using a parabolic reflector to reflect its Hawking radiation. Although beyond current technological capabilities, a black hole starship offers some advantages compared to other possible methods.
Getting the black hole to act as a power source and engine also requires a way to convert the Hawking radiation into energy and thrust. One potential method involves placing the hole at the focal point of a parabolic reflector attached to the ship, creating forward thrust. A slightly easier, but less efficient method would involve simply absorbing all the gamma radiation heading towards the fore of the ship to push it onwards, and let the rest shoot out the back. Faster-than-light travel Scientists and authors have postulated a number of ways by which it might be possible to surpass the speed of light, but even the most serious-minded of these are highly speculative. It is also debatable whether faster-than-light travel is physically possible, in part because of causality concerns: travel faster than light may, under certain conditions, permit travel backwards in time within the context of special relativity. Proposed mechanisms for faster-than-light travel within the theory of general relativity require the existence of exotic matter and, it is not known if it could be produced in sufficient quantities, if at all. Alcubierre drive In physics, the Alcubierre drive is based on an argument, within the framework of general relativity and without the introduction of wormholes, that it is possible to modify spacetime in a way that allows a spaceship to travel with an arbitrarily large speed by a local expansion of spacetime behind the spaceship and an opposite contraction in front of it. Nevertheless, this concept would require the spaceship to incorporate a region of exotic matter, or the hypothetical concept of negative mass. Wormholes Wormholes are conjectural distortions in spacetime that theorists postulate could connect two arbitrary points in the universe, across an Einstein–Rosen Bridge. It is not known whether wormholes are possible in practice. Although there are solutions to the Einstein equation of general relativity that allow for wormholes, all of the currently known solutions involve some assumption, for example the existence of negative mass, which may be unphysical. However, Cramer et al. argue that such wormholes might have been created in the early universe, stabilized by cosmic strings. The general theory of wormholes is discussed by Visser in the book Lorentzian Wormholes. Designs and studies Project Hyperion Project Hyperion has looked into various feasibility issues of crewed interstellar travel. Notable results of the project include an assessment of world ship system architectures and adequate population size. Its members continue to publish on crewed interstellar travel in collaboration with the Initiative for Interstellar Studies. Enzmann starship The Enzmann starship, as detailed by G. Harry Stine in the October 1973 issue of Analog, was a design for a future starship, based on the ideas of Robert Duncan-Enzmann. The spacecraft itself as proposed used a 12,000,000 ton ball of frozen deuterium to power 12–24 thermonuclear pulse propulsion units. Twice as long as the Empire State Building is tall and assembled in-orbit, the spacecraft was part of a larger project preceded by interstellar probes and telescopic observation of target star systems. NASA research NASA has been researching interstellar travel since its formation, translating important foreign language papers and conducting early studies on applying fusion propulsion, in the 1960s, and laser propulsion, in the 1970s, to interstellar travel. 
In 1994, NASA and JPL cosponsored a "Workshop on Advanced Quantum/Relativity Theory Propulsion" to "establish and use new frames of reference for thinking about the faster-than-light (FTL) question". The NASA Breakthrough Propulsion Physics Program (terminated in FY 2003 after a 6-year, $1.2-million study, because "No breakthroughs appear imminent.") identified some breakthroughs that are needed for interstellar travel to be possible. Geoffrey A. Landis of NASA's Glenn Research Center states that a laser-powered interstellar sail ship could possibly be launched within 50 years, using new methods of space travel. "I think that ultimately we're going to do it, it's just a question of when and who," Landis said in an interview. Rockets are too slow to send humans on interstellar missions. Instead, he envisions interstellar craft with extensive sails, propelled by laser light to about one-tenth the speed of light. It would take such a ship about 43 years to reach Alpha Centauri if it passed through the system without stopping. Slowing down to stop at Alpha Centauri could increase the trip to 100 years, whereas a journey without slowing down raises the issue of making sufficiently accurate and useful observations and measurements during a fly-by. 100 Year Starship study The 100 Year Starship (100YSS) study was the name of a one-year project to assess the attributes of and lay the groundwork for an organization that can carry forward the 100 Year Starship vision. 100YSS-related symposia were organized between 2011 and 2015. Harold ("Sonny") White from NASA's Johnson Space Center is a member of Icarus Interstellar, the nonprofit foundation whose mission is to realize interstellar flight before the year 2100. At the 2012 meeting of 100YSS, he reported using a laser to try to warp spacetime by 1 part in 10 million with the aim of helping to make interstellar travel possible. Other designs Project Orion, human crewed interstellar ship (1958–1968). Project Daedalus, uncrewed interstellar probe (1973–1978). Starwisp, uncrewed interstellar probe (1985). Project Longshot, uncrewed interstellar probe (1987–1988). Starseed/launcher, fleet of uncrewed interstellar probes (1996). Project Valkyrie, human crewed interstellar ship (2009). Project Icarus, uncrewed interstellar probe (2009–2014). Sun-diver, uncrewed interstellar probe. Project Dragonfly, small laser-propelled interstellar probe (2013–2015). Breakthrough Starshot, fleet of uncrewed interstellar probes, announced on 12 April 2016. Solar One, crewed spacecraft that would combine beamed-powered propulsion, electromagnetic propulsion, and nuclear propulsion (2020). Non-profit organizations A few organisations dedicated to interstellar propulsion research and advocacy for the case exist worldwide. These are still in their infancy, but are already backed up by a membership of a wide variety of scientists, students and professionals. Initiative for Interstellar Studies (UK) Tau Zero Foundation (USA) Limitless Space Institute (USA) Tennessee Valley Interstellar Workshop (TVIW), business name Interstellar Research Group (IRG) (USA) Feasibility The energy requirements make interstellar travel very difficult. It has been reported that at the 2008 Joint Propulsion Conference, multiple experts opined that it was improbable that humans would ever explore beyond the Solar System. Brice N. 
Cassenti, an associate professor with the Department of Engineering and Science at Rensselaer Polytechnic Institute, stated that at least 100 times the total energy output of the entire world [in a given year] would be required to send a probe to the nearest star. Astrophysicist Sten Odenwald stated that the basic problem is that, through intensive studies of thousands of detected exoplanets, most of the closest destinations within 50 light years do not yield Earth-like planets in their stars' habitable zones. Given the multitrillion-dollar expense of some of the proposed technologies, travelers will have to spend up to 200 years traveling at 20% of the speed of light to reach the best known destinations. Moreover, once the travelers arrive at their destination (by any means), they will not be able to travel down to the surface of the target world and set up a colony unless the atmosphere is non-lethal. The prospect of making such a journey, only to spend the rest of the colony's life inside a sealed habitat and venturing outside in a spacesuit, may eliminate many prospective targets from the list. Moving at a speed close to the speed of light and encountering even a tiny stationary object like a grain of sand will have fatal consequences. For example, a gram of matter moving at 90% of the speed of light contains a kinetic energy corresponding to a small nuclear bomb (around 30 kt of TNT). One of the major stumbling blocks, assuming all other considerations are solved, is having enough onboard spares and repair facilities for such a lengthy journey without access to all the resources available on Earth. Interstellar missions not for human benefit Explorative high-speed missions to Alpha Centauri, as planned for by the Breakthrough Starshot initiative, are projected to be realizable within the 21st century. It is alternatively possible to plan for uncrewed slow-cruising missions taking millennia to arrive. These probes would not be for human benefit in the sense that one cannot foresee whether anybody would still be on Earth to take an interest in the science data transmitted back. An example would be the Genesis mission, which aims to bring unicellular life, in the spirit of directed panspermia, to habitable but otherwise barren planets. Comparatively slow-cruising Genesis probes, travelling at a small fraction of the speed of light, could be decelerated using a magnetic sail. Uncrewed missions not for human benefit would hence be feasible. Discovery of Earth-like planets On August 24, 2016, the discovery of the Earth-size exoplanet Proxima Centauri b, orbiting in the habitable zone of Proxima Centauri 4.2 light-years away, was announced. This is the nearest known potentially habitable exoplanet outside our Solar System. In February 2017, NASA announced that its Spitzer Space Telescope had revealed seven Earth-size planets in the TRAPPIST-1 system, orbiting an ultra-cool dwarf star 40 light-years away from the Solar System. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water. The discovery set a new record for the greatest number of habitable-zone planets found around a single star outside the Solar System. All of these seven planets could have liquid water – the key to life as we know it – under the right atmospheric conditions, but the chances are highest with the three in the habitable zone.
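The roughly 30 kt figure quoted above under Feasibility, for a one-gram grain at 90% of the speed of light, can be checked with the relativistic kinetic-energy formula KE = (γ − 1)mc². The short Python sketch below is an illustrative check only; the constant names are assumptions for this example.

    # Illustrative check of the impact-hazard figure quoted under "Feasibility":
    # relativistic kinetic energy of a one-gram grain at 0.9 c, in kt of TNT.
    import math

    C = 299_792_458.0
    KT_TNT = 4.184e12          # joules per kiloton of TNT

    def relativistic_ke(mass_kg: float, v_fraction_of_c: float) -> float:
        gamma = 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)
        return (gamma - 1.0) * mass_kg * C ** 2

    if __name__ == "__main__":
        ke = relativistic_ke(0.001, 0.9)
        print(f"{ke:.2e} J  ~= {ke / KT_TNT:.0f} kt TNT")   # roughly 28 kt

The result, about 1.2 × 10¹⁴ J or 28 kt of TNT, is consistent with the "around 30 kt" estimate in the text.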
See also Levels of spaceflight: Suborbital, orbital, interplanetary, interstellar and intergalactic Interstellar object List of artificial objects leaving the Solar System List of potentially habitable exoplanets Space travel in science fiction References Further reading External links Leonard David – Reaching for interstellar flight (2003) – MSNBC (MSNBC Webpage) NASA Breakthrough Propulsion Physics Program (NASA Webpage) Bibliography of Interstellar Flight (source list) DARPA seeks help for interstellar starship How to build a starship – and why we should start thinking about it now (Article from The Conversation, 2016)
Interstellar travel
[ "Astronomy" ]
8,569
[ "Astronomical hypotheses", "Interstellar travel" ]
14,865
https://en.wikipedia.org/wiki/Isotropy
In physics and geometry, isotropy is uniformity in all orientations. Precise definitions depend on the subject area. Exceptions, or inequalities, are frequently indicated by the prefix a- or an-, hence anisotropy. Anisotropy is also used to describe situations where properties vary systematically, dependent on direction. Isotropic radiation has the same intensity regardless of the direction of measurement, and an isotropic field exerts the same action regardless of how the test particle is oriented. Mathematics Within mathematics, isotropy has a few different meanings: Isotropic manifolds A manifold is isotropic if the geometry on the manifold is the same regardless of direction. A similar concept is homogeneity. Isotropic quadratic form A quadratic form q is said to be isotropic if there is a non-zero vector v such that q(v) = 0; such a v is an isotropic vector or null vector. In complex geometry, a line through the origin in the direction of an isotropic vector is an isotropic line. Isotropic coordinates Isotropic coordinates are coordinates on an isotropic chart for Lorentzian manifolds. Isotropy group An isotropy group is the group of isomorphisms from any object to itself in a groupoid. An isotropy representation is a representation of an isotropy group. Isotropic position A probability distribution over a vector space is in isotropic position if its covariance matrix is the identity. Isotropic vector field The vector field generated by a point source is said to be isotropic if, for any spherical neighborhood centered at the point source, the magnitude of the vector determined by any point on the sphere is invariant under a change in direction. For example, starlight appears to be isotropic. Physics Quantum mechanics or particle physics When a spinless particle (or even an unpolarized particle with spin) decays, the resulting decay distribution must be isotropic in the rest frame of the decaying particle, regardless of the detailed physics of the decay. This follows from rotational invariance of the Hamiltonian, which in turn is guaranteed for a spherically symmetric potential. Gases The kinetic theory of gases also exemplifies isotropy. It is assumed that the molecules move in random directions and, as a consequence, there is an equal probability of a molecule moving in any direction. Thus when there are many molecules in the gas, with high probability there will be very similar numbers moving in any one direction as in any other, demonstrating approximate isotropy. Fluid dynamics Fluid flow is isotropic if there is no directional preference (e.g. in fully developed 3D turbulence). An example of anisotropy is in flows with a background density, as gravity works in only one direction. The apparent surface separating two differing isotropic fluids would be referred to as an isotrope. Thermal expansion A solid is said to be isotropic if the expansion of the solid is equal in all directions when thermal energy is provided to the solid. Electromagnetics An isotropic medium is one such that the permittivity, ε, and permeability, μ, of the medium are uniform in all directions of the medium, the simplest instance being free space. Optics Optical isotropy means having the same optical properties in all directions. The individual reflectance or transmittance of the domains is averaged for micro-heterogeneous samples if the macroscopic reflectance or transmittance is to be calculated.
This can be verified simply by investigating, for example, a polycrystalline material under a polarizing microscope with the polarizers crossed: if the crystallites are larger than the resolution limit, they will be visible. Cosmology The cosmological principle, which underpins much of modern cosmology (including the Big Bang theory of the evolution of the observable universe), assumes that the universe is both isotropic and homogeneous, meaning that the universe has no preferred location (is the same everywhere) and has no preferred direction. Observations made in 2006 suggest that, on distance-scales much larger than galaxies, galaxy clusters are "Great" features, but small compared to so-called multiverse scenarios. Materials science In the study of mechanical properties of materials, "isotropic" means having identical values of a property in all directions. This definition is also used in geology and mineralogy. Glass and metals are examples of isotropic materials. Common anisotropic materials include wood (because its material properties are different parallel to and perpendicular to the grain) and layered rocks such as slate. Isotropic materials are useful since they are easier to shape, and their behavior is easier to predict. Anisotropic materials can be tailored to the forces an object is expected to experience. For example, the fibers in carbon fiber materials and rebars in reinforced concrete are oriented to withstand tension. Microfabrication In industrial processes, such as etching steps, "isotropic" means that the process proceeds at the same rate, regardless of direction. Simple chemical reaction and removal of a substrate by an acid, a solvent or a reactive gas is often very close to isotropic. Conversely, "anisotropic" means that the attack rate of the substrate is higher in a certain direction. Anisotropic etch processes, where vertical etch-rate is high but lateral etch-rate is very small, are essential processes in microfabrication of integrated circuits and MEMS devices. Antenna (radio) An isotropic antenna is an idealized "radiating element" used as a reference; an antenna that broadcasts power equally (calculated by the Poynting vector) in all directions. The gain of an arbitrary antenna is usually reported in decibels relative to an isotropic antenna, and is expressed as dBi or dB(i). Cell biology In muscle cells (a.k.a. muscle fibers), the term "isotropic" refers to the light bands (I bands) that contribute to the striated pattern of the cells. Pharmacology While it is well established that the skin provides an ideal site for the administration of local and systemic drugs, it presents a formidable barrier to the permeation of most substances. Recently, isotropic formulations have been used extensively in dermatology for drug delivery. Computer science Imaging A volume such as a computed tomography scan is said to have isotropic voxel spacing when the space between any two adjacent voxels is the same along each axis x, y, z. E.g., voxel spacing is isotropic if the center of voxel (i, j, k) is 1.38 mm from that of (i+1, j, k), 1.38 mm from that of (i, j+1, k) and 1.38 mm from that of (i, j, k+1) for all indices i, j, k. Other sciences Economics and geography An isotropic region is a region that has the same properties everywhere. Such a region is a construction needed in many types of models. See also Rotational invariance Isotropic bands Isotropic coordinates Transverse isotropy Bi-isotropic Symmetry References Orientation (geometry) Symmetry
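The voxel-spacing definition given under Imaging above translates directly into code. The following is a minimal, illustrative sketch; the function name and tolerance are assumptions for this example and not part of any imaging standard.

    # Illustrative sketch of the voxel-spacing definition under "Imaging": a
    # volume is isotropic when the spacing between adjacent voxel centres is the
    # same along each of the x, y, and z axes.
    def has_isotropic_spacing(spacing_xyz, tolerance=1e-6) -> bool:
        """spacing_xyz: (dx, dy, dz) distances between adjacent voxel centres."""
        dx, dy, dz = spacing_xyz
        return abs(dx - dy) <= tolerance and abs(dx - dz) <= tolerance

    print(has_isotropic_spacing((1.38, 1.38, 1.38)))   # True  (isotropic voxels)
    print(has_isotropic_spacing((0.70, 0.70, 3.00)))   # False (anisotropic slices)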
Isotropy
[ "Physics", "Mathematics" ]
1,497
[ "Topology", "Space", "Geometry", "Spacetime", "Orientation (geometry)", "Symmetry" ]
14,870
https://en.wikipedia.org/wiki/International%20Union%20of%20Pure%20and%20Applied%20Chemistry
The International Union of Pure and Applied Chemistry (IUPAC ) is an international federation of National Adhering Organizations working for the advancement of the chemical sciences, especially by developing nomenclature and terminology. It is a member of the International Science Council (ISC). IUPAC is registered in Zürich, Switzerland, and the administrative office, known as the "IUPAC Secretariat", is in Research Triangle Park, North Carolina, United States. IUPAC's executive director heads this administrative office, currently Greta Heydenrych. IUPAC was established in 1919 as the successor of the International Congress of Applied Chemistry for the advancement of chemistry. Its members, the National Adhering Organizations, can be national chemistry societies, national academies of sciences, or other bodies representing chemists. There are fifty-four National Adhering Organizations and three Associate National Adhering Organizations. IUPAC's Inter-divisional Committee on Nomenclature and Symbols (IUPAC nomenclature) is the recognized world authority in developing standards for naming the chemical elements and compounds. Since its creation, IUPAC has been run by many different committees with different responsibilities. These committees run different projects which include standardizing nomenclature, finding ways to bring chemistry to the world, and publishing works. IUPAC is best known for its works standardizing nomenclature in chemistry, but IUPAC has publications in many science fields including chemistry, biology, and physics. Some important work IUPAC has done in these fields includes standardizing nucleotide base sequence code names; publishing books for environmental scientists, chemists, and physicists; and improving education in science. IUPAC is also known for standardizing the atomic weights of the elements through one of its oldest standing committees, the Commission on Isotopic Abundances and Atomic Weights (CIAAW). Creation and history The need for an international standard for chemistry was first addressed in 1860 by a committee headed by German scientist Friedrich August Kekulé von Stradonitz. This committee was the first international conference to create an international naming system for organic compounds. The ideas that were formulated at that conference evolved into the official IUPAC nomenclature of organic chemistry. IUPAC stands as a legacy of this meeting, making it one of the most important historical international collaborations of chemistry societies. Since this time, IUPAC has been the official organization held with the responsibility of updating and maintaining official organic nomenclature. IUPAC as such was established in 1919. One notable country excluded from this early IUPAC is Germany. Germany's exclusion was a result of prejudice towards Germans by the Allied powers after World War I. Germany was finally admitted into IUPAC in 1929. However, Nazi Germany was removed from IUPAC during World War II. During World War II, IUPAC was affiliated with the Allied powers, but had little involvement during the war effort itself. After the war, East and West Germany were readmitted to IUPAC in 1973. Since World War II, IUPAC has been focused on standardizing nomenclature and methods in science without interruption. In 2016, IUPAC denounced the use of chlorine as a chemical weapon. 
The organization pointed out its concerns in a letter to Ahmet Üzümcü, the director of the Organisation for the Prohibition of Chemical Weapons (OPCW), with regard to the use of chlorine as a weapon in Syria, among other locations. The letter stated, "Our organizations deplore the use of chlorine in this manner. The indiscriminate attacks, possibly carried out by a member state of the Chemical Weapons Convention (CWC), are of concern to chemical scientists and engineers around the globe and we stand ready to support your mission of implementing the CWC." According to the CWC, "the use, stockpiling, distribution, development or storage of any chemical weapons is forbidden by any of the 192 state party signatories." Committees and governance IUPAC is governed by several committees that all have different responsibilities. The committees are as follows: Bureau, CHEMRAWN (Chem Research Applied to World Needs) Committee, Committee on Chemistry Education, Committee on Chemistry and Industry, Committee on Printed and Electronic Publications, Evaluation Committee, Executive Committee, Finance Committee, Interdivisional Committee on Terminology, Nomenclature and Symbols, Project Committee, and Pure and Applied Chemistry Editorial Advisory Board. Each committee is made up of members of different National Adhering Organizations from different countries. The committees operate within a steering hierarchy. All committees have an allotted budget to which they must adhere. Any committee may start a project. If a project's spending becomes too much for a committee to continue funding, it must take the issue to the Project Committee. The Project Committee either increases the budget or decides on an external funding plan. The Bureau and Executive Committee oversee operations of the other committees. Nomenclature Scientists devised a systematic method for naming organic compounds based on their structures, and these naming rules were formalized by IUPAC. Basic spellings IUPAC establishes rules for harmonized spelling of some chemicals to reduce variation among different local English-language variants. For example, it recommends "aluminium" rather than "aluminum", "sulfur" rather than "sulphur", and "caesium" rather than "cesium". Organic nomenclature IUPAC organic nomenclature has three basic parts: the substituents, the carbon chain length, and the chemical affix. The substituents are any functional groups attached to the main carbon chain. The main carbon chain is the longest possible continuous chain. The chemical affix denotes what type of molecule it is. For example, the ending ane denotes a single-bonded carbon chain, as in "hexane" (C6H14). Another example of IUPAC organic nomenclature is cyclohexanol: The prefix for a ring compound is cyclo. The root name for a six-carbon chain is hex. The chemical ending for a single-bonded carbon chain is ane. The chemical ending for an alcohol is ol. The two chemical endings are combined into the ending anol, indicating a single-bonded carbon chain with an alcohol attached to it. Inorganic nomenclature Basic IUPAC inorganic nomenclature has two main parts: the cation and the anion. The cation is the name for the positively charged ion and the anion is the name for the negatively charged ion. An example of IUPAC nomenclature of inorganic chemistry is potassium chlorate (KClO3): "Potassium" is the cation name. "Chlorate" is the anion name.
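The prefix/root/affix pattern described above for simple organic names can be illustrated with a toy sketch. It is deliberately simplified, covering only unbranched chains and the cyclo prefix, and is not an official IUPAC naming tool; the dictionaries and function name are assumptions chosen for illustration.

    # Toy illustration (simplified; not an official IUPAC tool) of the naming
    # pattern described above: a chain-length root plus a chemical affix, with
    # an optional "cyclo" prefix for rings, for simple unbranched cases only.
    CHAIN_ROOTS = {1: "meth", 2: "eth", 3: "prop", 4: "but",
                   5: "pent", 6: "hex", 7: "hept", 8: "oct"}
    AFFIXES = {"alkane": "ane", "alcohol": "anol"}   # single-bonded chain / with -OH

    def simple_name(chain_length: int, kind: str = "alkane", ring: bool = False) -> str:
        root = CHAIN_ROOTS[chain_length]
        prefix = "cyclo" if ring else ""
        return prefix + root + AFFIXES[kind]

    print(simple_name(6))                          # hexane
    print(simple_name(6, "alcohol", ring=True))    # cyclohexanol

Run as shown, the sketch reproduces the two worked examples in the text, hexane and cyclohexanol.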
Amino acid and nucleotide base codes IUPAC also has a system for giving codes to identify amino acids and nucleotide bases. IUPAC needed a coding system that represented long sequences of amino acids. This allows such sequences to be compared in the search for homologies. The codes can consist of either a one-letter code or a three-letter code. These codes make it easier and shorter to write down the amino acid sequences that make up proteins. The nucleotide bases are made up of purines (adenine and guanine) and pyrimidines (cytosine and thymine or uracil). These nucleotide bases make up DNA and RNA. The nucleotide base codes make the written representation of an organism's genome much more compact and easier to read. Codes exist for 24 amino acids plus three special codes. Publications Non-series books Experimental Thermodynamics book series The Experimental Thermodynamics book series covers many topics in the field of thermodynamics. Series of books on analytical and physical chemistry of environmental systems Colored cover book and website series (nomenclature) IUPAC color-codes its books in order to make each publication distinguishable. International Year of Chemistry IUPAC and UNESCO were the lead organizations coordinating events for the International Year of Chemistry, which took place in 2011. The International Year of Chemistry was originally proposed by IUPAC at the general assembly in Turin, Italy. This motion was adopted by UNESCO at a meeting in 2008. The main objectives of the International Year of Chemistry were to increase public appreciation of chemistry and gain more interest in the world of chemistry. The event was also held to encourage young people to get involved and contribute to chemistry. A further reason for holding the event was to honour the ways in which chemistry has improved everyone's way of life. IUPAC Presidents IUPAC Presidents are elected by the IUPAC Council during the General Assembly; the office has existed since the Union's inception in 1919. See also CAS registry number Chemical nomenclature Commission on Isotopic Abundances and Atomic Weights European Association for Chemical and Molecular Sciences Institute for Reference Materials and Measurements (IRMM) International Chemical Identifier (InChI) International Union of Biochemistry and Molecular Biology (IUBMB) International Union of Pure and Applied Physics (IUPAP) List of chemical elements naming controversies National Institute of Standards and Technology (NIST) Simplified molecular-input line-entry system (SMILES) References External links Chemical nomenclature Chemistry organizations International scientific organizations Members of the International Council for Science Organisations based in Zurich Organizations based in North Carolina Scientific organizations based in the United States Scientific organisations based in Switzerland Scientific organizations established in 1919 Standards organisations in Switzerland Members of the International Science Council
International Union of Pure and Applied Chemistry
[ "Chemistry" ]
1,924
[ "nan" ]
14,878
https://en.wikipedia.org/wiki/International%20Astronomical%20Union
The International Astronomical Union (IAU; French: Union astronomique internationale, UAI) is an international non-governmental organization (INGO) with the objective of advancing astronomy in all aspects, including promoting astronomical research, outreach, education, and development through global cooperation. It was founded on 28 July 1919 in Brussels, Belgium, and is based in Paris, France. The IAU is composed of individual members, who include both professional astronomers and junior scientists, and national members, such as professional associations, national societies, or academic institutions. Individual members are organised into divisions, committees, and working groups centered on particular subdisciplines, subjects, or initiatives. At its most recent count, the Union had 85 national members and 12,734 individual members, spanning 90 countries and territories. Among the key activities of the IAU is serving as a forum for scientific conferences. It sponsors nine annual symposia and holds a triennial General Assembly that sets policy and includes various scientific meetings. The Union is best known for being the leading authority in assigning official names and designations to astronomical objects, and for setting uniform definitions for astronomical principles. It also coordinates with national and international partners, such as UNESCO, to fulfill its mission. The IAU is a member of the International Science Council, which is composed of international scholarly and scientific institutions and national academies of sciences. Function The International Astronomical Union is an international association of professional astronomers, at the PhD level and beyond, active in professional research and education in astronomy. Among other activities, it acts as the recognized authority for assigning designations and names to celestial bodies (stars, planets, asteroids, etc.) and any surface features on them. The IAU is a member of the International Science Council. Its main objective is to promote and safeguard the science of astronomy in all its aspects through international cooperation. The IAU maintains friendly relations with organizations that include amateur astronomers in their membership. The IAU has its head office in the 14th arrondissement of Paris. The organisation has many working groups, for example the Working Group for Planetary System Nomenclature (WGPSN), which maintains the astronomical naming conventions and planetary nomenclature for planetary bodies, and the Working Group on Star Names (WGSN), which catalogues and standardizes proper names for stars. The IAU is also responsible for the system of astronomical telegrams, which are produced and distributed on its behalf by the Central Bureau for Astronomical Telegrams. The Minor Planet Center also operates under the IAU, and is a "clearinghouse" for all non-planetary or non-moon bodies in the Solar System. History The IAU was founded on 28 July 1919, at the Constitutive Assembly of the International Research Council (now the International Science Council) held in Brussels, Belgium. Two subsidiaries of the IAU were also created at this assembly: the International Time Commission seated at the International Time Bureau in Paris, France, and the International Central Bureau of Astronomical Telegrams initially seated in Copenhagen, Denmark. The seven initial member states were Belgium, Canada, France, Great Britain, Greece, Japan, and the United States, soon to be followed by Italy and Mexico.
The first executive committee consisted of Benjamin Baillaud (President, France), Alfred Fowler (General Secretary, UK), and four vice presidents: William Campbell (US), Frank Dyson (UK), Georges Lecointe (Belgium), and Annibale Riccò (Italy). Thirty-two Commissions (referred to initially as Standing Committees) were appointed at the Brussels meeting and focused on topics ranging from relativity to minor planets. The reports of these 32 Commissions formed the main substance of the first General Assembly, which took place in Rome, Italy, 2–10 May 1922. By the end of the first General Assembly, ten additional nations (Australia, Brazil, Czechoslovakia, Denmark, the Netherlands, Norway, Poland, Romania, South Africa, and Spain) had joined the Union, bringing the total membership to 19 countries. Although the Union was officially formed eight months after the end of World War I, international collaboration in astronomy had been strong in the pre-war era (e.g., the Astronomische Gesellschaft Katalog projects since 1868, the Astrographic Catalogue since 1887, and the International Union for Solar Research since 1904). The first 50 years of the Union's history are well documented. Subsequent history is recorded in the form of reminiscences of past IAU Presidents and General Secretaries. Twelve of the fourteen past General Secretaries in the period 1964–2006 contributed their recollections of the Union's history in IAU Information Bulletin No. 100. Six past IAU Presidents in the period 1976–2003 also contributed their recollections in IAU Information Bulletin No. 104. In 2015 and 2019, the Union held the NameExoWorlds contests. Starting in 2024, the Union, in partnership with the United Nations, is expected to play a critical role in developing the legislation and framework for lunar industrialization. Composition As of 1 August 2019, the IAU had a total of 13,701 individual members, who are professional astronomers from 102 countries worldwide; 81.7% of individual members were male, while 18.3% were female. Membership also includes 82 national members, professional astronomical communities representing their country's affiliation with the IAU. National members include the Australian Academy of Science, the Chinese Astronomical Society, the French Academy of Sciences, the Indian National Science Academy, the National Academies (United States), the National Research Foundation of South Africa, the National Scientific and Technical Research Council (Argentina), the Council of German Observatories, the Royal Astronomical Society (United Kingdom), the Royal Astronomical Society of New Zealand, the Royal Swedish Academy of Sciences, the Russian Academy of Sciences, and the Science Council of Japan, among many others. The sovereign body of the IAU is its General Assembly, which comprises all members. The Assembly determines IAU policy, approves the Statutes and By-Laws of the Union (and amendments proposed thereto), and elects various committees. The right to vote on matters brought before the Assembly varies according to the type of business under discussion. The Statutes consider such business to be divided into two categories: issues of a "primarily scientific nature" (as determined by the Executive Committee), upon which voting is restricted to individual members, and all other matters (such as Statute revision and procedural questions), upon which voting is restricted to the representatives of national members. 
On budget matters (which fall into the second category), votes are weighted according to the relative subscription levels of the national members. A second category vote requires a turnout of at least two-thirds of national members to be valid. An absolute majority is sufficient for approval in any vote, except for Statute revision, which requires a two-thirds majority. An equality of votes is resolved by the vote of the President of the Union. List of national members National members are grouped by region (Africa, Asia, Europe, North America, Oceania, and South America); several currently hold suspended, interim, or observer status, and some past national members have been terminated. General Assemblies Since 1922, the IAU General Assembly has met every three years, except for the period between 1938 and 1948, due to World War II. After a Polish request in 1967, and by a controversial decision of the then President of the IAU, an Extraordinary IAU General Assembly was held in September 1973 in Warsaw, Poland, to commemorate the 500th anniversary of the birth of Nicolaus Copernicus, soon after the regular 1973 GA had been held in Sydney. List of the presidents of the IAU Commission 46: Education in astronomy Commission 46 is a Committee of the Executive Committee of the IAU, playing a special role in the discussion of astronomy development with governments and scientific academies. The IAU is affiliated with the International Council of Scientific Unions (ICSU), a non-governmental organization representing a global membership that includes both national scientific bodies and international scientific unions; these bodies often encourage countries to become members of the IAU. The Commission further seeks the development and improvement of astronomical education. Part of Commission 46 is the Teaching Astronomy for Development (TAD) program, which operates in countries where there is currently very little astronomical education. Another program, the Galileo Teacher Training Program (GTTP), is a project of the International Year of Astronomy 2009; together with Hands-On Universe, it concentrates resources on educational activities for children and schools designed to advance sustainable global development. GTTP is also concerned with the effective use and transfer of astronomy education tools and resources into classroom science curricula. A strategic plan for the period 2010–2020 has been published. Publications In 2004 the IAU contracted with Cambridge University Press to publish the Proceedings of the International Astronomical Union. In 2007, the Communicating Astronomy with the Public Journal Working Group prepared a study assessing the feasibility of the Communicating Astronomy with the Public Journal (CAP Journal). See also List of astronomy acronyms Astronomical naming conventions List of proper names of stars Planetary nomenclature 
International Astronomical Union
[ "Astronomy" ]
1,967
[ "Astronomy organizations" ]
14,884
https://en.wikipedia.org/wiki/Intermediate%20value%20theorem
In mathematical analysis, the intermediate value theorem states that if $f$ is a continuous function whose domain contains the interval $[a,b]$, then it takes on any given value between $f(a)$ and $f(b)$ at some point within the interval. This has two important corollaries: If a continuous function has values of opposite sign inside an interval, then it has a root in that interval (Bolzano's theorem). The image of a continuous function over an interval is itself an interval. Motivation This captures an intuitive property of continuous functions over the real numbers: given $f$ continuous on $[a,b]$ with known values $f(a)$ and $f(b)$, the graph of $f$ must pass through every horizontal line $y=u$ with $u$ between $f(a)$ and $f(b)$ while $x$ moves from $a$ to $b$. It represents the idea that the graph of a continuous function on a closed interval can be drawn without lifting a pencil from the paper. Theorem The intermediate value theorem states the following: Consider an interval $I=[a,b]$ of real numbers and a continuous function $f\colon I\to\mathbb{R}$. Then Version I. if $u$ is a number between $f(a)$ and $f(b)$, that is, $\min(f(a),f(b))<u<\max(f(a),f(b))$, then there is a $c\in(a,b)$ such that $f(c)=u$. Version II. the image set $f(I)$ is also a closed interval, and it contains $[\min(f(a),f(b)),\max(f(a),f(b))]$. Remark: Version II states that the set of function values has no gap. For any two function values $c,d\in f(I)$ with $c<d$, all points in the interval $[c,d]$ are also function values: $[c,d]\subseteq f(I)$. A subset of the real numbers with no internal gap is an interval. Version I is naturally contained in Version II. Relation to completeness The theorem depends on, and is equivalent to, the completeness of the real numbers. The intermediate value theorem does not apply to the rational numbers $\mathbb{Q}$ because gaps exist between rational numbers; irrational numbers fill those gaps. For example, the function $f(x)=x^2$ for $x\in\mathbb{Q}$ satisfies $f(0)=0$ and $f(2)=4$. However, there is no rational number $x$ such that $f(x)=2$, because $\sqrt{2}$ is an irrational number. Despite the above, there is a version of the intermediate value theorem for polynomials over a real closed field; see the Weierstrass Nullstellensatz. Proof Proof version A The theorem may be proven as a consequence of the completeness property of the real numbers as follows: We shall prove the first case, $f(a)<u<f(b)$. The second case is similar. Let $S$ be the set of all $x\in[a,b]$ such that $f(x)<u$. Then $S$ is non-empty since $a$ is an element of $S$. Since $S$ is non-empty and bounded above by $b$, by completeness, the supremum $c=\sup S$ exists. That is, $c$ is the smallest number that is greater than or equal to every member of $S$. Note that, due to the continuity of $f$ at $a$, we can keep $f(x)$ within any $\varepsilon>0$ of $f(a)$ by keeping $x$ sufficiently close to $a$. Since $f(a)<u$ is a strict inequality, consider the implication when $\varepsilon$ is the distance between $u$ and $f(a)$. No $x$ sufficiently close to $a$ can then make $f(x)$ greater than or equal to $u$, which means there are values greater than $a$ in $S$. A more detailed proof goes like this: Choose $\varepsilon=u-f(a)>0$. Then there exists $\delta_1>0$ such that for all $x\in[a,b]$, $|x-a|<\delta_1$ implies $|f(x)-f(a)|<\varepsilon$. Consider the interval $I_1=[a,\min(a+\delta_1,b))$. Notice that $I_1\subseteq[a,b]$ and every $x\in I_1$ satisfies the condition $|x-a|<\delta_1$. Therefore for every $x\in I_1$ we have $f(x)<u$. Hence $c$ cannot be $a$. Likewise, due to the continuity of $f$ at $b$, we can keep $f(x)$ within any $\varepsilon>0$ of $f(b)$ by keeping $x$ sufficiently close to $b$. Since $u<f(b)$ is a strict inequality, consider the similar implication when $\varepsilon$ is the distance between $u$ and $f(b)$. Every $x$ sufficiently close to $b$ must then make $f(x)$ greater than $u$, which means there are values smaller than $b$ that are upper bounds of $S$. A more detailed proof goes like this: Choose $\varepsilon=f(b)-u>0$. Then there exists $\delta_2>0$ such that for all $x\in[a,b]$, $|x-b|<\delta_2$ implies $|f(x)-f(b)|<\varepsilon$. Consider the interval $I_2=(\max(a,b-\delta_2),b]$. Notice that $I_2\subseteq[a,b]$ and every $x\in I_2$ satisfies the condition $|x-b|<\delta_2$. Therefore for every $x\in I_2$ we have $f(x)>u$. Hence $c$ cannot be $b$. With $c\neq a$ and $c\neq b$, it must be the case that $c\in(a,b)$. Now we claim that $f(c)=u$. Fix some $\varepsilon>0$. Since $f$ is continuous at $c$, there exists $\delta_1>0$ such that for all $x\in[a,b]$, $|x-c|<\delta_1$ implies $|f(x)-f(c)|<\varepsilon$. Since $c\in(a,b)$ and $(a,b)$ is open, there exists $\delta_2>0$ such that $(c-\delta_2,c+\delta_2)\subseteq(a,b)$. Set $\delta=\min(\delta_1,\delta_2)$. Then we have $|f(x)-f(c)|<\varepsilon$ for all $x\in(c-\delta,c+\delta)$. 
By the properties of the supremum, there exists some $a^*\in(c-\delta,c]$ that is contained in $S$, and so $f(c)<f(a^*)+\varepsilon<u+\varepsilon$. Picking $a^{**}\in(c,c+\delta)$, we know that $a^{**}\notin S$ because $c$ is the supremum of $S$. This means that $f(c)>f(a^{**})-\varepsilon\geq u-\varepsilon$. Both inequalities $u-\varepsilon<f(c)<u+\varepsilon$ are valid for all $\varepsilon>0$, from which we deduce $f(c)=u$ as the only possible value, as stated. Proof version B We will only prove the case of $f(a)<u<f(b)$, as the case $f(a)>u>f(b)$ is similar. Define $g(x)=f(x)-u$, which is equivalent to $f(x)=g(x)+u$ and lets us rewrite $f(a)<u<f(b)$ as $g(a)<0<g(b)$; we then have to prove that $g(c)=0$ for some $c\in[a,b]$, which is more intuitive. We further define the set $S=\{x\in[a,b]:g(x)\leq 0\}$. Because $g(a)<0$ we know that $a\in S$, so that $S$ is not empty. Moreover, as $S\subseteq[a,b]$, we know that $S$ is bounded and non-empty, so by completeness the supremum $c=\sup S$ exists. There are 3 cases for the value of $g(c)$, those being $g(c)<0$, $g(c)>0$ and $g(c)=0$. For contradiction, let us assume that $g(c)<0$. Then, by the definition of continuity, for $\varepsilon=-g(c)>0$ there exists a $\delta>0$ such that $|x-c|<\delta$ implies $|g(x)-g(c)|<-g(c)$, which is equivalent to $g(x)<0$. Since $g(b)>0$ we have $c<b$, so if we choose $x^*=c+\tfrac{1}{2}\min(\delta,b-c)$, then $c<x^*\leq b$ and $g(x^*)<0$, so $x^*\in S$. However, $x^*>c$, contradicting the upper bound property of the least upper bound $c$, so $g(c)<0$ is impossible. Assume then that $g(c)>0$. We similarly choose $\varepsilon=g(c)>0$ and know that there exists a $\delta>0$ such that $|x-c|<\delta$ implies $|g(x)-g(c)|<g(c)$. We can rewrite this as $g(x)>0$ for all $x$ with $|x-c|<\delta$, which implies that no point of $[a,b]$ within $\delta$ of $c$ belongs to $S$. It follows that $c-\delta$ is an upper bound for $S$. However, $c-\delta<c$, which contradicts the least property of the least upper bound $c$, which means that $g(c)>0$ is impossible. If we combine both results, we get that $g(c)=0$, or equivalently $f(c)=u$, is the only remaining possibility. Remark: The intermediate value theorem can also be proved using the methods of non-standard analysis, which places "intuitive" arguments involving infinitesimals on a rigorous footing. History A form of the theorem was postulated as early as the 5th century BCE, in the work of Bryson of Heraclea on squaring the circle. Bryson argued that, as circles larger than and smaller than a given square both exist, there must exist a circle of equal area. The theorem was first proved by Bernard Bolzano in 1817. Bolzano used the following formulation of the theorem: Let $f,\varphi$ be continuous functions on the interval between $\alpha$ and $\beta$ such that $f(\alpha)<\varphi(\alpha)$ and $f(\beta)>\varphi(\beta)$. Then there is an $x$ between $\alpha$ and $\beta$ such that $f(x)=\varphi(x)$. The equivalence between this formulation and the modern one can be shown by setting $\varphi$ to the appropriate constant function. Augustin-Louis Cauchy provided the modern formulation and a proof in 1821. Both were inspired by the goal of formalizing the analysis of functions and the work of Joseph-Louis Lagrange. The idea that continuous functions possess the intermediate value property has an earlier origin. Simon Stevin proved the intermediate value theorem for polynomials (using a cubic as an example) by providing an algorithm for constructing the decimal expansion of the solution: the algorithm iteratively subdivides the interval into 10 parts, producing an additional decimal digit at each step of the iteration. Before the formal definition of continuity was given, the intermediate value property was given as part of the definition of a continuous function. Proponents include Louis Arbogast, who assumed the functions to have no jumps, to satisfy the intermediate value property, and to have increments whose sizes corresponded to the sizes of the increments of the variable. Earlier authors held the result to be intuitively obvious and requiring no proof. The insight of Bolzano and Cauchy was to define a general notion of continuity (in terms of infinitesimals in Cauchy's case and using real inequalities in Bolzano's case), and to provide a proof based on such definitions. 
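Stevin's digit-by-digit procedure and Bolzano's sign-change criterion together suggest an algorithm. The following is a minimal Python sketch using binary rather than decimal subdivision (the bisection method); the polynomial and bracket in the usage line are an arbitrary illustration, not an example taken from the sources above:

```python
def bisect(f, a, b, tol=1e-12):
    """Approximate a root of a continuous f on [a, b], given that
    f(a) and f(b) have opposite signs (Bolzano's theorem then
    guarantees a root somewhere in the interval)."""
    fa, fb = f(a), f(b)
    if fa == 0.0:
        return a
    if fb == 0.0:
        return b
    assert fa * fb < 0, "f(a) and f(b) must have opposite signs"
    while b - a > tol:
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0.0:
            return m
        if fa * fm < 0:        # sign change on [a, m]: keep the left half
            b, fb = m, fm
        else:                  # sign change on [m, b]: keep the right half
            a, fa = m, fm
    return (a + b) / 2.0

print(bisect(lambda x: x**3 - 2*x - 5, 2.0, 3.0))  # ~2.0945514815
```

Each halving is the binary analogue of producing one more of Stevin's digits; subdividing into ten parts per step, as Stevin did, would yield exactly one decimal digit per iteration.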
Converse is false A Darboux function is a real-valued function $f$ that has the "intermediate value property": for any two values $a$ and $b$ in the domain of $f$, and any $y$ between $f(a)$ and $f(b)$, there is some $c$ between $a$ and $b$ with $f(c)=y$. The intermediate value theorem says that every continuous function is a Darboux function. However, not every Darboux function is continuous; i.e., the converse of the intermediate value theorem is false. As an example, take the function $f$ defined by $f(x)=\sin(1/x)$ for $x>0$ and $f(0)=0$. This function is not continuous at $x=0$ because the limit of $f(x)$ as $x$ tends to 0 does not exist; yet the function has the intermediate value property. Another, more complicated example is given by the Conway base 13 function. In fact, Darboux's theorem states that all functions that result from the differentiation of some other function on some interval have the intermediate value property (even though they need not be continuous). Historically, this intermediate value property has been suggested as a definition for continuity of real-valued functions; this definition was not adopted. Generalizations Multi-dimensional spaces The Poincaré–Miranda theorem is a generalization of the intermediate value theorem from a (one-dimensional) interval to a (two-dimensional) rectangle, or more generally, to an $n$-dimensional cube. Vrahatis presents a similar generalization to triangles, or more generally, $n$-dimensional simplices. Let $D^n$ be an $n$-dimensional simplex with $n+1$ vertices denoted by $v_0,\ldots,v_n$. Let $F=(f_1,\ldots,f_n)$ be a continuous function from $D^n$ to $\mathbb{R}^n$ that never equals 0 on the boundary of $D^n$. Suppose $F$ satisfies the following conditions: For all $i$ in $1,\ldots,n$, the sign of $f_i(v_i)$ is opposite to the sign of $f_i(x)$ for all points $x$ on the face opposite to $v_i$; the sign-vector of $f_1,\ldots,f_n$ on $v_0$ is not equal to the sign-vector of $f_1,\ldots,f_n$ on all points on the face opposite to $v_0$. Then there is a point $z$ in the interior of $D^n$ on which $F(z)=(0,\ldots,0)$. It is possible to normalize the $f_i$ such that $f_i(v_i)>0$ for all $i$; then the conditions become simpler: For all $i$ in $1,\ldots,n$, $f_i(v_i)>0$, and $f_i(x)<0$ for all points $x$ on the face opposite to $v_i$; in particular, $f_i(v_0)<0$. For all points $x$ on the face opposite to $v_0$, $f_i(x)>0$ for at least one $i$ in $1,\ldots,n$. The theorem can be proved based on the Knaster–Kuratowski–Mazurkiewicz lemma. It can be used for approximations of fixed points and zeros. General metric and topological spaces The intermediate value theorem is closely linked to the topological notion of connectedness and follows from the basic properties of connected sets in metric spaces and connected subsets of $\mathbb{R}$ in particular: If $X$ and $Y$ are metric spaces, $f\colon X\to Y$ is a continuous map, and $E\subseteq X$ is a connected subset, then $f(E)$ is connected. (*) A subset $E\subseteq\mathbb{R}$ is connected if and only if it satisfies the following property: $x,y\in E$ and $x<r<y$ imply $r\in E$. (**) In fact, connectedness is a topological property and generalizes to topological spaces: if $X$ and $Y$ are topological spaces, $f\colon X\to Y$ is a continuous map, and $X$ is a connected space, then $f(X)$ is connected. The preservation of connectedness under continuous maps can be thought of as a generalization of the intermediate value theorem, a property of continuous, real-valued functions of a real variable, to continuous functions in general spaces. 
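To make the $\sin(1/x)$ example from the "Converse is false" discussion above concrete: for any target $u\in[-1,1]$ and any $\varepsilon>0$, one can solve $1/x=\arcsin(u)+2\pi k$ for a large enough integer $k$ to exhibit an explicit $x\in(0,\varepsilon)$ with $\sin(1/x)=u$. A small Python sketch (the helper name is illustrative, not standard):

```python
import math

def darboux_point(u, eps):
    """Return x with 0 < x < eps and sin(1/x) == u (up to rounding),
    for any target u in [-1, 1]: solve 1/x = arcsin(u) + 2*pi*k."""
    t = math.asin(u)
    # Pick k large enough that t + 2*pi*k exceeds 1/eps, so x < eps.
    k = math.ceil((1.0 / eps - t) / (2.0 * math.pi)) + 1
    return 1.0 / (t + 2.0 * math.pi * k)

x = darboux_point(0.5, 1e-6)
print(0 < x < 1e-6, math.sin(1.0 / x))  # True 0.4999999...
```

Since $x$ can be made as small as desired, $f$ takes every value in $[-1,1]$ on every interval $(0,\varepsilon)$, even though $f$ is discontinuous at 0.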
Recall the first version of the intermediate value theorem, stated previously. The intermediate value theorem is an immediate consequence of these two properties of connectedness: by (*) the image $f([a,b])$ of the connected set $[a,b]$ is connected, and by (**) a connected subset of $\mathbb{R}$ is an interval, so $f([a,b])$ contains every value between $f(a)$ and $f(b)$. The intermediate value theorem generalizes in a natural way: Suppose that $X$ is a connected topological space and $(Y,<)$ is a totally ordered set equipped with the order topology, and let $f\colon X\to Y$ be a continuous map. If $a$ and $b$ are two points in $X$ and $u$ is a point in $Y$ lying between $f(a)$ and $f(b)$ with respect to $<$, then there exists $c$ in $X$ such that $f(c)=u$. The original theorem is recovered by noting that $\mathbb{R}$ is connected and that its natural topology is the order topology. The Brouwer fixed-point theorem is a related theorem that, in one dimension, gives a special case of the intermediate value theorem. In constructive mathematics In constructive mathematics, the intermediate value theorem is not true. Instead, one has to weaken the conclusion: Let $a$ and $b$ be real numbers and $f\colon[a,b]\to\mathbb{R}$ be a pointwise continuous function from the closed interval $[a,b]$ to the real line, and suppose that $f(a)<0$ and $f(b)>0$. Then for every positive number $\varepsilon>0$ there exists a point $x$ in the interval such that $|f(x)|<\varepsilon$. Practical applications A similar result is the Borsuk–Ulam theorem, which says that a continuous map from the $n$-sphere to Euclidean $n$-space will always map some pair of antipodal points to the same place. In general, for any continuous function whose domain is some closed convex shape and any point inside the shape (not necessarily its center), there exist two antipodal points with respect to the given point whose functional value is the same. The theorem also underpins the explanation of why rotating a wobbly table will bring it to stability (subject to certain easily met constraints).
Intermediate value theorem
[ "Mathematics" ]
2,672
[ "Theorems in mathematical analysis", "Theorems in calculus", "Calculus", "Theory of continuous functions", "Theorems in real analysis", "Topology", "Articles containing proofs" ]
14,895
https://en.wikipedia.org/wiki/Insulin
Insulin (from Latin insula, 'island') is a peptide hormone produced by beta cells of the pancreatic islets; in humans it is encoded by the insulin (INS) gene. It is the main anabolic hormone of the body. It regulates the metabolism of carbohydrates, fats, and protein by promoting the absorption of glucose from the blood into cells of the liver, fat, and skeletal muscles. In these tissues the absorbed glucose is converted into either glycogen, via glycogenesis, or fats (triglycerides), via lipogenesis; in the liver, glucose is converted into both. Glucose production and secretion by the liver are strongly inhibited by high concentrations of insulin in the blood. Circulating insulin also affects the synthesis of proteins in a wide variety of tissues. It is thus an anabolic hormone, promoting the conversion of small molecules in the blood into large molecules in the cells. Low insulin in the blood has the opposite effect, promoting widespread catabolism, especially of reserve body fat. Beta cells are sensitive to blood sugar levels so that they secrete insulin into the blood in response to high levels of glucose, and inhibit secretion of insulin when glucose levels are low. Insulin production is also regulated by glucose: high glucose promotes insulin production while low glucose levels lead to lower production. Insulin enhances glucose uptake and metabolism in the cells, thereby reducing blood sugar. Their neighboring alpha cells, by taking their cues from the beta cells, secrete glucagon into the blood in the opposite manner: increased secretion when blood glucose is low, and decreased secretion when glucose concentrations are high. Glucagon increases blood glucose by stimulating glycogenolysis and gluconeogenesis in the liver. The secretion of insulin and glucagon into the blood in response to the blood glucose concentration is the primary mechanism of glucose homeostasis. Decreased or absent insulin activity results in diabetes, a condition of high blood sugar level (hyperglycaemia). There are two types of the disease. In type 1 diabetes, the beta cells are destroyed by an autoimmune reaction so that insulin can no longer be synthesized or be secreted into the blood. In type 2 diabetes, the destruction of beta cells is less pronounced than in type 1, and is not due to an autoimmune process. Instead, there is an accumulation of amyloid in the pancreatic islets, which likely disrupts their anatomy and physiology. The pathogenesis of type 2 diabetes is not well understood, but a reduced population of islet beta-cells, reduced secretory function of the islet beta-cells that survive, and peripheral tissue insulin resistance are known to be involved. Type 2 diabetes is characterized by increased glucagon secretion, which is unaffected by, and unresponsive to, the concentration of blood glucose; insulin, however, is still secreted into the blood in response to blood glucose. As a result, glucose accumulates in the blood. The human insulin protein is composed of 51 amino acids and has a molecular mass of 5808 Da. It is a heterodimer of an A-chain and a B-chain, which are linked together by disulfide bonds. Insulin's structure varies slightly between species of animals. Insulin from non-human animal sources differs somewhat in effectiveness (in carbohydrate metabolism effects) from human insulin because of these variations. 
Porcine insulin is especially close to the human version, and was widely used to treat type 1 diabetics before human insulin could be produced in large quantities by recombinant DNA technologies. Insulin was the first peptide hormone discovered. Frederick Banting and Charles Best, working in the laboratory of John Macleod at the University of Toronto, were the first to isolate insulin from dog pancreas in 1921. Frederick Sanger sequenced the amino acid structure in 1951, which made insulin the first protein to be fully sequenced. The crystal structure of insulin in the solid state was determined by Dorothy Hodgkin in 1969. Insulin is also the first protein to have been chemically synthesised and produced by recombinant DNA technology. It is on the WHO Model List of Essential Medicines, the most important medications needed in a basic health system. Evolution and species distribution Insulin may have originated more than a billion years ago. The molecular origins of insulin go at least as far back as the simplest unicellular eukaryotes. Apart from animals, insulin-like proteins are also known to exist in fungi and protists. Insulin is produced by beta cells of the pancreatic islets in most vertebrates and by the Brockmann body in some teleost fish. The cone snails Conus geographus and Conus tulipa, venomous sea snails that hunt small fish, use modified forms of insulin in their venom cocktails. The insulin toxin, closer in structure to fishes' native insulin than to the snails' own, slows down the prey fishes by lowering their blood glucose levels. Production Insulin is produced exclusively in the beta cells of the pancreatic islets in mammals, and in the Brockmann body in some fish. Human insulin is produced from the INS gene, located on chromosome 11. Rodents have two functional insulin genes; one is the homolog of most mammalian genes (Ins2), and the other is a retroposed copy that includes the promoter sequence but is missing an intron (Ins1). Transcription of the insulin gene increases in response to elevated blood glucose. This is primarily controlled by transcription factors that bind enhancer sequences in the ~400 base pairs before the gene's transcription start site. The major transcription factors influencing insulin secretion are PDX1, NeuroD1, and MafA. During a low-glucose state, PDX1 (pancreatic and duodenal homeobox protein 1) is located in the nuclear periphery as a result of interaction with HDAC1 and HDAC2, which results in downregulation of insulin secretion. An increase in blood glucose levels causes phosphorylation of PDX1, which leads it to undergo nuclear translocation and bind the A3 element within the insulin promoter. Upon translocation it interacts with the coactivators HAT p300 and SETD7. PDX1 affects the histone modifications through acetylation and deacetylation as well as methylation. It is also said to suppress glucagon. NeuroD1, also known as β2, regulates insulin exocytosis in pancreatic β cells by directly inducing the expression of genes involved in exocytosis. It is localized in the cytosol, but in response to high glucose it becomes glycosylated by OGT and/or phosphorylated by ERK, which causes translocation to the nucleus. In the nucleus β2 heterodimerizes with E47, binds to the E1 element of the insulin promoter and recruits the co-activator p300, which acetylates β2. It is able to interact with other transcription factors as well in activation of the insulin gene. MafA is degraded by proteasomes upon low blood glucose levels. 
Increased levels of glucose make an unknown protein glycosylated. This protein works as a transcription factor for MafA in an unknown manner, and MafA is transported out of the cell. MafA is then translocated back into the nucleus, where it binds the C1 element of the insulin promoter. These transcription factors work synergistically and in a complex arrangement. Increased blood glucose can after a while destroy the binding capacities of these proteins, and therefore reduce the amount of insulin secreted, causing diabetes. The decreased binding activities can be mediated by glucose-induced oxidative stress, and antioxidants are said to prevent the decreased insulin secretion in glucotoxic pancreatic β cells. Stress signalling molecules and reactive oxygen species inhibit the insulin gene by interfering with the cofactors binding the transcription factors and with the transcription factors themselves. Several regulatory sequences in the promoter region of the human insulin gene bind to transcription factors. In general, the A-boxes bind to Pdx1 factors, E-boxes bind to NeuroD, C-boxes bind to MafA, and cAMP response elements to CREB. There are also silencers that inhibit transcription. Synthesis Insulin is synthesized as an inactive precursor molecule, a 110 amino acid-long protein called "preproinsulin". Preproinsulin is translated directly into the rough endoplasmic reticulum (RER), where its signal peptide is removed by signal peptidase to form "proinsulin". As the proinsulin folds, opposite ends of the protein, called the "A-chain" and the "B-chain", are fused together with three disulfide bonds. Folded proinsulin then transits through the Golgi apparatus and is packaged into specialized secretory vesicles. In these granules, proinsulin is cleaved by proprotein convertase 1/3 and proprotein convertase 2, removing the middle part of the protein, called the "C-peptide". Finally, carboxypeptidase E removes two pairs of amino acids from the protein's ends, resulting in active insulin – the insulin A- and B-chains, now connected with two disulfide bonds. The resulting mature insulin is packaged inside mature granules, waiting for metabolic signals (such as leucine, arginine, glucose and mannose) and vagal nerve stimulation to be exocytosed from the cell into the circulation. Insulin and its related proteins have been shown to be produced inside the brain, and reduced levels of these proteins are linked to Alzheimer's disease. Insulin release is also stimulated by beta-2 adrenergic receptor stimulation and inhibited by alpha-1 adrenergic receptor stimulation. In addition, cortisol, glucagon and growth hormone antagonize the actions of insulin during times of stress. Insulin also inhibits fatty acid release by hormone-sensitive lipase in adipose tissue. Structure Contrary to the initial belief that hormones would generally be small chemical molecules, insulin, the first peptide hormone whose structure was known, turned out to be quite large. A single protein (monomer) of human insulin is composed of 51 amino acids and has a molecular mass of 5808 Da. The molecular formula of human insulin is C₂₅₇H₃₈₃N₆₅O₇₇S₆. It is a combination of two peptide chains (a dimer), named the A-chain and the B-chain, which are linked together by two disulfide bonds. The A-chain is composed of 21 amino acids, while the B-chain consists of 30 residues. The linking (interchain) disulfide bonds are formed at cysteine residues between the positions A7-B7 and A20-B19. 
There is an additional (intrachain) disulfide bond within the A-chain between cysteine residues at positions A6 and A11. The A-chain exhibits two α-helical regions at A1-A8 and A12-A19 which are antiparallel, while the B-chain has a central α-helix (covering residues B9-B19) flanked by the disulfide bond on either side and two β-sheets (covering B7-B10 and B20-B23). The amino acid sequence of insulin is strongly conserved and varies only slightly between species. Bovine insulin differs from human in only three amino acid residues, and porcine insulin in one. Even insulin from some species of fish is similar enough to human to be clinically effective in humans. Insulin in some invertebrates is quite similar in sequence to human insulin, and has similar physiological effects. The strong homology seen in the insulin sequence of diverse species suggests that it has been conserved across much of animal evolutionary history. The C-peptide of proinsulin, however, differs much more among species; it is also a hormone, but a secondary one. Insulin is produced and stored in the body as a hexamer (a unit of six insulin molecules), while the active form is the monomer. The hexamer is about 36000 Da in size. The six molecules are linked together as three dimeric units to form a symmetrical molecule. An important feature is the presence of zinc atoms (Zn²⁺) on the axis of symmetry, which are surrounded by three water molecules and three histidine residues at position B10. The hexamer is an inactive form with long-term stability, which serves as a way to keep the highly reactive insulin protected, yet readily available. The hexamer-monomer conversion is one of the central aspects of insulin formulations for injection. The hexamer is far more stable than the monomer, which is desirable for practical reasons; however, the monomer is a much faster-reacting drug because diffusion rate is inversely related to particle size. A fast-reacting drug means insulin injections do not have to precede mealtimes by hours, which in turn gives people with diabetes more flexibility in their daily schedules. Insulin can aggregate and form fibrillar interdigitated beta-sheets. This can cause injection amyloidosis, and prevents the storage of insulin for long periods. Function Secretion Beta cells in the islets of Langerhans release insulin in two phases. The first-phase release is rapidly triggered in response to increased blood glucose levels, and lasts about 10 minutes. The second phase is a sustained, slow release of newly formed vesicles triggered independently of sugar, peaking in 2 to 3 hours. The two phases of insulin release suggest that insulin granules are present in distinct populations or "pools". During the first phase of insulin exocytosis, most of the granules predisposed for exocytosis are released after calcium internalization. This pool is known as the Readily Releasable Pool (RRP). The RRP granules represent 0.3-0.7% of the total insulin-containing granule population, and they are found immediately adjacent to the plasma membrane. During the second phase of exocytosis, insulin granules require mobilization to the plasma membrane and prior preparation before they can undergo release. Thus, the second phase of insulin release is governed by the rate at which granules get ready for release. This pool is known as the Reserve Pool (RP). The RP is released more slowly than the RRP (RRP: 18 granules/min; RP: 6 granules/min). 
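A toy calculation can make the two-pool picture concrete. The Python sketch below drains a small readily releasable pool quickly and a large reserve pool slowly, using the per-minute rates quoted above; the pool sizes are assumptions chosen only so that the RRP is a fraction of a percent of the total granule population, as stated, and are not measured values:

```python
def granules_released(minutes, rrp=50.0, rp=10_000.0,
                      rrp_rate=18.0, rp_rate=6.0):
    """Total granules released after `minutes`, draining the readily
    releasable pool (RRP) fast and the reserve pool (RP) slowly.
    Pool sizes are illustrative assumptions; rates follow the text."""
    released = 0.0
    for _ in range(minutes):
        take_rrp = min(rrp_rate, rrp)   # rapid first-phase component
        take_rp = min(rp_rate, rp)      # slower second-phase component
        rrp -= take_rrp
        rp -= take_rp
        released += take_rrp + take_rp
    return released

print(granules_released(3))    # early release dominated by the RRP
print(granules_released(60))   # later release sustained by the RP
```

Run for a few minutes, the output is dominated by the RRP (first phase); over an hour, the slower RP term accounts for nearly all of the release (second phase).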
Reduced first-phase insulin release may be the earliest detectable beta cell defect predicting onset of type 2 diabetes. First-phase release and insulin sensitivity are independent predictors of diabetes. The description of first-phase release is as follows: Glucose enters the β-cells through the glucose transporter GLUT2. At low blood sugar levels little glucose enters the β-cells; at high blood glucose concentrations large quantities of glucose enter these cells. The glucose that enters the β-cell is phosphorylated to glucose-6-phosphate (G-6-P) by glucokinase (hexokinase IV), which is not inhibited by G-6-P in the way that the hexokinases in other tissues (hexokinase I – III) are affected by this product. This means that the intracellular G-6-P concentration remains proportional to the blood sugar concentration. Glucose-6-phosphate enters the glycolytic pathway and then, via the pyruvate dehydrogenase reaction, the Krebs cycle, where multiple, high-energy ATP molecules are produced by the oxidation of acetyl CoA (the Krebs cycle substrate), leading to a rise in the ATP:ADP ratio within the cell. An increased intracellular ATP:ADP ratio closes the ATP-sensitive SUR1/Kir6.2 potassium channel (see sulfonylurea receptor). This prevents potassium ions (K⁺) from leaving the cell by facilitated diffusion, leading to a buildup of intracellular potassium ions. As a result, the inside of the cell becomes less negative with respect to the outside, leading to the depolarization of the cell surface membrane. Upon depolarization, voltage-gated calcium ion (Ca²⁺) channels open, allowing calcium ions to move into the cell by facilitated diffusion. The cytosolic calcium ion concentration can also be increased by calcium release from intracellular stores via activation of ryanodine receptors. The calcium ion concentration in the cytosol of the beta cells can also, or additionally, be increased through the activation of phospholipase C resulting from the binding of an extracellular ligand (hormone or neurotransmitter) to a G protein-coupled membrane receptor. Phospholipase C cleaves the membrane phospholipid phosphatidylinositol 4,5-bisphosphate into inositol 1,4,5-trisphosphate and diacylglycerol. Inositol 1,4,5-trisphosphate (IP3) then binds to receptor proteins in the plasma membrane of the endoplasmic reticulum (ER). This allows the release of Ca²⁺ ions from the ER via IP3-gated channels, which raises the cytosolic concentration of calcium ions independently of the effects of a high blood glucose concentration. Parasympathetic stimulation of the pancreatic islets operates via this pathway to increase insulin secretion into the blood. The significantly increased amount of calcium ions in the cells' cytoplasm causes the release into the blood of previously synthesized insulin, which has been stored in intracellular secretory vesicles. This is the primary mechanism for release of insulin. Other substances known to stimulate insulin release include the amino acids arginine and leucine, parasympathetic release of acetylcholine (acting via the phospholipase C pathway), sulfonylurea, cholecystokinin (CCK, also via phospholipase C), and the gastrointestinally derived incretins, such as glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic peptide (GIP). Release of insulin is strongly inhibited by norepinephrine (noradrenaline), which leads to increased blood glucose levels during stress. 
It appears that release of catecholamines by the sympathetic nervous system has conflicting influences on insulin release by beta cells, because insulin release is inhibited by α2-adrenergic receptors and stimulated by β2-adrenergic receptors. The net effect of norepinephrine from sympathetic nerves and epinephrine from adrenal glands on insulin release is inhibition, due to dominance of the α-adrenergic receptors. When the glucose level comes down to the usual physiologic value, insulin release from the β-cells slows or stops. If the blood glucose level drops lower than this, especially to dangerously low levels, release of hyperglycemic hormones (most prominently glucagon from the alpha cells of the islets of Langerhans) forces release of glucose into the blood from the liver glycogen stores, supplemented by gluconeogenesis if the glycogen stores become depleted. By increasing blood glucose, the hyperglycemic hormones prevent or correct life-threatening hypoglycemia. Evidence of impaired first-phase insulin release can be seen in the glucose tolerance test, demonstrated by a substantially elevated blood glucose level at 30 minutes after the ingestion of a glucose load (75 or 100 g of glucose), followed by a slow drop over the next 100 minutes, to remain above 120 mg/100 mL two hours after the start of the test. In a normal person the blood glucose level is corrected (and may even be slightly over-corrected) by the end of the test. An insulin spike is a "first response" to blood glucose increase; this response is individual and dose-specific, although it was previously assumed to be specific to food type only. Oscillations Even during digestion, in general one or two hours following a meal, insulin release from the pancreas is not continuous, but oscillates with a period of 3–6 minutes, changing from generating a blood insulin concentration of more than about 800 pmol/L to less than 100 pmol/L (in rats). This is thought to avoid downregulation of insulin receptors in target cells, and to assist the liver in extracting insulin from the blood. This oscillation is important to consider when administering insulin-stimulating medication, since it is the oscillating blood concentration of insulin release which should, ideally, be achieved, not a constant high concentration. This may be achieved by delivering insulin rhythmically to the portal vein, by light-activated delivery, or by islet cell transplantation to the liver. Blood insulin level The blood insulin level can be measured in international units, such as μIU/mL, or in molar concentration, such as pmol/L, where 1 μIU/mL equals 6.945 pmol/L. A typical blood level between meals is 8–11 μIU/mL (57–79 pmol/L). Signal transduction The effects of insulin are initiated by its binding to a receptor, the insulin receptor (IR), present in the cell membrane. The receptor molecule contains α- and β-subunits. Two molecules are joined to form what is known as a homodimer. Insulin binds to the α-subunits of the homodimer, which face the extracellular side of the cells. The β-subunits have tyrosine kinase enzyme activity which is triggered by insulin binding. This activity provokes the autophosphorylation of the β-subunits and subsequently the phosphorylation of proteins inside the cell known as insulin receptor substrates (IRS). The phosphorylation of the IRS activates a signal transduction cascade that leads to the activation of other kinases as well as transcription factors that mediate the intracellular effects of insulin. 
The cascade that leads to the insertion of GLUT4 glucose transporters into the cell membranes of muscle and fat cells, and to the synthesis of glycogen in liver and muscle tissue, as well as the conversion of glucose into triglycerides in liver, adipose, and lactating mammary gland tissue, operates via the activation, by IRS-1, of phosphoinositol 3-kinase (PI3K). This enzyme converts a phospholipid in the cell membrane by the name of phosphatidylinositol 4,5-bisphosphate (PIP2) into phosphatidylinositol 3,4,5-trisphosphate (PIP3), which, in turn, activates protein kinase B (PKB). Activated PKB facilitates the fusion of GLUT4-containing endosomes with the cell membrane, resulting in an increase in GLUT4 transporters in the plasma membrane. PKB also phosphorylates glycogen synthase kinase (GSK), thereby inactivating this enzyme. This means that its substrate, glycogen synthase (GS), cannot be phosphorylated, and remains dephosphorylated, and therefore active. The active enzyme, glycogen synthase (GS), catalyzes the rate-limiting step in the synthesis of glycogen from glucose. Similar dephosphorylations affect the enzymes controlling the rate of glycolysis leading to the synthesis of fats via malonyl-CoA in the tissues that can generate triglycerides, and also the enzymes that control the rate of gluconeogenesis in the liver. The overall effect of these final enzyme dephosphorylations is that, in the tissues that can carry out these reactions, glycogen and fat synthesis from glucose are stimulated, and glucose production by the liver through glycogenolysis and gluconeogenesis is inhibited. The breakdown of triglycerides by adipose tissue into free fatty acids and glycerol is also inhibited. After the intracellular signal that resulted from the binding of insulin to its receptor has been produced, termination of signaling is then needed. As mentioned below in the section on degradation, endocytosis and degradation of the receptor bound to insulin is a main mechanism to end signaling. In addition, the signaling pathway is also terminated by dephosphorylation of the tyrosine residues in the various signaling pathways by tyrosine phosphatases. Serine/threonine kinases are also known to reduce the activity of insulin. The structure of the insulin–insulin receptor complex has been determined using the techniques of X-ray crystallography. Physiological effects The actions of insulin on the global human metabolism level include: Increase of cellular intake of certain substances, most prominently glucose in muscle and adipose tissue (about two-thirds of body cells) Increase of DNA replication and protein synthesis via control of amino acid uptake Modification of the activity of numerous enzymes. The actions of insulin (indirect and direct) on cells include: Stimulates the uptake of glucose – insulin decreases blood glucose concentration by inducing intake of glucose by the cells. This is possible because insulin causes the insertion of the GLUT4 transporter in the cell membranes of muscle and fat tissues, which allows glucose to enter the cell. Increased fat synthesis – insulin forces fat cells to take in blood glucose, which is converted into triglycerides; decrease of insulin causes the reverse. Increased esterification of fatty acids – forces adipose tissue to make neutral fats (i.e., triglycerides) from fatty acids; decrease of insulin causes the reverse. 
Decreased lipolysis in fat cells – forces reduction in conversion of fat cell lipid stores into blood fatty acids and glycerol; decrease of insulin causes the reverse. Induced glycogen synthesis – when glucose levels are high, insulin induces the formation of glycogen by the activation of the hexokinase enzyme, which adds a phosphate group to glucose, thus resulting in a molecule that cannot exit the cell. At the same time, insulin inhibits the enzyme glucose-6-phosphatase, which removes the phosphate group. These two enzymes are key for the formation of glycogen. Also, insulin activates the enzymes phosphofructokinase and glycogen synthase, which are responsible for glycogen synthesis. Decreased gluconeogenesis and glycogenolysis – decreases production of glucose from noncarbohydrate substrates, primarily in the liver (the vast majority of endogenous insulin arriving at the liver never leaves the liver); decrease of insulin causes glucose production by the liver from assorted substrates. Decreased proteolysis – decreasing the breakdown of protein. Decreased autophagy – decreased level of degradation of damaged organelles. Postprandial levels inhibit autophagy completely. Increased amino acid uptake – forces cells to absorb circulating amino acids; decrease of insulin inhibits absorption. Arterial muscle tone – forces arterial wall muscle to relax, increasing blood flow, especially in microarteries; decrease of insulin reduces flow by allowing these muscles to contract. Increase in the secretion of hydrochloric acid by parietal cells in the stomach. Increased potassium uptake – forces cells synthesizing glycogen (a very spongy, "wet" substance, that increases the content of intracellular water, and its accompanying K⁺ ions) to absorb potassium from the extracellular fluids; lack of insulin inhibits absorption. Insulin's increase in cellular potassium uptake lowers potassium levels in blood plasma. This possibly occurs via insulin-induced translocation of the Na⁺/K⁺-ATPase to the surface of skeletal muscle cells. Decreased renal sodium excretion. In hepatocytes, insulin binding acutely leads to activation of protein phosphatase 2A (PP2A), which dephosphorylates the bifunctional enzyme fructose bisphosphatase-2 (PFKFB1), activating the phosphofructokinase-2 (PFK-2) active site. PFK-2 increases production of fructose 2,6-bisphosphate. Fructose 2,6-bisphosphate allosterically activates PFK-1, which favors glycolysis over gluconeogenesis. Increased glycolysis increases the formation of malonyl-CoA, a molecule that can be shunted into lipogenesis and that allosterically inhibits carnitine palmitoyltransferase I (CPT1), a mitochondrial enzyme necessary for the translocation of fatty acids into the intermembrane space of the mitochondria for fatty acid metabolism. Insulin also influences other body functions, such as vascular compliance and cognition. Once insulin enters the human brain, it enhances learning and memory and benefits verbal memory in particular. Enhancing brain insulin signaling by means of intranasal insulin administration also enhances the acute thermoregulatory and glucoregulatory response to food intake, suggesting that central nervous insulin contributes to the coordination of a wide variety of homeostatic or regulatory processes in the human body. Insulin also has stimulatory effects on gonadotropin-releasing hormone from the hypothalamus, thus favoring fertility. 
Degradation Once an insulin molecule has docked onto the receptor and effected its action, it may be released back into the extracellular environment, or it may be degraded by the cell. The two primary sites for insulin clearance are the liver and the kidney. It is broken down by the enzyme protein-disulfide reductase (glutathione), which breaks the disulfide bonds between the A and B chains. The liver clears most insulin during first-pass transit, whereas the kidney clears most of the insulin in systemic circulation. Degradation normally involves endocytosis of the insulin-receptor complex, followed by the action of insulin-degrading enzyme. An insulin molecule produced endogenously by the beta cells is estimated to be degraded within about one hour after its initial release into circulation (insulin half-life ~ 4–6 minutes). Regulator of endocannabinoid metabolism Insulin is a major regulator of endocannabinoid (EC) metabolism, and insulin treatment has been shown to reduce intracellular ECs, the 2-arachidonoylglycerol (2-AG) and anandamide (AEA), which correspond with insulin-sensitive expression changes in enzymes of EC metabolism. In insulin-resistant adipocytes, patterns of insulin-induced enzyme expression are disturbed in a manner consistent with elevated EC synthesis and reduced EC degradation. Findings suggest that insulin-resistant adipocytes fail to regulate EC metabolism and decrease intracellular EC levels in response to insulin stimulation, whereby obese insulin-resistant individuals exhibit increased concentrations of ECs. This dysregulation contributes to excessive visceral fat accumulation and reduced adiponectin release from abdominal adipose tissue, and further to the onset of several cardiometabolic risk factors that are associated with obesity and type 2 diabetes. Hypoglycemia Hypoglycemia, also known as "low blood sugar", is when blood sugar decreases to below normal levels. This may result in a variety of symptoms including clumsiness, trouble talking, confusion, loss of consciousness, seizures or death. A feeling of hunger, sweating, shakiness and weakness may also be present. Symptoms typically come on quickly. The most common cause of hypoglycemia is medications used to treat diabetes such as insulin and sulfonylureas. Risk is greater in diabetics who have eaten less than usual, exercised more than usual or have consumed alcohol. Other causes of hypoglycemia include kidney failure, certain tumors such as insulinoma, liver disease, hypothyroidism, starvation, inborn errors of metabolism, severe infections, reactive hypoglycemia and a number of drugs including alcohol. Low blood sugar may occur in otherwise healthy babies who have not eaten for a few hours. Diseases and syndromes There are several conditions in which insulin disturbance is pathologic: Diabetes – general term referring to all states characterized by hyperglycemia. It can be of the following types: Type 1 diabetes – autoimmune-mediated destruction of insulin-producing β-cells in the pancreas, resulting in absolute insulin deficiency. Type 2 diabetes – either inadequate insulin production by the β-cells or insulin resistance, or both, because of reasons not completely understood; there is correlation with diet, with sedentary lifestyle, with obesity, with age and with metabolic syndrome. 
Causality has been demonstrated in multiple model organisms including mice and monkeys; importantly, non-obese people do get type 2 diabetes due to diet, sedentary lifestyle and unknown risk factors, though this may not be a causal relationship. It is likely that there is genetic susceptibility to develop type 2 diabetes under certain environmental conditions. Other types of impaired glucose tolerance (see Diabetes). Insulinoma – a tumor of beta cells producing excess insulin, or reactive hypoglycemia. Metabolic syndrome – a poorly understood condition first called syndrome X by Gerald Reaven. It is not clear whether the syndrome has a single, treatable cause, or is the result of body changes leading to type 2 diabetes. It is characterized by elevated blood pressure, dyslipidemia (disturbances in blood cholesterol forms and other blood lipids), and increased waist circumference (at least in populations in much of the developed world). The basic underlying cause may be the insulin resistance that precedes type 2 diabetes, which is a diminished capacity for insulin response in some tissues (e.g., muscle, fat). It is common for morbidities such as essential hypertension, obesity, type 2 diabetes, and cardiovascular disease (CVD) to develop. Polycystic ovary syndrome – a complex syndrome in women in the reproductive years where anovulation and androgen excess are commonly displayed as hirsutism. In many cases of PCOS, insulin resistance is present. Medical uses Biosynthetic human insulin (insulin human rDNA, INN) for clinical use is manufactured by recombinant DNA technology. Biosynthetic human insulin has increased purity when compared with extractive animal insulin, and this enhanced purity reduces antibody formation. Researchers have succeeded in introducing the gene for human insulin into plants as another method of producing insulin ("biopharming") in safflower. This technique is anticipated to reduce production costs. Several analogs of human insulin are available. These insulin analogs are closely related to the human insulin structure, and were developed for specific aspects of glycemic control in terms of fast action (prandial insulins) and long action (basal insulins). The first biosynthetic insulin analog developed for clinical use at mealtime (prandial insulin) was Humalog (insulin lispro); it is more rapidly absorbed after subcutaneous injection than regular insulin, with an effect 15 minutes after injection. Other rapid-acting analogues are NovoRapid and Apidra, with similar profiles. All are rapidly absorbed due to amino acid sequences that reduce formation of dimers and hexamers (monomeric insulins are more rapidly absorbed). Fast-acting insulins do not require the injection-to-meal interval previously recommended for human insulin and animal insulins. The other type is long-acting insulin; the first of these was Lantus (insulin glargine). These have a steady effect for an extended period from 18 to 24 hours. Likewise, another protracted insulin analogue (Levemir) is based on a fatty acid acylation approach. A myristic acid molecule is attached to this analogue, which associates the insulin molecule with the abundant serum albumin, which in turn extends the effect and reduces the risk of hypoglycemia. Both protracted analogues need to be taken only once daily, and are used for type 1 diabetics as the basal insulin. 
A combination of a rapid-acting and a protracted insulin is also available, making it more likely for patients to achieve an insulin profile that mimics that of the body's own insulin release. Insulin is also used in many cell lines, such as CHO-s, HEK 293 or Sf9, for the manufacturing of monoclonal antibodies, virus vaccines, and gene therapy products. Insulin is usually taken as subcutaneous injections by single-use syringes with needles, via an insulin pump, or by repeated-use insulin pens with disposable needles. Inhaled insulin is also available in the U.S. market. Unlike many medicines, insulin cannot be taken by mouth because, like nearly all other proteins introduced into the gastrointestinal tract, it is reduced to fragments, whereupon all activity is lost. There has been some research into ways to protect insulin from the digestive tract, so that it can be administered orally or sublingually. In 2021, the World Health Organization added insulin to its model list of essential medicines. Insulin, and all other medications, are supplied free of charge to people with diabetes by the National Health Service in the countries of the United Kingdom. History of study Discovery In 1869, while studying the structure of the pancreas under a microscope, Paul Langerhans, a medical student in Berlin, identified some previously unnoticed tissue clumps scattered throughout the bulk of the pancreas. The function of the "little heaps of cells", later known as the islets of Langerhans, initially remained unknown, but Édouard Laguesse later suggested they might produce secretions that play a regulatory role in digestion. Paul Langerhans' son, Archibald, also helped to understand this regulatory role. In 1889, the physician Oskar Minkowski, in collaboration with Joseph von Mering, removed the pancreas from a healthy dog to test its assumed role in digestion. On testing the urine, they found sugar, establishing for the first time a relationship between the pancreas and diabetes. In 1901, another major step was taken by the American physician and scientist Eugene Lindsay Opie, when he isolated the role of the pancreas to the islets of Langerhans: "Diabetes mellitus when the result of a lesion of the pancreas is caused by destruction of the islets of Langerhans and occurs only when these bodies are in part or wholly destroyed". Over the next two decades researchers made several attempts to isolate the islets' secretions. In 1906 George Ludwig Zuelzer achieved partial success in treating dogs with pancreatic extract, but he was unable to continue his work. Between 1911 and 1912, E.L. Scott at the University of Chicago tried aqueous pancreatic extracts and noted "a slight diminution of glycosuria", but was unable to convince his director of his work's value; it was shut down. Israel Kleiner demonstrated similar effects at Rockefeller University in 1915, but World War I interrupted his work and he did not return to it. 
In 1916, Nicolae Paulescu developed an aqueous pancreatic extract which, when injected into a diabetic dog, had a normalizing effect on blood sugar levels. He had to interrupt his experiments because of World War I, and in 1921 he wrote four papers about his work carried out in Bucharest and his tests on a diabetic dog. Later that year, he published "Research on the Role of the Pancreas in Food Assimilation". The name "insulin" was coined by Edward Albert Sharpey-Schafer in 1916 for a hypothetical molecule produced by pancreatic islets of Langerhans (Latin insula for islet or island) that controls glucose metabolism. Unbeknown to Sharpey-Schafer, Jean de Meyer had introduced the very similar word "insuline" in 1909 for the same molecule. Extraction and purification In October 1920, Canadian Frederick Banting concluded that the digestive secretions that Minkowski had originally studied were breaking down the islet secretion, thereby making it impossible to extract successfully. A surgeon by training, Banting knew that blockages of the pancreatic duct would lead most of the pancreas to atrophy, while leaving the islets of Langerhans intact. He reasoned that a relatively pure extract could be made from the islets once most of the rest of the pancreas was gone. He jotted a note to himself: "Ligate pancreatic ducts of dog. Keep dogs alive till acini degenerate leaving Islets. Try to isolate the internal secretion of these + relieve glycosurea[sic]." In the spring of 1921, Banting traveled to Toronto to explain his idea to John Macleod, Professor of Physiology at the University of Toronto. Macleod was initially skeptical, since Banting had no background in research and was not familiar with the latest literature, but he agreed to provide lab space for Banting to test out his ideas. Macleod also arranged for two undergraduates to be Banting's lab assistants that summer, but Banting required only one lab assistant. Charles Best and Clark Noble flipped a coin; Best won the coin toss and took the first shift. This proved unfortunate for Noble, as Banting kept Best for the entire summer and eventually shared half his Nobel Prize money and credit for the discovery with Best. On 30 July 1921, Banting and Best successfully isolated an extract ("isletin") from the islets of a duct-tied dog and injected it into a diabetic dog, finding that the extract reduced its blood sugar by 40% in 1 hour. Banting and Best presented their results to Macleod on his return to Toronto in the fall of 1921, but Macleod pointed out flaws with the experimental design, and suggested the experiments be repeated with more dogs and better equipment. He moved Banting and Best into a better laboratory and began paying Banting a salary from his research grants. Several weeks later, the second round of experiments was also a success, and Macleod helped publish their results privately in Toronto that November. Bottlenecked by the time-consuming task of duct-tying dogs and waiting several weeks to extract insulin, Banting hit upon the idea of extracting insulin from the fetal calf pancreas, which had not yet developed digestive glands. By December, they had also succeeded in extracting insulin from the adult cow pancreas. Macleod discontinued all other research in his laboratory to concentrate on the purification of insulin. He invited biochemist James Collip to help with this task, and the team felt ready for a clinical test within a month. 
On 11 January 1922, Leonard Thompson, a 14-year-old diabetic who lay dying at the Toronto General Hospital, was given the first injection of insulin. However, the extract was so impure that Thompson had a severe allergic reaction, and further injections were cancelled. Over the next 12 days, Collip worked day and night to improve the ox-pancreas extract. A second dose was injected on 23 January, eliminating the glycosuria that was typical of diabetes without causing any obvious side-effects. One of the first American patients was Elizabeth Hughes, the daughter of U.S. Secretary of State Charles Evans Hughes, who was treated in Toronto; the first patient treated in the U.S. was future woodcut artist James D. Havens, after John Ralston Williams imported insulin from Toronto to Rochester, New York, to treat him. Banting and Best never worked well with Collip, regarding him as something of an interloper, and Collip left the project soon after. Over the spring of 1922, Best managed to improve his techniques to the point where large quantities of insulin could be extracted on demand, but the preparation remained impure. The drug firm Eli Lilly and Company had offered assistance not long after the first publications in 1921, and the Toronto group took Lilly up on the offer in April. In November, Lilly's head chemist, George B. Walden, discovered isoelectric precipitation and was able to produce large quantities of highly refined insulin. Shortly thereafter, insulin was offered for sale to the general public. Patent Toward the end of January 1922, tensions mounted between the four "co-discoverers" of insulin, and Collip briefly threatened to patent his purification process separately. John G. FitzGerald, director of the non-commercial public health institution Connaught Laboratories, therefore stepped in as peacemaker. The resulting agreement of 25 January 1922 established two key conditions: 1) that the collaborators would sign a contract agreeing not to take out a patent with a commercial pharmaceutical firm during an initial working period with Connaught; and 2) that no changes in research policy would be allowed unless first discussed among FitzGerald and the four collaborators. The agreement helped contain disagreement and tied the research to Connaught's public mandate. Initially, Macleod and Banting were particularly reluctant to patent their process for insulin on grounds of medical ethics. However, concerns remained that a private third party would hijack and monopolize the research (as Eli Lilly and Company had hinted), and that safe distribution would be difficult to guarantee without capacity for quality control. To this end, Edward Calvin Kendall gave valuable advice. He had isolated thyroxin at the Mayo Clinic in 1914 and patented the process through an arrangement between himself, the brothers Mayo, and the University of Minnesota, transferring the patent to the public university. On 12 April, Banting, Best, Collip, Macleod, and FitzGerald wrote jointly to the president of the University of Toronto to propose a similar arrangement, with the aim of assigning a patent to the Board of Governors of the university. The letter emphasized this rationale. The assignment to the University of Toronto Board of Governors was completed on 15 January 1923, for the token payment of $1.00. The arrangement was congratulated in The World's Work in 1923 as "a step forward in medical ethics". It has also received much media attention in the 2010s regarding the issue of healthcare and drug affordability. 
Following further concern regarding Eli Lilly's attempts to separately patent parts of the manufacturing process, Connaught's Assistant Director and Head of the Insulin Division, Robert Defries, established a patent pooling policy that required producers to freely share any improvements to the manufacturing process without compromising affordability. Structural analysis and synthesis Purified animal-sourced insulin was initially the only type of insulin available for experiments and for treating diabetics. John Jacob Abel was the first to produce the crystallised form, in 1926. Evidence of its protein nature was first given by Michael Somogyi, Edward A. Doisy, and Philip A. Shaffer in 1924. It was fully proven when Hans Jensen and Earl A. Evans Jr. isolated the amino acids phenylalanine and proline in 1935. The amino acid structure of insulin was first characterized in 1951 by Frederick Sanger, and the first synthetic insulin was produced simultaneously in the labs of Panayotis Katsoyannis at the University of Pittsburgh and Helmut Zahn at RWTH Aachen University in the mid-1960s. Synthetic crystalline bovine insulin was achieved by Chinese researchers in 1965. The complete 3-dimensional structure of insulin was determined by X-ray crystallography in Dorothy Hodgkin's laboratory in 1969. Hans E. Weber discovered preproinsulin while working as a research fellow at the University of California Los Angeles in 1974. In 1973–1974, Weber learned techniques for isolating, purifying, and translating messenger RNA. To further investigate insulin, he obtained pancreatic tissues from a slaughterhouse in Los Angeles and later from animal stock at UCLA. He isolated and purified total messenger RNA from pancreatic islet cells, which was then translated in oocytes from Xenopus laevis and precipitated using anti-insulin antibodies. When the total translated protein was run on SDS-polyacrylamide gel electrophoresis and a sucrose gradient, peaks corresponding to insulin and proinsulin were isolated. However, to Weber's surprise, a third peak was isolated, corresponding to a molecule larger than proinsulin. After reproducing the experiment several times, he consistently noted this large peak prior to proinsulin, which he determined must be a larger precursor molecule upstream of proinsulin. In May 1975, at the American Diabetes Association meeting in New York, Weber gave an oral presentation of his work in which he was the first to name this precursor molecule "preproinsulin". Following this oral presentation, Weber was invited to dinner to discuss his paper and findings by Donald Steiner, a researcher who had contributed to the characterization of proinsulin. A year later, in April 1976, this molecule was further characterized and sequenced by Steiner, referencing the work and discovery of Hans Weber. Preproinsulin became an important molecule for studying the process of transcription and translation. The first genetically engineered (recombinant) synthetic human insulin was produced using E. coli in 1978 by Arthur Riggs and Keiichi Itakura at the Beckman Research Institute of the City of Hope in collaboration with Herbert Boyer at Genentech. Genentech, founded by Swanson and Boyer, licensed the technology to Eli Lilly and Company, which went on in 1982 to sell the first commercially available biosynthetic human insulin under the brand name Humulin. The vast majority of insulin used worldwide is biosynthetic recombinant human insulin or its analogues. 
Recently, another recombinant approach has been used by a pioneering group of Canadian researchers, using an easily grown safflower plant, for the production of much cheaper insulin. Recombinant insulin is produced either in yeast (usually Saccharomyces cerevisiae) or in E. coli. In yeast, insulin may be engineered as a single-chain protein with a KexII endoprotease (a yeast homolog of PCI/PCII) site that separates the insulin A chain from a C-terminally truncated insulin B chain. A chemically synthesized C-terminal tail containing the missing threonine is then grafted onto insulin by reverse proteolysis using the inexpensive protease trypsin; typically the lysine on the C-terminal tail is protected with a chemical protecting group to prevent proteolysis. The ease of modular synthesis and the relative safety of modifications in that region account for common insulin analogs with C-terminal modifications (e.g. lispro, aspart, glulisine). The Genentech synthesis and completely chemical syntheses such as that by Bruce Merrifield are not preferred because the efficiency of recombining the two insulin chains is low, primarily due to competition with the precipitation of the insulin B chain. Nobel Prizes The Nobel Prize committee in 1923 credited the practical extraction of insulin to a team at the University of Toronto and awarded the Nobel Prize to two men: Frederick Banting and John Macleod. They were awarded the Nobel Prize in Physiology or Medicine in 1923 for the discovery of insulin. Banting, incensed that Best was not mentioned, shared his prize with him, and Macleod immediately shared his with James Collip. The patent for insulin was sold to the University of Toronto for one dollar. Two other Nobel Prizes have been awarded for work on insulin. British molecular biologist Frederick Sanger, who determined the primary structure of insulin in 1955, was awarded the 1958 Nobel Prize in Chemistry. Rosalyn Sussman Yalow received the 1977 Nobel Prize in Medicine for the development of the radioimmunoassay for insulin. Several Nobel Prizes also have an indirect connection with insulin. George Minot, co-recipient of the 1934 Nobel Prize for the development of the first effective treatment for pernicious anemia, had diabetes. William Castle observed that the 1921 discovery of insulin, arriving in time to keep Minot alive, was therefore also responsible for the discovery of a cure for pernicious anemia. Dorothy Hodgkin was awarded a Nobel Prize in Chemistry in 1964 for the development of crystallography, the technique she used for deciphering the complete molecular structure of insulin in 1969. Controversy The work published by Banting, Best, Collip, and Macleod represented the preparation of purified insulin extract suitable for use on human patients. Although Paulescu discovered the principles of the treatment, his saline extract could not be used on humans, and he was not mentioned in the 1923 Nobel Prize. Ian Murray was particularly active in working to correct "the historical wrong" against Nicolae Paulescu. Murray was a professor of physiology at the Anderson College of Medicine in Glasgow, Scotland, the head of the department of Metabolic Diseases at a leading Glasgow hospital, vice-president of the British Association of Diabetes, and a founding member of the International Diabetes Federation. Murray wrote in support of Paulescu's priority. In a private communication, Arne Tiselius, former head of the Nobel Institute, expressed his personal opinion that Paulescu was equally worthy of the award in 1923. 
References Further reading Famous Canadian Physicians: Sir Frederick Banting at Library and Archives Canada External links University of Toronto Libraries Collection: Discovery and Early Development of Insulin, 1920–1925 CBC Digital Archives – Banting, Best, Macleod, Collip: Chasing a Cure for Diabetes Animations of insulin's action in the body at AboutKidsHealth.ca (archived 9 March 2011) Animal products Genes on human chromosome 11 Hormones of glucose metabolism Human hormones Insulin receptor agonists Insulin-like growth factor receptor agonists Pancreatic hormones Peptide hormones Recombinant proteins Tumor markers
Insulin
[ "Chemistry", "Biology" ]
11,279
[ "Biomarkers", "Natural products", "Biotechnology products", "Animal products", "Tumor markers", "Recombinant proteins", "Chemical pathology" ]
14,900
https://en.wikipedia.org/wiki/ISO%203166
ISO 3166 is a standard published by the International Organization for Standardization (ISO) that defines codes for the names of countries, dependent territories, special areas of geographical interest, and their principal subdivisions (e.g., provinces or states). The official name of the standard is Codes for the representation of names of countries and their subdivisions. Parts It consists of three parts: ISO 3166-1, Codes for the representation of names of countries and their subdivisions – Part 1: Country codes, defines codes for the names of countries, dependent territories, and special areas of geographical interest. It defines three sets of country codes: ISO 3166-1 alpha-2 – two-letter country codes which are the most widely used of the three, and used most prominently for the Internet's country code top-level domains (with a few exceptions). ISO 3166-1 alpha-3 – three-letter country codes which allow a better visual association between the codes and the country names than the alpha-2 codes. ISO 3166-1 numeric – three-digit country codes which are identical to those developed and maintained by the United Nations Statistics Division, with the advantage of script (writing system) independence, and hence useful for people or systems using non-Latin scripts. ISO 3166-2, Codes for the representation of names of countries and their subdivisions – Part 2: Country subdivision code, defines codes for the names of the principal subdivisions (e.g., provinces, states, departments, regions) of all countries coded in ISO 3166-1. ISO 3166-3, Codes for the representation of names of countries and their subdivisions – Part 3: Code for formerly used names of countries, defines codes for country names which have been deleted from ISO 3166-1 since its first publication in 1974. Editions The first edition of ISO 3166, which included only alphabetic country codes, was published in 1974. The second edition, published in 1981, also included numeric country codes, with the third and fourth editions published in 1988 and 1993 respectively. The fifth edition, published between 1997 and 1999, was expanded into three parts to include codes for subdivisions and former countries. ISO 3166 Maintenance Agency The ISO 3166 standard is maintained by the ISO 3166 Maintenance Agency (ISO 3166/MA), located at the ISO central office in Geneva. Originally it was located at the Deutsches Institut für Normung (DIN) in Berlin. Its principal tasks are: To add and to eliminate country names and to assign code elements to them; To publish lists of country names and code elements; To maintain a reference list of all country code elements and subdivision code elements used and their period of use; To issue newsletters announcing changes to the code tables; To advise users on the application of ISO 3166. Members There are fifteen experts with voting rights on the ISO 3166/MA. 
Nine are representatives of national standards organizations: Association française de normalisation (AFNOR) – France; American National Standards Institute (ANSI) – United States; British Standards Institution (BSI) – United Kingdom; Deutsches Institut für Normung (DIN) – Germany; Japanese Industrial Standards Committee (JISC) – Japan; Standards Australia (SA) – Australia; Kenya Bureau of Standards (KEBS) – Kenya; Standardization Administration of China (SAC) – China; Swedish Standards Institute (SIS) – Sweden. The other six are representatives of major United Nations agencies or other international organizations who are all users of ISO 3166-1: International Atomic Energy Agency (IAEA), International Civil Aviation Organization (ICAO), International Telecommunication Union (ITU), Internet Corporation for Assigned Names and Numbers (ICANN), Universal Postal Union (UPU), and United Nations Economic Commission for Europe (UNECE). The ISO 3166/MA has further associated members who do not participate in the votes but who, through their expertise, have significant influence on the decision-taking procedure in the maintenance agency. Codes beginning with "X" Country codes beginning with "X" are used for private custom use (reserved), never for official codes. Despite the words "private custom", the use may include other public standards. ISO affirms that no country code beginning with X will ever be standardised. Examples of X codes include: the ISO 3166-based NATO country codes (STANAG 1059, 9th edition), which use "X" codes for imaginary exercise countries ranging from XXB for "Brownland" to XXY for "Yellowland", as well as for major commands such as XXE for SHAPE or XXS for SACLANT; and the X currencies defined in ISO 4217. Current country codes See also ISO standards ISO 3166 Explanatory notes References External links ISO 3166 Maintenance Agency, International Organization for Standardization (ISO) 03166 1974 introductions 1974 establishments Internationalization and localization
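The relationship between the three ISO 3166-1 code sets described above can be made concrete with a short sketch. The Python snippet below is a minimal illustration: the ISO_3166_1 table and the alpha2_to_alpha3 helper are hypothetical names introduced here, and the mention of the third-party pycountry package is an assumption about an external library rather than part of the standard itself.

```python
# Illustrative only: a few entries from the three ISO 3166-1 code sets.
# The codes below are well-known published values; the dict is just one
# possible way to represent them.
ISO_3166_1 = {
    # country name: (alpha-2, alpha-3, numeric)
    "France":        ("FR", "FRA", "250"),
    "Germany":       ("DE", "DEU", "276"),
    "Japan":         ("JP", "JPN", "392"),
    "United States": ("US", "USA", "840"),
}

def alpha2_to_alpha3(alpha2: str) -> str:
    """Translate an alpha-2 code to its alpha-3 equivalent using the table above."""
    for _name, (a2, a3, _num) in ISO_3166_1.items():
        if a2 == alpha2:
            return a3
    raise KeyError(f"unknown alpha-2 code: {alpha2}")

print(alpha2_to_alpha3("DE"))  # DEU

# The third-party 'pycountry' package (not part of the standard library) ships
# the full ISO 3166 tables, e.g.:
#   import pycountry
#   pycountry.countries.get(alpha_2="FR").numeric   # '250'
#   pycountry.subdivisions.get(code="US-CA").name   # an ISO 3166-2 subdivision
```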
ISO 3166
[ "Technology" ]
970
[ "Natural language and computing", "Internationalization and localization" ]
14,907
https://en.wikipedia.org/wiki/Inverse%20function
In mathematics, the inverse function of a function (also called the inverse of ) is a function that undoes the operation of . The inverse of exists if and only if is bijective, and if it exists, is denoted by For a function , its inverse admits an explicit description: it sends each element to the unique element such that . As an example, consider the real-valued function of a real variable given by . One can think of as the function which multiplies its input by 5 then subtracts 7 from the result. To undo this, one adds 7 to the input, then divides the result by 5. Therefore, the inverse of is the function defined by Definitions Let be a function whose domain is the set , and whose codomain is the set . Then is invertible if there exists a function from to such that for all and for all . If is invertible, then there is exactly one function satisfying this property. The function is called the inverse of , and is usually denoted as , a notation introduced by John Frederick William Herschel in 1813. The function is invertible if and only if it is bijective. This is because the condition for all implies that is injective, and the condition for all implies that is surjective. The inverse function to can be explicitly described as the function . Inverses and composition Recall that if is an invertible function with domain and codomain , then , for every and for every . Using the composition of functions, this statement can be rewritten to the following equations between functions: and where is the identity function on the set ; that is, the function that leaves its argument unchanged. In category theory, this statement is used as the definition of an inverse morphism. Considering function composition helps to understand the notation . Repeatedly composing a function with itself is called iteration. If is applied times, starting with the value , then this is written as ; so , etc. Since , composing and yields , "undoing" the effect of one application of . Notation While the notation might be misunderstood, certainly denotes the multiplicative inverse of and has nothing to do with the inverse function of . The notation might be used for the inverse function to avoid ambiguity with the multiplicative inverse. In keeping with the general notation, some English authors use expressions like to denote the inverse of the sine function applied to (actually a partial inverse; see below). Other authors feel that this may be confused with the notation for the multiplicative inverse of , which can be denoted as . To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin ). For instance, the inverse of the sine function is typically called the arcsine function, written as . Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin ). For instance, the inverse of the hyperbolic sine function is typically written as . The expressions like can still be useful to distinguish the multivalued inverse from the partial inverse: . Other inverse special functions are sometimes prefixed with the prefix "inv", if the ambiguity of the notation should be avoided. Examples Squaring and square root functions The function given by is not injective because for all . Therefore, is not invertible. If the domain of the function is restricted to the nonnegative reals, that is, we take the function with the same rule as before, then the function is bijective and so, invertible. 
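The two examples above can be checked numerically. This is a minimal sketch; the function names f, f_inv, and square are introduced here purely for illustration and are not part of the article.

```python
import math

# f multiplies its input by 5 and subtracts 7; its inverse adds 7 and divides by 5.
def f(x: float) -> float:
    return 5 * x - 7

def f_inv(y: float) -> float:
    return (y + 7) / 5

# Squaring restricted to the nonnegative reals is bijective onto the
# nonnegative reals, so the positive square root undoes it there.
def square(x: float) -> float:
    assert x >= 0, "restricted domain: nonnegative reals only"
    return x * x

for x in (0.0, 1.5, 42.0):
    assert abs(f_inv(f(x)) - x) < 1e-12          # inverse undoes f
    assert abs(math.sqrt(square(x)) - x) < 1e-12  # sqrt undoes restricted squaring
```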
The inverse function here is called the (positive) square root function and is denoted by . Standard inverse functions The following table shows several standard functions and their inverses: Formula for the inverse Many functions given by algebraic formulas possess a formula for their inverse. This is because the inverse of an invertible function has an explicit description as . This allows one to easily determine inverses of many functions that are given by algebraic formulas. For example, if is the function then to determine for a real number , one must find the unique real number such that . This equation can be solved: Thus the inverse function is given by the formula Sometimes, the inverse of a function cannot be expressed by a closed-form formula. For example, if is the function then is a bijection, and therefore possesses an inverse function . The formula for this inverse has an expression as an infinite sum: Properties Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations. Uniqueness If an inverse function exists for a given function , then it is unique. This follows since the inverse function must be the converse relation, which is completely determined by . Symmetry There is a symmetry between a function and its inverse. Specifically, if is an invertible function with domain and codomain , then its inverse has domain and image , and the inverse of is the original function . In symbols, for functions and , and This statement is a consequence of the implication that for to be invertible it must be bijective. The involutory nature of the inverse can be concisely expressed by The inverse of a composition of functions is given by Notice that the order of and have been reversed; to undo followed by , we must first undo , and then undo . For example, let and let . Then the composition is the function that first multiplies by three and then adds five, To reverse this process, we must first subtract five, and then divide by three, This is the composition . Self-inverses If is a set, then the identity function on is its own inverse: More generally, a function is equal to its own inverse, if and only if the composition is equal to . Such a function is called an involution. Graph of the inverse If is invertible, then the graph of the function is the same as the graph of the equation This is identical to the equation that defines the graph of , except that the roles of and have been reversed. Thus the graph of can be obtained from the graph of by switching the positions of the and axes. This is equivalent to reflecting the graph across the line . Inverses and derivatives By the inverse function theorem, a continuous function of a single variable (where ) is invertible on its range (image) if and only if it is either strictly increasing or decreasing (with no local maxima or minima). For example, the function is invertible, since the derivative is always positive. If the function is differentiable on an interval and for each , then the inverse is differentiable on . If , the derivative of the inverse is given by the inverse function theorem, Using Leibniz's notation the formula above can be written as This result follows from the chain rule (see the article on inverse functions and differentiation). The inverse function theorem can be generalized to functions of several variables. 
Specifically, a continuously differentiable multivariable function is invertible in a neighborhood of a point as long as the Jacobian matrix of at is invertible. In this case, the Jacobian of at is the matrix inverse of the Jacobian of at . Real-world examples Let be the function that converts a temperature in degrees Celsius to a temperature in degrees Fahrenheit, then its inverse function converts degrees Fahrenheit to degrees Celsius, since Suppose assigns each child in a family its birth year. An inverse function would output which child was born in a given year. However, if the family has children born in the same year (for instance, twins or triplets, etc.) then the output cannot be known when the input is the common birth year. As well, if a year is given in which no child was born then a child cannot be named. But if each child was born in a separate year, and if we restrict attention to the three years in which a child was born, then we do have an inverse function. For example, Let be the function that leads to an percentage rise of some quantity, and be the function producing an percentage fall. Applied to $100 with = 10%, we find that applying the first function followed by the second does not restore the original value of $100, demonstrating the fact that, despite appearances, these two functions are not inverses of each other. The formula to calculate the pH of a solution is . In many cases we need to find the concentration of acid from a pH measurement. The inverse function is used. Generalizations Partial inverses Even if a function is not one-to-one, it may be possible to define a partial inverse of by restricting the domain. For example, the function is not one-to-one, since . However, the function becomes one-to-one if we restrict to the domain , in which case (If we instead restrict to the domain , then the inverse is the negative of the square root of .) Alternatively, there is no need to restrict the domain if we are content with the inverse being a multivalued function: Sometimes, this multivalued inverse is called the full inverse of , and the portions (such as and −) are called branches. The most important branch of a multivalued function (e.g. the positive square root) is called the principal branch, and its value at is called the principal value of . For a continuous function on the real line, one branch is required between each pair of local extrema. For example, the inverse of a cubic function with a local maximum and a local minimum has three branches (see the adjacent picture). These considerations are particularly important for defining the inverses of trigonometric functions. For example, the sine function is not one-to-one, since for every real (and more generally for every integer ). However, the sine is one-to-one on the interval , and the corresponding partial inverse is called the arcsine. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between − and . The following table describes the principal branch of each inverse trigonometric function: Left and right inverses Function composition on the left and on the right need not coincide. In general, the conditions "There exists such that " and "There exists such that " imply different properties of . For example, let denote the squaring map, such that for all in , and let denote the square root map, such that for all . Then for all in ; that is, is a right inverse to . However, is not a left inverse to , since, e.g., . 
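The right-inverse versus left-inverse distinction in the preceding paragraph, and the temperature-conversion example given earlier, can both be verified with a short numerical check. This is a minimal sketch; the names square, sqrt, c_to_f, and f_to_c are introduced here for illustration only.

```python
import math

# Squaring on all reals, and the square root on the nonnegative reals.
def square(x: float) -> float:
    return x * x

def sqrt(x: float) -> float:
    return math.sqrt(x)

# sqrt is a right inverse of square: square(sqrt(x)) == x for every x >= 0 ...
assert square(sqrt(9.0)) == 9.0
# ... but not a left inverse, because squaring forgets the sign:
assert sqrt(square(-1.0)) == 1.0   # not -1.0

# A genuine two-sided inverse pair: Celsius <-> Fahrenheit conversion.
def c_to_f(c: float) -> float:
    return c * 9 / 5 + 32

def f_to_c(f: float) -> float:
    return (f - 32) * 5 / 9

assert f_to_c(c_to_f(100.0)) == 100.0
assert c_to_f(f_to_c(212.0)) == 212.0
```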
Left inverses If , a left inverse for (or retraction of ) is a function such that composing with from the left gives the identity function That is, the function satisfies the rule If , then . The function must equal the inverse of on the image of , but may take any values for elements of not in the image. A function with nonempty domain is injective if and only if it has a left inverse. An elementary proof runs as follows: If is the left inverse of , and , then . If nonempty is injective, construct a left inverse as follows: for all , if is in the image of , then there exists such that . Let ; this definition is unique because is injective. Otherwise, let be an arbitrary element of .For all , is in the image of . By construction, , the condition for a left inverse. In classical mathematics, every injective function with a nonempty domain necessarily has a left inverse; however, this may fail in constructive mathematics. For instance, a left inverse of the inclusion of the two-element set in the reals violates indecomposability by giving a retraction of the real line to the set . Right inverses A right inverse for (or section of ) is a function such that That is, the function satisfies the rule If , then Thus, may be any of the elements of that map to under . A function has a right inverse if and only if it is surjective (though constructing such an inverse in general requires the axiom of choice). If is the right inverse of , then is surjective. For all , there is such that . If is surjective, has a right inverse , which can be constructed as follows: for all , there is at least one such that (because is surjective), so we choose one to be the value of . Two-sided inverses An inverse that is both a left and right inverse (a two-sided inverse), if it exists, must be unique. In fact, if a function has a left inverse and a right inverse, they are both the same two-sided inverse, so it can be called the inverse. If is a left inverse and a right inverse of , for all , . A function has a two-sided inverse if and only if it is bijective. A bijective function is injective, so it has a left inverse (if is the empty function, is its own left inverse). is surjective, so it has a right inverse. By the above, the left and right inverse are the same. If has a two-sided inverse , then is a left inverse and right inverse of , so is injective and surjective. Preimages If is any function (not necessarily invertible), the preimage (or inverse image) of an element is defined to be the set of all elements of that map to : The preimage of can be thought of as the image of under the (multivalued) full inverse of the function . The notion can be generalized to subsets of the range. Specifically, if is any subset of , the preimage of , denoted by , is the set of all elements of that map to : For example, take the function . This function is not invertible as it is not bijective, but preimages may be defined for subsets of the codomain, e.g. . The original notion and its generalization are related by the identity The preimage of a single element – a singleton set – is sometimes called the fiber of . When is the set of real numbers, it is common to refer to as a level set. See also Lagrange inversion theorem, gives the Taylor series expansion of the inverse function of an analytic function Integral of inverse functions Inverse Fourier transform Reversible computing Notes References Bibliography Further reading External links Basic concepts in set theory Unary operations
Inverse function
[ "Mathematics" ]
3,011
[ "Functions and mappings", "Unary operations", "Mathematical objects", "Basic concepts in set theory", "Mathematical relations" ]
14,909
https://en.wikipedia.org/wiki/Inertia
Inertia is the natural tendency of objects in motion to stay in motion and objects at rest to stay at rest, unless a force causes the velocity to change. It is one of the fundamental principles in classical physics, and described by Isaac Newton in his first law of motion (also known as The Principle of Inertia). It is one of the primary manifestations of mass, one of the core quantitative properties of physical systems. Newton writes: In his 1687 work Philosophiæ Naturalis Principia Mathematica, Newton defined inertia as a property: History and development Early understanding of inertial motion Professor John H. Lienhard points out the Mozi – based on a Chinese text from the Warring States period (475–221 BCE) – as having given the first description of inertia. Before the European Renaissance, the prevailing theory of motion in western philosophy was that of Aristotle (384–322 BCE). On the surface of the Earth, the inertia property of physical objects is often masked by gravity and the effects of friction and air resistance, both of which tend to decrease the speed of moving objects (commonly to the point of rest). This misled the philosopher Aristotle to believe that objects would move only as long as force was applied to them. Aristotle said that all moving objects (on Earth) eventually come to rest unless an external power (force) continued to move them. Aristotle explained the continued motion of projectiles, after being separated from their projector, as an (itself unexplained) action of the surrounding medium continuing to move the projectile. Despite its general acceptance, Aristotle's concept of motion was disputed on several occasions by notable philosophers over nearly two millennia. For example, Lucretius (following, presumably, Epicurus) stated that the "default state" of the matter was motion, not stasis (stagnation). In the 6th century, John Philoponus criticized the inconsistency between Aristotle's discussion of projectiles, where the medium keeps projectiles going, and his discussion of the void, where the medium would hinder a body's motion. Philoponus proposed that motion was not maintained by the action of a surrounding medium, but by some property imparted to the object when it was set in motion. Although this was not the modern concept of inertia, for there was still the need for a power to keep a body in motion, it proved a fundamental step in that direction. This view was strongly opposed by Averroes and by many scholastic philosophers who supported Aristotle. However, this view did not go unchallenged in the Islamic world, where Philoponus had several supporters who further developed his ideas. In the 11th century, Persian polymath Ibn Sina (Avicenna) claimed that a projectile in a vacuum would not stop unless acted upon. Theory of impetus In the 14th century, Jean Buridan rejected the notion that a motion-generating property, which he named impetus, dissipated spontaneously. Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus increased with speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. 
Despite the obvious similarities to more modern ideas of inertia, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also believed that impetus could be not only linear but also circular in nature, causing objects (such as celestial bodies) to move in a circle. Buridan's theory was followed up by his pupil Albert of Saxony (1316–1390) and the Oxford Calculators, who performed various experiments which further undermined the Aristotelian model. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of illustrating the laws of motion with graphs. Shortly before Galileo's theory of inertia, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone: Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion. Classical inertia According to science historian Charles Coulston Gillispie, inertia "entered science as a physical consequence of Descartes' geometrization of space-matter, combined with the immutability of God." The first physicist to completely break away from the Aristotelian model of motion was Isaac Beeckman in 1614. The term "inertia" was first introduced by Johannes Kepler in his Epitome Astronomiae Copernicanae (published in three parts from 1617 to 1621). However, the meaning of Kepler's term, which he derived from the Latin word for "idleness" or "laziness", was not quite the same as its modern interpretation. Kepler defined inertia only in terms of resistance to movement, once again based on the axiomatic assumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton unified rest and motion in one principle that the term "inertia" could be applied to those concepts as it is today. The principle of inertia, as formulated by Aristotle for "motions in a void", includes that a mundane object tends to resist a change in motion. The Aristotelian division of motion into mundane and celestial became increasingly problematic in the face of the conclusions of Nicolaus Copernicus in the 16th century, who argued that the Earth is never at rest, but is actually in constant motion around the Sun. Galileo, in his further development of the Copernican model, recognized these problems with the then-accepted nature of motion and, at least partially, as a result, included a restatement of Aristotle's description of motion in a void as a basic physical principle: A body moving on a level surface will continue in the same direction at a constant speed unless disturbed. Galileo writes that "all external impediments removed, a heavy body on a spherical surface concentric with the earth will maintain itself in that state in which it has been; if placed in a movement towards the west (for example), it will maintain itself in that movement." This notion, which is termed "circular inertia" or "horizontal circular inertia" by historians of science, is a precursor to, but is distinct from, Newton's notion of rectilinear inertia. For Galileo, a motion is "horizontal" if it does not carry the moving body towards or away from the center of the Earth, and for him, "a ship, for instance, having once received some impetus through the tranquil sea, would move continually around our globe without ever stopping." 
It is also worth noting that Galileo later (in 1632) concluded that based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against. This observation ultimately came to be the basis for Albert Einstein to develop the theory of special relativity. Concepts of inertia in Galileo's writings would later come to be refined, modified, and codified by Isaac Newton as the first of his laws of motion (first published in Newton's work, Philosophiæ Naturalis Principia Mathematica, in 1687): Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon. Despite having defined the concept in his laws of motion, Newton did not actually use the term "inertia.” In fact, he originally viewed the respective phenomena as being caused by "innate forces" inherent in matter which resist any acceleration. Given this perspective, and borrowing from Kepler, Newton conceived of "inertia" as "the innate force possessed by an object which resists changes in motion", thus defining "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself. However, Newton's original ideas of "innate resistive force" were ultimately problematic for a variety of reasons, and thus most physicists no longer think in these terms. As no alternate mechanism has been readily accepted, and it is now generally accepted that there may not be one that we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, ultimately, "inertia" in modern classical physics has come to be a name for the same phenomenon as described by Newton's first law of motion, and the two concepts are now considered to be equivalent. Relativity Albert Einstein's theory of special relativity, as proposed in his 1905 paper entitled "On the Electrodynamics of Moving Bodies", was built on the understanding of inertial reference frames developed by Galileo, Huygens and Newton. While this revolutionary theory did significantly change the meaning of many Newtonian concepts such as mass, energy, and distance, Einstein's concept of inertia remained at first unchanged from Newton's original meaning. However, this resulted in a limitation inherent in special relativity: the principle of relativity could only apply to inertial reference frames. To address this limitation, Einstein developed his general theory of relativity ("The Foundation of the General Theory of Relativity", 1916), which provided a theory including noninertial (accelerated) reference frames. In general relativity, the concept of inertial motion got a broader meaning. Taking into account general relativity, inertial motion is any movement of a body that is not affected by forces of electrical, magnetic, or other origin, but that is only under the influence of gravitational masses. Physically speaking, this happens to be exactly what a properly functioning three-axis accelerometer is indicating when it does not detect any proper acceleration. Etymology The term inertia comes from the Latin word iners, meaning idle or sluggish. Rotational inertia A quantity related to inertia is rotational inertia (→ moment of inertia), the property that a rotating rigid body maintains its state of uniform rotational motion. Its angular momentum remains unchanged unless an external torque is applied; this is called conservation of angular momentum. 
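As a numerical illustration of conservation of angular momentum, the short sketch below uses made-up values (they are not drawn from the article) to show that reducing the moment of inertia raises the angular velocity so that the product stays constant.

```python
# Conservation of angular momentum, L = I * omega: with no external torque,
# halving the moment of inertia doubles the angular velocity (the classic
# spinning-skater effect). All numbers are illustrative.
I1, omega1 = 4.0, 2.0   # initial moment of inertia (kg*m^2) and angular velocity (rad/s)
L = I1 * omega1         # angular momentum: 8.0 kg*m^2/s

I2 = 2.0                # moment of inertia halves (arms pulled in)
omega2 = L / I2         # angular velocity doubles to 4.0 rad/s
assert I2 * omega2 == L
```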
Rotational inertia is often considered in relation to a rigid body. For example, a gyroscope uses the property that it resists any change in the axis of rotation. See also Flywheel energy storage devices which may also be known as an Inertia battery General relativity Vertical and horizontal Inertial navigation system Inertial response of synchronous generators in an electrical grid Kinetic energy List of moments of inertia Mach's principle Newton's laws of motion Classical mechanics Special relativity Parallel axis theorem References Further reading Butterfield, H (1957), The Origins of Modern Science, . Clement, J (1982), "Students' preconceptions in introductory mechanics", American Journal of Physics vol 50, pp 66–71 Crombie, A C (1959), Medieval and Early Modern Science, vol. 2. McCloskey, M (1983), "Intuitive physics", Scientific American, April, pp. 114–123. McCloskey, M & Carmazza, A (1980), "Curvilinear motion in the absence of external forces: naïve beliefs about the motion of objects", Science vol. 210, pp. 1139–1141. External links Why Does the Earth Spin? (YouTube) Classical mechanics Gyroscopes Mass Velocity Articles containing video clips
Inertia
[ "Physics", "Mathematics" ]
2,446
[ "Scalar physical quantities", "Physical phenomena", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Size", "Motion (physics)", "Mechanics", "Vector physical quantities", "Velocity", "Wikipedia categories named after physical quantities", "Matter" ]
14,914
https://en.wikipedia.org/wiki/Industrial%20Revolution
The Industrial Revolution, sometimes divided into the First Industrial Revolution and Second Industrial Revolution, was a period of global transition of the human economy towards more widespread, efficient and stable manufacturing processes that succeeded the Agricultural Revolution. Beginning in Great Britain, the Industrial Revolution spread to continental Europe and the United States, from around 1760 to about 1820–1840. This transition included going from hand production methods to machines; new chemical manufacturing and iron production processes; the increasing use of water power and steam power; the development of machine tools; and the rise of the mechanised factory system. Output greatly increased, and the result was an unprecedented rise in population and the rate of population growth. The textile industry was the first to use modern production methods, and textiles became the dominant industry in terms of employment, value of output, and capital invested. Many of the technological and architectural innovations were of British origin. By the mid-18th century, Britain was the world's leading commercial nation, controlling a global trading empire with colonies in North America and the Caribbean. Britain had major military and political hegemony on the Indian subcontinent; particularly with the proto-industrialised Mughal Bengal, which underwent the de-industrialisation of India through the activities of the East India Company. The development of trade and the rise of business were among the major causes of the Industrial Revolution. Developments in law also facilitated the revolution, such as courts ruling in favour of property rights. An entrepreneurial spirit and consumer revolution helped drive industrialisation in Britain, which after 1800, was emulated in Belgium, the United States, and France. The Industrial Revolution marked a major turning point in history, comparable only to humanity's adoption of agriculture with respect to material advancement. The Industrial Revolution influenced in some way almost every aspect of daily life. In particular, average income and population began to exhibit unprecedented sustained growth. Some economists have said the most important effect of the Industrial Revolution was that the standard of living for the general population in the Western world began to increase consistently for the first time in history, although others have said that it did not begin to improve meaningfully until the late 19th and 20th centuries. GDP per capita was broadly stable before the Industrial Revolution and the emergence of the modern capitalist economy, while the Industrial Revolution began an era of per-capita economic growth in capitalist economies. Economic historians agree that the onset of the Industrial Revolution is the most important event in human history since the domestication of animals and plants. The precise start and end of the Industrial Revolution is still debated among historians, as is the pace of economic and social changes. According to Cambridge historian Leigh Shaw-Taylor, Britain was already industrialising in the 17th century, and "Our database shows that a groundswell of enterprise and productivity transformed the economy in the 17th century, laying the foundations for the world's first industrial economy. Britain was already a nation of makers by the year 1700" and "the history of Britain needs to be rewritten". 
Eric Hobsbawm held that the Industrial Revolution began in Britain in the 1780s and was not fully felt until the 1830s or 1840s, while T. S. Ashton held that it occurred roughly between 1760 and 1830. Rapid adoption of mechanized textile spinning occurred in Britain in the 1780s, and high rates of growth in steam power and iron production occurred after 1800. Mechanised textile production spread from Great Britain to continental Europe and the United States in the early 19th century, with important centres of textiles, iron and coal emerging in Belgium and the United States, and later textiles in France. An economic recession occurred from the late 1830s to the early 1840s, when the adoption of the Industrial Revolution's early innovations, such as mechanised spinning and weaving, slowed as their markets matured, despite the increasing adoption of locomotives, steamboats and steamships, and hot blast iron smelting. New technologies such as the electrical telegraph, widely introduced in the 1840s and 1850s in the United Kingdom and the United States, were not powerful enough to drive high rates of economic growth. Rapid economic growth began to recur after 1870, springing from a new group of innovations in what has been called the Second Industrial Revolution. These included new steel-making processes, mass production, assembly lines, electrical grid systems, the large-scale manufacture of machine tools, and the use of increasingly advanced machinery in steam-powered factories. Etymology The earliest recorded use of the term "Industrial Revolution" was in July 1799 by French envoy Louis-Guillaume Otto, announcing that France had entered the race to industrialise. In his 1976 book Keywords: A Vocabulary of Culture and Society, Raymond Williams states in the entry for "Industry": "The idea of a new social order based on major industrial change was clear in Southey and Owen, between 1811 and 1818, and was implicit as early as Blake in the early 1790s and Wordsworth at the turn of the [19th] century." The term Industrial Revolution applied to technological change was becoming more common by the late 1830s, as in Jérôme-Adolphe Blanqui's description in 1837 of la révolution industrielle. Friedrich Engels in The Condition of the Working Class in England in 1844 spoke of "an industrial revolution, a revolution which at the same time changed the whole of civil society". Although Engels wrote his book in the 1840s, it was not translated into English until the late 19th century, and his expression did not enter everyday language until then. Credit for popularising the term may be given to Arnold Toynbee, whose 1881 lectures gave a detailed account of the term. Economic historians and authors such as Mendels, Pomeranz, and Kriedte argue that proto-industrialisation in parts of Europe, the Muslim world, Mughal India, and China created the social and economic conditions that led to the Industrial Revolution, thus causing the Great Divergence. Some historians, such as John Clapham and Nicholas Crafts, have argued that the economic and social changes occurred gradually and that the term revolution is a misnomer. This is still a subject of debate among some historians. 
Requirements Six factors facilitated industrialisation: high levels of agricultural productivity, such as that reflected in the British Agricultural Revolution, to provide excess manpower and food; a pool of managerial and entrepreneurial skills; available ports, rivers, canals, and roads to cheaply move raw materials and outputs; natural resources such as coal, iron, and waterfalls; political stability and a legal system that supported business; and financial capital available to invest. Once industrialisation began in Great Britain, new factors can be added: the eagerness of British entrepreneurs to export industrial expertise and the willingness to import the process. Britain met the criteria and industrialized starting in the 18th century, and then it exported the process to western Europe (especially Belgium, France, and the German states) in the early 19th century. The United States copied the British model in the early 19th century, and Japan copied the Western European models in the late 19th century. Important technological developments The commencement of the Industrial Revolution is closely linked to a small number of innovations, beginning in the second half of the 18th century. By the 1830s, the following gains had been made in important technologies: Textiles – mechanised cotton spinning powered by water, and later steam, increased the output of a worker by a factor of around 500. The power loom increased the output of a worker by a factor of over 40. The cotton gin increased productivity of removing seed from cotton by a factor of 50. Large gains in productivity also occurred in spinning and weaving of wool and linen, but they were not as great as in cotton. Steam power – the efficiency of steam engines increased so that they used between one-fifth and one-tenth as much fuel. The adaptation of stationary steam engines to rotary motion made them suitable for industrial uses. The high-pressure engine had a high power-to-weight ratio, making it suitable for transportation. Steam power underwent a rapid expansion after 1800. Iron making – the substitution of coke for charcoal greatly lowered the fuel cost of pig iron and wrought iron production. Using coke also allowed larger blast furnaces, resulting in economies of scale. The steam engine began being used to power blast air (indirectly by pumping water to a water wheel) in the 1750s, enabling a large increase in iron production by overcoming the limitation of water power. The cast iron blowing cylinder was first used in 1760. It was later improved by making it double acting, which allowed higher blast furnace temperatures. The puddling process produced a structural grade iron at a lower cost than the finery forge. The rolling mill was fifteen times faster than hammering wrought iron. Developed in 1828, hot blast greatly increased fuel efficiency in iron production in the following decades. Invention of machine tools – the first machine tools invented were the screw-cutting lathe, the cylinder boring machine, and the milling machine. Machine tools made the economical manufacture of precision metal parts possible, although it took several decades to develop effective techniques for making interchangeable parts. Textile manufacture British textile industry statistics In 1750, Britain imported 2.5 million pounds of raw cotton, most of which was spun and woven by the cottage industry in Lancashire. The work was done by hand in workers' homes or occasionally in master weavers' shops. 
Wages in Lancashire were about six times those in India in 1770 when overall productivity in Britain was about three times higher than in India. In 1787, raw cotton consumption was 22 million pounds, most of which was cleaned, carded, and spun on machines. The British textile industry used 52 million pounds of cotton in 1800, which increased to 588 million pounds in 1850. The share of value added by the cotton textile industry in Britain was 2.6% in 1760, 17% in 1801, and 22.4% in 1831. Value added by the British woollen industry was 14.1% in 1801. Cotton factories in Britain numbered approximately 900 in 1797. In 1760, approximately one-third of cotton cloth manufactured in Britain was exported, rising to two-thirds by 1800. In 1781, cotton spun amounted to 5.1 million pounds, which increased to 56 million pounds by 1800. In 1800, less than 0.1% of world cotton cloth was produced on machinery invented in Britain. In 1788, there were 50,000 spindles in Britain, rising to 7 million over the next 30 years. Wool The earliest European attempts at mechanised spinning were with wool; however, wool spinning proved more difficult to mechanise than cotton. Productivity improvement in wool spinning during the Industrial Revolution was significant but far less than that of cotton. Silk Arguably the first highly mechanised factory was John Lombe's water-powered silk mill at Derby, operational by 1721. Lombe learned silk thread manufacturing by taking a job in Italy and acting as an industrial spy; however, because the Italian silk industry guarded its secrets closely, the state of the industry at that time is unknown. Although Lombe's factory was technically successful, the supply of raw silk from Italy was cut off to eliminate competition. In order to promote manufacturing, the Crown paid for models of Lombe's machinery which were exhibited in the Tower of London. Cotton Parts of India, China, Central America, South America, and the Middle East have a long history of hand manufacturing cotton textiles, which became a major industry sometime after 1000 AD. In tropical and subtropical regions where it was grown, most was grown by small farmers alongside their food crops and was spun and woven in households, largely for domestic consumption. In the 15th century, China began to require households to pay part of their taxes in cotton cloth. By the 17th century, almost all Chinese wore cotton clothing. Almost everywhere cotton cloth could be used as a medium of exchange. In India, a significant amount of cotton textiles were manufactured for distant markets, often produced by professional weavers. Some merchants also owned small weaving workshops. India produced a variety of cotton cloth, some of exceptionally fine quality. Cotton was a difficult raw material for Europe to obtain before it was grown on colonial plantations in the Americas. The early Spanish explorers found Native Americans growing unknown species of excellent quality cotton: sea island cotton (Gossypium barbadense) and upland green seeded cotton Gossypium hirsutum. Sea island cotton grew in tropical areas and on barrier islands of Georgia and South Carolina but did poorly inland. Sea island cotton began being exported from Barbados in the 1650s. Upland green seeded cotton grew well on inland areas of the southern U.S. but was not economical because of the difficulty of removing seed, a problem solved by the cotton gin. 
A strain of cotton seed brought from Mexico to Natchez, Mississippi, in 1806 became the parent genetic material for over 90% of world cotton production today; it produced bolls that were three to four times faster to pick. Trade and textiles The Age of Discovery was followed by a period of colonialism beginning around the 16th century. Following the discovery of a trade route to India around southern Africa by the Portuguese, the British founded the East India Company, along with smaller companies of different nationalities which established trading posts and employed agents to engage in trade throughout the Indian Ocean region. One of the largest segments of this trade was in cotton textiles, which were purchased in India and sold in Southeast Asia, including the Indonesian archipelago where spices were purchased for sale to Southeast Asia and Europe. By the mid-1760s, cloth was over three-quarters of the East India Company's exports. Indian textiles were in demand in the North Atlantic region of Europe where previously only wool and linen were available; however, the number of cotton goods consumed in Western Europe was minor until the early 19th century. Pre-mechanized European textile production By 1600, Flemish refugees began weaving cotton cloth in English towns where cottage spinning and weaving of wool and linen was well established. They were left alone by the guilds who did not consider cotton a threat. Earlier European attempts at cotton spinning and weaving were in 12th-century Italy and 15th-century southern Germany, but these industries eventually ended when the supply of cotton was cut off. The Moors in Spain grew, spun, and wove cotton beginning around the 10th century. British cloth could not compete with Indian cloth because India's labour cost was approximately one-fifth to one-sixth that of Britain's. In 1700 and 1721, the British government passed Calico Acts to protect the domestic woollen and linen industries from the increasing amounts of cotton fabric imported from India. The demand for heavier fabric was met by a domestic industry based around Lancashire that produced fustian, a cloth with flax warp and cotton weft. Flax was used for the warp because wheel-spun cotton did not have sufficient strength, but the resulting blend was not as soft as 100% cotton and was more difficult to sew. On the eve of the Industrial Revolution, spinning and weaving were done in households, for domestic consumption, and as a cottage industry under the putting-out system. Occasionally, the work was done in the workshop of a master weaver. Under the putting-out system, home-based workers produced under contract to merchant sellers, who often supplied the raw materials. In the off-season, the women, typically farmers' wives, did the spinning and the men did the weaving. Using the spinning wheel, it took anywhere from four to eight spinners to supply one handloom weaver. Invention of textile machinery The flying shuttle, patented in 1733 by John Kay—with a number of subsequent improvements including an important one in 1747—doubled the output of a weaver, worsening the imbalance between spinning and weaving. It became widely used around Lancashire after 1760 when John's son, Robert, invented the dropbox, which facilitated changing thread colors. Lewis Paul patented the roller spinning frame and the flyer-and-bobbin system for drawing wool to a more even thickness. The technology was developed with the help of John Wyatt of Birmingham. 
Paul and Wyatt opened a mill in Birmingham which used their rolling machine powered by a donkey. In 1743, a factory opened in Northampton with 50 spindles on each of five of Paul and Wyatt's machines. This operated until about 1764. A similar mill was built by Daniel Bourn in Leominster, but this burnt down. Both Lewis Paul and Daniel Bourn patented carding machines in 1748. Based on two sets of rollers that travelled at different speeds, this design was later used in the first cotton spinning mill. In 1764, in the village of Stanhill, Lancashire, James Hargreaves invented the spinning jenny, which he patented in 1770. It was the first practical spinning frame with multiple spindles. The jenny worked in a similar manner to the spinning wheel, by first clamping down on the fibres, then by drawing them out, followed by twisting. It was a simple, wooden-framed machine that cost only about £6 for a 40-spindle model in 1792 and was used mainly by home spinners. The jenny produced a lightly twisted yarn only suitable for weft, not warp. The spinning frame or water frame was developed by Richard Arkwright who, along with two partners, patented it in 1769. The design was partly based on a spinning machine built by Kay, who was hired by Arkwright. For each spindle the water frame used a series of four pairs of rollers, each operating at a successively higher rotating speed, to draw out the fibre, which was then twisted by the spindle. The roller spacing was slightly longer than the fibre length. Too close a spacing caused the fibres to break, while too distant a spacing caused uneven thread. The top rollers were leather-covered, and loading on the rollers was applied by a weight. The weights kept the twist from backing up before the rollers. The bottom rollers were wood and metal, with fluting along the length. The water frame was able to produce a hard, medium-count thread suitable for warp, finally allowing 100% cotton cloth to be made in Britain. Arkwright and his partners used water power at a factory in Cromford, Derbyshire in 1771, giving the invention its name. Samuel Crompton invented the spinning mule in 1779, so called because it is a hybrid of Arkwright's water frame and James Hargreaves's spinning jenny in the same way that a mule is the product of crossbreeding a female horse with a male donkey. Crompton's mule was able to produce finer thread than hand spinning and at a lower cost. Mule-spun thread was of suitable strength to be used as a warp and finally allowed Britain to produce highly competitive yarn in large quantities. Realising that the expiration of the Arkwright patent would greatly increase the supply of spun cotton and lead to a shortage of weavers, Edmund Cartwright developed a vertical power loom which he patented in 1785. He subsequently patented a two-man operated loom. Cartwright's loom design had several flaws, the most serious being thread breakage. Samuel Horrocks patented a fairly successful loom in 1813. Horrocks's loom was improved by Richard Roberts in 1822, and these were produced in large numbers by Roberts, Hill & Co. Roberts was additionally a maker of high-quality machine tools and a pioneer in the use of jigs and gauges for precision workshop measurement. The demand for cotton presented an opportunity to planters in the Southern United States, who thought upland cotton would be a profitable crop if a better way could be found to remove the seed. Eli Whitney responded to the challenge by inventing the inexpensive cotton gin.
A man using a cotton gin could remove seed from as much upland cotton in one day as would previously have taken two months to process, working at the rate of one pound of cotton per day. These advances were capitalised on by entrepreneurs, of whom the best known is Arkwright. He is credited with a list of inventions, but these were actually developed by such people as Kay and Thomas Highs; Arkwright nurtured the inventors, patented the ideas, financed the initiatives, and protected the machines. He created the cotton mill which brought the production processes together in a factory, and he developed the use of power (first horse power and then water power), which made cotton manufacture a mechanised industry. Other inventors increased the efficiency of the individual steps of spinning (carding, twisting and spinning, and rolling) so that the supply of yarn increased greatly. Steam power was then applied to drive textile machinery. Manchester acquired the nickname Cottonopolis during the early 19th century owing to its sprawl of textile factories. Although mechanisation dramatically decreased the cost of cotton cloth, by the mid-19th century machine-woven cloth still could not equal the quality of hand-woven Indian cloth, in part because of the fineness of thread made possible by the type of cotton used in India, which allowed high thread counts. However, the high productivity of British textile manufacturing allowed coarser grades of British cloth to undersell hand-spun and woven fabric in low-wage India, eventually destroying the Indian industry.
Iron industry
British iron production statistics
Bar iron was the commodity form of iron used as the raw material for making hardware goods such as nails, wire, hinges, horseshoes, wagon tires, chains, etc., as well as structural shapes. A small amount of bar iron was converted into steel. Cast iron was used for pots, stoves, and other items where its brittleness was tolerable. Most cast iron was refined and converted to bar iron, with substantial losses. Bar iron was made by the bloomery process, which was the predominant iron smelting process until the late 18th century. In the UK in 1720, there were 20,500 tons of cast iron produced with charcoal and 400 tons with coke. In 1750, charcoal iron production was 24,500 tons and coke iron 2,500 tons. In 1788, the production of charcoal cast iron was 14,000 tons while coke iron production was 54,000 tons. In 1806, charcoal cast iron production was 7,800 tons and coke cast iron was 250,000 tons. In 1750, the UK imported 31,200 tons of bar iron and either refined from cast iron or directly produced 18,800 tons of bar iron using charcoal and 100 tons using coke. In 1796, the UK was making 125,000 tons of bar iron with coke and 6,400 tons with charcoal; imports were 38,000 tons and exports were 24,600 tons. In 1806 the UK did not import bar iron but exported 31,500 tons.
Iron process innovations
A major change in the iron industries during the Industrial Revolution was the replacement of wood and other bio-fuels with coal; for a given amount of heat, mining coal required much less labour than cutting wood and converting it to charcoal, and coal was much more abundant than wood, supplies of which were becoming scarce before the enormous increase in iron production that took place in the late 18th century. In 1709, Abraham Darby made progress using coke to fuel his blast furnaces at Coalbrookdale.
However, the coke pig iron he made was not suitable for making wrought iron and was used mostly for the production of cast iron goods, such as pots and kettles. He had the advantage over his rivals in that his pots, cast by his patented process, were thinner and cheaper than theirs. In 1750, coke had generally replaced charcoal in the smelting of copper and lead and was in widespread use in glass production. In the smelting and refining of iron, coal and coke produced inferior iron to that made with charcoal because of the coal's sulfur content. Low sulfur coals were known, but they still contained harmful amounts. Conversion of coal to coke only slightly reduces the sulfur content. A minority of coals are coking. Another factor limiting the iron industry before the Industrial Revolution was the scarcity of water power to power blast bellows. This limitation was overcome by the steam engine. Use of coal in iron smelting started somewhat before the Industrial Revolution, based on innovations by Clement Clerke and others from 1678, using coal reverberatory furnaces known as cupolas. These were operated by the flames playing on the ore and charcoal or coke mixture, reducing the oxide to metal. This has the advantage that impurities (such as sulphur ash) in the coal do not migrate into the metal. This technology was applied to lead from 1678 and to copper from 1687. It was also applied to iron foundry work in the 1690s, but in this case the reverberatory furnace was known as an air furnace. (The foundry cupola is a different, and later, innovation.) Coke pig iron was hardly used to produce wrought iron until 1755–56, when Darby's son Abraham Darby II built furnaces at Horsehay and Ketley where low sulfur coal was available (and not far from Coalbrookdale). These furnaces were equipped with water-powered bellows, the water being pumped by Newcomen steam engines. The Newcomen engines were not attached directly to the blowing cylinders because the engines alone could not produce a steady air blast. Abraham Darby III installed similar steam-pumped, water-powered blowing cylinders at the Dale Company when he took control in 1768. The Dale Company used several Newcomen engines to drain its mines and made parts for engines which it sold throughout the country. Steam engines made the use of higher-pressure and volume blast practical; however, the leather used in bellows was expensive to replace. In 1757, ironmaster John Wilkinson patented a hydraulic powered blowing engine for blast furnaces. The blowing cylinder for blast furnaces was introduced in 1760 and the first blowing cylinder made of cast iron is believed to be the one used at Carrington in 1768 that was designed by John Smeaton. Cast iron cylinders for use with a piston were difficult to manufacture; the cylinders had to be free of holes and had to be machined smooth and straight to remove any warping. James Watt had great difficulty trying to have a cylinder made for his first steam engine. In 1774 Wilkinson invented a precision boring machine for boring cylinders. After Wilkinson bored the first successful cylinder for a Boulton and Watt steam engine in 1776, he was given an exclusive contract for providing cylinders. After Watt developed a rotary steam engine in 1782, they were widely applied to blowing, hammering, rolling and slitting. The solutions to the sulfur problem were the addition of sufficient limestone to the furnace to force sulfur into the slag as well as the use of low sulfur coal. 
The use of lime or limestone required higher furnace temperatures to form a free-flowing slag. The increased furnace temperature made possible by improved blowing also increased the capacity of blast furnaces and allowed for increased furnace height. In addition to lower cost and greater availability, coke had other important advantages over charcoal in that it was harder and made the column of materials (iron ore, fuel, slag) flowing down the blast furnace more porous and did not crush in the much taller furnaces of the late 19th century. As cast iron became cheaper and widely available, it began being a structural material for bridges and buildings. A famous early example is the Iron Bridge built in 1778 with cast iron produced by Abraham Darby III. However, most cast iron was converted to wrought iron. Conversion of cast iron had long been done in a finery forge. An improved refining process known as potting and stamping was developed, but this was superseded by Henry Cort's puddling process. Cort developed two significant iron manufacturing processes: rolling in 1783 and puddling in 1784. Puddling produced a structural grade iron at a relatively low cost. Puddling was a means of decarburizing molten pig iron by slow oxidation in a reverberatory furnace by manually stirring it with a long rod. The decarburized iron, having a higher melting point than cast iron, was raked into globs by the puddler. When the glob was large enough, the puddler would remove it. Puddling was backbreaking and extremely hot work. Few puddlers lived to be 40. Because puddling was done in a reverberatory furnace, coal or coke could be used as fuel. The puddling process continued to be used until the late 19th century when iron was being displaced by mild steel. Because puddling required human skill in sensing the iron globs, it was never successfully mechanised. Rolling was an important part of the puddling process because the grooved rollers expelled most of the molten slag and consolidated the mass of hot wrought iron. Rolling was 15 times faster at this than a trip hammer. A different use of rolling, which was done at lower temperatures than that for expelling slag, was in the production of iron sheets, and later structural shapes such as beams, angles, and rails. The puddling process was improved in 1818 by Baldwyn Rogers, who replaced some of the sand lining on the reverberatory furnace bottom with iron oxide. In 1838 John Hall patented the use of roasted tap cinder (iron silicate) for the furnace bottom, greatly reducing the loss of iron through increased slag caused by a sand lined bottom. The tap cinder also tied up some phosphorus, but this was not understood at the time. Hall's process also used iron scale or rust which reacted with carbon in the molten iron. Hall's process, called wet puddling, reduced losses of iron with the slag from almost 50% to around 8%. Puddling became widely used after 1800. Up to that time, British iron manufacturers had used considerable amounts of iron imported from Sweden and Russia to supplement domestic supplies. Because of the increased British production, imports began to decline in 1785, and by the 1790s Britain eliminated imports and became a net exporter of bar iron. Hot blast, patented by the Scottish inventor James Beaumont Neilson in 1828, was the most important development of the 19th century for saving energy in making pig iron. 
By using preheated combustion air, the amount of fuel needed to make a unit of pig iron was reduced at first by one-third when using coke, or by two-thirds when using coal; the efficiency gains continued as the technology improved. Hot blast also raised the operating temperature of furnaces, increasing their capacity. Using less coal or coke meant introducing fewer impurities into the pig iron. This meant that lower quality coal could be used in areas where coking coal was unavailable or too expensive; however, by the end of the 19th century transportation costs fell considerably. Shortly before the Industrial Revolution, an improvement was made in the production of steel, which was an expensive commodity and used only where iron would not do, such as for cutting edge tools and for springs. Benjamin Huntsman developed his crucible steel technique in the 1740s. The raw material for this was blister steel, made by the cementation process. The supply of cheaper iron and steel aided a number of industries, such as those making nails, hinges, wire, and other hardware items. The development of machine tools allowed better working of iron, causing it to be increasingly used in the rapidly growing machinery and engine industries.
Steam power
The development of the stationary steam engine was an important element of the Industrial Revolution; however, during the early period of the Industrial Revolution, most industrial power was supplied by water and wind. In Britain, by 1800 an estimated 10,000 horsepower was being supplied by steam. By 1815 steam power had grown to 210,000 hp. The first commercially successful industrial use of steam power was patented by Thomas Savery in 1698. He constructed in London a low-lift combined vacuum and pressure water pump that generated about one horsepower (hp) and was used in numerous waterworks and in a few mines (hence its "brand name", The Miner's Friend). Savery's pump was economical in small horsepower ranges but was prone to boiler explosions in larger sizes. Savery pumps continued to be produced until the late 18th century. The first successful piston steam engine was introduced by Thomas Newcomen before 1712. Newcomen engines were installed for draining hitherto unworkable deep mines, with the engine on the surface; these were large machines, requiring a significant amount of capital to build. They were also used to power municipal water supply pumps. They were extremely inefficient by modern standards, but when located where coal was cheap at pit heads, they opened up a great expansion in coal mining by allowing mines to go deeper. Despite their disadvantages, Newcomen engines were reliable and easy to maintain and continued to be used in the coalfields until the early decades of the 19th century. By the time Newcomen died in 1729, his engines had spread abroad, first to Hungary in 1722 and then to Germany, Austria, and Sweden. A total of 110 are known to have been built by 1733 when the joint patent expired, of which 14 were abroad. In the 1770s the engineer John Smeaton built some very large examples and introduced a number of improvements. A total of 1,454 engines had been built by 1800. A fundamental change in working principles was brought about by the Scotsman James Watt.
With financial support from his business partner, the Englishman Matthew Boulton, he had succeeded by 1778 in perfecting his steam engine, which incorporated a series of radical improvements, notably the closing off of the upper part of the cylinder, thereby making the low-pressure steam drive the top of the piston instead of the atmosphere; the use of a steam jacket; and the celebrated separate steam condenser chamber. The separate condenser did away with the cooling water that had been injected directly into the cylinder, which had cooled the cylinder and wasted steam. Likewise, the steam jacket kept steam from condensing in the cylinder, also improving efficiency. These improvements increased engine efficiency so that Boulton and Watt's engines used only 20–25% as much coal per horsepower-hour as Newcomen's. Boulton and Watt opened the Soho Foundry for the manufacture of such engines in 1795. By 1783, the Watt steam engine had been fully developed into a double-acting rotative type, which meant that it could be used to directly drive the rotary machinery of a factory or mill. Both of Watt's basic engine types were commercially very successful, and by 1800 the firm Boulton & Watt had constructed 496 engines, with 164 driving reciprocating pumps, 24 serving blast furnaces, and 308 powering mill machinery. Until about 1800, the most common pattern of steam engine was the beam engine, built as an integral part of a stone or brick engine-house, but soon various patterns of self-contained rotative engines (readily removable but not on wheels) were developed, such as the table engine. Around the start of the 19th century, at which time the Boulton and Watt patent expired, the Cornish engineer Richard Trevithick and the American Oliver Evans began to construct higher-pressure non-condensing steam engines, exhausting against the atmosphere. High pressure yielded an engine and boiler compact enough to be used on mobile road and rail locomotives and steamboats. Small industrial power requirements continued to be provided by animal and human muscle until widespread electrification in the early 20th century. These included crank-powered, treadle-powered, and horse-powered workshop and light industrial machinery.
Machine tools
Pre-industrial machinery was built by various craftsmen: millwrights built watermills and windmills; carpenters made wooden framing; and smiths and turners made metal parts. Wooden components had the disadvantage of changing dimensions with temperature and humidity, and the various joints tended to rack (work loose) over time. As the Industrial Revolution progressed, machines with metal parts and frames became more common. Other important uses of metal parts were in firearms and threaded fasteners, such as machine screws, bolts, and nuts. There was also the need for precision in making parts. Precision would allow better working machinery, interchangeability of parts, and standardization of threaded fasteners. The demand for metal parts led to the development of several machine tools. They have their origins in the tools developed in the 18th century by makers of clocks and watches and scientific instrument makers to enable them to batch-produce small mechanisms. Before the advent of machine tools, metal was worked manually using the basic hand tools of hammers, files, scrapers, saws, and chisels. Consequently, the use of metal machine parts was kept to a minimum. Hand methods of production were laborious and costly, and precision was difficult to achieve.
The first large precision machine tool was the cylinder boring machine invented by John Wilkinson in 1774. It was designed to bore the large cylinders on early steam engines. Wilkinson's machine was the first to use the principle of line-boring, where the tool is supported on both ends, unlike earlier designs used for boring cannon that relied on a less stable cantilevered boring bar. The planing machine, the milling machine, and the shaping machine were developed in the early decades of the 19th century. Although the milling machine was invented at this time, it was not developed as a serious workshop tool until somewhat later in the 19th century. James Fox of Derby and Matthew Murray of Leeds were manufacturers of machine tools who found success in exporting from England and are also notable for having developed the planer around the same time as Richard Roberts of Manchester. Henry Maudslay, who trained a school of machine tool makers early in the 19th century, was a mechanic with superior ability who had been employed at the Royal Arsenal, Woolwich. He worked as an apprentice at the Royal Arsenal under Jan Verbruggen. In 1774 Verbruggen had installed a horizontal boring machine which was the first industrial-size lathe in the UK. Maudslay was hired away by Joseph Bramah for the production of high-security metal locks that required precision craftsmanship. Bramah patented a lathe that had similarities to the slide rest lathe. Maudslay perfected the slide rest lathe, which could cut machine screws of different thread pitches by using changeable gears between the spindle and the lead screw. Before its invention, screws could not be cut to any precision using the various earlier lathe designs, some of which worked by copying from a template. The slide rest lathe has been called one of history's most important inventions. Although it was not entirely Maudslay's idea, he was the first person to build a functional lathe using a combination of known innovations: the lead screw, the slide rest, and change gears. Maudslay left Bramah's employment and set up his own shop. He was engaged to build the machinery for making ships' pulley blocks for the Royal Navy in the Portsmouth Block Mills. These machines were all-metal and were the first machines for mass production and for making components with a degree of interchangeability. Maudslay adapted the lessons he had learned about the need for stability and precision to the development of machine tools, and in his workshops he trained a generation of men to build on his work, such as Richard Roberts, Joseph Clement, and Joseph Whitworth. The techniques for making mass-produced metal parts of sufficient precision to be interchangeable are largely attributed to a program of the U.S. Department of War which perfected interchangeable parts for firearms in the early 19th century. In the half-century following the invention of the fundamental machine tools, the machine industry became the largest industrial sector of the U.S. economy, by value added.
Chemicals
The large-scale production of chemicals was an important development during the Industrial Revolution. The first of these was the production of sulphuric acid by the lead chamber process invented by the Englishman John Roebuck (James Watt's first partner) in 1746. He was able to greatly increase the scale of the manufacture by replacing the relatively expensive glass vessels formerly used with larger, less expensive chambers made of riveted sheets of lead.
Instead of making a small amount each time, he was able to make far larger batches in each of the chambers, at least a tenfold increase. The production of an alkali on a large scale became an important goal as well, and Nicolas Leblanc succeeded in 1791 in introducing a method for the production of sodium carbonate (soda ash). The Leblanc process was a reaction of sulfuric acid with sodium chloride to give sodium sulfate and hydrochloric acid. The sodium sulfate was heated with calcium carbonate and coal to give a mixture of sodium carbonate and calcium sulfide. Adding water separated the soluble sodium carbonate from the calcium sulfide. The process produced a large amount of pollution (the hydrochloric acid was initially vented to the atmosphere, and calcium sulfide was a waste product). Nonetheless, this synthetic soda ash proved economical compared to that produced from burning specific plants (barilla or kelp), which were the previously dominant sources of soda ash, and also to potash (potassium carbonate) produced from hardwood ashes. These two chemicals were very important because they enabled the introduction of a host of other inventions, replacing many small-scale operations with more cost-effective and controllable processes. Sodium carbonate had many uses in the glass, textile, soap, and paper industries. Early uses for sulfuric acid included pickling (removing rust from) iron and steel and bleaching cloth. The development of bleaching powder (calcium hypochlorite) by the Scottish chemist Charles Tennant in about 1800, based on the discoveries of the French chemist Claude Louis Berthollet, revolutionised the bleaching processes in the textile industry by dramatically reducing the time required (from months to days) for the traditional process then in use, which required repeated exposure to the sun in bleach fields after soaking the textiles with alkali or sour milk. Tennant's factory at St Rollox, Glasgow, became the largest chemical plant in the world. After 1860 the focus of chemical innovation was in dyestuffs, and Germany took world leadership, building a strong chemical industry. Aspiring chemists flocked to German universities in the 1860–1914 era to learn the latest techniques. British scientists, by contrast, lacked research universities and did not train advanced students; instead, the practice was to hire German-trained chemists.
Concrete
In 1824 Joseph Aspdin, a British bricklayer turned builder, patented a chemical process for making portland cement which was an important advance in the building trades. This process involves sintering a mixture of clay and limestone at high temperature, then grinding it into a fine powder which is then mixed with water, sand, and gravel to produce concrete. Portland cement concrete was used by the English engineer Marc Isambard Brunel several years later when constructing the Thames Tunnel. Concrete was used on a large scale in the construction of the London sewer system a generation later.
Gas lighting
Though others made a similar innovation elsewhere, the large-scale introduction of gas lighting was the work of William Murdoch, an employee of Boulton & Watt. The process consisted of the large-scale gasification of coal in furnaces, the purification of the gas (removal of sulphur, ammonia, and heavy hydrocarbons), and its storage and distribution. The first gas lighting utilities were established in London between 1812 and 1820. They soon became one of the major consumers of coal in the UK.
Gas lighting affected social and industrial organisation because it allowed factories and stores to remain open longer than with tallow candles or oil lamps. Its introduction allowed nightlife to flourish in cities and towns as interiors and streets could be lighted on a larger scale than before.
Glass making
Glass was made in ancient Greece and Rome. A new method of glass production, known as the cylinder process, was developed in Europe during the early 19th century. In 1832 this process was used by the Chance Brothers to create sheet glass. They became the leading producers of window and plate glass. This advancement allowed for larger panes of glass to be created without interruption, giving greater freedom in the planning of interiors as well as in the fenestration of buildings. The Crystal Palace is the supreme example of the use of sheet glass in a new and innovative structure.
Paper machine
A machine for making a continuous sheet of paper on a loop of wire fabric was patented in 1798 by Louis-Nicolas Robert in France. The paper machine is known as a Fourdrinier after the financiers, brothers Sealy and Henry Fourdrinier, who were stationers in London. Although greatly improved and with many variations, the Fourdrinier machine is the predominant means of paper production today. The method of continuous production demonstrated by the paper machine influenced the development of continuous rolling of iron and later steel and other continuous production processes.
Agriculture
The British Agricultural Revolution is considered one of the causes of the Industrial Revolution because improved agricultural productivity freed up workers to work in other sectors of the economy. In contrast, per-capita food supply in Europe was stagnant or declining and did not improve in some parts of Europe until the late 18th century. The English lawyer Jethro Tull invented an improved seed drill in 1701. It was a mechanical seeder that distributed seeds evenly across a plot of land and planted them at the correct depth. This was important because the ratio of seeds harvested to seeds planted at that time was only around four or five. Tull's seed drill was very expensive and not very reliable and therefore did not have much of an effect. Good quality seed drills were not produced until the mid-18th century. Joseph Foljambe's Rotherham plough of 1730 was the first commercially successful iron plough. The threshing machine, invented by the Scottish engineer Andrew Meikle in 1784, displaced hand threshing with a flail, a laborious job that took about one-quarter of agricultural labour. Lower labour requirements subsequently resulted in lower wages and fewer farm labourers, who faced near starvation, leading to the 1830 agricultural rebellion of the Swing Riots. Machine tools and metalworking techniques developed during the Industrial Revolution eventually resulted in precision manufacturing techniques in the late 19th century for mass-producing agricultural equipment, such as reapers, binders, and combine harvesters.
Mining
Coal mining in Britain, particularly in South Wales, started early. Before the steam engine, pits were often shallow bell pits following a seam of coal along the surface, which were abandoned as the coal was extracted. In other cases, if the geology was favourable, the coal was mined by means of an adit or drift mine driven into the side of a hill. Shaft mining was done in some areas, but the limiting factor was the problem of removing water.
Water could be removed by hauling buckets up the shaft or by draining it through a sough (a tunnel driven into a hill to drain a mine). In either case, the water had to be discharged into a stream or ditch at a level where it could flow away by gravity. The introduction of the steam pump by Thomas Savery in 1698 and the Newcomen steam engine in 1712 greatly facilitated the removal of water and enabled shafts to be made deeper, enabling more coal to be extracted. These were developments that had begun before the Industrial Revolution, but the adoption of John Smeaton's improvements to the Newcomen engine, followed by James Watt's more efficient steam engines from the 1770s, reduced the fuel costs of engines, making mines more profitable. The Cornish engine, developed in the 1810s, was much more efficient than the Watt steam engine. Coal mining was very dangerous owing to the presence of firedamp in many coal seams. Some degree of safety was provided by the safety lamp, which was invented in 1816 by Sir Humphry Davy and independently by George Stephenson. However, the lamps proved a false dawn because they became unsafe very quickly and provided a weak light. Firedamp explosions continued, often setting off coal dust explosions, so casualties grew during the entire 19th century. Conditions of work were very poor, with a high casualty rate from rock falls.
Transportation
At the beginning of the Industrial Revolution, inland transport was by navigable rivers and roads, with coastal vessels employed to move heavy goods by sea. Wagonways were used for conveying coal to rivers for further shipment, but canals had not yet been widely constructed. Animals supplied all of the motive power on land, with sails providing the motive power on the sea. The first horse railways were introduced toward the end of the 18th century, with steam locomotives being introduced in the early decades of the 19th century. Improving sailing technologies boosted average sailing speed by 50% between 1750 and 1830. The Industrial Revolution improved Britain's transport infrastructure with a turnpike road network, a canal and waterway network, and a railway network. Raw materials and finished products could be moved more quickly and cheaply than before. Improved transportation also allowed new ideas to spread quickly.
Canals and improved waterways
Before and during the Industrial Revolution navigation on several British rivers was improved by removing obstructions, straightening curves, widening and deepening, and building navigation locks. Britain had an extensive network of navigable rivers and streams by 1750. Canals and waterways allowed bulk materials to be economically transported long distances inland. This was because a horse could pull a barge with a load dozens of times larger than the load that could be drawn in a cart. Canals began to be built in the UK in the late 18th century to link the major manufacturing centres across the country. The Bridgewater Canal in North West England, which opened in 1761 and was mostly funded by the 3rd Duke of Bridgewater, was known for its huge commercial success. Running from Worsley to the rapidly growing town of Manchester, its construction cost £168,000, but its advantages over land and river transport meant that within a year of its opening in 1761, the price of coal in Manchester fell by about half. This success helped inspire a period of intense canal building, known as Canal Mania.
Canals were hastily built with the aim of replicating the commercial success of the Bridgewater Canal, the most notable being the Leeds and Liverpool Canal and the Thames and Severn Canal, which opened in 1774 and 1789 respectively. By the 1820s a national network was in existence. Canal construction served as a model for the organisation and methods later used to construct the railways. Canals were eventually largely superseded as profitable commercial enterprises by the spread of the railways from the 1840s on. The last major canal to be built in the United Kingdom was the Manchester Ship Canal, which upon opening in 1894 was the largest ship canal in the world, and opened Manchester as a port. However, it never achieved the commercial success its sponsors had hoped for and signalled canals as a dying mode of transport in an age dominated by railways, which were quicker and often cheaper. Britain's canal network, together with its surviving mill buildings, is one of the most enduring features of the early Industrial Revolution to be seen in Britain.
Roads
France was known for having an excellent system of roads at the time of the Industrial Revolution; however, most of the roads on the European continent and in the UK were in bad condition and dangerously rutted. Much of the original British road system was poorly maintained by thousands of local parishes, but from the 1720s (and occasionally earlier) turnpike trusts were set up to charge tolls and maintain some roads. Increasing numbers of main roads were turnpiked from the 1750s, to the extent that almost every main road in England and Wales was the responsibility of a turnpike trust. New engineered roads were built by John Metcalf, Thomas Telford, and most notably John McAdam, with the first 'macadam' stretch of road being Marsh Road at Ashton Gate, Bristol in 1816. The first macadam road in the U.S. was the "Boonsborough Turnpike Road" between Hagerstown and Boonsboro, Maryland in 1823. The major turnpikes radiated from London and were the means by which the Royal Mail was able to reach the rest of the country. Heavy goods transport on these roads was by means of slow, broad-wheeled carts hauled by teams of horses. Lighter goods were conveyed by smaller carts or by teams of packhorses. Stagecoaches carried the rich, and the less wealthy could pay to ride on carriers' carts. Productivity of road transport increased greatly during the Industrial Revolution, and the cost of travel fell dramatically. Between 1690 and 1840 productivity almost tripled for long-distance carrying and increased four-fold in stage coaching.
Railways
Railways were made practical by the widespread introduction of inexpensive puddled iron after 1800, the rolling mill for making rails, and the development of the high-pressure steam engine, also around 1800. Reducing friction was one of the major reasons for the success of railroads compared to wagons. This was demonstrated on an iron plate-covered wooden tramway in 1805 at Croydon, England, as one account of the experiment relates: "A good horse on an ordinary turnpike road can draw two thousand pounds, or one ton. A party of gentlemen were invited to witness the experiment, that the superiority of the new road might be established by ocular demonstration. Twelve wagons were loaded with stones, till each wagon weighed three tons, and the wagons were fastened together. A horse was then attached, which drew the wagons with ease, in two hours, having stopped four times, in order to show he had the power of starting, as well as drawing his great load."
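Taken at face value, the figures in this account allow a rough, back-of-the-envelope comparison of the load a single horse could move on the iron tramway versus on an ordinary turnpike road (this is only an illustration using the numbers quoted above; it ignores speed, gradient, and stoppages):

\[ 12 \text{ wagons} \times 3 \text{ tons per wagon} = 36 \text{ tons hauled on the tramway} \]
\[ \frac{36 \text{ tons on iron rails}}{1 \text{ ton on a turnpike road}} \approx 36\text{-fold increase in load per horse} \]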
Wagonways for moving coal in the mining areas had started in the 17th century and were often associated with canal or river systems for the further movement of coal. These were all horse-drawn or relied on gravity, with a stationary steam engine to haul the wagons back to the top of the incline. The first applications of the steam locomotive were on wagon or plate ways (as they were then often called from the cast-iron plates used). Horse-drawn public railways began in the early 19th century when improvements to pig and wrought iron production were lowering costs. Steam locomotives began to be built after the introduction of high-pressure steam engines, made possible by the expiration of the Boulton and Watt patent in 1800. High-pressure engines exhausted used steam to the atmosphere, doing away with the condenser and cooling water. They were also much lighter and smaller for a given horsepower than the stationary condensing engines. A few of these early locomotives were used in mines. Steam-hauled public railways began with the Stockton and Darlington Railway in 1825. The rapid introduction of railways followed the 1829 Rainhill Trials, which demonstrated Robert Stephenson's successful locomotive design, and the 1828 development of hot blast, which dramatically reduced the fuel consumption of making iron and increased the capacity of the blast furnace. On 15 September 1830, the Liverpool and Manchester Railway, the first inter-city railway in the world, was opened, with the Prime Minister, Arthur Wellesley, Duke of Wellington, in attendance. The railway, engineered by Joseph Locke and George Stephenson, linked the rapidly expanding industrial town of Manchester with the port town of Liverpool. The opening was marred by problems caused by the primitive nature of the technology being employed; however, the problems were gradually solved, and the railway became highly successful, transporting passengers and freight. The success of the inter-city railway, particularly in the transport of freight and commodities, led to Railway Mania. Construction of major railways connecting the larger cities and towns began in the 1830s but only gained momentum at the very end of the first Industrial Revolution. After many of the workers had completed the railways, they did not return to their rural lifestyles but instead remained in the cities, providing additional workers for the factories.
Social effects
On a structural level, the Industrial Revolution confronted society with the so-called social question, demanding new ideas for managing large groups of individuals. Visible poverty on one hand and growing population and materialistic wealth on the other caused tensions between the very rich and the poorest people within society. These tensions were sometimes violently released and led to philosophical ideas such as socialism, communism, and anarchism.
Factory system
Prior to the Industrial Revolution, most of the workforce was employed in agriculture, either as self-employed farmers (landowners or tenants) or as landless agricultural labourers. It was common for families in various parts of the world to spin yarn, weave cloth, and make their own clothing. Households also spun and wove for market production. At the beginning of the Industrial Revolution, India, China, and regions of Iraq and elsewhere in Asia and the Middle East produced most of the world's cotton cloth while Europeans produced wool and linen goods.
In Great Britain in the 16th century, the putting-out system was practised, by which farmers and townspeople produced goods for a market in their homes, often described as cottage industry. Typical putting-out system goods included spinning and weaving. Merchant capitalists typically provided the raw materials, paid workers by the piece, and were responsible for the sale of the goods. Embezzlement of supplies by workers and poor quality were common problems. The logistical effort in procuring and distributing raw materials and picking up finished goods were also limitations of the putting-out system. Some early spinning and weaving machinery, such as a 40 spindle jenny for about six pounds in 1792, was affordable for cottagers. Later machinery such as spinning frames, spinning mules and power looms were expensive (especially if water-powered), giving rise to capitalist ownership of factories. The majority of textile factory workers during the Industrial Revolution were unmarried women and children, including many orphans. They typically worked for 12 to 14 hours per day with only Sundays off. It was common for women to take factory jobs seasonally during slack periods of farm work. Lack of adequate transportation, long hours, and poor pay made it difficult to recruit and maintain workers. The change in the social relationship of the factory worker compared to farmers and cottagers was viewed unfavourably by Karl Marx; however, he recognized the increase in productivity made possible by technology. Standards of living Some economists, such as Robert Lucas Jr., say that the real effect of the Industrial Revolution was that "for the first time in history, the living standards of the masses of ordinary people have begun to undergo sustained growth ... Nothing remotely like this economic behaviour is mentioned by the classical economists, even as a theoretical possibility." Others argue that while the growth of the economy's overall productive powers was unprecedented during the Industrial Revolution, living standards for the majority of the population did not grow meaningfully until the late 19th and 20th centuries and that in many ways workers' living standards declined under early capitalism: some studies have estimated that real wages in Britain only increased 15% between the 1780s and 1850s and that life expectancy in Britain did not begin to dramatically increase until the 1870s. The average height of the population declined during the Industrial Revolution, implying that their nutritional status was also decreasing. During the Industrial Revolution, the life expectancy of children increased dramatically. The percentage of the children born in London who died before the age of five decreased from 74.5% in 1730–1749 to 31.8% in 1810–1829. The effects on living conditions have been controversial and were hotly debated by economic and social historians from the 1950s to the 1980s. Over the course of the period from 1813 to 1913, there was a significant increase in worker wages. Food and nutrition Chronic hunger and malnutrition were the norms for the majority of the population of the world including Britain and France until the late 19th century. Until about 1750, malnutrition limited life expectancy in France to about 35 years and about 40 years in Britain. The United States population of the time was adequately fed, much taller on average, and had a life expectancy of 45–50 years, although U.S. life expectancy declined by a few years by the mid 19th century. 
Food consumption per capita also declined during an episode known as the Antebellum Puzzle. Food supply in Great Britain was adversely affected by the Corn Laws (1815–1846), which imposed tariffs on imported grain. The laws were enacted to keep prices high in order to benefit domestic producers. The Corn Laws were repealed in the early years of the Great Irish Famine. The initial technologies of the Industrial Revolution, such as mechanized textiles, iron, and coal, did little, if anything, to lower food prices. In Britain and the Netherlands, food supply increased before the Industrial Revolution with better agricultural practices; however, population grew as well.
Housing
The rapid population growth in the 19th century included the new industrial and manufacturing cities, as well as service centres such as Edinburgh and London. The critical factor was financing, which was handled by building societies that dealt directly with large contracting firms. Private renting from housing landlords was the dominant tenure. P. Kemp says this was usually of advantage to tenants. People moved in so rapidly that there was not enough capital to build adequate housing for everyone, so low-income newcomers squeezed into increasingly overcrowded slums. Clean water, sanitation, and public health facilities were inadequate; the death rate was high, especially infant mortality, and tuberculosis among young adults. Cholera from polluted water and typhoid were endemic. Unlike in rural areas, however, there were no famines such as the one that devastated Ireland in the 1840s. A large exposé literature grew up condemning the unhealthy conditions. By far the most famous publication was The Condition of the Working Class in England in 1844 by Friedrich Engels, one of the founders of the socialist movement. In it, Engels describes backstreet sections of Manchester and other mill towns where people lived in crude shanties and shacks, some not completely enclosed, some with dirt floors. These shanty towns had narrow walkways between irregularly shaped lots and dwellings. There were no sanitary facilities. The population density was extremely high. However, not everyone lived in such poor conditions. The Industrial Revolution also created a middle class of businessmen, clerks, foremen, and engineers who lived in much better conditions. Conditions improved over the course of the 19th century with new public health acts regulating things such as sewage, hygiene, and home construction. In the introduction to his 1892 edition, Engels notes that most of the conditions he wrote about in 1844 had been greatly improved. For example, the Public Health Act 1875 (38 & 39 Vict. c. 55) led to the more sanitary byelaw terraced house.
Water and sanitation
Pre-industrial water supply relied on gravity systems, and pumping of water was done by water wheels. Pipes were typically made of wood. Steam-powered pumps and iron pipes allowed the widespread piping of water to horse watering troughs and households. Engels' book describes how untreated sewage created awful odours and turned the rivers green in industrial cities. In 1854 John Snow traced a cholera outbreak in Soho in London to fecal contamination of a public water well by a home cesspit. Snow's finding that cholera could be spread by contaminated water took some years to be accepted, but his work led to fundamental changes in the design of public water and waste systems.
Literacy
In the 18th century, there were relatively high levels of literacy among farmers in England and Scotland.
This permitted the recruitment of literate craftsmen, skilled workers, foremen, and managers who supervised the emerging textile factories and coal mines. Much of the labour was unskilled, and especially in textile mills children as young as eight proved useful in handling chores and adding to the family income. Indeed, children were taken out of school to work alongside their parents in the factories. However, by the mid-19th century, unskilled labour forces were common in Western Europe, and British industry moved upscale, needing many more engineers and skilled workers who could follow technical instructions and handle complex situations. Literacy was essential to be hired. A senior government official told Parliament in 1870: "Upon the speedy provision of elementary education depends our industrial prosperity. It is of no use trying to give technical teaching to our citizens without elementary education; uneducated labourers—and many of our labourers are utterly uneducated—are, for the most part, unskilled labourers, and if we leave our work-folk any longer unskilled, notwithstanding their strong sinews and determined energy, they will become overmatched in the competition of the world." The invention of the paper machine and the application of steam power to the industrial processes of printing supported a massive expansion of newspaper and pamphlet publishing, which contributed to rising literacy and demands for mass political participation.
Clothing and consumer goods
Consumers benefited from falling prices for clothing and household articles such as cast iron cooking utensils, and in the following decades, stoves for cooking and space heating. Coffee, tea, sugar, tobacco, and chocolate became affordable to many in Europe. The consumer revolution in England from the early 17th century to the mid-18th century had seen a marked increase in the consumption and variety of luxury goods and products by individuals from different economic and social backgrounds. With improvements in transport and manufacturing technology, opportunities for buying and selling became faster and more efficient than previously. The expanding textile trade in the north of England meant the three-piece suit became affordable to the masses. Wedgwood fine china and porcelain tableware, produced by the firm founded by the potter and retail entrepreneur Josiah Wedgwood in 1759, was starting to become a common feature on dining tables. Rising prosperity and social mobility in the 18th century increased the number of people with disposable income for consumption, and the marketing of goods for individuals (of which Wedgwood was a pioneer), as opposed to items for the household, started to appear; goods increasingly served as status symbols, tied to changes in fashion and desired for their aesthetic appeal. New businesses in various industries appeared in towns and cities throughout Britain. Confectionery was one such industry that saw rapid expansion. According to food historian Polly Russell: "chocolate and biscuits became products for the masses, thanks to the Industrial Revolution and the consumers it created. By the mid-19th century, sweet biscuits were an affordable indulgence and business was booming. Manufacturers such as Huntley & Palmers in Reading, Carr's of Carlisle and McVitie's in Edinburgh transformed from small family-run businesses into state-of-the-art operations". In 1847 Fry's of Bristol produced the first chocolate bar.
Their competitor Cadbury of Birmingham was the first to commercialise the association between confectionery and romance when they produced a heart-shaped box of chocolates for Valentine's Day in 1868. The department store became a common feature in major high streets across Britain; one of the first was opened in 1796 by Harding, Howell & Co. on Pall Mall in London. In the 1860s, fish and chip shops emerged across the country in order to satisfy the needs of the growing industrial population. In addition to goods being sold in the growing number of stores, street sellers were common in an increasingly urbanised country. According to Matthew White: "Crowds swarmed in every thoroughfare. Scores of street sellers 'cried' merchandise from place to place, advertising the wealth of goods and services on offer. Milkmaids, orange sellers, fishwives and piemen, for example, all walked the streets offering their various wares for sale, while knife grinders and the menders of broken chairs and furniture could be found on street corners". An early soft drinks company, R. White's Lemonade, began in 1845 by selling drinks in London from a wheelbarrow. Increased literacy rates, industrialisation, and the invention of the railway created a new market for cheap popular literature for the masses and the ability for it to be circulated on a large scale. Penny dreadfuls were created in the 1830s to meet this demand. The Guardian described penny dreadfuls as "Britain's first taste of mass-produced popular culture for the young", and "the Victorian equivalent of video games". By the 1860s and 1870s more than one million boys' periodicals were sold per week. Labelled an "authorpreneur" by The Paris Review, Charles Dickens used innovations from the revolution to sell his books, such as the new printing presses, enhanced advertising revenues, and the expansion of railroads. His first novel, The Pickwick Papers (1836), became a publishing phenomenon, with its unprecedented success sparking numerous spin-offs and merchandise ranging from Pickwick cigars, playing cards, china figurines, Sam Weller puzzles, and Weller boot polish to joke books. Nicholas Dames in The Atlantic writes: "'Literature' is not a big enough category for Pickwick. It defined its own, a new one that we have learned to call 'entertainment'." In 1861, the Welsh entrepreneur Pryce Pryce-Jones formed the first mail order business, an idea which would change the nature of retail. Selling Welsh flannel, he created mail order catalogues, with customers able to order by mail for the first time. This followed the Uniform Penny Post in 1840 and the invention of the postage stamp (the Penny Black), under which there was a charge of one penny for carriage and delivery between any two places in the United Kingdom irrespective of distance; the goods were then delivered throughout the UK via the newly created railway system. As the railway network expanded overseas, so did his business.
Population increase
The Industrial Revolution was the first period in history during which there was a simultaneous increase in both population and per capita income. According to Robert Hughes in The Fatal Shore, the population of England and Wales, which had remained steady at six million from 1700 to 1740, rose dramatically after 1740. The population of England had more than doubled from 8.3 million in 1801 to 16.8 million in 1850 and, by 1901, had nearly doubled again to 30.5 million. Improved conditions led to the population of Britain increasing from 10 million to 30 million in the 19th century.
Europe's population increased from about 100 million in 1700 to 400 million by 1900. Urbanization The growth of the modern industry since the late 18th century led to massive urbanisation and the rise of new great cities, first in Europe and then in other regions, as new opportunities brought huge numbers of migrants from rural communities into urban areas. In 1800, only 3% of the world's population lived in cities, compared to nearly 50% by the beginning of the 21st century. Manchester had a population of 10,000 in 1717, but by 1911 it had burgeoned to 2.3 million. Effect on women and family life Women's historians have debated the effect of the Industrial Revolution and capitalism generally on the status of women. Taking a pessimistic side, Alice Clark argues that when capitalism arrived in 17th-century England, it lowered the status of women as they lost much of their economic importance. Clark argues that in 16th-century England, women were engaged in many aspects of industry and agriculture. The home was a central unit of production, and women played a vital role in running farms and in some trades and landed estates. Their useful economic roles gave them a sort of equality with their husbands. However, Clark argues, as capitalism expanded in the 17th century, there was more division of labour with the husband taking paid labour jobs outside the home, and the wife was reduced to unpaid household work. Middle- and upper-class women were confined to an idle domestic existence, supervising servants; lower-class women were forced to take poorly paid jobs. Capitalism, therefore, had a negative effect on powerful women. In a more positive interpretation, Ivy Pinchbeck argues that capitalism created the conditions for women's emancipation. Tilly and Scott have emphasised the continuity in the status of women, finding three stages in English history. In the pre-industrial era, production was mostly for home use, and women produced much of the needs of the households. The second stage was the "family wage economy" of early industrialisation; the entire family depended on the collective wages of its members, including husband, wife, and older children. The third or modern stage is the "family consumer economy", in which the family is the site of consumption, and women are employed in large numbers in retail and clerical jobs to support rising standards of consumption. Ideas of thrift and hard work characterised middle-class families as the Industrial Revolution swept Europe. These values were displayed in Samuel Smiles' book Self-Help, in which he states that the misery of the poorer classes was "voluntary and self-imposed—the results of idleness, thriftlessness, intemperance, and misconduct." Labour conditions Social structure and working conditions In terms of social structure, the Industrial Revolution witnessed the triumph of a middle class of industrialists and businessmen over a landed class of nobility and gentry. Ordinary working people found increased opportunities for employment in mills and factories, but these were often under strict working conditions with long hours of labour dominated by a pace set by machines. As late as 1900, most industrial workers in the United States worked a 10-hour day (12 hours in the steel industry), yet earned 20–40% less than the minimum deemed necessary for a decent life; however, most workers in textiles, which was by far the leading industry in terms of employment, were women and children. 
For workers of the labouring classes, industrial life "was a stony desert, which they had to make habitable by their own efforts." Harsh working conditions were prevalent long before the Industrial Revolution took place. Pre-industrial society was very static and often cruel—child labour, dirty living conditions, and long working hours were just as prevalent before the Industrial Revolution. Factories and urbanisation Industrialisation led to the creation of the factory. The factory system contributed to the growth of urban areas as large numbers of workers migrated into the cities in search of work in the factories. Nowhere was this better illustrated than the mills and associated industries of Manchester, nicknamed "Cottonopolis", and the world's first industrial city. Manchester experienced a six-times increase in its population between 1771 and 1831. Bradford grew by 50% every ten years between 1811 and 1851, and by 1851 only 50% of the population of Bradford were actually born there. In addition, between 1815 and 1939, 20% of Europe's population left home, pushed by poverty, a rapidly growing population, and the displacement of peasant farming and artisan manufacturing. They were pulled abroad by the enormous demand for labour overseas, the ready availability of land, and cheap transportation. Still, many did not find a satisfactory life in their new homes, leading 7 million of them to return to Europe. This mass migration had large demographic effects: in 1800, less than 1% of the world population consisted of overseas Europeans and their descendants; by 1930, they represented 11%. The Americas felt the brunt of this huge emigration, largely concentrated in the United States. For much of the 19th century, production was done in small mills which were typically water-powered and built to serve local needs. Later, each factory would have its own steam engine and a chimney to give an efficient draft through its boiler. In other industries, the transition to factory production was not so divisive. Some industrialists tried to improve factory and living conditions for their workers. One of the earliest such reformers was Robert Owen, known for his pioneering efforts in improving conditions for workers at the New Lanark mills and often regarded as one of the key thinkers of the early socialist movement. By 1746 an integrated brass mill was working at Warmley near Bristol. Raw material went in at one end, was smelted into brass and was turned into pans, pins, wire, and other goods. Housing was provided for workers on site. Josiah Wedgwood and Matthew Boulton (whose Soho Manufactory was completed in 1766) were other prominent early industrialists who employed the factory system. Child labour The Industrial Revolution led to a population increase, but the chances of surviving childhood did not improve throughout the Industrial Revolution, although infant mortality rates were reduced markedly. There was still limited opportunity for education, and children were expected to work. Employers could pay a child less than an adult even though their productivity was comparable; there was no need for strength to operate an industrial machine, and since the industrial system was new, there were no experienced adult labourers. This made child labour the labour of choice for manufacturing in the early phases of the Industrial Revolution between the 18th and 19th centuries. In England and Scotland in 1788, two-thirds of the workers in 143 water-powered cotton mills were described as children. 
Child labour existed before the Industrial Revolution, but with the increase in population and education it became more visible. Many children were forced to work in relatively bad conditions for much lower pay than their elders, 10–20% of an adult male's wage. Reports were written detailing some of the abuses, particularly in the coal mines and textile factories, and these helped to popularise the children's plight. The public outcry, especially among the upper and middle classes, helped stir change in the young workers' welfare. Politicians and the government tried to limit child labour by law, but factory owners resisted; some felt that they were aiding the poor by giving their children money to buy food to avoid starvation, and others simply welcomed the cheap labour. In 1833 and 1844, the first general laws against child labour, the Factory Acts, were passed in Britain: children younger than nine were not allowed to work, children were not permitted to work at night, and the workday of youth under age 18 was limited to twelve hours. Factory inspectors supervised the execution of the law; however, their scarcity made enforcement difficult. About ten years later, the employment of children and women in mining was forbidden. Although laws such as these decreased the number of child labourers, child labour remained significantly present in Europe and the United States until the 20th century. Organisation of labour The Industrial Revolution concentrated labour into mills, factories, and mines, thus facilitating the organisation of combinations or trade unions to help advance the interests of working people. The power of a union could demand better terms by withdrawing all labour and causing a consequent cessation of production. Employers had to decide between giving in to the union demands at a cost to themselves or suffering the cost of the lost production. Skilled workers were difficult to replace, and these were the first groups to successfully advance their conditions through this kind of bargaining. The main method the unions used to effect change was strike action. Many strikes were painful events for both sides, the unions and the management. In Britain, the Combination Act 1799 forbade workers to form any kind of trade union until its repeal in 1824. Even after this, unions were still severely restricted. One British newspaper in 1834 described unions as "the most dangerous institutions that were ever permitted to take root, under shelter of law, in any country..." In 1832, the Reform Act extended the vote in Britain but did not grant universal suffrage. That year six men from Tolpuddle in Dorset founded the Friendly Society of Agricultural Labourers to protest against the gradual lowering of wages in the 1830s. They refused to work for less than ten shillings per week, although by this time wages had been reduced to seven shillings per week and were due to be further reduced to six. In 1834 James Frampton, a local landowner, wrote to Prime Minister Lord Melbourne to complain about the union, invoking an obscure law from 1797 prohibiting people from swearing oaths to each other, which the members of the Friendly Society had done. Six men were arrested, found guilty, and transported to Australia. They became known as the Tolpuddle Martyrs. In the 1830s and 1840s, the chartist movement was the first large-scale organised working-class political movement that campaigned for political equality and social justice. 
Its Charter of reforms received over three million signatures but was rejected by Parliament without consideration. Working people also formed friendly societies and cooperative societies as mutual support groups against times of economic hardship. Enlightened industrialists, such as Robert Owen supported these organisations to improve the conditions of the working class. Unions slowly overcame the legal restrictions on the right to strike. In 1842, a general strike involving cotton workers and colliers was organised through the chartist movement which stopped production across Great Britain. Eventually, effective political organisation for working people was achieved through the trades unions who, after the extensions of the franchise in 1867 and 1885, began to support socialist political parties that later merged to become the British Labour Party. Luddites The rapid industrialisation of the English economy cost many craft workers their jobs. The movement started first with lace and hosiery workers near Nottingham and spread to other areas of the textile industry. Many weavers also found themselves suddenly unemployed since they could no longer compete with machines which only required relatively limited (and unskilled) labour to produce more cloth than a single weaver. Many such unemployed workers, weavers, and others turned their animosity towards the machines that had taken their jobs and began destroying factories and machinery. These attackers became known as Luddites, supposedly followers of Ned Ludd, a folklore figure. The first attacks of the Luddite movement began in 1811. The Luddites rapidly gained popularity, and the British government took drastic measures using the militia or army to protect industry. Those rioters who were caught were tried and hanged, or transported for life. Unrest continued in other sectors as they industrialised, such as with agricultural labourers in the 1830s when large parts of southern Britain were affected by the Captain Swing disturbances. Threshing machines were a particular target, and hayrick burning was a popular activity. However, the riots led to the first formation of trade unions and further pressure for reform. Shift in production's centre of gravity The traditional centres of hand textile production such as India, parts of the Middle East, and later China could not withstand the competition from machine-made textiles, which over a period of decades destroyed the hand-made textile industries and left millions of people without work, many of whom starved. The Industrial Revolution generated an enormous and unprecedented economic division in the world, as measured by the share of manufacturing output. Cotton and the expansion of slavery Cheap cotton textiles increased the demand for raw cotton; previously, it had primarily been consumed in subtropical regions where it was grown, with little raw cotton available for export. Consequently, prices of raw cotton rose. British production grew from 2 million pounds in 1700 to 5 million pounds in 1781 to 56 million in 1800. The invention of the cotton gin by American Eli Whitney in 1792 was the decisive event. It allowed green-seeded cotton to become profitable, leading to the widespread growth of the large slave plantation in the United States, Brazil, and the West Indies. In 1791 American cotton production was about 2 million pounds, soaring to 35 million by 1800, half of which was exported. America's cotton plantations were highly efficient and profitable and were able to keep up with demand. 
The U.S. Civil War created a "cotton famine" that led to increased production in other areas of the world, including European colonies in Africa. Effect on environment The origins of the environmental movement lay in the response to increasing levels of smoke pollution in the atmosphere during the Industrial Revolution. The emergence of great factories and the concomitant immense growth in coal consumption gave rise to an unprecedented level of air pollution in industrial centres; after 1900 the large volume of industrial chemical discharges added to the growing load of untreated human waste. The first large-scale, modern environmental laws came in the form of Britain's Alkali Acts, passed in 1863, to regulate the deleterious air pollution (gaseous hydrochloric acid) given off by the Leblanc process used to produce soda ash. An alkali inspector and four sub-inspectors were appointed to curb this pollution. The responsibilities of the inspectorate were gradually expanded, culminating in the Alkali Order 1958 which placed all major heavy industries that emitted smoke, grit, dust, and fumes under supervision. The manufactured gas industry began in British cities in 1812–1820. The technique used produced highly toxic effluent that was dumped into sewers and rivers. The gas companies were repeatedly sued in nuisance lawsuits. They usually lost and modified the worst practices. The City of London repeatedly indicted gas companies in the 1820s for polluting the Thames and poisoning its fish. Finally, Parliament wrote company charters to regulate toxicity. The industry reached the U.S. around 1850 causing pollution and lawsuits. In industrial cities local experts and reformers, especially after 1890, took the lead in identifying environmental degradation and pollution, and initiating grass-roots movements to demand and achieve reforms. Typically the highest priority went to water and air pollution. The Coal Smoke Abatement Society was formed in Britain in 1898 making it one of the oldest environmental non-governmental organisations. It was founded by artist William Blake Richmond, frustrated with the pall cast by coal smoke. Although there were earlier pieces of legislation, the Public Health Act 1875 required all furnaces and fireplaces to consume their own smoke. It also provided for sanctions against factories that emitted large amounts of black smoke. The provisions of this law were extended in 1926 with the Smoke Abatement Act to include other emissions, such as soot, ash, and gritty particles, and to empower local authorities to impose their own regulations. Industrialisation beyond Great Britain Europe The Industrial Revolution in continental Europe came later than in Great Britain. It started in Belgium and France, then spread to the German states by the middle of the 19th century. In many industries, this involved the application of technology developed in Britain in new places. Typically, the technology was purchased from Britain or British engineers and entrepreneurs moved abroad in search of new opportunities. By 1809, part of the Ruhr Valley in Westphalia was called 'Miniature England' because of its similarities to the industrial areas of Britain. Most European governments provided state funding to the new industries. In some cases (such as iron), the different availability of resources locally meant that only some aspects of the British technology were adopted. 
Austria-Hungary The Habsburg realms which became Austria-Hungary in 1867 included 23 million inhabitants in 1800, growing to 36 million by 1870. Nationally, the per capita rate of industrial growth averaged about 3% between 1818 and 1870. However, there were strong regional differences. The railway system was built in the 1850–1873 period. Before the railways arrived, transportation was very slow and expensive. In the Alpine and Bohemian (modern-day Czech Republic) regions, proto-industrialisation began by 1750, and these regions became the centre of the first phases of the Industrial Revolution after 1800. The textile industry was the main factor, utilising mechanisation, steam engines, and the factory system. In the Czech lands, the "first mechanical loom followed in Varnsdorf in 1801", with the first steam engines appearing in Bohemia and Moravia just a few years later. Textile production flourished particularly in Prague and Brno (German: Brünn), which was considered the 'Moravian Manchester'. The Czech lands, especially Bohemia, became the centre of industrialisation due to their natural and human resources. The iron industry had developed in the Alpine regions after 1750, with smaller centres in Bohemia and Moravia. Hungary, the eastern half of the Dual Monarchy, was heavily rural with little industry before 1870. In 1791, Prague, in Bohemia (modern-day Czech Republic), organised the first World's Fair. This first industrial exhibition was held in the Clementinum on the occasion of the coronation of Leopold II as King of Bohemia, and celebrated the considerable sophistication of manufacturing methods in the Czech lands during that period. Technological change accelerated industrialisation and urbanisation. The GNP per capita grew roughly 1.76% per year from 1870 to 1913. That level of growth compared very favourably to that of other European nations such as Britain (1%), France (1.06%), and Germany (1.51%). However, in comparison with Germany and Britain, the Austro-Hungarian economy as a whole still lagged considerably, as sustained modernisation had begun much later. Belgium Belgium was the second country in which the Industrial Revolution took place and the first in continental Europe: Wallonia (French-speaking southern Belgium) took the lead. Starting in the middle of the 1820s, and especially after Belgium became an independent nation in 1830, numerous works comprising coke blast furnaces as well as puddling and rolling mills were built in the coal mining areas around Liège and Charleroi. The leader was John Cockerill, a transplanted Englishman. His factories at Seraing integrated all stages of production, from engineering to the supply of raw materials, as early as 1825. Wallonia exemplified the radical evolution of industrial expansion. Thanks to coal (the French word "houille" was coined in Wallonia), the region geared up to become the second industrial power in the world after Britain. Many researchers have also pointed out that, along its Sillon industriel, "Especially in the Haine, Sambre and Meuse valleys, between the Borinage and Liège...there was a huge industrial development based on coal-mining and iron-making...". Philippe Raxhon wrote about the period after 1830: "It was not propaganda but a reality the Walloon regions were becoming the second industrial power all over the world after Britain." "The sole industrial centre outside the collieries and blast furnaces of Walloon was the old cloth-making town of Ghent." 
Professor Michel De Coster stated: "The historians and the economists say that Belgium was the second industrial power of the world, in proportion to its population and its territory [...] But this rank is the one of Wallonia where the coal-mines, the blast furnaces, the iron and zinc factories, the wool industry, the glass industry, the weapons industry... were concentrated." Many of the 19th-century coal mines in Wallonia are now protected as World Heritage Sites. Wallonia was also the birthplace of a strong socialist party and strong trade unions in a particular sociological landscape, shaped by the Sillon industriel, which runs from Mons in the west to Verviers in the east; parts of northern Flanders industrialised only in a later period, after 1920. Although Belgium was the second industrial country after Britain, the effect of the industrial revolution there was very different. In 'Breaking stereotypes', Muriel Neven and Isabelle Devious say: "The Industrial Revolution changed a mainly rural society into an urban one, but with a strong contrast between northern and southern Belgium. During the Middle Ages and the early modern period, Flanders was characterised by the presence of large urban centres [...] at the beginning of the nineteenth century this region (Flanders), with an urbanisation degree of more than 30 percent, remained one of the most urbanised in the world. By comparison, this proportion reached only 17 percent in Wallonia, barely 10 percent in most West European countries, 16 percent in France, and 25 percent in Britain. Nineteenth-century industrialisation did not affect the traditional urban infrastructure, except in Ghent... Also, in Wallonia, the traditional urban network was largely unaffected by the industrialisation process, even though the proportion of city-dwellers rose from 17 to 45 percent between 1831 and 1910. Especially in the Haine, Sambre and Meuse valleys, between the Borinage and Liège, where there was a huge industrial development based on coal-mining and iron-making, urbanisation was fast. During these eighty years, the number of municipalities with more than 5,000 inhabitants increased from only 21 to more than one hundred, concentrating nearly half of the Walloon population in this region. Nevertheless, industrialisation remained quite traditional in the sense that it did not lead to the growth of modern and large urban centres, but to a conurbation of industrial villages and towns developed around a coal mine or a factory. Communication routes between these small centres only became populated later and created a much less dense urban morphology than, for instance, the area around Liège where the old town was there to direct migratory flows." France The Industrial Revolution in France followed a particular course as it did not correspond to the main model followed by other countries. Notably, most French historians argue France did not go through a clear take-off. Instead, France's economic growth and industrialisation process was slow and steady through the 18th and 19th centuries. However, some stages were identified by Maurice Lévy-Leboyer: the French Revolution and Napoleonic Wars (1789–1815); industrialisation, along with Britain (1815–1860); economic slowdown (1860–1905); and a renewal of growth after 1905. Germany Based on its leadership in chemical research in the universities and industrial laboratories, Germany, which was unified in 1871, became dominant in the world's chemical industry in the late 19th century. 
At first the production of dyes based on aniline was critical. Germany's political disunity, with three dozen states, and a pervasive conservatism made it difficult to build railways in the 1830s. However, by the 1840s, trunk lines linked the major cities; each German state was responsible for the lines within its own borders. Lacking a technological base at first, the Germans imported their engineering and hardware from Britain, but quickly learned the skills needed to operate and expand the railways. In many cities, the new railway shops were the centres of technological awareness and training, so that by 1850, Germany was self-sufficient in meeting the demands of railroad construction, and the railways were a major impetus for the growth of the new steel industry. Observers found that even as late as 1890, German engineering was inferior to Britain's. However, German unification in 1871 stimulated consolidation, nationalisation into state-owned companies, and further rapid growth. Unlike the situation in France, the goal was the support of industrialisation, and so heavy lines crisscrossed the Ruhr and other industrial districts and provided good connections to the major ports of Hamburg and Bremen. By 1880, Germany had 9,400 locomotives pulling 43,000 passengers and 30,000 tons of freight, and pulled ahead of France. Sweden During the period 1790–1815, Sweden experienced two parallel economic movements: an agricultural revolution, with larger agricultural estates, new crops, farming tools, and the commercialisation of farming; and a proto-industrialisation, with small industries being established in the countryside and with workers switching between agricultural work in summer and industrial production in winter. This led to economic growth benefiting large sections of the population and leading up to a consumption revolution starting in the 1820s. Between 1815 and 1850, the proto-industries developed into more specialised and larger industries. This period witnessed increasing regional specialisation, with mining in Bergslagen, textile mills in Sjuhäradsbygden, and forestry in Norrland. Several important institutional changes took place in this period, such as free and mandatory schooling, introduced in 1842 (the first country in the world to do so), the abolition of the national monopoly on trade in handicrafts in 1846, and a stock company law in 1848. From 1850 to 1890, Sweden experienced its "first" Industrial Revolution, with a veritable explosion in exports dominated by crops, wood, and steel. Sweden abolished most tariffs and other barriers to free trade in the 1850s and joined the gold standard in 1873. Large infrastructural investments were made during this period, mainly in the expanding railroad network, which was financed in part by the government and in part by private enterprises. From 1890 to 1930, new industries developed with their focus on the domestic market: mechanical engineering, power utilities, papermaking, and textiles. Japan The Industrial Revolution began about 1870 as Meiji-period leaders decided to catch up with the West. The government built railroads, improved roads, and inaugurated a land reform program to prepare the country for further development. It inaugurated a new Western-based education system for all young people, sent thousands of students to the United States and Europe, and hired more than 3,000 Westerners as foreign government advisors to teach modern science, mathematics, technology, and foreign languages in Japan. 
In 1871, a group of Japanese politicians known as the Iwakura Mission toured Europe and the United States to learn Western ways. The result was a deliberate state-led industrialisation policy to enable Japan to quickly catch up. The Bank of Japan, founded in 1882, used taxes to fund model steel and textile factories. Education was expanded and Japanese students were sent to study in the West. Modern industry first appeared in textiles, including cotton and especially silk, which was based in home workshops in rural areas. United States During the late 18th and early 19th centuries when the UK and parts of Western Europe began to industrialise, the US was primarily an agricultural and natural resource producing and processing economy. The building of roads and canals, the introduction of steamboats and the building of railroads were important for handling agricultural and natural resource products in the large and sparsely populated country of the period. Important American technological contributions during the period of the Industrial Revolution were the cotton gin and the development of a system for making interchangeable parts, which was aided by the development of the milling machine in the United States. The development of machine tools and the system of interchangeable parts was the basis for the rise of the US as the world's leading industrial nation in the late 19th century. Oliver Evans invented an automated flour mill in the mid-1780s that used control mechanisms and conveyors so that no labour was needed from the time grain was loaded into the elevator buckets until the flour was discharged into a wagon. This is considered to be the first modern materials handling system, an important advance in the progress toward mass production. The United States originally used horse-powered machinery for small-scale applications such as grain milling, but eventually switched to water power after textile factories began being built in the 1790s. As a result, industrialisation was concentrated in New England and the Northeastern United States, which has fast-moving rivers. The newer water-powered production lines proved more economical than horse-drawn production. In the late 19th century steam-powered manufacturing overtook water-powered manufacturing, allowing the industry to spread to the Midwest. Thomas Somers and the Cabot Brothers founded the Beverly Cotton Manufactory in 1787, the first cotton mill in America, the largest cotton mill of its era, and a significant milestone in the research and development of cotton mills in the future. This mill was designed to use horsepower, but the operators quickly learned that the horse-drawn platform was economically unstable, and had economic losses for years. Despite the losses, the Manufactory served as a playground of innovation, both in turning a large amount of cotton, but also developing the water-powered milling structure used in Slater's Mill. In 1793, Samuel Slater (1768–1835) founded the Slater Mill at Pawtucket, Rhode Island. He had learned of the new textile technologies as a boy apprentice in Derbyshire, England, and defied laws against the emigration of skilled workers by leaving for New York in 1789, hoping to make money with his knowledge. After founding Slater's Mill, he went on to own 13 textile mills. 
Daniel Day established a wool carding mill in the Blackstone Valley at Uxbridge, Massachusetts in 1809, the third woollen mill established in the US (the first was in Hartford, Connecticut, and the second at Watertown, Massachusetts). The John H. Chafee Blackstone River Valley National Heritage Corridor retraces the history of "America's Hardest-Working River", the Blackstone. The Blackstone River and its tributaries, which extend from Worcester, Massachusetts, to Providence, Rhode Island, were the birthplace of America's Industrial Revolution. At its peak, over 1,100 mills operated in this valley, including Slater's Mill, which marked the earliest beginnings of America's industrial and technological development. Merchant Francis Cabot Lowell from Newburyport, Massachusetts, memorised the design of textile machines on his tour of British factories in 1810. Realising that the War of 1812 had ruined his import business but that demand for domestic finished cloth was emerging in America, on his return to the United States, he set up the Boston Manufacturing Company. Lowell and his partners built America's second cotton-to-cloth textile mill at Waltham, Massachusetts, second to the Beverly Cotton Manufactory. After his death in 1817, his associates built America's first planned factory town, which they named after him. This enterprise was capitalised in a public stock offering, one of the first uses of such financing in the United States. Lowell, Massachusetts, using a system of canals and water power delivered by the Merrimack River, is considered by some as a major contributor to the success of the American Industrial Revolution. The short-lived, utopia-like Waltham-Lowell system was formed as a direct response to the poor working conditions in Britain. However, by 1850, especially following the Great Famine of Ireland, the system had been replaced by poor immigrant labour. A major U.S. contribution to industrialisation was the development of techniques to make interchangeable parts from metal. Precision metal machining techniques were developed by the U.S. Department of War to make interchangeable parts for small firearms. The development work took place at the Federal Arsenals at Springfield Armory and Harpers Ferry Armory. Techniques for precision machining using machine tools included using fixtures to hold the parts in the proper position, jigs to guide the cutting tools, and precision blocks and gauges to measure accuracy. The milling machine, a fundamental machine tool, is believed to have been invented by Eli Whitney, a government contractor who built firearms as part of this program. Another important invention was the Blanchard lathe, invented by Thomas Blanchard. The Blanchard lathe, or pattern-tracing lathe, was actually a shaper that could produce copies of wooden gun stocks. The use of machinery and the techniques for producing standardised and interchangeable parts became known as the American system of manufacturing. Precision manufacturing techniques made it possible to build machines that mechanised the shoe industry and the watch industry. The industrialisation of the watch industry started in 1854, also in Waltham, Massachusetts, at the Waltham Watch Company, with the development of machine tools, gauges, and assembling methods adapted to the micro precision required for watches. 
Second Industrial Revolution Steel is often cited as the first of several new areas for industrial mass-production, which are said to characterise a "Second Industrial Revolution", beginning around 1850, although a method for mass manufacture of steel was not invented until the 1860s, when Sir Henry Bessemer invented a new furnace which could convert molten pig iron into steel in large quantities. However, it only became widely available in the 1870s after the process was modified to produce more uniform quality. Bessemer steel was being displaced by the open hearth furnace near the end of the 19th century. This Second Industrial Revolution gradually grew to include chemicals, mainly the chemical industries, petroleum (refining and distribution), and, in the 20th century, the automotive industry, and was marked by a transition of technological leadership from Britain to the United States and Germany. The increasing availability of economical petroleum products also reduced the importance of coal and further widened the potential for industrialisation. A new revolution began with electricity and electrification in the electrical industries. The introduction of hydroelectric power generation in the Alps enabled the rapid industrialisation of coal-deprived northern Italy, beginning in the 1890s. By the 1890s, industrialisation in these areas had created the first giant industrial corporations with burgeoning global interests, as companies like U.S. Steel, General Electric, Standard Oil and Bayer AG joined the railroad and ship companies on the world's stock markets. New Industrialism The New Industrialist movement advocates for increasing domestic manufacturing while reducing emphasis on a financial-based economy that relies on real estate and trading speculative assets. New Industrialism has been described as "supply-side progressivism" or embracing the idea of "Building More Stuff". New Industrialism developed after the China Shock that resulted in lost manufacturing jobs in the U.S. after China joined the World Trade Organization in 2001. The movement strengthened after the reduction of manufacturing jobs during the Great Recession and when the U.S. was not able to manufacture enough tests or facemasks during the COVID-19 pandemic. New Industrialism calls for building enough housing to satisfy demand in order to reduce the profit in land speculation, to invest in infrastructure, and to develop advanced technology to manufacture green energy for the world. New Industrialists believe that the United States is not building enough productive capital and should invest more into economic growth. Causes The causes of the Industrial Revolution were complicated and remain a topic for debate. Geographic factors include Britain's vast mineral resources. In addition to metal ores, Britain had the highest quality coal reserves known at the time, as well as abundant water power, highly productive agriculture, and numerous seaports and navigable waterways. Some historians believe the Industrial Revolution was an outgrowth of social and institutional changes brought by the end of feudalism in Britain after the English Civil War in the 17th century, although feudalism began to break down after the Black Death of the mid 14th century, followed by other epidemics, until the population reached a low in the 14th century. This created labour shortages and led to falling food prices and a peak in real wages around 1500, after which population growth began reducing wages. 
After 1540, an increasing supply of precious metals from the Americas caused coinage debasement (inflation), which caused land rents (often long-term leases that transferred to heirs on death) to fall in real terms. The Enclosure movement and the British Agricultural Revolution made food production more efficient and less labour-intensive, forcing the farmers who could no longer be self-sufficient in agriculture into cottage industry, for example weaving, and in the longer term into the cities and the newly developed factories. The colonial expansion of the 17th century, with the accompanying development of international trade, creation of financial markets and accumulation of capital, is also cited as a factor, as is the scientific revolution of the 17th century. A shift towards later marriage allowed people to accumulate more human capital during their youth, thereby encouraging economic development. Until the 1980s, it was universally believed by academic historians that technological innovation was the heart of the Industrial Revolution and the key enabling technology was the invention and improvement of the steam engine. Marketing professor Ronald Fullerton suggested that innovative marketing techniques, business practices, and competition also influenced changes in the manufacturing industry. Lewis Mumford has proposed that the Industrial Revolution had its origins in the Early Middle Ages, much earlier than most estimates. He explains that the model for standardised mass production was the printing press and that "the archetypal model for the industrial era was the clock". He also cites the monastic emphasis on order and time-keeping, as well as the fact that medieval cities had at their centre a church with bells ringing at regular intervals, as necessary precursors to the greater synchronisation required for later, more physical, manifestations such as the steam engine. The presence of a large domestic market should also be considered an important driver of the Industrial Revolution, particularly explaining why it occurred in Britain. In other nations, such as France, markets were split up by local regions, which often imposed tolls and tariffs on goods traded among them. Internal tariffs had been abolished by Henry VIII of England; they survived in Russia until 1753, in France until 1789, and in Spain until 1839. Governments' grant of limited monopolies to inventors under a developing patent system (the Statute of Monopolies in 1623) is considered an influential factor. The effects of patents, both good and ill, on the development of industrialisation are clearly illustrated in the history of the steam engine, the key enabling technology. In return for publicly revealing the workings of an invention, the patent system allowed inventors such as James Watt to monopolise the production of the first steam engines, thereby rewarding inventors and increasing the pace of technological development. However, monopolies bring with them their own inefficiencies, which may counterbalance, or even overbalance, the beneficial effects of publicising ingenuity and rewarding inventors. Watt's monopoly prevented other inventors, such as Richard Trevithick, William Murdoch, or Jonathan Hornblower, whom Boulton and Watt sued, from introducing improved steam engines, thereby retarding the spread of steam power. 
Causes in Europe One question of active interest to historians is why the Industrial Revolution occurred in Europe and not in other parts of the world in the 18th century, particularly China, India, and the Middle East (which pioneered shipbuilding, textile production, water mills, and much more in the period between 750 and 1100), or at other times, such as in Classical Antiquity or the Middle Ages. A recent account argued that Europeans have been characterised for thousands of years by a freedom-loving culture originating from the aristocratic societies of early Indo-European invaders. Many historians, however, have challenged this explanation as being not only Eurocentric, but also ignoring historical context. In fact, before the Industrial Revolution, "there existed something of a global economic parity between the most advanced regions in the world economy." These historians have suggested a number of other factors, including education, technological changes (see Scientific Revolution in Europe), "modern" government, "modern" work attitudes, ecology, and culture. China was the world's most technologically advanced country for many centuries; however, China stagnated economically and technologically and was surpassed by Western Europe before the Age of Discovery, by which time China had banned imports and denied entry to foreigners. China was also a totalitarian society that taxed transported goods heavily. Modern estimates of per capita income in Western Europe in the late 18th century are roughly 1,500 dollars in purchasing power parity (and Britain had a per capita income of nearly 2,000 dollars), whereas China, by comparison, had only 450 dollars. India was essentially feudal, politically fragmented, and not as economically advanced as Western Europe. Historians such as David Landes and sociologists Max Weber and Rodney Stark credit the different belief systems in Asia and Europe with dictating where the revolution occurred. The religion and beliefs of Europe were largely products of Judaeo-Christianity and Greek thought. Conversely, Chinese society was founded on men like Confucius, Mencius, Han Feizi (Legalism), Lao Tzu (Taoism), and Buddha (Buddhism), resulting in very different worldviews. Other factors include the considerable distance of China's coal deposits, though large, from its cities, as well as the then unnavigable Yellow River that connects these deposits to the sea. Economic historian Joel Mokyr argued that political fragmentation (the presence of a large number of European states) made it possible for heterodox ideas to thrive, as entrepreneurs, innovators, ideologues and heretics could easily flee to a neighbouring state in the event that one state tried to suppress their ideas and activities. This is what set Europe apart from the technologically advanced, large unitary empires such as China and India, by providing "an insurance against economic and technological stagnation". China had both a printing press and movable type, and India had similar levels of scientific and technological achievement as Europe in 1700, yet the Industrial Revolution would occur in Europe, not China or India. In Europe, political fragmentation was coupled with an "integrated market for ideas", in which Europe's intellectuals used the lingua franca of Latin, had a shared intellectual basis in Europe's classical heritage, and shared the pan-European institution of the Republic of Letters. 
Political institutions may also have contributed to the relationship between democratisation and economic growth during the Great Divergence. In addition, Europe's monarchs desperately needed revenue, pushing them into alliances with their merchant classes. Small groups of merchants were granted monopolies and tax-collecting responsibilities in exchange for payments to the state. Located in a region "at the hub of the largest and most varied network of exchange in history", Europe advanced as the leader of the Industrial Revolution. In the Americas, Europeans found a windfall of silver, timber, fish, and maize, leading historian Peter Stearns to conclude that "Europe's Industrial Revolution stemmed in great part from Europe's ability to draw disproportionately on world resources." Modern capitalism originated in the Italian city-states around the end of the first millennium. The city-states were prosperous cities that were independent from feudal lords. They were largely republics whose governments were typically composed of merchants, manufacturers, members of guilds, bankers and financiers. The Italian city-states built a network of branch banks in leading western European cities and introduced double-entry bookkeeping. Italian commerce was supported by abacus schools that taught numeracy for financial calculations. Causes in Britain Great Britain provided the legal and cultural foundations that enabled entrepreneurs to pioneer the Industrial Revolution. Key factors fostering this environment were: the period of peace and stability which followed the unification of England and Scotland; the absence of internal trade barriers, including between England and Scotland, and of feudal tolls and tariffs, making Britain the "largest coherent market in Europe"; the rule of law (enforcing property rights and respecting the sanctity of contracts); a straightforward legal system that allowed the formation of joint-stock companies (corporations); and a free market (capitalism). Great Britain's geographical and natural resource advantages included extensive coastlines and many navigable rivers, in an age when water was the easiest means of transportation, as well as the highest quality coal in Europe. Britain also had a large number of sites for water power. Two main values drove the Industrial Revolution in Britain: self-interest and an entrepreneurial spirit. Because of these interests, many industrial advances were made that resulted in a huge increase in personal wealth and a consumer revolution. These advancements also greatly benefited British society as a whole. Countries around the world started to recognise the changes and advancements in Britain and used them as an example to begin their own Industrial Revolutions. A debate sparked by Trinidadian politician and historian Eric Williams in his work Capitalism and Slavery (1944) concerned the role of slavery in financing the Industrial Revolution. Williams argued that European capital amassed from slavery was vital in the early years of the revolution, contending that the rise of industrial capitalism was the driving force behind abolitionism rather than humanitarian motivations. These arguments led to significant historiographical debates among historians, with American historian Seymour Drescher critiquing Williams' arguments in Econocide (1977). 
Instead, the greater liberalisation of trade from a large merchant base may have allowed Britain to produce and use emerging scientific and technological developments more effectively than countries with stronger monarchies, particularly China and Russia. Britain emerged from the Napoleonic Wars as the only European nation not ravaged by financial plunder and economic collapse, and having the only merchant fleet of any useful size (European merchant fleets were destroyed during the war by the Royal Navy). Britain's extensive exporting cottage industries also ensured markets were already available for many early forms of manufactured goods. The conflict resulted in most British warfare being conducted overseas, reducing the devastating effects of territorial conquest that affected much of Europe. This was further aided by Britain's geographical position: an island separated from the rest of mainland Europe. Another theory is that Britain was able to succeed in the Industrial Revolution due to the availability of key resources it possessed. It had a dense population for its small geographical size. Enclosure of common land and the related agricultural revolution made a supply of this labour readily available. There was also a local coincidence of natural resources in the North of England, the English Midlands, South Wales and the Scottish Lowlands. Local supplies of coal, iron, lead, copper, tin, limestone and water power resulted in excellent conditions for the development and expansion of industry. Also, the damp, mild weather conditions of the North West of England provided ideal conditions for the spinning of cotton, providing a natural starting point for the birth of the textiles industry. The stable political situation in Britain from around 1689 following the Glorious Revolution, and British society's greater receptiveness to change (compared with other European countries), can also be said to be factors favouring the Industrial Revolution. Peasant resistance to industrialisation was largely eliminated by the Enclosure movement, and the landed upper classes developed commercial interests that made them pioneers in removing obstacles to the growth of capitalism. (This point is also made in Hilaire Belloc's The Servile State.) The French philosopher Voltaire wrote about capitalism and religious tolerance in his book on English society, Letters on the English (1733), noting why England at that time was more prosperous in comparison to the country's less religiously tolerant European neighbours: "Take a view of the Royal Exchange in London, a place more venerable than many courts of justice, where the representatives of all nations meet for the benefit of mankind. There the Jew, the Mahometan [Muslim], and the Christian transact together, as though they all professed the same religion, and give the name of infidel to none but bankrupts. There the Presbyterian confides in the Anabaptist, and the Churchman depends on the Quaker's word. If one religion only were allowed in England, the Government would very possibly become arbitrary; if there were but two, the people would cut one another's throats; but as there are such a multitude, they all live happy and in peace." Britain's population grew 280% from 1550 to 1820, while the rest of Western Europe grew 50–80%. Seventy percent of European urbanisation happened in Britain from 1750 to 1800. By 1800, only the Netherlands was more urbanised than Britain. 
This was only possible because coal, coke, imported cotton, brick and slate had replaced wood, charcoal, flax, peat and thatch. The latter compete for land that could otherwise grow food, while mined materials do not. Yet more land would be freed when chemical fertilisers replaced manure and horses' work was mechanised. A workhorse needs land for fodder, while even early steam engines produced four times more mechanical energy. In 1700, five-sixths of the coal mined worldwide was in Britain, while the Netherlands had none; so despite having Europe's best transport, lowest taxes, and most urbanised, well-paid, and literate population, the Netherlands failed to industrialise. In the 18th century, it was the only European country whose cities and population shrank. Without coal, Britain would have run out of suitable river sites for mills by the 1830s. Based on science and experimentation from the continent, the steam engine was developed specifically for pumping water out of mines, many of which in Britain had been mined to below the water table. Although extremely inefficient, these engines were economical because they used unsaleable coal. Iron rails were developed to transport coal, which was a major economic sector in Britain. Economic historian Robert Allen has argued that high wages, cheap capital and very cheap energy in Britain made it the ideal place for the industrial revolution to occur. These factors made it vastly more profitable to invest in research and development, and to put technology to use, in Britain than in other societies. However, two 2018 studies in The Economic History Review showed that wages were not particularly high in the British spinning sector or the construction sector, casting doubt on Allen's explanation. A 2022 study in the Journal of Political Economy by Morgan Kelly, Joel Mokyr, and Cormac Ó Gráda found that industrialisation happened in areas with low wages and high mechanical skills, whereas literacy, banks and proximity to coal had little explanatory power. Transfer of knowledge Knowledge of innovation was spread by several means. Workers trained in a technique might move to another employer or might be poached. A common method was for someone to make a study tour, gathering information where he could. During the whole of the Industrial Revolution and for the century before, all European countries and America engaged in study-touring; some nations, like Sweden and France, even trained civil servants or technicians to undertake it as a matter of state policy. In other countries, notably Britain and America, this practice was carried out by individual manufacturers eager to improve their own methods. Study tours were common then, as now, as was the keeping of travel diaries. Records made by industrialists and technicians of the period are an incomparable source of information about their methods. Another means for the spread of innovation was the network of informal philosophical societies, like the Lunar Society of Birmingham, in which members met to discuss natural philosophy and often its application to manufacturing. The Lunar Society flourished from 1765 to 1809, and it has been said of its members, "They were, if you like, the revolutionary committee of that most far reaching of all the eighteenth-century revolutions, the Industrial Revolution". Other such societies published volumes of proceedings and transactions. For example, the London-based Royal Society of Arts published an illustrated volume of new inventions, as well as papers about them in its annual Transactions. 
There were publications describing technology. Encyclopaedias such as Harris's Lexicon Technicum (1704) and Abraham Rees's Cyclopaedia (1802–1819) contain much of value. Cyclopaedia contains an enormous amount of information about the science and technology of the first half of the Industrial Revolution, very well illustrated by fine engravings. Foreign printed sources such as the Descriptions des Arts et Métiers and Diderot's Encyclopédie explained foreign methods with fine engraved plates. Periodical publications about manufacturing and technology began to appear in the last decade of the 18th century, and many regularly included notice of the latest patents. Foreign periodicals, such as the Annales des Mines, published accounts of travels made by French engineers who observed British methods on study tours. Protestant work ethic Another theory is that the British advance was due to the presence of an entrepreneurial class which believed in progress, technology and hard work. The existence of this class is often linked to the Protestant work ethic (see Max Weber) and the particular status of the Baptists and the dissenting Protestant sects, such as the Quakers and Presbyterians that had flourished with the English Civil War. Reinforcement of confidence in the rule of law, which followed establishment of the prototype of constitutional monarchy in Britain in the Glorious Revolution of 1688, and the emergence of a stable financial market there based on the management of the national debt by the Bank of England, contributed to the capacity for, and interest in, private financial investment in industrial ventures. Dissenters found themselves barred or discouraged from almost all public offices, as well as education at England's only two universities at the time (although dissenters were still free to study at Scotland's four universities). When the restoration of the monarchy took place and membership in the official Anglican Church became mandatory due to the Test Act, they thereupon became active in banking, manufacturing and education. The Unitarians, in particular, were very involved in education, by running Dissenting Academies, where, in contrast to the universities of Oxford and Cambridge and schools such as Eton and Harrow, much attention was given to mathematics and the sciences – areas of scholarship vital to the development of manufacturing technologies. Historians sometimes consider this social factor to be extremely important, along with the nature of the national economies involved. While members of these sects were excluded from certain circles of the government, they were considered fellow Protestants, to a limited extent, by many in the middle class, such as traditional financiers or other businessmen. Given this relative tolerance and the supply of capital, the natural outlet for the more enterprising members of these sects would be to seek new opportunities in the technologies created in the wake of the scientific revolution of the 17th century. Criticisms The industrial revolution has been criticised for causing ecological collapse, mental illness, pollution and detrimental social systems. It has also been criticised for valuing profits and corporate growth over life and wellbeing. Multiple movements have arisen which reject aspects of the industrial revolution, such as the Amish or primitivists. 
Individualism, humanism and harsh conditions Humanists and individualists criticise the Industrial Revolution for mistreating women and children and for turning men into work machines that lacked autonomy. Critics of the Industrial Revolution promoted a more interventionist state and formed new organisations to promote human rights. Primitivism Primitivism argues that the Industrial Revolution has created an unnatural framing of society and of the world, in which humans must adapt to an unnatural urban landscape and become perpetual cogs without personal autonomy. Certain primitivists argue for a return to pre-industrial society, while others argue that technologies such as modern medicine and agriculture are positive for humanity, provided they are controlled by and serve humanity and have no effect on the natural environment. Pollution and ecological collapse The Industrial Revolution has been criticised for leading to immense ecological and habitat destruction. It has led to an immense decrease in the biodiversity of life on Earth. The Industrial Revolution has also been said to be inherently unsustainable, leading to the eventual collapse of society, mass hunger, starvation, and resource scarcity. The Anthropocene The Anthropocene is a proposed epoch, marked by humanity's impact on the Earth and an associated mass extinction (anthropo- is the Greek root for humanity). Since the start of the Industrial Revolution, humanity has permanently changed the Earth, for example through an immense decrease in biodiversity and a mass extinction caused by industrial activity. The effects include permanent changes to the Earth's atmosphere, soils and forests, and the destruction associated with the Industrial Revolution has had catastrophic impacts on the Earth. Most organisms are unable to adapt, leading to mass extinction, while the remainder undergo evolutionary rescue as a result of the Industrial Revolution. Permanent changes in the distribution of organisms from human influence will become identifiable in the geologic record. Researchers have documented the movement of many species into regions formerly too cold for them, often at rates faster than initially expected. This has occurred in part as a result of changing climate, but also in response to farming and fishing, and to the accidental introduction of non-native species to new areas through global travel. The ecosystem of the entire Black Sea may have changed during the last 2000 years as a result of nutrient and silica input from eroding deforested lands along the Danube River. Opposition from Romanticism During the Industrial Revolution, an intellectual and artistic hostility towards the new industrialisation developed, associated with the Romantic movement. Romanticism revered the traditionalism of rural life and recoiled against the upheavals caused by industrialisation, urbanisation and the wretchedness of the working classes. Its major exponents in English included the artist and poet William Blake and the poets William Wordsworth, Samuel Taylor Coleridge, John Keats, Lord Byron and Percy Bysshe Shelley. The movement stressed the importance of "nature" in art and language, in contrast to "monstrous" machines and factories; the "Dark satanic mills" of Blake's poem "And did those feet in ancient time". Mary Shelley's Frankenstein reflected concerns that scientific progress might be two-edged. French Romanticism likewise was highly critical of industry. 
See also Proto-industrialization Capitalist mode of production (Marxist theory) Industrialization of China Economic history of the United Kingdom Fourth Industrial Revolution History of capitalism Industrial Age Industrial society Law of the handicap of a head start – Dialectics of progress Machine Age The Protestant Ethic and the Spirit of Capitalism Steam Textile manufacture during the British Industrial Revolution, a good description of the early industrial revolution Footnotes References Further reading Historiography External links Internet Modern History Sourcebook: Industrial Revolution BBC History Home Page: Industrial Revolution National Museum of Science and Industry website: machines and personalities Factory Workers in the Industrial Revolution "The Day the World Took Off" Six-part video series from the University of Cambridge tracing the question "Why did the Industrial Revolution begin when and where it did." 18th century in technology 19th century in technology Age of Revolution History of technology Industrial history Late modern Europe Modern history of the United Kingdom Revolutions by type Stages of history
Industrial Revolution
[ "Technology" ]
26,063
[ "Science and technology studies", "History of science and technology", "History of technology" ]
14,922
https://en.wikipedia.org/wiki/If%20and%20only%20if
In logic and related fields such as mathematics and philosophy, "if and only if" (often shortened as "iff") is paraphrased by the biconditional, a logical connective between statements. The biconditional is true in two cases, where either both statements are true or both are false. The connective is biconditional (a statement of material equivalence), and can be likened to the standard material conditional ("only if", equal to "if ... then") combined with its reverse ("if"); hence the name. The result is that the truth of either one of the connected statements requires the truth of the other (i.e. either both statements are true, or both are false), though it is controversial whether the connective thus defined is properly rendered by the English "if and only if"—with its pre-existing meaning. For example, P if and only if Q means that P is true whenever Q is true, and the only case in which P is true is if Q is also true, whereas in the case of P if Q, there could be other scenarios where P is true and Q is false. In writing, phrases commonly used as alternatives to P "if and only if" Q include: Q is necessary and sufficient for P, for P it is necessary and sufficient that Q, P is equivalent (or materially equivalent) to Q (compare with material implication), P precisely if Q, P precisely (or exactly) when Q, P exactly in case Q, and P just in case Q. Some authors regard "iff" as unsuitable in formal writing; others consider it a "borderline case" and tolerate its use. In logical formulae, logical symbols such as ↔ and ⇔ are used instead of these phrases; see below. Definition The truth table of P ↔ Q is as follows: P ↔ Q is true when P and Q are both true or both false, and false when exactly one of them is true. It is equivalent to that produced by the XNOR gate, and opposite to that produced by the XOR gate. Usage Notation The corresponding logical symbols are "↔", "⇔", and "≡", and sometimes "iff". These are usually treated as equivalent. However, some texts of mathematical logic (particularly those on first-order logic, rather than propositional logic) make a distinction between these, in which the first, ↔, is used as a symbol in logic formulas, while ⇔ is used in reasoning about those logic formulas (e.g., in metalogic). In Łukasiewicz's Polish notation, it is the prefix symbol "E". Another term for the logical connective, i.e., the symbol in logic formulas, is exclusive nor. In TeX, "if and only if" is shown as a long double arrow: via command \iff or \Longleftrightarrow. Proofs In most logical systems, one proves a statement of the form "P iff Q" by proving either "if P, then Q" and "if Q, then P", or "if P, then Q" and "if not-P, then not-Q". Proving these pairs of statements sometimes leads to a more natural proof, since there are not obvious conditions in which one would infer a biconditional directly. An alternative is to prove the disjunction "(P and Q) or (not-P and not-Q)", which itself can be inferred directly from either of its disjuncts—that is, because "iff" is truth-functional, "P iff Q" follows if P and Q have been shown to be both true, or both false. Origin of iff and pronunciation Usage of the abbreviation "iff" first appeared in print in John L. Kelley's 1955 book General Topology. Its invention is often credited to Paul Halmos, who wrote "I invented 'iff,' for 'if and only if'—but I could never believe I was really its first inventor." It is somewhat unclear how "iff" was meant to be pronounced. In current practice, the single 'word' "iff" is almost always read as the four words "if and only if". 
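As a small, editor-added illustration of the definition above (not part of the original article), the biconditional can be evaluated in Python as Boolean equality, which matches the XNOR behaviour just mentioned; the helper name iff is purely an illustrative choice.

# Hypothetical helper: "P iff Q" treated as Boolean equality (XNOR).
def iff(p: bool, q: bool) -> bool:
    return p == q  # true exactly when both are true or both are false

# Enumerate the four rows of the truth table.
for p in (True, False):
    for q in (True, False):
        print(p, q, iff(p, q))
# Prints: True True True / True False False / False True False / False False True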
However, in the preface of General Topology, Kelley suggests that it should be read differently: "In some cases where mathematical content requires 'if and only if' and euphony demands something less I use Halmos' 'iff'." The authors of one discrete mathematics textbook suggest: "Should you need to pronounce iff, really hang on to the 'ff' so that people hear the difference from 'if'", implying that "iff" could be pronounced with a prolonged 'f'. Usage in definitions Conventionally, definitions are "if and only if" statements; some texts — such as Kelley's General Topology — follow this convention, and use "if and only if" or iff in definitions of new terms. However, this usage of "if and only if" is relatively uncommon and overlooks the linguistic fact that the "if" of a definition is interpreted as meaning "if and only if". The majority of textbooks, research papers and articles (including English Wikipedia articles) follow the linguistic convention of interpreting "if" as "if and only if" whenever a mathematical definition is involved (as in "a topological space is compact if every open cover has a finite subcover"). Moreover, in the case of a recursive definition, the only if half of the definition is interpreted as a sentence in the metalanguage stating that the sentences in the definition of a predicate are the only sentences determining the extension of the predicate. In terms of Euler diagrams Euler diagrams show logical relationships among events, properties, and so forth. "P only if Q", "if P then Q", and "P→Q" all mean that P is a subset, either proper or improper, of Q. "P if Q", "if Q then P", and Q→P all mean that Q is a proper or improper subset of P. "P if and only if Q" and "Q if and only if P" both mean that the sets P and Q are identical to each other. More general usage Iff is used outside the field of logic as well. Wherever logic is applied, especially in mathematical discussions, it has the same meaning as above: it is an abbreviation for if and only if, indicating that one statement is both necessary and sufficient for the other. This is an example of mathematical jargon (although, as noted above, if is more often used than iff in statements of definition). The elements of X are all and only the elements of Y means: "For any z in the domain of discourse, z is in X if and only if z is in Y." When "if" means "if and only if" In their Artificial Intelligence: A Modern Approach, Russell and Norvig note (page 282), in effect, that it is often more natural to express if and only if as if together with a "database (or logic programming) semantics". They give the example of the English sentence "Richard has two brothers, Geoffrey and John". In a database or logic program, this could be represented simply by two sentences: Brother(Richard, Geoffrey). Brother(Richard, John). The database semantics interprets the database (or program) as containing all and only the knowledge relevant for problem solving in a given domain. It interprets only if as expressing in the metalanguage that the sentences in the database represent the only knowledge that should be considered when drawing conclusions from the database. In first-order logic (FOL) with the standard semantics, the same English sentence would need to be represented, using if and only if, with only if interpreted in the object language, in some such form as: ∀X (Brother(Richard, X) iff X = Geoffrey or X = John). Geoffrey ≠ John. Compared with the standard semantics for FOL, the database semantics has a more efficient implementation. 
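To make the closed-world reading just described concrete, here is a brief, editor-added Python sketch (not from the original article); the fact set and the brother function are hypothetical names chosen only for illustration.

# The "database": the only Brother facts that are known.
facts = {("Richard", "Geoffrey"), ("Richard", "John")}

def brother(x: str, y: str) -> bool:
    # Database semantics: a query counts as true iff the fact is recorded;
    # anything not recorded is treated as false (the closed-world assumption).
    return (x, y) in facts

print(brother("Richard", "Geoffrey"))  # True
print(brother("Richard", "William"))   # False under the closed-world assumption

Under the standard first-order reading, by contrast, the absence of Brother(Richard, William) from the database would leave that statement undetermined rather than false.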
Instead of reasoning with sentences of the form "conclusion iff conditions", it uses sentences of the form "conclusion if conditions" to reason forwards from conditions to conclusions or backwards from conclusions to conditions. The database semantics is analogous to the legal principle expressio unius est exclusio alterius (the express mention of one thing excludes all others). Moreover, it underpins the application of logic programming to the representation of legal texts and legal reasoning. See also Definition Equivalence relation Logical biconditional Logical equality Logical equivalence If and only if in logic programs Polysyllogism References External links Language Log: "Just in Case" Southern California Philosophy for philosophy graduate students: "Just in Case" Logical connectives Mathematical terminology Necessity and sufficiency
If and only if
[ "Mathematics" ]
1,804
[ "nan" ]
14,934
https://en.wikipedia.org/wiki/International%20Organization%20for%20Standardization
The International Organization for Standardization (ISO; French: Organisation internationale de normalisation) is an independent, non-governmental, international standard development organization composed of representatives from the national standards organizations of member countries. Membership requirements are given in Article 3 of the ISO Statutes. ISO was founded on 23 February 1947, and it has since published over 25,000 international standards covering almost all aspects of technology and manufacturing. It has over 800 technical committees (TCs) and subcommittees (SCs) to take care of standards development. The organization develops and publishes international standards in technical and nontechnical fields, including everything from manufactured products and technology to food safety, transport, IT, agriculture, and healthcare. More specialized topics like electrical and electronic engineering are instead handled by the International Electrotechnical Commission. It is headquartered in Geneva, Switzerland. The three official languages of ISO are English, French, and Russian. Name and abbreviations The name of the International Organization for Standardization in French is Organisation internationale de normalisation and in Russian Международная организация по стандартизации. Although one might think ISO is an abbreviation for "International Standardization Organization" or a similar title in another language, the letters do not officially represent an acronym or initialism. The organization provides this explanation of the name: Because 'International Organization for Standardization' would have different acronyms in different languages (IOS in English, OIN in French), our founders decided to give it the short form ISO. ISO is derived from the Greek word isos (ίσος, meaning "equal"). Whatever the country, whatever the language, the short form of our name is always ISO. During the founding meetings of the new organization, however, the Greek word explanation was not invoked, so this meaning may be a false etymology. Both the name ISO and the ISO logo are registered trademarks and their use is restricted. History The organization that is known today as ISO began in 1926 as the International Federation of the National Standardizing Associations (ISA), which primarily focused on mechanical engineering. The ISA was suspended in 1942 during World War II but, after the war, the ISA was approached by the recently-formed United Nations Standards Coordinating Committee (UNSCC) with a proposal to form a new global standards body. In October 1946, ISA and UNSCC delegates from 25 countries met in London and agreed to join forces to create the International Organization for Standardization. The organization officially began operations on 23 February 1947. ISO Standards were originally known as ISO Recommendations (ISO/R), e.g., "ISO 1" was issued in 1951 as "ISO/R 1". Structure and organization ISO is a voluntary organization whose members are recognized authorities on standards, each one representing one country. Members meet annually at a General Assembly to discuss the strategic objectives of ISO. The organization is coordinated by a central secretariat based in Geneva. A council with a rotating membership of 20 member bodies provides guidance and governance, including setting the annual budget of the central secretariat. The technical management board is responsible for more than 250 technical committees, who develop the ISO standards. 
Joint technical committee with IEC ISO has a joint technical committee (JTC) with the International Electrotechnical Commission (IEC) to develop standards relating to information technology (IT). Known as JTC 1 and entitled "Information technology", it was created in 1987 and its mission is "to develop worldwide Information and Communication Technology (ICT) standards for business and consumer applications." There was previously also a JTC 2 that was created in 2009 for a joint project to establish common terminology for "standardization in the field of energy efficiency and renewable energy sources". It was later disbanded. Membership There are 167 national members representing ISO in their country, with each country having only one member. ISO has three membership categories. Member bodies are national bodies considered the most representative standards body in each country. These are the only members of ISO that have voting rights. Correspondent members are countries that do not have their own standards organization. These members are informed about the work of ISO, but do not participate in standards promulgation. Subscriber members are countries with small economies. They pay reduced membership fees, but can follow the development of standards. Participating members are called "P" members, as opposed to observing members, who are called "O" members. Financing ISO is funded by a combination of: Organizations that manage the specific projects or loan experts to participate in the technical work Subscriptions from member bodies, whose subscriptions are in proportion to each country's gross national product and trade figures Sale of standards International standards and other publications International standards are the main products of ISO. It also publishes technical reports, technical specifications, publicly available specifications, technical corrigenda (corrections), and guides. International standards These are designated using the format ISO[/IEC] [/ASTM] [IS] nnnnn[-p]:[yyyy] Title, where nnnnn is the number of the standard, p is an optional part number, yyyy is the year published, and Title describes the subject. IEC for International Electrotechnical Commission is included if the standard results from the work of ISO/IEC JTC 1 (the ISO/IEC Joint Technical Committee). ASTM (American Society for Testing and Materials) is used for standards developed in cooperation with ASTM International. yyyy and IS are not used for an incomplete or unpublished standard and, under some circumstances, may be left off the title of a published work. Technical reports These are issued when a technical committee or subcommittee has collected data of a different kind from that normally published as an International Standard, such as references and explanations. The naming conventions for these are the same as for standards, except that TR is prepended instead of IS in the report's name. For example: ISO/IEC TR 17799:2000 Code of Practice for Information Security Management ISO/TR 19033:2000 Technical product documentation – Metadata for construction documentation Technical and publicly available specifications Technical specifications may be produced when "the subject in question is still under development or where for any other reason there is the future but not immediate possibility of an agreement to publish an International Standard". 
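As a rough, editor-added sketch of the designation format described above (not part of the original article), the following Python snippet splits common designations such as "ISO 9660:1988" or "ISO/IEC 27001:2022" into their components; the regular expression and field names are a deliberate simplification and do not cover every variant ISO actually uses.

import re

# Simplified, illustrative pattern for designations like "ISO/TS 16952-1:2006".
PATTERN = re.compile(
    r"^ISO(?:/(?P<qualifier>IEC|ASTM|TR|TS|PAS))?\s+"
    r"(?P<number>\d+)(?:-(?P<part>\d+))?(?::(?P<year>\d{4}))?"
)

for designation in ("ISO 9660:1988", "ISO/IEC 27001:2022", "ISO/TS 16952-1:2006"):
    match = PATTERN.match(designation)
    if match:
        print(designation, "->", match.groupdict())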
A publicly available specification is usually "an intermediate specification, published prior to the development of a full International Standard, or, in IEC may be a 'dual logo' publication published in collaboration with an external organization". By convention, both types of specification are named in a manner similar to the organization's technical reports. For example: ISO/TS 16952-1:2006 Technical product documentation – Reference designation system – Part 1: General application rules (later withdrawn and replaced by ISO/TS 81346-3:2012, which was later withdrawn) ISO/PAS 11154:2006 Road vehicles – Roof load carriers (later revised in ISO 11154:2023, which does not have the "PAS" abbreviation in its name) Technical corrigenda When partnering with IEC in their joint technical committee, ISO also sometimes issues "technical corrigenda" (where "corrigenda" is the plural of corrigendum). These are amendments made to existing standards to correct minor technical flaws or ambiguities. ISO guides These are meta-standards covering "matters related to international standardization". They are named using the format "ISO[/IEC] Guide N:yyyy: Title". For example: ISO/IEC Guide 2:2004 Standardization and related activities – General vocabulary ISO/IEC Guide 65:1996 General requirements for bodies operating product certification (since revised and reissued as ISO/IEC 17065:2012 Conformity assessment — Requirements for bodies certifying products, processes and services). Document copyright ISO documents have strict copyright restrictions and ISO charges for most copies. The typical cost of a copy of an ISO standard is on the order of US$100 or more, and electronic copies typically have a single-user license, so they cannot be shared among groups of people. Some standards by ISO and its official U.S. representative (and, via the U.S. National Committee, the International Electrotechnical Commission) are made freely available. Standardization process A standard published by ISO/IEC is the last stage of a long process that commonly starts with the proposal of new work within a committee. 
Some abbreviations used for marking a standard with its status are: PWI – Preliminary Work Item NP or NWIP – New Proposal / New Work Item Proposal (e.g., ISO/IEC NP 23007) AWI – Approved new Work Item (e.g., ISO/IEC AWI 15444-14) WD – Working Draft (e.g., ISO/IEC WD 27032) CD – Committee Draft (e.g., ISO/IEC CD 23000-5) FCD – Final Committee Draft (e.g., ISO/IEC FCD 23000-12) DIS – Draft International Standard (e.g., ISO/IEC DIS 14297) FDIS – Final Draft International Standard (e.g., ISO/IEC FDIS 27003) PRF – Proof of a new International Standard (e.g., ISO/IEC PRF 18018) IS – International Standard (e.g., ISO/IEC 13818-1:2007) Abbreviations used for amendments are: NP Amd – New Proposal Amendment (e.g., ISO/IEC 15444-2:2004/NP Amd 3) AWI Amd – Approved new Work Item Amendment (e.g., ISO/IEC 14492:2001/AWI Amd 4) WD Amd – Working Draft Amendment (e.g., ISO 11092:1993/WD Amd 1) CD Amd / PDAmd – Committee Draft Amendment / Proposed Draft Amendment (e.g., ISO/IEC 13818-1:2007/CD Amd 6) FPDAmd / DAM (DAmd) – Final Proposed Draft Amendment / Draft Amendment (e.g., ISO/IEC 14496-14:2003/FPDAmd 1) FDAM (FDAmd) – Final Draft Amendment (e.g., ISO/IEC 13818-1:2007/FDAmd 4) PRF Amd – (e.g., ISO 12639:2004/PRF Amd 1) Amd – Amendment (e.g., ISO/IEC 13818-1:2007/Amd 1:2007) Other abbreviations are: TR – Technical Report (e.g., ISO/IEC TR 19791:2006) DTR – Draft Technical Report (e.g., ISO/IEC DTR 19791) TS – Technical Specification (e.g., ISO/TS 16949:2009) DTS – Draft Technical Specification (e.g., ISO/DTS 11602-1) PAS – Publicly Available Specification TTA – Technology Trends Assessment (e.g., ISO/TTA 1:1994) IWA – International Workshop Agreements (e.g., IWA 1:2005) Cor – Technical Corrigendum (e.g., ISO/IEC 13818-1:2007/Cor 1:2008) Guide – a guidance to technical committees for the preparation of standards International Standards are developed by ISO technical committees (TC) and subcommittees (SC) by a process with six steps: Stage 1: Proposal stage Stage 2: Preparatory stage Stage 3: Committee stage Stage 4: Enquiry stage Stage 5: Approval stage Stage 6: Publication stage The TC/SC may set up working groups (WG) of experts for the preparation of a working drafts. Subcommittees may have several working groups, which may have several Sub Groups (SG). It is possible to omit certain stages, if there is a document with a certain degree of maturity at the start of a standardization project, for example, a standard developed by another organization. ISO/IEC directives also allow the so-called "Fast-track procedure". In this procedure, a document is submitted directly for approval as a draft International Standard (DIS) to the ISO member bodies or as a final draft International Standard (FDIS), if the document was developed by an international standardizing body recognized by the ISO Council. The first step, a proposal of work (New Proposal), is approved at the relevant subcommittee or technical committee (e.g., SC 29 and JTC 1 respectively in the case of MPEG, the Moving Picture Experts Group). A working group (WG) of experts is typically set up by the subcommittee for the preparation of a working draft (e.g., MPEG is a collection of seven working groups as of 2023). When the scope of a new work is sufficiently clarified, some of the working groups may make an open request for proposals—known as a "call for proposals". The first document that is produced, for example, for audio and video coding standards is called a verification model (VM) (previously also called a "simulation and test model"). 
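The status abbreviations and six stages listed above can be summarised, purely as an editor-added illustration (not part of the original article), in a small Python lookup of the codes a document typically passes through on its way to publication; the ordering shown reflects only the common path described here, not every possible route.

# Common ISO document-status codes in the typical order of progression (simplified).
STAGES = [
    ("NP", "New Work Item Proposal"),
    ("AWI", "Approved new Work Item"),
    ("WD", "Working Draft"),
    ("CD", "Committee Draft"),
    ("DIS", "Draft International Standard"),
    ("FDIS", "Final Draft International Standard"),
    ("PRF", "Proof of a new International Standard"),
    ("IS", "International Standard"),
]

def describe(code: str) -> str:
    # Hypothetical helper: look up what a status code stands for.
    for abbreviation, meaning in STAGES:
        if abbreviation == code:
            return meaning
    return "unknown status code"

print(describe("DIS"))  # Draft International Standard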
When a sufficient confidence in the stability of the standard under development is reached, a working draft (WD) is produced. This is in the form of a standard, but is kept internal to working group for revision. When a working draft is sufficiently mature and the subcommittee is satisfied that it has developed an appropriate technical document for the problem being addressed, it becomes a committee draft (CD) and is sent to the P-member national bodies of the SC for the collection of formal comments. Revisions may be made in response to the comments, and successive committee drafts may be produced and circulated until consensus is reached to proceed to the next stage, called the "enquiry stage". After a consensus to proceed is established, the subcommittee will produce a draft international standard (DIS), and the text is submitted to national bodies for voting and comment within a period of five months. A document in the DIS stage is available to the public for purchase and may be referred to with its ISO DIS reference number. Following consideration of any comments and revision of the document, the draft is then approved for submission as a Final Draft International Standard (FDIS) if a two-thirds majority of the P-members of the TC/SC are in favour and if not more than one-quarter of the total number of votes cast are negative. ISO will then hold a ballot among the national bodies where no technical changes are allowed (a yes/no final approval ballot), within a period of two months. It is approved as an International Standard (IS) if a two-thirds majority of the P-members of the TC/SC is in favour and not more than one-quarter of the total number of votes cast are negative. After approval, the document is published by the ISO central secretariat, with only minor editorial changes introduced in the publication process before the publication as an International Standard. Except for a relatively small number of standards, ISO standards are not available free of charge, but rather for a purchase fee, which has been seen by some as unaffordable for small open-source projects. The process of developing standards within ISO was criticized around 2007 as being too difficult for timely completion of large and complex standards, and some members were failing to respond to ballots, causing problems in completing the necessary steps within the prescribed time limits. In some cases, alternative processes have been used to develop standards outside of ISO and then submit them for its approval. A more rapid "fast-track" approval procedure was used in ISO/IEC JTC 1 for the standardization of Office Open XML (OOXML, ISO/IEC 29500, approved in April 2008), and another rapid alternative "publicly available specification" (PAS) process had been used by OASIS to obtain approval of OpenDocument as an ISO/IEC standard (ISO/IEC 26300, approved in May 2006). As was suggested at the time by Martin Bryan, the outgoing convenor (chairman) of working group 1 (WG1) of ISO/IEC JTC 1/SC 34, the rules of ISO were eventually tightened so that participating members that fail to respond to votes are demoted to observer status. The computer security entrepreneur and Ubuntu founder, Mark Shuttleworth, was quoted in a ZDNet blog article in 2008 about the process of standardization of OOXML as saying: "I think it de-values the confidence people have in the standards setting process", and alleged that ISO did not carry out its responsibility. 
He also said that Microsoft had intensely lobbied many countries that traditionally had not participated in ISO and stacked technical committees with Microsoft employees, solution providers, and resellers sympathetic to Office Open XML: When you have a process built on trust and when that trust is abused, ISO should halt the process... ISO is an engineering old boys club and these things are boring so you have to have a lot of passion ... then suddenly you have an investment of a lot of money and lobbying and you get artificial results. The process is not set up to deal with intensive corporate lobbying and so you end up with something being a standard that is not clear. International Workshop Agreements International Workshop Agreements (IWAs) are documents that establish a collaboration agreement that allow "key industry players to negotiate in an open workshop environment" outside of ISO in a way that may eventually lead to development of an ISO standard. Products named after ISO On occasion, the fact that many of the ISO-created standards are ubiquitous has led to common use of "ISO" to describe the product that conforms to a standard. Some examples of this are: Disk images ending in the file extension "ISO" to signify that they are using the ISO 9660 standard file system as opposed to another file system—hence disc images commonly being referred to as "ISOs". The sensitivity of a photographic film to light (its "film speed") is described by ISO 6, ISO 2240, and ISO 5800. Hence, the speed of the film often is referred to by its ISO number. As it was originally defined in ISO 518, the flash hot shoe found on cameras often is called the "ISO shoe". ISO 11783, the communication protocol for the agriculture industry, which is marketed as ISOBUS. ISO 13216, the standardized attachment points for child safety seats, which is marketed as ISOFIX. ISO 668, the standardized intermodal containers, sometimes called "ISO containers". ISO awards ISO presents several awards to acknowledge the valuable contributions made in the realm of international standardization: The Lawrence D. Eicher Award: This award acknowledges outstanding standards development. It is available to all ISO and ISO/IEC technical committees. The ISO Next Generation Award: Aimed at young professionals from ISO member nations, this award highlights those who advocate for sustainability-centric standardization and emphasize the importance of partnerships. The ISO Excellence Award: Dedicated to recognizing the endeavors of ISO's technical professionals, any individual nominated as an expert, project leader, or convenor in a committee working group is eligible for this award. 
See also – for sustainability information and linking up with reporting on their 17#GlobalGoals indicators – a set of technical standards maintained by the Euro-Asian Council for Standardization, Metrology, and Certification – the Interface Marketing Supplier Integration Institute ISO divisions Some of the 834 Technical Committees of the International Organization for Standardization (ISO) include: ISO/TC 37 - Language and terminology – Terminology and other language content resources ISO/TC 46 - Information and documentation - Libraries, archives, indexing and information science ISO/TC 68 - Financial services - Banking, securities and financial services ISO/TC 176 - Quality management and quality assurance ISO/TC 211 - Geographic information/Geomatics - Geographic data and information ISO/TC 215 - Health informatics - Health-related data/information ISO/TC 262 - Risk management - Risk management ISO/TC 289 - Brand evaluation - Brand evaluation and valuation ISO/TC 292 - Security and resilience - Security of society References Further reading MIT Innovations and Entrepreneurship Seminar Series. External links Publicly Available Standards, with free access to a small subset of the standards. Advanced search for standards and/or projects Online Browsing Platform (OBP), access to most up to date content in ISO standards, graphical symbols, codes or terms and definitions. Organisations based in Geneva Organizations established in 1947 Social responsibility organizations Technical specifications 1947 establishments in Switzerland
International Organization for Standardization
[ "Technology" ]
4,177
[ "nan" ]
14,946
https://en.wikipedia.org/wiki/Ice
Ice is water that is frozen into a solid state, typically forming at or below temperatures of 0 °C, 32 °F, or 273.15 K. It occurs naturally on Earth, on other planets, in Oort cloud objects, and as interstellar ice. As a naturally occurring crystalline inorganic solid with an ordered structure, ice is considered to be a mineral. Depending on the presence of impurities such as particles of soil or bubbles of air, it can appear transparent or a more or less opaque bluish-white color. Virtually all of the ice on Earth is of a hexagonal crystalline structure denoted as ice Ih (spoken as "ice one h"). Depending on temperature and pressure, at least nineteen phases (packing geometries) can exist. The most common phase transition to ice Ih occurs when liquid water is cooled below 0 °C (32 °F, 273.15 K) at standard atmospheric pressure. When water is cooled rapidly (quenching), up to three types of amorphous ice can form. Interstellar ice is overwhelmingly low-density amorphous ice (LDA), which likely makes LDA ice the most abundant type in the universe. When cooled slowly, correlated proton tunneling occurs at very low temperatures, giving rise to macroscopic quantum phenomena. Ice is abundant on the Earth's surface, particularly in the polar regions and above the snow line, where it can aggregate from snow to form glaciers and ice sheets. As snowflakes and hail, ice is a common form of precipitation, and it may also be deposited directly by water vapor as frost. The transition from ice to water is melting and from ice directly to water vapor is sublimation. These processes play a key role in Earth's water cycle and climate. In recent decades, ice volume on Earth has been decreasing due to climate change. The largest declines have occurred in the Arctic and in the mountains located outside of the polar regions. The loss of grounded ice (as opposed to floating sea ice) is the primary contributor to sea level rise. Humans have been using ice for various purposes for thousands of years. Some historic structures designed to hold ice to provide cooling are over 2,000 years old. Before the invention of refrigeration technology, the only way to safely store food without modifying it through preservatives was to use ice. Sufficiently solid surface ice makes waterways accessible to land transport during winter, and dedicated ice roads may be maintained. Ice also plays a major role in winter sports. Physical properties Ice possesses a regular crystalline structure based on the molecule of water, which consists of a single oxygen atom covalently bonded to two hydrogen atoms, or H–O–H. However, many of the physical properties of water and ice are controlled by the formation of hydrogen bonds between adjacent oxygen and hydrogen atoms; while it is a weak bond, it is nonetheless critical in controlling the structure of both water and ice. An unusual property of water is that its solid form—ice frozen at atmospheric pressure—is approximately 8.3% less dense than its liquid form; this is equivalent to a volumetric expansion of 9%. The density of ice is 0.9167–0.9168 g/cm3 at 0 °C and standard atmospheric pressure (101,325 Pa), whereas water has a density of 0.9998–0.999863 g/cm3 at the same temperature and pressure. Liquid water is densest, essentially 1.00 g/cm3, at 4 °C, and begins to lose its density as the water molecules begin to form the hexagonal crystals of ice once the freezing point is approached. This is due to hydrogen bonding dominating the intermolecular forces, which results in a less compact packing of molecules in the solid. 
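As a quick, editor-added check of the density figures just quoted (not part of the article), the roughly 9% expansion on freezing follows directly from the ratio of the liquid and solid densities:

# Approximate densities near 0 °C taken from the figures above (g/cm3).
rho_water = 0.9998
rho_ice = 0.9167

# A fixed mass occupies volume m / rho, so the fractional volume change on
# freezing is rho_water / rho_ice - 1, and the density drop is 1 - rho_ice / rho_water.
expansion = rho_water / rho_ice - 1
density_drop = 1 - rho_ice / rho_water

print(f"volume expansion on freezing: {expansion:.1%}")         # about 9.1%
print(f"density decrease of ice vs water: {density_drop:.1%}")  # about 8.3%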
The density of ice increases slightly with decreasing temperature and has a value of 0.9340 g/cm3 at −180 °C (93 K). When water freezes, it increases in volume (about 9% for fresh water). The effect of expansion during freezing can be dramatic, and ice expansion is a basic cause of freeze-thaw weathering of rock in nature and damage to building foundations and roadways from frost heaving. It is also a common cause of the flooding of houses when water pipes burst due to the pressure of expanding water when it freezes. Because ice is less dense than liquid water, it floats, and this prevents bottom-up freezing of bodies of water. Instead, a sheltered environment for animal and plant life is formed beneath the floating ice, which protects the underside from short-term weather extremes such as wind chill. Sufficiently thin floating ice allows light to pass through, supporting the photosynthesis of bacterial and algal colonies. When sea water freezes, the ice is riddled with brine-filled channels which sustain sympagic organisms such as bacteria, algae, copepods and annelids. In turn, they provide food for animals such as krill and specialized fish like the bald notothen, fed upon in turn by larger animals such as emperor penguins and minke whales. When ice melts, it absorbs as much energy as it would take to heat an equivalent mass of water by 80 °C. During the melting process, the temperature remains constant at 0 °C. While melting, any energy added breaks the hydrogen bonds between ice (water) molecules. Energy becomes available to increase the thermal energy (temperature) only after enough hydrogen bonds are broken that the ice can be considered liquid water. The amount of energy consumed in breaking hydrogen bonds in the transition from ice to water is known as the heat of fusion. As with water, ice absorbs light at the red end of the spectrum preferentially as the result of an overtone of an oxygen–hydrogen (O–H) bond stretch. Compared with water, this absorption is shifted toward slightly lower energies. Thus, ice appears blue, with a slightly greener tint than liquid water. Since absorption is cumulative, the color effect intensifies with increasing thickness or if internal reflections cause the light to take a longer path through the ice. Other colors can appear in the presence of light absorbing impurities, where the impurity is dictating the color rather than the ice itself. For instance, icebergs containing impurities (e.g., sediments, algae, air bubbles) can appear brown, grey or green. Because ice in natural environments is usually close to its melting temperature, its hardness shows pronounced temperature variations. At its melting point, ice has a Mohs hardness of 2 or less, but the hardness increases to about 4 at lower temperatures and to 6 at the vaporization point of solid carbon dioxide (dry ice). Phases Most liquids under increased pressure freeze at higher temperatures because the pressure helps to hold the molecules together. However, the strong hydrogen bonds in water make it different: for some pressures higher than atmospheric, water freezes at a temperature below 0 °C. Ice, water, and water vapour can coexist at the triple point, which is at a temperature of 273.16 K (0.01 °C) and a pressure of exactly 611.657 Pa. The kelvin was defined as 1/273.16 of the difference between this triple point and absolute zero, though this definition changed in May 2019. Unlike most other solids, ice is difficult to superheat. In an experiment, ice at −3 °C was superheated to about 17 °C for about 250 picoseconds. 
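The "80 °C" equivalence noted above can be sanity-checked with standard textbook constants; this short calculation is an editor-added example, and the specific-heat and latent-heat values are well-known reference figures rather than numbers taken from this article.

# Standard approximate constants at atmospheric pressure.
specific_heat_water = 4.18  # kJ/(kg*K) for liquid water
latent_heat_fusion = 334.0  # kJ/kg needed to melt ice at 0 °C

# Warming 1 kg of liquid water by 80 K takes roughly the same energy as melting 1 kg of ice.
energy_to_warm_80K = specific_heat_water * 80
print(f"energy to warm 1 kg of water by 80 K: {energy_to_warm_80K:.0f} kJ")  # ~334 kJ
print(f"latent heat to melt 1 kg of ice: {latent_heat_fusion:.0f} kJ")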
Subjected to higher pressures and varying temperatures, ice can form in nineteen separate known crystalline phases at various densities, along with hypothetical proposed phases of ice that have not been observed. With care, at least fifteen of these phases (one of the known exceptions being ice X) can be recovered at ambient pressure and low temperature in metastable form. The types are differentiated by their crystalline structure, proton ordering, and density. There are also two metastable phases of ice under pressure, both fully hydrogen-disordered; these are Ice IV and Ice XII. Ice XII was discovered in 1996. In 2006, Ice XIII and Ice XIV were discovered. Ices XI, XIII, and XIV are hydrogen-ordered forms of ices I, V, and XII respectively. In 2009, ice XV was found at extremely high pressures and −143 °C. At even higher pressures, ice is predicted to become a metal; this has been variously estimated to occur at 1.55 TPa or 5.62 TPa. As well as crystalline forms, solid water can exist in amorphous states as amorphous solid water (ASW) of varying densities. In outer space, hexagonal crystalline ice is present in ice volcanoes, but is extremely rare otherwise. Even icy moons like Ganymede are expected to mainly consist of other crystalline forms of ice. Water in the interstellar medium is dominated by amorphous ice, making it likely the most common form of water in the universe. Low-density ASW (LDA), also known as hyperquenched glassy water, may be responsible for noctilucent clouds on Earth and is usually formed by deposition of water vapor in cold or vacuum conditions. High-density ASW (HDA) is formed by compression of ordinary ice I or LDA at GPa pressures. Very-high-density ASW (VHDA) is HDA slightly warmed to 160 K under 1–2 GPa pressures. Ice formed from theorized superionic water may possess two crystalline structures. At sufficiently high pressures, such superionic ice would take on a body-centered cubic structure; at still higher pressures, the structure may shift to a more stable face-centered cubic lattice. It is speculated that superionic ice could compose the interior of ice giants such as Uranus and Neptune. Friction properties Ice is "slippery" because it has a low coefficient of friction. This subject was first scientifically investigated in the 19th century. The preferred explanation at the time was "pressure melting", i.e. the blade of an ice skate, upon exerting pressure on the ice, would melt a thin layer, providing sufficient lubrication for the blade to glide across the ice. Yet, 1939 research by Frank P. Bowden and T. P. Hughes found that skaters would experience far more friction than they actually do if pressure melting were the only explanation. Further, the optimum ice temperatures for figure skating and for hockey are both several degrees below freezing; yet, according to pressure melting theory, skating at temperatures that low would be outright impossible. Instead, Bowden and Hughes argued that heating and melting of the ice layer is caused by friction. However, this theory does not sufficiently explain why ice is slippery when standing still even at below-zero temperatures. Subsequent research suggested that ice molecules at the interface cannot properly bond with the molecules of the mass of ice beneath (and thus are free to move like molecules of liquid water). These molecules remain in a semi-liquid state, providing lubrication regardless of pressure against the ice exerted by any object. 
However, the significance of this hypothesis is disputed by experiments showing a high coefficient of friction for ice using atomic force microscopy. Thus, the mechanism controlling the frictional properties of ice is still an active area of scientific study. A comprehensive theory of ice friction must take into account all of the aforementioned mechanisms to estimate the friction coefficient of ice against various materials as a function of temperature and sliding speed. Research from 2014 suggests that frictional heating is the most important process under most typical conditions. Natural formation The term that collectively describes all of the parts of the Earth's surface where water is in frozen form is the cryosphere. Ice is an important component of the global climate, particularly in regard to the water cycle. Glaciers and snowpacks are an important storage mechanism for fresh water; over time, they may sublimate or melt. Snowmelt is an important source of seasonal fresh water. The World Meteorological Organization defines several kinds of ice depending on origin, size, shape, influence and so on. Clathrate hydrates are forms of ice that contain gas molecules trapped within their crystal lattice. In the oceans Ice that is found at sea may be in the form of drift ice floating in the water, fast ice fixed to a shoreline or anchor ice if attached to the seafloor. Ice which calves (breaks off) from an ice shelf or a coastal glacier may become an iceberg. The aftermath of calving events produces a loose mixture of snow and ice known as ice mélange. Sea ice forms in several stages. At first, small, millimeter-scale crystals accumulate on the water surface in what is known as frazil ice. As they become somewhat larger and more consistent in shape and cover, the water surface begins to look "oily" from above, so this stage is called grease ice. Then, ice continues to clump together, and solidify into flat cohesive pieces known as ice floes. Ice floes are the basic building blocks of sea ice cover, and their horizontal size (defined as half of their diameter) varies dramatically, with the smallest measured in centimeters and the largest in hundreds of kilometers. An area which is over 70% ice on its surface is said to be covered by pack ice. Fully formed sea ice can be forced together by currents and winds to form tall pressure ridges. On the other hand, active wave action can reduce sea ice to small, regularly shaped pieces, known as pancake ice. Sometimes, wind and wave activity "polishes" sea ice into perfectly spherical pieces known as ice eggs. On land The largest ice formations on Earth are the two ice sheets which almost completely cover the world's largest island, Greenland, and the continent of Antarctica. These ice sheets are over a kilometre thick on average and have existed for millions of years. Other major ice formations on land include ice caps, ice fields, ice streams and glaciers. In particular, the Hindu Kush region is known as the Earth's "Third Pole" due to the large number of glaciers it contains. They cover a vast area and have a combined volume of between 3,000 and 4,700 km3. These glaciers are nicknamed "Asian water towers", because their meltwater run-off feeds into rivers which provide water for an estimated two billion people. Permafrost refers to soil or underwater sediment which continuously remains below 0 °C (32 °F) for two years or more. 
The ice within permafrost is divided into four categories: pore ice, vein ice (also known as ice wedges), buried surface ice and intrasedimental ice (from the freezing of underground waters). One example of ice formation in permafrost areas is aufeis, layered ice that forms in Arctic and subarctic stream valleys. Ice, frozen in the stream bed, blocks normal groundwater discharge, and causes the local water table to rise, resulting in water discharge on top of the frozen layer. This water then freezes, causing the water table to rise further and repeat the cycle. The result is a stratified ice deposit, often several meters thick. Snow line and snow fields are two related concepts, in that snow fields accumulate on top of an ice deposit and ablate away to the equilibrium point (the snow line). On rivers and streams Ice which forms on moving water tends to be less uniform and stable than ice which forms on calm water. Ice jams (sometimes called "ice dams"), when broken chunks of ice pile up, are the greatest ice hazard on rivers. Ice jams can cause flooding, damage structures in or near the river, and damage vessels on the river. Ice jams can cause some hydropower industrial facilities to completely shut down. An ice dam is a blockage from the movement of a glacier which may produce a proglacial lake. Heavy ice flows in rivers can also damage vessels and require the use of an icebreaker vessel to keep navigation possible. Ice discs are circular formations of ice floating on river water. They form within eddy currents, and their position results in asymmetric melting, which makes them continuously rotate at a low speed. On lakes Ice forms on calm water from the shores, a thin layer spreading across the surface, and then downward. Ice on lakes is generally of four types: primary, secondary, superimposed and agglomerate. Primary ice forms first. Secondary ice forms below the primary ice in a direction parallel to the direction of the heat flow. Superimposed ice forms on top of the ice surface from rain, or from water which seeps up through cracks in the ice and which often settles when loaded with snow. An ice shove occurs when ice movement, caused by ice expansion and/or wind action, pushes ice onto the shores of lakes, often displacing the sediment that makes up the shoreline. Shelf ice is formed when floating pieces of ice are driven by the wind, piling up on the windward shore. This kind of ice may contain large air pockets under a thin surface layer, which makes it particularly hazardous to walk across. Another dangerous form of rotten ice to traverse on foot is candle ice, which develops in columns perpendicular to the surface of a lake. Because it lacks a firm horizontal structure, a person who has fallen through has nothing to hold onto to pull themselves out. As precipitation Snow and freezing rain Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. These droplets are able to remain liquid well below the nominal freezing point, because to freeze, a few molecules in the droplet need to get together by chance to form an arrangement similar to that in an ice lattice; then the droplet freezes around this "nucleus". Experiments show that this "homogeneous" nucleation of cloud droplets only occurs at very low temperatures. In warmer clouds an aerosol particle or "ice nucleus" must be present in (or in contact with) the droplet to act as a nucleus. 
Our understanding of what particles make efficient ice nuclei is poor – what we do know is that they are very rare compared to the cloud condensation nuclei on which liquid droplets form. Clays, desert dust and biological particles may be effective, although to what extent is unclear. Artificial nuclei are used in cloud seeding. The droplet then grows by condensation of water vapor onto the ice surfaces. An ice storm is a type of winter storm characterized by freezing rain, which produces a glaze of ice on surfaces, including roads and power lines. In the United States, a quarter of winter weather events produce glaze ice, and utilities need to be prepared to minimize damages. Hard forms Hail forms in storm clouds when supercooled water droplets freeze on contact with condensation nuclei, such as dust or dirt. The storm's updraft blows the hailstones to the upper part of the cloud. The updraft dissipates and the hailstones fall down, back into the updraft, and are lifted up again. Hailstones have a diameter of at least a few millimetres. Within METAR code, GR is used to indicate larger hail and GS to indicate smaller. Certain stone sizes are reported far more frequently than others in North America. Hailstones can grow to considerable sizes and weigh hundreds of grams. In large hailstones, latent heat released by further freezing may melt the outer shell of the hailstone. The hailstone then may undergo 'wet growth', where the liquid outer shell collects other smaller hailstones. The hailstone gains an ice layer and grows increasingly larger with each ascent. Once a hailstone becomes too heavy to be supported by the storm's updraft, it falls from the cloud. Hail forms in strong thunderstorm clouds, particularly those with intense updrafts, high liquid water content, great vertical extent, large water droplets, and where a good portion of the cloud layer is below freezing. Hail-producing clouds are often identifiable by their green coloration. The growth rate is maximized at a temperature somewhat below freezing, and becomes vanishingly small at much colder temperatures as supercooled water droplets become rare. For this reason, hail is most common within continental interiors of the mid-latitudes, as hail formation is considerably more likely when the freezing level is at a relatively low altitude. Entrainment of dry air into strong thunderstorms over continents can increase the frequency of hail by promoting evaporative cooling, which lowers the freezing level of thunderstorm clouds, giving hail a larger volume to grow in. Accordingly, hail is actually less common in the tropics despite a much higher frequency of thunderstorms than in the mid-latitudes, because the atmosphere over the tropics tends to be warmer over a much greater depth. Hail in the tropics occurs mainly at higher elevations. Ice pellets (METAR code PL) are a form of precipitation consisting of small, translucent balls of ice, which are usually smaller than hailstones. This form of precipitation is also referred to as "sleet" by the United States National Weather Service. (In British English "sleet" refers to a mixture of rain and snow.) Ice pellets typically form alongside freezing rain, when a wet warm front ends up between colder and drier atmospheric layers. There, raindrops both freeze and shrink in size due to evaporative cooling. So-called snow pellets, or graupel, form when multiple water droplets freeze onto snowflakes until a soft ball-like shape is formed. 
So-called "diamond dust", (METAR code IC) also known as ice needles or ice crystals, forms at temperatures approaching due to air with slightly higher moisture from aloft mixing with colder, surface-based air. On surfaces As water drips and re-freezes, it can form hanging icicles, or stalagmite-like structures on the ground. On sloped roofs, buildup of ice can produce an ice dam, which stops melt water from draining properly and potentially leads to damaging leaks. More generally, water vapor depositing onto surfaces due to high relative humidity and then freezing results in various forms of atmospheric icing, or frost. Inside buildings, this can be seen as ice on the surface of un-insulated windows. Hoar frost is common in the environment, particularly in the low-lying areas such as valleys. In Antarctica, the temperatures can be so low that electrostatic attraction is increased to the point hoarfrost on snow sticks together when blown by wind into tumbleweed-like balls known as yukimarimo. Sometimes, drops of water crystallize on cold objects as rime instead of glaze. Soft rime has a density between a quarter and two thirds that of pure ice, due to a high proportion of trapped air, which also makes soft rime appear white. Hard rime is denser, more transparent, and more likely to appear on ships and aircraft. Cold wind specifically causes what is known as advection frost when it collides with objects. When it occurs on plants, it often causes damage to them. Various methods exist to protect agricultural crops from frost - from simply covering them to using wind machines. In recent decades, irrigation sprinklers have been calibrated to spray just enough water to preemptively create a layer of ice that would form slowly and so avoid a sudden temperature shock to the plant, and not be so thick as to cause damage with its weight. Ablation Ablation of ice refers to both its melting and its dissolution. The melting of ice entails the breaking of hydrogen bonds between the water molecules. The ordering of the molecules in the solid breaks down to a less ordered state and the solid melts to become a liquid. This is achieved by increasing the internal energy of the ice beyond the melting point. When ice melts it absorbs as much energy as would be required to heat an equivalent amount of water by 80 °C. While melting, the temperature of the ice surface remains constant at 0 °C. The rate of the melting process depends on the efficiency of the energy exchange process. An ice surface in fresh water melts solely by free convection with a rate that depends linearly on the water temperature, T∞, when T∞ is less than 3.98 °C, and superlinearly when T∞ is equal to or greater than 3.98 °C, with the rate being proportional to (T∞ − 3.98 °C)α, with α =  for T∞ much greater than 8 °C, and α =  for in between temperatures T∞. In salty ambient conditions, dissolution rather than melting often causes the ablation of ice. For example, the temperature of the Arctic Ocean is generally below the melting point of ablating sea ice. The phase transition from solid to liquid is achieved by mixing salt and water molecules, similar to the dissolution of sugar in water, even though the water temperature is far below the melting point of the sugar. However, the dissolution rate is limited by salt concentration and is therefore slower than melting. Role in human activities Cooling Ice has long been valued as a means of cooling. 
By 400 BC, Persian engineers in Iran had already developed techniques for storing ice in the desert through the summer months. During the winter, ice was transported from harvesting pools and nearby mountains in large quantities to be stored in specially designed, naturally cooled refrigerators, called yakhchal (meaning ice storage). Yakhchals were large underground spaces (up to 5,000 m³) that had thick walls (at least two meters at the base) made of a specific type of mortar called sarooj, made from sand, clay, egg whites, lime, goat hair, and ash. The mortar was resistant to heat transfer, helping to keep the ice cool enough not to melt; it was also impenetrable by water. Yakhchals often included a qanat and a system of windcatchers that could lower internal temperatures to frigid levels, even during the heat of the summer. One use for the ice was to create chilled treats for royalty. Harvesting There were thriving industries in 16th–17th century England in which low-lying areas along the Thames Estuary were flooded during the winter; the ice was harvested in carts and stored between seasons in insulated wooden buildings, supplying the icehouses often found at large country houses, and was widely used to keep fish fresh when caught in distant waters. This was allegedly copied by an Englishman who had seen the same activity in China. Ice was imported into England from Norway on a considerable scale as early as 1823. In the United States, the first cargo of ice was sent from New York City to Charleston, South Carolina, in 1799, and by the first half of the 19th century, ice harvesting had become a big business. Frederic Tudor, who became known as the "Ice King", worked on developing better insulation products for long distance shipments of ice, especially to the tropics; this became known as the ice trade. Between 1812 and 1822, under Lloyd Hesketh Bamford Hesketh's instruction, Gwrych Castle was built with 18 large towers; one of those towers is called the 'Ice Tower', and its sole purpose was to store ice. Trieste sent ice to Egypt, Corfu, and Zante; Switzerland, to France; and Germany sometimes was supplied from Bavarian lakes. From the 1930s until 1994, the Hungarian Parliament building used ice harvested in the winter from Lake Balaton for air conditioning. Ice houses were used to store ice formed in the winter, to make ice available all year long, and an early type of refrigerator known as an icebox was cooled using a block of ice placed inside it. Many cities had a regular ice delivery service during the summer. The advent of artificial refrigeration technology made the delivery of ice obsolete. Ice is still harvested for ice and snow sculpture events. For example, a swing saw is used to get ice for the Harbin International Ice and Snow Sculpture Festival each year from the frozen surface of the Songhua River. Artificial production The earliest known written account of a process for artificially making ice appears in the 13th-century writings of the Arab historian Ibn Abu Usaybia, in his medical book Kitab Uyun al-anba fi tabaqat-al-atibba, in which he attributes the process to an even older author, Ibn Bakhtawayhi, of whom nothing is known. Ice is now produced on an industrial scale, for uses including food storage and processing, chemical manufacturing, concrete mixing and curing, and consumer or packaged ice. Most commercial icemakers produce three basic types of fragmentary ice: flake, tubular and plate, using a variety of techniques. 
Large batch ice makers can produce up to 75 tons of ice per day. In 2002, there were 426 commercial ice-making companies in the United States, with a combined value of shipments of $595,487,000. Home refrigerators can also make ice with a built-in icemaker, which will typically make ice cubes or crushed ice. The first such device was presented in 1965 by Frigidaire. Land travel Ice forming on roads is a common winter hazard, and black ice is particularly dangerous because it is very difficult to see. It is very transparent, and it often forms specifically in shaded (and therefore cooler and darker) areas, e.g. beneath overpasses. Whenever there is freezing rain or snow which occurs at a temperature near the melting point, it is common for ice to build up on the windows of vehicles. Often, snow melts, re-freezes, and forms a fragmented layer of ice which effectively "glues" snow to the window. In this case, the frozen mass is commonly removed with ice scrapers. A thin layer of ice crystals can also form on the inside surface of car windows during sufficiently cold weather. In the 1970s and 1980s, some vehicles, such as the Ford Thunderbird, could be fitted with heated windshields as a result. This technology fell out of style as it was too expensive and prone to damage, but rear-window defrosters are cheaper to maintain and so are more widespread. In sufficiently cold places, the layers of ice on water surfaces can get thick enough for ice roads to be built. Some regulations specify that the minimum safe thickness is for a person, for a snowmobile and for an automobile lighter than 5 tonnes. For trucks, the effective thickness varies with load; for instance, a vehicle with a 9-tonne total weight requires a thickness of . Notably, the speed limit for a vehicle moving on a road which just meets its minimum safe thickness is 25 km/h (15 mph), going up to 35 km/h (25 mph) if the road's thickness is two or more times the minimum safe value. There is a known instance where a railroad has been built on ice. The most famous ice road was the Road of Life across Lake Ladoga. It operated in the winters of 1941–1942 and 1942–1943, when it was the only land route available to the Soviet Union to relieve the Siege of Leningrad by the German Army Group North. The trucks moved hundreds of thousands of tonnes of supplies into the city, and hundreds of thousands of civilians were evacuated. It is now a World Heritage Site. Water-borne travel For ships, ice presents two distinct hazards. Firstly, spray and freezing rain can produce an ice build-up on the superstructure of a vessel sufficient to make it unstable, potentially to the point of capsizing. In the past, crewmembers were regularly forced to manually hack off the ice build-up. Since the 1980s, spraying de-icing chemicals or melting the ice with hot water or steam hoses has become more common. Secondly, icebergs – large masses of ice floating in water (typically created when glaciers reach the sea) – can be dangerous if struck by a ship when underway. Icebergs have been responsible for the sinking of many ships, the most famous being the Titanic. For harbors near the poles, being ice-free, ideally all year long, is an important advantage. Examples are Murmansk (Russia), Petsamo (Russia, formerly Finland), and Vardø (Norway). Harbors which are not ice-free are opened up using specialized vessels, called icebreakers. Icebreakers are also used to open routes through the sea ice for other vessels, as the only alternative is to find the openings called "polynyas" or "leads". 
Widespread production of icebreakers began during the 19th century. Earlier designs simply had reinforced bows in a spoon-like or diagonal shape to effectively crush the ice. Later designs attached a forward propeller underneath the protruding bow, as the typical rear propellers were incapable of effectively steering the ship through the ice. Air travel For aircraft, ice can cause a number of dangers. As an aircraft climbs, it passes through air layers of different temperature and humidity, some of which may be conducive to ice formation. If ice forms on the wings or control surfaces, this may adversely affect the flying qualities of the aircraft. In 1919, during the first non-stop flight across the Atlantic, the British aviators Captain John Alcock and Lieutenant Arthur Whitten Brown encountered such icing conditions – Brown left the cockpit and climbed onto the wing several times to remove ice which was covering the engine air intakes of the Vickers Vimy aircraft they were flying. One vulnerability to icing that is associated with reciprocating internal combustion engines is the carburetor. As air is sucked through the carburetor into the engine, the local air pressure is lowered, which causes adiabatic cooling. Thus, in humid near-freezing conditions, the carburetor will be colder, and tend to ice up. This will block the supply of air to the engine, and cause it to fail. Between 1969 and 1975, 468 such instances were recorded, causing 75 aircraft losses, 44 fatalities and 202 serious injuries. In response, carburetor air intake heaters were developed. In addition, reciprocating engines with fuel injection do not require carburetors in the first place. Jet engines do not experience carburetor icing, but they can be affected when the moisture inherently present in jet fuel freezes and forms ice crystals, which can potentially clog the fuel intake to the engine. Fuel heaters and/or de-icing additives are used to address the issue. Recreation and sports Ice plays a central role in winter recreation and in many sports such as ice skating, tour skating, ice hockey, bandy, ice fishing, ice climbing, curling, broomball and sled racing on bobsled, luge and skeleton. Many of the different sports played on ice get international attention every four years during the Winter Olympic Games. Small boat-like craft can be mounted on blades and be driven across the ice by sails. This sport is known as ice yachting, and it has been practiced for centuries. Another vehicular sport is ice racing, where drivers must speed on lake ice, while also controlling the skid of their vehicle (similar in some ways to dirt track racing). The sport has even been modified for ice rinks. Other uses As thermal ballast Ice is still used to cool and preserve food in portable coolers. Ice cubes or crushed ice can be used to cool drinks. As the ice melts, it absorbs heat and keeps the drink near 0 °C. Ice can be used as part of an air conditioning system, using battery- or solar-powered fans to blow hot air over the ice. This is especially useful during heat waves when power is out and standard (electrically powered) air conditioners do not work. Ice can be used (like other cold packs) to reduce swelling (by decreasing blood flow) and pain by pressing it against an area of the body. As structural material Engineers used the substantial strength of pack ice when they constructed Antarctica's first floating ice pier in 1973. Such ice piers are used during cargo operations to load and offload ships. 
Fleet operations personnel make the floating pier during the winter. They build upon naturally occurring frozen seawater in McMurdo Sound until the dock reaches a depth of about . Ice piers are inherently temporary structures, although some can last as long as 10 years. Once a pier is no longer usable, it is towed to sea with an icebreaker. Structures and ice sculptures are built out of large chunks of ice or by spraying water. The structures are mostly ornamental (as in the case of ice castles) and not practical for long-term habitation. Ice hotels exist on a seasonal basis in a few cold areas. Igloos are another example of a temporary structure, made primarily from snow. Ice can also be used destructively. In mining, drilling holes in rock structures and then pouring in water during cold weather is an accepted alternative to using dynamite, as the rock cracks when the freezing water expands. During World War II, Project Habbakuk was an Allied programme which investigated the use of pykrete (wood fibers mixed with ice) as a possible material for warships, especially aircraft carriers, due to the ease with which a large-decked vessel immune to torpedoes could be constructed from ice. A small-scale prototype was built, but it soon turned out that the project would cost far more than a conventional aircraft carrier, while the vessel would be many times slower and also vulnerable to melting. Ice has even been used as the material for a variety of musical instruments, for example by percussionist Terje Isungset. Impacts of climate change Historical Greenhouse gas emissions from human activities unbalance the Earth's energy budget and so cause an accumulation of heat. About 90% of that heat is added to ocean heat content, 1% is retained in the atmosphere and 3–4% goes to melt major parts of the cryosphere. Between 1994 and 2017, 28 trillion tonnes of ice were lost around the globe as a result. Arctic sea ice decline accounted for the single largest loss (7.6 trillion tonnes), followed by the melting of Antarctica's ice shelves (6.5 trillion tonnes), the retreat of mountain glaciers (6.1 trillion tonnes), the melting of the Greenland ice sheet (3.8 trillion tonnes) and finally the melting of the Antarctic ice sheet (2.5 trillion tonnes) and the limited losses of the sea ice in the Southern Ocean (0.9 trillion tonnes). Other than the sea ice (which already displaces water due to Archimedes' principle), these losses are a major cause of sea level rise (SLR) and they are expected to intensify in the future. In particular, the melting of the West Antarctic ice sheet may accelerate substantially as the floating ice shelves are lost and can no longer buttress the glaciers. This would trigger poorly understood marine ice sheet instability processes, which could then increase the SLR expected for the end of the century (between and , depending on future warming) by tens of centimeters more. Ice loss in Greenland and Antarctica also produces large quantities of fresh meltwater, which disrupts the Atlantic meridional overturning circulation (AMOC) and the Southern Ocean overturning circulation, respectively. These two halves of the thermohaline circulation are very important for the global climate. A continuation of high meltwater flows may cause a severe disruption (up to the point of a "collapse") of either circulation, or even both of them. Either event would be considered an example of tipping points in the climate system, because it would be extremely difficult to reverse. 
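The individual loss figures above can be translated into approximate sea-level contributions with a simple mass-balance estimate: spread the meltwater mass over the ocean surface. The Python sketch below does this for the land-ice losses listed above. The ocean surface area is a standard round figure assumed here for illustration rather than a value from this article, and the estimate deliberately excludes sea ice and ice shelves, which already displace water, and ignores second-order effects.

    # Back-of-the-envelope sea-level contribution of land-ice loss:
    # rise = mass / (density of water * ocean surface area).
    OCEAN_AREA_M2 = 3.6e14       # approximate global ocean surface area (assumed round figure)
    WATER_DENSITY_KG_M3 = 1000.0

    def sea_level_rise_mm(ice_loss_trillion_tonnes: float) -> float:
        """Convert an ice-mass loss (trillion tonnes) into millimetres of sea-level rise."""
        mass_kg = ice_loss_trillion_tonnes * 1e12 * 1000.0   # trillion tonnes -> kg
        rise_m = mass_kg / (WATER_DENSITY_KG_M3 * OCEAN_AREA_M2)
        return rise_m * 1000.0

    # Loss figures quoted in the text for 1994-2017, excluding floating ice:
    for label, loss in [("Mountain glaciers", 6.1),
                        ("Greenland ice sheet", 3.8),
                        ("Antarctic ice sheet", 2.5)]:
        print(f"{label}: roughly {sea_level_rise_mm(loss):.0f} mm of sea-level rise")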
The AMOC is generally not expected to collapse during the 21st century, while there is only limited knowledge about the Southern Ocean circulation. Another example of an ice-related tipping point is permafrost thaw. While the organic content in the permafrost causes carbon dioxide and methane emissions once it thaws and begins to decompose, the melting of ground ice liquefies the soil, causing anything built on the former permafrost to collapse. By 2050, the economic damages from such infrastructure loss are expected to cost tens of billions of dollars. Predictions In the future, the Arctic Ocean is likely to lose effectively all of its sea ice during at least some Septembers (the end of the ice melting season), although some of the ice would refreeze during the winter. For instance, an ice-free September is likely to occur once in every 40 years if global warming is at , but would occur once in every 8 years at and once in every 1.5 years at . This would affect the regional and global climate due to the ice-albedo feedback. Because ice is highly reflective of solar energy, persistent sea ice cover lowers local temperatures. Once that ice cover melts, the darker ocean waters begin to absorb more heat, which also helps to melt the remaining ice. Global losses of sea ice between 1992 and 2018, almost all of them in the Arctic, have already had the same impact as 10% of greenhouse gas emissions over the same period. If all the Arctic sea ice were gone every year between June and September (polar day, when the Sun is constantly shining), temperatures in the Arctic would increase by over , while the global temperatures would increase by around . By 2100, at least a quarter of mountain glaciers outside of Greenland and Antarctica would melt, and effectively all ice caps on non-polar mountains are likely to be lost around 200 years after global warming reaches . The West Antarctic ice sheet is highly vulnerable and will likely disappear even if the warming does not progress further, although it could take around 2,000 years before its loss is complete. The Greenland ice sheet will most likely be lost with sustained warming between and , although its total loss requires around 10,000 years. Finally, the East Antarctic ice sheet will take at least 10,000 years to melt entirely, which requires a warming of between and . If all the ice on Earth melted, it would result in about of sea level rise, with some coming from East Antarctica. Due to isostatic rebound, the ice-free land would eventually become higher in Greenland and  in Antarctica, on average. Areas in the center of each landmass would become up to and  higher, respectively. The impact on global temperatures from losing West Antarctica, mountain glaciers and the Greenland ice sheet is estimated at , and , respectively, while the lack of the East Antarctic ice sheet would increase the temperatures by . Non-water The solid phases of several other volatile substances are also referred to as ices; generally a volatile is classed as an ice if its melting or sublimation point lies above or around (assuming standard atmospheric pressure). The best known example is dry ice, the solid form of carbon dioxide. Its sublimation/deposition point occurs at . A "magnetic analogue" of ice is also realized in some insulating magnetic materials in which the magnetic moments mimic the position of protons in water ice and obey energetic constraints similar to the Bernal-Fowler ice rules arising from the geometrical frustration of the proton configuration in water ice. 
These materials are called spin ice.
Ice
[ "Physics", "Environmental_science" ]
8,904
[ "Physical phenomena", "Hydrology", "Applied and interdisciplinary physics", "Oceanography", "Cryosphere", "Optical phenomena", "Materials", "Transparent materials", "Matter" ]
14,951
https://en.wikipedia.org/wiki/Ionic%20bonding
Ionic bonding is a type of chemical bonding that involves the electrostatic attraction between oppositely charged ions, or between two atoms with sharply different electronegativities, and is the primary interaction occurring in ionic compounds. It is one of the main types of bonding, along with covalent bonding and metallic bonding. Ions are atoms (or groups of atoms) with an electrostatic charge. Atoms that gain electrons make negatively charged ions (called anions). Atoms that lose electrons make positively charged ions (called cations). This transfer of electrons is known as electrovalence in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be more complex, e.g. molecular ions like or . In simpler words, an ionic bond results from the transfer of electrons from a metal to a non-metal to obtain a full valence shell for both atoms. Clean ionic bonding — in which one atom or molecule completely transfers an electron to another — cannot exist: all ionic compounds have some degree of covalent bonding or electron sharing. Thus, the term "ionic bonding" is given when the ionic character is greater than the covalent character – that is, a bond in which there is a large difference in electronegativity between the two atoms, causing the bonding to be more polar (ionic) than in covalent bonding where electrons are shared more equally. Bonds with partially ionic and partially covalent characters are called polar covalent bonds. Ionic compounds conduct electricity when molten or in solution, typically not when solid. Ionic compounds generally have a high melting point, depending on the charge of the ions they consist of. The higher the charges the stronger the cohesive forces and the higher the melting point. They also tend to be soluble in water; the stronger the cohesive forces, the lower the solubility. Overview Atoms that have an almost full or almost empty valence shell tend to be very reactive. Strongly electronegative atoms (such as halogens) often have only one or two empty electron states in their valence shell, and frequently bond with other atoms or gain electrons to form anions. Weakly electronegative atoms (such as alkali metals) have relatively few valence electrons, which can easily be lost to strongly electronegative atoms. As a result, weakly electronegative atoms tend to distort their electron cloud and form cations. Properties of ionic bonds They are considered to be among the strongest of all types of chemical bonds. This often causes ionic compounds to be very stable. Ionic bonds have high bond energy. Bond energy is the mean amount of energy required to break the bond in the gaseous state. Most ionic compounds exist in the form of a crystal structure, in which the ions occupy the corners of the crystal. Such a structure is called a crystal lattice. Ionic compounds lose their crystal lattice structure and break up into ions when dissolved in water or any other polar solvent. This process is called solvation. The presence of these free ions makes aqueous ionic compound solutions good conductors of electricity. The same occurs when the compounds are heated above their melting point in a process known as melting. Formation Ionic bonding can result from a redox reaction when atoms of an element (usually metal), whose ionization energy is low, give some of their electrons to achieve a stable electron configuration. In doing so, cations are formed. 
An atom of another element (usually nonmetal) with greater electron affinity accepts one or more electrons to attain a stable electron configuration, and after accepting electrons an atom becomes an anion. Typically, the stable electron configuration is one of the noble gases for elements in the s-block and the p-block, and particular stable electron configurations for d-block and f-block elements. The electrostatic attraction between the anions and cations leads to the formation of a solid with a crystallographic lattice in which the ions are stacked in an alternating fashion. In such a lattice, it is usually not possible to distinguish discrete molecular units, so that the compounds formed are not molecular. However, the ions themselves can be complex and form molecular ions like the acetate anion or the ammonium cation. For example, common table salt is sodium chloride. When sodium (Na) and chlorine (Cl) are combined, the sodium atoms each lose an electron, forming cations (Na+), and the chlorine atoms each gain an electron to form anions (Cl−). These ions are then attracted to each other in a 1:1 ratio to form sodium chloride (NaCl): Na + Cl → Na+ + Cl− → NaCl. However, to maintain charge neutrality, strict ratios between anions and cations are observed so that ionic compounds, in general, obey the rules of stoichiometry despite not being molecular compounds. For compounds that are transitional to the alloys and possess mixed ionic and metallic bonding, this may not be the case anymore. Many sulfides, e.g., do form non-stoichiometric compounds. Many ionic compounds are referred to as salts as they can also be formed by the neutralization reaction of an Arrhenius base like NaOH with an Arrhenius acid like HCl: NaOH + HCl → NaCl + H2O. The salt NaCl is then said to consist of the acid rest Cl− and the base rest Na+. The removal of electrons to form the cation is endothermic, raising the system's overall energy. There may also be energy changes associated with breaking of existing bonds or the addition of more than one electron to form anions. However, the action of the anion's accepting the cation's valence electrons and the subsequent attraction of the ions to each other releases (lattice) energy and, thus, lowers the overall energy of the system. Ionic bonding will occur only if the overall energy change for the reaction is favorable. In general, the reaction is exothermic, but, e.g., the formation of mercuric oxide (HgO) is endothermic. The charge of the resulting ions is a major factor in the strength of ionic bonding, e.g. a salt C+A− is held together by electrostatic forces roughly four times weaker than C2+A2− according to Coulomb's law, where C and A represent a generic cation and anion respectively. The sizes of the ions and the particular packing of the lattice are ignored in this rather simplistic argument. Structures Ionic compounds in the solid state form lattice structures. The two principal factors in determining the form of the lattice are the relative charges of the ions and their relative sizes. Some structures are adopted by a number of compounds; for example, the structure of the rock salt sodium chloride is also adopted by many alkali halides, and binary oxides such as magnesium oxide. Pauling's rules provide guidelines for predicting and rationalizing the crystal structures of ionic crystals. Strength of the bonding For a solid crystalline ionic compound the enthalpy change in forming the solid from gaseous ions is termed the lattice energy. 
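Before turning to how lattice energies are determined, the Coulomb's-law scaling mentioned above (a C2+A2− salt held together roughly four times more strongly than a C+A− salt) can be checked directly with a short Python sketch. The ion separation used below is an arbitrary illustrative value, not a figure from this article.

    # Coulomb energy of a single ion pair, E = k * q1 * q2 / r.
    # The 280 pm separation is an arbitrary illustrative value.
    K_COULOMB = 8.9875e9            # N*m^2/C^2
    ELEMENTARY_CHARGE = 1.602e-19   # C

    def pair_energy_joules(z1: int, z2: int, separation_m: float) -> float:
        """Electrostatic potential energy of two point charges z1*e and z2*e."""
        q1 = z1 * ELEMENTARY_CHARGE
        q2 = z2 * ELEMENTARY_CHARGE
        return K_COULOMB * q1 * q2 / separation_m

    r = 280e-12  # metres
    e_single = pair_energy_joules(+1, -1, r)   # C+A- pair
    e_double = pair_energy_joules(+2, -2, r)   # C2+A2- pair
    print(f"C+A-  : {e_single:.2e} J")
    print(f"C2+A2-: {e_double:.2e} J")
    print(f"Ratio : {e_double / e_single:.1f}")   # 4.0, the factor stated in the text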
The experimental value for the lattice energy can be determined using the Born–Haber cycle. It can also be calculated (predicted) using the Born–Landé equation as the sum of the electrostatic potential energy, calculated by summing interactions between cations and anions, and a short-range repulsive potential energy term. The electrostatic potential can be expressed in terms of the interionic separation and a constant (Madelung constant) that takes account of the geometry of the crystal. The further away from the nucleus the weaker the shield. The Born–Landé equation gives a reasonable fit to the lattice energy of, e.g., sodium chloride, where the calculated (predicted) value is −756 kJ/mol, which compares to −787 kJ/mol using the Born–Haber cycle. In aqueous solution the binding strength can be described by the Bjerrum or Fuoss equation as a function of the ion charges, rather independent of the nature of the ions such as polarizability or size. The strength of salt bridges is most often evaluated by measurements of equilibria between molecules containing cationic and anionic sites, most often in solution. Equilibrium constants in water indicate additive free energy contributions for each salt bridge. Another method for the identification of hydrogen bonds in complicated molecules is crystallography, sometimes also NMR-spectroscopy. The attractive forces defining the strength of ionic bonding can be modeled by Coulomb's law. Ionic bond strengths are typically (cited ranges vary) between 170 and 1500 kJ/mol. Polarization power effects Ions in crystal lattices of purely ionic compounds are spherical; however, if the positive ion is small and/or highly charged, it will distort the electron cloud of the negative ion, an effect summarised in Fajans' rules. This polarization of the negative ion leads to a build-up of extra charge density between the two nuclei, that is, to partial covalency. Larger negative ions are more easily polarized, but the effect is usually important only when positive ions with charges of 3+ (e.g., Al3+) are involved. However, 2+ ions (Be2+) or even 1+ (Li+) show some polarizing power because their sizes are so small (e.g., LiI is ionic but has some covalent bonding present). Note that this is not the ionic polarization effect that refers to the displacement of ions in the lattice due to the application of an electric field. Comparison with covalent bonding In ionic bonding, the atoms are bound by the attraction of oppositely charged ions, whereas, in covalent bonding, atoms are bound by sharing electrons to attain stable electron configurations. In covalent bonding, the molecular geometry around each atom is determined by valence shell electron pair repulsion (VSEPR) rules, whereas, in ionic materials, the geometry follows maximum packing rules. One could say that covalent bonding is more directional in the sense that the energy penalty for not adhering to the optimum bond angles is large, whereas ionic bonding has no such penalty. There are no shared electron pairs to repel each other, so the ions should simply be packed as efficiently as possible. This often leads to much higher coordination numbers. In NaCl, each ion has 6 bonds and all bond angles are 90°. In CsCl the coordination number is 8. By comparison, carbon typically has a maximum of four bonds. Purely ionic bonding cannot exist, as the proximity of the entities involved in the bonding allows some degree of sharing electron density between them. Therefore, all ionic bonding has some covalent character. 
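Returning to the lattice-energy discussion above, the Born–Landé estimate for sodium chloride can be reproduced numerically with the short Python sketch below. The Madelung constant for the rock-salt structure, the interionic separation and the Born exponent used here are standard textbook values assumed for illustration; they are not quoted in this article.

    import math

    # Born-Lande estimate of the lattice energy of NaCl:
    # E = -(N_A * M * z+ * z- * e^2) / (4 * pi * eps0 * r0) * (1 - 1/n)
    N_A = 6.022e23          # Avogadro's number, 1/mol
    M = 1.7476              # Madelung constant for the rock-salt structure (assumed)
    Z_PLUS, Z_MINUS = 1, 1  # charges of Na+ and Cl-
    E_CHARGE = 1.602e-19    # elementary charge, C
    EPS0 = 8.854e-12        # vacuum permittivity, F/m
    R0 = 282e-12            # Na-Cl separation, m (assumed textbook value)
    N_BORN = 8              # Born exponent for NaCl (assumed textbook value)

    energy_j_per_mol = -(N_A * M * Z_PLUS * Z_MINUS * E_CHARGE**2) / (
        4 * math.pi * EPS0 * R0) * (1 - 1 / N_BORN)
    print(f"Born-Lande lattice energy of NaCl: {energy_j_per_mol / 1000:.0f} kJ/mol")
    # Prints roughly -750 kJ/mol, close to the -756 kJ/mol figure quoted above.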
As noted above, bonding is considered ionic where the ionic character is greater than the covalent character. The larger the difference in electronegativity between the two types of atoms involved in the bonding, the more ionic (polar) it is. Bonds with partially ionic and partially covalent character are called polar covalent bonds. For example, Na–Cl and Mg–O interactions have a few percent covalency, while Si–O bonds are usually ~50% ionic and ~50% covalent. Pauling estimated that an electronegativity difference of 1.7 (on the Pauling scale) corresponds to 50% ionic character, so that a difference greater than 1.7 corresponds to a bond which is predominantly ionic. Ionic character in covalent bonds can be directly measured for atoms having quadrupolar nuclei (2H, 14N, 81,79Br, 35,37Cl or 127I). These nuclei are generally the objects of nuclear quadrupole resonance (NQR) and nuclear magnetic resonance (NMR) studies. Interactions between the nuclear quadrupole moments Q and the electric field gradients (EFG) are characterized via the nuclear quadrupole coupling constant QCC = e²qzzQ/h, where the eqzz term corresponds to the principal component of the EFG tensor and e is the elementary charge. In turn, the electric field gradient opens the way to description of bonding modes in molecules when the QCC values are accurately determined by NMR or NQR methods. In general, when ionic bonding occurs in the solid (or liquid) state, it is not possible to talk about a single "ionic bond" between two individual atoms, because the cohesive forces that keep the lattice together are of a more collective nature. This is quite different in the case of covalent bonding, where we can often speak of a distinct bond localized between two particular atoms. However, even if ionic bonding is combined with some covalency, the result is not necessarily discrete bonds of a localized character. In such cases, the resulting bonding often requires description in terms of a band structure consisting of gigantic molecular orbitals spanning the entire crystal. Thus, the bonding in the solid often retains its collective rather than localized nature. When the difference in electronegativity is decreased, the bonding may then lead to a semiconductor, a semimetal or eventually a metallic conductor with metallic bonding.
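As a numerical footnote to the electronegativity discussion above: Pauling's empirical relation estimates the ionic fraction of a bond as 1 − exp(−(Δχ)²/4). The Python sketch below applies it to a couple of representative bonds; the electronegativity values used are standard Pauling-scale figures assumed for illustration and are not quoted in this article.

    import math

    def pauling_ionic_fraction(delta_chi: float) -> float:
        """Pauling's estimate of fractional ionic character from an
        electronegativity difference: 1 - exp(-(delta_chi**2) / 4)."""
        return 1.0 - math.exp(-(delta_chi ** 2) / 4.0)

    # Standard Pauling electronegativities (assumed): H 2.20, Si 1.90, O 3.44, Cl 3.16
    for bond, delta in [("H-Cl", 3.16 - 2.20),
                        ("Si-O", 3.44 - 1.90),
                        ("crossover", 1.7)]:
        print(f"{bond}: ~{100 * pauling_ionic_fraction(delta):.0f}% ionic")
    # A difference of 1.7 gives ~51%, the 50% crossover Pauling described; Si-O comes
    # out around 45%, broadly consistent with the ~50% quoted above. Such estimates
    # vary with the method used.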
Ionic bonding
[ "Physics", "Chemistry", "Materials_science" ]
2,774
[ "Matter", "Condensed matter physics", "nan", "Nanotechnology", "Chemical bonding", "Ions", "Supramolecular chemistry" ]
14,958
https://en.wikipedia.org/wiki/Immune%20system
The immune system is a network of biological systems that protects an organism from diseases. It detects and responds to a wide variety of pathogens, from viruses to bacteria, as well as cancer cells, parasitic worms, and also objects such as wood splinters, distinguishing them from the organism's own healthy tissue. Many species have two major subsystems of the immune system. The innate immune system provides a preconfigured response to broad groups of situations and stimuli. The adaptive immune system provides a tailored response to each stimulus by learning to recognize molecules it has previously encountered. Both use molecules and cells to perform their functions. Nearly all organisms have some kind of immune system. Bacteria have a rudimentary immune system in the form of enzymes that protect against viral infections. Other basic immune mechanisms evolved in ancient plants and animals and remain in their modern descendants. These mechanisms include phagocytosis, antimicrobial peptides called defensins, and the complement system. Jawed vertebrates, including humans, have even more sophisticated defense mechanisms, including the ability to adapt to recognize pathogens more efficiently. Adaptive (or acquired) immunity creates an immunological memory leading to an enhanced response to subsequent encounters with that same pathogen. This process of acquired immunity is the basis of vaccination. Dysfunction of the immune system can cause autoimmune diseases, inflammatory diseases and cancer. Immunodeficiency occurs when the immune system is less active than normal, resulting in recurring and life-threatening infections. In humans, immunodeficiency can be the result of a genetic disease such as severe combined immunodeficiency, acquired conditions such as HIV/AIDS, or the use of immunosuppressive medication. Autoimmunity results from a hyperactive immune system attacking normal tissues as if they were foreign organisms. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Immunology covers the study of all aspects of the immune system. Layered defense The immune system protects its host from infection with layered defenses of increasing specificity. Physical barriers prevent pathogens such as bacteria and viruses from entering the organism. If a pathogen breaches these barriers, the innate immune system provides an immediate, but non-specific response. Innate immune systems are found in all animals. If pathogens successfully evade the innate response, vertebrates possess a second layer of protection, the adaptive immune system, which is activated by the innate response. Here, the immune system adapts its response during an infection to improve its recognition of the pathogen. This improved response is then retained after the pathogen has been eliminated, in the form of an immunological memory, and allows the adaptive immune system to mount faster and stronger attacks each time this pathogen is encountered. Both innate and adaptive immunity depend on the ability of the immune system to distinguish between self and non-self molecules. In immunology, self molecules are components of an organism's body that can be distinguished from foreign substances by the immune system. Conversely, non-self molecules are those recognized as foreign molecules. 
One class of non-self molecules are called antigens (originally named for being antibody generators) and are defined as substances that bind to specific immune receptors and elicit an immune response. Surface barriers Several barriers protect organisms from infection, including mechanical, chemical, and biological barriers. The waxy cuticle of most leaves, the exoskeleton of insects, the shells and membranes of externally deposited eggs, and skin are examples of mechanical barriers that are the first line of defense against infection. Organisms cannot be completely sealed from their environments, so systems act to protect body openings such as the lungs, intestines, and the genitourinary tract. In the lungs, coughing and sneezing mechanically eject pathogens and other irritants from the respiratory tract. The flushing action of tears and urine also mechanically expels pathogens, while mucus secreted by the respiratory and gastrointestinal tract serves to trap and entangle microorganisms. Chemical barriers also protect against infection. The skin and respiratory tract secrete antimicrobial peptides such as the β-defensins. Enzymes such as lysozyme and phospholipase A2 in saliva, tears, and breast milk are also antibacterials. Vaginal secretions serve as a chemical barrier following menarche, when they become slightly acidic, while semen contains defensins and zinc to kill pathogens. In the stomach, gastric acid serves as a chemical defense against ingested pathogens. Within the genitourinary and gastrointestinal tracts, commensal flora serve as biological barriers by competing with pathogenic bacteria for food and space and, in some cases, changing the conditions in their environment, such as pH or available iron. As a result, the probability that pathogens will reach sufficient numbers to cause illness is reduced. Innate immune system Microorganisms or toxins that successfully enter an organism encounter the cells and mechanisms of the innate immune system. The innate response is usually triggered when microbes are identified by pattern recognition receptors, which recognize components that are conserved among broad groups of microorganisms, or when damaged, injured or stressed cells send out alarm signals, many of which are recognized by the same receptors as those that recognize pathogens. Innate immune defenses are non-specific, meaning these systems respond to pathogens in a generic way. This system does not confer long-lasting immunity against a pathogen. The innate immune system is the dominant system of host defense in most organisms, and the only one in plants. Immune sensing Cells in the innate immune system use pattern recognition receptors to recognize molecular structures that are produced by pathogens. They are proteins expressed, mainly, by cells of the innate immune system, such as dendritic cells, macrophages, monocytes, neutrophils, and epithelial cells, to identify two classes of molecules: pathogen-associated molecular patterns (PAMPs), which are associated with microbial pathogens, and damage-associated molecular patterns (DAMPs), which are associated with components of host's cells that are released during cell damage or cell death. Recognition of extracellular or endosomal PAMPs is mediated by transmembrane proteins known as toll-like receptors (TLRs). TLRs share a typical structural motif, the leucine rich repeats (LRRs), which give them a curved shape. 
Toll-like receptors were first discovered in Drosophila and trigger the synthesis and secretion of cytokines and activation of other host defense programs that are necessary for both innate and adaptive immune responses. Ten toll-like receptors have been described in humans. Cells in the innate immune system also have pattern recognition receptors inside the cell, which detect infection or cell damage. Three major classes of these "cytosolic" receptors are NOD–like receptors, RIG (retinoic acid-inducible gene)-like receptors, and cytosolic DNA sensors. Innate immune cells Some leukocytes (white blood cells) act like independent, single-celled organisms and are the second arm of the innate immune system. The innate leukocytes include the "professional" phagocytes (macrophages, neutrophils, and dendritic cells). These cells identify and eliminate pathogens, either by attacking larger pathogens through contact or by engulfing and then killing microorganisms. The other cells involved in the innate response include innate lymphoid cells, mast cells, eosinophils, basophils, and natural killer cells. Phagocytosis is an important feature of cellular innate immunity performed by cells called phagocytes that engulf pathogens or particles. Phagocytes generally patrol the body searching for pathogens, but can be called to specific locations by cytokines. Once a pathogen has been engulfed by a phagocyte, it becomes trapped in an intracellular vesicle called a phagosome, which subsequently fuses with another vesicle called a lysosome to form a phagolysosome. The pathogen is killed by the activity of digestive enzymes or following a respiratory burst that releases free radicals into the phagolysosome. Phagocytosis evolved as a means of acquiring nutrients, but this role was extended in phagocytes to include engulfment of pathogens as a defense mechanism. Phagocytosis probably represents the oldest form of host defense, as phagocytes have been identified in both vertebrate and invertebrate animals. Neutrophils and macrophages are phagocytes that travel throughout the body in pursuit of invading pathogens. Neutrophils are normally found in the bloodstream and are the most abundant type of phagocyte, representing 50% to 60% of total circulating leukocytes. During the acute phase of inflammation, neutrophils migrate toward the site of inflammation in a process called chemotaxis and are usually the first cells to arrive at the scene of infection. Macrophages are versatile cells that reside within tissues and produce an array of chemicals including enzymes, complement proteins, and cytokines. They can also act as scavengers that rid the body of worn-out cells and other debris and as antigen-presenting cells (APCs) that activate the adaptive immune system. Dendritic cells are phagocytes in tissues that are in contact with the external environment; therefore, they are located mainly in the skin, nose, lungs, stomach, and intestines. They are named for their resemblance to neuronal dendrites, as both have many spine-like projections. Dendritic cells serve as a link between the bodily tissues and the innate and adaptive immune systems, as they present antigens to T cells, one of the key cell types of the adaptive immune system. Granulocytes are leukocytes that have granules in their cytoplasm. In this category are neutrophils, mast cells, basophils, and eosinophils. Mast cells reside in connective tissues and mucous membranes and regulate the inflammatory response. They are most often associated with allergy and anaphylaxis. 
Basophils and eosinophils are related to neutrophils. They secrete chemical mediators that are involved in defending against parasites and play a role in allergic reactions, such as asthma. Innate lymphoid cells (ILCs) are a group of innate immune cells that are derived from common lymphoid progenitor and belong to the lymphoid lineage. These cells are defined by the absence of antigen-specific B- or T-cell receptor (TCR) because of the lack of recombination activating gene. ILCs do not express myeloid or dendritic cell markers. Natural killer cells (NK cells) are lymphocytes and a component of the innate immune system that does not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as "missing self". This term describes cells with low levels of a cell-surface marker called MHC I (major histocompatibility complex)—a situation that can arise in viral infections of host cells. Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin receptors, which essentially put the brakes on NK cells. Inflammation Inflammation is one of the first responses of the immune system to infection. The symptoms of inflammation are redness, swelling, heat, and pain, which are caused by increased blood flow into tissue. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells. Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation and leukotrienes that attract certain white blood cells (leukocytes). Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have antiviral effects, such as shutting down protein synthesis in the host cell. Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote the healing of any damaged tissue following the removal of pathogens. The pattern-recognition receptors called inflammasomes are multiprotein complexes (consisting of an NLR, the adaptor protein ASC, and the effector molecule pro-caspase-1) that form in response to cytosolic PAMPs and DAMPs, whose function is to generate active forms of the inflammatory cytokines IL-1β and IL-18. Humoral defenses The complement system is a biochemical cascade that attacks the surfaces of foreign cells. It contains over 20 different proteins and is named for its ability to "complement" the killing of pathogens by antibodies. Complement is the major humoral component of the innate immune response. Many species have complement systems, including non-mammals like plants, fish, and some invertebrates. In humans, this response is activated by complement binding to antibodies that have attached to these microbes or the binding of complement proteins to carbohydrates on the surfaces of microbes. This recognition signal triggers a rapid killing response. The speed of the response is a result of signal amplification that occurs after sequential proteolytic activation of complement molecules, which are also proteases. After complement proteins initially bind to the microbe, they activate their protease activity, which in turn activates other complement proteases, and so on. 
This produces a catalytic cascade that amplifies the initial signal by controlled positive feedback. The cascade results in the production of peptides that attract immune cells, increase vascular permeability, and opsonize (coat) the surface of a pathogen, marking it for destruction. This deposition of complement can also kill cells directly by disrupting their plasma membrane via the formation of a membrane attack complex. Adaptive immune system The adaptive immune system evolved in early vertebrates and allows for a stronger immune response as well as immunological memory, where each pathogen is "remembered" by a signature antigen. The adaptive immune response is antigen-specific and requires the recognition of specific "non-self" antigens during a process called antigen presentation. Antigen specificity allows for the generation of responses that are tailored to specific pathogens or pathogen-infected cells. The ability to mount these tailored responses is maintained in the body by "memory cells". Should a pathogen infect the body more than once, these specific memory cells are used to quickly eliminate it. Recognition of antigen The cells of the adaptive immune system are special types of leukocytes, called lymphocytes. B cells and T cells are the major types of lymphocytes and are derived from hematopoietic stem cells in the bone marrow. B cells are involved in the humoral immune response, whereas T cells are involved in the cell-mediated immune response. Killer T cells only recognize antigens coupled to Class I MHC molecules, while helper T cells and regulatory T cells only recognize antigens coupled to Class II MHC molecules. These two mechanisms of antigen presentation reflect the different roles of the two types of T cell. A third, minor subtype are the γδ T cells that recognize intact antigens that are not bound to MHC receptors. The double-positive T cells are exposed to a wide variety of self-antigens in the thymus, whose development and activity depend on iodine. In contrast, the B cell antigen-specific receptor is an antibody molecule on the B cell surface and recognizes native (unprocessed) antigen without any need for antigen processing. Such antigens may be large molecules found on the surfaces of pathogens, but can also be small haptens (such as penicillin) attached to a carrier molecule. Each lineage of B cell expresses a different antibody, so the complete set of B cell antigen receptors represents all the antibodies that the body can manufacture. When B or T cells encounter their related antigens, they multiply and many "clones" of the cells are produced that target the same antigen. This is called clonal selection. Antigen presentation to T lymphocytes Both B cells and T cells carry receptor molecules that recognize specific targets. T cells recognize a "non-self" target, such as a pathogen, only after antigens (small fragments of the pathogen) have been processed and presented in combination with a "self" receptor called a major histocompatibility complex (MHC) molecule. Cell mediated immunity There are two major subtypes of T cells: the killer T cell and the helper T cell. In addition, there are regulatory T cells, which have a role in modulating the immune response. Killer T cells Killer T cells are a sub-group of T cells that kill cells that are infected with viruses (and other pathogens), or are otherwise damaged or dysfunctional. As with B cells, each type of T cell recognizes a different antigen. 
Killer T cells are activated when their T-cell receptor binds to this specific antigen in a complex with the MHC Class I receptor of another cell. Recognition of this MHC:antigen complex is aided by a co-receptor on the T cell, called CD8. The T cell then travels throughout the body in search of cells where the MHC I receptors bear this antigen. When an activated T cell contacts such cells, it releases cytotoxins, such as perforin, which form pores in the target cell's plasma membrane, allowing ions, water and toxins to enter. The entry of another toxin called granulysin (a protease) induces the target cell to undergo apoptosis. T cell killing of host cells is particularly important in preventing the replication of viruses. T cell activation is tightly controlled and generally requires a very strong MHC/antigen activation signal, or additional activation signals provided by "helper" T cells (see below). Helper T cells Helper T cells regulate both the innate and adaptive immune responses and help determine which immune responses the body makes to a particular pathogen. These cells have no cytotoxic activity and do not kill infected cells or clear pathogens directly. They instead control the immune response by directing other cells to perform these tasks. Helper T cells express T cell receptors that recognize antigen bound to Class II MHC molecules. The MHC:antigen complex is also recognized by the helper cell's CD4 co-receptor, which recruits molecules inside the T cell (such as Lck) that are responsible for the T cell's activation. Helper T cells have a weaker association with the MHC:antigen complex than observed for killer T cells, meaning many receptors (around 200–300) on the helper T cell must be bound by an MHC:antigen to activate the helper cell, while killer T cells can be activated by engagement of a single MHC:antigen molecule. Helper T cell activation also requires longer duration of engagement with an antigen-presenting cell. The activation of a resting helper T cell causes it to release cytokines that influence the activity of many cell types. Cytokine signals produced by helper T cells enhance the microbicidal function of macrophages and the activity of killer T cells. In addition, helper T cell activation causes an upregulation of molecules expressed on the T cell's surface, such as CD40 ligand (also called CD154), which provide extra stimulatory signals typically required to activate antibody-producing B cells. Gamma delta T cells Gamma delta T cells (γδ T cells) possess an alternative T-cell receptor (TCR) as opposed to CD4+ and CD8+ (αβ) T cells and share the characteristics of helper T cells, cytotoxic T cells and NK cells. The conditions that produce responses from γδ T cells are not fully understood. Like other 'unconventional' T cell subsets bearing invariant TCRs, such as CD1d-restricted natural killer T cells, γδ T cells straddle the border between innate and adaptive immunity. On one hand, γδ T cells are a component of adaptive immunity as they rearrange TCR genes to produce receptor diversity and can also develop a memory phenotype. On the other hand, the various subsets are also part of the innate immune system, as restricted TCR or NK receptors may be used as pattern recognition receptors. For example, large numbers of human Vγ9/Vδ2 T cells respond within hours to common molecules produced by microbes, and highly restricted Vδ1+ T cells in epithelia respond to stressed epithelial cells. 
Humoral immune response A B cell identifies pathogens when antibodies on its surface bind to a specific foreign antigen. This antigen/antibody complex is taken up by the B cell and processed by proteolysis into peptides. The B cell then displays these antigenic peptides on its surface MHC class II molecules. This combination of MHC and antigen attracts a matching helper T cell, which releases lymphokines and activates the B cell. As the activated B cell then begins to divide, its offspring (plasma cells) secrete millions of copies of the antibody that recognizes this antigen. These antibodies circulate in blood plasma and lymph, bind to pathogens expressing the antigen and mark them for destruction by complement activation or for uptake and destruction by phagocytes. Antibodies can also neutralize challenges directly, by binding to bacterial toxins or by interfering with the receptors that viruses and bacteria use to infect cells. Newborn infants have no prior exposure to microbes and are particularly vulnerable to infection. Several layers of passive protection are provided by the mother. During pregnancy, a particular type of antibody, called IgG, is transported from mother to baby directly through the placenta, so human babies have high levels of antibodies even at birth, with the same range of antigen specificities as their mother. Breast milk or colostrum also contains antibodies that are transferred to the gut of the infant and protect against bacterial infections until the newborn can synthesize its own antibodies. This is passive immunity because the fetus does not actually make any memory cells or antibodies—it only borrows them. This passive immunity is usually short-term, lasting from a few days up to several months. In medicine, protective passive immunity can also be transferred artificially from one individual to another. Immunological memory When B cells and T cells are activated and begin to replicate, some of their offspring become long-lived memory cells. Throughout the lifetime of an animal, these memory cells remember each specific pathogen encountered and can mount a strong response if the pathogen is detected again. T-cells recognize pathogens by small protein-based infection signals, called antigens, that bind directly to T-cell surface receptors. B-cells use the protein immunoglobulin to recognize pathogens by their antigens. This is "adaptive" because it occurs during the lifetime of an individual as an adaptation to infection with that pathogen and prepares the immune system for future challenges. Immunological memory can be in the form of either passive short-term memory or active long-term memory. Physiological regulation The immune system is involved in many aspects of physiological regulation in the body. The immune system interacts intimately with other systems, such as the endocrine and the nervous systems. The immune system also plays a crucial role in embryogenesis (development of the embryo), as well as in tissue repair and regeneration. Hormones Hormones can act as immunomodulators, altering the sensitivity of the immune system. For example, female sex hormones are known immunostimulators of both adaptive and innate immune responses. Some autoimmune diseases such as lupus erythematosus strike women preferentially, and their onset often coincides with puberty. By contrast, male sex hormones such as testosterone seem to be immunosuppressive. 
Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone and vitamin D. Vitamin D Although cellular studies indicate that vitamin D has receptors and probable functions in the immune system, there is no clinical evidence to prove that vitamin D deficiency increases the risk for immune diseases or that vitamin D supplementation lowers immune disease risk. A 2011 United States Institute of Medicine report stated that "outcomes related to ... immune functioning and autoimmune disorders, and infections ... could not be linked reliably with calcium or vitamin D intake and were often conflicting." Sleep and rest The immune system is affected by sleep and rest, and sleep deprivation is detrimental to immune function. Complex feedback loops involving cytokines, such as interleukin-1 and tumor necrosis factor-α produced in response to infection, appear to also play a role in the regulation of non-rapid eye movement (non-REM) sleep. Thus the immune response to infection may result in changes to the sleep cycle, including an increase in slow-wave sleep relative to REM sleep. In people with sleep deprivation, active immunizations may have a diminished effect and may result in lower antibody production, and a lower immune response, than would be noted in a well-rested individual. Additionally, proteins such as NFIL3, which have been shown to be closely intertwined with both T-cell differentiation and circadian rhythms, can be affected by the disturbance of natural light and dark cycles caused by sleep deprivation. These disruptions can lead to an increase in chronic conditions such as heart disease, chronic pain, and asthma. In addition to the negative consequences of sleep deprivation, sleep and the intertwined circadian system have been shown to have strong regulatory effects on immunological functions affecting both innate and adaptive immunity. First, during the early slow-wave-sleep stage, a sudden drop in blood levels of cortisol, epinephrine, and norepinephrine causes increased blood levels of the hormones leptin, pituitary growth hormone, and prolactin. These signals induce a pro-inflammatory state through the production of the pro-inflammatory cytokines interleukin-1, interleukin-12, TNF-alpha and IFN-gamma. These cytokines then stimulate immune functions such as immune cell activation, proliferation, and differentiation. During this time of a slowly evolving adaptive immune response, there is a peak in undifferentiated or less differentiated cells, like naïve and central memory T cells. In addition to these effects, the milieu of hormones produced at this time (leptin, pituitary growth hormone, and prolactin) supports the interactions between APCs and T-cells, a shift of the Th1/Th2 cytokine balance towards one that supports Th1, an increase in overall Th cell proliferation, and naïve T cell migration to lymph nodes. This is also thought to support the formation of long-lasting immune memory through the initiation of Th1 immune responses. During wake periods, differentiated effector cells, such as cytotoxic natural killer cells and cytotoxic T lymphocytes, peak to elicit an effective response against any intruding pathogens. Anti-inflammatory molecules, such as cortisol and catecholamines, also peak during awake active times. Inflammation would cause serious cognitive and physical impairments if it were to occur during wake times, and inflammation may occur during sleep times due to the presence of melatonin. 
Inflammation causes a great deal of oxidative stress and the presence of melatonin during sleep times could actively counteract free radical production during this time. Physical exercise Physical exercise has a positive effect on the immune system and depending on the frequency and intensity, the pathogenic effects of diseases caused by bacteria and viruses are moderated. Immediately after intense exercise there is a transient immunodepression, where the number of circulating lymphocytes decreases and antibody production declines. This may give rise to a window of opportunity for infection and reactivation of latent virus infections, but the evidence is inconclusive. Changes at the cellular level During exercise there is an increase in circulating white blood cells of all types. This is caused by the frictional force of blood flowing on the endothelial cell surface and catecholamines affecting β-adrenergic receptors (βARs). The number of neutrophils in the blood increases and remains raised for up to six hours and immature forms are present. Although the increase in neutrophils ("neutrophilia") is similar to that seen during bacterial infections, after exercise the cell population returns to normal by around 24 hours. The number of circulating lymphocytes (mainly natural killer cells) decreases during intense exercise but returns to normal after 4 to 6 hours. Although up to 2% of the cells die most migrate from the blood to the tissues, mainly the intestines and lungs, where pathogens are most likely to be encountered. Some monocytes leave the blood circulation and migrate to the muscles where they differentiate and become macrophages. These cells differentiate into two types: proliferative macrophages, which are responsible for increasing the number of stem cells and restorative macrophages, which are involved their maturing to muscle cells. Repair and regeneration The immune system, particularly the innate component, plays a decisive role in tissue repair after an insult. Key actors include macrophages and neutrophils, but other cellular actors, including γδ T cells, innate lymphoid cells (ILCs), and regulatory T cells (Tregs), are also important. The plasticity of immune cells and the balance between pro-inflammatory and anti-inflammatory signals are crucial aspects of efficient tissue repair. Immune components and pathways are involved in regeneration as well, for example in amphibians such as in axolotl limb regeneration. According to one hypothesis, organisms that can regenerate (e.g., axolotls) could be less immunocompetent than organisms that cannot regenerate. Disorders of human immunity Failures of host defense occur and fall into three broad categories: immunodeficiencies, autoimmunity, and hypersensitivities. Immunodeficiencies Immunodeficiencies occur when one or more of the components of the immune system are inactive. The ability of the immune system to respond to pathogens is diminished in both the young and the elderly, with immune responses beginning to decline at around 50 years of age due to immunosenescence. In developed countries, obesity, alcoholism, and drug use are common causes of poor immune function, while malnutrition is the most common cause of immunodeficiency in developing countries. Diets lacking sufficient protein are associated with impaired cell-mediated immunity, complement activity, phagocyte function, IgA antibody concentrations, and cytokine production. 
Additionally, the loss of the thymus at an early age through genetic mutation or surgical removal results in severe immunodeficiency and a high susceptibility to infection. Immunodeficiencies can also be inherited or 'acquired'. Severe combined immunodeficiency is a rare genetic disorder characterized by the disturbed development of functional T cells and B cells caused by numerous genetic mutations. Chronic granulomatous disease, where phagocytes have a reduced ability to destroy pathogens, is an example of an inherited, or congenital, immunodeficiency. AIDS and some types of cancer cause acquired immunodeficiency. Autoimmunity Overactive immune responses form the other end of immune dysfunction, particularly the autoimmune diseases. Here, the immune system fails to properly distinguish between self and non-self, and attacks part of the body. Under normal circumstances, many T cells and antibodies react with "self" peptides. One of the functions of specialized cells (located in the thymus and bone marrow) is to present young lymphocytes with self antigens produced throughout the body and to eliminate those cells that recognize self-antigens, preventing autoimmunity. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Hypersensitivity Hypersensitivity is an immune response that damages the body's own tissues. It is divided into four classes (Type I – IV) based on the mechanisms involved and the time course of the hypersensitive reaction. Type I hypersensitivity is an immediate or anaphylactic reaction, often associated with allergy. Symptoms can range from mild discomfort to death. Type I hypersensitivity is mediated by IgE, which triggers degranulation of mast cells and basophils when cross-linked by antigen. Type II hypersensitivity occurs when antibodies bind to antigens on the individual's own cells, marking them for destruction. This is also called antibody-dependent (or cytotoxic) hypersensitivity, and is mediated by IgG and IgM antibodies. Immune complexes (aggregations of antigens, complement proteins, and IgG and IgM antibodies) deposited in various tissues trigger Type III hypersensitivity reactions. Type IV hypersensitivity (also known as cell-mediated or delayed type hypersensitivity) usually takes between two and three days to develop. Type IV reactions are involved in many autoimmune and infectious diseases, but may also involve contact dermatitis. These reactions are mediated by T cells, monocytes, and macrophages. Idiopathic inflammation Inflammation is one of the first responses of the immune system to infection, but it can appear without known cause. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells. Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation, and leukotrienes that attract certain white blood cells (leukocytes). Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have anti-viral effects, such as shutting down protein synthesis in the host cell. Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote healing of any damaged tissue following the removal of pathogens. 
Manipulation in medicine The immune response can be manipulated to suppress unwanted responses resulting from autoimmunity, allergy, and transplant rejection, and to stimulate protective responses against pathogens that largely elude the immune system (see immunization) or cancer. Immunosuppression Immunosuppressive drugs are used to control autoimmune disorders or inflammation when excessive tissue damage occurs, and to prevent rejection after an organ transplant. Anti-inflammatory drugs are often used to control the effects of inflammation. Glucocorticoids are the most powerful of these drugs and can have many undesirable side effects, such as central obesity, hyperglycemia, and osteoporosis. Their use is tightly controlled. Lower doses of anti-inflammatory drugs are often used in conjunction with cytotoxic or immunosuppressive drugs such as methotrexate or azathioprine. Cytotoxic drugs inhibit the immune response by killing dividing cells such as activated T cells. This killing is indiscriminate and other constantly dividing cells and their organs are affected, which causes toxic side effects. Immunosuppressive drugs such as cyclosporin prevent T cells from responding to signals correctly by inhibiting signal transduction pathways. Immunostimulation Claims made by marketers of various products and alternative health providers, such as chiropractors, homeopaths, and acupuncturists to be able to stimulate or "boost" the immune system generally lack meaningful explanation and evidence of effectiveness. Vaccination Long-term active memory is acquired following infection by activation of B and T cells. Active immunity can also be generated artificially, through vaccination. The principle behind vaccination (also called immunization) is to introduce an antigen from a pathogen to stimulate the immune system and develop specific immunity against that particular pathogen without causing disease associated with that organism. This deliberate induction of an immune response is successful because it exploits the natural specificity of the immune system, as well as its inducibility. With infectious disease remaining one of the leading causes of death in the human population, vaccination represents the most effective manipulation of the immune system mankind has developed. Many vaccines are based on acellular components of micro-organisms, including harmless toxin components. Since many antigens derived from acellular vaccines do not strongly induce the adaptive response, most bacterial vaccines are provided with additional adjuvants that activate the antigen-presenting cells of the innate immune system and maximize immunogenicity. Tumor immunology Another important role of the immune system is to identify and eliminate tumors. This is called immune surveillance. The transformed cells of tumors express antigens that are not found on normal cells. To the immune system, these antigens appear foreign, and their presence causes immune cells to attack the transformed tumor cells. The antigens expressed by tumors have several sources; some are derived from oncogenic viruses like human papillomavirus, which causes cancer of the cervix, vulva, vagina, penis, anus, mouth, and throat, while others are the organism's own proteins that occur at low levels in normal cells but reach high levels in tumor cells. One example is an enzyme called tyrosinase that, when expressed at high levels, transforms certain skin cells (for example, melanocytes) into tumors called melanomas. 
A third possible source of tumor antigens is proteins normally important for regulating cell growth and survival that commonly mutate into cancer-inducing molecules called oncogenes. The main response of the immune system to tumors is to destroy the abnormal cells using killer T cells, sometimes with the assistance of helper T cells. Tumor antigens are presented on MHC class I molecules in a similar way to viral antigens. This allows killer T cells to recognize the tumor cell as abnormal. NK cells also kill tumorous cells in a similar way, especially if the tumor cells have fewer MHC class I molecules on their surface than normal; this is a common phenomenon with tumors. Sometimes antibodies are generated against tumor cells allowing for their destruction by the complement system. Some tumors evade the immune system and go on to become cancers. Tumor cells often have a reduced number of MHC class I molecules on their surface, thus avoiding detection by killer T cells. Some tumor cells also release products that inhibit the immune response; for example by secreting the cytokine TGF-β, which suppresses the activity of macrophages and lymphocytes. In addition, immunological tolerance may develop against tumor antigens, so the immune system no longer attacks the tumor cells. Paradoxically, macrophages can promote tumor growth when tumor cells send out cytokines that attract macrophages, which then generate cytokines and growth factors such as tumor-necrosis factor alpha that nurture tumor development or promote stem-cell-like plasticity. In addition, a combination of hypoxia in the tumor and a cytokine produced by macrophages induces tumor cells to decrease production of a protein that blocks metastasis and thereby assists spread of cancer cells. Anti-tumor M1 macrophages are recruited in the early phases of tumor development but are progressively differentiated to M2 with pro-tumor effect, an immunosuppressive switch. Hypoxia reduces the cytokine production needed for the anti-tumor response, and the macrophages progressively acquire pro-tumor M2 functions driven by the tumor microenvironment, including IL-4 and IL-10. Cancer immunotherapy covers the medical approaches that stimulate the immune system to attack cancerous tumors. Predicting immunogenicity Some drugs can cause a neutralizing immune response, meaning that the immune system produces neutralizing antibodies that counteract the action of the drugs, particularly if the drugs are administered repeatedly, or in larger doses. This limits the effectiveness of drugs based on larger peptides and proteins (which are typically larger than 6000 Da). In some cases, the drug itself is not immunogenic, but may be co-administered with an immunogenic compound, as is sometimes the case for Taxol. Computational methods have been developed to predict the immunogenicity of peptides and proteins, which are particularly useful in designing therapeutic antibodies, assessing likely virulence of mutations in viral coat particles, and validating proposed peptide-based drug treatments. Early techniques relied mainly on the observation that hydrophilic amino acids are overrepresented in epitope regions relative to hydrophobic amino acids; however, more recent developments rely on machine learning techniques using databases of existing known epitopes, usually on well-studied virus proteins, as a training set. A publicly accessible database has been established for the cataloguing of epitopes from pathogens known to be recognizable by B cells. 
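The classical hydrophilicity-based approach can be illustrated with a short sketch. The scale values, window length, and threshold below are illustrative placeholders (loosely inspired by Hopp-Woods-style scales), not the parameters of any published predictor; real tools instead train machine-learning models on curated epitope databases.

# Sketch: score B-cell epitope propensity by averaging a residue
# hydrophilicity scale over a sliding window. The values are
# illustrative placeholders, not a validated scale.
HYDROPHILICITY = {
    'R': 3.0, 'K': 3.0, 'D': 3.0, 'E': 3.0, 'S': 0.3, 'N': 0.2,
    'Q': 0.2, 'G': 0.0, 'P': 0.0, 'T': -0.4, 'A': -0.5, 'H': -0.5,
    'C': -1.0, 'M': -1.3, 'V': -1.5, 'I': -1.8, 'L': -1.8,
    'Y': -2.3, 'F': -2.5, 'W': -3.4,
}

def epitope_scores(sequence, window=7):
    """Return (start position, mean hydrophilicity) for each full window."""
    values = [HYDROPHILICITY[aa] for aa in sequence]
    return [
        (i, sum(values[i:i + window]) / window)
        for i in range(len(values) - window + 1)
    ]

def candidate_epitopes(sequence, window=7, threshold=0.5):
    """Windows whose average hydrophilicity exceeds the cut-off are flagged
    as candidate surface-exposed, antibody-accessible regions."""
    return [(i, s) for i, s in epitope_scores(sequence, window) if s > threshold]

if __name__ == "__main__":
    demo = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # made-up peptide for illustration
    for pos, score in candidate_epitopes(demo):
        print(f"window starting at {pos}: mean hydrophilicity {score:+.2f}")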
The emerging field of bioinformatics-based studies of immunogenicity is referred to as immunoinformatics. Immunoproteomics is the study of large sets of proteins (proteomics) involved in the immune response. Evolution and other mechanisms Evolution of the immune system It is likely that a multicomponent, adaptive immune system arose with the first vertebrates, as invertebrates do not generate lymphocytes or an antibody-based humoral response. Immune systems evolved in deuterostomes as shown in the cladogram. Many species, however, use mechanisms that appear to be precursors of these aspects of vertebrate immunity. Immune systems appear even in the structurally simplest forms of life, with bacteria using a unique defense mechanism, called the restriction modification system to protect themselves from viral pathogens, called bacteriophages. Prokaryotes (bacteria and archea) also possess acquired immunity, through a system that uses CRISPR sequences to retain fragments of the genomes of phage that they have come into contact with in the past, which allows them to block virus replication through a form of RNA interference. Prokaryotes also possess other defense mechanisms. Offensive elements of the immune systems are also present in unicellular eukaryotes, but studies of their roles in defense are few. Pattern recognition receptors are proteins used by nearly all organisms to identify molecules associated with pathogens. Antimicrobial peptides called defensins are an evolutionarily conserved component of the innate immune response found in all animals and plants, and represent the main form of invertebrate systemic immunity. The complement system and phagocytic cells are also used by most forms of invertebrate life. Ribonucleases and the RNA interference pathway are conserved across all eukaryotes, and are thought to play a role in the immune response to viruses. Unlike animals, plants lack phagocytic cells, but many plant immune responses involve systemic chemical signals that are sent through a plant. Individual plant cells respond to molecules associated with pathogens known as pathogen-associated molecular patterns or PAMPs. When a part of a plant becomes infected, the plant produces a localized hypersensitive response, whereby cells at the site of infection undergo rapid apoptosis to prevent the spread of the disease to other parts of the plant. Systemic acquired resistance is a type of defensive response used by plants that renders the entire plant resistant to a particular infectious agent. RNA silencing mechanisms are particularly important in this systemic response as they can block virus replication. Alternative adaptive immune system Evolution of the adaptive immune system occurred in an ancestor of the jawed vertebrates. Many of the classical molecules of the adaptive immune system (for example, immunoglobulins and T-cell receptors) exist only in jawed vertebrates. A distinct lymphocyte-derived molecule has been discovered in primitive jawless vertebrates, such as the lamprey and hagfish. These animals possess a large array of molecules called Variable lymphocyte receptors (VLRs) that, like the antigen receptors of jawed vertebrates, are produced from only a small number (one or two) of genes. These molecules are believed to bind pathogenic antigens in a similar way to antibodies, and with the same degree of specificity. Manipulation by pathogens The success of any pathogen depends on its ability to elude host immune responses. 
Therefore, pathogens evolved several methods that allow them to successfully infect a host, while evading detection or destruction by the immune system. Bacteria often overcome physical barriers by secreting enzymes that digest the barrier, for example, by using a type II secretion system. Alternatively, using a type III secretion system, they may insert a hollow tube into the host cell, providing a direct route for proteins to move from the pathogen to the host. These proteins are often used to shut down host defenses. An evasion strategy used by several pathogens to avoid the innate immune system is to hide within the cells of their host (also called intracellular pathogenesis). Here, a pathogen spends most of its life-cycle inside host cells, where it is shielded from direct contact with immune cells, antibodies and complement. Some examples of intracellular pathogens include viruses, the food poisoning bacterium Salmonella and the eukaryotic parasites that cause malaria (Plasmodium spp.) and leishmaniasis (Leishmania spp.). Other bacteria, such as Mycobacterium tuberculosis, live inside a protective capsule that prevents lysis by complement. Many pathogens secrete compounds that diminish or misdirect the host's immune response. Some bacteria form biofilms to protect themselves from the cells and proteins of the immune system. Such biofilms are present in many successful infections, such as the chronic Pseudomonas aeruginosa and Burkholderia cenocepacia infections characteristic of cystic fibrosis. Other bacteria generate surface proteins that bind to antibodies, rendering them ineffective; examples include Streptococcus (protein G), Staphylococcus aureus (protein A), and Peptostreptococcus magnus (protein L). The mechanisms used to evade the adaptive immune system are more complicated. The simplest approach is to rapidly change non-essential epitopes (amino acids and/or sugars) on the surface of the pathogen, while keeping essential epitopes concealed. This is called antigenic variation. An example is HIV, which mutates rapidly, so the proteins on its viral envelope that are essential for entry into its host target cell are constantly changing. These frequent changes in antigens may explain the failures of vaccines directed at this virus. The parasite Trypanosoma brucei uses a similar strategy, constantly switching one type of surface protein for another, allowing it to stay one step ahead of the antibody response. Masking antigens with host molecules is another common strategy for avoiding detection by the immune system. In HIV, the envelope that covers the virion is formed from the outermost membrane of the host cell; such "self-cloaked" viruses make it difficult for the immune system to identify them as "non-self" structures. History of immunology Immunology is a science that examines the structure and function of the immune system. It originates from medicine and early studies on the causes of immunity to disease. The earliest known reference to immunity was during the plague of Athens in 430 BC. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. In the 18th century, Pierre-Louis Moreau de Maupertuis experimented with scorpion venom and observed that certain dogs and mice were immune to this venom. 
In the 10th century, Persian physician al-Razi (also known as Rhazes) wrote the first recorded theory of acquired immunity, noting that a smallpox bout protected its survivors from future infections. Although he explained the immunity in terms of "excess moisture" being expelled from the blood—therefore preventing a second occurrence of the disease—this theory explained many observations about smallpox known during this time. These and other observations of acquired immunity were later exploited by Louis Pasteur in his development of vaccination and his proposed germ theory of disease. Pasteur's theory was in direct opposition to contemporary theories of disease, such as the miasma theory. It was not until Robert Koch's 1891 proofs, for which he was awarded a Nobel Prize in 1905, that microorganisms were confirmed as the cause of infectious disease. Viruses were confirmed as human pathogens in 1901, with the discovery of the yellow fever virus by Walter Reed. Immunology made a great advance towards the end of the 19th century, through rapid developments in the study of humoral immunity and cellular immunity. Particularly important was the work of Paul Ehrlich, who proposed the side-chain theory to explain the specificity of the antigen-antibody reaction; his contributions to the understanding of humoral immunity were recognized by the award of a joint Nobel Prize in 1908, along with the founder of cellular immunology, Elie Metchnikoff. In 1974, Niels Kaj Jerne developed the immune network theory; he shared a Nobel Prize in 1984 with Georges J. F. Köhler and César Milstein for theories related to the immune system. See also Fc receptor List of human cell types Neuroimmune system Original antigenic sin – when the immune system uses immunological memory upon encountering a slightly different pathogen Plant disease resistance Polyclonal response
Immune system
[ "Biology" ]
10,417
[ "Immune system", "Organ systems" ]
14,959
https://en.wikipedia.org/wiki/Immunology
Immunology is a branch of biology and medicine that covers the study of immune systems in all organisms. Immunology charts, measures, and contextualizes the physiological functioning of the immune system in states of both health and diseases; malfunctions of the immune system in immunological disorders (such as autoimmune diseases, hypersensitivities, immune deficiency, and transplant rejection); and the physical, chemical, and physiological characteristics of the components of the immune system in vitro, in situ, and in vivo. Immunology has applications in numerous disciplines of medicine, particularly in the fields of organ transplantation, oncology, rheumatology, virology, bacteriology, parasitology, psychiatry, and dermatology. The term was coined by Russian biologist Ilya Ilyich Mechnikov, who advanced studies on immunology and received the Nobel Prize for his work in 1908 with Paul Ehrlich "in recognition of their work on immunity". He pinned small thorns into starfish larvae and noticed unusual cells surrounding the thorns. This was the active response of the body trying to maintain its integrity. It was Mechnikov who first observed the phenomenon of phagocytosis, in which the body defends itself against a foreign body. Ehrlich accustomed mice to the poisonous ricin and abrin. After feeding them with small but increasing dosages of ricin he ascertained that they had become "ricin-proof". Ehrlich interpreted this as immunization and observed that it was abruptly initiated after a few days and was still in existence after several months. Prior to the designation of immunity, from the etymological root , which is Latin for 'exempt', early physicians characterized organs that would later be proven as essential components of the immune system. The important lymphoid organs of the immune system are the thymus, bone marrow, and chief lymphatic tissues such as spleen, tonsils, lymph vessels, lymph nodes, adenoids, and liver. However, many components of the immune system are cellular in nature, and not associated with specific organs, but rather embedded or circulating in various tissues located throughout the body. Classical immunology Classical immunology ties in with the fields of epidemiology and medicine. It studies the relationship between the body systems, pathogens, and immunity. The earliest written mention of immunity can be traced back to the plague of Athens in 430 BCE. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. Many other ancient societies have references to this phenomenon, but it was not until the 19th and 20th centuries before the concept developed into scientific theory. The study of the molecular and cellular components that comprise the immune system, including their function and interaction, is the central science of immunology. The immune system has been divided into a more primitive innate immune system and, in vertebrates, an acquired or adaptive immune system. The latter is further divided into humoral (or antibody) and cell-mediated components. The immune system has the capability of self and non-self-recognition. An antigen is a substance that ignites the immune response. The cells involved in recognizing the antigen are Lymphocytes. Once they recognize, they secrete antibodies. Antibodies are proteins that neutralize the disease-causing microorganisms. 
Antibodies do not directly kill pathogens, but instead, identify antigens as targets for destruction by other immune cells such as phagocytes or NK cells. The (antibody) response is defined as the interaction between antibodies and antigens. Antibodies are specific proteins released from a certain class of immune cells known as B lymphocytes, while antigens are defined as anything that elicits the generation of antibodies (antibody generators). Immunology rests on an understanding of the properties of these two biological entities and the cellular response to both. It is now getting clear that the immune responses contribute to the development of many common disorders not traditionally viewed as immunologic, including metabolic, cardiovascular, cancer, and neurodegenerative conditions like Alzheimer's disease. Besides, there are direct implications of the immune system in the infectious diseases (tuberculosis, malaria, hepatitis, pneumonia, dysentery, and helminth infestations) as well. Hence, research in the field of immunology is of prime importance for the advancements in the fields of modern medicine, biomedical research, and biotechnology. Immunological research continues to become more specialized, pursuing non-classical models of immunity and functions of cells, organs and systems not previously associated with the immune system (Yemeserach 2010). Diagnostic immunology The specificity of the bond between antibody and antigen has made the antibody an excellent tool for the detection of substances by a variety of diagnostic techniques. Antibodies specific for a desired antigen can be conjugated with an isotopic (radio) or fluorescent label or with a color-forming enzyme in order to detect it. However, the similarity between some antigens can lead to false positives and other errors in such tests by antibodies cross-reacting with antigens that are not exact matches. Immunotherapy The use of immune system components or antigens to treat a disease or disorder is known as immunotherapy. Immunotherapy is most commonly used to treat allergies, autoimmune disorders such as Crohn's disease, Hashimoto's thyroiditis and rheumatoid arthritis, and certain cancers. Immunotherapy is also often used for patients who are immunosuppressed (such as those with HIV) and people with other immune deficiencies. This includes regulating factors such as IL-2, IL-10, GM-CSF B, IFN-α. Clinical immunology Clinical immunology is the study of diseases caused by disorders of the immune system (failure, aberrant action, and malignant growth of the cellular elements of the system). It also involves diseases of other systems, where immune reactions play a part in the pathology and clinical features. The diseases caused by disorders of the immune system fall into two broad categories: immunodeficiency, in which parts of the immune system fail to provide an adequate response (examples include chronic granulomatous disease and primary immune diseases); autoimmunity, in which the immune system attacks its own host's body (examples include systemic lupus erythematosus, rheumatoid arthritis, Hashimoto's disease and myasthenia gravis). Other immune system disorders include various hypersensitivities (such as in asthma and other allergies) that respond inappropriately to otherwise harmless compounds. 
The most well-known disease that affects the immune system itself is AIDS, an immunodeficiency characterized by the suppression of CD4+ ("helper") T cells, dendritic cells and macrophages by the human immunodeficiency virus (HIV). Clinical immunologists also study ways to prevent the immune system's attempts to destroy allografts (transplant rejection). Clinical immunology and allergy is usually a subspecialty of internal medicine or pediatrics. Fellows in Clinical Immunology are typically exposed to many of the different aspects of the specialty and treat allergic conditions, primary immunodeficiencies and systemic autoimmune and autoinflammatory conditions. As part of their training fellows may do additional rotations in rheumatology, pulmonology, otorhinolaryngology, dermatology and the immunologic lab. Clinical and pathology immunology When health conditions worsen to emergency status, portions of immune system organs, including the thymus, spleen, bone marrow, lymph nodes, and other lymphatic tissues, can be surgically excised for examination while patients are still alive. Theoretical immunology Immunology is strongly experimental in everyday practice but is also characterized by an ongoing theoretical attitude. Many theories have been suggested in immunology from the end of the nineteenth century up to the present time. The end of the 19th century and the beginning of the 20th century saw a battle between "cellular" and "humoral" theories of immunity. According to the cellular theory of immunity, represented in particular by Elie Metchnikoff, it was cells – more precisely, phagocytes – that were responsible for immune responses. In contrast, the humoral theory of immunity, held by Robert Koch and Emil von Behring, among others, stated that the active immune agents were soluble components (molecules) found in the organism's "humors" rather than its cells. In the mid-1950s, Macfarlane Burnet, inspired by a suggestion made by Niels Jerne, formulated the clonal selection theory (CST) of immunity. On the basis of CST, Burnet developed a theory of how an immune response is triggered according to the self/nonself distinction: "self" constituents (constituents of the body) do not trigger destructive immune responses, while "nonself" entities (e.g., pathogens, an allograft) trigger a destructive immune response. The theory was later modified to reflect new discoveries regarding histocompatibility or the complex "two-signal" activation of T cells. The self/nonself theory of immunity and the self/nonself vocabulary have been criticized, but remain very influential. More recently, several theoretical frameworks have been suggested in immunology, including "autopoietic" views, "cognitive immune" views, the "danger model" (or "danger theory"), and the "discontinuity" theory. The danger model, suggested by Polly Matzinger and colleagues, has been very influential, arousing many comments and discussions. Developmental immunology The body's capability to react to antigens depends on a person's age, antigen type, maternal factors and the area where the antigen is presented. Neonates are said to be in a state of physiological immunodeficiency, because both their innate and adaptive immunological responses are greatly suppressed. Once born, a child's immune system responds favorably to protein antigens while not as well to glycoproteins and polysaccharides. In fact, many of the infections acquired by neonates are caused by low virulence organisms like Staphylococcus and Pseudomonas. 
In neonates, opsonic activity and the ability to activate the complement cascade are very limited. For example, the mean level of C3 in a newborn is approximately 65% of that found in the adult. Phagocytic activity is also greatly impaired in newborns. This is due to lower opsonic activity, as well as diminished up-regulation of integrin and selectin receptors, which limit the ability of neutrophils to interact with adhesion molecules in the endothelium. Their monocytes are slow and have a reduced ATP production, which also limits the newborn's phagocytic activity. Although the total number of lymphocytes is significantly higher than in adults, cellular and humoral immunity are also impaired. Antigen-presenting cells in newborns have a reduced capability to activate T cells. Also, T cells of a newborn proliferate poorly and produce very small amounts of cytokines like IL-2, IL-4, IL-5, IL-12, and IFN-γ, which limits their capacity to activate the humoral response as well as the phagocytic activity of macrophages. B cells develop early during gestation but are not fully active. Maternal factors also play a role in the body's immune response. At birth, most of the immunoglobulin present is maternal IgG. These antibodies are transferred from the mother to the fetus across the placenta using the FcRn (neonatal Fc receptor). Because IgM, IgD, IgE and IgA do not cross the placenta, they are almost undetectable at birth. Some IgA is provided by breast milk. These passively-acquired antibodies can protect the newborn for up to 18 months, but their response is usually short-lived and of low affinity. These antibodies can also produce a negative response. If a child is exposed to the antibody for a particular antigen before being exposed to the antigen itself, then the child will produce a dampened response. Passively acquired maternal antibodies can suppress the antibody response to active immunization. Similarly, the response of T-cells to vaccination differs in children compared to adults, and vaccines that induce Th1 responses in adults do not readily elicit these same responses in neonates. Between six and nine months after birth, a child's immune system begins to respond more strongly to glycoproteins, but there is usually no marked improvement in their response to polysaccharides until they are at least one year old. This can be the reason for distinct time frames found in vaccination schedules. During adolescence, the human body undergoes various physical, physiological and immunological changes triggered and mediated by hormones, of which the most significant in females is 17-β-estradiol (an estrogen) and, in males, is testosterone. Estradiol usually begins to act around the age of 10 and testosterone some months later. There is evidence that these steroids not only act directly on the primary and secondary sexual characteristics but also have an effect on the development and regulation of the immune system, including an increased risk of developing pubescent and post-pubescent autoimmunity. There is also some evidence that cell surface receptors on B cells and macrophages may detect sex hormones in the system. The female sex hormone 17-β-estradiol has been shown to regulate the level of immunological response, while some male androgens such as testosterone seem to suppress the stress response to infection. Other androgens, however, such as DHEA, increase immune response. 
As in females, the male sex hormones seem to have more control of the immune system during puberty and post-puberty than during the rest of a male's adult life. Physical changes during puberty such as thymic involution also affect immunological response. Ecoimmunology and behavioural immunity Ecoimmunology, or ecological immunology, explores the relationship between the immune system of an organism and its social, biotic and abiotic environment. More recent ecoimmunological research has focused on host defences against pathogens that are traditionally considered "non-immunological", such as pathogen avoidance, self-medication, symbiont-mediated defenses, and fecundity trade-offs. Behavioural immunity, a phrase coined by Mark Schaller, specifically refers to psychological pathogen avoidance drivers, such as disgust aroused by stimuli encountered around pathogen-infected individuals, such as the smell of vomit. More broadly, "behavioural" ecological immunity has been demonstrated in multiple species. For example, the Monarch butterfly often lays its eggs on certain toxic milkweed species when infected with parasites. These toxins reduce parasite growth in the offspring of the infected Monarch. However, when uninfected Monarch butterflies are forced to feed only on these toxic plants, they suffer a fitness cost in the form of reduced lifespan relative to other uninfected Monarch butterflies. This indicates that laying eggs on toxic plants is a costly behaviour in Monarchs, one which has probably evolved to reduce the severity of parasite infection. Symbiont-mediated defenses are also heritable across host generations, even though the transmission has a direct non-genetic basis. Aphids, for example, rely on several different symbionts for defense from key parasites, and can vertically transmit their symbionts from parent to offspring. Therefore, a symbiont that successfully confers protection from a parasite is more likely to be passed to the host offspring, allowing coevolution with parasites attacking the host in a way similar to traditional immunity. The preserved immune tissues of extinct species, such as the thylacine (Thylacinus cynocephalus), can also provide insights into their biology. Cancer immunology The study of the interaction of the immune system with cancer cells can lead to diagnostic tests and therapies with which to find and fight cancer. This branch of immunology is concerned with the physiological reactions characteristic of the immune state in cancer. Inflammation is an immune response that has been observed in many types of cancers. Reproductive immunology This area of immunology is devoted to the study of immunological aspects of the reproductive process, including acceptance of the fetus. The term has also been used by fertility clinics to address fertility problems, recurrent miscarriages, premature deliveries and dangerous complications such as pre-eclampsia. See also List of immunologists Immunomics International Reviews of Immunology Outline of immunology History of immunology Osteoimmunology
Immunology
[ "Biology" ]
3,546
[ "Immunology" ]
14,962
https://en.wikipedia.org/wiki/Identity%20element
In mathematics, an identity element or neutral element of a binary operation is an element that leaves unchanged every element when the operation is applied. For example, 0 is an identity element of the addition of real numbers. This concept is used in algebraic structures such as groups and rings. The term identity element is often shortened to identity (as in the case of additive identity and multiplicative identity) when there is no possibility of confusion, but the identity implicitly depends on the binary operation it is associated with. Definitions Let S be a set equipped with a binary operation ∗. Then an element e of S is called a left identity if e ∗ s = s for all s in S, and a right identity if s ∗ e = s for all s in S. If e is both a left identity and a right identity, then it is called a two-sided identity, or simply an identity. An identity with respect to addition is called an additive identity (often denoted as 0) and an identity with respect to multiplication is called a multiplicative identity (often denoted as 1). These need not be ordinary addition and multiplication, as the underlying operation could be rather arbitrary. In the case of a group for example, the identity element is sometimes simply denoted by the symbol e. The distinction between additive and multiplicative identity is used most often for sets that support both binary operations, such as rings, integral domains, and fields. The multiplicative identity is often called unity in the latter context (a ring with unity). This should not be confused with a unit in ring theory, which is any element having a multiplicative inverse. By its own definition, unity itself is necessarily a unit. Examples Properties In the example S = {e, f} with the operation ∗ defined by x ∗ y = y for all x and y in S, S is a semigroup. It demonstrates the possibility for (S, ∗) to have several left identities. In fact, every element can be a left identity. In a similar manner, there can be several right identities. But if there is both a right identity and a left identity, then they must be equal, resulting in a single two-sided identity. To see this, note that if l is a left identity and r is a right identity, then l = l ∗ r = r. In particular, there can never be more than one two-sided identity: if there were two, say e and f, then e ∗ f would have to be equal to both e and f. It is also quite possible for (S, ∗) to have no identity element, such as the case of even integers under the multiplication operation. Another common example is the cross product of vectors, where the absence of an identity element is related to the fact that the direction of any nonzero cross product is always orthogonal to any element multiplied. That is, it is not possible to obtain a non-zero vector in the same direction as the original. Yet another example of a structure without an identity element involves the additive semigroup of positive natural numbers. See also Absorbing element Additive inverse Generalized inverse Identity (equation) Identity function Inverse element Monoid Pseudo-ring Quasigroup Unital (disambiguation) Notes and references Bibliography Further reading M. Kilp, U. Knauer, A.V. Mikhalev, Monoids, Acts and Categories with Applications to Wreath Products and Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, 2000, p. 14–15
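To make the definitions concrete, here is a minimal sketch (in Python, written for this article rather than taken from the source) that finds the left, right, and two-sided identities of a finite magma given by its Cayley table; it reproduces the S = {e, f} example above, in which every element is a left identity but there is no right identity and hence no two-sided identity.

# Find identity elements of a finite magma (S, *) given as a Cayley table.
# table[x][y] represents x * y.

def left_identities(S, table):
    return [e for e in S if all(table[e][s] == s for s in S)]

def right_identities(S, table):
    return [e for e in S if all(table[s][e] == s for s in S)]

def two_sided_identities(S, table):
    lefts, rights = left_identities(S, table), right_identities(S, table)
    return [e for e in S if e in lefts and e in rights]

# Example from the text: S = {e, f} with x * y = y for all x, y.
S = ["e", "f"]
table = {x: {y: y for y in S} for x in S}

print(left_identities(S, table))       # ['e', 'f'] -- every element is a left identity
print(right_identities(S, table))      # []         -- no right identity
print(two_sided_identities(S, table))  # []         -- hence no two-sided identity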
Identity element
[ "Mathematics" ]
662
[ "Binary operations", "Mathematical relations", "Binary relations" ]
14,968
https://en.wikipedia.org/wiki/Regular%20icosahedron
In geometry, the regular icosahedron (or simply icosahedron) is a convex polyhedron that can be constructed from a pentagonal antiprism by attaching two pentagonal pyramids with regular faces to each of its pentagonal faces, or by placing points onto the faces of a cube. The resulting polyhedron has 20 equilateral triangles as its faces, 30 edges, and 12 vertices. It is an example of a Platonic solid and of a deltahedron. The icosahedral graph represents the skeleton of a regular icosahedron. Many polyhedra are constructed from the regular icosahedron. For example, most of the Kepler–Poinsot polyhedra are constructed by faceting it. Some of the Johnson solids can be constructed by removing its pentagonal pyramids. The regular icosahedron has many relations with other Platonic solids; one of them is the regular dodecahedron, its dual polyhedron, with which it shares a historical problem of comparative mensuration. It also has many relations with other polytopes. The regular icosahedron appears in nature, for example in viruses with icosahedral shells and in radiolarians. Other applications of the regular icosahedron include the use of its net in cartography and twenty-sided dice, which may date back to ancient times and are used in role-playing games. Construction The regular icosahedron can be constructed like other gyroelongated bipyramids, starting from a pentagonal antiprism and attaching two pentagonal pyramids with regular faces to its pentagonal faces. These pyramids cover the pentagonal faces, replacing them with five equilateral triangles each, such that the resulting polyhedron has 20 equilateral triangles as its faces. This construction is known as gyroelongation. Another way to construct it is by putting two points on each face of a cube. On each face, draw a line segment between the midpoints of two opposite edges and locate two points on it at the golden ratio distance from each midpoint. These twelve vertices lie in three mutually perpendicular planes, with edges drawn between them. Because of the constructions above, the regular icosahedron is a Platonic solid, a member of the family of polyhedra with regular faces. A polyhedron with only equilateral triangles as faces is called a deltahedron. There are only eight different convex deltahedra, one of which is the regular icosahedron. The regular icosahedron can also be constructed starting from a regular octahedron. All triangular faces of a regular octahedron are separated, twisted at a certain angle, and the gaps are filled with other equilateral triangles. This process is known as a snub, and the regular icosahedron is also known as the snub octahedron. One possible system of Cartesian coordinates for the vertices of a regular icosahedron, giving an edge length of 2, consists of the cyclic permutations of (0, ±1, ±φ), that is, the twelve points (0, ±1, ±φ), (±1, ±φ, 0) and (±φ, 0, ±1), where φ = (1 + √5)/2 denotes the golden ratio. Properties Mensuration The insphere of a convex polyhedron is a sphere inside the polyhedron, touching every face. The circumsphere of a convex polyhedron is a sphere that contains the polyhedron and touches every vertex. The midsphere of a convex polyhedron is a sphere tangent to every edge. Therefore, given that a is the edge length of a regular icosahedron, the radius of the insphere (inradius), the radius of the circumsphere (circumradius), and the radius of the midsphere (midradius) are, respectively: (√3/12)(3 + √5)a ≈ 0.7558a, (a/4)√(10 + 2√5) ≈ 0.9511a, and (a/4)(1 + √5) ≈ 0.8090a. The surface area of a polyhedron is the sum of the areas of its faces. Therefore, the surface area of a regular icosahedron is 20 times that of each of its equilateral triangle faces. 
The volume of a regular icosahedron can be obtained as 20 times that of a pyramid whose base is one of its faces and whose apex is the icosahedron's center; or as the sum of two uniform pentagonal pyramids and a pentagonal antiprism. The expressions of both are: the surface area is A = 5√3 a² ≈ 8.660a², and the volume is V = (5/12)(3 + √5)a³ ≈ 2.182a³, where a is the edge length. A problem dating back to the ancient Greeks is determining which of two shapes has a larger volume, an icosahedron inscribed in a sphere, or a dodecahedron inscribed in the same sphere. The problem was solved by Hero, Pappus, and Fibonacci, among others. Apollonius of Perga discovered the curious result that the ratio of volumes of these two shapes is the same as the ratio of their surface areas. Both volumes have formulas involving the golden ratio, but taken to different powers. As it turns out, the icosahedron occupies less of the sphere's volume (60.54%) than the dodecahedron (66.49%). The dihedral angle of a regular icosahedron can be calculated by adding the angle of pentagonal pyramids with regular faces and a pentagonal antiprism. The dihedral angle of a pentagonal antiprism and pentagonal pyramid between two adjacent triangular faces is approximately 38.2°. The dihedral angle of a pentagonal antiprism between a pentagonal face and a triangular face is 100.8°, and the dihedral angle of a pentagonal pyramid between the same faces is 37.4°. Therefore, for the regular icosahedron, the dihedral angle between two adjacent triangles, on the edge where the pentagonal pyramid and pentagonal antiprism are attached, is 37.4° + 100.8° = 138.2°. Symmetry The rotational symmetry group of the regular icosahedron is isomorphic to the alternating group on five letters. This non-abelian simple group is the only non-trivial normal subgroup of the symmetric group on five letters. Since the Galois group of the general quintic equation is isomorphic to the symmetric group on five letters, and this normal subgroup is simple and non-abelian, the general quintic equation does not have a solution in radicals. The proof of the Abel–Ruffini theorem uses this simple fact, and Felix Klein wrote a book that made use of the theory of icosahedral symmetries to derive an analytical solution to the general quintic equation. The full symmetry group of the icosahedron (including reflections) is known as the full icosahedral group. It is isomorphic to the product of the rotational symmetry group and the group of size two, which is generated by the reflection through the center of the icosahedron. Icosahedral graph Every Platonic graph, including the icosahedral graph, is a polyhedral graph. This means that they are planar graphs, graphs that can be drawn in the plane without crossing their edges; and they are 3-vertex-connected, meaning that the removal of any two of their vertices leaves a connected subgraph. According to Steinitz's theorem, the icosahedral graph, endowed with these properties, represents the skeleton of a regular icosahedron. The icosahedral graph is Hamiltonian, meaning that it contains a Hamiltonian cycle, or a cycle that visits each vertex exactly once. Related polyhedra In other Platonic solids Aside from the comparison of their mensuration mentioned above, the regular icosahedron and the regular dodecahedron are dual to each other. An icosahedron can be inscribed in a dodecahedron by placing its vertices at the face centers of the dodecahedron, and vice versa. An icosahedron can be inscribed in an octahedron by placing its 12 vertices on the 12 edges of the octahedron such that they divide each edge into its two golden sections. 
Because the golden sections are unequal, there are five different ways to do this consistently, so five disjoint icosahedra can be inscribed in each octahedron. An icosahedron of edge length 1/φ ≈ 0.618 can be inscribed in a unit-edge-length cube by placing six of its edges (3 orthogonal opposite pairs) on the square faces of the cube, centered on the face centers and parallel or perpendicular to the square's edges. Because there are five times as many icosahedron edges as cube faces, there are five ways to do this consistently, so five disjoint icosahedra can be inscribed in each cube. The edge lengths of the cube and the inscribed icosahedron are in the golden ratio. Stellation The icosahedron has a large number of stellations; 59 stellations have been identified for the regular icosahedron. The first form is the icosahedron itself. One is a regular Kepler–Poinsot polyhedron. Three are regular compound polyhedra. Facetings The small stellated dodecahedron, great dodecahedron, and great icosahedron are three facetings of the regular icosahedron. They share the same vertex arrangement. They all have 30 edges. The regular icosahedron and great dodecahedron share the same edge arrangement but differ in faces (triangles vs pentagons), as do the small stellated dodecahedron and great icosahedron (pentagrams vs triangles). Diminishment A Johnson solid is a polyhedron whose faces are all regular, but which is not uniform. This means the Johnson solids do not include the Archimedean solids, the Catalan solids, the prisms, or the antiprisms. Some of them are constructed by removing part of a regular icosahedron, a process known as diminishment. They are the gyroelongated pentagonal pyramid, the metabidiminished icosahedron, and the tridiminished icosahedron, which remove one, two, and three pentagonal pyramids from the icosahedron, respectively. The similar dissected regular icosahedron has 2 adjacent vertices diminished, leaving two trapezoidal faces, and a bifastigium has 2 opposite sets of vertices removed and 4 trapezoidal faces. Relations to the 600-cell and other 4-polytopes The icosahedron is the dimensional analogue of the 600-cell, a regular 4-dimensional polytope. The 600-cell has icosahedral cross sections of two sizes, and each of its 120 vertices is the apex of an icosahedral pyramid; the icosahedron is the vertex figure of the 600-cell. The unit-radius 600-cell has tetrahedral cells of edge length 1/φ ≈ 0.618, 20 of which meet at each vertex to form an icosahedral pyramid (a 4-pyramid with an icosahedron as its base). Thus the 600-cell contains 120 icosahedra of edge length 1/φ. The 600-cell also contains unit-edge-length cubes and unit-edge-length octahedra as interior features formed by its unit-length chords. In the unit-radius 120-cell (another regular 4-polytope which is both the dual of the 600-cell and a compound of 5 600-cells) we find all three kinds of inscribed icosahedra (in a dodecahedron, in an octahedron, and in a cube). A semiregular 4-polytope, the snub 24-cell, has icosahedral cells. 
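The mensuration figures quoted earlier can be checked numerically. The following sketch (Python, written for this article and not taken from any source) builds the twelve vertices from the cyclic permutations of (0, ±1, ±φ), confirms the 30 edges of length 2, and computes the circumradius, midradius, surface area, volume, the fraction of the circumscribed sphere's volume occupied by the solid, and the minimum angular separation between vertices, a quantity relevant to the Tammes problem mentioned later in this article.

# Numerical check of regular icosahedron quantities (edge length a = 2).
# Vertices: cyclic permutations of (0, +/-1, +/-phi).
from itertools import combinations, product
from math import sqrt, pi, acos, degrees, isclose

PHI = (1 + sqrt(5)) / 2
verts = [p for s1, s2 in product((1, -1), repeat=2)
         for p in ((0, s1, s2 * PHI), (s1, s2 * PHI, 0), (s2 * PHI, 0, s1))]
assert len(verts) == 12

def dist(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

a = 2.0
edges = [(u, v) for u, v in combinations(verts, 2) if isclose(dist(u, v), a)]
assert len(edges) == 30                      # 30 edges, as stated above

circumradius = sqrt(sum(c * c for c in verts[0]))
midpoint = tuple((x + y) / 2 for x, y in zip(*edges[0]))
midradius = dist((0.0, 0.0, 0.0), midpoint)
surface_area = 20 * (sqrt(3) / 4) * a ** 2   # 20 equilateral triangles
volume = (5 / 12) * (3 + sqrt(5)) * a ** 3
sphere = (4 / 3) * pi * circumradius ** 3

def angle_deg(u, v):
    cosine = sum(x * y for x, y in zip(u, v)) / (circumradius ** 2)
    return degrees(acos(max(-1.0, min(1.0, cosine))))

print(circumradius / a)   # ~0.95106, matching (1/4)*sqrt(10 + 2*sqrt(5))
print(midradius / a)      # ~0.80902, matching (1 + sqrt(5))/4
print(surface_area, volume)
print(volume / sphere)    # ~0.6055, the 60.54% figure quoted above
print(min(angle_deg(u, v) for u, v in combinations(verts, 2)))  # ~63.43 degrees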
Relations to other uniform polytopes As mentioned above, the regular icosahedron is unique among the Platonic solids in possessing a dihedral angle, approximately 138.2°, that is not less than 120°. Thus, just as hexagons have angles not less than 120° and cannot be used as the faces of a convex regular polyhedron because such a construction would not meet the requirement that at least three faces meet at a vertex and leave a positive defect for folding in three dimensions, icosahedra cannot be used as the cells of a convex regular polychoron because, similarly, at least three cells must meet at an edge and leave a positive defect for folding in four dimensions (three icosahedral cells meeting at an edge would already contribute about 3 × 138.2° = 414.6°, more than the 360° available around an edge; in general, for a convex polytope in n dimensions, at least three facets must meet at a peak and leave a positive defect for folding in n-space). However, when combined with suitable cells having smaller dihedral angles, icosahedra can be used as cells in semi-regular polychora (for example the snub 24-cell), just as hexagons can be used as faces in semi-regular polyhedra (for example the truncated icosahedron). Finally, non-convex polytopes do not carry the same strict requirements as convex polytopes, and icosahedra are indeed the cells of the icosahedral 120-cell, one of the ten non-convex regular polychora. There are distortions of the icosahedron that, while no longer regular, are nevertheless vertex-uniform. These are invariant under the same rotations as the tetrahedron, and are somewhat analogous to the snub cube and snub dodecahedron, including some forms which are chiral and some with pyritohedral symmetry, i.e. with different planes of symmetry from the tetrahedron. Appearances Dice are the most common objects that use different polyhedra, one of them being the regular icosahedron. Twenty-sided dice have been found since ancient times. One example is a die from the Ptolemaic period of Egypt; such dice later used Greek letters inscribed on the faces in the period of Greece and Rome. Another example was found in the treasure of Tipu Sultan, which was made of gold and had numbers written on each face. In several roleplaying games, such as Dungeons & Dragons, the twenty-sided die (labeled as d20) is commonly used in determining success or failure of an action. It may be numbered from "0" to "9" twice, in which form it usually serves as a ten-sided die (d10); most modern versions are labeled from "1" to "20". Scattergories is another board game, in which players name category entries on a card within a given set time; the letter that the entries must begin with is determined by rolling a twenty-sided die. The regular icosahedron may also appear in many fields of science as follows: In virology, herpes viruses have icosahedral shells. The outer protein shell of HIV is enclosed in a regular icosahedron, as is the head of a typical myovirus. Ernst Haeckel described several species of radiolarians whose shells resemble various regular polyhedra; one of these is Circogonia icosahedra, whose skeleton is shaped like a regular icosahedron. In chemistry, the closo-carboranes are compounds with a shape resembling the regular icosahedron. Crystal twinning with icosahedral shapes also occurs in crystals, especially nanoparticles. Many borides and allotropes of boron, such as α- and β-rhombohedral boron, contain the B12 icosahedron as a basic structural unit. In cartography, R. 
Buckminster Fuller used the net of a regular icosahedron to create a map known as the Dymaxion map, by subdividing the net into triangles, followed by calculating the grid on the Earth's surface, and transferring the results from the sphere to the polyhedron. This projection was created during the time that Fuller realized that Greenland is smaller than South America. In the Thomson problem, concerning the minimum-energy configuration of charged particles on a sphere, and for the Tammes problem of constructing a spherical code maximizing the smallest distance among the points, the minimum solution known for twelve points places the points at the vertices of a regular icosahedron, inscribed in a sphere. This configuration is proven optimal for the Tammes problem, but a rigorous solution to this instance of the Thomson problem is unknown. As mentioned above, the regular icosahedron is one of the five Platonic solids. The regular polyhedra have been known since antiquity, but are named after Plato who, in his Timaeus dialogue, identified these with the five elements, whose elementary units were attributed these shapes: fire (tetrahedron), air (octahedron), water (icosahedron), earth (cube) and the shape of the universe as a whole (dodecahedron). Euclid's Elements defined the Platonic solids and solved the problem of finding the ratio of the circumscribed sphere's diameter to the edge length. Following their identification with the elements by Plato, Johannes Kepler in his Harmonices Mundi sketched each of them, in particular, the regular icosahedron. In his Mysterium Cosmographicum, he also proposed a model of the Solar System based on the placement of Platonic solids in a concentric sequence of increasing radius of the inscribed and circumscribed spheres, whose radii gave the distances of the six known planets from the common center. The ordering of the solids, from innermost to outermost, consisted of: regular octahedron, regular icosahedron, regular dodecahedron, regular tetrahedron, and cube.
Regular icosahedron
[ "Mathematics" ]
3,597
[]
14,972
https://en.wikipedia.org/wiki/Idempotence
Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application. The concept of idempotence arises in a number of places in abstract algebra (in particular, in the theory of projectors and closure operators) and functional programming (in which it is connected to the property of referential transparency). The term was introduced by American mathematician Benjamin Peirce in 1870 in the context of elements of algebras that remain invariant when raised to a positive integer power, and literally means "(the quality of having) the same power", from idem + potence (same + power). Definition An element x of a set S equipped with a binary operator · is said to be idempotent under · if x · x = x. The binary operation · is said to be idempotent if x · x = x for all x in S. Examples In the monoid (N, ×) of the natural numbers with multiplication, only 0 and 1 are idempotent. Indeed, 0 × 0 = 0 and 1 × 1 = 1. In the monoid (N, +) of the natural numbers with addition, only 0 is idempotent. Indeed, 0 + 0 = 0. In a magma (M, ·), an identity element e or an absorbing element a, if it exists, is idempotent. Indeed, e · e = e and a · a = a. In a group (G, ·), the identity element e is the only idempotent element. Indeed, if x is an element of G such that x · x = x, then x · x = x · e and finally x = e by multiplying on the left by the inverse element of x. In the monoids (P(E), ∪) and (P(E), ∩) of the power set of the set E with set union and set intersection respectively, ∪ and ∩ are idempotent. Indeed, A ∪ A = A and A ∩ A = A for all A ⊆ E. In the monoids ({0, 1}, ∨) and ({0, 1}, ∧) of the Boolean domain with logical disjunction and logical conjunction respectively, ∨ and ∧ are idempotent. Indeed, x ∨ x = x and x ∧ x = x for both elements. In a GCD domain (for instance in the integers Z), the operations of GCD and LCM are idempotent. In a Boolean ring, multiplication is idempotent. In a tropical semiring, addition is idempotent. In a ring of square matrices, the determinant of an idempotent matrix is either 0 or 1. If the determinant is 1, the matrix necessarily is the identity matrix. Idempotent functions In the monoid of the functions from a set E to itself (see set exponentiation) with function composition ∘, idempotent elements are the functions f such that f ∘ f = f, that is such that f(f(x)) = f(x) for all x (in other words, the image f(x) of each element x is a fixed point of f). For example: the absolute value is idempotent. Indeed, abs ∘ abs = abs, that is ||x|| = |x| for all x; constant functions are idempotent; the identity function is idempotent; the floor, ceiling and fractional part functions are idempotent; the real part function of a complex number is idempotent, since Re(Re(z)) = Re(z); the subgroup generated function from the power set of a group to itself is idempotent; the convex hull function from the power set of an affine space over the reals to itself is idempotent; the closure and interior functions of the power set of a topological space to itself are idempotent; the Kleene star and Kleene plus functions of the power set of a monoid to itself are idempotent; the idempotent endomorphisms of a vector space are its projections. If the set E has n elements, we can partition it into k chosen fixed points and n − k non-fixed points under f, and then k^(n−k) is the number of different idempotent functions with exactly that set of fixed points. Hence, taking into account all possible partitions, the sum over k from 0 to n of C(n, k) · k^(n−k) is the total number of possible idempotent functions on the set. The integer sequence of the number of idempotent functions as given by the sum above for n = 0, 1, 2, 3, 4, 5, 6, 7, 8, ... starts with 1, 1, 3, 10, 41, 196, 1057, 6322, 41393, ... . Neither the property of being idempotent nor that of being not is preserved under function composition.
As an example for the former, the functions f(x) = x mod 3 and g(x) = max(x, 5) are both idempotent, but f ∘ g (apply g, then f) is not, although g ∘ f happens to be (it is the constant function 5). As an example for the latter, the negation function ¬ on the Boolean domain is not idempotent, but ¬ ∘ ¬ is. Similarly, unary negation x ↦ −x of real numbers is not idempotent, but its composition with itself is. In both cases, the composition is simply the identity function, which is idempotent. Computer science meaning In computer science, the term idempotence may have a different meaning depending on the context in which it is applied: in imperative programming, a subroutine with side effects is idempotent if multiple calls to the subroutine have the same effect on the system state as a single call, in other words if the function from the system state space to itself associated with the subroutine is idempotent in the mathematical sense given in the definition; in functional programming, a pure function is idempotent if it is idempotent in the mathematical sense given in the definition. This is a very useful property in many situations, as it means that an operation can be repeated or retried as often as necessary without causing unintended effects. With non-idempotent operations, the algorithm may have to keep track of whether the operation was already performed or not. Computer science examples A function looking up a customer's name and address in a database is typically idempotent, since this will not cause the database to change. Similarly, a request for changing a customer's address to XYZ is typically idempotent, because the final address will be the same no matter how many times the request is submitted. However, a customer request for placing an order is typically not idempotent, since multiple requests will lead to multiple orders being placed. A request for canceling a particular order is idempotent because no matter how many requests are made the order remains canceled. A sequence of idempotent subroutines where at least one subroutine is different from the others, however, is not necessarily idempotent if a later subroutine in the sequence changes a value that an earlier subroutine depends on; idempotence is not closed under sequential composition. For example, suppose the initial value of a variable is 3 and there is a subroutine sequence that reads the variable, then changes it to 5, and then reads it again. Each step in the sequence is idempotent: both steps reading the variable have no side effects and the step changing the variable to 5 will always have the same effect no matter how many times it is executed. Nonetheless, executing the entire sequence once produces the output (3, 5), but executing it a second time produces the output (5, 5), so the sequence is not idempotent.

#include <stdio.h>

int x = 3;

void inspect() { printf("%d\n", x); }

void change() { x = 5; }

void sequence() {
    inspect();
    change();
    inspect();
}

int main() {
    sequence(); // prints "3\n5\n"
    sequence(); // prints "5\n5\n"
    return 0;
}

In the Hypertext Transfer Protocol (HTTP), idempotence and safety are the major attributes that separate HTTP methods. Of the major HTTP methods, GET, PUT, and DELETE should be implemented in an idempotent manner according to the standard, but POST need not be. GET retrieves the state of a resource; PUT updates the state of a resource; and DELETE deletes a resource. As in the example above, reading data usually has no side effects, so it is idempotent (in fact nullipotent).
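The difference between an idempotent, identifier-keyed update and a non-idempotent create can be illustrated with a minimal sketch in C; the toy in-memory store below and the names store, put and post are invented for this illustration and do not correspond to any real API. A PUT-like update behaves like assignment and can be repeated without changing the outcome, whereas a POST-like create appends a new record on every call.

#include <stdio.h>
#include <string.h>

#define MAX_RECORDS 16

/* Toy in-memory store: each record is a short string keyed by an id. */
static char store[MAX_RECORDS][32];
static int  next_slot = 0;

/* PUT-like: assignment keyed by a unique id; repeating the call
 * leaves the store in exactly the same state as a single call. */
void put(int id, const char *value)
{
    strncpy(store[id], value, sizeof store[id] - 1);
    store[id][sizeof store[id] - 1] = '\0';
}

/* POST-like: every call allocates a fresh record, so repeating it
 * keeps changing the system state -- not idempotent. */
int post(const char *value)
{
    if (next_slot >= MAX_RECORDS)
        return -1;               /* store full */
    int id = next_slot++;
    put(id, value);
    return id;
}

int main(void)
{
    put(3, "address XYZ");
    put(3, "address XYZ");        /* no further effect */
    printf("record 3: %s\n", store[3]);

    int a = post("new order");
    int b = post("new order");    /* a second, distinct order */
    printf("created records %d and %d\n", a, b);
    return 0;
}

Repeating put(3, ...) leaves the store unchanged after the first call, while each post(...) call creates an additional record.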
Updating and deleting a given piece of data are each usually idempotent as long as the request uniquely identifies the resource, and will identify only that same resource again in the future. PUT and DELETE with unique identifiers reduce to the simple case of assignment to a variable of either a value or the null-value, respectively, and are idempotent for the same reason; the end result is always the same as the result of the initial execution, even if the response differs. Violation of the unique identification requirement in storage or deletion typically causes violation of idempotence. For example, storing or deleting a given set of content without specifying a unique identifier: POST requests, which do not need to be idempotent, often do not contain unique identifiers, so the creation of the identifier is delegated to the receiving system which then creates a corresponding new record. Similarly, PUT and DELETE requests with nonspecific criteria may result in different outcomes depending on the state of the system, for example a request to delete the most recent record. In each case, subsequent executions will further modify the state of the system, so they are not idempotent. In event stream processing, idempotence refers to the ability of a system to produce the same outcome, even if the same file, event or message is received more than once. In a load–store architecture, instructions that might possibly cause a page fault are idempotent. So if a page fault occurs, the operating system can load the page from disk and then simply re-execute the faulted instruction. In a processor where such instructions are not idempotent, dealing with page faults is much more complex. When reformatting output, pretty-printing is expected to be idempotent. In other words, if the output is already "pretty", there should be nothing to do for the pretty-printer. In service-oriented architecture (SOA), a multiple-step orchestration process composed entirely of idempotent steps can be replayed without side-effects if any part of that process fails. Many operations that are idempotent often have ways to "resume" a process if it is interrupted, in ways that finish much faster than starting all over from the beginning. For example, resuming a file transfer, synchronizing files, creating a software build, installing an application and all of its dependencies with a package manager, etc. Applied examples Applied examples that many people could encounter in their day-to-day lives include elevator call buttons and crosswalk buttons. The initial activation of the button moves the system into a requesting state, until the request is satisfied. Subsequent activations of the button between the initial activation and the request being satisfied have no effect, unless the system is designed to adjust the time for satisfying the request based on the number of activations. See also Biordered set Closure operator Fixed point (mathematics) Idempotent of a code Idempotent analysis Idempotent matrix Idempotent relation, a generalization of idempotence to binary relations Idempotent (ring theory) Involution (mathematics) Iterated function List of matrices Nilpotent Pure function Referential transparency References Further reading "idempotent" at the Free On-line Dictionary of Computing p. 443 Peirce, Benjamin. Linear Associative Algebra 1870. Properties of binary operations Algebraic properties of elements Closure operators Mathematical relations Theoretical computer science
Idempotence
[ "Mathematics" ]
2,349
[ "Mathematical analysis", "Predicate logic", "Closure operators", "Theoretical computer science", "Applied mathematics", "Basic concepts in set theory", "Mathematical relations", "Order theory" ]
14,976
https://en.wikipedia.org/wiki/Ithaca%20Hours
The Ithaca HOUR was a local currency used in Ithaca, New York, though it is no longer in circulation. It was one of the longest-running local currency systems, and inspired other similar systems in Madison, Wisconsin; Santa Barbara, California; Corvallis, Oregon; and a proposed system in the Lehigh Valley, Pennsylvania. One Ithaca HOUR was valued at US$10 and was generally recommended to be used as payment for one hour's work, although the rate was negotiable. The currency Ithaca HOURS were not backed by national currency and could not be freely converted to national currency, although some businesses did agree to buy them. HOURS were printed on high-quality paper and used faint graphics that would be difficult to reproduce. Each bill was stamped with a serial number, to discourage counterfeiting. In 2002, a one-tenth hour bill was introduced, partly due to encouragement and funding from Alternatives Federal Credit Union and feedback from retailers who complained about the awkwardness of only having larger denominations with which to work; the bills bore the signatures of both HOURS president Steve Burke and the president of AFCU. Ithaca HOUR notes began to fall into disuse for several reasons. First, the founder of the system, Paul Glover, moved out of the area. While in Ithaca, Glover had acted as an evangelist and networker for HOURS, helping spread their use and helping businesses find ways to spend HOURS they had received. Second, the use of HOURS declined as a result of the general shift away from cash transactions towards electronic transfers with debit or credit cards. Glover emphasized that every local currency needs at least one full-time networker to "promote, facilitate and troubleshoot" currency circulation. Origin Ithaca HOURS were started by Paul Glover in November 1991. The system has historical roots in scrip and alternative and local currencies that proliferated in America during the Great Depression. While doing research into local economics during 1989, Glover had seen an "Hour" note issued by 19th century British industrialist Robert Owen to his workers for spending at his company store. After Ithaca HOURS began, Glover discovered that Owen's Hours were based on Josiah Warren's "Time Store" notes of 1827. In May 1991, local student Patrice Jennings interviewed Glover about the Ithaca LETS enterprise. This conversation strongly reinforced his interest in trade systems. Jennings's research on the Ithaca LETS and its failure was integral to the development of the HOUR currency; conversations between Jennings and Glover helped ensure that HOURS used knowledge of what had not worked with the LETS system. Within a few days, Glover had designs for the HOUR and Half HOUR notes. He established that each HOUR would be worth the equivalent of $10, which was about the average hourly amount that workers earned in surrounding Tompkins County, although the exact rate of exchange for any given transaction was to be decided by the parties themselves. At GreenStar Cooperative Market, a local food co-op, Glover approached Gary Fine, a local massage therapist, with photocopied samples. Fine became the first person to sign a list formally agreeing to accept HOURS in exchange for services. Soon after, Jim Rohrrsen, the proprietor of a local toy store, became the first retailer to sign up to accept Ithaca HOURS in exchange for merchandise. When the system was first started, 90 people agreed to accept HOURS as pay for their services.
They all agreed to accept HOURS despite the lack of a business plan or guarantee. Glover then began to ask for small donations to help pay for printing HOURS. Fine Line Printing completed the first run of 1,500 HOURS and 1,500 Half HOURS in October 1991. These notes, the first modern local currency, were nearly twice as large as later printings of Ithaca HOURS. Because they didn't fit well in people's wallets, almost all of the original notes have been removed from circulation. The first issue of Ithaca Money was printed at Our Press, a printing shop in Chenango Bridge, New York, on October 16, 1991. The next day Glover issued 10 HOURS to Ithaca Hours, the organization he founded to run the system, as the first of four reimbursements for the cost of printing HOURS. The day after that, October 18, 1991, 382 HOURS were disbursed and prepared for mailing to the first 93 pioneers. On October 19, 1991, Glover bought a samosa from Catherine Martinez at the Farmers' Market with Half HOUR #751—the first use of an HOUR. Several other Market vendors enrolled that day. During the next years more than a thousand individuals enrolled to accept HOURS, plus 500 businesses. Stacks of the Ithaca Money newspaper were distributed all over town with an invitation to "join the fun." A Barter Potluck was held at GIAC on November 12, 1991, the first of many monthly gatherings where food and skills were exchanged, acquaintances made, and friendships renewed. Management and philosophy In 1996, Glover was running the Ithaca Hours system from his home, and the system had an advisory board and a governing board called the "Barter Potluck". The board and Glover put forth the idea that economic interactions should be based on harmony rather than on more Hobbesian forms of competition. In one interview, Glover stated that "There's a growing movement called "ecological economics" and Ithaca HOURS is part of that cosmos. Last year I wrote an article which discusses moving us toward the provision of food, fuel, clothing, housing, transportation, [and other] necessities in ways which are healing of nature, or which are less depleting at least and which bring people together on the basis of their shared pride, not arrogance." Thus one underlying principle of the local currency movement is to create "fair trade" with a minimum of conflict or exploitation of either people or natural resources. The advisory board incorporated the Ithaca HOUR system as Ithaca Hours, Inc. in October 1998, and hosted the first elections for Board of Directors in March 1999. The first Board of Directors included Monica Hargraves, Dan Cogan, Margaret McCasland, Erica Van Etten, Greg Spence Wolf, Bob LeRoy, LeGrace Benson, Wally Woods, Jennifer Elges, and Donald Stephenson. In May 1999 Glover turned the administration of Ithaca HOURS over to the newly elected Board of Directors. Glover has continued to support Ithaca Hours through community outreach to present, most notably through the Ithaca Health Fund (now incorporated as part of the Ithaca Health Alliance) and Ithaca Community News. The current Board of Directors, 2014–2015, includes Erik Lehmann (chair), Danielle Klock, and Bob LeRoy. Economic development Several million dollars value of HOURS have been traded since 1991 among thousands of residents and over 500 area businesses, including the Cayuga Medical Center, Alternatives Federal Credit Union, the public library, many local farmers, movie theatres, restaurants, healers, plumbers, carpenters, electricians, and landlords. 
One of the primary functions of the Ithaca Hours system is to promote local economic development. Businesses who receive Hours must spend them on local goods and services, thus building a network of inter-supporting local businesses. While non-local businesses are welcome to accept Hours, those businesses need to spend them on local goods and services to be economically sustainable. In their mission to promote local economic development, the Board of Directors also makes interest-free loans of Ithaca HOURS to local businesses and grants to local non-profit organizations. See also Local currency List of community currencies in the United States Labour voucher Time-based currency Wörgl, Silvio Gesell References External links Official Ithaca Hours Website Paul Glover's Website E F Schumacher Society Local Currency website Brief History of Local Currencies Community Currency Online Magazine Hours Local currencies of the United States Economics and time Currencies introduced in 1991 1991 establishments in New York (state)
Ithaca Hours
[ "Physics" ]
1,591
[ "Spacetime", "Economics and time", "Physical quantities", "Time" ]
14,979
https://en.wikipedia.org/wiki/Interstellar%20cloud
An interstellar cloud is generally an accumulation of gas, plasma, and dust in our and other galaxies. Put differently, an interstellar cloud is a denser-than-average region of the interstellar medium, the matter and radiation that exists in the space between the star systems in a galaxy. Depending on the density, size, and temperature of a given cloud, its hydrogen can be neutral, making an H I region; ionized, i.e. a plasma, making it an H II region; or molecular, in which case the cloud is referred to simply as a molecular cloud, or sometimes a dense cloud. Neutral and ionized clouds are sometimes also called diffuse clouds. An interstellar cloud is formed by the gas and dust particles shed by a red giant in its later life. Chemical compositions The chemical composition of interstellar clouds is determined by studying the electromagnetic radiation that they emit and that we receive – from radio waves through visible light to gamma rays on the electromagnetic spectrum. Large radio telescopes scan the intensity in the sky of particular frequencies of electromagnetic radiation, which are characteristic of certain molecules' spectra. Some interstellar clouds are cold and tend to give out electromagnetic radiation at long wavelengths. A map of the abundance of these molecules can be made, enabling an understanding of the varying composition of the clouds. In hot clouds, there are often ions of many elements, whose spectra can be seen in visible and ultraviolet light. Radio telescopes can also scan over the frequencies from one point in the map, recording the intensities of each type of molecule. Peaks of frequencies mean that an abundance of that molecule or atom is present in the cloud. The height of the peak is proportional to the relative percentage that it makes up. Unexpected chemicals detected in interstellar clouds Until recently, the rates of reactions in interstellar clouds were expected to be very slow, with minimal products being produced due to the low temperature and density of the clouds. However, organic molecules were observed in the spectra that scientists would not have expected to find under these conditions, such as formaldehyde, methanol, and vinyl alcohol. The reactions needed to create such substances are familiar to scientists only at the much higher temperatures and pressures of Earth and Earth-based laboratories. The fact that they were found indicates that these chemical reactions in interstellar clouds take place faster than suspected, likely in gas-phase reactions unfamiliar to organic chemistry as observed on Earth. These reactions are studied in the CRESU experiment. Interstellar clouds also provide a medium to study the presence and proportions of metals in space. The presence and ratios of these elements may help develop theories on the means of their production, especially when their proportions are inconsistent with those expected to arise from stars as a result of fusion and thereby suggest alternate means, such as cosmic ray spallation. High-velocity cloud These interstellar clouds possess a velocity higher than can be explained by the rotation of the Milky Way. By definition, these clouds must have a vlsr greater than 90 km s−1, where vlsr is the velocity measured relative to the local standard of rest. They are detected primarily in the 21 cm line of neutral hydrogen, and typically have a lower proportion of heavy elements than is normal for interstellar clouds in the Milky Way.
Theories intended to explain these unusual clouds include materials left over from the formation of the galaxy, or tidally-displaced matter drawn away from other galaxies or members of the Local Group. An example of the latter is the Magellanic Stream. To narrow down the origin of these clouds, a better understanding of their distances and metallicity is needed. High-velocity clouds are identified with an HVC prefix, as with HVC 127-41-330. See also List of molecules in interstellar space Nebula Interplanetary medium – interplanetary dust Interstellar medium – interstellar dust Intergalactic medium – Intergalactic dust Local Interstellar Cloud G-Cloud References External links High Velocity Cloud — The Swinburne Astronomy Online (SAO) encyclopedia. Cloud Cloud Nebulae Cosmic dust Articles containing video clips
Interstellar cloud
[ "Astronomy" ]
810
[ "Interstellar media", "Outer space", "Nebulae", "Intergalactic media", "Astronomical objects", "Cosmic dust" ]
14,984
https://en.wikipedia.org/wiki/International%20Atomic%20Energy%20Agency
The International Atomic Energy Agency (IAEA) is an intergovernmental organization that seeks to promote the peaceful use of nuclear energy and to inhibit its use for any military purpose, including nuclear weapons. It was established in 1957 as an autonomous organization within the United Nations system; though governed by its own founding treaty, the organization reports to both the General Assembly and the Security Council of the United Nations, and is headquartered at the UN Office at Vienna, Austria. The IAEA was created in response to growing international concern toward nuclear weapons, especially amid rising tensions between the foremost nuclear powers, the United States and the Soviet Union. U.S. president Dwight D. Eisenhower's "Atoms for Peace" speech, which called for the creation of an international organization to monitor the global proliferation of nuclear resources and technology, is credited with catalyzing the formation of the IAEA, whose treaty came into force on 29 July 1957 upon U.S. ratification. The IAEA serves as an intergovernmental forum for scientific and technical cooperation on the peaceful use of nuclear technology and nuclear power worldwide. It maintains several programs that encourage the development of peaceful applications of nuclear energy, science, and technology; provide international safeguards against misuse of nuclear technology and nuclear materials; and promote and implement nuclear safety (including radiation protection) and nuclear security standards. The organization also conducts research in nuclear science and provides technical support and training in nuclear technology to countries worldwide, particularly in the developing world. Following the ratification of the Treaty on the Non-Proliferation of Nuclear Weapons in 1968, all non-nuclear powers are required to negotiate a safeguards agreement with the IAEA, which is given the authority to monitor nuclear programs and to inspect nuclear facilities. In 2005, the IAEA and its administrative head, Director General Mohamed ElBaradei, were awarded the Nobel Peace Prize "for their efforts to prevent nuclear energy from being used for military purposes and to ensure that nuclear energy for peaceful purposes is used in the safest possible way". Missions The IAEA is generally described as having three main missions: Peaceful uses: Promoting the peaceful uses of nuclear energy by its member states, Safeguards: Implementing safeguards to verify that nuclear energy is not used for military purposes, and Nuclear safety: Promoting high standards for nuclear safety. Peaceful uses According to Article II of the IAEA Statute, the objectives of the IAEA are "to accelerate and enlarge the contribution of atomic energy to peace, health and prosperity throughout the world" and to "ensure ... that assistance provided by it or at its request or under its supervision or control is not used in such a way as to further any military purpose." Its primary functions in this area, according to Article III, are to encourage research and development, to secure or provide materials, services, equipment, and facilities for Member States, and to foster the exchange of scientific and technical information and training. Three of the IAEA's six departments are principally charged with promoting the peaceful uses of nuclear energy. The Department of Nuclear Energy focuses on providing advice and services to Member States on nuclear power and the nuclear fuel cycle. 
The Department of Nuclear Sciences and Applications focuses on the use of non-power nuclear and isotope techniques to help IAEA Member States in the areas of water, energy, health, biodiversity, and agriculture. The Department of Technical Cooperation provides direct assistance to IAEA Member States through national, regional, and inter-regional projects involving training, expert missions, scientific exchanges, and provision of equipment. Safeguards Article II of the IAEA Statute defines the Agency's twin objectives as promoting peaceful uses of atomic energy and "ensur[ing], so far as it is able, that assistance provided by it or at its request or under its supervision or control is not used in such a way as to further any military purpose." To do this, the IAEA is authorized in Article III.A.5 of the Statute "to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities, and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy." The Department of Safeguards is responsible for carrying out this mission, through technical measures designed to verify the correctness and completeness of states' nuclear declarations. Nuclear safety The IAEA classifies safety as one of its top three priorities. In 2011 it spent 8.9 percent of its €352 million ($469 million) regular budget on making plants secure from accidents. Its remaining resources are used on the other two priorities: technical co-operation and preventing nuclear weapons proliferation. The IAEA itself says that, beginning in 1986, in response to the nuclear reactor explosion and disaster near Chernobyl, Ukraine, it redoubled its efforts in the field of nuclear safety, and that the same happened after the Fukushima disaster in Japan. In June 2011, the IAEA chief said he had "broad support for his plan to strengthen international safety checks on nuclear power plants to help avoid any repeat of Japan's Fukushima crisis". Peer-reviewed safety checks on reactors worldwide, organized by the IAEA, have been proposed. History In 1946, the United Nations Atomic Energy Commission was founded, but it stopped working in 1949 and was disbanded in 1952. In 1953, U.S. President Dwight D. Eisenhower proposed the creation of an international body to both regulate and promote the peaceful use of atomic power (nuclear power), in his Atoms for Peace address to the UN General Assembly. In September 1954, the United States proposed to the General Assembly the creation of an international agency to take control of fissile material, which could be used either for nuclear power or for nuclear weapons. This agency would establish a kind of "nuclear bank". The United States also called for an international scientific conference on all of the peaceful aspects of nuclear power. By November 1954, it had become clear that the Soviet Union would reject any international custody of fissile material if the United States did not agree to disarmament first, but that a clearinghouse for nuclear transactions might be possible. From 8 to 20 August 1955, the United Nations held the International Conference on the Peaceful Uses of Atomic Energy in Geneva, Switzerland.
In October 1956, a Conference on the IAEA Statute was held at the Headquarters of the United Nations to approve the founding document for the IAEA, which had been negotiated in 1955–1956 by a group of twelve countries. The Statute of the IAEA was approved on 23 October 1956 and came into force on 29 July 1957. Former US Congressman W. Sterling Cole served as the IAEA's first Director-General from 1957 to 1961. Cole served only one term, after which the IAEA was headed by two Swedes for nearly four decades: the scientist Sigvard Eklund held the job from 1961 to 1981, followed by former Swedish Foreign Minister Hans Blix, who served from 1981 to 1997. Blix was succeeded as Director General by Mohamed ElBaradei of Egypt, who served until November 2009. Beginning in 1986, in response to the nuclear reactor explosion and disaster near Chernobyl, Ukraine, the IAEA increased its efforts in the field of nuclear safety. The same happened after the 2011 Fukushima disaster in Japan. Both the IAEA and its then Director General, ElBaradei, were awarded the Nobel Peace Prize in 2005. In his acceptance speech in Oslo, ElBaradei stated that only one percent of the money spent on developing new weapons would be enough to feed the entire world, and that, if we hope to escape self-destruction, then nuclear weapons should have no place in our collective conscience, and no role in our security. On 2 July 2009, Yukiya Amano of Japan was elected as the Director General for the IAEA, defeating Abdul Samad Minty of South Africa and Luis E. Echávarri of Spain. On 3 July 2009, the Board of Governors voted to appoint Yukiya Amano "by acclamation", and the IAEA General Conference approved the appointment in September 2009. He took office on 1 December 2009. After Amano's death, his Chief of Coordination Cornel Feruta of Romania was named Acting Director General. On 2 August 2019, Rafael Grossi was presented as the Argentine candidate to become the Director General of IAEA. On 28 October 2019, the IAEA Board of Governors held its first vote to elect the new Director General, but none of the candidates secured the two-thirds majority (23 votes of the 35-member Board of Governors) needed to be elected. The next day, 29 October, the second voting round was held, and Grossi won 24 votes. He assumed office on 3 December 2019. Following a special meeting of the IAEA General Conference to approve his appointment, on 3 December Grossi became the first Latin American to head the Agency. During the Russian invasion of Ukraine, Grossi visited Ukraine multiple times as part of the ongoing efforts to help prevent a nuclear accident during the war. He warned against any complacency towards the dangers that the Zaporizhzhia Nuclear Power Plant, Europe's largest nuclear power plant, was facing. The plant has come under fire multiple times during the war. Structure and function General The IAEA's mission is guided by the interests and needs of Member States, strategic plans, and the vision embodied in the IAEA Statute (see below). Three main pillars – or areas of work – underpin the IAEA's mission: Safety and Security; Science and Technology; and Safeguards and Verification. The IAEA, as an autonomous organization, is not under the direct control of the UN, but it does report to both the UN General Assembly and the Security Council. Unlike most other specialized international agencies, the IAEA does much of its work with the Security Council, and not with the United Nations Economic and Social Council.
The structure and functions of the IAEA are defined by its founding document, the IAEA Statute (see below). The IAEA has three main bodies: the Board of Governors, the General Conference, and the Secretariat. The IAEA exists to pursue the "safe, secure and peaceful uses of nuclear sciences and technology" (Pillars 2005). The IAEA executes this mission with three main functions: the inspection of existing nuclear facilities to ensure their peaceful use, providing information and developing standards to ensure the safety and security of nuclear facilities, and serving as a hub for the various fields of science involved in the peaceful applications of nuclear technology. The IAEA recognizes knowledge as the nuclear energy industry's most valuable asset and resource, without which the industry cannot operate safely and economically. Following IAEA General Conference resolutions since 2002, Nuclear Knowledge Management, a formal program, was established to address Member States' priorities in the 21st century. In 2004, the IAEA developed a Programme of Action for Cancer Therapy (PACT). PACT responds to the needs of developing countries to establish, to improve, or to expand radiotherapy treatment programs. The IAEA is raising money to help efforts by its Member States to save lives and reduce the suffering of cancer victims. The IAEA has established programs to help developing countries in planning to systematically build the capability to manage a nuclear power program, including the Integrated Nuclear Infrastructure Group, which has carried out Integrated Nuclear Infrastructure Review missions in Indonesia, Jordan, Thailand and Vietnam. The IAEA reports that roughly 60 countries are considering how to include nuclear power in their energy plans. To enhance the sharing of information and experience among IAEA Member States concerning the seismic safety of nuclear facilities, in 2008 the IAEA established the International Seismic Safety Centre. This centre is establishing safety standards and providing for their application in relation to site selection, site evaluation and seismic design. The IAEA has had its headquarters in Vienna, Austria, since its founding. The IAEA has two "Regional Safeguards Offices", located in Toronto, Canada, and in Tokyo, Japan. The IAEA also has two liaison offices, located in New York City, United States, and in Geneva, Switzerland. In addition, the IAEA has laboratories and research centers located in Seibersdorf, Austria, in Monaco and in Trieste, Italy. Board of Governors The Board of Governors is one of two policy-making bodies of the IAEA. The Board consists of 22 member states elected by the General Conference, and at least 10 member states nominated by the outgoing Board. The outgoing Board designates the ten members who are the most advanced in atomic energy technology, plus the most advanced members from any of the following areas that are not represented by the first ten: North America, Latin America, Western Europe, Eastern Europe, Africa, the Middle East and South Asia, South East Asia and the Pacific, and the Far East. These members are designated for one-year terms. The General Conference elects 22 members from the remaining nations to two-year terms. Eleven are elected each year. The 22 elected members must also represent a stipulated geographic diversity. The Board, in its five meetings each year, is responsible for making most of the policies of the IAEA.
The Board makes recommendations to the General Conference on IAEA activities and budget, is responsible for publishing IAEA standards and appoints the Director-General subject to General Conference approval. Board members each receive one vote. Budget matters require a two-thirds majority. All other matters require only a simple majority. The simple majority also has the power to stipulate issues that will thereafter require a two-thirds majority. Two-thirds of all Board members must be present to call a vote. The Board elects its own chairman. General Conference The General Conference is made up of all 180 member states. It meets once a year, typically in September, to approve the actions and budgets passed on from the Board of Governors. The General Conference also approves the nominee for Director General and requests reports from the Board on issues in question (Statute). Each member receives one vote. Issues of budget, Statute amendment and suspension of a member's privileges require a two-thirds majority and all other issues require a simple majority. Similar to the Board, the General Conference can, by simple majority, designate issues to require a two-thirds majority. The General Conference elects a President at each annual meeting to facilitate an effective meeting. The President only serves for the duration of the session (Statute). The main function of the General Conference is to serve as a forum for debate on current issues and policies. Any of the other IAEA organs, the Director-General, the Board and member states can table issues to be discussed by the General Conference (IAEA Primer). This function of the General Conference is almost identical to the General Assembly of the United Nations. Secretariat The Secretariat is the professional and general service staff of the IAEA. The Secretariat is headed by the Director General. The Director General is responsible for enforcement of the actions passed by the Board of Governors and the General Conference. The Director General is selected by the Board and approved by the General Conference for renewable four-year terms. The Director General oversees six departments that do the actual work in carrying out the policies of the IAEA: Nuclear Energy, Nuclear Safety and Security, Nuclear Sciences and Applications, Safeguards, Technical Cooperation, and Management. The IAEA budget is in two parts. The regular budget funds most activities of the IAEA and is assessed to each member nation (€344 million in 2014). The Technical Cooperation Fund is funded by voluntary contributions with a general target in the US$90 million range. Criticism In 2011, Russian nuclear accident specialist Yuliy Andreev was critical of the response to Fukushima, and says that the IAEA did not learn from the 1986 Chernobyl disaster. He has accused the IAEA and corporations of "wilfully ignoring lessons from the world's worst nuclear accident 25 years ago to protect the industry's expansion". The IAEA's role "as an advocate for nuclear power has made it a target for protests". The journal Nature has reported that the IAEA response to the 2011 Fukushima Daiichi nuclear disaster in Japan was "sluggish and sometimes confusing", drawing calls for the agency to "take a more proactive role in nuclear safety". 
But nuclear experts say that the agency's complicated mandate and the constraints imposed by its member states mean that reforms will not happen quickly or easily, although its INES "emergency scale is very likely to be revisited" given the confusing way in which it was used in Japan. Some scientists say that the Fukushima nuclear accidents have revealed that the nuclear industry lacks sufficient oversight, leading to renewed calls to redefine the mandate of the IAEA so that it can better police nuclear power plants worldwide. There are several problems with the IAEA says Najmedin Meshkati of University of Southern California: It recommends safety standards, but member states are not required to comply; it promotes nuclear energy, but it also monitors nuclear use; it is the sole global organisation overseeing the nuclear energy industry, yet it is also weighed down by checking compliance with the Nuclear Non-Proliferation Treaty (NPT). In 2011, the journal Nature reported that the International Atomic Energy Agency should be strengthened to make independent assessments of nuclear safety and that "the public would be better served by an IAEA more able to deliver frank and independent assessments of nuclear crises as they unfold". Membership The process of joining the IAEA is fairly simple. Normally, a State would notify the Director General of its desire to join, and the Director would submit the application to the Board for consideration. If the Board recommends approval, and the General Conference approves the application for membership, the State must then submit its instrument of acceptance of the IAEA Statute to the United States, which functions as the depositary Government for the IAEA Statute. The State is considered a member when its acceptance letter is deposited. The United States then informs the IAEA, which notifies other IAEA Member States. Signature and ratification of the Nuclear Non-Proliferation Treaty (NPT) are not preconditions for membership in the IAEA. The IAEA has 180 member states. Most UN members and the Holy See are Member States of the IAEA. Four states have withdrawn from the IAEA. North Korea was a Member State from 1974 to 1994, but withdrew after the Board of Governors found it in non-compliance with its safeguards agreement and suspended most technical co-operation. Nicaragua became a member in 1957, withdrew its membership in 1970, and rejoined in 1977, Honduras joined in 1957, withdrew in 1967, and rejoined in 2003, while Cambodia joined in 1958, withdrew in 2003, and rejoined in 2009. Regional Cooperative Agreements There are four regional cooperative areas within IAEA, that share information, and organize conferences within their regions: AFRA The African Regional Cooperative Agreement for Research, Development and Training Related to Nuclear Science and Technology (AFRA): ARASIA Cooperative Agreement for Arab States in Asia for Research, Development and Training related to Nuclear Science and Technology (ARASIA): RCA Regional Cooperative Agreement for Research, Development and Training Related to Nuclear Science and Technology for Asia and the Pacific (RCA): ARCAL Cooperation Agreement for the Promotion of Nuclear Science and Technology in Latin America and the Caribbean (ARCAL): List of directors general Publications Typically issued in July each year, the IAEA Annual Report summarizes and highlights developments over the past year in major areas of the Agency's work. 
It includes a summary of major issues, activities, and achievements, and status tables and graphs related to safeguards, safety, and science and technology. Alongside the Annual Report, the IAEA also issues Topical Reviews which detail specific sectors of its work, comprising the Nuclear Safety Review, Nuclear Security Review, Safeguards Implementation Report, Nuclear Technology Review, and Technical Cooperation Report. IAEA Annual Report 2022 In the 2022 Annual Report, the IAEA demonstrated its commitment to its objectives despite global challenges. The report showcases the IAEA's initiatives aimed at fostering the safe, secure, and peaceful applications of nuclear technology. The agency's "Rays of Hope" initiative marked an effort to reduce disparities in cancer treatment by increasing the availability of radiation medicine, with a particular emphasis on African nations, in partnership with relevant professional societies and the World Health Organization (WHO). In response to the emergent threat posed by zoonotic diseases, the IAEA instituted the Zoonotic Disease Integrated Action (ZODIAC) initiative, which encourages international cooperation with member states, the WHO, and the Food and Agriculture Organization (FAO), to enhance preparedness and response. The "NUTeC Plastics" initiative reflects the agency's engagement with environmental concerns, utilizing nuclear technology to address the growing problem of plastic pollution. The IAEA also made strides in the field of nuclear energy with the introduction of the Nuclear Harmonization and Standardization Initiative (NHSI), aiming to harmonize regulatory standards to facilitate the deployment of small modular reactors, a critical component in the global pursuit of net-zero emissions. See also European Organization for Nuclear Research Global Initiative to Combat Nuclear Terrorism IAEA Areas Institute of Nuclear Materials Management International Energy Agency International Renewable Energy Agency International Radiation Protection Association International reactions to the Fukushima Daiichi nuclear disaster Lists of nuclear disasters and radioactive incidents List of states with nuclear weapons Nuclear ambiguity Nuclear Energy Agency OPANAL Proliferation Security Initiative United Nations Atomic Energy Commission (UNAEC) World Association of Nuclear Operators World Nuclear Association References Notes Works cited Board of Governors rules IAEA Primer Pillars of nuclear cooperation 2005 Radiation Protection of Patients Further reading Adamson, Matthew. "Showcasing the international atom: the IAEA Bulletin as a visual science diplomacy instrument, 1958–1962." British Journal for the History of Science (2023): 1–19. Fischer, David. History of the international atomic energy agency. The first forty years (1. International Atomic Energy Agency, 1997) online. Holloway, David. "The Soviet Union and the creation of the International Atomic Energy Agency." Cold War History 16.2 (2016): 177–193. Roehrlich, Elisabeth. "The Cold War, the developing world, and the creation of the International Atomic Energy Agency (IAEA), 1953–1957." Cold War History 16.2 (2016): 195–212. Roehrlich, Elisabeth. Inspectors for peace: A history of the International Atomic Energy Agency (JHU Press, 2022); full text online in Project MUSE; see also online scholarly review of this book Scheinman, Lawrence. The international atomic energy agency and world nuclear order (Routledge, 2016) online. Stoessinger, John G. "The International Atomic Energy Agency: The First Phase." 
International Organization 13.3 (1959): 394–411. External links International Atomic Energy Agency Official Website NUCLEUS – The IAEA Nuclear Knowledge and Information Portal Agreement on the Privileges and Immunities of the International Atomic Energy Agency, 1 July 1959 IAEA Department of Technical Cooperation website Programme of Action for Cancer Therapy (PACT) – Comprehensive Cancer Control Information and Fighting Cancer in Developing Countries International Nuclear Library Network (INLN) The Woodrow Wilson Center's Nuclear Proliferation International History Project or NPIHP is a global network of individuals and institutions engaged in the study of international nuclear history through archival documents, oral history interviews and other empirical sources. International Atomic Energy Agency International nuclear energy organizations Organizations awarded Nobel Peace Prizes Nuclear proliferation Atoms for Peace International organisations based in Austria Organizations established in 1957 Research institutes established in 1957 Scientific organizations established in 1957 1957 establishments in Austria 1957 in international relations
International Atomic Energy Agency
[ "Engineering" ]
4,863
[ "International nuclear energy organizations", "Nuclear organizations" ]
15,018
https://en.wikipedia.org/wiki/Infusoria
Infusoria is a word used to describe various freshwater microorganisms, including ciliates, copepods, euglenoids, planktonic crustaceans, protozoa, unicellular algae and small invertebrates. Some authors (e.g., Bütschli) have used the term as a synonym for Ciliophora. Such organisms were first identified in the 18th century, notably by the zoologist O. F. Müller in 1773. In modern, formal classifications, the term is considered obsolete; the microorganisms previously and colloquially referred to as Infusoria are mostly assigned to the kingdom Protista. In other contexts, the term is used to describe various aquatic microorganisms found in decomposing matter. Aquarium use Certain microorganisms, including cyclops and daphnia (among others), are sold as a supplemental fish food. Some fish stores or pet shops may have these infusoria available for live purchase, but typically they are sold in frozen cubes—for example, by the Japan-based fish food brand Hikari. Still, some advanced aquarists, with especially large collections of fish, will breed and cultivate their own supplies of the microorganisms. Infusoria are especially used by aquarists and fish breeders to feed fish fry; because of their small sizes, infusoria can be used to rear newly-hatched offspring of many common (and also less common) aquarium species. Many average home aquaria are unable to naturally supply sufficient infusoria for fish-rearing, so hobbyists may create and maintain their own cultures, either through utilizing their own existing aquarium water or by using one of the many commercial cultures available. Infusoria can be cultured at home by soaking any decomposing vegetative matter, such as papaya or cucumber peels, in a jar of aged (i.e., chlorine-free) water, preferably from an existing aquarium setup. The culture starts to proliferate in two to three days, depending on temperature and light received. The water first turns cloudy because of a rise in levels of bacteria, but clears up once the infusoria consume them. At this point, the infusoria are usually visible to the naked eye as small, white motile specks. They can be easily fed to fish with the use of a large turkey-baster or by gently scooping with a very fine net. Additionally, the water in which the infusoria are kept can be changed periodically, even one to two times per week, by draining and replacing up to 50% of the volume of water (for hygienic and maintenance purposes). See also Animalcules References Bibliography Ratcliff, Marc J. (2009). The Emergence of the Systematics of Infusoria. In: The Quest for the Invisible: Microscopy in the Enlightenment. Aldershot: Ashgate. External links Types of Protozoans and video Pond Life Identification Kit Fishkeeping Obsolete eukaryote taxa
Infusoria
[ "Biology" ]
631
[ "Eukaryotes", "Eukaryote stubs" ]
15,020
https://en.wikipedia.org/wiki/ISO/IEC%208859
ISO/IEC 8859 is a joint ISO and IEC series of standards for 8-bit character encodings. The series of standards consists of numbered parts, such as ISO/IEC 8859-1, ISO/IEC 8859-2, etc. There are 15 parts, excluding the abandoned ISO/IEC 8859-12. The ISO working group maintaining this series of standards has been disbanded. ISO/IEC 8859 parts 1, 2, 3, and 4 were originally Ecma International standard ECMA-94. Introduction While the bit patterns of the 95 printable ASCII characters are sufficient to exchange information in modern English, most other languages that use Latin alphabets need additional symbols not covered by ASCII. ISO/IEC 8859 sought to remedy this problem by utilizing the eighth bit in an 8-bit byte to allow positions for another 96 printable characters. Early encodings were limited to 7 bits because of restrictions of some data transmission protocols, and partially for historical reasons. However, more characters were needed than could fit in a single 8-bit character encoding, so several mappings were developed, including at least ten suitable for various Latin alphabets. The ISO/IEC 8859 standard parts only define printable characters, although they explicitly set apart the byte ranges 0x00–1F and 0x7F–9F as "combinations that do not represent graphic characters" (i.e. which are reserved for use as control characters) in accordance with ISO/IEC 4873; they were designed to be used in conjunction with a separate standard defining the control functions associated with these bytes, such as ISO 6429 or ISO 6630. To this end a series of encodings registered with the IANA add the C0 control set (control characters mapped to bytes 0 to 31) from ISO 646 and the C1 control set (control characters mapped to bytes 128 to 159) from ISO 6429, resulting in full 8-bit character maps with most, if not all, bytes assigned. These sets have ISO-8859-n as their preferred MIME name or, in cases where a preferred MIME name is not specified, their canonical name. Many people use the terms ISO/IEC 8859-n and ISO-8859-n interchangeably. ISO/IEC 8859-11 did not get such a charset assigned, presumably because it was almost identical to TIS 620. Characters The ISO/IEC 8859 standard is designed for reliable information exchange, not typography; the standard omits symbols needed for high-quality typography, such as optional ligatures, curly quotation marks, dashes, etc. As a result, high-quality typesetting systems often use proprietary or idiosyncratic extensions on top of the ASCII and ISO/IEC 8859 standards, or use Unicode instead. An inexact rule based on practical experience states that if a character or symbol was not already part of a widely used data-processing character set and was also not usually provided on typewriter keyboards for a national language, it did not get in. Hence the directional double quotation marks « and » used for some European languages were included, but not the directional double quotation marks “ and ” used for English and some other languages. French did not get its œ and Œ ligatures because they could be typed as 'oe'. Likewise, Ÿ, needed for all-caps text, was dropped as well. Albeit under different codepoints, these three characters were later reintroduced with ISO/IEC 8859-15 in 1999, which also introduced the new euro sign character €. Likewise Dutch did not get the ij and IJ letters, because Dutch speakers had become used to typing these as two letters instead. 
Romanian did not initially get its Ș/ș and Ț/ț (with comma) letters, because these letters were initially unified with Ş/ş and Ţ/ţ (with cedilla) by the Unicode Consortium, considering the shapes with comma beneath to be glyph variants of the shapes with cedilla. However, the letters with explicit comma below were later added to the Unicode standard and are also in ISO/IEC 8859-16. Most of the ISO/IEC 8859 encodings provide diacritic marks required for various European languages using the Latin script. Others provide non-Latin alphabets: Greek, Cyrillic, Hebrew, Arabic and Thai. Most of the encodings contain only spacing characters, although the Thai, Hebrew, and Arabic ones do also contain combining characters. The standard makes no provision for the scripts of East Asian languages (CJK), as their ideographic writing systems require many thousands of code points. Although it uses Latin-based characters, Vietnamese does not fit into 96 positions (without using combining diacritics such as in Windows-1258) either. Each Japanese syllabic alphabet (hiragana or katakana, see Kana) would fit, as in JIS X 0201, but like several other alphabets of the world they are not encoded in the ISO/IEC 8859 system. The parts of ISO/IEC 8859 ISO/IEC 8859 is divided into the following parts (a table listing the individual parts appeared here): Each part of ISO/IEC 8859 is designed to support languages that often borrow from each other, so the characters needed by each language are usually accommodated by a single part. However, there are some characters and language combinations that are not accommodated without transcriptions. Efforts were made to make conversions as smooth as possible. For example, German has all of its seven special characters at the same positions in all Latin variants (1–4, 9, 10, 13–16), and in many positions the characters only differ in the diacritics between the sets. In particular, variants 1–4 were designed jointly, and have the property that every encoded character appears either at a given position or not at all. Table (A code-point comparison table appeared here; its legend marked unassigned code points and the new additions in the ISO/IEC 8859-7:2003 and ISO/IEC 8859-8:1999 versions, which were previously unassigned.) Relationship to Unicode and the UCS Since 1991, the Unicode Consortium has been working with ISO and IEC to develop the Unicode Standard and ISO/IEC 10646: the Universal Character Set (UCS) in tandem. Newer editions of ISO/IEC 8859 express characters in terms of their Unicode/UCS names and the U+nnnn notation, effectively causing each part of ISO/IEC 8859 to be a Unicode/UCS character encoding scheme that maps a very small subset of the UCS to single 8-bit bytes. The first 256 characters in Unicode and the UCS are identical to those in ISO/IEC-8859-1 (Latin-1). Single-byte character sets including the parts of ISO/IEC 8859 and derivatives of them were favoured throughout the 1990s, having the advantages of being well-established and more easily implemented in software: the equation of one byte to one character is simple and adequate for most single-language applications, and there are no combining characters or variant forms. As Unicode-enabled operating systems became more widespread, ISO/IEC 8859 and other legacy encodings became less popular.
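Because the first 256 Unicode code points coincide with ISO/IEC 8859-1, converting Latin-1 text to Unicode needs no lookup table: each byte value already is the code point it represents. The following minimal sketch in C shows such a conversion to UTF-8; the function and variable names are invented for this illustration and are not taken from any particular library.

#include <stdio.h>

/* Convert an ISO/IEC 8859-1 (Latin-1) byte string to UTF-8.
 * Each Latin-1 byte equals its Unicode code point: 0x00-0x7F stay
 * single bytes, 0x80-0xFF become two-byte UTF-8 sequences. */
static size_t latin1_to_utf8(const unsigned char *in, size_t len,
                             unsigned char *out)
{
    size_t o = 0;
    for (size_t i = 0; i < len; i++) {
        unsigned char c = in[i];
        if (c < 0x80) {
            out[o++] = c;                   /* ASCII passes through   */
        } else {
            out[o++] = 0xC0 | (c >> 6);     /* leading byte           */
            out[o++] = 0x80 | (c & 0x3F);   /* continuation byte      */
        }
    }
    out[o] = '\0';
    return o;
}

int main(void)
{
    /* "café" in ISO-8859-1: the é is the single byte 0xE9 (= U+00E9). */
    const unsigned char latin1[] = { 'c', 'a', 'f', 0xE9 };
    unsigned char utf8[9];                  /* worst case 2*len + NUL  */
    latin1_to_utf8(latin1, sizeof latin1, utf8);
    printf("%s\n", (const char *)utf8);     /* "café" on a UTF-8 terminal */
    return 0;
}

Mapping any other ISO/IEC 8859 part, or Windows-1252, to Unicode would instead require a 256-entry conversion table of the kind mentioned above.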
While remnants of ISO 8859 and single-byte character models remain entrenched in many operating systems, programming languages, data storage systems, networking applications, display hardware, and end-user application software, most modern computing applications use Unicode internally, and rely on conversion tables to map to and from other encodings, when necessary. Current status The ISO/IEC 8859 standard was maintained by ISO/IEC Joint Technical Committee 1, Subcommittee 2, Working Group 3 (ISO/IEC JTC 1/SC 2/WG 3). In June 2004, WG 3 disbanded, and maintenance duties were transferred to SC 2. The standard is not currently being updated, as the Subcommittee's only remaining working group, WG 2, is concentrating on development of Unicode's Universal Coded Character Set. The WHATWG Encoding Standard, which specifies the character encodings permitted in HTML5 which compliant browsers must support, includes most parts of ISO/IEC 8859, except for parts 1, 9 and 11, which are instead interpreted as Windows-1252, Windows-1254 and Windows-874 respectively. Authors of new pages and the designers of new protocols are instructed to use UTF-8 instead. See also List of information system character sets Number Forms RPL character set (an ISO/IEC 8859-1 superset on HP calculators, referred to as "ECMA-94" as well) DEC Multinational Character Set (MCS) DEC National Replacement Character Set (NRCS) Notes References Further reading Published versions of each part of ISO/IEC 8859 are available, for a fee, from the ISO catalogue site and from the IEC Webstore. PDF versions of the final drafts of some parts of ISO/IEC 8859 as submitted to the ISO/IEC JTC 1/SC 2/WG 3 for review & publication are available at the WG 3 web site: ISO/IEC 8859-1:1998 - 8-bit single-byte coded graphic character sets, Part 1: Latin alphabet No. 1 (draft dated February 12, 1998, published April 15, 1998) ISO/IEC 8859-4:1998 - 8-bit single-byte coded graphic character sets, Part 4: Latin alphabet No. 4 (draft dated February 12, 1998, published July 1, 1998) ISO/IEC 8859-7:1999 - 8-bit single-byte coded graphic character sets, Part 7: Latin/Greek alphabet (draft dated June 10, 1999; superseded by ISO/IEC 8859-7:2003, published October 10, 2003) ISO/IEC 8859-10:1998 - 8-bit single-byte coded graphic character sets, Part 10: Latin alphabet No. 6 (draft dated February 12, 1998, published July 15, 1998) ISO/IEC 8859-11:1999 - 8-bit single-byte coded graphic character sets, Part 11: Latin/Thai character set (draft dated June 22, 1999; superseded by ISO/IEC 8859-11:2001, published 15 December 2001) ISO/IEC 8859-13:1998 - 8-bit single-byte coded graphic character sets, Part 13: Latin alphabet No. 7 (draft dated April 15, 1998, published October 15, 1998) ISO/IEC 8859-15:1998 - 8-bit single-byte coded graphic character sets, Part 15: Latin alphabet No. 9 (draft dated August 1, 1997; superseded by ISO/IEC 8859-15:1999, published March 15, 1999) ISO/IEC 8859-16:2000 - 8-bit single-byte coded graphic character sets, Part 16: Latin alphabet No. 10 (draft dated November 15, 1999; superseded by ISO/IEC 8859-16:2001, published July 15, 2001) ECMA standards, which in intent correspond exactly to the ISO/IEC 8859 character set standards, can be found at: Standard ECMA-94: 8-Bit Single Byte Coded Graphic Character Sets - Latin Alphabets No. 1 to No. 
4 2nd edition (June 1986) Standard ECMA-113: 8-Bit Single-Byte Coded Graphic Character Sets - Latin/Cyrillic Alphabet 3rd edition (December 1999) Standard ECMA-114: 8-Bit Single-Byte Coded Graphic Character Sets - Latin/Arabic Alphabet 2nd edition (December 2000) Standard ECMA-118: 8-Bit Single-Byte Coded Graphic Character Sets - Latin/Greek Alphabet (December 1986) Standard ECMA-121: 8-Bit Single-Byte Coded Graphic Character Sets - Latin/Hebrew Alphabet 2nd edition (December 2000) Standard ECMA-128: 8-Bit Single-Byte Coded Graphic Character Sets - Latin Alphabet No. 5 2nd edition (December 1999) Standard ECMA-144: 8-Bit Single-Byte Coded Character Sets - Latin Alphabet No. 6 3rd edition (December 2000) ISO/IEC 8859-1 to Unicode mapping tables as plain text files are at the Unicode FTP site. Informal descriptions and code charts for most ISO/IEC 8859 standards are available in ISO/IEC 8859 Alphabet Soup (Mirror).
https://en.wikipedia.org/wiki/Infrared
Infrared (IR; sometimes called infrared light) is electromagnetic radiation (EMR) with wavelengths longer than those of visible light but shorter than microwaves. The infrared spectral band begins with waves that are just longer than those of red light (the longest waves in the visible spectrum), so IR is invisible to the human eye. IR is generally understood to include wavelengths from around 780 nm to 1 mm. IR is commonly divided between longer-wavelength thermal IR, emitted from terrestrial sources, and shorter-wavelength IR or near-IR, part of the solar spectrum. Longer IR wavelengths (30–100 μm) are sometimes included as part of the terahertz radiation band. Almost all black-body radiation from objects near room temperature is in the IR band. As a form of EMR, IR carries energy and momentum, exerts radiation pressure, and has properties corresponding to both those of a wave and of a particle, the photon. It was long known that fires emit invisible heat; in 1681 the pioneering experimenter Edme Mariotte showed that glass, though transparent to sunlight, obstructed radiant heat. In 1800 the astronomer Sir William Herschel discovered that infrared radiation is a type of invisible radiation in the spectrum lower in energy than red light, by means of its effect on a thermometer. Slightly more than half of the energy from the Sun was eventually found, through Herschel's studies, to arrive on Earth in the form of infrared. The balance between absorbed and emitted infrared radiation has an important effect on Earth's climate. Infrared radiation is emitted or absorbed by molecules when they change their rotational-vibrational movements. It excites vibrational modes in a molecule through a change in the dipole moment, making it a useful frequency range for study of these energy states for molecules of the proper symmetry. Infrared spectroscopy examines absorption and transmission of photons in the infrared range. Infrared radiation is used in industrial, scientific, military, commercial, and medical applications. Night-vision devices using active near-infrared illumination allow people or animals to be observed without the observer being detected. Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, to detect objects such as planets, and to view highly red-shifted objects from the early days of the universe. Infrared thermal-imaging cameras are used to detect heat loss in insulated systems, to observe changing blood flow in the skin, to assist firefighting, and to detect the overheating of electrical components. Military and civilian applications include target acquisition, surveillance, night vision, homing, and tracking. Humans at normal body temperature radiate chiefly at wavelengths around 10 μm. Non-military uses include thermal efficiency analysis, environmental monitoring, industrial facility inspections, detection of grow-ops, remote temperature sensing, short-range wireless communication, spectroscopy, and weather forecasting. Definition and relationship to the electromagnetic spectrum There is no universally accepted definition of the range of infrared radiation. Typically, it is taken to extend from the nominal red edge of the visible spectrum at 780 nm to 1 mm. This range of wavelengths corresponds to a frequency range of approximately 430 THz down to 300 GHz. Beyond infrared is the microwave portion of the electromagnetic spectrum.
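The wavelength and frequency limits quoted here are linked by ν = c/λ; note that the frequently quoted 430 THz upper edge corresponds to a red edge near 700 nm, while the 780 nm edge works out to roughly 384 THz. A quick Python check (only the speed of light is assumed):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def to_frequency_hz(wavelength_m: float) -> float:
    """Vacuum wavelength in metres -> frequency in hertz (nu = c / lambda)."""
    return C / wavelength_m

print(f"{to_frequency_hz(1e-3) / 1e9:.0f} GHz")     # 1 mm   -> 300 GHz
print(f"{to_frequency_hz(700e-9) / 1e12:.0f} THz")  # 700 nm -> ~428 THz
print(f"{to_frequency_hz(780e-9) / 1e12:.0f} THz")  # 780 nm -> ~384 THz
```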
Increasingly, terahertz radiation is counted as part of the microwave band, not infrared, moving the band edge of infrared to 0.1 mm (3 THz). Nature Sunlight, at an effective temperature of 5,780 K (5,510 °C, 9,940 °F), is composed of near-thermal-spectrum radiation that is slightly more than half infrared. At zenith, sunlight provides an irradiance of just over 1 kW per square meter at sea level. Of this energy, 527 W is infrared radiation, 445 W is visible light, and 32 W is ultraviolet radiation. Nearly all the infrared radiation in sunlight is near infrared, shorter than 4 μm. On the surface of Earth, at far lower temperatures than the surface of the Sun, some thermal radiation consists of infrared in the mid-infrared region, much longer than in sunlight. Black-body, or thermal, radiation is continuous: it radiates at all wavelengths. Of these natural thermal radiation processes, only lightning and natural fires are hot enough to produce much visible energy, and fires produce far more infrared than visible-light energy. Regions In general, objects emit infrared radiation across a spectrum of wavelengths, but sometimes only a limited region of the spectrum is of interest because sensors usually collect radiation only within a specific bandwidth. Thermal infrared radiation also has a maximum emission wavelength, which is inversely proportional to the absolute temperature of object, in accordance with Wien's displacement law. The infrared band is often subdivided into smaller sections, although how the IR spectrum is thereby divided varies between different areas in which IR is employed. Visible limit Infrared radiation is generally considered to begin with wavelengths longer than visible by the human eye. There is no hard wavelength limit to what is visible, as the eye's sensitivity decreases rapidly but smoothly, for wavelengths exceeding about 700 nm. Therefore wavelengths just longer than that can be seen if they are sufficiently bright, though they may still be classified as infrared according to usual definitions. Light from a near-IR laser may thus appear dim red and can present a hazard since it may actually be quite bright. Even IR at wavelengths up to 1,050 nm from pulsed lasers can be seen by humans under certain conditions. Commonly used subdivision scheme A commonly used subdivision scheme is: NIR and SWIR together is sometimes called "reflected infrared", whereas MWIR and LWIR is sometimes referred to as "thermal infrared". CIE division scheme The International Commission on Illumination (CIE) recommended the division of infrared radiation into the following three bands: ISO 20473 scheme ISO 20473 specifies the following scheme: Astronomy division scheme Astronomers typically divide the infrared spectrum as follows: These divisions are not precise and can vary depending on the publication. The three regions are used for observation of different temperature ranges, and hence different environments in space. The most common photometric system used in astronomy allocates capital letters to different spectral regions according to filters used; I, J, H, and K cover the near-infrared wavelengths; L, M, N, and Q refer to the mid-infrared region. These letters are commonly understood in reference to atmospheric windows and appear, for instance, in the titles of many papers. 
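Wien's displacement law mentioned above can be written λ_max = b/T with b ≈ 2898 μm·K; the following sketch (the constant and the example temperatures are standard reference values, not taken from this article) shows why sunlight peaks near the visible while skin and room-temperature objects peak around 10 μm:

```python
WIEN_B_UM_K = 2898.0  # Wien's displacement constant, approx., in um*K

def peak_wavelength_um(temperature_k: float) -> float:
    """Black-body peak-emission wavelength in micrometres (Wien's law)."""
    return WIEN_B_UM_K / temperature_k

print(f"Sun, 5780 K:  {peak_wavelength_um(5780):.2f} um")  # ~0.50 um (visible)
print(f"Skin, 310 K:  {peak_wavelength_um(310):.1f} um")   # ~9.3 um (LWIR)
print(f"Room, 300 K:  {peak_wavelength_um(300):.1f} um")   # ~9.7 um (LWIR)
```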
Sensor response division scheme A third scheme divides up the band based on the response of various detectors: Near-infrared: from 0.7 to 1.0 μm (from the approximate end of the response of the human eye to that of silicon). Short-wave infrared: 1.0 to 3 μm (from the cut-off of silicon to that of the MWIR atmospheric window). InGaAs covers to about 1.8 μm; the less sensitive lead salts cover this region. Cryogenically cooled MCT detectors can cover the region of 1.0–2.5μm. Mid-wave infrared: 3 to 5 μm (defined by the atmospheric window and covered by indium antimonide, InSb and mercury cadmium telluride, HgCdTe, and partially by lead selenide, PbSe). Long-wave infrared: 8 to 12, or 7 to 14 μm (this is the atmospheric window covered by HgCdTe and microbolometers). Very-long wave infrared (VLWIR) (12 to about 30 μm, covered by doped silicon). Near-infrared is the region closest in wavelength to the radiation detectable by the human eye. mid- and far-infrared are progressively further from the visible spectrum. Other definitions follow different physical mechanisms (emission peaks, vs. bands, water absorption) and the newest follow technical reasons (the common silicon detectors are sensitive to about 1,050 nm, while InGaAs's sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending on the specific configuration). No international standards for these specifications are currently available. The onset of infrared is defined (according to different standards) at various values typically between 700 nm and 800 nm, but the boundary between visible and infrared light is not precisely defined. The human eye is markedly less sensitive to light above 700 nm wavelength, so longer wavelengths make insignificant contributions to scenes illuminated by common light sources. Particularly intense near-IR light (e.g., from lasers, LEDs or bright daylight with the visible light filtered out) can be detected up to approximately 780 nm, and will be perceived as red light. Intense light sources providing wavelengths as long as 1,050 nm can be seen as a dull red glow, causing some difficulty in near-IR illumination of scenes in the dark (usually this practical problem is solved by indirect illumination). Leaves are particularly bright in the near IR, and if all visible light leaks from around an IR-filter are blocked, and the eye is given a moment to adjust to the extremely dim image coming through a visually opaque IR-passing photographic filter, it is possible to see the Wood effect that consists of IR-glowing foliage. Telecommunication bands In optical communications, the part of the infrared spectrum that is used is divided into seven bands based on availability of light sources, transmitting/absorbing materials (fibers), and detectors: The C-band is the dominant band for long-distance telecommunications networks. The S and L bands are based on less well established technology, and are not as widely deployed. Heat Infrared radiation is popularly known as "heat radiation", but light and electromagnetic waves of any frequency will heat surfaces that absorb them. Infrared light from the Sun accounts for 49% of the heating of Earth, with the rest being caused by visible light that is absorbed then re-radiated at longer wavelengths. Visible light or ultraviolet-emitting lasers can char paper and incandescently hot objects emit visible radiation. 
Objects at room temperature will emit radiation concentrated mostly in the 8 to 25 μm band, but this is not distinct from the emission of visible light by incandescent objects and ultraviolet by even hotter objects (see black body and Wien's displacement law). Heat is energy in transit that flows due to a temperature difference. Unlike heat transmitted by thermal conduction or thermal convection, thermal radiation can propagate through a vacuum. Thermal radiation is characterized by a particular spectrum of many wavelengths that are associated with emission from an object, due to the vibration of its molecules at a given temperature. Thermal radiation can be emitted from objects at any wavelength, and at very high temperatures such radiation is associated with spectra far above the infrared, extending into visible, ultraviolet, and even X-ray regions (e.g. the solar corona). Thus, the popular association of infrared radiation with thermal radiation is only a coincidence based on typical (comparatively low) temperatures often found near the surface of planet Earth. The concept of emissivity is important in understanding the infrared emissions of objects. This is a property of a surface that describes how its thermal emissions deviate from the ideal of a black body. To further explain, two objects at the same physical temperature may not show the same infrared image if they have differing emissivity. For example, for any pre-set emissivity value, objects with higher emissivity will appear hotter, and those with a lower emissivity will appear cooler (assuming, as is often the case, that the surrounding environment is cooler than the objects being viewed). When an object has less than perfect emissivity, it obtains properties of reflectivity and/or transparency, and so the temperature of the surrounding environment is partially reflected by and/or transmitted through the object. If the object were in a hotter environment, then a lower emissivity object at the same temperature would likely appear to be hotter than a more emissive one. For that reason, incorrect selection of emissivity and not accounting for environmental temperatures will give inaccurate results when using infrared cameras and pyrometers. Applications Night vision Infrared is used in night vision equipment when there is insufficient visible light to see. Night vision devices operate through a process involving the conversion of ambient light photons into electrons that are then amplified by a chemical and electrical process and then converted back into visible light. Infrared light sources can be used to augment the available ambient light for conversion by night vision devices, increasing in-the-dark visibility without actually using a visible light source. The use of infrared light and night vision devices should not be confused with thermal imaging, which creates images based on differences in surface temperature by detecting infrared radiation (heat) that emanates from objects and their surrounding environment. Thermography Infrared radiation can be used to remotely determine the temperature of objects (if the emissivity is known). This is termed thermography, or in the case of very hot objects in the NIR or visible it is termed pyrometry. Thermography (thermal imaging) is mainly used in military and industrial applications but the technology is reaching the public market in the form of infrared cameras on cars due to greatly reduced production costs. 
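The effect of emissivity on apparent temperature can be made concrete with the Stefan–Boltzmann relation P = εσAT⁴; in the sketch below the two surfaces and their emissivity values are illustrative assumptions, chosen only to show how strongly the radiated power, and hence an infrared camera's reading, depends on ε:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power_w(emissivity: float, area_m2: float, temp_k: float) -> float:
    """Total power radiated by a grey body: P = eps * sigma * A * T^4."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# Two 1 m^2 surfaces, both at 300 K; the emissivities are illustrative guesses.
for name, eps in [("polished metal", 0.05), ("matte paint", 0.95)]:
    print(f"{name}: {radiated_power_w(eps, 1.0, 300.0):.0f} W")
# A camera configured with eps = 0.95 for both would read the metal surface
# as much colder than it really is.
```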
Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 9,000–14,000 nm or 9–14 μm) and produce images of that radiation. Since infrared radiation is emitted by all objects based on their temperatures, according to the black-body radiation law, thermography makes it possible to "see" one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature, therefore thermography allows one to see variations in temperature (hence the name). Hyperspectral imaging A hyperspectral image is a "picture" containing continuous spectrum through a wide spectral range at each pixel. Hyperspectral imaging is gaining importance in the field of applied spectroscopy particularly with NIR, SWIR, MWIR, and LWIR spectral regions. Typical applications include biological, mineralogical, defence, and industrial measurements. Thermal infrared hyperspectral imaging can be similarly performed using a thermographic camera, with the fundamental difference that each pixel contains a full LWIR spectrum. Consequently, chemical identification of the object can be performed without a need for an external light source such as the Sun or the Moon. Such cameras are typically applied for geological measurements, outdoor surveillance and UAV applications. Other imaging In infrared photography, infrared filters are used to capture the near-infrared spectrum. Digital cameras often use infrared blockers. Cheaper digital cameras and camera phones have less effective filters and can view intense near-infrared, appearing as a bright purple-white color. This is especially pronounced when taking pictures of subjects near IR-bright areas (such as near a lamp), where the resulting infrared interference can wash out the image. There is also a technique called 'T-ray' imaging, which is imaging using far-infrared or terahertz radiation. Lack of bright sources can make terahertz photography more challenging than most other infrared imaging techniques. Recently T-ray imaging has been of considerable interest due to a number of new developments such as terahertz time-domain spectroscopy. Tracking Infrared tracking, also known as infrared homing, refers to a passive missile guidance system, which uses the emission from a target of electromagnetic radiation in the infrared part of the spectrum to track it. Missiles that use infrared seeking are often referred to as "heat-seekers" since infrared (IR) is just below the visible spectrum of light in frequency and is radiated strongly by hot bodies. Many objects such as people, vehicle engines, and aircraft generate and retain heat, and as such, are especially visible in the infrared wavelengths of light compared to objects in the background. Heating Infrared radiation can be used as a deliberate heating source. For example, it is used in infrared saunas to heat the occupants. It may also be used in other heating applications, such as to remove ice from the wings of aircraft (de-icing). Infrared heating is also becoming more popular in industrial manufacturing processes, e.g. curing of coatings, forming of plastics, annealing, plastic welding, and print drying. In these applications, infrared heaters replace convection ovens and contact heating. Cooling A variety of technologies or proposed technologies take advantage of infrared emissions to cool buildings or other systems. 
The LWIR (8–15 μm) region is especially useful since some radiation at these wavelengths can escape into space through the atmosphere's infrared window. This is how passive daytime radiative cooling (PDRC) surfaces are able to achieve sub-ambient cooling temperatures under direct solar intensity, enhancing terrestrial heat flow to outer space with zero energy consumption or pollution. PDRC surfaces maximize shortwave solar reflectance to lessen heat gain while maintaining strong longwave infrared (LWIR) thermal radiation heat transfer. When imagined on a worldwide scale, this cooling method has been proposed as a way to slow and even reverse global warming, with some estimates proposing a global surface area coverage of 1-2% to balance global heat fluxes. Communications IR data transmission is also employed in short-range communication among computer peripherals and personal digital assistants. These devices usually conform to standards published by IrDA, the Infrared Data Association. Remote controls and IrDA devices use infrared light-emitting diodes (LEDs) to emit infrared radiation that may be concentrated by a lens into a beam that the user aims at the detector. The beam is modulated, i.e. switched on and off, according to a code which the receiver interprets. Usually very near-IR is used (below 800 nm) for practical reasons. This wavelength is efficiently detected by inexpensive silicon photodiodes, which the receiver uses to convert the detected radiation to an electric current. That electrical signal is passed through a high-pass filter which retains the rapid pulsations due to the IR transmitter but filters out slowly changing infrared radiation from ambient light. Infrared communications are useful for indoor use in areas of high population density. IR does not penetrate walls and so does not interfere with other devices in adjoining rooms. Infrared is the most common way for remote controls to command appliances. Infrared remote control protocols like RC-5, SIRC, are used to communicate with infrared. Free-space optical communication using infrared lasers can be a relatively inexpensive way to install a communications link in an urban area operating at up to 4 gigabit/s, compared to the cost of burying fiber optic cable, except for the radiation damage. "Since the eye cannot detect IR, blinking or closing the eyes to help prevent or reduce damage may not happen." Infrared lasers are used to provide the light for optical fiber communications systems. Wavelengths around 1,330 nm (least dispersion) or 1,550 nm (best transmission) are the best choices for standard silica fibers. IR data transmission of audio versions of printed signs is being researched as an aid for visually impaired people through the Remote infrared audible signage project. Transmitting IR data from one device to another is sometimes referred to as beaming. IR is sometimes used for assistive audio as an alternative to an audio induction loop. Spectroscopy Infrared vibrational spectroscopy (see also near-infrared spectroscopy) is a technique that can be used to identify molecules by analysis of their constituent bonds. Each chemical bond in a molecule vibrates at a frequency characteristic of that bond. A group of atoms in a molecule (e.g., CH2) may have multiple modes of oscillation caused by the stretching and bending motions of the group as a whole. If an oscillation leads to a change in dipole in the molecule then it will absorb a photon that has the same frequency. 
The vibrational frequencies of most molecules correspond to the frequencies of infrared light. Typically, the technique is used to study organic compounds using light radiation from the mid-infrared, 4,000–400 cm−1. A spectrum of all the frequencies of absorption in a sample is recorded. This can be used to gain information about the sample composition in terms of chemical groups present and also its purity (for example, a wet sample will show a broad O-H absorption around 3200 cm−1). The unit for expressing radiation in this application, cm−1, is the spectroscopic wavenumber. It is the frequency divided by the speed of light in vacuum. Thin film metrology In the semiconductor industry, infrared light can be used to characterize materials such as thin films and periodic trench structures. By measuring the reflectance of light from the surface of a semiconductor wafer, the index of refraction (n) and the extinction Coefficient (k) can be determined via the Forouhi–Bloomer dispersion equations. The reflectance from the infrared light can also be used to determine the critical dimension, depth, and sidewall angle of high aspect ratio trench structures. Meteorology Weather satellites equipped with scanning radiometers produce thermal or infrared images, which can then enable a trained analyst to determine cloud heights and types, to calculate land and surface water temperatures, and to locate ocean surface features. The scanning is typically in the range 10.3–12.5 μm (IR4 and IR5 channels). Clouds with high and cold tops, such as cyclones or cumulonimbus clouds, are often displayed as red or black, lower warmer clouds such as stratus or stratocumulus are displayed as blue or grey, with intermediate clouds shaded accordingly. Hot land surfaces are shown as dark-grey or black. One disadvantage of infrared imagery is that low clouds such as stratus or fog can have a temperature similar to the surrounding land or sea surface and do not show up. However, using the difference in brightness of the IR4 channel (10.3–11.5 μm) and the near-infrared channel (1.58–1.64 μm), low clouds can be distinguished, producing a fog satellite picture. The main advantage of infrared is that images can be produced at night, allowing a continuous sequence of weather to be studied. These infrared pictures can depict ocean eddies or vortices and map currents such as the Gulf Stream, which are valuable to the shipping industry. Fishermen and farmers are interested in knowing land and water temperatures to protect their crops against frost or increase their catch from the sea. Even El Niño phenomena can be spotted. Using color-digitized techniques, the gray-shaded thermal images can be converted to color for easier identification of desired information. The main water vapour channel at 6.40 to 7.08 μm can be imaged by some weather satellites and shows the amount of moisture in the atmosphere. Climatology In the field of climatology, atmospheric infrared radiation is monitored to detect trends in the energy exchange between the Earth and the atmosphere. These trends provide information on long-term changes in Earth's climate. It is one of the primary parameters studied in research into global warming, together with solar radiation. A pyrgeometer is utilized in this field of research to perform continuous outdoor measurements. This is a broadband infrared radiometer with sensitivity for infrared radiation between approximately 4.5 μm and 50 μm. 
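The spectroscopic wavenumber defined above converts to wavelength as λ = 1/ν̃ (so λ in μm equals 10,000 divided by ν̃ in cm⁻¹) and to frequency as f = cν̃; a short sketch covering the mid-infrared range quoted for organic compounds:

```python
C_CM_PER_S = 2.99792458e10  # speed of light in vacuum, cm/s

def wavelength_um(wavenumber_cm1: float) -> float:
    """Spectroscopic wavenumber (cm^-1) -> vacuum wavelength (um)."""
    return 1e4 / wavenumber_cm1

def frequency_thz(wavenumber_cm1: float) -> float:
    """Spectroscopic wavenumber (cm^-1) -> frequency (THz), f = c * wavenumber."""
    return C_CM_PER_S * wavenumber_cm1 / 1e12

for wn in (4000, 3200, 400):  # mid-IR limits and the broad O-H band
    print(f"{wn} cm^-1 = {wavelength_um(wn):.2f} um = {frequency_thz(wn):.1f} THz")
```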
Astronomy Astronomers observe objects in the infrared portion of the electromagnetic spectrum using optical components, including mirrors, lenses and solid state digital detectors. For this reason it is classified as part of optical astronomy. To form an image, the components of an infrared telescope need to be carefully shielded from heat sources, and the detectors are chilled using liquid helium. The sensitivity of Earth-based infrared telescopes is significantly limited by water vapor in the atmosphere, which absorbs a portion of the infrared radiation arriving from space outside of selected atmospheric windows. This limitation can be partially alleviated by placing the telescope observatory at a high altitude, or by carrying the telescope aloft with a balloon or an aircraft. Space telescopes do not suffer from this handicap, and so outer space is considered the ideal location for infrared astronomy. The infrared portion of the spectrum has several useful benefits for astronomers. Cold, dark molecular clouds of gas and dust in our galaxy will glow with radiated heat as they are irradiated by imbedded stars. Infrared can also be used to detect protostars before they begin to emit visible light. Stars emit a smaller portion of their energy in the infrared spectrum, so nearby cool objects such as planets can be more readily detected. (In the visible light spectrum, the glare from the star will drown out the reflected light from a planet.) Infrared light is also useful for observing the cores of active galaxies, which are often cloaked in gas and dust. Distant galaxies with a high redshift will have the peak portion of their spectrum shifted toward longer wavelengths, so they are more readily observed in the infrared. Cleaning Infrared cleaning is a technique used by some motion picture film scanners, film scanners and flatbed scanners to reduce or remove the effect of dust and scratches upon the finished scan. It works by collecting an additional infrared channel from the scan at the same position and resolution as the three visible color channels (red, green, and blue). The infrared channel, in combination with the other channels, is used to detect the location of scratches and dust. Once located, those defects can be corrected by scaling or replaced by inpainting. Art conservation and analysis Infrared reflectography can be applied to paintings to reveal underlying layers in a non-destructive manner, in particular the artist's underdrawing or outline drawn as a guide. Art conservators use the technique to examine how the visible layers of paint differ from the underdrawing or layers in between (such alterations are called pentimenti when made by the original artist). This is very useful information in deciding whether a painting is the prime version by the original artist or a copy, and whether it has been altered by over-enthusiastic restoration work. In general, the more pentimenti, the more likely a painting is to be the prime version. It also gives useful insights into working practices. Reflectography often reveals the artist's use of carbon black, which shows up well in reflectograms, as long as it has not also been used in the ground underlying the whole painting. Recent progress in the design of infrared-sensitive cameras makes it possible to discover and depict not only underpaintings and pentimenti, but entire paintings that were later overpainted by the artist. 
Notable examples are Picasso's Woman Ironing and Blue Room; in both cases, a portrait of a man has been made visible under the painting as it is known today. Similar uses of infrared are made by conservators and scientists on various types of objects, especially very old written documents such as the Dead Sea Scrolls, the Roman works in the Villa of the Papyri, and the Silk Road texts found in the Dunhuang Caves. Carbon black used in ink can show up extremely well. Biological systems The pit viper has a pair of infrared sensory pits on its head. There is uncertainty regarding the exact thermal sensitivity of this biological infrared detection system. Other organisms that have thermoreceptive organs are pythons (family Pythonidae), some boas (family Boidae), the Common Vampire Bat (Desmodus rotundus), a variety of jewel beetles (Melanophila acuminata), darkly pigmented butterflies (Pachliopta aristolochiae and Troides rhadamantus plateni), and possibly blood-sucking bugs (Triatoma infestans). By detecting the heat that their prey emits, crotaline and boid snakes identify and capture their prey using their IR-sensitive pit organs. Similarly, IR-sensitive pits on the Common Vampire Bat (Desmodus rotundus) aid in the identification of blood-rich regions on its warm-blooded victim. The jewel beetle Melanophila acuminata locates forest fires with its infrared pit organs and deposits its eggs on recently burnt trees. Thermoreceptors on the wings and antennae of butterflies with dark pigmentation, such as Pachliopta aristolochiae and Troides rhadamantus plateni, shield them from heat damage while they bask in the sun. Additionally, it is hypothesised that thermoreceptors let bloodsucking bugs (Triatoma infestans) locate their warm-blooded victims by sensing their body heat. Some fungi like Venturia inaequalis require near-infrared light for ejection. Although near-infrared vision (780–1,000 nm) has long been deemed impossible due to noise in visual pigments, sensation of near-infrared light was reported in the common carp and in three cichlid species. Fish use NIR to capture prey and for phototactic swimming orientation. NIR sensation in fish may be relevant under poor lighting conditions during twilight and in turbid surface waters. Photobiomodulation Near-infrared light therapy, or photobiomodulation, is used for the treatment of chemotherapy-induced oral ulceration as well as for wound healing. There is some work relating to anti-herpes virus treatment. Research projects include work on central nervous system healing effects via cytochrome c oxidase upregulation and other possible mechanisms. Health hazards Strong infrared radiation in certain industrial high-heat settings may be hazardous to the eyes, resulting in damage or blindness to the user. Since the radiation is invisible, special IR-proof goggles must be worn in such places. Scientific history The discovery of infrared radiation is ascribed to William Herschel, the astronomer, in the early 19th century. Herschel published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light from the sun and detected the infrared, beyond the red part of the spectrum, through an increase in the temperature recorded on a thermometer. He was surprised at the result and called the new rays "Calorific Rays". The term "infrared" did not appear until the late 19th century. An earlier experiment in 1790 by Marc-Auguste Pictet demonstrated the reflection and focusing of radiant heat via mirrors in the absence of visible light.
Other important dates include: 1830: Leopoldo Nobili made the first thermopile IR detector. 1840: John Herschel produces the first thermal image, called a thermogram. 1860: Gustav Kirchhoff formulated the blackbody theorem . 1873: Willoughby Smith discovered the photoconductivity of selenium. 1878: Samuel Pierpont Langley invents the first bolometer, a device which is able to measure small temperature fluctuations, and thus the power of far infrared sources. 1879: Stefan–Boltzmann law formulated empirically that the power radiated by a blackbody is proportional to T4. 1880s and 1890s: Lord Rayleigh and Wilhelm Wien solved part of the blackbody equation, but both solutions diverged in parts of the electromagnetic spectrum. This problem was called the "ultraviolet catastrophe and infrared catastrophe". 1892: Willem Henri Julius published infrared spectra of 20 organic compounds measured with a bolometer in units of angular displacement. 1901: Max Planck published the blackbody equation and theorem. He solved the problem by quantizing the allowable energy transitions. 1905: Albert Einstein developed the theory of the photoelectric effect. 1905–1908: William Coblentz published infrared spectra in units of wavelength (micrometers) for several chemical compounds in Investigations of Infra-Red Spectra. 1917: Theodore Case developed the thallous sulfide detector, which helped produce the first infrared search and track device able to detect aircraft at a range of one mile (1.6 km). 1935: Lead salts – early missile guidance in World War II. 1938: Yeou Ta predicted that the pyroelectric effect could be used to detect infrared radiation. 1945: The Zielgerät 1229 "Vampir" infrared weapon system was introduced as the first portable infrared device for military applications. 1952: Heinrich Welker grew synthetic InSb crystals. 1950s and 1960s: Nomenclature and radiometric units defined by Fred Nicodemenus, G. J. Zissis and R. Clark; Robert Clark Jones defined D*. 1958: W. D. Lawson (Royal Radar Establishment in Malvern) discovered IR detection properties of Mercury cadmium telluride (HgCdTe). 1958: Falcon and Sidewinder missiles were developed using infrared technology. 1960s: Paul Kruse and his colleagues at Honeywell Research Center demonstrate the use of HgCdTe as an effective compound for infrared detection. 1962: J. Cooper demonstrated pyroelectric detection. 1964: W. G. Evans discovered infrared thermoreceptors in a pyrophile beetle. 1965: First IR handbook; first commercial imagers (Barnes, Agema (now part of FLIR Systems Inc.)); Richard Hudson's landmark text; F4 TRAM FLIR by Hughes; phenomenology pioneered by Fred Simmons and A. T. Stair; U.S. Army's night vision lab formed (now Night Vision and Electronic Sensors Directorate (NVESD)), and Rachets develops detection, recognition and identification modeling there. 1970: Willard Boyle and George E. Smith proposed CCD at Bell Labs for picture phone. 1973: Common module program started by NVESD. 1978: Infrared imaging astronomy came of age, observatories planned, IRTF on Mauna Kea opened; 32 × 32 and 64 × 64 arrays produced using InSb, HgCdTe and other materials. 2013: On 14 February, researchers developed a neural implant that gives rats the ability to sense infrared light, which for the first time provides living creatures with new abilities, instead of simply replacing or augmenting existing abilities. 
See also Notes References External links Infrared: A Historical Perspective (Omega Engineering) Infrared Data Association, a standards organization for infrared data interconnection SIRC Protocol How to build a USB infrared receiver to control PCs remotely Infrared Waves: detailed explanation of infrared light (NASA) Herschel's original paper from 1800 announcing the discovery of infrared light The thermographic's library, a collection of thermograms Infrared reflectography in analysis of paintings at ColourLex Molly Faries, Techniques and Applications – Analytical Capabilities of Infrared Reflectography: An Art Historian's Perspective, in Scientific Examination of Art: Modern Techniques in Conservation and Analysis, Sackler NAS Colloquium, 2005
https://en.wikipedia.org/wiki/ISO%208601
ISO 8601 is an international standard covering the worldwide exchange and communication of date and time-related data. It is maintained by the International Organization for Standardization (ISO) and was first published in 1988, with updates in 1991, 2000, 2004, and 2019, and an amendment in 2022. The standard provides a well-defined, unambiguous method of representing calendar dates and times in worldwide communications, especially to avoid misinterpreting numeric dates and times when such data is transferred between countries with different conventions for writing numeric dates and times. ISO 8601 applies to these representations and formats: dates, in the Gregorian calendar (including the proleptic Gregorian calendar); times, based on the 24-hour timekeeping system, with optional UTC offset; time intervals; and combinations thereof. The standard does not assign specific meaning to any element of the dates/times represented: the meaning of any element depends on the context of its use. Dates and times represented cannot use words that do not have a specified numerical meaning within the standard (thus excluding names of years in the Chinese calendar), or that do not use computer characters (excludes images or sounds). In representations that adhere to the ISO 8601 interchange standard, dates and times are arranged such that the greatest temporal term (typically a year) is placed at the left and each successively lesser term is placed to the right of the previous term. Representations must be written in a combination of Arabic numerals and the specific computer characters (such as "", ":", "T", "W", "Z") that are assigned specific meanings within the standard; that is, such commonplace descriptors of dates (or parts of dates) as "January", "Thursday", or "New Year's Day" are not allowed in interchange representations within the standard. History The first edition of the ISO 8601 standard was published as ISO 8601:1988 in 1988. It unified and replaced a number of older ISO standards on various aspects of date and time notation: ISO 2014, ISO 2015, ISO 2711, ISO 3307, and ISO 4031. It has been superseded by a second edition ISO 8601:2000 in 2000, by a third edition ISO 8601:2004 published on 1 December 2004, and withdrawn and revised by ISO 8601-1:2019 and ISO 8601-2:2019 on 25 February 2019. ISO 8601 was prepared by, and is under the direct responsibility of, ISO Technical Committee TC 154. ISO 2014, though superseded, is the standard that originally introduced the all-numeric date notation in most-to-least-significant order . The ISO week numbering system was introduced in ISO 2015, and the identification of days by ordinal dates was originally defined in ISO 2711. Issued in February 2019, the fourth revision of the standard ISO 8601-1:2019 represents slightly updated contents of the previous ISO 8601:2004 standard, whereas the new ISO 8601-2:2019 defines various extensions such as uncertainties or parts of the Extended Date/Time Format (EDTF). An amendment was published in October 2022 featuring minor technical clarifications and attempts to remove ambiguities in definitions. The most significant change, however, was the reintroduction of the "24:00:00" format to refer to the instant at the end of a calendar day. General principles Date and time values are ordered from the largest to smallest unit of time: year, month (or week), day, hour, minute, second, and fraction of second. 
The lexicographical order of the representation thus corresponds to chronological order, except for date representations involving negative years or time offset. This allows dates to be naturally sorted by, for example, file systems. Each date and time value has a fixed number of digits that must be padded with leading zeros. Representations can be done in one of two formats: a basic format with a minimal number of separators, or an extended format with separators added to enhance human readability. The standard notes that "The basic format should be avoided in plain text." The separator used between date values (year, month, week, and day) is the hyphen, while the colon is used as the separator between time values (hours, minutes, and seconds). For example, the 6th day of the 1st month of the year 2009 may be written as "2009-01-06" in the extended format or as "20090106" in the basic format without ambiguity. For reduced precision, any number of values may be dropped from any of the date and time representations, but in the order from the least to the most significant. For example, "2004-05" is a valid ISO 8601 date, which indicates May (the fifth month) 2004. This format will never represent the 5th day of an unspecified month in 2004, nor will it represent a time-span extending from 2004 into 2005. If necessary for a particular application, the standard supports the addition of a decimal fraction to the smallest time value in the representation. Dates The standard uses the Gregorian calendar, which "serves as an international standard for civil use". ISO 8601:2004 fixes a reference calendar date to the Gregorian calendar of 20 May 1875 as the date the Convention du Mètre (Metre Convention) was signed in Paris (the explicit reference date was removed in ISO 8601-1:2019). However, ISO calendar dates before the convention are still compatible with the Gregorian calendar all the way back to the official introduction of the Gregorian calendar on 15 October 1582. Earlier dates, in the proleptic Gregorian calendar, may be used by mutual agreement of the partners exchanging information. The standard states that every date must be consecutive, so usage of the Julian calendar would be contrary to the standard (because at the switchover date, the dates would not be consecutive). Years ISO 8601 prescribes, as a minimum, a four-digit year [YYYY] to avoid the year 2000 problem. It therefore represents years from 0000 to 9999, year 0000 being equal to 1 BC and all others AD, similar to astronomical year numbering. However, years before 1583 (the first full year following the introduction of the Gregorian calendar) are not automatically allowed by the standard. Instead, the standard states that "values in the range [0000] through [1582] shall only be used by mutual agreement of the partners in information interchange". To represent years before 0000 or after 9999, the standard also permits the expansion of the year representation but only by prior agreement between the sender and the receiver. An expanded year representation [±YYYYY] must have an agreed-upon number of extra year digits beyond the four-digit minimum, and it must be prefixed with a + or − sign instead of the more common AD/BC (or CE/BCE) notation; by convention 1 BC is labelled +0000, 2 BC is labelled −0001, and so on. Calendar dates Calendar date representations are in the form [YYYY]-[MM]-[DD] (extended) or [YYYY][MM][DD] (basic). [YYYY] indicates a four-digit year, 0000 through 9999. [MM] indicates a two-digit month of the year, 01 through 12.
[DD] indicates a two-digit day of that month, 01 through 31. For example, "5 April 1981" may be represented as either "1981-04-05" in the extended format or "19810405" in the basic format. The standard also allows for calendar dates to be written with reduced precision. For example, one may write "1981-04" to mean "1981 April". One may simply write "1981" to refer to that year, "198" to refer to the decade from 1980 to 1989 inclusive, or "19" to refer to the century from 1900 to 1999 inclusive. Although the standard allows both the YYYY-MM-DD and YYYYMMDD formats for complete calendar date representations, if the day [DD] is omitted then only the YYYY-MM format is allowed. By disallowing dates of the form YYYYMM, the standard avoids confusion with the truncated representation YYMMDD (still often used). The 2000 version also allowed writing the truncation "--04-05" to mean "April 5" but the 2004 version does not allow omitting the year when a month is present. Examples: 7 January 2000 can be written as "2000-01-07" or "20000107" Week dates Week date representations are in the formats [YYYY]-[Www]-[D] (extended) or [YYYY][Www][D] (basic). [YYYY] indicates the ISO week-numbering year which is slightly different from the traditional Gregorian calendar year (see below). [Www] is the week number prefixed by the letter W, from W01 through W53. [D] is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday. There are several mutually equivalent and compatible descriptions of week 01: the week with the first business day in the starting year (considering that Saturdays, Sundays and 1 January are non-working days), the week with the starting year's first Thursday in it (the formal ISO definition), the week with 4 January in it, the first week with the majority (four or more) of its days in the starting year, and the week starting with the Monday in the period 29 December to 4 January. As a consequence, if 1 January is on a Monday, Tuesday, Wednesday or Thursday, it is in week 01. If 1 January is on a Friday, Saturday or Sunday, it is in week 52 or 53 of the previous year (there is no week 00). 28 December is always in the last week of its year. The week number can be described by counting the Thursdays: week 12 contains the 12th Thursday of the year. The ISO week-numbering year starts at the first day (Monday) of week 01 and ends at the Sunday before the new ISO year (hence without overlap or gap). It consists of 52 or 53 full weeks. The first ISO week of a year may have up to three days that are actually in the Gregorian calendar year that is ending; if three, they are Monday, Tuesday and Wednesday. Similarly, the last ISO week of a year may have up to three days that are actually in the Gregorian calendar year that is starting; if three, they are Friday, Saturday, and Sunday. The Thursday of each ISO week is always in the Gregorian calendar year denoted by the ISO week-numbering year. Examples: Monday 29 December 2008 is written "2009-W01-1"; Sunday 3 January 2010 is written "2009-W53-7". Ordinal dates An ordinal date identifies a day by its ordinal number within the year. It is represented as "YYYY-DDD" (or YYYYDDD), where [YYYY] indicates a year and [DDD] is the "day of year", from 001 through 365 (366 in leap years). For example, "1981-095" is the same as "1981-04-05". This simple form is preferable for occasions when the arbitrary nature of week and month definitions is more of an impediment than an aid, for instance, when comparing dates from different calendars.
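All three date forms, calendar, week, and ordinal, are available from Python's standard library (the strftime directives %G, %V, %u and %j, and date.isocalendar()); a brief sketch using the 5 April 1981 example from above:

```python
from datetime import date

d = date(1981, 4, 5)

print(d.isoformat())            # calendar date, extended format: 1981-04-05
print(d.strftime("%Y%m%d"))     # calendar date, basic format:    19810405
print(d.strftime("%G-W%V-%u"))  # week date:                      1981-W14-7
print(d.strftime("%Y-%j"))      # ordinal date:                   1981-095
print(d.isocalendar())          # (ISO year, ISO week, ISO weekday)
```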
This format is used with simple hardware systems that have a need for a date system, but where including full calendar calculation software may be a significant nuisance. This system is sometimes referred to as "Julian Date", but this can cause confusion with the astronomical Julian day, a sequential count of the number of days since day 0 beginning Greenwich noon, Julian proleptic calendar (or noon on ISO date which uses the Gregorian proleptic calendar with a year 0000). Times ISO 8601 uses the 24-hour clock system. As of ISO 8601-1:2019, the basic format is T[hh][mm][ss] and the extended format is T[hh]:[mm]:[ss]. Earlier versions omitted the T (representing time) in both formats. [hh] refers to a zero-padded hour between 00 and 24. [mm] refers to a zero-padded minute between 00 and 59. [ss] refers to a zero-padded second between 00 and 60 (where 60 is only used to denote an added leap second). So a time might appear as either "T134730" in the basic format or "T13:47:30" in the extended format. ISO 8601-1:2019 allows the T to be omitted in the extended format, as in "13:47:30", but only allows the T to be omitted in the basic format when there is no risk of confusion with date expressions. Either the seconds, or the minutes and seconds, may be omitted from the basic or extended time formats for greater brevity but decreased precision; the resulting reduced precision time formats are: T[hh][mm] in basic format or T[hh]:[mm] in extended format, when seconds are omitted. T[hh], when both seconds and minutes are omitted. As of ISO 8601-1:2019/Amd 1:2022, "00:00:00" may be used to refer to midnight corresponding to the instant at the beginning of a calendar day; and "24:00:00" to refer to midnight corresponding to the instant at the end of a calendar day. ISO 8601-1:2019 as originally published removed "24:00:00" as a representation for the end of day although it had been permitted in earlier versions of the standard. A decimal fraction may be added to the lowest order time element present in any of these representations. A decimal mark, either a comma or a dot on the baseline, is used as a separator between the time element and its fraction. (Following ISO 80000-1 according to ISO 8601:1-2019, it does not stipulate a preference except within International Standards, but with a preference for a comma according to ISO 8601:2004.) For example, to denote "14 hours, 30 and one half minutes", do not include a seconds figure; represent it as "14:30,5", "T1430,5", "14:30.5", or "T1430.5". There is no limit on the number of decimal places for the decimal fraction. However, the number of decimal places needs to be agreed to by the communicating parties. For example, in Microsoft SQL Server, the precision of a decimal fraction is 3 for a DATETIME, i.e., "yyyy-mm-ddThh:mm:ss[.mmm]". Time zone designators Time zones in ISO 8601 are represented as local time (with the location unspecified), as UTC, or as an offset from UTC. Local time (unqualified) If no UTC relation information is given with a time representation, the time is assumed to be in local time. While it may be safe to assume local time when communicating in the same time zone, it is ambiguous when used in communicating across different time zones. Even within a single geographic time zone, some local times will be ambiguous if the region observes daylight saving time. It is usually preferable to indicate a time zone (zone designator) using the standard's notation. 
Coordinated Universal Time (UTC) If the time is in UTC, add a Z directly after the time without a space. Z is the zone designator for the zero UTC offset. "09:30 UTC" is therefore represented as "09:30Z" or "T0930Z". "14:45:15 UTC" would be "14:45:15Z" or "T144515Z". The Z suffix in the ISO 8601 time representation is sometimes referred to as "Zulu time" or "Zulu meridian" because the same letter is used to designate the Zulu time zone. However the ACP 121 standard that defines the list of military time zones makes no mention of UTC and derives the "Zulu time" from the Greenwich Mean Time which was formerly used as the international civil time standard. GMT is no longer precisely defined by the scientific community and can refer to either UTC or UT1 depending on context. Time offsets from UTC The UTC offset is appended directly to the time instead of "Z" suffix above; other nautical time zone letters are not used. The offset is applied to UTC to get the civil time in the designated time zone in the format '±[hh]:[mm]', '±[hh][mm]', or '±[hh]'. A negative UTC offset describes a time zone west of the prime meridian where the civil time is behind UTC. So the zone designation for New York (on standard time) would be "−05:00","−0500", or "−05". Conversely, a positive UTC offset describes a time zone east of the prime meridian where the civil time is ahead of UTC. So the zone designation for Cairo will be "+02:00","+0200", or "+02". A time zone where the civil time coincides with UTC is always designated as positive, though the offset is zero (see related specifications below). So the zone designation for London (on standard time) would be "+00:00", "+0000", or "+00". Additional examples "−10:00" for Honolulu "−06:00" for Chicago on standard time, or Denver on daylight saving time "−03:00" for Brasília "+01:00" for London on British Summer Time "+04:00" for Dubai "+05:30" for India "+09:00" for Japan See List of UTC offsets for other UTC offsets. Other time offset specifications It is not permitted to state a zero value time offset with a negative sign, as "−00:00", "−0000", or "−00". The section dictating sign usage states that a plus sign must be used for a positive or zero value, and a minus sign for a negative value. A plus-minus-sign () may also be used if it is available. Contrary to this rule, RFC 3339, which is otherwise a profile of ISO 8601, permits the use of "−00" with the same denotation as "+00" but a differing connotation: an unknown UTC offset. To represent a negative offset, ISO 8601 specifies using a minus sign (). If the interchange character set is limited and does not have a minus sign character, then the hyphen-minus should be used (). ASCII does not have a minus sign, so its hyphen-minus character (code 4510) would be used. If the character set has a minus sign, such as in Unicode, then that character should be used. The HTML character entity invocation for is &minus;. ISO 8601-2:2019 allows for general durations for time offsets. For example, more precision can be added to the time offset with the format '<time>±[hh]:[mm]:[ss].[sss]' or '<time>±[n]H[n]M[n]S' as below. Combined date and time representations A single point in time can be represented by concatenating a complete date expression, the letter "T" as a delimiter, and a valid time expression. For example, . In ISO 8601:2004 it was permitted to omit the "T" character by mutual agreement as in , but this provision was removed in ISO 8601-1:2019. 
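Python's datetime module follows these zone-designator conventions closely: an aware datetime renders its offset in the ±hh:mm extended form, and from Python 3.11 onward fromisoformat() also accepts the Z suffix. A short sketch (the date and offsets are arbitrary):

```python
from datetime import datetime, timedelta, timezone

utc = datetime(2007, 4, 5, 14, 30, tzinfo=timezone.utc)
print(utc.isoformat())                     # 2007-04-05T14:30:00+00:00

offset = timezone(timedelta(hours=-5))     # e.g. New York on standard time
print(utc.astimezone(offset).isoformat())  # 2007-04-05T09:30:00-05:00

# Python 3.11+ also accepts the "Z" zone designator when parsing.
parsed = datetime.fromisoformat("2007-04-05T14:30:00Z")
print(parsed.utcoffset())                  # 0:00:00
```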
Separating date and time parts with other characters such as space is not allowed in ISO 8601, but allowed in its profile RFC 3339. If a time zone designator is required, it follows the combined date and time. For example, "2007-04-05T14:30Z" or "2007-04-05T14:30+09:00". Either basic or extended formats may be used, but both date and time must use the same format. The date expression may be calendar, week, or ordinal, and must use a complete representation. The time may be represented using a specified reduced precision format. Durations Durations define the amount of intervening time in a time interval and are represented by the format P[n]Y[n]M[n]DT[n]H[n]M[n]S or P[n]W. In these representations, the [n] is replaced by the value for each of the date and time elements that follow the [n]. Leading zeros are not required, but the maximum number of digits for each element should be agreed to by the communicating parties. The capital letters P, Y, M, W, D, T, H, M, and S are designators for each of the date and time elements and are not replaced. P is the duration designator (for period) placed at the start of the duration representation. Y is the year designator that follows the value for the number of calendar years. M is the month designator that follows the value for the number of calendar months. W is the week designator that follows the value for the number of weeks. D is the day designator that follows the value for the number of calendar days. T is the time designator that precedes the time components of the representation. H is the hour designator that follows the value for the number of hours. M is the minute designator that follows the value for the number of minutes. S is the second designator that follows the value for the number of seconds. For example, "P3Y6M4DT12H30M5S" represents a duration of "three years, six months, four days, twelve hours, thirty minutes, and five seconds". Date and time elements including their designator may be omitted if their value is zero, and lower-order elements may also be omitted for reduced precision. For example, "P23DT23H" and "P4Y" are both acceptable duration representations. However, at least one element must be present, thus "P" is not a valid representation for a duration of 0 seconds. "PT0S" or "P0D", however, are both valid and represent the same duration. To resolve ambiguity, "P1M" is a one-month duration and "PT1M" is a one-minute duration (note the time designator, T, that precedes the time value). The smallest value used may also have a decimal fraction, as in "P0.5Y" to indicate half a year. This decimal fraction may be specified with either a comma or a full stop, as in "P0,5Y" or "P0.5Y". The standard does not prohibit date and time values in a duration representation from exceeding their "carry over points" except as noted below. Thus, "PT36H" could be used as well as "P1DT12H" for representing the same duration. But keep in mind that "PT36H" is not the same as "P1DT12H" when switching from or to daylight saving time. Alternatively, a format for duration based on combined date and time representations may be used by agreement between the communicating parties, either in the basic format PYYYYMMDDThhmmss or in the extended format P[YYYY]-[MM]-[DD]T[hh]:[mm]:[ss]. For example, the first duration shown above would be "P0003-06-04T12:30:05". However, individual date and time values cannot exceed their moduli (e.g. a value of 13 for the month or 25 for the hour would not be permissible). The standard describes a duration as part of time intervals, which are discussed in the next section.
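A small Python sketch of duration handling (parse_duration is a hypothetical helper written for this illustration, not anything defined by the standard). It deliberately accepts only day and time components because, as the next paragraph explains, year and month components have no fixed length outside a concrete time interval:

```python
import re
from datetime import timedelta

DURATION_RE = re.compile(
    r"^P(?:(?P<days>\d+)D)?"
    r"(?:T(?:(?P<hours>\d+)H)?(?:(?P<minutes>\d+)M)?(?:(?P<seconds>\d+(?:[.,]\d+)?)S)?)?$"
)

def parse_duration(text: str) -> timedelta:
    """Parse a day/time-only ISO 8601 duration such as 'P1DT12H' into a timedelta."""
    match = DURATION_RE.match(text)
    if match is None or not any(match.groupdict().values()):
        raise ValueError(f"unsupported or invalid duration: {text!r}")
    parts = {k: float(v.replace(",", ".")) for k, v in match.groupdict().items() if v}
    return timedelta(**parts)

print(parse_duration("P1DT12H"))  # 1 day, 12:00:00
print(parse_duration("PT36H"))    # 1 day, 12:00:00 (nominally the same length)
```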
The duration format on its own is ambiguous regarding the total number of days in a calendar year and calendar month. The number of seconds in a calendar day is also ambiguous because of leap seconds. For example, "P1M" on its own could be 28, 29, 30, or 31 days. There is no ambiguity when used in a time interval. Using the example "P2M", a duration of two calendar months: the interval 2003-02-15T00:00:00Z/P2M ends two calendar months later at 2003-04-15T00:00:00Z, which is 59 days later; the interval 2003-07-15T00:00:00Z/P2M ends two calendar months later at 2003-09-15T00:00:00Z, which is 62 days later. The duration format (or a subset thereof) is widely used independent of time intervals, as with the Java 8 Duration class, which supports a subset of the duration format. Time intervals A time interval is the intervening time between two time points. The amount of intervening time is expressed by a duration (as described in the previous section). The two time points (start and end) are expressed by either a combined date and time representation or just a date representation. There are four ways to express a time interval: Start and end, such as "2007-03-01T13:00:00Z/2008-05-11T15:30:00Z" Start and duration, such as "2007-03-01T13:00:00Z/P1Y2M10DT2H30M" Duration and end, such as "P1Y2M10DT2H30M/2008-05-11T15:30:00Z" Duration only, such as "P1Y2M10DT2H30M", with additional context information Of these, the first three require two values separated by an interval designator, which is usually a solidus (more commonly referred to as a forward slash "/"). Section 3.2.6 of ISO 8601-1:2019 notes that "A solidus may be replaced by a double hyphen ["--"] by mutual agreement of the communicating partners", and previous versions used notations like "2000--2002". Use of a double hyphen instead of a solidus allows inclusion in computer filenames; in common operating systems, a solidus is a reserved character and is not allowed in a filename. For <start>/<end> expressions, if any elements are missing from the end value, they are assumed to be the same as for the start value, including the time zone. This feature of the standard allows for concise representations of time intervals. For example, the date of a two-hour meeting including the start and finish times could be shown as "2007-12-14T13:30/15:30", where "/15:30" implies "/2007-12-14T15:30" (the same date as the start), or the beginning and end dates of a monthly billing period as "2008-02-15/03-14", where "/03-14" implies "/2008-03-14" (the same year as the start). If greater precision is desirable to represent the time interval, then more time elements can be added to the representation. An interval denoted by dates alone can start at any time on the first date and end at any time on the last date, whereas an interval that also specifies times pins down the start and end instants exactly; to explicitly include the whole of the start and end dates, the interval must run from the beginning of the first day to the beginning of the day after the last. Repeating intervals Repeating intervals are specified in clause "4.5 Recurring time interval". They are formed by adding "R[n]/" to the beginning of an interval expression, where R is used as the letter itself and [n] is replaced by the number of repetitions. Leaving out the value for [n], or specifying a value of -1, means an unbounded number of repetitions. A value of 0 for [n] means the interval is not repeated. If the interval specifies the start (forms 1 and 2 above), then this is the start of the repeating interval. If the interval specifies the end but not the start (form 3 above), then this is the end of the repeating interval.
For example, to repeat the interval of "P1Y2M10DT2H30M" five times starting at "2008-03-01T13:00:00Z", use "R5/2008-03-01T13:00:00Z/P1Y2M10DT2H30M". Truncated representations (deprecated) ISO 8601:2000 allowed truncation (by agreement), where leading components of a date or time are omitted. Notably, this allowed two-digit years to be used as well as the ambiguous formats YY-MM-DD and YYMMDD. This provision was removed in ISO 8601:2004. Some truncated forms simply omit the leading "-" for the century; other forms have one leading "-" per omitted century, year, month, week, hour and minute, as necessary to disambiguate the format. Standardised extensions ISO 8601-2:2019 defines a set of standardised extensions to the ISO 8601 date and time formats. Extended Date/Time Format (EDTF) The EDTF is given as an example of a profile of ISO 8601. Some of its features are: Uncertain and approximate qualifiers, '?' and '~', as well as their combined use, '%'; they can be applied to the whole date or to individual components. Time intervals with an open (unbounded) end or an unknown end. Exponential and significant figure notation in years. Special "month" values indicating sub-year groupings such as seasons and quarters. Syntax for serializing a list of dates. The EDTF features are described in the "Date and Time Extensions" section of ISO 8601-2:2019. Repeat rules for recurring time intervals ISO 8601-2:2019 also defines a format to constrain repeating intervals based on syntax from iCalendar. Usage On the Internet, the World Wide Web Consortium (W3C) uses the IETF standard based on ISO 8601 in defining a profile of the standard that restricts the supported date and time formats to reduce the chance of error and the complexity of software. The very simple specification is based on a draft of RFC 3339, mentioned below. ISO 8601 is referenced by several specifications, but the full range of options of ISO 8601 is not always used. For example, the various electronic program guide standards for TV, digital radio, etc. use several forms to describe points in time and durations. The ID3 audio meta-data specification also makes use of a subset of ISO 8601. The X.690 encoding standard's GeneralizedTime makes use of another subset of ISO 8601. Commerce As of 2006, the ISO week date appears in its basic form on major brand commercial packaging in the United States. Its appearance depends on the particular packaging, canning, or bottling plant more than any particular brand. The format is particularly useful for quality assurance, so that production errors can be readily traced. RFCs IETF RFC 3339 defines a profile of ISO 8601 for use in Internet protocols and standards. It explicitly excludes durations and dates before the common era. The more complex formats such as week numbers and ordinal days are not permitted. RFC 3339 deviates from ISO 8601 in allowing a zero time zone offset to be specified as "-00:00", which ISO 8601 forbids. RFC 3339 intends "-00:00" to carry the connotation that it is not stating a preferred time zone, whereas the conforming "+00:00" or any non-zero offset connotes that the offset being used is preferred. This convention regarding "-00:00" is derived from earlier RFCs, such as RFC 2822, which uses it for timestamps in email headers. RFC 2822 made no claim that any part of its timestamp format conforms to ISO 8601, and so was free to use this convention without conflict. Building upon the foundations of RFC 3339, the IETF introduced the Internet Extended Date/Time Format (IXDTF) in RFC 9557.
This format extends the timestamp representation to include additional information such as an associated time zone name. The inclusion of time zone names is particularly useful for applications that need to account for events like daylight saving time transitions. Furthermore, IXDTF maintains compatibility with pre-existing syntax for attaching time zone names to timestamps, providing a standardized and flexible approach to timestamp representation on the internet. Example: 1996-12-19T16:39:57-08:00[America/Los_Angeles] See also Astronomical year numbering Date and time representation by country List of date formats by country Chronometry ISO 8601 and computing differences between dates on Wikiversity External links ISO's catalog entry for ISO 8601:2004 Preview of ISO 8601-1:2019 from ISO Preview of ISO 8601-2:2019 from ISO The 2016 prototype of ISO 8601-1 (ISO/TC 154 N) (archived) The 2016 prototype of ISO 8601-2 (ISO/TC 154 N) (archived) Use international date format (ISO) – Quality Web Tips The World Wide Web Consortium (W3C) ISO 8601 summary by Dr. Markus Kuhn from Cambridge University The Mathematics of the ISO 8601 Calendar from the website of R.H. van Gent of Utrecht University W3C "NOTE": Date and Time Formats, specifying a profile of ISO 8601:1988 RFC 3339 vs ISO 8601 — Venn diagram illustrating the difference between the two standards. Implementation overview ISO 8601 Implementation Around The World (1998) Calendaring standards Date and time representation Specific calendars Time measurement systems
ISO 8601
[ "Physics" ]
6,972
[ "Physical quantities", "Time measurement systems", "Time", "Date and time representation", "Spacetime" ]
15,029
https://en.wikipedia.org/wiki/Industry%20Standard%20Architecture
Industry Standard Architecture (ISA) is the 16-bit internal bus of IBM PC/AT and similar computers based on the Intel 80286 and its immediate successors during the 1980s. The bus was (largely) backward compatible with the 8-bit bus of the 8088-based IBM PC, including the IBM PC/XT as well as IBM PC compatibles. Originally referred to as the PC bus (8-bit) or AT bus (16-bit), it was also termed I/O Channel by IBM. The ISA term was coined as a retronym by IBM PC clone manufacturers in the late 1980s or early 1990s as a reaction to IBM attempts to replace the AT bus with its new and incompatible Micro Channel architecture. The 16-bit ISA bus was also used with 32-bit processors for several years. An attempt to extend it to 32 bits, called Extended Industry Standard Architecture (EISA), was not very successful, however. Later buses such as VESA Local Bus and PCI were used instead, often along with ISA slots on the same mainboard. Derivatives of the AT bus structure were and still are used in ATA/IDE, the PCMCIA standard, CompactFlash, the PC/104 bus, and internally within Super I/O chips. Even though ISA disappeared from consumer desktops many years ago, it is still used in industrial PCs, where certain specialized expansion cards that never transitioned to PCI and PCI Express are used. History The original PC bus was developed by a team led by Mark Dean at IBM as part of the IBM PC project in 1981. It was an 8-bit bus based on the I/O bus of the IBM System/23 Datamaster system - it used the same physical connector, and a similar signal protocol and pinout. A 16-bit version, the IBM AT bus, was introduced with the release of the IBM PC/AT in 1984. The AT bus was a mostly backward-compatible extension of the PC bus—the AT bus connector was a superset of the PC bus connector. In 1988, the 32-bit EISA standard was proposed by the "Gang of Nine" group of PC-compatible manufacturers that included Compaq. Compaq created the term Industry Standard Architecture (ISA) to replace PC compatible. In the process, they retroactively renamed the AT bus to ISA to avoid infringing IBM's trademark on its PC and PC/AT systems (and to avoid giving their major competitor, IBM, free advertisement). IBM designed the 8-bit version as a buffered interface to the motherboard buses of the Intel 8088 (16/8 bit) CPU in the IBM PC and PC/XT, augmented with prioritized interrupts and DMA channels. The 16-bit version was an upgrade for the motherboard buses of the Intel 80286 CPU (and expanded interrupt and DMA facilities) used in the IBM AT, with improved support for bus mastering. The ISA bus was therefore synchronous with the CPU clock until sophisticated buffering methods were implemented by chipsets to interface ISA to much faster CPUs. ISA was designed to connect peripheral cards to the motherboard and allows for bus mastering. Only the first 16 MB of main memory is addressable. The original 8-bit bus ran from the 4.77 MHz clock of the 8088 CPU in the IBM PC and PC/XT. The original 16-bit bus ran from the CPU clock of the 80286 in IBM PC/AT computers, which was 6 MHz in the first models and 8 MHz in later models. The IBM RT PC also used the 16-bit bus. ISA was also used in some non-IBM compatible machines such as Motorola 68k-based Apollo (68020) and Amiga 3000 (68030) workstations, the short-lived AT&T Hobbit and the later PowerPC-based BeBox. Companies like Dell improved the AT bus's performance but in 1987, IBM replaced the AT bus with its proprietary Micro Channel Architecture (MCA). 
MCA overcame many of the limitations then apparent in ISA but was also an effort by IBM to regain control of the PC architecture and the PC market. MCA was far more advanced than ISA and had many features that would later appear in PCI. However, MCA was also a closed standard whereas IBM had released full specifications and circuit schematics for ISA. Computer manufacturers responded to MCA by developing the Extended Industry Standard Architecture (EISA) and the later VESA Local Bus (VLB). VLB used some electronic parts originally intended for MCA because component manufacturers were already equipped to manufacture them. Both EISA and VLB were backward-compatible expansions of the AT (ISA) bus. Users of ISA-based machines had to know special information about the hardware they were adding to the system. While a handful of devices were essentially plug-n-play, this was rare. Users frequently had to configure parameters when adding a new device, such as the IRQ line, I/O address, or DMA channel. MCA had done away with this complication and PCI actually incorporated many of the ideas first explored with MCA, though it was more directly descended from EISA. This trouble with configuration eventually led to the creation of ISA PnP, a plug-n-play system that used a combination of modifications to hardware, the system BIOS, and operating system software to automatically manage resource allocations. In reality, ISA PnP could be troublesome and did not become well-supported until the architecture was in its final days. A PnP ISA, EISA or VLB device may have a 5-byte EISA ID (3-byte manufacturer ID + 2-byte hex number) to identify the device. For example, CTL0044 corresponds to Creative Sound Blaster 16 / 32 PnP. PCI slots were the first physically incompatible expansion ports to directly squeeze ISA off the motherboard. At first, motherboards were largely ISA, including a few PCI slots. By the mid-1990s, the two slot types were roughly balanced, and ISA slots soon were in the minority of consumer systems. Microsoft's PC-99 specification recommended that ISA slots be removed entirely, though the system architecture still required ISA to be present in some vestigial way internally to handle the floppy drive, serial ports, etc., which was why the software compatible LPC bus was created. ISA slots remained for a few more years and towards the turn of the century it was common to see systems with an Accelerated Graphics Port (AGP) sitting near the central processing unit, an array of PCI slots, and one or two ISA slots near the end. In late 2008, even floppy disk drives and serial ports were disappearing, and the extinction of vestigial ISA (by then the LPC bus) from chipsets was on the horizon. PCI slots are rotated compared to their ISA counterparts—PCI cards were essentially inserted upside-down, allowing ISA and PCI connectors to squeeze together on the motherboard. Only one of the two connectors can be used in each slot at a time, but this allowed for greater flexibility. The AT Attachment (ATA) hard disk interface is directly descended from the 16-bit ISA of the PC/AT. ATA has its origins in the IBM Personal Computer Fixed Disk and Diskette Adapter, the standard dual-function floppy disk controller and hard disk controller card for the IBM PC AT; the fixed disk controller on this card implemented the register set and the basic command set which became the basis of the ATA interface (and which differed greatly from the interface of IBM's fixed disk controller card for the PC XT). 
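Returning briefly to the PnP identifiers mentioned above: the textual form such as "CTL0044" is simply a three-letter vendor code followed by a four-digit hexadecimal product number, which a few lines of Python can pick apart (split_pnp_id is a hypothetical helper written for this illustration; the compressed binary encoding used on the card itself is not covered here):

```python
def split_pnp_id(pnp_id: str) -> tuple[str, int]:
    """Split a textual PnP/EISA ID like 'CTL0044' into vendor code and product number."""
    vendor, product_hex = pnp_id[:3], pnp_id[3:]
    if not (vendor.isalpha() and len(product_hex) == 4):
        raise ValueError(f"not a textual PnP ID: {pnp_id!r}")
    return vendor.upper(), int(product_hex, 16)

print(split_pnp_id("CTL0044"))  # ('CTL', 68) -> Creative Sound Blaster 16 / 32 PnP
```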
Direct precursors to ATA were third-party ISA hardcards that integrated a hard disk drive (HDD) and a hard disk controller (HDC) onto one card. This was at best awkward and at worst damaging to the motherboard, as ISA slots were not designed to support such heavy devices as HDDs. The next generation of Integrated Drive Electronics drives moved both the drive and controller to a drive bay and used a ribbon cable and a very simple interface board to connect it to an ISA slot. ATA is basically a standardization of this arrangement plus a uniform command structure for software to interface with the HDC within the drive. ATA has since been separated from the ISA bus and connected directly to the local bus, usually by integration into the chipset, for much higher clock rates and data throughput than ISA could support. ATA has clear characteristics of 16-bit ISA, such as a 16-bit transfer size, signal timing in the PIO modes and the interrupt and DMA mechanisms. ISA bus architecture The PC/XT-bus is an eight-bit ISA bus used by Intel 8086 and Intel 8088 systems in the IBM PC and IBM PC XT in the 1980s. Among its 62 pins were demultiplexed and electrically buffered versions of the 8 data and 20 address lines of the 8088 processor, along with power lines, clocks, read/write strobes, interrupt lines, etc. Power lines included −5 V and ±12 V in order to directly support pMOS and enhancement mode nMOS circuits such as dynamic RAMs among other things. The XT bus architecture uses a single Intel 8259 PIC, giving eight vectorized and prioritized interrupt lines. It has four DMA channels originally provided by the Intel 8237. Three of the DMA channels are brought out to the XT bus expansion slots; of these, 2 are normally already allocated to machine functions (diskette drive and hard disk controller): The PC/AT-bus, a 16-bit (or 80286-) version of the PC/XT bus, was introduced with the IBM PC/AT. This bus was officially termed I/O Channel by IBM. It extends the XT-bus by adding a second shorter edge connector in-line with the eight-bit XT-bus connector, which is unchanged, retaining compatibility with most 8-bit cards. The second connector adds four additional address lines for a total of 24, and 8 additional data lines for a total of 16. It also adds new interrupt lines connected to a second 8259 PIC (connected to one of the lines of the first) and 4 × 16-bit DMA channels, as well as control lines to select 8- or 16-bit transfers. The 16-bit AT bus slot originally used two standard edge connector sockets in early IBM PC/AT machines. However, with the popularity of the AT architecture and the 16-bit ISA bus, manufacturers introduced specialized 98-pin connectors that integrated the two sockets into one unit. These can be found in almost every AT-class PC manufactured after the mid-1980s. The ISA slot connector is typically black (distinguishing it from the brown EISA connectors and white PCI connectors). Number of devices Motherboard devices have dedicated IRQs (not present in the slots). 16-bit devices can use either PC-bus or PC/AT-bus IRQs. It is therefore possible to connect up to 6 devices that use one 8-bit IRQ each and up to 5 devices that use one 16-bit IRQ each. At the same time, up to 4 devices may use one 8-bit DMA channel each, while up to 3 devices can use one 16-bit DMA channel each. 
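To make the address-line counts above concrete, the reachable address space follows directly from the number of lines (a throwaway Python calculation, purely illustrative):

```python
# 20 address lines on the PC/XT bus versus 24 on the PC/AT (ISA) bus.
for bus, address_lines in (("PC/XT", 20), ("PC/AT (ISA)", 24)):
    addressable = 2 ** address_lines
    print(f"{bus}: {address_lines} lines -> {addressable // 2**20} MiB addressable")
# PC/XT: 20 lines -> 1 MiB addressable
# PC/AT (ISA): 24 lines -> 16 MiB addressable
```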
Varying bus speeds Originally, the bus clock was synchronous with the CPU clock, resulting in varying bus clock frequencies among the many different IBM clones on the market (sometimes as high as 16 or 20 MHz), leading to software or electrical timing problems for certain ISA cards at bus speeds they were not designed for. Later motherboards or integrated chipsets used a separate clock generator, or a clock divider which either fixed the ISA bus frequency at 4, 6, or 8 MHz or allowed the user to adjust the frequency via the BIOS setup. When used at a higher bus frequency, some ISA cards (certain Hercules-compatible video cards, for instance), could show significant performance improvements. 8/16-bit incompatibilities Memory address decoding for the selection of 8 or 16-bit transfer mode was limited to 128 KB sections, leading to problems when mixing 8- and 16-bit cards as they could not co-exist in the same 128 KB area. This is because the MEMCS16 line is required to be set based on the value of LA17-23 only. Past and current use ISA is still used today for specialized industrial purposes. In 2008, IEI Technologies released a modern motherboard for Intel Core 2 Duo processors which, in addition to other special I/O features, is equipped with two ISA slots. It was marketed to industrial and military users who had invested in expensive specialized ISA bus adaptors, which were not available in PCI bus versions. Similarly, ADEK Industrial Computers released a modern motherboard in early 2013 for Intel Core i3/i5/i7 processors, which contains one (non-DMA) ISA slot. Also, MSI released a modern motherboard with one ISA slot in 2020. The PC/104 bus, used in industrial and embedded applications, is a derivative of the ISA bus, utilizing the same signal lines with different connectors. The LPC bus has replaced the ISA bus as the connection to the legacy I/O devices on current motherboards; while physically quite different, LPC looks just like ISA to software, so the peculiarities of ISA such as the 16 MiB DMA limit (which corresponds to the full address space of the Intel 80286 CPU used in the original IBM AT) are likely to stick around for a while. ATA As explained in the History section, ISA was the basis for development of the ATA interface, used for ATA (a.k.a. IDE) hard disks. Physically, ATA is essentially a simple subset of ISA, with 16 data bits, support for exactly one IRQ and one DMA channel, and 3 address bits. To this ISA subset, ATA adds two IDE address select ("chip select") lines (i.e. address decodes, effectively equivalent to address bits) and a few unique signal lines specific to ATA/IDE hard disks (such as the Cable Select/Spindle Sync. line.) In addition to the physical interface channel, ATA goes beyond and far outside the scope of ISA by also specifying a set of physical device registers to be implemented on every ATA (IDE) drive and a full set of protocols and device commands for controlling fixed disk drives using these registers. The ATA device registers are accessed using the address bits and address select signals in the ATA physical interface channel, and all operations of ATA hard disks are performed using the ATA-specified protocols through the ATA command set. 
The earliest versions of the ATA standard featured a few simple protocols and a basic command set comparable to the command sets of MFM and RLL controllers (which preceded ATA controllers), but the latest ATA standards have much more complex protocols and instruction sets that include optional commands and protocols providing such advanced optional-use features as sizable hidden system storage areas, password security locking, and programmable geometry translation. In the mid-1990s, the ATA host controller (usually integrated into the chipset) was moved to PCI form. A further deviation between ISA and ATA is that while the ISA bus remained locked into a single standard clock rate (for backward hardware compatibility), the ATA interface offered many different speed modes, could select among them to match the maximum speed supported by the attached drives, and kept adding faster speeds with later versions of the ATA standard, culminating in the modes defined by ATA-6, the latest version at the time. In most forms, ATA ran much faster than ISA, provided it was connected directly to a local bus (e.g. southbridge-integrated IDE interfaces) faster than the ISA bus. XT-IDE Before the 16-bit ATA/IDE interface, there was an 8-bit XT-IDE (also known as XTA) interface for hard disks. It was not nearly as popular as ATA has become, and XT-IDE hardware is now fairly hard to find. Some XT-IDE adapters were available as 8-bit ISA cards, and XTA sockets were also present on the motherboards of Amstrad's later XT clones as well as a short-lived line of Philips units. The XTA pinout was very similar to ATA, but only eight data lines and two address lines were used, and the physical device registers had completely different meanings. A few hard drives (such as the Seagate ST351A/X) could support either type of interface, selected with a jumper. Many later AT (and AT successor) motherboards had no integrated hard drive interface but relied on a separate hard drive interface plugged into an ISA/EISA/VLB slot. There were even a few 80486-based units shipped with MFM/RLL interfaces and drives instead of the increasingly common AT-IDE. Commodore built the XT-IDE-based peripheral hard drive and memory expansion unit A590 for their Amiga 500 and 500+ computers that also supported a SCSI drive. Later models – the A600, A1200, and the Amiga 4000 series – use AT-IDE drives. PCMCIA The PCMCIA specification can be seen as a superset of ATA. The standard for PCMCIA hard disk interfaces, which included PCMCIA flash drives, allows for the mutual configuration of the port and the drive in an ATA mode. As a de facto extension, most PCMCIA flash drives additionally allow for a simple ATA mode that is enabled by pulling a single pin low, so that PCMCIA hardware and firmware are unnecessary to use them as an ATA drive connected to an ATA port. PCMCIA flash drive to ATA adapters are thus simple and inexpensive but are not guaranteed to work with any and every standard PCMCIA flash drive. Further, such adapters cannot be used as generic PCMCIA ports, as the PCMCIA interface is much more complex than ATA. Emulation by embedded chips Although most modern computers do not have physical ISA buses, almost all PCs (IA-32 and x86-64) have ISA buses allocated in physical address space. Some southbridges and some CPUs themselves provide services such as temperature monitoring and voltage readings through ISA buses as ISA devices. Standardization IEEE started a standardization of the ISA bus in 1985, called the P996 specification.
However, despite books being published on the P996 specification, it never officially progressed past draft status. Modern ISA cards There still is an existing user base with old computers, so some ISA cards are still manufactured, e.g. with USB ports or complete single-board computers based on modern processors, USB 3.0, and SATA. See also PC/104 - Embedded variant of ISA Low Pin Count (LPC) Extended Industry Standard Architecture (EISA) Micro Channel architecture (MCA) VESA Local Bus (VLB) Peripheral Component Interconnect (PCI) Accelerated Graphics Port (AGP) PCI-X PCI Express (PCI-E or PCIe) List of computer bus interfaces Amiga Zorro II NuBus Switched fabric List of device bandwidths CompactPCI PC card Universal Serial Bus (USB) Legacy port Backplane References Further reading Intel ISA Bus Specification and Application Notes - Rev 2.01; Intel; 73 pages; 1989. External links Computer-related introductions in 1981 Computer buses Motherboard expansion slot X86 IBM personal computers IBM PC compatibles Legacy hardware Computer hardware standards
Industry Standard Architecture
[ "Technology" ]
4,039
[ "Computer standards", "Computer hardware standards" ]
15,036
https://en.wikipedia.org/wiki/Information%20security
Information security is the practice of protecting information by mitigating information risks. It is part of information risk management. It typically involves preventing or reducing the probability of unauthorized or inappropriate access to data or the unlawful use, disclosure, disruption, deletion, corruption, modification, inspection, recording, or devaluation of information. It also involves actions intended to reduce the adverse impacts of such incidents. Protected information may take any form, e.g., electronic or physical, tangible (e.g., paperwork), or intangible (e.g., knowledge). Information security's primary focus is the balanced protection of data confidentiality, integrity, and availability (also known as the 'CIA' triad) while maintaining a focus on efficient policy implementation, all without hampering organization productivity. This is largely achieved through a structured risk management process. To standardize this discipline, academics and professionals collaborate to offer guidance, policies, and industry standards on passwords, antivirus software, firewalls, encryption software, legal liability, security awareness and training, and so forth. This standardization may be further driven by a wide variety of laws and regulations that affect how data is accessed, processed, stored, transferred, and destroyed. While paper-based business operations are still prevalent, requiring their own set of information security practices, enterprise digital initiatives are increasingly being emphasized, with information assurance now typically being dealt with by information technology (IT) security specialists. These specialists apply information security to technology (most often some form of computer system). IT security specialists are almost always found in any major enterprise/establishment due to the nature and value of the data within larger businesses. They are responsible for keeping all of the technology within the company secure from malicious attacks that often attempt to acquire critical private information or gain control of the internal systems. There are many specialist roles in Information Security including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning, electronic record discovery, and digital forensics. Standards Information security standards are techniques generally outlined in published materials that attempt to protect the information of a user or organization. This environment includes users themselves, networks, devices, all software, processes, information in storage or transit, applications, services, and systems that can be connected directly or indirectly to networks. The principal objective is to reduce the risks, including preventing or mitigating attacks. These published materials consist of tools, policies, security concepts, security safeguards, guidelines, risk management approaches, actions, training, best practices, assurance and technologies. Common information security standards include ISO/IEC 27001 and the NIST Cybersecurity Framework. Threats Information security threats come in many different forms. Some of the most common threats today are software attacks, theft of intellectual property, theft of identity, theft of equipment or information, sabotage, and information extortion. Viruses, worms, phishing attacks, and Trojan horses are a few common examples of software attacks. 
The theft of intellectual property has also been an extensive issue for many businesses. Identity theft is the attempt to act as someone else usually to obtain that person's personal information or to take advantage of their access to vital information through social engineering. Sabotage usually consists of the destruction of an organization's website in an attempt to cause loss of confidence on the part of its customers. Information extortion consists of theft of a company's property or information as an attempt to receive a payment in exchange for returning the information or property back to its owner, as with ransomware. One of the most functional precautions against these attacks is to conduct periodical user awareness. Governments, military, corporations, financial institutions, hospitals, non-profit organisations, and private businesses amass a great deal of confidential information about their employees, customers, products, research, and financial status. Should confidential information about a business's customers or finances or new product line fall into the hands of a competitor or hacker, a business and its customers could suffer widespread, irreparable financial loss, as well as damage to the company's reputation. From a business perspective, information security must be balanced against cost; the Gordon-Loeb Model provides a mathematical economic approach for addressing this concern. For the individual, information security has a significant effect on privacy, which is viewed very differently in various cultures. History Since the early days of communication, diplomats and military commanders understood that it was necessary to provide some mechanism to protect the confidentiality of correspondence and to have some means of detecting tampering. Julius Caesar is credited with the invention of the Caesar cipher c. 50 B.C., which was created in order to prevent his secret messages from being read should a message fall into the wrong hands. However, for the most part protection was achieved through the application of procedural handling controls. Sensitive information was marked up to indicate that it should be protected and transported by trusted persons, guarded and stored in a secure environment or strong box. As postal services expanded, governments created official organizations to intercept, decipher, read, and reseal letters (e.g., the U.K.'s Secret Office, founded in 1653). In the mid-nineteenth century more complex classification systems were developed to allow governments to manage their information according to the degree of sensitivity. For example, the British Government codified this, to some extent, with the publication of the Official Secrets Act in 1889. Section 1 of the law concerned espionage and unlawful disclosures of information, while Section 2 dealt with breaches of official trust. A public interest defense was soon added to defend disclosures in the interest of the state. A similar law was passed in India in 1889, The Indian Official Secrets Act, which was associated with the British colonial era and used to crack down on newspapers that opposed the Raj's policies. A newer version was passed in 1923 that extended to all matters of confidential or secret information for governance. By the time of the First World War, multi-tier classification systems were used to communicate information to and from various fronts, which encouraged greater use of code making and breaking sections in diplomatic and military headquarters. 
Encoding became more sophisticated between the wars as machines were employed to scramble and unscramble information. The establishment of computer security inaugurated the modern history of information security; the need for it became apparent during World War II. The volume of information shared by the Allied countries during the Second World War necessitated formal alignment of classification systems and procedural controls. An arcane range of markings evolved to indicate who could handle documents (usually officers rather than enlisted troops) and where they should be stored as increasingly complex safes and storage facilities were developed. The Enigma Machine, which was employed by the Germans to encrypt wartime communications and whose messages were successfully decrypted by Alan Turing, can be regarded as a striking example of creating and using secured information. Procedures evolved to ensure documents were destroyed properly, and it was the failure to follow these procedures which led to some of the greatest intelligence coups of the war (e.g., the capture of U-570). Various mainframe computers were connected online during the Cold War to complete more sophisticated tasks, in a communication process easier than mailing magnetic tapes back and forth between computer centers. As such, the Advanced Research Projects Agency (ARPA), of the United States Department of Defense, started researching the feasibility of a networked system of communication to trade information within the United States Armed Forces. In 1968, the ARPANET project was formulated by Larry Roberts, which would later evolve into what is known as the internet. In 1973, important elements of ARPANET security were found by internet pioneer Robert Metcalfe to have many flaws, such as the "vulnerability of password structure and formats; lack of safety procedures for dial-up connections; and nonexistent user identification and authorizations", aside from the lack of controls and safeguards to keep data safe from unauthorized access. Hackers had effortless access to ARPANET, as phone numbers were known by the public. Due to these problems, coupled with the constant violation of computer security, as well as the exponential increase in the number of hosts and users of the system, "network security" was often alluded to as "network insecurity". The end of the twentieth century and the early years of the twenty-first century saw rapid advancements in telecommunications, computing hardware and software, and data encryption. The availability of smaller, more powerful, and less expensive computing equipment put electronic data processing within the reach of small business and home users. The establishment of Transmission Control Protocol/Internet Protocol (TCP/IP) in the early 1980s enabled different types of computers to communicate. These computers quickly became interconnected through the internet. The rapid growth and widespread use of electronic data processing and electronic business conducted through the internet, along with numerous occurrences of international terrorism, fueled the need for better methods of protecting the computers and the information they store, process, and transmit. The academic disciplines of computer security and information assurance emerged along with numerous professional organizations, all sharing the common goals of ensuring the security and reliability of information systems. Security Goals CIA triad The "CIA triad" of confidentiality, integrity, and availability is at the heart of information security.
The concept was introduced in the Anderson Report in 1972 and later repeated in The Protection of Information in Computer Systems. The abbreviation was coined by Steve Lipner around 1986. Debate continues about whether or not this triad is sufficient to address rapidly changing technology and business requirements, with recommendations to consider expanding on the intersections between availability and confidentiality, as well as the relationship between security and privacy. Other principles such as "accountability" have sometimes been proposed; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts. Confidentiality In information security, confidentiality "is the property, that information is not made available or disclosed to unauthorized individuals, entities, or processes." While similar to "privacy," the two words are not interchangeable. Rather, confidentiality is a component of privacy that is implemented to protect data from unauthorized viewers. Examples of confidentiality of electronic data being compromised include laptop theft, password theft, or sensitive emails being sent to the incorrect individuals. Integrity In IT security, data integrity means maintaining and assuring the accuracy and completeness of data over its entire lifecycle. This means that data cannot be modified in an unauthorized or undetected manner. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing. Information security systems typically incorporate controls to ensure their own integrity, in particular protecting the kernel or core functions against both deliberate and accidental threats. Multi-purpose and multi-user computer systems aim to compartmentalize the data and processing such that no user or process can adversely impact another; the controls may not succeed, however, as we see in incidents such as malware infections, hacks, data theft, fraud, and privacy breaches. More broadly, integrity is an information security principle that involves human/social, process, and commercial integrity, as well as data integrity. As such it touches on aspects such as credibility, consistency, truthfulness, completeness, accuracy, timeliness, and assurance. Availability For any information system to serve its purpose, the information must be available when it is needed. This means the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing denial-of-service attacks, such as a flood of incoming messages to the target system, essentially forcing it to shut down. In the realm of information security, availability can often be viewed as one of the most important parts of a successful information security program. Ultimately, end-users need to be able to perform job functions; by ensuring availability an organization is able to perform to the standards that its stakeholders expect. This can involve topics such as proxy configurations, outside web access, the ability to access shared drives, and the ability to send emails.
Executives oftentimes do not understand the technical side of information security and look at availability as an easy fix, but this often requires collaboration from many different organizational teams, such as network operations, development operations, incident response, and policy/change management. A successful information security team involves many different key roles to mesh and align for the "CIA" triad to be provided effectively. Additional security goals In addition to the classic CIA triad of security goals, some organisations may want to include security goals like authenticity, accountability, non-repudiation, and reliability. Non-repudiation In law, non-repudiation implies one's intention to fulfill their obligations to a contract. It also implies that one party of a transaction cannot deny having received a transaction, nor can the other party deny having sent a transaction. It is important to note that while technology such as cryptographic systems can assist in non-repudiation efforts, the concept is at its core a legal concept transcending the realm of technology. It is not, for instance, sufficient to show that the message matches a digital signature signed with the sender's private key, and thus only the sender could have sent the message, and nobody else could have altered it in transit (data integrity). The alleged sender could in return demonstrate that the digital signature algorithm is vulnerable or flawed, or allege or prove that his signing key has been compromised. The fault for these violations may or may not lie with the sender, and such assertions may or may not relieve the sender of liability, but the assertion would invalidate the claim that the signature necessarily proves authenticity and integrity. As such, the sender may repudiate the message (because authenticity and integrity are pre-requisites for non-repudiation). Other Models In 1992 and revised in 2002, the OECD's Guidelines for the Security of Information Systems and Networks proposed the nine generally accepted principles: awareness, responsibility, response, ethics, democracy, risk assessment, security design and implementation, security management, and reassessment. Building upon those, in 2004 the NIST's Engineering Principles for Information Technology Security proposed 33 principles. In 1998, Donn Parker proposed an alternative model for the classic "CIA" triad that he called the six atomic elements of information. The elements are confidentiality, possession, integrity, authenticity, availability, and utility. The merits of the Parkerian Hexad are a subject of debate amongst security professionals. In 2011, The Open Group published the information security management standard O-ISM3. This standard proposed an operational definition of the key concepts of security, with elements called "security objectives", related to access control (9), availability (3), data quality (1), compliance, and technical (4). Risk management Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or act of nature) that has the potential to cause harm. The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact. 
In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). The Certified Information Systems Auditor (CISA) Review Manual 2006 defines risk management as "the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization." There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing, iterative process. It must be repeated indefinitely. The business environment is constantly changing and new threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected. Furthermore, these processes have limitations as security breaches are generally rare and emerge in a specific context which may not be easily duplicated. Thus, any process and countermeasure should itself be evaluated for vulnerabilities. It is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called "residual risk". A risk assessment is carried out by a team of people who have knowledge of specific areas of the business. Membership of the team may vary over time as different parts of the business are assessed. The assessment may use a subjective qualitative analysis based on informed opinion, or where reliable dollar figures and historical information are available, the analysis may use quantitative analysis. Research has shown that the most vulnerable point in most information systems is the human user, operator, designer, or other human. The ISO/IEC 27002:2005 Code of practice for information security management recommends the following be examined during a risk assessment: security policy, organization of information security, asset management, human resources security, physical and environmental security, communications and operations management, access control, information systems acquisition, development, and maintenance, information security incident management, business continuity management, and regulatory compliance. In broad terms, the risk management process consists of: Identification of assets and estimating their value. Include: people, buildings, hardware, software, data (electronic, print, other), supplies. Conduct a threat assessment. Include: Acts of nature, acts of war, accidents, malicious acts originating from inside or outside the organization. Conduct a vulnerability assessment, and for each vulnerability, calculate the probability that it will be exploited. Evaluate policies, procedures, standards, training, physical security, quality control, technical security. Calculate the impact that each threat would have on each asset. Use qualitative analysis or quantitative analysis (a simple quantitative sketch follows below). Identify, select and implement appropriate controls. Provide a proportional response. Consider productivity, cost effectiveness, and value of the asset. Evaluate the effectiveness of the control measures. Ensure the controls provide the required cost-effective protection without discernible loss of productivity.
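One common quantitative convention, used here purely as an illustration (the figures and even the formula are assumptions for this sketch, not something the text or ISO/IEC 27002 prescribes), multiplies the expected loss per incident by the expected number of incidents per year:

```python
# Hypothetical figures for a single asset/threat pair.
asset_value = 100_000              # value of the informational asset, in dollars
exposure_factor = 0.30             # fraction of the asset's value lost per incident
annual_rate_of_occurrence = 0.5    # expected incidents per year

single_loss_expectancy = asset_value * exposure_factor
annualized_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence
print(f"Expected loss: ${annualized_loss_expectancy:,.0f} per year")  # Expected loss: $15,000 per year
```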
For any given risk, management can choose to accept the risk based upon the relative low value of the asset, the relative low frequency of occurrence, and the relative low impact on the business. Or, leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or outsourcing to another business. The reality of some risks may be disputed. In such cases leadership may choose to deny the risk. Security controls Selecting and implementing proper security controls will initially help an organization bring down risk to acceptable levels. Control selection should follow and should be based on the risk assessment. Controls can vary in nature, but fundamentally they are ways of protecting the confidentiality, integrity or availability of information. ISO/IEC 27001 has defined controls in different areas. Organizations can implement additional controls according to requirement of the organization. ISO/IEC 27002 offers a guideline for organizational information security standards. Defense in depth Defense in depth is a fundamental security philosophy that relies on overlapping security systems designed to maintain protection even if individual components fail. Rather than depending on a single security measure, it combines multiple layers of security controls both in the cloud and at network endpoints. This approach includes combinations like firewalls with intrusion-detection systems, email filtering services with desktop anti-virus, and cloud-based security alongside traditional network defenses. The concept can be implemented through three distinct layers of administrative, logical, and physical controls, or visualized as an onion model with data at the core, surrounded by people, network security, host-based security, and application security layers. The strategy emphasizes that security involves not just technology, but also people and processes working together, with real-time monitoring and response being crucial components. Classification An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information. Not all information is equal and so not all information requires the same degree of protection. This requires information to be assigned a security classification. The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy. The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the required security controls for each classification. Some factors that influence which classification information should be assigned include how much value that information has to the organization, how old the information is and whether or not the information has become obsolete. Laws and other regulatory requirements are also important considerations when classifying information. The Information Systems Audit and Control Association (ISACA) and its Business Model for Information Security also serves as a tool for security professionals to examine security from a systems perspective, creating an environment where security can be managed holistically, allowing actual risks to be addressed. 
The type of information security classification labels selected and used will depend on the nature of the organization, with examples being: In the business sector, labels such as: Public, Sensitive, Private, Confidential. In the government sector, labels such as: Unclassified, Unofficial, Protected, Confidential, Secret, Top Secret, and their non-English equivalents. In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green, Amber, and Red. In the personal sector, one label such as Financial. This includes activities related to managing money, such as online banking. All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification. The classification of a particular information asset that has been assigned should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place and are followed in their right procedures. Access control Access to protected information must be restricted to people who are authorized to access the information. The computer programs, and in many cases the computers that process the information, must also be authorized. This requires that mechanisms be in place to control the access to protected information. The sophistication of the access control mechanisms should be in parity with the value of the information being protected; the more sensitive or valuable the information the stronger the control mechanisms need to be. The foundation on which access control mechanisms are built start with identification and authentication. Access control is generally considered in three steps: identification, authentication, and authorization. Identification Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name is John Doe" they are making a claim of who they are. However, their claim may or may not be true. Before John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be John Doe really is John Doe. Typically the claim is in the form of a username. By entering that username you are claiming "I am the person the username belongs to". Authentication Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe, a claim of identity. The bank teller asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe. If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be. Similarly, by entering the correct password, the user is providing evidence that he/she is the person the username belongs to. There are three different types of information that can be used for authentication: Something you know: things such as a PIN, a password, or your mother's maiden name Something you have: a driver's license or a magnetic swipe card Something you are: biometrics, including palm prints, fingerprints, voice prints, and retina (eye) scans Strong authentication requires providing more than one type of authentication information (two-factor authentication). 
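Time-based one-time passwords, mentioned in the next paragraph, are a common way of adding a second ("something you have") factor on top of a password. The sketch below is an illustrative standard-library implementation of the usual TOTP calculation (RFC 6238 with HMAC-SHA1); the Base32 secret shown is a made-up demonstration value:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                  # moving factor: 30-second steps
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints a 6-digit code that changes every 30 seconds
```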
The username is the most common form of identification on computer systems today and the password is the most common form of authentication. Usernames and passwords have served their purpose, but they are increasingly inadequate. Usernames and passwords are slowly being replaced or supplemented with more sophisticated authentication mechanisms such as time-based one-time password algorithms. Authorization After a person, program or computer has successfully been identified and authenticated, it must then be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change). This is called authorization. Authorization to access information and other computing services begins with administrative policies and procedures. The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies. Different computing systems are equipped with different kinds of access control mechanisms. Some may even offer a choice of different access control mechanisms. The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three approaches. The non-discretionary approach consolidates all access control under a centralized administration. The access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform. The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources. In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource. Examples of common access control mechanisms in use today include role-based access control, available in many advanced database management systems; simple file permissions provided in the UNIX and Windows operating systems; Group Policy Objects provided in Windows network systems; and Kerberos, RADIUS, TACACS, and the simple access lists used in many firewalls and routers. To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held accountable for their actions. The U.S. Treasury's guidelines for systems processing sensitive or proprietary information, for example, state that all failed and successful authentication and access attempts must be logged, and all access to information must leave some type of audit trail. Also, the need-to-know principle needs to be in effect when talking about access control. This principle gives access rights to a person to perform their job functions. This principle is used in the government when dealing with different clearances. Even though two employees in different departments have a top-secret clearance, they must have a need-to-know in order for information to be exchanged. Within the need-to-know principle, network administrators grant the employee the least amount of privilege to prevent employees from accessing more than what they are supposed to. Need-to-know helps to enforce the confidentiality-integrity-availability triad. Need-to-know directly impacts the confidentiality aspect of the triad.
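The sketch below is an illustrative combination of role-based access control with a need-to-know check, roughly in the spirit of the non-discretionary approach described above; the roles, resources, and assignments are hypothetical.

```python
# Hypothetical role-based access control with a need-to-know check.
ROLE_PERMISSIONS = {
    "hr_clerk":   {("personnel_file", "view")},
    "hr_manager": {("personnel_file", "view"), ("personnel_file", "change")},
    "auditor":    {("personnel_file", "view"), ("finance_ledger", "view")},
}

# Need-to-know: even with a sufficient role, access also requires an assigned area.
NEED_TO_KNOW = {"alice": {"personnel_file"}, "bob": {"finance_ledger"}}

def is_authorized(user: str, role: str, resource: str, action: str) -> bool:
    role_ok = (resource, action) in ROLE_PERMISSIONS.get(role, set())
    need_ok = resource in NEED_TO_KNOW.get(user, set())
    return role_ok and need_ok

print(is_authorized("alice", "hr_manager", "personnel_file", "change"))  # True
print(is_authorized("bob", "hr_manager", "personnel_file", "change"))    # False: no need-to-know
```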
Cryptography Information security uses cryptography to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called encryption. Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user who possesses the cryptographic key, through the process of decryption. Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while information is in storage. Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older, less secure applications such as Telnet and File Transfer Protocol (FTP) are slowly being replaced with more secure applications such as Secure Shell (SSH) that use encrypted network communications. Wireless communications can be encrypted using protocols such as WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU‑T G.hn) are secured using AES for encryption and X.1035 for authentication and key exchange. Software applications such as GnuPG or PGP can be used to encrypt data files and email. Cryptography can introduce security problems when it is not implemented correctly. Cryptographic solutions need to be implemented using industry-accepted solutions that have undergone rigorous peer review by independent experts in cryptography. The length and strength of the encryption key are also an important consideration. A key that is weak or too short will produce weak encryption. The keys used for encryption and decryption must be protected with the same degree of rigor as any other confidential information. They must be protected from unauthorized disclosure and destruction, and they must be available when needed. Public key infrastructure (PKI) solutions address many of the problems that surround key management. Process U.S. Federal Sentencing Guidelines now make it possible to hold corporate officers liable for failing to exercise due care and due diligence in the management of their information systems. In the field of information security, Harris offers the following definitions of due care and due diligence: "Due care are steps that are taken to show that a company has taken responsibility for the activities that take place within the corporation and has taken the necessary steps to help protect the company, its resources, and employees." And, [Due diligence are the] "continual activities that make sure the protection mechanisms are continually maintained and operational." Attention should be paid to two important points in these definitions. First, in due care, steps are taken to show; this means that the steps can be verified, measured, or even produce tangible artifacts. Second, in due diligence, there are continual activities; this means that people are actually doing things to monitor and maintain the protection mechanisms, and these activities are ongoing. Organizations have a responsibility to practice duty of care when applying information security. The Duty of Care Risk Analysis Standard (DoCRA) provides principles and practices for evaluating risk. It considers all parties that could be affected by those risks.
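As a minimal illustration of the encryption and decryption process described in the Cryptography section above, the sketch below uses the symmetric Fernet construction from the third-party Python cryptography package; key generation, storage, and rotation, which the text notes must be handled with the same rigor as any other confidential information, are omitted for brevity.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the key must be protected like any other secret
cipher = Fernet(key)

plaintext = b"Quarterly results: draft, not for release"
token = cipher.encrypt(plaintext)  # unusable to anyone without the key
print(token[:20], b"...")

recovered = cipher.decrypt(token)  # only an authorized key holder can do this
assert recovered == plaintext
```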
DoCRA helps evaluate whether safeguards are appropriate for protecting others from harm while presenting a reasonable burden. With increased data breach litigation, companies must balance security controls, compliance, and their mission. Incident response plans Computer security incident management is a specialized form of incident management focused on monitoring, detecting, and responding to security events on computers and networks in a predictable way. Organizations implement this through incident response plans (IRPs) that are activated when security breaches are detected. These plans typically involve an incident response team (IRT) with specialized skills in areas like penetration testing, computer forensics, and network security. Change management Change management is a formal process for directing and controlling alterations to the information processing environment. This includes alterations to desktop computers, the network, servers, and software. The objectives of change management are to reduce the risks posed by changes to the information processing environment and improve the stability and reliability of the processing environment as changes are made. It is not the objective of change management to prevent or hinder necessary changes from being implemented. Any change to the information processing environment introduces an element of risk. Even apparently simple changes can have unexpected effects. One of management's many responsibilities is the management of risk. Change management is a tool for managing the risks introduced by changes to the information processing environment. Part of the change management process ensures that changes are not implemented at inopportune times when they may disrupt critical business processes or interfere with other changes being implemented. Not every change needs to be managed. Some kinds of changes are a part of the everyday routine of information processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing environment. Creating a new user account or deploying a new desktop computer are examples of changes that do not generally require change management. However, relocating user file shares or upgrading the email server poses a much higher level of risk to the processing environment and is not a normal everyday activity. The critical first steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope of the change system. Change management is usually overseen by a change review board composed of representatives from key business areas, security, networking, systems administrators, database administration, application developers, desktop support, and the help desk. The tasks of the change review board can be facilitated with the use of an automated workflow application. The responsibility of the change review board is to ensure the organization's documented change management procedures are followed. The change management process is as follows. Request: Anyone can request a change. The person making the change request may or may not be the same person that performs the analysis or implements the change. When a request for change is received, it may undergo a preliminary review to determine if the requested change is compatible with the organization's business model and practices, and to determine the amount of resources needed to implement the change.
Approve: Management runs the business and controls the allocation of resources; therefore, management must approve requests for changes and assign a priority for every change. Management might choose to reject a change request if the change is not compatible with the business model, industry standards or best practices. Management might also choose to reject a change request if the change requires more resources than can be allocated for the change. Plan: Planning a change involves discovering the scope and impact of the proposed change; analyzing the complexity of the change; allocating resources; and developing, testing, and documenting both implementation and back-out plans. The criteria on which a decision to back out will be made need to be defined. Test: Every change must be tested in a safe test environment, which closely reflects the actual production environment, before the change is applied to the production environment. The back-out plan must also be tested. Schedule: Part of the change review board's responsibility is to assist in the scheduling of changes by reviewing the proposed implementation date for potential conflicts with other scheduled changes or critical business activities. Communicate: Once a change has been scheduled it must be communicated. The communication is to give others the opportunity to remind the change review board about other changes or critical business activities that might have been overlooked when scheduling the change. The communication also serves to make the help desk and users aware that a change is about to occur. Another responsibility of the change review board is to ensure that scheduled changes have been properly communicated to those who will be affected by the change or otherwise have an interest in the change. Implement: At the appointed date and time, the changes must be implemented. Part of the planning process was to develop an implementation plan, a testing plan, and a back-out plan. If the implementation of the change fails, the post-implementation testing fails, or other "drop dead" criteria have been met, the back-out plan should be implemented. Document: All changes must be documented. The documentation includes the initial request for change, its approval, the priority assigned to it, the implementation, testing and back-out plans, the results of the change review board critique, the date/time the change was implemented, who implemented it, and whether the change was implemented successfully, failed or postponed. Post-change review: The change review board should hold a post-implementation review of changes. It is particularly important to review failed and backed-out changes. The review board should try to understand the problems that were encountered, and look for areas for improvement. Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created when changes are made to the information processing environment. Good change management procedures improve the overall quality and success of changes as they are implemented. This is accomplished through planning, peer review, documentation, and communication. ISO/IEC 20000, The Visible OPS Handbook: Implementing ITIL in 4 Practical and Auditable Steps, and ITIL all provide valuable guidance on implementing an efficient and effective change management program for information security.
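A schematic sketch of the request-to-review lifecycle just described, modeled as an ordered sequence of states with an explicit back-out path; the states mirror the steps above, while the transition rules are simplified assumptions rather than a prescribed workflow.

```python
from enum import Enum, auto

class ChangeState(Enum):
    REQUESTED = auto()
    APPROVED = auto()
    PLANNED = auto()
    TESTED = auto()
    SCHEDULED = auto()
    COMMUNICATED = auto()
    IMPLEMENTED = auto()
    DOCUMENTED = auto()
    REVIEWED = auto()
    BACKED_OUT = auto()  # reached if "drop dead" criteria are met during implementation

# The normal forward path through the process described above.
FORWARD = [
    ChangeState.REQUESTED, ChangeState.APPROVED, ChangeState.PLANNED,
    ChangeState.TESTED, ChangeState.SCHEDULED, ChangeState.COMMUNICATED,
    ChangeState.IMPLEMENTED, ChangeState.DOCUMENTED, ChangeState.REVIEWED,
]

def next_state(current: ChangeState, implementation_failed: bool = False) -> ChangeState:
    """Advance a change request one step, invoking the back-out path on failure."""
    if current is ChangeState.IMPLEMENTED and implementation_failed:
        return ChangeState.BACKED_OUT
    index = FORWARD.index(current)
    return FORWARD[min(index + 1, len(FORWARD) - 1)]

state = ChangeState.REQUESTED
while state is not ChangeState.REVIEWED:
    state = next_state(state)
print(state.name)  # REVIEWED
```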
Business continuity Business continuity management (BCM) concerns arrangements aiming to protect an organization's critical business functions from interruption due to incidents, or at least minimize the effects. BCM is essential to any organization to keep technology and business in line with current threats to the continuation of business as usual. The BCM should be included in an organizations risk analysis plan to ensure that all of the necessary business functions have what they need to keep going in the event of any type of threat to any business function. It encompasses: Analysis of requirements, e.g., identifying critical business functions, dependencies and potential failure points, potential threats and hence incidents or risks of concern to the organization; Specification, e.g., maximum tolerable outage periods; recovery point objectives (maximum acceptable periods of data loss); Architecture and design, e.g., an appropriate combination of approaches including resilience (e.g. engineering IT systems and processes for high availability, avoiding or preventing situations that might interrupt the business), incident and emergency management (e.g., evacuating premises, calling the emergency services, triage/situation assessment and invoking recovery plans), recovery (e.g., rebuilding) and contingency management (generic capabilities to deal positively with whatever occurs using whatever resources are available); Implementation, e.g., configuring and scheduling backups, data transfers, etc., duplicating and strengthening critical elements; contracting with service and equipment suppliers; Testing, e.g., business continuity exercises of various types, costs and assurance levels; Management, e.g., defining strategies, setting objectives and goals; planning and directing the work; allocating funds, people and other resources; prioritization relative to other activities; team building, leadership, control, motivation and coordination with other business functions and activities (e.g., IT, facilities, human resources, risk management, information risk and security, operations); monitoring the situation, checking and updating the arrangements when things change; maturing the approach through continuous improvement, learning and appropriate investment; Assurance, e.g., testing against specified requirements; measuring, analyzing, and reporting key parameters; conducting additional tests, reviews and audits for greater confidence that the arrangements will go to plan if invoked. Whereas BCM takes a broad approach to minimizing disaster-related risks by reducing both the probability and the severity of incidents, a disaster recovery plan (DRP) focuses specifically on resuming business operations as quickly as possible after a disaster. A disaster recovery plan, invoked soon after a disaster occurs, lays out the steps necessary to recover critical information and communications technology (ICT) infrastructure. Disaster recovery planning includes establishing a planning group, performing risk assessment, establishing priorities, developing recovery strategies, preparing inventories and documentation of the plan, developing verification criteria and procedure, and lastly implementing the plan. Laws and regulations Below is a partial listing of governmental laws and regulations in various parts of the world that have, had, or will have, a significant effect on data processing and information security. 
Important industry sector regulations have also been included when they have a significant impact on information security. The UK Data Protection Act 1998 makes new provisions for the regulation of the processing of information relating to individuals, including the obtaining, holding, use or disclosure of such information. The European Union Data Protection Directive (EUDPD) requires that all E.U. members adopt national regulations to standardize the protection of data privacy for citizens throughout the E.U. The Computer Misuse Act 1990 is an Act of the U.K. Parliament making computer crime (e.g., hacking) a criminal offense. The act has become a model from which several other countries, including Canada and Ireland, have drawn inspiration when subsequently drafting their own information security laws. The E.U.'s Data Retention Directive (annulled) required internet service providers and phone companies to keep data on every electronic message sent and phone call made for between six months and two years. The Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. § 1232g; 34 CFR Part 99) is a U.S. Federal law that protects the privacy of student education records. The law applies to all schools that receive funds under an applicable program of the U.S. Department of Education. Generally, schools must have written permission from the parent or eligible student in order to release any information from a student's education record. The Federal Financial Institutions Examination Council's (FFIEC) security guidelines for auditors specify requirements for online banking security. The Health Insurance Portability and Accountability Act (HIPAA) of 1996 requires the adoption of national standards for electronic health care transactions and national identifiers for providers, health insurance plans, and employers. Additionally, it requires health care providers, insurance providers and employers to safeguard the security and privacy of health data. The Gramm–Leach–Bliley Act of 1999 (GLBA), also known as the Financial Services Modernization Act of 1999, protects the privacy and security of private financial information that financial institutions collect, hold, and process. Section 404 of the Sarbanes–Oxley Act of 2002 (SOX) requires publicly traded companies to assess the effectiveness of their internal controls for financial reporting in annual reports they submit at the end of each fiscal year. Chief information officers are responsible for the security, accuracy, and the reliability of the systems that manage and report the financial data. The act also requires publicly traded companies to engage with independent auditors who must attest to, and report on, the validity of their assessments. The Payment Card Industry Data Security Standard (PCI DSS) establishes comprehensive requirements for enhancing payment account data security. It was developed by the founding payment brands of the PCI Security Standards Council, including American Express, Discover Financial Services, JCB, MasterCard Worldwide, and Visa International, to help facilitate the broad adoption of consistent data security measures on a global basis. The PCI DSS is a multifaceted security standard that includes requirements for security management, policies, procedures, network architecture, software design, and other critical protective measures.
State security breach notification laws (California and many others) require businesses, nonprofits, and state institutions to notify consumers when unencrypted "personal information" may have been compromised, lost, or stolen. The Personal Information Protection and Electronic Documents Act (PIPEDA) of Canada supports and promotes electronic commerce by protecting personal information that is collected, used or disclosed in certain circumstances, by providing for the use of electronic means to communicate or record information or transactions and by amending the Canada Evidence Act, the Statutory Instruments Act and the Statute Revision Act. Greece's Hellenic Authority for Communication Security and Privacy (ADAE) (Law 165/2011) establishes and describes the minimum information security controls that should be deployed by every company which provides electronic communication networks and/or services in Greece in order to protect customers' confidentiality. These include both managerial and technical controls (e.g., log records should be stored for two years). Greece's Hellenic Authority for Communication Security and Privacy (ADAE) (Law 205/2013) concentrates on the protection of the integrity and availability of the services and data offered by Greek telecommunication companies. The law forces these and other related companies to build, deploy, and test appropriate business continuity plans and redundant infrastructures. The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and activities to earn and maintain various industry Information Technology (IT) certifications in an effort to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT industry recognized knowledge, skills and abilities (KSA). Andersson and Reimers (2019) report these certifications range from CompTIA's A+ and Security+ through ISC2's CISSP, etc. Culture Describing more than simply how security aware employees are, information security culture is the ideas, customs, and social behaviors of an organization that impact information security in both positive and negative ways. Cultural concepts can help different segments of the organization work effectively toward information security, or can work against it. The way employees think and feel about security and the actions they take can have a big impact on information security in organizations. Roer & Petric (2017) identify seven core dimensions of information security culture in organizations: Attitudes: employees' feelings and emotions about the various activities that pertain to the organizational security of information. Behaviors: actual or intended activities and risk-taking actions of employees that have direct or indirect impact on information security. Cognition: employees' awareness, verifiable knowledge, and beliefs regarding practices, activities, and self-efficacy related to information security. Communication: ways employees communicate with each other, sense of belonging, support for security issues, and incident reporting. Compliance: adherence to organizational security policies, awareness of the existence of such policies and the ability to recall the substance of such policies.
Norms: perceptions of security-related organizational conduct and practices that are informally deemed either normal or deviant by employees and their peers, e.g. hidden expectations regarding security behaviors and unwritten rules regarding uses of information-communication technologies. Responsibilities: employees' understanding of the roles and responsibilities they have as a critical factor in sustaining or endangering the security of information, and thereby the organization. Andersson and Reimers (2014) found that employees often do not see themselves as part of the organization's information security "effort" and often take actions that ignore organizational information security best interests. Research shows information security culture needs to be improved continuously. In Information Security Culture from Analysis to Change, the authors commented, "It's a never ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation. Pre-evaluation: to identify the awareness of information security within employees and to analyze the current security policy Strategic planning: to come up with a better awareness program, clear targets need to be set; clustering people into groups is helpful to achieve it Operative planning: create a good security culture based on internal communication, management buy-in, security awareness, and training programs Implementation: should feature commitment of management, communication with organizational members, courses for all organizational members, and commitment of the employees Post-evaluation: to better gauge the effectiveness of the prior steps and build on continuous improvement Other definitions Various definitions of information security are suggested below, summarized from different sources: "Preservation of confidentiality, integrity and availability of information. Note: In addition, other properties, such as authenticity, accountability, non-repudiation and reliability can also be involved." (ISO/IEC 27000:2018) "The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability." (CNSS, 2010) "Ensures that only authorized users (confidentiality) have access to accurate and complete information (integrity) when required (availability)." (ISACA, 2008) "Information Security is the process of protecting the intellectual property of an organisation." (Pipkin, 2000) "...information security is a risk management discipline, whose job is to manage the cost of information risk to the business." (McDermott and Geer, 2001) "A well-informed sense of assurance that information risks and controls are in balance." (Anderson, J., 2003) "Information security is the protection of information and minimizes the risk of exposing information to unauthorized parties." (Venter and Eloff, 2003) "Information Security is a multidisciplinary area of study and professional activity which is concerned with the development and implementation of security mechanisms of all available types (technical, organizational, human-oriented and legal) in order to keep information in all its locations (within and outside the organization's perimeter) and, consequently, information systems, where information is created, processed, stored, transmitted and destroyed, free from threats."
Information and information resource security using telecommunication system or devices means protecting information, information systems or books from unauthorized access, damage, theft, or destruction (Kurose and Ross, 2010). See also Backup Capability-based security Data-centric security Enterprise information security architecture Identity-based security Information privacy Information infrastructure Information security indicators Information technology IT risk ITIL security management Kill chain List of computer security certifications Mobile security Network Security Services Privacy engineering Privacy-enhancing technologies Security convergence Security information management Security level management Security of Information Act Security service (telecommunication) Verification and validation Gordon–Loeb model for information security investments References Bibliography Further reading Anderson, K., "IT Security Professionals Must Evolve for Changing Market", SC Magazine, October 12, 2006. Aceituno, V., "On Information Security Paradigms", ISSA Journal, September 2005. Easttom, C., Computer Security Fundamentals (2nd Edition) Pearson Education, 2011. Lambo, T., "ISO/IEC 27001: The future of infosec certification", ISSA Journal, November 2006. Dustin, D., " Awareness of How Your Data is Being Used and What to Do About It", "CDR Blog", May 2017. Dhillon, G., "The intellectual core of Information Systems Security", Journal of Information Systems Security, Vol. 19, No 2. External links DoD IA Policy Chart on the DoD Information Assurance Technology Analysis Center web site. patterns & practices Security Engineering Explained IWS – Information Security Chapter Ross Anderson's book "Security Engineering" Data security Security Crime prevention National security Cryptography Information governance
Information security
[ "Mathematics", "Engineering" ]
10,329
[ "Applied mathematics", "Data security", "Cryptography", "Cybersecurity engineering" ]
5,682,384
https://en.wikipedia.org/wiki/Yannis%20Bakos
Yannis Bakos is a professor at the Leonard N. Stern School of Business at New York University. His primary area of expertise is the economic and business implications of information technology, the Internet, and online media. He is the co-founder (with Chris F. Kemerer) of the Workshop on Information Systems and Economics (WISE), and the co-inventor of Flexplay DVDs. Early life Bakos holds a Ph.D. in Management and an MBA in Finance from the MIT Sloan School of Management. He also received a master's degree in Electrical Engineering and Computer Science and a B.S. in Computer Engineering from MIT's Department of Electrical Engineering and Computer Science. Before coming to NYU, Professor Bakos was on the faculty of the Merage School of Business at the University of California, Irvine and the Sloan School of Management at MIT. Career Bakos' early work showed that the internet would reduce the search costs of buyers and sellers, and that the resulting electronic marketplaces would result in lower prices and more competition among the sellers. He has more recently studied pricing strategies for information. For example, his work with Erik Brynjolfsson showed that Product bundling can be particularly effective for "digital information goods" with very low or zero marginal cost. In other recent work, he has been studying how reputation mechanisms, like the ones used by eBay, offer an alternative to traditional litigation as a way to settle disputes. Bakos is the co-inventor of Flexplay DVDs, which are limited play DVDs that expire a preset period after the package is opened. Expiration is triggered when a special chemical layer in the DVD is exposed to oxygen in the air, and thus does not depend on the electronics of the DVD player. This invention received several U.S. and international patents. Dr. Bakos co-founded Flexplay Technologies, where he was chairman of the board from 2001 until the company was sold to the Convex Group in 2004. Disney released about 100 movies in the U.S. using this technology under the ez-D trade name, and the technology was used in Japan until 2010. Notes See Bundling and Competition on the Internet and Bundling Information Goods: Pricing, Profits and Efficiency by Yannis Bakos and Erik Brynjolfsson. External links Home page MIT Sloan School of Management alumni American economists Information systems researchers New York University Stern School of Business faculty Living people Year of birth missing (living people)
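The bundling result mentioned above can be illustrated with a small, purely hypothetical simulation: when consumers' valuations for many zero-marginal-cost information goods are independent, the per-consumer value of a large bundle concentrates near its mean, so a single bundle price can capture much more of the available surplus than per-good pricing. All numbers below are illustrative assumptions, not figures from Bakos and Brynjolfsson's papers.

```python
import random

random.seed(0)
N_CONSUMERS, N_GOODS = 5_000, 100

# Each consumer values each good independently and uniformly on [0, 1];
# the marginal cost of an information good is taken to be zero.
valuations = [[random.random() for _ in range(N_GOODS)] for _ in range(N_CONSUMERS)]

def best_posted_price_revenue(values):
    """Revenue from the best single posted price, searched on a coarse grid."""
    top = max(values)
    return max(
        p * sum(v >= p for v in values)
        for p in (top * step / 100 for step in range(1, 101))
    )

# Selling goods separately: price one representative good and scale up.
separate = best_posted_price_revenue([row[0] for row in valuations]) * N_GOODS

# Selling everything as a single bundle.
bundled = best_posted_price_revenue([sum(row) for row in valuations])

print(f"separate-sale revenue ~ {separate:,.0f}")
print(f"bundle revenue        ~ {bundled:,.0f}")  # noticeably higher
```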
Yannis Bakos
[ "Technology" ]
499
[ "Information systems", "Information systems researchers" ]
5,682,984
https://en.wikipedia.org/wiki/Donkey%20punch
Donkey punch is the sexual practice of inflicting blunt force trauma to the back of the head or lower back of the receiving partner during anal or vaginal sex as an attempt by the penetrating partner to induce involuntary tightening of internal or external anal sphincter muscles or vaginal passage of the receiving partner. According to Jeffrey Bahr of the Medical College of Wisconsin, there is no reflex in humans that would cause such tensing in response to a blow on the head, although striking a partner on the back of the neck or head could cause severe, even lethal injury. Urban legend Sex columnist Dan Savage has discussed the alleged practice on several occasions. In 2004, Savage referred to the donkey punch as "a sex act that exists only in the imaginations of adolescent boys," adding "no one has ever attempted 'the Pirate,' just as no one has ever performed a Hot Karl, delivered a Donkey Punch, or inserted an Icy Mike. They're all fictions." Responding to an enquiry from Wikipedia editors, he again discussed the donkey punch urban legend in his "Savage Love" column in 2006. He wrote, "attempting a Donkey Punch can lead to ... unpleasant outcomes," including "injury, death or incarceration"; he also pointed out that it "doesn't even work," quoting Jeffrey Bahr, a faculty member at the Medical College of Wisconsin. Jordan Tate, writing in The Contemporary Dictionary of Sexual Euphemisms (2007), commented on the "almost purely theoretical nature" of the term. Pornography The adult film star credited as the first known recipient of a donkey punch is Gia Paloma, who had the act performed on her by Alex Sanders in the 2004 film Gutter Mouths 30. Donkey Punch, a pornographic film premised around the act, was released by JM Productions in 2005. The film consists of four scenes in which the male actors engage in rough sex with their female co-stars, punching them repeatedly in the head and body throughout. In response to her experience on the set, performer Alex Devine allegedly stated "Donkey Punch was the most brutal, depressing, scary scene that I have ever done," and commented that "I actually stopped the scene while it was being filmed because I was in too much pain." The viciousness of the film prompted Peter van Aarle of Cyberspace Adult Video Reviews to forgo covering any further releases from JM, while Zack Parsons of Something Awful (which awarded Donkey Punch a score of -49, where -50 is the worst score possible) wrote that the film was "one of the most morally repugnant pornographic movies I have seen" and "the sort of movie that the government would cite when trying to arrest pornographers and outlaw pornography." Enron scandal "Donkey punch" was one of several slang terms used by Enron traders to refer to their price gouging methods. During investigations into the 2004 Enron scandal over manipulation of the electricity market in California, recordings of Enron traders were uncovered dating from 2000 and 2001. In the recordings, fraudulent accounting schemes were referred to using slang terms, including "Donkey Punch." The 2007 report by the Federal Energy Regulatory Commission was unable to identify the meaning that Enron had attached to the term "Donkey Punch." U.S. senator Maria Cantwell, in a 2004 press release about the Enron hearings, identified the Donkey Punch as "a crude pornographic term," one of many "lewd acts" that Enron employees used to describe their schemes.
Cantwell asked the Federal Energy Regulatory Commission to take down the emails that were on its website due to the content. Jeopardy! The term received extensive coverage online after it was given as an incorrect response on the January 16, 2012, broadcast of the American game show Jeopardy!. The prompt was "A blow to the back of the neck is the punch named for this animal"; the correct response was rabbit punch, a dangerous boxing move. The first contestant responded with "What is a donkey?" The subsequent contestant gave the correct response. A clip of the scene became a viral video. References External links Donkey punch citation at Double-Tongued Word Wrestler Dictionary Anal eroticism Sexual slang Sexual acts Sexual urban legends Sexual violence Violence against women
Donkey punch
[ "Biology" ]
868
[ "Sexual acts", "Behavior", "Sexuality", "Mating" ]
5,683,309
https://en.wikipedia.org/wiki/Institution%20of%20Structural%20Engineers
The Institution of Structural Engineers is a British professional body for structural engineers. In 2021, it had 29,900 members operating in 112 countries. It provides professional accreditation and publishes a magazine, The Structural Engineer, which has been produced monthly since 1924. It also has a research journal, Structures, published by Elsevier. History The Institution gained its Royal Charter in March 1934. It was established at the Ritz Hotel, London on 21 July 1908 as the Concrete Institute, as the result of a need to define standards and rules for the proper use of concrete in the construction industry. H. Kempton Dyson was one of the founder members and the first permanent secretary. On 22 February 1909, the Institution was incorporated under the Companies Acts 1862-1907 as a company limited by guarantee not having a capital divided into shares. It was renamed the Institution of Structural Engineers in 1922, when its areas of interest were extended to cover 'structures' of all kinds. By 1925 the Institution had 1,700 members and has continued to grow over the years. It has fifty groups worldwide. The first woman to be elected as an Associate member was Florence Mary Taylor in 1926. It took until 1947 for Mary Irvine to become the first woman elected as a Chartered Member, and until 1954 for Marjem Chatterton to become the first woman elected as a Fellow. Presidents See also Construction Industry Council Engineering Council UK Institution of Civil Engineers Gold Medal of the Institution of Structural Engineers Structural Awards References ECUK Licensed Members 1908 establishments in the United Kingdom Organizations established in 1908
Institution of Structural Engineers
[ "Engineering" ]
309
[ "Structural engineering", "Institution of Structural Engineers" ]
5,683,324
https://en.wikipedia.org/wiki/Macrophage%20colony-stimulating%20factor
The colony stimulating factor 1 (CSF1), also known as macrophage colony-stimulating factor (M-CSF), is a secreted cytokine which causes hematopoietic stem cells to differentiate into macrophages or other related cell types. Eukaryotic cells also produce M-CSF in order to combat intercellular viral infection. It is one of the three experimentally described colony-stimulating factors. M-CSF binds to the colony stimulating factor 1 receptor. It may also be involved in development of the placenta. Structure M-CSF is a cytokine, being a smaller protein involved in cell signaling. The active form of the protein is found extracellularly as a disulfide-linked homodimer, and is thought to be produced by proteolytic cleavage of membrane-bound precursors. Four transcript variants encoding three different isoforms (a proteoglycan, glycoprotein and cell surface protein) have been found for this gene. Function M-CSF (or CSF-1) is a hematopoietic growth factor that is involved in the proliferation, differentiation, and survival of monocytes, macrophages, and bone marrow progenitor cells. M-CSF affects macrophages and monocytes in several ways, including stimulating increased phagocytic and chemotactic activity, and increased tumour cell cytotoxicity. The role of M-CSF is not only restricted to the monocyte/macrophage cell lineage. By interacting with its membrane receptor (CSF1R or M-CSF-R encoded by the c-fms proto-oncogene), M-CSF also modulates the proliferation of earlier hematopoietic progenitors and influence numerous physiological processes involved in immunology, metabolism, fertility and pregnancy. M-CSF released by osteoblasts (as a result of endocrine stimulation by parathyroid hormone) exerts paracrine effects on osteoclasts. M-CSF binds to receptors on osteoclasts inducing differentiation, and ultimately leading to increased plasma calcium levels—through the resorption (breakdown) of bone. Additionally, high levels of CSF-1 expression are observed in the endometrial epithelium of the pregnant uterus as well as high levels of its receptor CSF1R in the placental trophoblast. Studies have shown that activation of trophoblastic CSF1R by local high levels of CSF-1 is essential for normal embryonic implantation and placental development. More recently, it was discovered that CSF-1 and its receptor CSF1R are implicated in the mammary gland during normal development and neoplastic growth. Clinical significance Locally produced M-CSF in the vessel wall contributes to the development and progression of atherosclerosis. M-CSF has been described to play a role in renal pathology including acute kidney injury and chronic kidney failure. The chronic activation of monocytes can lead to multiple metabolic, hematologic and immunologic abnormalities in patients with chronic kidney failure. In the context of acute kidney injury, M-CSF has been implicated in promoting repair following injury, but also been described in an opposing role, driving proliferation of a pro-inflammatory macrophage phenotype. As a drug target PD-0360324 and MCS110 are CSF1 inhibitors in clinical trials for some cancers. See also CSF1R inhibitors. Interactions Macrophage colony-stimulating factor has been shown to interact with PIK3R2. References Further reading External links Cytokines Glycoproteins Proteoglycans
Macrophage colony-stimulating factor
[ "Chemistry" ]
781
[ "Glycoproteins", "Glycobiology", "Cytokines", "Signal transduction" ]
5,683,664
https://en.wikipedia.org/wiki/Ceva%20%28semiconductor%20company%29
Ceva Inc. is a publicly traded semiconductor intellectual property (IP) company, headquartered in Rockville, Maryland, that specializes in digital signal processor (DSP) technology. The company's main development facilities are located in Herzliya, Israel, and Sophia Antipolis, France. History Ceva Inc. was created in November 2002, through the combination of the DSP IP licensing division of DSP Group (based in Israel) and Parthus Technologies plc. Parthus, originally named Silicon Systems Ltd, was founded in Dublin, Ireland, in 1993 by Brian Long and Peter McManamon; it had its initial public offering in May 2000, just as the dot-com bubble was bursting. The combination agreement was announced in April 2002. DSP Group had founded a US company in 2001, originally called DSP Cores, Inc. and then Corage, Inc. The combined company used the name ParthusCeva and planned to list its shares on Nasdaq with the symbol PCVA and on the London Stock Exchange with the symbol PCV. In December 2003, the company dropped the "Parthus" from its name and changed the ticker symbol to Ceva. In 2007, it sold its stake in Dublin-based company GloNav to NXP Semiconductor for a gain of $10.9 million. The company develops semiconductor intellectual property core technologies for multimedia and wireless communications. Ceva claimed the largest number of baseband processors in 2010, and a 90% DSP IP market share in 2011. In July 2014 it acquired RivieraWaves SAS, a private company based in France. A 2018 document promoting Israeli innovations mentioned the company. In July 2019 it acquired the Hillcrest Labs sensor fusion business from InterDigital. Also in July 2019, it entered into a strategic partnership with a Canadian company, Immervision, to secure exclusive licensing rights for its patented image processing and sensor fusion technologies for wide-angle cameras. On May 31, 2021, Ceva acquired Intrinsix, another semiconductor design company, for an estimated $33 million. On September 20, 2023, Cadence acquired Intrinsix Corporation from Ceva. In December 2023, Ceva launched a new brand identity reflecting its focus on smart edge IP innovation and its commitment to being a partner of choice for IP solutions that power the smart edge. Technologies Imaging and computer vision Ceva develops technology for low-cost, low-power computational photography and computer vision. The company provides vision DSP cores, deep neural network toolkits, real-time software libraries, hardware accelerators, and algorithm developer ecosystems. Deep learning Ceva develops software for deep neural networks centered on the Ceva-XM computer vision and NeuPro AI cores. NeuPro is Ceva's family of low-power artificial intelligence processors for deep learning. NeuPro processors are self-contained, specialized AI processors, scaling in performance for a broad range of end markets including IoT, smartphones, surveillance, automotive, robotics, medical, and industrial. This group of products offers high-performance configurations ranging from 2 Tera Ops Per Second (TOPS) for the entry-level processor to 12.5 TOPS for the most advanced configuration. Wireless IoT Wireless connectivity is often used in devices being created for the Internet of things (IoT). Ceva develops integrated Wi-Fi, Bluetooth, ultra-wideband, and narrowband IoT wireless platforms for integration into a system on a chip (SoC).
See also Qualcomm Hexagon Texas Instruments TMS320 References Companies based in Silicon Valley Companies listed on the Nasdaq Computer companies of the United States Computer hardware companies Embedded microprocessors Semiconductor companies of Israel Electronics companies established in 2002
Ceva (semiconductor company)
[ "Technology" ]
774
[ "Computer hardware companies", "Computers" ]
5,684,046
https://en.wikipedia.org/wiki/Methacrolein
Methacrolein, or methacrylaldehyde, is an unsaturated aldehyde. It is a clear, colorless, flammable liquid. Methacrolein is one of two major products resulting from the reaction of isoprene with OH in the atmosphere, the other product being methyl vinyl ketone (MVK, also known as butenone). These compounds are important components of the atmospheric oxidation chemistry of biogenic chemicals, which can result in the formation of ozone and/or particulates. Methacrylaldehyde is also present in cigarette smoke. It can be found in the essential oil of the plant Big Sagebrush (Artemisia tridentata) which contains 5% methacrolein. Industrially, the primary use of methacrolein is in the manufacture of polymers and synthetic resins. Exposure to methacrolein is highly irritating to the eyes, nose, throat and lungs. See also Acrolein Methacrylic acid References External links Hazardous Substance Fact Sheet Alkenals Monomers Enones
Methacrolein
[ "Chemistry", "Materials_science" ]
227
[ "Monomers", "Polymer chemistry" ]
5,684,672
https://en.wikipedia.org/wiki/Circumhorizontal%20arc
A circumhorizontal arc is an optical phenomenon that belongs to the family of ice halos formed by the refraction of sunlight or moonlight in plate-shaped ice crystals suspended in the atmosphere, typically in actual cirrus or cirrostratus clouds. In its full form, the arc has the appearance of a large, brightly spectrum-coloured band (red being the topmost colour) running parallel to the horizon, located far below the Sun or Moon. The distance between the arc and the Sun or Moon is twice as far as the common 22-degree halo. Often, when the halo-forming cloud is small or patchy, only fragments of the arc are seen. As with all halos, it can be caused by the Sun as well as (but much more rarely) the Moon. Other currently accepted names for the circumhorizontal arc are circumhorizon arc or lower symmetric 46° plate arc. The misleading term "fire rainbow" is sometimes used to describe this phenomenon, although it is neither a rainbow, nor related in any way to fire. The term, apparently coined in 2006, may originate in the occasional appearance of the arc as "flames" in the sky, when it occurs in fragmentary cirrus clouds. Formation The halo is formed by sunlight entering horizontally-oriented, flat, hexagonal ice crystals through a vertical side face and leaving through the near horizontal bottom face (plate thickness does not affect the formation of the halo). In principle, Parry oriented column crystals may also produce the arc, although this is rare. The 90° inclination between the ray entrance and exit faces produce the well-separated spectral colours. The arc has a considerable angular extent and thus, rarely is complete. When only fragments of a cirrus cloud are in the appropriate sky and sun position, they may appear to shine with spectral colours. Frequency How often a circumhorizontal arc is seen depends on the location and the latitude of the observer. In the United States it is a relatively common halo, seen several times each summer in any one place. In contrast, it is a rare phenomenon in northern Europe for several reasons. Apart from the presence of ice-containing clouds in the right position in the sky, the halo requires that the light source (Sun or Moon) be very high in the sky, at an elevation of 58° or greater. This means that the solar variety of the halo is impossible to see at locations north of 55°N or south of 55°S. A lunar circumhorizon arc might be visible at other latitudes, but is much rarer since it requires a nearly full Moon to produce enough light. At other latitudes the solar circumhorizontal arc is visible, for a greater or lesser time, around the summer solstice. Slots of visibility for different latitudes and locations may be looked up here. For example, in London the sun is only high enough for 140 hours between mid-May and late July, whereas Los Angeles has the sun higher than 58 degrees for 670 hours between late March and late September. Artificial circumhorizontal arcs A water glass experiment (known about since at least 1920) may be modified slightly to create an artificial circumhorizontal arc. Illuminating under a very steep angle from below the side face of a nearly completely water-filled cylindrical glass will refract the light into the water. The glass should be situated at the edge of a table. The second refraction at the top water-air interface will then project a hyperbola at a vertical wall behind it. 
The overall refraction is then equivalent to the refraction through an upright hexagonal plate crystal when the rotational averaging is taken into account. A colorful artificial circumhorizontal arc will then appear projected on the wall. Using a spherical projection screen instead will result in a closer analogy to the natural halo counterpart. Other artificial halos can be created by similar means. Similar optical phenomena Circumhorizontal arcs, especially when only fragments can be seen, are sometimes confused with cloud iridescence. This phenomenon also causes clouds to appear multi-coloured, but it originates from diffraction (typically by liquid water droplets or ice crystals) rather than refraction. The two phenomena can be distinguished by several features. Firstly, a circumhorizon arc always has a fixed location in the sky in relation to the Sun or Moon (namely below it at an angle of 46°), while iridescence can occur in different positions (often directly around the Sun or Moon). Secondly, the colour bands in a circumhorizon arc always run horizontally with the red on top, while in iridescence they are much more random in sequence and shape, which roughly follows the contours of the cloud that causes it. Finally, the colours of a circumhorizon arc are pure and spectral (more so than in a rainbow), while the colours in cloud iridescence have a more washed-out, "mother of pearl" appearance. Confusion with other members of the halo family, such as sun dogs or the circumzenithal arc, may also arise, but these are easily dismissed by their entirely different positions in relation to the Sun or Moon. More difficult is the distinction between the circumhorizontal arc and the infralateral arc, both of which almost entirely overlap when the Sun or Moon is at a high elevation. The difference is that the circumhorizontal arc always runs parallel to the horizon (although pictures typically show it as a curved line due to perspective distortion), whereas the infralateral arc curves upward at its ends. Gallery See also Halo (optical phenomenon) Sundogs Cloud iridescence Circumzenithal arc Polar stratospheric cloud References External links Atmospheric Optics - Circumhorizon Arc How rare are they? When to see them. Atmospheric Optics - Image gallery Circumhorizontal Arc - Arbeitskreis Meteore e.V. Circumhorizontal Arc - Harald Edens Weather Photography Images of artificial circumhorizontal, circumzenithal and suncave Parry arcs Gilbert light experiments for boys - (1920), p. 98, Experiment No. 94 Atmospheric optical phenomena
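As a worked check of the geometry described in the Formation and Frequency sections above, the short sketch below applies Snell's law to a ray that enters a vertical face of a plate crystal and must exit the horizontal base without total internal reflection; it recovers the roughly 58° minimum solar elevation. The refractive index used is an approximate value for ice at visible wavelengths, and the in-plane geometry is a simplification of the full halo calculation.

```python
import math

n_ice = 1.31  # approximate refractive index of ice for visible light

def exits_bottom_face(sun_elevation_deg: float) -> bool:
    """Ray enters a vertical side face and must leave through the horizontal bottom face."""
    # Refraction at the vertical face: the angle of incidence, measured from the
    # horizontal face normal, equals the solar elevation for an in-plane ray.
    r = math.asin(math.sin(math.radians(sun_elevation_deg)) / n_ice)
    # Angle of incidence at the horizontal bottom face, measured from the vertical.
    incidence_bottom = math.pi / 2 - r
    # Transmission requires staying below the critical angle for total internal reflection.
    return incidence_bottom < math.asin(1 / n_ice)

# Find the minimum elevation (to the nearest 0.1 degree) at which the arc can form.
threshold = next(e / 10 for e in range(0, 900) if exits_bottom_face(e / 10))
print(f"minimum solar elevation ~ {threshold:.1f} degrees")  # about 57.9, i.e. the ~58 degree limit
```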
Circumhorizontal arc
[ "Physics" ]
1,314
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
5,684,779
https://en.wikipedia.org/wiki/Coleto%20Creek%20Reservoir
Coleto Creek Reservoir is a reservoir on Coleto Creek and Perdido Creek located in Fannin, Texas, 15 miles (24 km) southwest of Victoria, Texas. The surface of the lake extends into Victoria and Goliad counties. The reservoir was formed in 1980 by the construction of a dam by the Guadalupe-Blanco River Authority to provide a power station cooling pond for electric power generation. Coleto Creek Reservoir is a venue for outdoor recreation, including fishing and boating. Fish and plant life Coleto Creek Reservoir has been stocked with species of fish intended to improve the utility of the reservoir for recreational fishing. Fish present in the reservoir include white bass, hybrid striped bass, catfish, crappie, sunfish, bluegill, and largemouth bass. Vegetation in the lake includes cattail, pondweed, American lotus, rushes, and hydrilla. Recreational uses The Guadalupe-Blanco River Authority maintains a public park at the reservoir with recreational facilities for boating and fishing. The reservoir has camp sites, picnic areas, cabins, a boat ramp for access to the water, a long, lighted fishing pier, a hiking path, and restroom facilities. Climate According to the Köppen Climate Classification system, the area has a humid subtropical climate, abbreviated "Cfa" on climate maps. The hottest temperature on record at Coleto Creek Reservoir was set on August 19, 2011, while the coldest temperatures on record occurred on February 15–16, 2021, and December 23, 2022. References External links Coleto Creek Reservoir - Guadalupe-Blanco River Authority Coleto Creek Reservoir - Texas Parks & Wildlife Coleto Creek - Handbook of Texas Online Reservoirs in Texas Protected areas of Goliad County, Texas Protected areas of Victoria County, Texas Bodies of water of Goliad County, Texas Bodies of water of Victoria County, Texas Cooling ponds Guadalupe-Blanco River Authority
Coleto Creek Reservoir
[ "Chemistry", "Environmental_science" ]
369
[ "Cooling ponds", "Water pollution" ]
5,684,937
https://en.wikipedia.org/wiki/Instrument%20control
Instrument control consists of connecting a desktop instrument to a computer and taking measurements. History In the late 1960s the first bus used for communication was developed by Hewlett-Packard and was called HP-IB (Hewlett-Packard Interface Bus). Since HP-IB was originally designed to only work with HP instruments, the need arose for a standard, high-speed interface for communication between instruments and controllers from a variety of vendors. This need was addressed in 1975 when the Institute of Electrical and Electronics Engineers (IEEE) published ANSI/IEEE Standard 488-1975, IEEE Standard Digital Interface for Programmable Instrumentation, which contained the electrical, mechanical, and functional specifications of an interfacing system. The standard was updated in 1987 and again in 1992. This bus is known by three different names, General Purpose Interface Bus (GPIB), Hewlett-Packard Interface Bus (HP-IB), and IEEE-488 Bus, and is used worldwide. Today, there are several other buses in addition to the GPIB that can be used for instrument control. These include: Ethernet, USB, Serial, PCI, and PXI. Software In addition to the hardware bus to control an instrument, software for the PC is also needed. Virtual Instrument Software Architecture, or VISA, was developed by the VME eXtensions for Instrumentation (VXI) plug and play Systems Alliance as a specification for I/O software. VISA was a step toward industry-wide software compatibility. The VISA specification defines a software standard for VXI, and for GPIB, serial, Ethernet and other interfaces. More than 35 of the largest instrumentation companies in the industry endorse VISA as the standard. The alliance created distinct frameworks by grouping the most popular operating systems, application development environments, and programming languages and defined in-depth specifications to guarantee interoperability of components within each framework. Instruments can be programmed by sending and receiving text-based SCPI commands or by using an instrument driver. To ease the programming of instruments, many instruments are provided with industry-standard instrument drivers such as VXIplug&play or IVI. These drivers require a VISA library to be installed on the PC. IVI instrument drivers were designed to enable interchangeability of instruments in a manufacturing setting where automation and reduced downtime are important, but they are often used in other applications as well. Application development environments can support instrument control by supporting VISA and industry-standard instrument drivers. Environments supporting VISA include LabVIEW, LabWindows/CVI, MATLAB, and VEE. Furthermore, the VISA library can support programming languages like C, C++, C#, Python and others.
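A minimal sketch of the VISA-plus-SCPI approach described above, using the third-party PyVISA package; the GPIB resource address, the specific SCPI measurement command, and the instrument's responses are hypothetical and depend on the hardware actually connected.

```python
# Requires the third-party PyVISA package (pip install pyvisa) and an installed VISA library.
import pyvisa

rm = pyvisa.ResourceManager()
print(rm.list_resources())             # e.g. ('GPIB0::22::INSTR', ...)

# Hypothetical GPIB address; substitute the address of the connected instrument.
inst = rm.open_resource("GPIB0::22::INSTR")

print(inst.query("*IDN?"))             # standard SCPI identification query
inst.write("*RST")                     # reset the instrument to a known state
reading = inst.query("MEAS:VOLT:DC?")  # SCPI measurement query (instrument-dependent)
print(float(reading))

inst.close()
rm.close()
```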
Instrument control
[ "Technology", "Engineering" ]
640
[ "Electronic test equipment", "Measuring instruments" ]
5,684,961
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%206
SMAD family member 6, also known as SMAD6, is a protein that in humans is encoded by the SMAD6 gene. SMAD6 is a protein that, as its name describes, is a homolog of the Drosophila gene "mothers against decapentaplegic". It belongs to the SMAD family of proteins, which belong to the TGFβ superfamily of modulators. Like many other TGFβ family members SMAD6 is involved in cell signalling. It acts as a regulator of TGFβ family (such as bone morphogenetic proteins) activity by competing with SMAD4 and preventing the transcription of SMAD4's gene products. There are two known isoforms of this protein. Nomenclature The SMAD proteins are homologs of both the drosophila protein, mothers against decapentaplegic (MAD) and the C. elegans protein SMA. The name is a combination of the two. During Drosophila research, it was found that a mutation in the gene MAD in the mother repressed the gene decapentaplegic in the embryo. The phrase "Mothers against" was added as a humorous take-off on organizations opposing various issues e.g., Mothers Against Drunk Driving, or MADD; and based on a tradition of such unusual naming within the gene research community. Disease associations Heterozygous, damaging mutations in SMAD6 are the most frequent genetic cause of non-syndromic craniosynostosis identified to date. Interactions Mothers against decapentaplegic homolog 6 has been shown to interact with: HOXC8, MAP3K7, Mothers against decapentaplegic homolog 7, PIAS4, and STRAP. References Further reading Transcription factors Developmental genes and proteins MH1 domain MH2 domain Human proteins
Mothers against decapentaplegic homolog 6
[ "Chemistry", "Biology" ]
381
[ "Gene expression", "Molecular and cellular biology stubs", "Signal transduction", "Biochemistry stubs", "Induced stem cells", "Developmental genes and proteins", "Transcription factors" ]
5,685,017
https://en.wikipedia.org/wiki/Behavior%20of%20nuclear%20fuel%20during%20a%20reactor%20accident
This page describes how uranium dioxide nuclear fuel behaves both during normal nuclear reactor operation and under reactor accident conditions, such as overheating. Work in this area is often very expensive to conduct, and so has often been performed on a collaborative basis between groups of countries, usually under the aegis of the Organisation for Economic Co-operation and Development's Committee on the Safety of Nuclear Installations (CSNI). Swelling Cladding Both the fuel and cladding can swell. Cladding covers the fuel to form a fuel pin and can be deformed. It is normal to fill the gap between the fuel and the cladding with helium gas to permit better thermal contact between the fuel and the cladding. During use the amount of gas inside the fuel pin can increase because of the formation of noble gases (krypton and xenon) by the fission process. If a loss-of-coolant accident (LOCA) (e.g. Three Mile Island) or a reactivity initiated accident (RIA) (e.g. Chernobyl or SL-1) occurs, then the temperature of this gas can increase. As the fuel pin is sealed, the pressure of the gas will increase (PV = nRT), and it is possible for the cladding to deform and burst. It has been noticed that both corrosion and irradiation can alter the properties of the zirconium alloy commonly used as cladding, making it brittle. As a result, experiments using unirradiated zirconium alloy tubes can be misleading. According to one paper, the following difference between the cladding failure modes of unused and used fuel was seen. Unirradiated fuel rods were pressurized before being placed in a special reactor at the Japanese Nuclear Safety Research Reactor (NSRR), where they were subjected to a simulated RIA transient. These rods failed after ballooning late in the transient, when the cladding temperature was high. The failure of the cladding in these tests was ductile, and it was a burst opening. The used fuel (61 GW days/tonne of uranium) failed early in the transient with a brittle fracture which was a longitudinal crack. It was found that hydrided zirconium tubing is weaker and that its bursting pressure is lower. The common failure process of fuel in water-cooled reactors is a transition to film boiling and the subsequent ignition of the zirconium cladding in steam. The effects of the intense flow of hot hydrogen reaction products on the fuel pellets and on the bundle's wall are well represented in the sidebar picture. Fuel The nuclear fuel can swell during use; this is because of effects such as fission gas formation in the fuel and the damage which occurs to the lattice of the solid. The fission gases accumulate in the void that forms in the center of a fuel pellet as burnup increases. As the void forms, the once-cylindrical pellet degrades into pieces. The swelling of the fuel pellet can cause pellet-cladding interaction when it thermally expands to the inside of the cladding tubing. The swollen fuel pellet imposes mechanical stresses upon the cladding. A document on the subject of the swelling of the fuel can be downloaded from the NASA web site. Fission gas release As the fuel is degraded or heated, the more volatile fission products which are trapped within the uranium dioxide may become free. For example, see Colle et al. A report on the release of 85Kr, 106Ru and 137Cs from uranium when air is present has been written. It was found that uranium dioxide was converted to U3O8 between about 300 and 500 °C in air. They report that this process requires some time to start; after the induction time the sample gains mass. 
The authors report that a layer of U3O7 was present on the uranium dioxide surface during this induction time. They report that 3 to 8% of the krypton-85 was released, and that much less of the ruthenium (0.5%) and caesium (2.6 x 10−3%) was released during the oxidation of the uranium dioxide. Heat transfer between the cladding and the water In a water-cooled power reactor (or in a water-filled spent fuel pool, SFP), if a power surge occurs as a result of a reactivity initiated accident, an understanding of the transfer of heat from the surface of the cladding to the water is very useful. In a French study, a metal pipe immersed in water (under both typical PWR and SFP conditions) was electrically heated to simulate the generation of heat within a fuel pin by nuclear processes. The temperature of the pipe was monitored by thermocouples, and for the tests conducted under PWR conditions the water entering the larger pipe (14.2 mm diameter) holding the test metal pipe (9.5 mm outside diameter and 600 mm long) was at 280 °C and 15 MPa. The water was flowing past the inner pipe at circa 4 m s−1, and the cladding was subjected to heating at 2200 to 4900 °C s−1 to simulate an RIA. It was found that as the temperature of the cladding increased, the rate of heat transfer from the surface of the cladding at first increased, as the water boiled at nucleation sites. When the heat flux is greater than the critical heat flux, a boiling crisis occurs. This occurs as the temperature of the fuel cladding surface increases to the point where the surface of the metal is too hot (the surface dries out) for nucleate boiling. When the surface dries out, the rate of heat transfer decreases; after a further increase in the temperature of the metal surface, boiling resumes, but it is now film boiling. Hydriding and waterside corrosion As a nuclear fuel bundle increases in burnup (time in reactor), the radiation begins changing not only the fuel pellets inside the cladding, but the cladding material itself. The zirconium chemically reacts with the water flowing around it as coolant, forming a protective oxide on the surface of the cladding. Typically, a fifth of the cladding wall will be consumed by oxide in PWRs; the corrosion layer is thinner in BWRs. The chemical reaction that takes place is: Zr + 2 H2O → ZrO2 + 2 H2 (g) Hydriding occurs when the product gas (hydrogen) precipitates out as hydrides within the zirconium. This causes the cladding to become brittle instead of ductile. The hydride bands form in rings within the cladding. As fission products accumulate, the hoop stress experienced by the cladding increases. The material limitations of the cladding are one aspect that limits the amount of burnup nuclear fuel can accumulate in a reactor. CRUD (Chalk River Unidentified Deposits) was discovered by Chalk River Laboratories. It occurs on the exterior of the clad as burnup is accumulated. When a nuclear fuel assembly is prepared for onsite storage, it is dried and moved to a spent nuclear fuel shipping cask with scores of other assemblies. Then it sits on a concrete pad for a number of years, waiting for an intermediate storage facility or reprocessing. The transportation of radiation-damaged cladding is tricky because it is so fragile. After the assembly has been removed from the reactor and has cooled down in the spent fuel pool, the hydrides within its cladding reorient themselves so that they point radially out from the fuel, rather than circumferentially in the direction of the hoop stress. 
This puts the fuel in a situation such that, when it is moved to its final resting place, if the cask were to fall, the cladding would be so weak that it could break and release the spent fuel pellets inside the cask. Corrosion on the inside of the cladding Zirconium alloys can undergo stress corrosion cracking when exposed to iodine; the iodine is formed as a fission product which, depending on the nature of the fuel, can escape from the pellet. It has been shown that iodine causes the rate of cracking in pressurised zircaloy-4 tubing to increase. Graphite moderated reactors In the case of carbon dioxide cooled, graphite moderated reactors such as magnox and AGR power reactors, an important corrosion reaction is the reaction of a molecule of carbon dioxide with graphite (carbon) to form two molecules of carbon monoxide (CO2 + C → 2 CO). This is one of the processes which limits the working life of this type of reactor. Water-cooled reactors Corrosion In a water-cooled reactor, the action of radiation on the water (radiolysis) forms hydrogen peroxide and oxygen. These can cause stress corrosion cracking of metal parts, which include fuel cladding and other pipework. To mitigate this, hydrazine and hydrogen are injected into a BWR or PWR primary cooling circuit as corrosion inhibitors to adjust the redox properties of the system. A review of recent developments on this topic has been published. Thermal stresses upon quenching In a loss-of-coolant accident (LOCA) it is thought that the surface of the cladding could reach a temperature between 800 and 1400 K, and the cladding will be exposed to steam for some time before water is reintroduced into the reactor to cool the fuel. During this time, when the hot cladding is exposed to steam, some oxidation of the zirconium will occur, forming a zirconium oxide which is more zirconium-rich than zirconia. This Zr(O) phase is the α-phase; further oxidation forms zirconia. The longer the cladding is exposed to steam, the less ductile it will be. One measure of the ductility is to compress a ring along a diameter (at a constant rate of displacement, in this case 2 mm min−1) until the first crack occurs, after which the ring starts to fail. The elongation which occurs between the point when the maximum force is applied and the point when the mechanical load has declined to 80% of the load required to induce the first crack is the L0.8 value, in mm. The more ductile a sample is, the greater this L0.8 value will be. In one experiment, the zirconium is heated in steam to 1473 K, and the sample is then slowly cooled in steam to 1173 K before being quenched in water. As the heating time at 1473 K is increased, the zirconium becomes more brittle and the L0.8 value declines. Aging of steels Irradiation causes the properties of steels to become poorer; for instance, SS316 becomes less ductile and less tough. Creep and stress corrosion cracking also become worse. Papers on this effect continue to be published. Cracking and overheating of the fuel Cracking occurs because, as the fuel expands on heating, the core of the pellet expands more than the rim. Because of the thermal stress thus formed the fuel cracks; the cracks tend to run from the center to the edge in a star-shaped pattern. A PhD thesis on the subject has been published by a student at the Royal Institute of Technology in Stockholm (Sweden). The cracking of the fuel has an effect on the release of radioactivity from fuel both under accident conditions and when the spent fuel is used as the final disposal form. 
The cracking increases the surface area of the fuel, which increases the rate at which fission products can leave the fuel. The temperature of the fuel varies as a function of the distance from the center to the rim. At distance x from the center, the temperature (Tx) is described by the equation Tx = TRim + ρ (rpellet² – x²) / (4 Kf), where ρ is the power density (W m−3) and Kf is the thermal conductivity. To illustrate this, a series of fuel pellets with a rim temperature of 200 °C (typical for a BWR), with different diameters and power densities of 250 W m−3, has been modeled using the above equation. These fuel pellets are rather large; it is normal to use oxide pellets which are about 10 mm in diameter. To show the effects of different power densities on the centerline temperatures, two graphs for 20 mm pellets at different power levels were also prepared. It is clear for all pellets (and most of all for uranium dioxide) that, for a given pellet size, a limit must be set on the power density. The maths used for these calculations could likely also be used to explain how electrical fuses function, and to predict the centerline temperature in any system where heat is released throughout a cylinder-shaped object. Loss of volatile fission products from pellets The heating of pellets can result in some of the fission products being lost from the core of the pellet. If the xenon can rapidly leave the pellet, then the amount of 134Cs and 137Cs present in the gap between the cladding and the fuel will increase. As a result, if the zircaloy tubes holding the pellet are broken, then a greater release of radioactive caesium from the fuel will occur. The 134Cs and 137Cs are formed in different ways, and as a result the two caesium isotopes can be found in different parts of a fuel pin. It is clear that the volatile iodine and xenon isotopes have minutes in which they can diffuse out of the pellet and into the gap between the fuel and the cladding. Here the xenon can decay to the long-lived caesium isotope. Genesis of 137Cs These fission yields were calculated for 235U assuming thermal neutrons (0.0253 eV), using data from the chart of the nuclides. Genesis of 134Cs In the case of 134Cs, the precursor to this isotope is stable 133Cs, which is formed by the decay of much longer-lived xenon and iodine isotopes. No 134Cs is formed without neutron activation, as 134Xe is a stable isotope. As a result of this different mode of formation, the physical location of 134Cs can differ from that of 137Cs. These fission yields were calculated for 235U assuming thermal neutrons (0.0253 eV), using data from the chart of the nuclides. An example of a recent PIE study In a recent study, used fuel consisting of 20% enriched uranium dispersed in a range of different matrices was examined to determine the physical locations of different isotopes and chemical elements. The three fuels were: a solid solution of urania in yttria-stabilized zirconia (YSZ) (Y:Zr atom ratio of 1:4); urania particles in an inert matrix formed by a mixture of YSZ and spinel (MgAl2O4); and urania particles dispersed in an inert matrix formed by a mixture of YSZ and alumina. The fuels varied in their ability to retain the fission xenon; the first of the three fuels retained 97% of the 133Xe, the second retained 94%, while the last fuel retained only 76% of this xenon isotope. The 133Xe is a long-lived radioactive isotope which can diffuse slowly out of the pellet; its decay product, 133Cs, can then be neutron activated to form 134Cs. 
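Returning to the centerline-temperature relation quoted above (under "Cracking and overheating of the fuel"), a short numerical sketch of that parabolic profile is given below. The thermal conductivity and power density used here are rough assumed values chosen only to make the example run, not data for any particular fuel.

# Sketch of the radial temperature profile T(x) = T_rim + rho*(r_pellet^2 - x^2)/(4*k_f)
# for uniform volumetric heating of a long cylindrical pellet. All numbers
# are illustrative assumptions, not measured fuel data.
def pellet_temperature(x_m, t_rim_c, rho_w_per_m3, r_pellet_m, k_f_w_per_m_k):
    """Temperature at radius x_m inside the pellet, in the same units as t_rim_c."""
    return t_rim_c + rho_w_per_m3 * (r_pellet_m**2 - x_m**2) / (4.0 * k_f_w_per_m_k)

r_pellet = 0.005                      # 10 mm diameter pellet
rho = 3.0e8                           # assumed power density, W m-3
k_f = 3.0                             # assumed thermal conductivity, W m-1 K-1
print(pellet_temperature(0.0, 200.0, rho, r_pellet, k_f))       # centerline temperature
print(pellet_temperature(r_pellet, 200.0, rho, r_pellet, k_f))  # rim temperature (= 200)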
The shorter-lived 137Xe was less able to escape from the pellets; 99%, 98% and 95% of the 137Xe was retained within the pellets. It was also found that the 137Cs concentration in the core of the pellet was much lower than the concentration in the rim of the pellet, while the less volatile 106Ru was spread more evenly throughout the pellets. One of the fuels examined consisted of particles of a solid solution of urania in yttria-stabilized zirconia dispersed in alumina, which had been burnt up to 105 GW-days per cubic meter. A scanning electron microscope (SEM) image was taken of the interface between the alumina and a fuel particle. It can be seen that the fission products are well confined within the fuel; few of the fission products have entered the alumina matrix. The neodymium is spread throughout the fuel in a uniform manner, while the caesium is spread almost homogeneously throughout the fuel. The caesium concentration is slightly higher at two points where xenon bubbles are present. Much of the xenon is present in bubbles, while almost all of the ruthenium is present in the form of nanoparticles. The ruthenium nanoparticles are not always colocated with the xenon bubbles. Release of fission products into coolant water in a Three Mile Island type accident At Three Mile Island a recently SCRAMed core was starved of cooling water; as a result of the decay heat, the core dried out and the fuel was damaged. Attempts were made to recool the core using water. According to the International Atomic Energy Agency, for a 3,000 MW(t) PWR the normal coolant radioactivity levels are shown in the table below, together with the coolant activities for reactors which have been allowed to dry out (and overheat) before being recovered with water. In a gap release, the activity in the fuel/cladding gap has been released, while in a core melt release the core was melted before being recovered by water. Chernobyl release The release of radioactivity from the used fuel is greatly controlled by the volatility of the elements. At Chernobyl, much of the xenon and iodine was released, while much less of the zirconium was released. The fact that only the more volatile fission products are released with ease will greatly retard the release of radioactivity in the event of an accident which causes serious damage to the core. Using two sources of data, it is possible to see that the elements which were in the form of gases, volatile compounds or semi-volatile compounds (such as CsI) were released at Chernobyl, while the less volatile elements which form solid solutions with the fuel remained inside the reactor fuel. According to the OECD NEA report on Chernobyl (ten years on), the following proportions of the core inventory were released. The physical and chemical forms of the release included gases, aerosols and finely fragmented solid fuel. According to some research, ruthenium is very mobile when nuclear fuel is heated in air. This mobility has been more evident in reprocessing, with related releases of ruthenium, the most recent being the airborne radioactivity increase in Europe in autumn 2017: in the ionizing radiation environment of spent fuel and in the presence of oxygen, radiolysis reactions can generate the volatile compound ruthenium(VIII) oxide, which has a low boiling point and is a strong oxidizer, reacting with virtually any fuel or hydrocarbon, such as those used in PUREX. Some work on TRISO fuel heated in air, and on the resulting retention of nuclides by the fuel's encapsulation, has been published. 
Table of chemical data The releases of fission products and uranium from uranium dioxide (from spent BWR fuel; the burnup was 65 GWd t−1) heated in a Knudsen cell have been reported. Fuel was heated in the Knudsen cell both with and without preoxidation in oxygen at circa 650 K. It was found that, even for the noble gases, a high temperature was required to liberate them from the uranium oxide solid. For unoxidized fuel, 2300 K was required to release 10% of the uranium, while oxidized fuel required only 1700 K to release the same fraction. According to the report on Chernobyl used in the above table, 3.5% of the core inventory of the following isotopes was released: 239Np, 238Pu, 239Pu, 240Pu, 241Pu and 242Cm. Degradation of the whole fuel element Water and zirconium can react violently at 1200 °C; at the same temperature, the zircaloy cladding can react with uranium dioxide to form zirconium oxide and a uranium/zirconium alloy melt. PHEBUS In France a facility exists in which a fuel melting incident can be made to happen under strictly controlled conditions. In the PHEBUS research program, fuels have been allowed to heat up to temperatures in excess of the normal operating temperatures; the fuel in question is in a special channel which is in a toroidal nuclear reactor. The nuclear reactor is used as a driver core to irradiate the test fuel. While the reactor is cooled as normal by its own cooling system, the test fuel has its own cooling system, which is fitted with filters and equipment to study the release of radioactivity from the damaged fuel. The release of radioisotopes from fuel under different conditions has already been studied. After the fuel has been used in the experiment, it is subject to a detailed examination (PIE). In the 2004 annual report from the ITU, some results of the PIE on PHEBUS (FPT2) fuel are reported in section 3.6. LOFT The Loss of Fluid Tests (LOFT), funded by the USNRC, were an early attempt to scope the response of real nuclear fuel to conditions under a loss-of-coolant accident. The facility was built at Idaho National Laboratory, and was essentially a scale model of a commercial PWR ('power/volume scaling' was used between the LOFT model, with a 50 MWth core, and a commercial plant of 3000 MWth). The original intention (1963–1975) was to study only one or two major (large break) LOCAs, since these had been the main concern of US 'rule-making' hearings in the late 1960s and early 1970s. These rules had focussed around a rather stylised large-break accident, and a set of criteria (e.g. for the extent of fuel-clad oxidation) set out in 'Appendix K' of 10CFR50 (Code of Federal Regulations). Following the accident at Three Mile Island, detailed modelling of much smaller LOCAs became of equal concern. 38 LOFT tests were eventually performed, and their scope was broadened to study a wide spectrum of breach sizes. These tests were used to help validate a series of computer codes (such as RELAP-4, RELAP-5 and TRAC) then being developed to calculate the thermal-hydraulics of LOCAs. See also NUREG-1150 Nuclear power Contact of molten fuel with water and concrete Water Extensive work was done from 1970 to 1990 on the possibility of a steam explosion or FCI (fuel-coolant interaction) when molten 'corium' contacted water. Many experiments suggested quite low conversion of thermal to mechanical energy, whereas the theoretical models available appeared to suggest that much higher efficiencies were possible. 
An NEA/OECD report written on the subject in 2000 states that a steam explosion caused by contact of corium with water has four stages. Premixing As the jet of corium enters the water, it breaks up into droplets. During this stage the thermal contact between the corium and the water is not good, because a vapor film surrounds the droplets of corium and insulates the two from each other. It is possible for this meta-stable state to quench without an explosion, or it can trigger in the next step. Triggering An externally or internally generated trigger (such as a pressure wave) causes a collapse of the vapor film between the corium and the water. Propagation The local increase in pressure due to the increased heating of the water can generate enhanced heat transfer (usually due to rapid fragmentation of the hot fluid within the colder, more volatile one) and a greater pressure wave; this process can be self-sustained. (The mechanics of this stage would then be similar to those in a classical ZND detonation wave.) Expansion This process leads to the whole of the water being suddenly heated to boiling. This causes an increase in pressure (in layman's terms, an explosion), which can result in damage to the plant. Recent work In work in Japan in 2003, uranium dioxide and zirconium dioxide were melted in a crucible before being added to water. The resulting fragmentation of the fuel is reported in the Journal of Nuclear Science and Technology. Concrete A review of the subject has been published, and work on the subject continues to this day; in Germany at the FZK some work has been done on the effect of thermite on concrete, as a simulation of the effect of the molten core of a reactor breaking through the bottom of the pressure vessel into the containment building. Lava flows from corium The corium (molten core) will cool and change to a solid with time. It is thought that the solid weathers with time. The solid can be described as Fuel Containing Mass (FCM); it is a mixture of sand, zirconium and uranium dioxide which was heated to a very high temperature until it melted. The chemical nature of this FCM has been the subject of some research. The amount of fuel left in this form within the plant has been considered. A silicone polymer has been used to fix the contamination. The Chernobyl melt was a silicate melt which contained inclusions of Zr/U phases, molten steel and high-uranium zirconium silicate. The lava flow consists of more than one type of material: a brown lava and a porous ceramic material have been found. The uranium to zirconium ratio differs a lot between different parts of the solid; in the brown lava a uranium-rich phase with a U:Zr ratio of 19:3 to about 38:10 is found, while the uranium-poor phase in the brown lava has a U:Zr ratio of about 1:10. It is possible from the examination of the Zr/U phases to know the thermal history of the mixture. It can be shown that before the explosion, in part of the core, the temperature was higher than 2000 °C, while in some areas the temperature was over 2400–2600 °C. Spent fuel corrosion Uranium dioxide films Uranium dioxide films can be deposited by reactive sputtering using an argon and oxygen mixture at a low pressure. This has been used to make a layer of uranium oxide on a gold surface, which was then studied with AC impedance spectroscopy. 
Noble metal nanoparticles and hydrogen According to the work of the corrosion electrochemist Shoesmith the nanoparticles of Mo-Tc-Ru-Pd have a strong effect on the corrosion of uranium dioxide fuel. For instance his work suggests that when the hydrogen (H2) concentration is high (due to the anaerobic corrosion of the steel waste can) the oxidation of hydrogen at the nanoparticles will exert a protective effect on the uranium dioxide. This effect can be thought of as an example of protection by a sacrificial anode where instead of a metal anode reacting and dissolving it is the hydrogen gas which is consumed. References External links LOFT tests INEL News Idaho National Engineering Laboratory, 4 December 1979 LOFT L2-3 tests completed successfully, Idaho National Engineering Laboratory, June 1979 Second loss of fluid small break test conducted, Idaho National Engineering Laboratory, February 1980 Nuclear chemistry Nuclear fuels Nuclear reprocessing Nuclear safety and security Nuclear technology Uranium
Behavior of nuclear fuel during a reactor accident
[ "Physics", "Chemistry" ]
5,610
[ "Nuclear chemistry", "Nuclear technology", "nan", "Nuclear physics" ]
5,685,431
https://en.wikipedia.org/wiki/Ichnotaxon
An ichnotaxon (plural ichnotaxa) is "a taxon based on the fossilized work of an organism", i.e. the non-human equivalent of an artifact. Ichnotaxon comes from the Ancient Greek íchnos, meaning "track", and English taxon, itself derived from the Ancient Greek táxis, meaning "ordering". Ichnotaxa are names used to identify and distinguish morphologically distinctive ichnofossils, more commonly known as trace fossils (fossil records of lifeforms' movement, rather than of the lifeforms themselves). They are assigned genus and species ranks by ichnologists, much like organisms in Linnaean taxonomy. These are known as ichnogenera and ichnospecies, respectively. "Ichnogenus" and "ichnospecies" are commonly abbreviated as "igen." and "isp.". The binomial names of ichnospecies and their genera are to be written in italics. Most researchers classify trace fossils only as far as the ichnogenus rank, based upon trace fossils that resemble each other in morphology but have subtle differences. Some authors have constructed detailed hierarchies up to ichnosuperclass, recognizing such fine detail as to identify ichnosuperorder and ichnoinfraclass, but such attempts are controversial. Naming Due to the chaotic nature of trace fossil classification, several ichnogenera hold names normally affiliated with animal body fossils or plant fossils. For example, many ichnogenera are named with the suffix -phycus due to misidentification as algae. Edward Hitchcock was the first to use the now common -ichnus suffix in 1858, with Cochlichnus. History Due to trace fossils' history of being difficult to classify, there have been several attempts to enforce consistency in the naming of ichnotaxa. The first edition of the International Code of Zoological Nomenclature, published in 1961, ruled that names of taxa published after 1930 should be 'accompanied by a statement that purports to give characters differentiating the taxon'. This had the effect that names for most trace fossil taxa published after 1930 were unavailable under the code. This restriction was removed for ichnotaxa in the third edition of the code, published in 1985. See also Bird ichnology Trace fossil classification Glossary of scientific naming References External links Comments on the draft proposal to amend the Code with respect to trace fossils Trace Fossils - Kansas University Catalogue of Ichnotaxa Biological classification Trace fossils Zoological nomenclature
Ichnotaxon
[ "Biology" ]
517
[ "Zoological nomenclature", "Biological nomenclature", "nan" ]
5,685,631
https://en.wikipedia.org/wiki/Stark%20conjectures
In number theory, the Stark conjectures, introduced by Harold Stark and later expanded by John Tate, give conjectural information about the coefficient of the leading term in the Taylor expansion of an Artin L-function associated with a Galois extension K/k of algebraic number fields. The conjectures generalize the analytic class number formula expressing the leading coefficient of the Taylor series for the Dedekind zeta function of a number field as the product of a regulator related to S-units of the field and a rational number. When K/k is an abelian extension and the order of vanishing of the L-function at s = 0 is one, Stark gave a refinement of his conjecture, predicting the existence of certain S-units, called Stark units, which generate abelian extensions of number fields. Formulation General case The Stark conjectures, in the most general form, predict that the leading coefficient of an Artin L-function is the product of a type of regulator, the Stark regulator, with an algebraic number. Abelian rank-one case When the extension is abelian and the order of vanishing of an L-function at s = 0 is one, Stark's refined conjecture predicts the existence of Stark units, whose roots generate Kummer extensions of K that are abelian over the base field k (and not just abelian over K, as Kummer theory implies). As such, this refinement of his conjecture has theoretical implications for solving Hilbert's twelfth problem. Computation Stark units in the abelian rank-one case have been computed in specific examples, allowing verification of the veracity of his refined conjecture. These also provide an important computational tool for generating abelian extensions of number fields, forming the basis for some standard algorithms for computing abelian extensions of number fields. The first rank-zero cases are used in recent versions of the PARI/GP computer algebra system to compute Hilbert class fields of totally real number fields, and the conjectures provide one solution to Hilbert's twelfth problem, which challenged mathematicians to show how class fields may be constructed over any number field by the methods of complex analysis. Progress Stark's principal conjecture has been proven in a few special cases, such as when the character defining the L-function takes on only rational values. Except when the base field is the field of rational numbers or an imaginary quadratic field, which were covered in the work of Stark, the abelian Stark conjectures are still unproved for number fields. More progress has been made for function fields of an algebraic variety. Manin related Stark's conjectures to the noncommutative geometry of Alain Connes. This provides a conceptual framework for studying the conjectures, although at the moment it is unclear whether Manin's techniques will yield the actual proof. Variations In 1980, Benedict Gross formulated the Gross–Stark conjecture, a p-adic analogue of the Stark conjectures relating derivatives of Deligne–Ribet p-adic L-functions (for totally even characters of totally real number fields) to p-units. This was proved conditionally by Henri Darmon, Samit Dasgupta, and Robert Pollack in 2011. The proof was completed and made unconditional by Dasgupta, Mahesh Kakde, and Kevin Ventullo in 2018. A further refinement of the p-adic conjecture was proposed by Gross in 1988. In 1984, John Tate formulated the Brumer–Stark conjecture, which gives a refinement of the abelian rank-one Stark conjecture at totally split finite primes (for totally complex extensions of totally real base fields). 
The function field analogue of the Brumer–Stark conjecture was proved by John Tate and Pierre Deligne in 1984. In 2023, Dasgupta and Kakde proved the Brumer–Stark conjecture away from the prime 2. In 1996, Karl Rubin proposed an integral refinement of the Stark conjecture in the abelian case. In 1999, Cristian Dumitru Popescu proposed a function field analogue of Rubin's conjecture and proved it in some cases. Notes References External links Conjectures Unsolved problems in number theory Field (mathematics) Algebraic number theory Zeta and L-functions
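For orientation, the analytic class number formula that these conjectures generalize can be written as the behaviour of the Dedekind zeta function at s = 0. The notation below is an assumption of this sketch (standard conventions), not something taken from the text above.

% Analytic class number formula at s = 0, in conventional notation:
% h_K = class number, R_K = regulator, w_K = number of roots of unity in K,
% r_1, r_2 = numbers of real and complex places of K.
\[
  \zeta_K(s) \sim -\frac{h_K R_K}{w_K}\, s^{\,r_1 + r_2 - 1}
  \qquad \text{as } s \to 0 .
\]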
Stark conjectures
[ "Mathematics" ]
861
[ "Unsolved problems in mathematics", "Unsolved problems in number theory", "Conjectures", "Algebraic number theory", "Mathematical problems", "Number theory" ]
5,685,724
https://en.wikipedia.org/wiki/Journal%20of%20Management%20Information%20Systems
The Journal of Management Information Systems (JMIS) is a top-tier peer-reviewed academic journal that publishes impactful research articles making significant novel contributions in the areas of information systems and information technology. The journal was established in 1984, and its current editor-in-chief is Vladimir Zwass. JMIS is published by Taylor & Francis in print and online. The mission of JMIS is to present an integrated view of the field of Information Systems (IS) through significant novel contributions by the best thinkers. The IS discipline aims to understand how systems can be organized, developed, and deployed effectively to manage information and knowledge toward specified outcomes, in order to support people, organizations, marketplaces, and products. Many prominent research streams in the discipline have their origins in foundational papers published in the journal. JMIS has always reflected the belief that thematic and methodological diversity of the highest quality papers within a well-defined IS domain is the strength of the field. JMIS is ranked as one of the three top-tier Information Systems journals, along with Information Systems Research (ISR) and MIS Quarterly (MISQ), in a comprehensive scientometric study published in MISQ and confirmed by other scholarly studies. JMIS is one of the 50 leading scholarly journals on the Financial Times FT50 list. JMIS serves researchers investigating new modes of information technology deployment and the changing landscape of information policy making, as well as practitioners and executives managing the information resource. Along with the pursuit of knowledge, the quarterly aims to serve societal goals, and to bridge the gap between theory and practice of information systems. The journal accepts for double-blind review full-scale research submissions that make a significant contribution to the field of information systems. 
Such contributions may include: Impactful and methodologically sound empirical and theoretical work leading to the progress of the IS knowledge field Paradigmatic and generalizable designs and applications Analyses of informational policy making in an organizational, national, or international setting Investigations of societal and economic issues of organizational computing, in particular aiming at improvements in health, sustainability, and equity Analytical attention is focused on the following key issues: Information systems for competitive positioning Business processes and management enabled by information technology Business value of information technology Resilience and security of information-technology infrastructures Entrepreneurial deployment of information technology Management of information resources Relationship between information technology and organizational performance and structures Enterprise-wide systems architectures and infrastructures Electronic business, net-enabled organizations, and platforms The organization and impacts of big data and data analytics Artificial intelligence with machine learning in organizational information systems Social media, social commerce, and social networks in the organizational perspective Systems sourcing, development, and stewardship in organizations Informational support of collaborative work and co-creation Knowledge management, organizational learning, and organizational memory The human element in organizational computing Submissions are refereed in a double-blind process by internationally recognized expert referees and by Associate Editors who serve on the distinguished Editorial Board of JMIS. JMIS reviews were ranked #1 in 2020 for quality and timeliness by the IS scholarly community. Criticism The journal's fairness and transparency in handling manuscripts have been criticized by some scholars. Comments on the academic journal review website Scirev state that the journal has rejected submissions without providing any explanation, simply stating that they were "not suitable". Other comments mention that, in the peer review feedback they received, some authors were told that the journal no longer publishes articles using the Partial Least Squares (PLS) method for data analysis. However, the journal's website has never addressed this issue, and the PLS method is used in many other top-tier journals. Even if there are concerns about the method, further explanation and reasoning should be provided instead of simply rejecting articles for that reason. See also MIS Quarterly Information Systems Research Information Systems Journal Journal of Information Technology References External links Journal page at publisher's website Business and management journals Information systems journals Quarterly journals English-language journals Taylor & Francis academic journals
Journal of Management Information Systems
[ "Technology" ]
815
[ "Information systems journals", "Information systems" ]
5,685,862
https://en.wikipedia.org/wiki/Journal%20of%20the%20Association%20for%20Information%20Systems
The Journal of the Association for Information Systems (JAIS) is a top-tier peer-reviewed scientific journal that covers research in the areas of information systems and technology. It is an official journal of the Association for Information Systems and published electronically. The journal was established in 2000 and is abstracted and indexed in Science Citation Index Expanded, Social Sciences Citation Index, and Current Contents/Social & Behavioral Sciences. According to the Journal Citation Reports, the journal has a 2018 impact factor of 3.103. Editors-in-chief The following persons have been editors-in-chief of the journal: Phillip Ein-Dor, Tel Aviv University (2000-2002) Sirkka Jarvenpaa, University of Texas at Austin (2002-2005) Kalle Lyytinen, Case Western Reserve University (2005-2010) Shirley Gregor, Australian National University (2010-2013) Suprateek Sarker, University of Virginia (2013–2019) Dorothy E. Leidner, University of Virginia (2019–present) References External links Association for Information Systems academic journals Academic journals established in 2000 Information systems journals English-language journals Monthly journals
Journal of the Association for Information Systems
[ "Technology" ]
231
[ "Information systems journals", "Information systems" ]
5,686,025
https://en.wikipedia.org/wiki/Breast%20ironing
Breast ironing, also known as breast flattening, is the pounding and massaging of a pubescent girl's breasts, using hard or heated objects, to try to make them stop developing or disappear. The practice is typically performed by a close female figure to the victim, traditionally fulfilled by a mother, grandmother, aunt, or female guardian who will say she is trying to protect the girl from sexual harassment and rape, to prevent early pregnancy that would tarnish the family name, to prevent the spread of sexually transmitted infections such as HIV/AIDS, or to allow the girl to pursue education rather than be forced into early marriage. It is mostly practiced in parts of Cameroon, where boys and men may think that girls whose breasts have begun to grow are ready for sex. Evidence suggests that it has spread to the Cameroonian diaspora, for example to Britain, where the law defines it as child abuse. The most widely used implement for breast ironing is a wooden pestle normally used for pounding tubers. Other tools used include leaves, bananas, coconut shells, grinding stones, ladles, spatulas, and hammers heated over coals. The ironing practice is generally performed around dusk or dawn in a private area such as the household kitchen to prevent others from seeing the victim or becoming aware of the process, particularly fathers or other male figures. The massaging process could occur anywhere between one week to several months, depending on the victim's refusal and the resistance of the breasts; in cases where the breasts appear to be consistently protruding, the ironing practice may occur more than once a day for weeks or months at a time. History Breast ironing may be derived from the ancient practice of breast massage. Breast massage aims to help even out different breast sizes and reduce the pain of nursing mothers by massaging the breast with warm objects, see Treatment for mastitis. Incidence The breast ironing practice has been documented in Nigeria, Togo, Republic of Guinea, Côte d'Ivoire, Kenya, and Zimbabwe. Additionally it has been found in other African countries, including Burkina Faso, Central African Republic (CAR), Benin, and Guinea-Conakry. Breast "sweeping" has been reported in South Africa. The practice has become commonly associated with Cameroon as a result of media attention and local levels of activism from human rights groups. All of Cameroon's 200 ethnic groups engage in breast ironing, with no known relation to religion, socio-economic status, or any other identifier. A 2006 survey by the German development agency GIZ of more than 5,000 Cameroonian girls and women between the ages of 10 and 82 estimated that nearly one in four had undergone breast ironing, corresponding to four million girls. The survey also reported that it is most commonly practiced in urban areas, where mothers fear their daughters could be more exposed to sexual abuse. Incidence is 53 percent in the Cameroon's southeastern region of Littoral. Compared with Cameroon's Christian and animist south, breast ironing is less common in the Muslim north, where only 10 percent of women are affected. Some hypothesize that this is related to the practice of early marriage, which is more common in the north, making early sexual development irrelevant or even preferable. Research suggests that 16% of girls, particularly in the far North regions where child marriages are highly common, try to flatten their own breasts in an attempt to delay early sexual maturity and early marriage. 
A 2007 journal suggested that social norms in Cameroon result in women lacking bodily autonomy, as Cameroonian women are not socialized to negotiate safer sex practices, while Cameroonian men are encouraged to engage in polygyny and to take concubines. This lack of bodily autonomy contributes to an increased incidence of breast ironing, sexual coercion, and the normalization of early marriage practices. In an interview, one human rights activist stated that parents who resist under-aged marriages "usually point to the fact that the girlʼs breasts have not grown meaning that she is not yet ready for sexual intercourse. For parents who practice child marriage, by ironing the breasts of the prospective bride, they can continue receiving goods and services from their in-laws." A 2008 report suggested that the rise in the incidence of breast ironing is due to the earlier onset of puberty, caused by dietary improvements in Cameroon over the previous 50 years. Half of Cameroonian girls who develop under the age of nine have their breasts ironed, and 38% of those who develop before eleven. Additionally, since 1976, the percentage of women married by the age of 19 has decreased from nearly 50% to 20%, leading to an increasingly long gap between childhood and marriage. The later age of marriage may be due to changed social norms that allow girls and women to attend school through university and to hold jobs in the formal sector; previously, girls entered married life young, wed to an older man without informed consent. Women who delay marriage in pursuit of education and career are more likely to be financially independent later in life, whereas girls who become pregnant are often forced to drop out of school and forgo formal employment. One of the only full-length reports on breast ironing dates from 2011, when a Cameroonian NGO sponsored by GIZ called it "a harmful traditional practice that has been silenced for too long". There are fears that the practice has spread to the Cameroonian diaspora, for example to Britain. A charity, CAME Women and Girls Development Organisation, is working with London's Metropolitan Police Service and social services departments to raise awareness of breast ironing. Health consequences Breast ironing is extremely painful and can cause tissue damage. , there have been no medical studies on its effects. However, medical experts warn that it might contribute toward breast cancer, cysts and depression, and perhaps interfere with breastfeeding later. In addition to this, breast ironing puts girls at risk of abscesses, cysts, infections, and permanent tissue damage, resulting in breast pimples, imbalance in breast size, and milk infection from scarring. In extreme cases of damage, there are currently ten cases of diagnosed breast cancer reported from women who identified as victims of breast ironing. Other possible side effects reported by GIZ include malformed breasts and the eradication of one or both breasts. The practice ranges dramatically in its severity, from using heated leaves to press and massage the breasts, to using a scalding grinding stone to crush the budding gland. Due to this variation, health consequences vary from benign to acute. The Child Rights Information Network (CRIN) reports the delay of breast milk development after giving birth, endangering the life of newborns. Breast ironing can cause women to fear sexual activity. Men have said that breast loss detracts from women's sexual experiences, although this has not been corroborated by women. 
Many women also suffer mental trauma after undergoing breast ironing. Victims feel as if it is punishment and often internalise blame, and fear breastfeeding in the future. Opposition As well as being dangerous, breast ironing is criticised as being ineffective for stopping early sex and pregnancy. GIZ (then called "GTZ") and the Network of Aunties (RENATA), a Cameroonian non-governmental organization that supports young mothers, campaign against breast ironing, and are supported by the Ministry for the Promotion of Women and the Family. Some have also advocated a law against the practice; however, no such law has been passed. Some consider the practice to be an emerging human rights issue, recognized as an act of gender-based violence as breast ironing affects women and girls regardless of race, class, religion, socioeconomic background, or age. In regards to recent opposition, in 2000, the United Nations (UN) identified breast ironing as one of five intersecting forms of discrimination and overlooked crimes against women. According to one Cameroonian lawyer, if a medical doctor determines that damage has been caused to the breasts, the perpetrator can be punished by up to three years in prison, provided the matter is reported within a few months. However, it is unclear if such a law exists as there are no recorded instances of legal enforcement. The GIZ survey found that in 2006, 39 percent of Cameroonian women opposed breast ironing, with 41 percent expressing support and 26 percent indifferent. Reuters reported in 2014 that nationwide campaigning against the practice had helped reduce the rate of breast ironing by 50 percent in the country. See also Breast reduction Breast binding Female genital mutilation Mastectomy Amazons Thelarche, the stage of pubertal development at which breast buds appear Precocious puberty References External links Breast ironing in the UK – BBC, 2019 Plastic Dream – photographic work and writing of testimonies by Gildas Paré Abuse Body modification Breast Culture of Cameroon Children's rights Violence against women in Cameroon Women's rights in Cameroon Child abuse in Africa Violence against children in Africa Children's rights in Africa Gender-related violence Child sexual abuse Sexual violence in Africa
Breast ironing
[ "Biology" ]
1,842
[ "Abuse", "Behavior", "Aggression", "Human behavior" ]
5,686,380
https://en.wikipedia.org/wiki/Integral%20windup
Integral windup, also known as integrator windup or reset windup, refers to the situation in a PID controller where a large change in setpoint occurs (say a positive change) and the integral term accumulates a significant error during the rise (windup), thus overshooting and continuing to increase as this accumulated error is unwound (offset by errors in the other direction). Solutions This problem can be addressed by: Initializing the controller integral to a desired value, for instance to the value before the problem Increasing the setpoint in a suitable ramp Conditional integration: disabling the integral function until the to-be-controlled process variable (PV) has entered the controllable region Preventing the integral term from accumulating above or below pre-determined bounds Back-calculating the integral term to constrain the process output within feasible bounds Clegg integrator: zeroing the integral value every time the error is equal to, or crosses, zero. This avoids having the controller attempt to drive the system to have the same error integral in the opposite direction as was caused by a perturbation, but it induces oscillation if a non-zero control value is required to maintain the process at the setpoint. Occurrence Integral windup particularly occurs as a limitation of physical systems, compared with ideal systems, due to the ideal output being physically impossible (process saturation: the output of the process being limited at the top or bottom of its scale, making the error constant). For example, the position of a valve cannot be any more open than fully open and also cannot be closed any more than fully closed. In this case, anti-windup can actually involve the integrator being turned off for periods of time until the response falls back into an acceptable range. This usually occurs when the controller's output can no longer affect the controlled variable, or if the controller is part of a selection scheme and it is selected right. Integral windup was more of a problem in analog controllers. Within modern distributed control systems and programmable logic controllers, it is much easier to prevent integral windup by limiting the controller output, by limiting the integral to produce feasible output, or by using external reset feedback, which is a means of feeding back the selected output to the integral circuit of all controllers in the selection scheme so that a closed loop is maintained. References Control engineering Classical control theory
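One of the solutions listed above, clamping the integral term between pre-determined bounds, can be sketched in a few lines of code. The gains, limits, and sample values below are illustrative assumptions, not a tuned controller for any particular process.

# Minimal sketch of a PID controller with anti-windup by integral clamping.
# All gains and bounds are arbitrary example values.
class ClampedPID:
    def __init__(self, kp, ki, kd, integral_min, integral_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral_min, self.integral_max = integral_min, integral_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        # Anti-windup: keep the accumulated error within feasible bounds
        self.integral = max(self.integral_min, min(self.integral_max, self.integral))
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = ClampedPID(kp=2.0, ki=0.5, kd=0.1, integral_min=-10.0, integral_max=10.0)
print(pid.update(setpoint=100.0, measurement=20.0, dt=0.01))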
Integral windup
[ "Engineering" ]
487
[ "Control engineering" ]
5,687,457
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%205
Mothers against decapentaplegic homolog 5 also known as SMAD5 is a protein that in humans is encoded by the SMAD5 gene. SMAD5, as its name describes, is a homolog of the Drosophila gene: "Mothers against decapentaplegic", based on a tradition of such unusual naming within the gene research community. It belongs to the SMAD family of proteins, which belong to the TGFβ superfamily of modulators. Like many other TGFβ family members SMAD5 is involved in cell signalling and modulates signals of bone morphogenetic proteins (BMP's). The binding of ligands causes the oligomerization and phosphorylation of the SMAD5 protein. SMAD5 is a receptor regulated SMAD (R-SMAD) and is activated by bone morphogenetic protein type 1 receptor kinase. It may play a role in the pathway where TGFβ is an inhibitor of hematopoietic progenitor cells. References Developmental genes and proteins MH1 domain MH2 domain R-SMAD Transcription factors Human proteins
Mothers against decapentaplegic homolog 5
[ "Chemistry", "Biology" ]
235
[ "Gene expression", "Molecular and cellular biology stubs", "Signal transduction", "Biochemistry stubs", "Induced stem cells", "Developmental genes and proteins", "Transcription factors" ]
5,687,611
https://en.wikipedia.org/wiki/Brian%20Sutton-Smith
Brian Sutton Smith (July 15, 1924 – March 7, 2015), better known as Brian Sutton-Smith, was a play theorist who spent his lifetime attempting to discover the cultural significance of play in human life, arguing that any useful definition of play must apply to both adults and children. He demonstrated that children are not innocent in their play and that adults are indeed guilty in theirs. In both cases play pretends to assist them in surmounting their Darwinian struggles for survival. His book Play As Emotional Survival is a response to his own deconstruction of play theories in his work, The Ambiguity of Play (1997, Harvard University Press). Sutton-Smith's interdisciplinary approach included research into play history and cross cultural studies of play, as well as research in psychology, education, and folklore. He maintained that the interpretation of play must involve all of its forms, from child's play to gambling, sports, festivals, imagination, and nonsense. Biography Brian Sutton-Smith was born in Wellington, New Zealand in 1924. He trained as a teacher, completed a BA and MA, and was then awarded the first education PhD in New Zealand in 1954. Following the completion of his PhD, Sutton-Smith travelled to the USA on grant from the Fulbright Program, where he began an academic career with a focus on children's games, adult games, children's play, children's drama, films and narratives, as well as children's gender issues and sibling position. Sutton-Smith was the author of some 50 books, the most recent of which is The Ambiguity of Play, and some 350 scholarly articles. He served as president of The Anthropological Association for the Study of Play and of The American Psychological Association, Division g10 (Psychology and the Arts). As a founder of the Children's Folklore Society he received a Lifetime Achievement Award from the American Folklore Society. For his research in toys he received awards from the BRIO and Lego toy companies of Sweden and Denmark. He participated in making television programs on toys and play in Great Britain, Canada, and the U.S., and was a consultant for Captain Kangaroo, Nickelodeon, Murdoch Children's Television, and the Please Touch Museum in Philadelphia. His academic life consisted of 10 years at Bowling Green State University, Ohio, 10 years at Teachers College, Columbia University in New York, and 17 years at the University of Pennsylvania. He then retired to Sarasota, Florida. He died of Alzheimer's disease on March 7, 2015 in White River Junction, Vermont. Sutton-Smith had recently been engaged as resident scholar at The Strong in Rochester, New York, home to the Brian Sutton-Smith Library and Archives of Play. In addition, the New Zealand Association for Research in Education has created the Sutton-Smith Doctoral Award, which will be awarded annually for an excellent Doctoral thesis by an NZARE member. The Ambiguity of Play In The Ambiguity of Play, Sutton-Smith details seven "rhetorics" of play, or ideologies that have been used to explain, justify, and privilege certain forms of play. These seven rhetorics are progress, fate, power, (community) identity, imaginary, self, and frivolity. Three of these—fate, power, and identity—Sutton-Smith identifies as ancient but still active and associates with a more collective focus. Another three are more recent, associated with a modern focus on the individual: progress, imaginary, and self. 
Sutton-Smith argues that the seventh rhetoric, frivolity, serves as a responsive rhetoric, in the sense that nonhegemonic forms of play are often deemed frivolous. In the conclusion, Sutton-Smith notes that variation is one of play's key features, with important resemblance to biological variation. While acknowledging that he is advancing a version of the progress narrative of play, Sutton-Smith posits that play may serve an important role in evolutionary adaptation. Key works The Sibling (1970) The Study of Games (1971) Child's Play (1971) The Folkgames of Children (1972) How to Play with Your Children (1974) co-author Shirley Sutton-Smith Play and Learning (1979) The Folkstories of Children (1981) A History of Children's Play (1981) Toys as Culture (1986) Play and Intervention (1994) Children's Folklore Source Book (1995) The Ambiguity of Play (1997) Works of fiction Sutton-Smith is also the author of a series of novels about boys growing up in New Zealand in the 1930s, entitled Our Street, Smitty Does a Bunk, and The Cobbers. Initially published in serial form in 1949 in the New Zealand School Journal, the stories created a national furor as Brian Sutton-Smith allegedly endorsed morally unacceptable behavior in them. See also Strong National Museum of Play in Rochester, New York References External links Harvard University Press 1924 births 2015 deaths New Zealand emigrants to the United States New Zealand educators People from Wellington City Bowling Green State University faculty Columbia University faculty University of Pennsylvania faculty Play (activity) University of New Zealand alumni
Brian Sutton-Smith
[ "Biology" ]
1,028
[ "Play (activity)", "Behavior", "Human behavior" ]
5,687,865
https://en.wikipedia.org/wiki/Holevo%27s%20theorem
Holevo's theorem is an important limitative theorem in quantum computing, an interdisciplinary field of physics and computer science. It is sometimes called Holevo's bound, since it establishes an upper bound to the amount of information that can be known about a quantum state (accessible information). It was published by Alexander Holevo in 1973. Statement of the theorem Suppose Alice wants to send a classical message to Bob by encoding it into a quantum state, and suppose she can prepare a state from some fixed set $\{\rho_1, \ldots, \rho_n\}$, with the $i$-th state prepared with probability $p_i$. Let $X$ be the classical register containing the choice of state made by Alice. Bob's objective is to recover the value of $X$ from measurement results on the state he received. Let $Y$ be the classical register containing Bob's measurement outcome. Note that $Y$ is therefore a random variable whose probability distribution depends on Bob's choice of measurement. Holevo's theorem bounds the amount of correlation between the classical registers $X$ and $Y$, regardless of Bob's measurement choice, in terms of the Holevo information. This is useful in practice because the Holevo information does not depend on the measurement choice, and therefore its computation does not require performing an optimization over the possible measurements. More precisely, define the accessible information between $X$ and $Y$ as the (classical) mutual information between the two registers maximized over all possible choices of measurements on Bob's side: $I_{\mathrm{acc}}(X:Y) = \max_{\{\Pi_y\}} I(X:Y)$, where $I(X:Y)$ is the (classical) mutual information of the joint probability distribution given by $p_{xy} = p_x \operatorname{Tr}(\Pi_y \rho_x)$. There is currently no known formula to analytically solve the optimization in the definition of accessible information in the general case. Nonetheless, we always have the upper bound: $I_{\mathrm{acc}}(X:Y) \le \chi(\eta) \equiv S\big(\sum_x p_x \rho_x\big) - \sum_x p_x S(\rho_x)$, where $\eta = \{(p_x, \rho_x)\}_x$ is the ensemble of states Alice is using to send information, and $S$ is the von Neumann entropy. This is called the Holevo information or Holevo χ quantity. Note that the Holevo information also equals the quantum mutual information of the classical-quantum state corresponding to the ensemble: $\chi(\eta) = I(X:Q)$ evaluated on $\rho^{XQ} = \sum_x p_x |x\rangle\langle x|_X \otimes (\rho_x)_Q$, with $I(X:Q)$ the quantum mutual information of the bipartite state $\rho^{XQ}$. It follows that Holevo's theorem can be concisely summarized as a bound on the accessible information in terms of the quantum mutual information for classical-quantum states. Proof Consider the composite system that describes the entire communication process, which involves Alice's classical input $X$, the quantum system $Q$, and Bob's classical output $Y$. The classical input $X$ can be written as a classical register $\rho^X = \sum_x p_x |x\rangle\langle x|$ with respect to some orthonormal basis $\{|x\rangle\}$. By writing $X$ in this manner, the von Neumann entropy of the state $\rho^X$ corresponds to the Shannon entropy $H(X)$ of the probability distribution $\{p_x\}$: $S(\rho^X) = -\operatorname{Tr}(\rho^X \log \rho^X) = -\sum_x p_x \log p_x = H(X)$. The initial state of the system, where Alice prepares the state $\rho_x$ with probability $p_x$, is described by $\rho^{XQ} = \sum_x p_x |x\rangle\langle x| \otimes \rho_x$. Afterwards, Alice sends the quantum state to Bob. As Bob only has access to the quantum system $Q$ but not the input $X$, he receives a mixed state of the form $\rho = \operatorname{Tr}_X(\rho^{XQ}) = \sum_x p_x \rho_x$. Bob measures this state with respect to the POVM elements $\{E_y\}$, and the probabilities of measuring the outcomes $y$ form the classical output $Y$. This measurement process can be described as a quantum instrument $\mathcal{E}^Q(\rho_x) = \sum_y q_{y|x}\, \rho_{y|x} \otimes |y\rangle\langle y|$, where $q_{y|x} = \operatorname{Tr}(E_y \rho_x)$ is the probability of outcome $y$ given the state $\rho_x$, while $\rho_{y|x} = W \sqrt{E_y}\, \rho_x \sqrt{E_y}\, W^\dagger / q_{y|x}$ for some unitary $W$ is the normalised post-measurement state. Then, the state of the entire system after the measurement process is $\rho^{X Q' Y} = \big[\mathcal{I}^X \otimes \mathcal{E}^Q\big](\rho^{XQ}) = \sum_{x,y} p_x\, q_{y|x}\, |x\rangle\langle x| \otimes \rho_{y|x} \otimes |y\rangle\langle y|$. Here $\mathcal{I}^X$ is the identity channel on the system $X$. Since $\mathcal{E}^Q$ is a quantum channel, and the quantum mutual information is monotonic under completely positive trace-preserving maps, $S(X:Q'Y) \le S(X:Q)$. 
Additionally, as the partial trace over $Q'$ is also completely positive and trace-preserving, $S(X:Y) \le S(X:Q'Y)$. These two inequalities give $S(X:Y) \le S(X:Q)$. On the left-hand side, the quantities of interest depend only on $\rho^{XY} = \operatorname{Tr}_{Q'}(\rho^{XQ'Y}) = \sum_{x,y} p_x\, q_{y|x}\, |x\rangle\langle x| \otimes |y\rangle\langle y|$, with joint probabilities $p_{x,y} = p_x q_{y|x}$. Clearly, $\rho^{XY}$ and $\rho^Y = \operatorname{Tr}_X(\rho^{XY})$, which are in the same form as $\rho^X$, describe classical registers. Hence, $S(X:Y) = I(X:Y)$. Meanwhile, $S(X:Q)$ depends on the term $\log \rho^{XQ} = \sum_x |x\rangle\langle x| \otimes \log(p_x \rho_x) = \sum_x (\log p_x)\, |x\rangle\langle x| \otimes I^Q + \sum_x |x\rangle\langle x| \otimes \log \rho_x$, where $I^Q$ is the identity operator on the quantum system $Q$. Then, the right-hand side is $S(X:Q) = S(\rho^X) + S(\rho) - S(\rho^{XQ}) = S(\rho^X) + S(\rho) + \operatorname{Tr}(\rho^{XQ} \log \rho^{XQ}) = S(\rho^X) + S(\rho) + \sum_x p_x \log p_x - \sum_x p_x S(\rho_x) = S(\rho) - \sum_x p_x S(\rho_x) = \chi(\eta)$, which completes the proof. Comments and remarks In essence, the Holevo bound proves that given n qubits, although they can "carry" a larger amount of (classical) information (thanks to quantum superposition), the amount of classical information that can be retrieved, i.e. accessed, can be only up to n classical (non-quantum encoded) bits. It was also established, both theoretically and experimentally, that there are computations where quantum bits carry more information through the process of the computation than is possible classically. See also Superdense coding References Further reading (see page 531, subsection 12.1.1 - equation (12.6) ) . See in particular Section 11.6 and following. Holevo's theorem is presented as exercise 11.9.1 on page 288. Quantum mechanical entropy Quantum information theory Limits of computation
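As an illustration of the bound, here is a minimal Python sketch (NumPy only) that evaluates the Holevo χ quantity for a small qubit ensemble; the ensemble, probabilities and variable names are made-up illustrative choices, not taken from the article or from Holevo's paper.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), in bits."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]        # treat 0 * log(0) as 0
    return float(-np.sum(eigvals * np.log2(eigvals)))

def holevo_chi(probs, states):
    """Holevo information chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i)."""
    rho_avg = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(rho_avg) - sum(
        p * von_neumann_entropy(rho) for p, rho in zip(probs, states)
    )

# Illustrative ensemble: Alice sends |0> or |+> with equal probability.
ket0 = np.array([[1.0], [0.0]])
ketp = np.array([[1.0], [1.0]]) / np.sqrt(2)
states = [ket0 @ ket0.conj().T, ketp @ ketp.conj().T]   # density matrices
probs = [0.5, 0.5]

print(f"Holevo bound: {holevo_chi(probs, states):.4f} bits")  # about 0.60 bits
```

Whatever measurement Bob performs on this ensemble, the mutual information between his outcome and Alice's choice cannot exceed roughly 0.60 bits, consistent with the remark that n qubits can yield at most n retrievable classical bits.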
Holevo's theorem
[ "Physics" ]
922
[ "Physical phenomena", "Physical quantities", "Entropy", "Quantum mechanical entropy", "Limits of computation" ]
5,688,280
https://en.wikipedia.org/wiki/Sugar%20phosphates
Sugar phosphates (sugars that have added or substituted phosphate groups) are often used in biological systems to store or transfer energy. They also form the backbone for DNA and RNA. Sugar-phosphate backbone geometry is altered in the vicinity of modified nucleotides. Examples include: Dihydroxyacetone phosphate Glucose-6-phosphate Phytic acid Teichoic acid Electronic structure of the sugar-phosphate backbone The sugar-phosphate backbone has a complex electronic structure and the electron delocalisation complicates its theoretical description. Some part of the electronic density is delocalised over the whole backbone and the extent of the delocalisation is affected by backbone conformation due to hyper-conjugation effects. Hyper-conjugation arises from donor-acceptor interactions of localised orbitals in 1,3 positions. Phosphodiesters in DNA and RNA The phosphodiester backbone of DNA and RNA consists of pairs of deoxyribose or ribose sugars linked by phosphates at the respective 3' and 5' positions. The backbone is negatively charged and hydrophilic, which allows strong interactions with water. The sugar-phosphate backbone forms the structural framework of nucleic acids, including DNA and RNA. Sugar phosphates are defined as carbohydrates to which a phosphate group is bound by an ester or an ether linkage, depending on whether it involves an alcoholic or a hemiacetalic hydroxyl, respectively. Knowledge of physical and chemical properties such as solubility, acid hydrolysis rates, acid strengths, and the ability to act as sugar-group donors is required for the analysis of both types of sugar phosphates. The photosynthetic carbon reduction cycle is closely associated with sugar phosphates, and sugar phosphates are among the key molecules in metabolism: they take part in the oxidative pentose phosphate pathway and in gluconeogenesis, and are important intermediates in glycolysis. Sugar phosphates are involved not only in metabolic regulation and signaling but also in the synthesis of other phosphate compounds. Peptide nucleic acids Peptide nucleic acid (PNA) is a nucleic acid analogue in which the natural sugar-phosphate backbone has been replaced by a synthetic peptide backbone formed from N-(2-amino-ethyl)-glycine units, giving an achiral and uncharged moiety that mimics RNA or DNA oligonucleotides. PNA is not readily degraded inside living cells, as it is chemically stable and resistant to hydrolytic (enzymatic) cleavage. Role in metabolism Sugar phosphates are major players in metabolism due to their task of storing and transferring energy. Not only ribose 5-phosphate but also fructose 6-phosphate is an intermediate of the pentose-phosphate pathway, which generates nicotinamide adenine dinucleotide phosphate (NADPH) and pentoses from glucose polymers and their degradation products. In glycolysis, the same carbohydrates are degraded into pyruvate, thus providing energy. The reactions of these pathways are catalysed by enzymes. Some of these enzymes contain metal centers in their active site, which are important both for the enzyme and for the catalysed reaction. 
The phosphate group can coordinate to a metal center, as in, for example, fructose-1,6-bisphosphatase and ADP-ribose pyrophosphatase. Phosphoglycerate and several sugar phosphates that are known intermediates of the Calvin photosynthetic carbon cycle stimulate light-dependent carbon dioxide fixation by isolated chloroplasts. This ability is shared by certain other metabolites (e.g. glucose 1-phosphate) from which the accepted Calvin-cycle intermediates could easily be derived by known metabolic routes. References External links Glycomics Phosphates Nucleotides DNA RNA
Sugar phosphates
[ "Chemistry" ]
868
[ "Glycomics", "Glycobiology", "Phosphates", "Salts" ]
5,688,324
https://en.wikipedia.org/wiki/Jos%C3%A9%20Enrique%20Moyal
José Enrique Moyal (1 October 1910 – 22 May 1998) was an Australian mathematician and mathematical physicist who contributed to aeronautical engineering, electrical engineering and statistics, among other fields. Career Moyal helped establish the phase space formulation of quantum mechanics in 1949 by bringing together the ideas of Hermann Weyl, John von Neumann, Eugene Wigner, and Hip Groenewold. This formulation is statistical in nature and makes logical connections between quantum mechanics and classical statistical mechanics, enabling a natural comparison between the two formulations. Phase space quantization, also known as Moyal quantization, largely avoids the use of operators for quantum mechanical observables prevalent in the canonical formulation. Quantum-mechanical evolution in phase space is specified by a Moyal bracket. Moyal grew up in Tel Aviv, and attended the Herzliya Hebrew Gymnasium. He studied in Paris in the 1930s, at the École Supérieure d'Electricité, Institut de Statistique, and, finally, at the Institut Henri Poincaré. His work was carried out in wartime England in the 1940s, while employed at the de Havilland Aircraft company. Moyal was a professor of mathematics at the former School of Mathematics and Physics of Macquarie University, where he was a colleague of John Clive Ward. Previously, he had worked at the Argonne National Laboratory in Illinois. He published pioneering work on stochastic processes. Personal life Moyal was married to Susanna Pollack (1912-2000), with whom he had two children, Orah Young (born in Tel Aviv) and David Moyal (born in Belfast). They divorced in 1956. He was married to Ann Moyal from 1962 until his death. Works J.E. Moyal, "Stochastic Processes and Statistical Physics", Journal of the Royal Statistical Society B, 11 (1949), 150–210. See also Moyal bracket Wigner–Weyl transform Wigner quasiprobability distribution References External links Maverick Mathematician: The Life and Science of J.E. Moyal Obituary by Alan McIntosh and photographs Moyal Medal awarded annually by Macquarie University for research contributions to mathematics, physics or statistics 1910 births 1998 deaths 20th-century Australian mathematicians Australian physicists Australian statisticians Israeli emigrants to Australia Herzliya Hebrew Gymnasium alumni Jewish physicists Academic staff of Macquarie University Mathematical physicists Scientists from Jerusalem Quantum physicists Mandatory Palestine expatriates in France Mandatory Palestine expatriates in the United Kingdom University of Paris alumni
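For reference, the Moyal bracket mentioned in the Career section above is usually written as follows (a sketch of the standard textbook form in one dimension; sign and ordering conventions vary between sources and are assumed here, not taken from Moyal's original paper):

```latex
\{f, g\}_{M}
  = \frac{1}{i\hbar}\left(f \star g - g \star f\right)
  = \frac{2}{\hbar}\, f(x,p)\,
      \sin\!\left(\frac{\hbar}{2}\left(
        \overleftarrow{\partial}_x \overrightarrow{\partial}_p
        - \overleftarrow{\partial}_p \overrightarrow{\partial}_x
      \right)\right) g(x,p),
  \qquad
  \frac{\partial W}{\partial t} = \{H, W\}_{M},
```

where ⋆ is the Moyal star product, W is the Wigner quasiprobability distribution and H the Hamiltonian; as ħ tends to zero the bracket reduces to the classical Poisson bracket, which is the sense in which the formulation connects quantum and classical statistical mechanics.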
José Enrique Moyal
[ "Physics" ]
507
[ "Quantum physicists", "Quantum mechanics" ]
5,688,573
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%207
Mothers against decapentaplegic homolog 7 or SMAD7 is a protein that in humans is encoded by the SMAD7 gene. SMAD7 is a protein that, as its name describes, is a homolog of the Drosophila gene: "Mothers against decapentaplegic". It belongs to the SMAD family of proteins, which transduce signals for the TGFβ superfamily of ligands. Like many other TGFβ family members, SMAD7 is involved in cell signalling. It is a TGFβ type 1 receptor antagonist. It blocks TGFβ1 and activin from associating with the receptor, blocking access for SMAD2. It is an inhibitory SMAD (I-SMAD) and is enhanced by SMURF2. Smad7 enhances muscle differentiation. Structure Smad proteins contain two conserved domains. The Mad Homology domain 1 (MH1 domain) is at the N-terminal and the Mad Homology domain 2 (MH2 domain) is at the C-terminal. Between them there is a linker region which is full of regulatory sites. The MH1 domain has DNA binding activity while the MH2 domain has transcriptional activity. The linker region contains important regulatory peptide motifs including potential phosphorylation sites for mitogen-activated protein kinases (MAPKs), Erk-family MAP kinases, the Ca2+/calmodulin-dependent protein kinase II (CamKII) and protein kinase C (PKC). Smad7 does not have the MH1 domain. A proline-tyrosine (PY) motif present in its linker region enables its interaction with the WW domains of the E3 ubiquitin ligase Smad ubiquitination-related factor 2 (Smurf2). It resides predominantly in the nucleus at basal state and translocates to the cytoplasm upon TGF-β stimulation. Function SMAD7 inhibits TGF-β signaling by preventing formation of Smad2/Smad4 complexes which initiate the TGF-β signaling. It interacts with the activated TGF-β type I receptor, thereby blocking the association, phosphorylation and activation of Smad2. By occupying type I receptors for Activin and bone morphogenetic protein (BMP), it also plays a role in negative feedback of these pathways. Upon TGF-β treatment, Smad7 binds to discrete regions of Pellino-1 via distinct regions of the Smad MH2 domains. The interaction blocks the formation of the IRAK1-mediated IL-1R/TLR signaling complex, thereby abrogating NF-κB activity, which subsequently causes reduced expression of pro-inflammatory genes. While Smad7 is induced by TGF-β, it is also induced by other stimuli, such as epidermal growth factor (EGF), interferon-γ and tumor necrosis factor (TNF)-α. Therefore, it provides a cross-talk between TGF-β signaling and other cellular signaling pathways. Role in cancer A mutation located in the SMAD7 gene is a cause of susceptibility to colorectal cancer (CRC) type 3. Perturbation of Smad7 and suppression of TGF-β signaling were found to be involved in CRC. Case-control studies and meta-analyses in Asian and European populations also provided evidence that this mutation is associated with colorectal cancer risk. TGF-β is one of the important growth factors in pancreatic cancer. By controlling the TGF-β pathway, Smad7 is believed to be related to this disease. Some earlier studies showed over-expression of Smad7 in pancreatic cells, but a more recent study showed low Smad7 expression. The role of Smad7 in pancreatic cancer is still controversial. Over-expression or constitutive activation of epidermal growth factor receptor (EGFR) can promote tumor processes. EGF-induced MMP-9 expression enhances tumor invasion and metastasis in some kinds of tumor cells such as breast cancer and ovarian cancer. Smad7 exerts an inhibitory effect on the EGF signaling pathway. 
Therefore, it may play a role in prevention of cancer metastasis. Use in Pharmacology SMAD7 signaling has been studied in a recent Celgene Phase III trial, NCT ID number 94, of the drug Mongersen, which interacts with the SMAD7 pathway. The drug was studied in patients with Crohn's disease. Interactions Mothers against decapentaplegic homolog 7 has been shown to interact with: CTNNB1, EP300, TAB1, PIAS4, RNF111, SMAD3, SMAD6, SMURF2, STRAP, TGFBR1, and YAP1. References Further reading Developmental genes and proteins MH1 domain MH2 domain Transcription factors Human proteins
Mothers against decapentaplegic homolog 7
[ "Chemistry", "Biology" ]
1,047
[ "Transcription factors", "Gene expression", "Signal transduction", "Developmental genes and proteins", "Induced stem cells" ]
5,688,623
https://en.wikipedia.org/wiki/Rami%20Grossberg
Rami Grossberg is a full professor of mathematics at Carnegie Mellon University and works in model theory. Work Grossberg's work in the past few years has revolved around the classification theory of non-elementary classes. In particular, he has provided, in joint work with Monica VanDieren, a proof of an upward "Morley's Categoricity Theorem" (a version of Shelah's categoricity conjecture) for Abstract Elementary Classes with the amalgamation property, that are tame. In another work with VanDieren, they also initiated the study of Tame abstract elementary class. Tameness is both a crucial technical property in categoricity transfer proofs and an independent notion of interest in the area – it has been studied by Baldwin, Hyttinen, Lessmann, Kesälä, Kolesnikov, Kueker among others. Other results include a best approximation to the main gap conjecture for AECs (with Olivier Lessmann), identifying AECs with JEP, AP, no maximal models and tameness as the uncountable analog to Fraïssé's constructions (with VanDieren), a stability spectrum theorem and the existence of Morley sequences for those classes (also with VanDieren). In addition to this work on the Categoricity Conjecture, more recently, with Boney and Vasey, new understanding of frames in AECs and forking (in the abstract elementary class setting) has been obtained. Some of Grossberg's work may be understood as part of the big project on Saharon Shelah's outstanding categoricity conjectures: Conjecture 1. (Categoricity for $L_{\omega_1,\omega}$). Let $\psi$ be a sentence of $L_{\omega_1,\omega}$. If $\psi$ is categorical in a cardinal $\lambda \geq \beth_{\omega_1}$, then $\psi$ is categorical in all cardinals $\mu \geq \beth_{\omega_1}$. See Infinitary logic and Beth number. Conjecture 2. (Categoricity for AECs) Let K be an AEC. There exists a cardinal μ(K) such that categoricity in a cardinal greater than μ(K) implies categoricity in all cardinals greater than μ(K). Furthermore, μ(K) is the Hanf number of K. Other examples of his results in pure model theory include: generalizing the Keisler–Shelah omitting types theorem to successors of singular cardinals; with Shelah, introducing the notion of unsuper-stability for infinitary logics, and proving a nonstructure theorem, which is used to resolve a problem of Fuchs and Salce in the theory of modules; with Hart, proving a structure theorem which resolves Morley's conjecture for excellent classes; and the notion of relative saturation and its connection to Shelah's categoricity conjecture for $L_{\omega_1,\omega}$. Examples of his results in applications to algebra include the finding that under the weak continuum hypothesis there is no universal object in the class of uncountable locally finite groups (answering a question of Macintyre and Shelah); with Shelah, showing that there is a jump in cardinality of the abelian group $\operatorname{Ext}_p(G, \mathbb{Z})$ at the first singular strong limit cardinal. Personal life In 1986, Grossberg attained his doctorate from the Hebrew University of Jerusalem. He later married his former doctoral student and frequent collaborator, Monica VanDieren. References External links Rami Grossberg A survey of recent work on AECs Year of birth missing (living people) Living people Israeli mathematicians 20th-century American mathematicians 21st-century American mathematicians Carnegie Mellon University faculty Model theorists
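As a sketch, Conjecture 2 above can be written symbolically as follows, where $I(K,\lambda)$ denotes the number of models in $K$ of cardinality $\lambda$ counted up to isomorphism (this notation is an assumption made here for illustration, not taken from the article):

```latex
\exists\, \mu(K) \;\; \forall \lambda > \mu(K):\quad
  I(K,\lambda) = 1 \;\Longrightarrow\; \forall \lambda' > \mu(K),\; I(K,\lambda') = 1,
```

with the additional claim that $\mu(K)$ can be taken to be the Hanf number of $K$.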
Rami Grossberg
[ "Mathematics" ]
721
[ "Model theorists", "Model theory" ]
5,688,857
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%209
Mothers against decapentaplegic homolog 9 also known as SMAD9, SMAD8, and MADH6 is a protein that in humans is encoded by the SMAD9 gene. SMAD9, as its name describes, is a homolog of the Drosophila gene: "Mothers against decapentaplegic". It belongs to the SMAD family of proteins, which belong to the TGFβ superfamily of modulators. Like many other TGFβ family members, SMAD9 is involved in cell signalling. When a bone morphogenetic protein binds to a receptor (BMP type 1 receptor kinase), it causes SMAD9 to interact with SMAD anchor for receptor activation (SARA). The binding of ligands causes the phosphorylation of the SMAD9 protein, its dissociation from SARA and its association with SMAD4. It is subsequently transferred to the nucleus where it forms complexes with other proteins and acts as a transcription factor. SMAD9 is a receptor regulated SMAD (R-SMAD) and is activated by bone morphogenetic protein type 1 receptor kinase. There are two isoforms of the protein. Confusingly, it is also sometimes referred to as SMAD8 in the literature. Nomenclature The SMAD proteins are homologs of both the Drosophila protein, mothers against decapentaplegic (MAD), and the C. elegans protein SMA. The name is a combination of the two. During Drosophila research, it was found that a mutation in the gene MAD in the mother repressed the gene decapentaplegic in the embryo. The phrase "Mothers against" was added since mothers often form organizations opposing various issues, e.g. Mothers Against Drunk Driving (MADD), and based on a tradition of such unusual naming within the gene research community. References Developmental genes and proteins MH1 domain MH2 domain R-SMAD Transcription factors Human proteins
Mothers against decapentaplegic homolog 9
[ "Chemistry", "Biology" ]
411
[ "Transcription factors", "Gene expression", "Signal transduction", "Developmental genes and proteins", "Induced stem cells" ]
5,689,153
https://en.wikipedia.org/wiki/R-SMAD
R-SMADs are receptor-regulated SMADs. SMADs are transcription factors that transduce extracellular TGF-β superfamily ligand signaling from cell membrane bound TGF-β receptors into the nucleus where they activate transcription of TGF-β target genes. R-SMADs are directly phosphorylated on their C-terminus by type 1 TGF-β receptors through their intracellular kinase domain, leading to R-SMAD activation. R-SMADs include SMAD2 and SMAD3 from the TGF-β/Activin/Nodal branch, and SMAD1, SMAD5 and SMAD9 from the BMP/GDF branch of TGF-β signaling. In response to signals by the TGF-β superfamily of ligands these proteins associate with receptor kinases and are phosphorylated at an SSXS motif at their extreme C-terminus. These proteins then typically bind to the common mediator SMAD or co-SMAD, SMAD4. Smad complexes then accumulate in the cell nucleus where they regulate transcription of specific target genes: SMAD2 and SMAD3 are activated in response to TGF-β/Activin or Nodal signals. SMAD1, SMAD5 and SMAD9 (also known as SMAD8) are activated in response to bone morphogenetic protein (BMP) or GDF signals. SMAD6 and SMAD7 may be referred to as I-SMADs (inhibitory SMADs), which form trimers with R-SMADs and block their ability to induce gene transcription by competing with R-SMADs for receptor binding and by marking TGF-β receptors for degradation. See also TGF beta signaling pathway References Further reading External links Developmental genes and proteins SMAD (protein)
R-SMAD
[ "Chemistry", "Biology" ]
369
[ "Biochemistry stubs", "Molecular and cellular biology stubs", "Induced stem cells", "Developmental genes and proteins" ]
5,689,932
https://en.wikipedia.org/wiki/Committed%20information%20rate
In a Frame Relay network, committed information rate (CIR) is the bandwidth for a virtual circuit that an internet service provider guarantees to provide under normal conditions. Committed data rate (CDR) is the payload portion of the CIR. At any given time, the available bandwidth should not fall below this committed figure. The bandwidth is usually expressed in kilobits per second (kbit/s). Above the CIR, an allowance of burstable bandwidth is often given, whose value can be expressed in terms of an additional rate, known as the excess information rate (EIR), or as its absolute value, the peak information rate (PIR). The provider guarantees that the connection will always support the CIR rate, and sometimes the EIR rate, provided that there is adequate bandwidth. The PIR, i.e. the CIR plus the EIR, is either equal to or less than the speed of the access port into the network. Frame Relay carriers define and package CIRs differently, and CIRs are adjusted with experience. See also Information rate Throughput Notes References Network performance Computer network analysis Temporal rates Frame Relay
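To make the CIR/EIR/PIR arithmetic concrete, below is a minimal Python sketch of a two-rate policer of the kind commonly described for Frame Relay; the Tc/Bc/Be parameter names follow common usage, and the concrete numbers are made-up illustrative values, so actual carrier equipment may behave differently.

```python
# Classify frames arriving within one committed time interval Tc.
# CIR = Bc / Tc (committed burst per interval); EIR = Be / Tc (excess burst per interval).

def classify_frames(frame_sizes_bits, cir_bps, eir_bps, tc_s=1.0):
    bc = cir_bps * tc_s   # committed burst size Bc, in bits per interval
    be = eir_bps * tc_s   # excess burst size Be, in bits per interval
    sent_bits = 0
    verdicts = []
    for size in frame_sizes_bits:
        sent_bits += size
        if sent_bits <= bc:
            verdicts.append("forward")                    # within CIR: guaranteed
        elif sent_bits <= bc + be:
            verdicts.append("forward, discard-eligible")  # within EIR: best effort
        else:
            verdicts.append("drop")                       # above PIR = CIR + EIR
    return verdicts

# Example: CIR 64 kbit/s, EIR 32 kbit/s, ten 12-kbit frames arriving in one second.
print(classify_frames([12_000] * 10, cir_bps=64_000, eir_bps=32_000))
```

In this example the first five frames fall under the CIR and are guaranteed, the next three fall within the excess allowance and are marked discard-eligible, and the remainder exceed the PIR.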
Committed information rate
[ "Physics", "Technology" ]
231
[ "Temporal quantities", "Physical quantities", "Computer network stubs", "Temporal rates", "Computing stubs" ]
5,690,096
https://en.wikipedia.org/wiki/NeXT%20Laser%20Printer
The NeXT Laser Printer [NeXT PN N2000] was a 400 DPI PostScript laser printer, sold by NeXT from late to for the NeXTstation and NeXTcube workstations and manufactured by Canon Inc. It included an adjustable paper tray, which enabled it to print on several paper sizes including A4, letter-size, and those of legal and envelope varieties. It was very similar to other printers based on the Canon SX engine, such as the Apple LaserWriter II series and HP LaserJet II/III, although those other printers only printed at 300x300 dpi. Some parts (such as the toner cartridge and input paper tray) are interchangeable with the LaserJet II/III. The printer used a proprietary high-speed serial interface, and was in essence a predecessor of the software-rendering approach, as it used the DisplayPostscript renderer in NeXTStep rather than a hardware PostScript renderer. Despite the lack of dedicated rendering hardware, it usually achieved close to its rated speed of 8 ppm, as the NeXTStation had a much faster CPU (25 or 33 MHz 68040) and greater memory capacity (up to 128 MB in Turbo models) than the rendering engines of contemporary printers. Because NeXTStep used DisplayPostscript extensively in its windowing system, the PostScript rendering path was optimized; thus, printed documents had true WYSIWYG output corresponding to the screen. NeXT also produced a color inkjet printer, the SCSI-I-connected, Tabloid-capable, 360 DPI Color Bubblejet model [NeXT PN N2004 (US) N2005 (UK)], based on the technology of the Canon BubbleJet. References External links NeXT Laser printers
NeXT Laser Printer
[ "Technology" ]
362
[ "Computing stubs", "Computer hardware stubs" ]
5,690,312
https://en.wikipedia.org/wiki/Maria%20Pia%20Bridge
Maria Pia Bridge (in Portuguese Ponte de D. Maria Pia, commonly known as Ponte de Dona Maria Pia) is a railway bridge built in 1877 and attributed to Gustave Eiffel. It is situated between the Portuguese Northern municipalities of Porto and Vila Nova de Gaia. The double-hinged, crescent arch bridge is made of wrought iron and spans the Douro River. It is part of the Linha Norte system of the national railway. At the time of its construction, it was the longest single-arch span in the world. It is no longer used for rail transport, having been replaced by Ponte de São João (or St. John's Bridge) in 1991. It is often confused with the similar D. Luís Bridge, which was built nine years later and is located to the west, although the D. Luís Bridge has two decks instead of one. History In 1875, the Royal Portuguese Railway Company announced a competition for a bridge to carry the Lisbon to Porto railway across the river Douro. This was very technically demanding: the river was fast-flowing, its depth could rise considerably during times of flooding, and the riverbed was made up of a deep layer of gravel. These factors ruled out the construction of piers in the river, meaning that the bridge would have to have a central span of 160 m (525 ft). At the time, the longest span of an arch bridge was the 158.5 m (520 ft) span of the bridge built by James B. Eads over the Mississippi at St Louis. When the project was approved, João Crisóstomo de Abreu e Sousa, member of the Junta Consultiva das Obras Públicas (Consultative Junta for Public Works), thought that the deck should have two tracks. Gustave Eiffel's design proposal, priced at 965,000 French francs, was the least expensive of the four designs considered, at around two-thirds the cost of the nearest competitor. Since the company was relatively inexperienced, a commission was appointed to report on their suitability to undertake the work. Their report was favorable, although it did emphasise the difficulty of the project. Responsibility for the actual design is difficult to attribute, but it is likely that Théophile Seyrig, Eiffel's business partner who presented a paper on the bridge to the Société des Ingénieurs Civils in 1878, was largely responsible. In his account of the bridge that accompanied the 1:50 scale model exhibited at the 1878 World's Fair, Eiffel credited Seyrig and Henry de Dion with work on the calculations and drawings. Construction started on 5 January 1876. Work on the abutments, piers, and approach decking was complete by September. Work then paused due to winter flooding, and the erection of the central arch span was not re-started until March 1877. By 28 October 1877, the platform was mounted and concluded, with the work on the bridge finishing on 30 October 1877. Tests were performed between 1 and 2 November, leading to the 4 November inauguration by King D. Louis I and Queen Maria Pia of Savoy (the eponym of the bridge). Between 1897 and 1898 there was some concern by technicians about the integrity of the bridge: its width, the interruption of principal beams, its lightweight structure, and its elastic nature. In 1890, in Ovar, the Oficina de Obras Metálicas (Metal Works Office) was created to support the work to reinforce and repair those structures. As a consequence, restrictions were placed on transit over the structure between 1900 and 1906: axle load was limited to 14 tons and speed was restricted. Alterations to the deck of the bridge were performed under the oversight of Xavier Cordeiro in 1900. 
These were followed between 1901 and 1906 by improvements to the triangular beams, which were performed by the Oficina of Ovar. Consulting with a specialist in metallic structures, the French engineer Manet Rabut, in 1907, the Oficina concluded that the arch and the works performed on the bridge were sufficient to allow circulation. This did not impede further work on the fore- and aft-structural members to make the bridge more accessible and to reinforce the main pillars. In 1916, a commission was created to study the possibility of a secondary transit between Vila Nova de Gaia and Porto. In 1928, the bridge was noted as "a real obstacle to traffic." In order to improve the structure for the beginning of CP service across the bridge with improved Series 070 locomotives on 1 November 1950, engineer João de Lemos executed several studies in 1948 to evaluate the bridge's condition: a study of the deck (including structural members) and analyses of the continuous beams and the arch's structural supports. The analysis of the stability of the bridge, handled by the Laboratório Nacional de Engenharia Civil (LNEC), resulted in the injection of cement and repair of the masonry joints and pillars that connected with metallic structures. At the same time, the repair team removed flaking paint from the structure and treated corrosion, including repainting with new metallic paint. Another analytic study in 1966 began to analyze upgrading service to electric locomotives (Bò-Bó), leading to the conclusion of the electrification of the Linha Norte. In 1969, in loco stress tests verified the analytical results. In 1990, the bridge was classified by the American Society of Civil Engineers as an International Historic Civil Engineering Landmark. In 1991, rail service over the bridge ended because the single track and speed restrictions limited transit. Rail functions transitioned to the São João Bridge (designed by engineer Edgar Cardoso). In 1998, there was a plan to rehabilitate and illuminate the bridge, resulting in the establishment of a tourist train attraction between the Museu dos Transportes and the area that included the wine cellars of Porto, a route using a formerly closed tunnel under the historic centre of Porto. In 2013, there was an effort to relocate the bridge to the city centre where it would serve as a monument. Architecture The bridge is in an urban cityscape over the Douro River, connecting the mount of Seminário in the municipality of Porto to the Serra do Pilar in the lightly populated section of the municipality of Vila Nova de Gaia. The structure consists of a deck supported by two piers on one side of the river and three on the other, with a central arch spanning 160 m (525 ft). It is supported on three pillars in Vila Nova de Gaia and by two pillars in Porto. Two shorter pillars support the arch. The five interlaced support pillars are constructed in a pyramidal format over granite masonry blocks, over six spans, three of which are on the Gaia side and three on the Porto side. Over the bridge are painted ironwork guardrails over granite masonry. Another innovation was the method of construction used for the central arch. Since it was impossible to use any falsework, the arch was built out from the abutments on either side, their weight being supported by steel cables attached to the top of the piers supporting the deck. The same method was also used to build the decking, with temporary tower structures built above deck level to support the cables. 
This technique had been previously used by Eads, but its use by Eiffel shows his adoption of the latest engineering techniques. The design uses a parabolic arch. References Notes Sources External links Bridges completed in 1877 Bridges in Porto Bridges in Vila Nova de Gaia Bridges over the Douro River Gustave Eiffel's designs Historic Civil Engineering Landmarks Listed bridges in Portugal National monuments in Porto District Railway bridges in Portugal Truss arch bridges Wrought iron bridges 1877 establishments in Portugal
Maria Pia Bridge
[ "Engineering" ]
1,567
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
5,690,654
https://en.wikipedia.org/wiki/Wheel-barrowing
Wheel-barrowing is a problem that may occur in an aeroplane with a tricycle gear configuration during takeoff or landing. As the aeroplane gains speed during takeoff, the wing generates an increasing amount of lift, although not enough to raise the aeroplane off the ground. The lift reduces the weight supported by the aeroplane's main wheels and this reduces the main wheels' contribution to directional stability, allowing the nose wheel to destabilise the aeroplane's direction along the ground. This form of wheel-barrowing is easily avoided by the pilot applying back-pressure to the elevator control during the takeoff roll to reduce the weight supported by the nose wheel. Depending on the severity of the wheel-barrowing, damage to the aircraft can be quite extensive: The propeller of a single engine airplane may strike the ground, damaging it and the engine. A wing can be damaged by striking the ground as the aircraft pivots over the nose-wheel and one main wheel. Wheel-barrowing may also occur with a tricycle gear when the turn radius is too sharp for the speed of the aircraft on the ground, much like a child on a tricycle taking too sharp a turn. The problem is exacerbated when brakes are applied during the turn. See also Ground effect (cars) References Aerodynamics Aviation risks
Wheel-barrowing
[ "Chemistry", "Engineering" ]
269
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
5,690,659
https://en.wikipedia.org/wiki/Ferrography
Ferrography is a method of oil analysis used to inspect the severity and mechanisms of wear in machinery. This is achieved by separating ferrous debris from lubricating oil by use of a magnetic field with an instrument called a ferrograph; the result is then examined with microscopy. A trained analyst can then diagnose faults or predict failures. Ferrography is related to tribology, which is the study of friction between interacting surfaces. Since the advent of ferrography in the 1970s it has been used in many industrial settings as a form of predictive maintenance. History Ferrography was pioneered in the 1970s by the late Vernon C. Westcott, sponsored by the Advanced Research Projects Agency of the United States Department of Defense. At the time, the methods used to gauge wear, spectroscopic analysis and ferromagnetic chip detectors, could only provide warning of imminent failure after the wear had already reached a severity at which preventative maintenance alone would not be an effective control to prevent catastrophic failure. The military reached out to Westcott to find a way to solve this problem, and from that Westcott developed the first ferrograph. The ferrograph saw its first practical use by the British during the Falklands War, where it was used to inspect the condition of helicopter transmissions. In 1975, Westcott filed a patent that outlined the principles of multiple varieties of ferrography, including microscopic analysis of wear and a quantitative method of on-line ferrography. In 2009 a new method of visual on-line ferrography was published by a group of researchers from Xi'an Jiaotong University, Theory of Lubrication and Bearing Institute. This is significant as it allows images of wear debris to be obtained during regular machine operation. Purpose & Uses Ferrography is a staple in failure prevention maintenance. Continuous monitoring of the lubricating oil allows a change from expensive and often unnecessary preplanned maintenance to more cost-effective failure prevention. Ferrography is unique because it can deliver information about enclosed parts, since the lubricating oil that circulates through these areas remains accessible. Rinsing vital components with particle-free lubricant and analyzing the output can offer a detailed report of machine wear without disassembling anything. Since its initial application in the military, ferrography has been found to be helpful in ships, coal mining, diesel engines, gas turbines in the aerospace industry, the agricultural industry, and naval aircraft. Further applications Applying the idea of ferrography in other fields, techniques have been found to analyze wear outside of lubricating oil and of particles that do not carry magnetic properties. These uses have been found in processing grease samples and gas emissions, and in examining wear on arthritic joints. In arthritic joints, residue from bone-on-bone contact can be found in fluid near the joint and analyzed using direct-reading ferrography, which can give information regarding the rate of decline in the joint. As of November 2016, minimal information is available regarding further uses of ferrography. Types Analytical ferrography Analytical ferrography works through magnetic separation of contaminant particles and a professional analysis of the particles. A sample of the machine's lubricating oil is taken and diluted, then run across a glass slide. This glass slide is then placed on a magnetic cylinder that attracts the contaminants. 
Non-magnetic contaminants remain distributed across the slide from the wash. These contaminants are then washed to remove excess oil and heated to 600 °F for two minutes, and the slide is analyzed under a microscope. After analysis, the particles are ranked according to size. Particles over 30 microns in size are considered "abnormal" and indicate severe wear. Particles are divided into six categories, with an additional five subcategories under ferrous wear: copper; white nonferrous (usually aluminum or chromium); babbitt (particles containing tin and lead); contaminants (particles that do not change appearance after heating, usually dirt); fibers (typically from filters); and ferrous wear (magnetic particles that are attracted to the magnetic cylinder), with the subcategories high alloy (rarely found on ferrograms), low alloy, cast iron, dark metallic oxides (darkness indicates oxidation), and red oxides. Being able to identify different particles can prove to be invaluable because the prominence of certain particles can point to specific locations of wear. Furthermore, the presence of particles that do not make contact with the lubricating oil can uncover contamination. This kind of analysis requires a trained professional and can be prohibitively expensive for smaller operations. Direct-reading method Direct-reading ferrography is a more mathematical approach to ferrography. Essentially, the buildup on the glass slide is measured by shining a light across the slide. The blockage of the light by the buildup of particles is then used, over time, to calculate an average. An increase in blockage indicates higher amounts of machine wear. This method is less expensive, as expert analysis is not required, and can be automated. However, once an issue is identified, less information is available to diagnose the problem. On-line visual ferrography On-line visual ferrography (OLVF) allows for images of wear debris to be acquired during routine operation of machinery. It requires incorporating an electromagnet, a means of varying the oil flow rate, and an image sensor into the oil circuit of the compartment whose oil is being monitored. The ferrous particles in the oil are then deposited in a similar way to using a bench-top ferrograph. Relative wear debris concentration, particle coverage area and images of debris can be obtained from this method. Limitations While ferrography is an effective tool for wear analysis, it does come with several limitations. Ferrography is a very expensive procedure because of the specialized and sophisticated instruments required. Ferrography stands out among oil analysis methods because of the magnetic element involved. This allows for a more detailed report that similar methods cannot produce. Additionally, for the qualitative approach, which is analytical ferrography, experts are needed to make sense of the raw output. Furthermore, ferrography cannot solve problems, only bring attention to them. These issues then need to be dealt with on their own. See also Condition Monitoring Wear References Maintenance
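To illustrate the arithmetic behind the direct-reading method described above, here is a minimal Python sketch. It assumes the two readings conventionally reported by direct-reading ferrographs, DL for large particles and DS for small particles, and uses the commonly cited wear-particle-concentration and severity-index definitions; the function names, sample values, and alert threshold are made-up for illustration rather than taken from any standard.

```python
def wear_indices(dl, ds, sample_volume_ml=1.0):
    """Compute trend indices from direct-reading ferrograph readings.

    dl: optical density reading for large wear particles
    ds: optical density reading for small wear particles
    """
    wpc = (dl + ds) / sample_volume_ml      # wear particle concentration
    severity = (dl + ds) * (dl - ds)        # = DL^2 - DS^2; grows when large particles dominate
    percent_large = 100.0 * dl / (dl + ds) if (dl + ds) else 0.0
    return {"WPC": wpc, "severity_index": severity, "percent_large": percent_large}

# Trend two hypothetical samples taken from the same gearbox a month apart.
baseline = wear_indices(dl=18.0, ds=12.0)
latest = wear_indices(dl=55.0, ds=20.0)
print("baseline:", baseline)
print("latest:  ", latest)

# Simple trending rule (illustrative threshold only).
if latest["severity_index"] > 3 * baseline["severity_index"]:
    print("Severity index rising sharply - flag compartment for analytical ferrography")
```

A rising severity index with a growing share of large particles is the kind of trend that would prompt a follow-up analytical ferrogram.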
Ferrography
[ "Engineering" ]
1,277
[ "Maintenance", "Mechanical engineering" ]
5,691,021
https://en.wikipedia.org/wiki/Society%20of%20Arcueil
The Society of Arcueil was a circle of French scientists who met regularly on summer weekends between 1806 and 1822 at the country houses of Claude Louis Berthollet and Pierre Simon Laplace at Arcueil, then a village 3 miles south of Paris. Members In 1807, when the first collection of "Mémoires de Physique et de Chimie de la Société d'Arcueil" was published, a list of contributing members read: Claude Louis Berthollet (1748-1822) Pierre Simon Laplace (1749-1827) Friedrich Heinrich Alexander von Humboldt (1769-1859) Louis Jacques Thenard (1777-1857) Joseph Louis Gay-Lussac (1778-1850) Jean Baptiste Biot (1774-1862) Augustin Pyramus de Candolle (1778-1841) Hippolyte-Victor Collet-Descotils (1773-1815) Amedée Barthélemy Berthollet (1780-1810) In the course of the following years they were joined by: Étienne-Louis Malus (1775-1812) Dominique François Jean Arago (1786-1853) Jacques Etienne Bérard (1789-1869) Jean Antoine Chaptal (1756-1832) Pierre Louis Dulong (1785-1835) Siméon Denis Poisson (1781-1840) Inspiration Antoine Lavoisier had initiated the practice of informal deliberation with his fellow scientists, including his junior assistants, in his laboratory at the Paris Arsenal. "If at any time I have adopted, without acknowledgement the experiments of M.Berthollet, M.Fourcroy, M.de la Place, M.Monge (...) it is owing to (...) the habit of communicating our ideas, our observations and our way of thinking to each other (establishing) between us a sort of community of opinions in which it is often difficult for everyone to know his own." (Lavoisier in: "Traité Élémentaire de Chimie", 1789) Laplace, and Berthollet with his open laboratory, continued this spirit of fellowship at Arcueil. They were the senior moderators in a scientific debate of novel magnitude, combining the framework of the physico-mathematical model (Laplace) with experimental investigation (Berthollet). Roots The roots of the active progress of the Society of Arcueil lay with Napoleon Bonaparte's special attention to sciences in general and, as an artillery officer, to mathematics in particular. Laplace had been Bonaparte's final examiner at the Ecole Militaire (September 1785) where Gaspard Monge, his professor, had encouraged him to finish the two-year course of mathematics in one. Napoleon became acquainted with Berthollet during his campaign in Italy, when Berthollet and Monge were part of the commission sent by the French Directory to select and dispatch Italian art treasures, manuscripts and scientific documents to Paris. Laplace, Berthollet and Monge became instrumental in having Napoleon elected to the First Class of the Institut de France (the class directing the exact sciences) when Lazare Carnot's place fell vacant in 1797. Napoleon in turn invited them to follow him to Egypt (1798-1799) and instructed Berthollet to conduct the recruitment of the scientists that were to compose the "Institut d'Egypte". The way Berthollet effectively directed the practical installation of the Institute at Qassim Bey's Palace in Caïro cemented the friendship with Bonaparte in a way that proved its worth in the patronage of the Arcueil Society. When Berthollet, in 1807, concluded that the arrangement for research facilities at Arcueil had cost him more than he could afford, Napoleon, alerted by Laplace and Monge, immediately lent him 150,000 francs to break even. 
The informality of the "Institut d'Egypte" found its continuance at Arcueil where Berthollet from his Egyptian-decorated study remained in charge of the publication of the "Description de l'Egypte (1809)" (ref: Crosland, 1967). Science Under Bonaparte The quantitative applications of the new science of chemistry had important significance for the state economy. The exploitation of beet sugar, for example, was developed with the boycott of English trade in mind. From the publication of Franz Achard's letter on beet sugar in Annales de chimie et de physique (Bruxelles: Van Mons, 1799) and the first presentation of a sample to Napoleon during a session of the First Class of the Institute (June 25, 1800) till the first viable production by Jules Paul Benjamin Delessert in 1812, the subject was one of the scientific priorities in France (see also: Joseph Proust on grape sugar). The industrial fabrication of dye from home grown indigo plant (distinct from woad) at Toulouse was a direct heritage from the "Institut d'Egypte." Mathematical instruments were a special favourite with Napoleon, and were often awarded medals at the industrial fairs held at the instigation of Chaptal. Members of the Society of Arcueil were frequently invited to judge on such occasions. In 1806, at the third exhibition in the series, some 1,400 participants attended, up from 220 in 1801. Special attention was given to textile printing, adapted by Christophe Oberkampf and his nephew Samuel Widmer with the introduction of roller instead of block printing. This particular industrial process integrated the bleaching by chlorine (eau de javel) invented by Berthollet, as well as the application of new dyeing methods (Samuel Widmer's invention of a solid green dye). In 1806 Oberkampf's factory printed fabrics at the rate of 7.5 metres a minute, a viable alternative to English import. Laplace and Monge were also instructed to supervise Robert Fulton's experiments with the Nautilus (1800), subsidized in France. Following Volta's visit to Paris in 1801 important work on the Voltaic pile, involving the Arcueil circle, was carried out under Bonaparte's auspices, rewarding Paul Erman, Humphry Davy, Gay-Lussac and Louis Jacques Thenard in the process. The scientific work in general was of first importance to the education at the Ecole Polytechnique, the home base of many Arcueil scientists. The enhancing of the quality of iron and steel, with Collet-Descotils, the precursor in the discovery of iridium, in charge as chief engineer at the "Ecole des Mines", and above all the development of gunpowder, were of prime military significance. The French expertise in explosives was well judged by the Allies when later they dispatched Jöns Jacob Berzelius to Paris to update general knowledge. In 1819 he spent two full months as a guest of Berthollet in the laboratory at Arcueil experimenting, but above all sounding Pierre Dulong, whose memoir on a new detonating substance (nitrogen trichloride) had appeared in the 1817 volume of "Mémoires de Physique et de Chimie de la Société d'Arcueil" (André-Marie Ampère had already briefed Humphry Davy on prior stages (1811-1813) of Dulong's invention). "Memoires..." There were three volumes of "Mémoires de Physique et de Chimie de la Société d'Arcueil": 1807, 1809 and 1817, the last date testifying to the political difficulties following the demise of Napoleon I of France. The "Mémoires..." 
published some important new ideas: Malus on the polarisation of light (1809, 1817); Gay-Lussac on the free expansion of gases (1807); Humboldt and Gay-Lussac on terrestrial magnetism (1807); Gay-Lussac's law of combining volumes of gases (1809); Thenard and Biot's observation on the comparison of aragonite and calcite (one of the earliest proofs of dimorphism) (1809); Gay-Lussac and Thenard on the discovery of the amides of metal (1809); Candolle on heliotropism (1817). Equally important was the special thread, woven into the overall discourse, that held together the brilliant cross-reference among friends. Foreign visitors There had often been attempts to correspond between the French and the English scientists notwithstanding the state of war between their countries. At the first opportunity the English correspondents of Arcueil returned to Paris, among them John Leslie (1814) and Charles Blagden (1814, 1816, 1817), who died of apoplexy (1820) during a visit to Berthollet at Arcueil. Mary Somerville, who wrote a popular account of Laplace's "Mécanique Céleste", dined at Arcueil with her scientific "heroes" (1817). Jöns Jacob Berzelius had already been invited by Berthollet to come and study at Arcueil in 1810, but it was not till 1818 that the Swedish government judged it appropriate for him to travel to France. At Arcueil Berzelius engaged in a steadfast friendship with Dulong. In 1820 Dulong wrote to Berzelius: "Despite the objections of M.Laplace and some others, I am convinced that this (atomic) theory is the most important concept of the century and in the next twenty years it will bring about an incalculable extension to all parts of the physical sciences" It was the testimony of a changing mood, and when John Dalton, who had strong differences of opinion with the Society, visited Arcueil in 1822, he received a hearty welcome. It was the last major social event for the Society of Arcueil. Berthollet died on November 6, 1822, and with him went an inspiring power of adherence. Post Scriptum The Society of Arcueil, however, through the younger generation, was still to illuminate such work as that of Liebig, Pasteur, Fresnel, Niepce, Daguerre, Léon Foucault ... as well as many others in the field of scientific education. Sources Maurice Crosland: "The Society of Arcueil - A View of French Science at the Time of Napoleon I" Cambridge Mass.: Harvard University Press, 1967 Further reading F. Charles-Roux: "Bonaparte: Governor of Egypt" London: Methuen & Co, 1937 William H. Brock: "The Fontana History of Chemistry" London: Fontana Press, 1992 Maurice Crosland: The Society of Arcueil: A View of French Science at the Time of Napoleon: London, 1967. Bernard Maitte: "La lumière" Paris: Editions du Seuil - Points/Sciences, 1981 Chemistry societies Scientific societies based in France
Society of Arcueil
[ "Chemistry" ]
2,228
[ "Chemistry societies", "nan" ]
5,691,144
https://en.wikipedia.org/wiki/Sludge%20%28comics%29
Sludge is a comic book series from Malibu Comics, set in the Ultraverse. It was created by Steve Gerber, Gary Martin and Aaron Lopresti. It depicted a dirty cop called Frank Hoag who was killed by the local mafia and was transformed after his death into a superpowered and viscous creature called Sludge. Publication history Sludge made his first appearance in Sludge #1, dated October 1993, written by Steve Gerber and illustrated by Aaron Lopresti. As part of the Ultraverse imprint, the comic was set within a shared universe of super-powered beings conceptualized by writers and artists of Malibu Comics. Sludge ran for only twelve issues, with one special: Sludge: Red X-Mas. A second special, Sludge: Swamp of Souls, was planned but never completed. Sludge also appeared in other Ultraverse books. After the Black September event, Sludge appeared in the first two issues of Foxfire (1996). Character history Frank Hoag was an experienced but corrupt NYPD detective who finally decided to change and take action when he was asked by his mob bosses (John Paul Marcello and Vittorio Sabatini) to kill a fellow dirty cop. When he refused, his own murder was ordered; he died in a hail of bullets as well as a bomb explosion. The explosion covered him with chemicals, which combined with the sewage where the mobsters dumped his body. The chemicals had regenerative properties and tried to heal Hoag, but fused the sewer substances with his body, transforming him into a huge mass of living slime. He awakened with a raging anger against criminals and an inability to think and speak coherently, with many words coming out replaced with one that sounds only vaguely similar, such as 'munch' instead of 'mutual'. There existed a connection between the chemicals that transformed Frank Hoag into Sludge and Dr. Gross' research. Dr. Gross conducted the experiments that allowed Kevin Green to transform into Prime. One of Sludge's allies was Chas, a blind homeless man who sold newspapers. He did not comprehend that Frank had transformed; he only thought Frank had gained an 'underwater voice'. Frank took a newspaper from Chas, claiming to be good for it, and read about deaths in the sewers. Marcello hired an assassin called Bloodstorm to kill the creature, and Bloodstorm attacked Sludge with an explosive arrow. Frank met Shelley Winters, a sensationalistic reporter, in the sewers. She was investigating the same case that interested Frank, and she discovered Veffir Voon Iyax, a humanoid, albino alligator-man. Veffir had killed the two people and many more. During the fight, Veffir claimed he was from another world, and that nobody who met him lived. Despite this, Sludge killed him in battle and demanded 35 cents from Winters. He used this to pay back Chas. Sludge also met the villain Lord Pumpkin, alias The Pump, who offered the creature a swift death if he obeyed him. The Pump was beginning a drug sales operation using a new drug called Zuke, which was extracted from a carnivorous plant from the Godwheel. Lord Pumpkin also had a young henchman known as Pistol. The Dragon Fang, a local Asian mafia, began a drug war against Lord Pumpkin. Marcello joined them in the fight. Lord Pumpkin sent Sludge against Marcello, who found death at the hands of the creature. Sludge also found that Zuke had the ability to cure his body's condition, so he helped Pumpkin more. Vittorio Sabatini inherited the mafia and hired Bloodstorm again. The Pump and Sludge defeated the mercenary and drugged him with Zuke. 
The drugged Bloodstorm was sent to Sabatini and slaughtered the mafia, but the Dragon Fang began new attacks against Pumpkin's gang, killing many of his henchmen. They sent a new agent, a battle cyborg, against Pumpkin, destroying the candle that gave life to his body. Pistol took Pumpkin's head, hoping to revivify the villain, but desisted after a time. Lord Pumpkin was later resurrected in another book. Powers and abilities Sludge has tremendous strength and durability, as well as vast regenerative capabilities, allowing him to heal from near-fatal wounds in seconds. Submersion in water speeds up the process. He does not need food or air and is immune to most chemical toxins. Sludge can cause spontaneous tissue growth in others by touch. Possibility of revival In 2003, Steve Englehart was commissioned by Marvel to relaunch the Ultraverse with its most recognizable characters, including Sludge, but the editors ultimately decided not to resurrect the Ultraverse imprint. In June 2005, when asked by Newsarama whether Marvel had any plans to revive the Ultraverse, Marvel editor-in-chief Joe Quesada replied that: Appearances in other media Sludge appears in the Ultraforce animated cartoon. In the series, as in the comics, he is an underling of Lord Pumpkin, forced into servitude by his addiction to the Zuke drug that Pumpkin created, which restores him to human form. He sacrifices himself to stop a demon plant created by Pumpkin, helping Prototype (Jimmy Ruiz). References External links Ultraverse Malibu Comics characters Malibu Comics titles Fictional police detectives Fictional monsters Fictional superorganisms Marvel Comics characters with accelerated healing Marvel Comics characters with superhuman durability or invulnerability Marvel Comics characters with superhuman strength Marvel Comics male superheroes Marvel Comics mutates Characters created by Steve Gerber Comics by Steve Gerber Vigilante characters in comics Comics about monsters
Sludge (comics)
[ "Biology" ]
1,151
[ "Superorganisms", "Fictional superorganisms" ]
624,160
https://en.wikipedia.org/wiki/Blister%20agent
A blister agent (or vesicant), is a chemical compound that causes severe skin, eye and mucosal pain and irritation. They are named for their ability to cause severe chemical burns, resulting in painful water blisters on the bodies of those affected. Although the term is often used in connection with large-scale burns caused by chemical spills or chemical warfare agents, some naturally occurring substances such as cantharidin are also blister-producing agents (vesicants). Furanocoumarin, another naturally occurring substance, causes vesicant-like effects indirectly, for example, by increasing skin photosensitivity greatly. Vesicants have medical uses including wart removal but can be dangerous if even small amounts are ingested. Blister agents used in warfare Most blister agents fall into one of four groups: Sulfur mustards – A family of sulfur-based agents, including mustard gas. Nitrogen mustards – A family of agents similar to the sulfur mustards, but based on nitrogen instead of sulfur. Lewisite – An early blister agent that was developed, but not used, during World War I. It was effectively rendered obsolete with the development of British anti-Lewisite in the 1940s. Phosgene oxime – Occasionally included among the blister agents, although it is more properly termed a nettle agent (urticant). Effects Exposure to a weaponized blister agent can cause a number of life-threatening symptoms, including: Severe skin, eye and mucosal pain and irritation Skin erythema with large fluid blisters that heal slowly and may become infected Tearing, conjunctivitis, corneal damage Mild respiratory distress to marked airway damage All blister agents currently known are denser than air, and are readily absorbed through the eyes, lungs, and skin. Effects of the two mustard agents are typically delayed: exposure to vapors becomes evident in 4 to 6 hours, and skin exposure in 2 to 48 hours. The effects of Lewisite are immediate. References External links Medterms.com Medical Aspects of Biological and Chemical Warfare, Chapter 7: Vesicants
Blister agent
[ "Chemistry" ]
445
[ "Blister agents", "Chemical weapons" ]
624,209
https://en.wikipedia.org/wiki/Induction%20heating
Induction heating is the process of heating electrically conductive materials, namely metals or semi-conductors, by electromagnetic induction, through heat transfer passing through an inductor that creates an electromagnetic field within the coil to heat up and possibly melt steel, copper, brass, graphite, gold, silver, aluminum, or carbide. An important feature of the induction heating process is that the heat is generated inside the object itself, instead of by an external heat source via heat conduction. Thus objects can be heated very rapidly. In addition, there need not be any external contact, which can be important where contamination is an issue. Induction heating is used in many industrial processes, such as heat treatment in metallurgy, Czochralski crystal growth and zone refining used in the semiconductor industry, and to melt refractory metals that require very high temperatures. It is also used in induction cooktops. An induction heater consists of an electromagnet and an electronic oscillator that passes a high-frequency alternating current (AC) through the electromagnet. The rapidly alternating magnetic field penetrates the object, generating electric currents inside the conductor called eddy currents. The eddy currents flow through the resistance of the material, and heat it by Joule heating. In ferromagnetic and ferrimagnetic materials, such as iron, heat also is generated by magnetic hysteresis losses. The frequency of the electric current used for induction heating depends on the object size, material type, coupling (between the work coil and the object to be heated), and the penetration depth. Applications Induction heating allows the targeted heating of an applicable item for applications including surface hardening, melting, brazing and soldering, and heating to fit. Due to their ferromagnetic nature, iron and its alloys respond best to induction heating. Eddy currents can, however, be generated in any conductor, and magnetic hysteresis can occur in any magnetic material. Induction heating has been used to heat liquid conductors (such as molten metals) and also gaseous conductors (such as a gas plasma—see Induction plasma technology). Induction heating is often used to heat graphite crucibles (containing other materials) and is used extensively in the semiconductor industry for the heating of silicon and other semiconductors. Utility frequency (50/60 Hz) induction heating is used for many lower-cost industrial applications as inverters are not required. Furnace An induction furnace uses induction to heat metal to its melting point. Once molten, the high-frequency magnetic field can also be used to stir the hot metal, which is useful in ensuring that alloying additions are fully mixed into the melt. Most induction furnaces consist of a tube of water-cooled copper rings surrounding a container of refractory material. Induction furnaces are used in most modern foundries as a cleaner method of melting metals than a reverberatory furnace or a cupola. Sizes range from a kilogram of capacity to a hundred tonnes. Induction furnaces often emit a high-pitched whine or hum when they are running, depending on their operating frequency. Metals melted include iron and steel, copper, aluminium, and precious metals. Because it is a clean and non-contact process, it can be used in a vacuum or inert atmosphere. Vacuum furnaces use induction heating to produce specialty steels and other alloys that would oxidize if heated in the presence of air. 
Welding A similar, smaller-scale process is used for induction welding. Plastics may also be welded by induction if they are doped either with ferromagnetic ceramics (where magnetic hysteresis of the particles provides the heat required) or with metallic particles. Seams of tubes can be welded this way. Currents induced in a tube run along the open seam and heat the edges, resulting in a temperature high enough for welding. At this point, the seam edges are forced together and the seam is welded. The RF current can also be conveyed to the tube by brushes, but the result is still the same—the current flows along the open seam, heating it. Manufacturing In the Rapid Induction Printing metal additive printing process, a conductive wire feedstock and shielding gas are fed through a coiled nozzle, subjecting the feedstock to induction heating and ejection from the nozzle as a liquid, in order to fuse under the shielding gas and form three-dimensional metal structures. The core benefit of using induction heating in this process is significantly greater energy and material efficiency, as well as a higher degree of safety, when compared with other additive manufacturing methods, such as selective laser sintering, which deliver heat to the material using a powerful laser or electron beam. Cooking In induction cooking, an induction coil inside the cooktop heats the iron base of cookware by magnetic induction. Induction cookers offer safety, efficiency (the cooktop itself is not heated), and speed. Non-ferrous pans such as copper-bottomed pans and aluminium pans are generally unsuitable. By thermal conduction, the heat induced in the base is transferred to the food inside. Brazing Induction brazing is often used in higher production runs. It produces uniform results and is very repeatable. There are many types of industrial equipment where induction brazing is used; for instance, induction is used for brazing carbide to a shaft. Sealing Induction heating is used in cap sealing of containers in the food and pharmaceutical industries. A layer of aluminum foil is placed over the bottle or jar opening and heated by induction to fuse it to the container. This provides a tamper-resistant seal, since altering the contents requires breaking the foil. Heating to fit Induction heating is often used to heat an item, causing it to expand, before fitting or assembly. Bearings are routinely heated in this way using utility frequency (50/60 Hz) and a laminated steel transformer-type core passing through the centre of the bearing. Heat treatment Induction heating is often used in the heat treatment of metal items. The most common applications are induction hardening of steel parts, induction soldering/brazing as a means of joining metal components, and induction annealing to selectively soften an area of a steel part. Induction heating can produce high power densities, which allow short interaction times to reach the required temperature. This gives tight control of the heating pattern, with the pattern following the applied magnetic field quite closely, and allows reduced thermal distortion and damage. This ability can be used in hardening to produce parts with varying properties. The most common hardening process is to produce a localised surface hardening of an area that needs wear resistance, while retaining the toughness of the original structure as needed elsewhere. The depth of induction-hardened patterns can be controlled through the choice of induction frequency, power density, and interaction time.
Limits to the flexibility of the process arise from the need to produce dedicated inductors for many applications. This is quite expensive and requires the marshalling of high current densities in small copper inductors, which can require specialized engineering and "copper-fitting". Plastic processing Induction heating is used in plastic injection molding machines. Induction heating improves energy efficiency for injection and extrusion processes. Heat is generated directly in the barrel of the machine, reducing warm-up time and energy consumption. The induction coil can be placed outside the thermal insulation, so it operates at low temperatures and has a long life. The frequency used ranges from 30 kHz down to 5 kHz, decreasing for thicker barrels. The reduction in the cost of inverter equipment has made induction heating increasingly popular. Induction heating can also be applied to molds, offering more even mold temperature and improved product quality (Sohn, Eom and Park, "Application of high-frequency induction heating to high-quality injection molding", Plastics Engineering Annual Technical Conference Proceedings ANTEC 2010, Society of Plastics Engineers, 2010). Pyrolysis Induction heating is used to obtain biochar in the pyrolysis of biomass. Heat is generated directly in the shaker reactor walls, enabling the pyrolysis of the biomass with good mixing and temperature control. Bolt heating Induction heating is used by mechanics to remove rusted bolts. The heat helps relieve the rust-induced tension between the threads. Details The basic setup is an AC power supply that provides electricity with low voltage but very high current and high frequency. The workpiece to be heated is placed inside an air coil driven by the power supply, usually in combination with a resonant tank capacitor to increase the reactive power. The alternating magnetic field induces eddy currents in the workpiece. The frequency of the inductive current determines the depth to which the induced eddy currents penetrate the workpiece. In the simplest case of a solid round bar, the induced current decreases exponentially from the surface. The penetration depth, within which about 86% of the power is concentrated, can be derived as δ = 503 √(ρ / (μ f)), where δ is the depth in meters, ρ is the resistivity of the workpiece in ohm-meters, μ is the dimensionless relative magnetic permeability of the workpiece, and f is the frequency of the AC field in Hz. The equivalent resistance of the workpiece, and thus the efficiency, is a function of the ratio of the workpiece diameter to the reference depth δ, increasing rapidly up to a ratio of roughly four and levelling off beyond that. Since the workpiece diameter is fixed by the application, this ratio is determined by the reference depth. Decreasing the reference depth requires increasing the frequency. Since the cost of induction power supplies increases with frequency, supplies are often optimized to achieve a critical frequency at which the ratio just reaches this value. If operated below the critical frequency, heating efficiency is reduced because eddy currents from either side of the workpiece impinge upon one another and cancel out. Increasing the frequency beyond the critical frequency yields minimal further improvement in heating efficiency, although it is used in applications that seek to heat treat only the surface of the workpiece. The reference depth varies with temperature because resistivity and permeability vary with temperature. For steel, the relative permeability drops to 1 above the Curie temperature.
Thus the reference depth can vary with temperature by a factor of 2–3 for nonmagnetic conductors and by as much as 20 for magnetic steels. Magnetic materials improve the induction heat process because of hysteresis. Materials with high permeability (100–500) are easier to heat with induction heating. Hysteresis heating occurs below the Curie temperature, where materials retain their magnetic properties. High permeability below the Curie temperature in the workpiece is useful. Temperature difference, mass, and specific heat influence the workpiece heating. The energy transfer of induction heating is affected by the distance between the coil and the workpiece. Energy losses occur through heat conduction from workpiece to fixture, natural convection, and thermal radiation. The induction coil is usually made of copper tubing and fluid coolant. Diameter, shape, and number of turns influence the efficiency and field pattern. Core type furnace The furnace consists of a circular hearth that contains the charge to be melted in the form of a ring. The metal ring is large in diameter and is magnetically interlinked with an electrical winding energized by an AC source. It is essentially a transformer where the charge to be heated forms a single-turn short circuit secondary and is magnetically coupled to the primary by an iron core. References Brown, George Harold, Cyril N. Hoyler, and Rudolph A. Bierwirth, Theory and application of radio-frequency heating. New York, D. Van Nostrand Company, Inc., 1947. LCCN 47003544 Hartshorn, Leslie, Radio-frequency heating. London, G. Allen & Unwin, 1949. LCCN 50002705 Langton, L. L., Radio-frequency heating equipment, with particular reference to the theory and design of self-excited power oscillators. London, Pitman, 1949. LCCN 50001900 Shields, John Potter, Abc's of radio-frequency heating. 1st ed., Indianapolis, H. W. Sams, 1969. LCCN 76098943 Sovie, Ronald J., and George R. Seikel, Radio-frequency induction heating of low-pressure plasmas''. Washington, D.C. : National Aeronautics and Space Administration; Springfield, Va.: Clearinghouse for Federal Scientific and Technical Information, October 1967. NASA technical note. D-4206; Prepared at Lewis Research Center. See also Dielectric heating Induction cooking Heating Electrodynamics
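The penetration-depth relation above lends itself to a quick numerical check. The following minimal Python sketch is illustrative only; the room-temperature resistivity and permeability figures are rough assumed values, not taken from the article. It evaluates δ = 503 √(ρ / (μ f)) for a few conductors across typical induction-heating frequencies.

import math

def reference_depth_m(resistivity_ohm_m, rel_permeability, frequency_hz):
    # delta = 503 * sqrt(rho / (mu_r * f)); rho in ohm-metres, f in Hz, result in metres
    return 503.0 * math.sqrt(resistivity_ohm_m / (rel_permeability * frequency_hz))

# Rough, assumed property values for illustration only.
materials = {
    "copper":               (1.7e-8, 1),
    "aluminium":            (2.8e-8, 1),
    "steel (below Curie)":  (2.0e-7, 100),   # permeability varies widely in practice
    "steel (above Curie)":  (1.1e-6, 1),
}

for f in (60.0, 10e3, 400e3):                # utility, medium and high frequency
    for name, (rho, mu_r) in materials.items():
        depth_mm = reference_depth_m(rho, mu_r, f) * 1000.0
        print(f"{f:>9.0f} Hz  {name:<22s} reference depth ~ {depth_mm:8.3f} mm")

The run reproduces the qualitative points made in the text: raising the frequency shrinks the heated layer, and a magnetic steel loses most of its shallow-heating behaviour once it passes the Curie temperature.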
Induction heating
[ "Mathematics" ]
2,579
[ "Electrodynamics", "Dynamical systems" ]
624,224
https://en.wikipedia.org/wiki/Getter
A getter is a deposit of reactive material that is placed inside a vacuum system to complete and maintain the vacuum. When gas molecules strike the getter material, they combine with it chemically or by adsorption. Thus the getter removes small amounts of gas from the evacuated space. The getter is usually a coating applied to a surface within the evacuated chamber. A vacuum is initially created by connecting a container to a vacuum pump. After achieving a sufficient vacuum, the container can be sealed, or the vacuum pump can be left running. Getters are especially important in sealed systems, such as vacuum tubes, including cathode-ray tubes (CRTs), vacuum insulating glass (or vacuum glass) and vacuum insulated panels, which must maintain a vacuum for a long time. This is because the inner surfaces of the container release adsorbed gases for a long time after the vacuum is established. The getter continually removes residues of a reactive gas, such as oxygen, as long as it is desorbed from a surface, or continuously penetrating in the system (tiny leaks or diffusion through a permeable material). Even in systems which are continually evacuated by a vacuum pump, getters are also used to remove residual gas, often to achieve a higher vacuum than the pump could achieve alone. Although it is often present in minute amounts and has no moving parts, a getter behaves in itself as a vacuum pump. It is an ultimate chemical sink for reactive gases. Getters cannot react with inert gases, though some getters will adsorb them in a reversible way. Also, hydrogen is usually handled by adsorption rather than by reaction. Types To avoid being contaminated by the atmosphere, the getter must be introduced into the vacuum system in an inactive form during assembly, and activated after evacuation. This is usually done by heat. Different types of getter use different ways of doing this: Flashed getter The getter material is held inactive in a reservoir during assembly and initial evacuation, and then heated and evaporated, usually by induction heating. The vaporized getter, usually a volatile metal, instantly reacts with any residual gas, and then condenses on the cool walls of the tube in a thin coating, the getter spot or getter mirror, which continues to absorb gas. This is the most common type, used in low-power vacuum tubes. Non-evaporable getter (NEG) The getter remains in solid form. Flashed getters Flashed getters are prepared by arranging a reservoir of volatile and reactive material inside the vacuum system. After the system has been evacuated and sealed under rough vacuum, the material is heated (usually by radio frequency induction heating). After evaporating, it deposits as a coating on the interior surfaces of the system. Flashed getters (typically made with barium) are commonly used in vacuum tubes. Most getters can be seen as a silvery metallic spot on the inside of the tube's glass envelope. Large transmission tubes and specialty systems often use more exotic getters, including aluminium, magnesium, calcium, sodium, strontium, caesium, and phosphorus. If the getter is exposed to atmospheric air (for example, if the tube breaks or develops a leak), it turns white and becomes useless. For this reason, flashed getters are only used in sealed systems. A functioning phosphorus getter looks very much like an oxidised metal getter, although it has an iridescent pink or orange appearance which oxidised metal getters lack. Phosphorus was frequently used before metallic getters were developed. 
In systems which need to be opened to air for maintenance, a titanium sublimation pump provides similar functionality to flashed getters, but can be flashed repeatedly. Alternatively, nonevaporable getters may be used. Those unfamiliar with sealed vacuum devices, such as vacuum tubes/thermionic valves, high-pressure sodium lamps or some types of metal-halide lamps, often notice the shiny flash getter deposit and mistakenly think it is a sign of failure or degradation of the device. Contemporary high-intensity discharge lamps tend to use non-evaporable getters rather than flash getters. Those familiar with such devices can often make qualitative assessments as to the hardness or quality of the vacuum within by the appearance of the flash getter deposit, with a shiny deposit indicating a good vacuum. As the getter is used up, the deposit often becomes thin and translucent, particularly at the edges. It can take on a brownish-red semi-translucent appearance, which indicates poor seals or extensive use of the device at elevated temperatures. A white deposit, usually barium oxide, indicates total failure of the seal on the vacuum system, as shown in the fluorescent display module depicted above. Activation The typical flashed getter used in small vacuum tubes (seen in 12AX7 tube, top) consists of a ring-shaped structure made from a long strip of nickel, which is folded into a long, narrow trough, filled with a mixture of barium azide and powdered glass, and then folded into the closed ring shape. The getter is attached with its trough opening facing upward toward the glass, in the specific case depicted above. During activation, while the bulb is still connected to the pump, an RF induction heating coil connected to a powerful RF oscillator operating in the 27 MHz or 40.68 MHz ISM band is positioned around the bulb in the plane of the ring. The coil acts as the primary of a transformer and the ring as a single shorted turn. Large RF currents flow in the ring, heating it. The coil is moved along the axis of the bulb so as not to overheat and melt the ring. As the ring is heated, the barium azide decomposes into barium vapor and nitrogen. The nitrogen is pumped out and the barium condenses on the bulb above the plane of the ring forming a mirror-like deposit with a large surface area. The powdered glass in the ring melts and entraps any particles which could otherwise escape loose inside the bulb causing later problems. The barium combines with any free gas when activated and continues to act after the bulb is sealed off from the pump. During use, the internal electrodes and other parts of the tube get hot. This can cause adsorbed gases to be released from metallic parts, such as anodes (plates), grids, or non-metallic porous parts, such as sintered ceramic parts. The gas is trapped on the large area of reactive barium on the bulb wall and removed from the tube. Non-evaporable getters Non-evaporable getters, which work at high temperature, generally consist of a film of a special alloy, often primarily zirconium; the requirement is that the alloy materials must form a passivation layer at room temperature which disappears when heated. Common alloys have names of the form St (Stabil) followed by a number: St 707 is 70% zirconium, 24.6% vanadium, and the balance iron. St 787 is 80.8% zirconium, 14.2% cobalt, and the balance mischmetal. St 101 is 84% zirconium and 16% aluminium. 
In tubes used in electronics, the getter material coats plates within the tube which are heated in normal operation; when getters are used within more general vacuum systems, such as in semiconductor manufacturing, they are introduced as separate pieces of equipment in the vacuum chamber, and turned on when needed. Deposited and patterned getter material is being used in microelectronics packaging to provide an ultra-high vacuum in a sealed cavity. To enhance the getter pumping capacity, the activation temperature must be maximized, considering the process limitations. It is, of course, important not to heat the getter when the system is not already in a good vacuum. See also Ion pump (physics) References Stokes, John W. 70 Years of Radio Tubes and Valves: A Guide for Engineers, Historians, and Collectors. Vestal Press, 1982. Reich, Herbert J. Principles of Electron Tubes. Understanding and Designing Simple Circuits. Audio Amateur Radio Publication, May 1995. (Reprint of 1941 original). External links How to activate getter in GU74B / 4CX800A An Ultrahigh Vacuum Packaging Process Demonstrating Over 2 Million Q-Factor in MEMS Vibratory Gyroscopes, IEEE Sensors Letters Vacuum tubes
Getter
[ "Physics" ]
1,747
[ "Vacuum tubes", "Vacuum", "Matter" ]
624,231
https://en.wikipedia.org/wiki/Voltage%20regulator
A voltage regulator is a system designed to automatically maintain a constant voltage. It may use a simple feed-forward design or may include negative feedback. It may use an electromechanical mechanism, or electronic components. Depending on the design, it may be used to regulate one or more AC or DC voltages. Electronic voltage regulators are found in devices such as computer power supplies where they stabilize the DC voltages used by the processor and other elements. In automobile alternators and central power station generator plants, voltage regulators control the output of the plant. In an electric power distribution system, voltage regulators may be installed at a substation or along distribution lines so that all customers receive steady voltage independent of how much power is drawn from the line. Electronic voltage regulators A simple voltage/current regulator can be made from a resistor in series with a diode (or series of diodes). Due to the logarithmic shape of diode V-I curves, the voltage across the diode changes only slightly due to changes in current drawn or changes in the input. When precise voltage control and efficiency are not important, this design may be fine. Since the forward voltage of a diode is small, this kind of voltage regulator is only suitable for low voltage regulated output. When higher voltage output is needed, a zener diode or series of zener diodes may be employed. Zener diode regulators make use of the zener diode's fixed reverse voltage, which can be quite large. Feedback voltage regulators operate by comparing the actual output voltage to some fixed reference voltage. Any difference is amplified and used to control the regulation element in such a way as to reduce the voltage error. This forms a negative feedback control loop; increasing the open-loop gain tends to increase regulation accuracy but reduce stability. (Stability is the avoidance of oscillation, or ringing, during step changes.) There will also be a trade-off between stability and the speed of the response to changes. If the output voltage is too low (perhaps due to input voltage reducing or load current increasing), the regulation element is commanded, up to a point, to produce a higher output voltage–by dropping less of the input voltage (for linear series regulators and buck switching regulators), or to draw input current for longer periods (boost-type switching regulators); if the output voltage is too high, the regulation element will normally be commanded to produce a lower voltage. However, many regulators have over-current protection, so that they will entirely stop sourcing current (or limit the current in some way) if the output current is too high, and some regulators may also shut down if the input voltage is outside a given range (see also: crowbar circuits). Electromechanical regulators In electromechanical regulators, voltage regulation is easily accomplished by coiling the sensing wire to make an electromagnet. The magnetic field produced by the current attracts a moving ferrous core held back under spring tension or gravitational pull. As voltage increases, so does the current, strengthening the magnetic field produced by the coil and pulling the core towards the field. The magnet is physically connected to a mechanical power switch, which opens as the magnet moves into the field. As voltage decreases, so does the current, releasing spring tension or the weight of the core and causing it to retract. This closes the switch and allows the power to flow once more. 
If the mechanical regulator design is sensitive to small voltage fluctuations, the motion of the solenoid core can be used to move a selector switch across a range of resistances or transformer windings to gradually step the output voltage up or down, or to rotate the position of a moving-coil AC regulator. Early automobile generators and alternators had a mechanical voltage regulator using one, two, or three relays and various resistors to stabilize the generator's output at slightly more than 6.7 or 13.4 V to maintain the battery as independently of the engine's rpm or the varying load on the vehicle's electrical system as possible. The relay(s) modulated the width of a current pulse to regulate the voltage output of the generator by controlling the average field current in the rotating machine which determines strength of the magnetic field produced which determines the unloaded output voltage per rpm. Capacitors are not used to smooth the pulsed voltage as described earlier. The large inductance of the field coil stores the energy delivered to the magnetic field in an iron core so the pulsed field current does not result in as strongly pulsed a field. Both types of rotating machine produce a rotating magnetic field that induces an alternating current in the coils in the stator. A generator uses a mechanical commutator, graphite brushes running on copper segments, to convert the AC produced into DC by switching the external connections at the shaft angle when the voltage would reverse. An alternator accomplishes the same goal using rectifiers that do not wear down and require replacement. Modern designs now use solid state technology (transistors) to perform the same function that the relays perform in electromechanical regulators. Electromechanical regulators are used for mains voltage stabilisation—see AC voltage stabilizers below. Automatic voltage regulator Generators, as used in power stations, ship electrical power production, or standby power systems, will have automatic voltage regulators (AVR) to stabilize their voltages as the load on the generators changes. The first AVRs for generators were electromechanical systems, but a modern AVR uses solid-state devices. An AVR is a feedback control system that measures the output voltage of the generator, compares that output to a set point, and generates an error signal that is used to adjust the excitation of the generator. As the excitation current in the field winding of the generator increases, its terminal voltage will increase. The AVR will control current by using power electronic devices; generally a small part of the generator's output is used to provide current for the field winding. Where a generator is connected in parallel with other sources such as an electrical transmission grid, changing the excitation has more of an effect on the reactive power produced by the generator than on its terminal voltage, which is mostly set by the connected power system. Where multiple generators are connected in parallel, the AVR system will have circuits to ensure all generators operate at the same power factor. AVRs on grid-connected power station generators may have additional control features to help stabilize the electrical grid against upsets due to sudden load loss or faults. 
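The compare-and-adjust action of an AVR described above can be sketched with a toy model. The following Python snippet is purely illustrative: the one-line generator model, the per-unit numbers, and the controller gain are invented for demonstration and do not describe any real machine or the AVRs discussed in the article.

def simulate_avr(setpoint=1.0, load_drop=0.08, k_gen=1.0, k_i=0.5, steps=50):
    # Integral-only controller: raise or lower excitation until the measured
    # terminal voltage (in per unit) matches the set point.
    excitation = 1.0
    terminal_v = 0.0
    for _ in range(steps):
        terminal_v = k_gen * excitation - load_drop   # crude generator-plus-load model
        error = setpoint - terminal_v                 # compare output with set point
        excitation += k_i * error                     # adjust field excitation
    return terminal_v, excitation

v, exc = simulate_avr()
print(f"terminal voltage settles at {v:.3f} pu with excitation {exc:.3f} pu")

A real AVR adds limits, compensation, and reactive-power sharing logic, but the measure, compare, and adjust loop is the same idea.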
AC voltage stabilizers Coil-rotation AC voltage regulator This is an older type of regulator used in the 1920s that uses the principle of a fixed-position field coil and a second field coil that can be rotated on an axis in parallel with the fixed coil, similar to a variocoupler. When the movable coil is positioned perpendicular to the fixed coil, the magnetic forces acting on the movable coil balance each other out and voltage output is unchanged. Rotating the coil in one direction or the other away from the center position will increase or decrease voltage in the secondary movable coil. This type of regulator can be automated via a servo control mechanism to advance the movable coil position in order to provide voltage increase or decrease. A braking mechanism or high-ratio gearing is used to hold the rotating coil in place against the powerful magnetic forces acting on the moving coil. Electromechanical Electromechanical regulators, called voltage stabilizers or tap-changers, have also been used to regulate the voltage on AC power distribution lines. These regulators operate by using a servomechanism to select the appropriate tap on an autotransformer with multiple taps, or by moving the wiper on a continuously variable autotransformer. If the output voltage is not in the acceptable range, the servomechanism switches the tap, changing the turns ratio of the transformer, to move the secondary voltage into the acceptable region. The controls provide a dead band wherein the controller will not act, preventing the controller from constantly adjusting the voltage ("hunting") as it varies by an acceptably small amount. Constant-voltage transformer The ferroresonant transformer, ferroresonant regulator or constant-voltage transformer is a type of saturating transformer used as a voltage regulator. These transformers use a tank circuit composed of a high-voltage resonant winding and a capacitor to produce a nearly constant average output voltage with a varying input current or varying load. The circuit has a primary on one side of a magnetic shunt and the tuned circuit coil and secondary on the other side. The regulation is due to magnetic saturation in the section around the secondary. The ferroresonant approach is attractive due to its lack of active components, relying on the square loop saturation characteristics of the tank circuit to absorb variations in average input voltage. Saturating transformers provide a simple, rugged method to stabilize an AC power supply. Older designs of ferroresonant transformers had an output with high harmonic content, leading to a distorted output waveform. Modern devices produce a nearly perfect sine wave. The ferroresonant action is a flux limiter rather than a voltage regulator, but with a fixed supply frequency it can maintain an almost constant average output voltage even as the input voltage varies widely. Ferroresonant transformers, which are also known as constant-voltage transformers (CVTs) or "ferros", are also good surge suppressors, as they provide high isolation and inherent short-circuit protection. A ferroresonant transformer can operate with an input voltage range of ±40% or more of the nominal voltage. Output power factor remains in the range of 0.96 or higher from half to full load. Because it regenerates an output voltage waveform, output distortion, which is typically less than 4%, is independent of any input voltage distortion, including notching. Efficiency at full load is typically in the range of 89% to 93%.
However, at low loads, efficiency can drop below 60%. The current-limiting capability also becomes a handicap when a CVT is used in an application with moderate to high inrush current, such as motors, transformers, or magnets. In this case, the CVT has to be sized to accommodate the peak current, thus forcing it to run at low loads and poor efficiency. Minimum maintenance is required, as transformers and capacitors can be very reliable. Some units have included redundant capacitors to allow several capacitors to fail between inspections without any noticeable effect on the device's performance. Output voltage varies about 1.2% for every 1% change in supply frequency. For example, a 2 Hz change in generator frequency, which is very large, results in an output voltage change of only 4%, which has little effect for most loads. It accepts 100% single-phase switch-mode power-supply loading without any requirement for derating, including all neutral components. Input current distortion remains less than 8% THD even when supplying nonlinear loads with more than 100% current THD. Drawbacks of CVTs are their larger size, audible humming sound, and the high heat generation caused by saturation. Power distribution Voltage regulators or stabilizers are used to compensate for voltage fluctuations in mains power. Large regulators may be permanently installed on distribution lines. Small portable regulators may be plugged in between sensitive equipment and a wall outlet. Automatic voltage regulators are used on generator sets to maintain a constant voltage as the load changes; the regulator compensates for the change in load. Power distribution voltage regulators normally operate on a range of voltages, for example 150–240 V or 90–280 V. DC voltage stabilizers Many simple DC power supplies regulate the voltage using either series or shunt regulators, but most apply a voltage reference using a shunt regulator such as a Zener diode, avalanche breakdown diode, or voltage regulator tube. Each of these devices begins conducting at a specified voltage and will conduct as much current as required to hold its terminal voltage to that specified voltage by diverting excess current from a non-ideal power source to ground, often through a relatively low-value resistor to dissipate the excess energy. The power supply is designed to supply only a maximum amount of current that is within the safe operating capability of the shunt regulating device. If the stabilizer must provide more power, the shunt regulator's output is used only as the voltage reference for an electronic circuit, the voltage stabilizer proper, which is able to deliver much larger currents on demand. Active regulators Active regulators employ at least one active (amplifying) component such as a transistor or operational amplifier. Shunt regulators are often (but not always) passive and simple, but always inefficient because they (essentially) dump the excess current which is not available to the load. When more power must be supplied, more sophisticated circuits are used. In general, these active regulators can be divided into several classes: Linear series regulators Switching regulators SCR regulators Linear regulators Linear regulators are based on devices that operate in their linear region (in contrast, a switching regulator is based on a device forced to act as an on/off switch).
Linear regulators are also classified into two types: series regulators and shunt regulators. In the past, one or more vacuum tubes were commonly used as the variable resistance. Modern designs use one or more transistors instead, perhaps within an integrated circuit. Linear designs have the advantage of very "clean" output with little noise introduced into their DC output, but are most often much less efficient and unable to step up or invert the input voltage like switched supplies. All linear regulators require an input voltage higher than the output voltage. If the input voltage approaches the desired output voltage, the regulator will "drop out". The input to output voltage differential at which this occurs is known as the regulator's drop-out voltage. Low-dropout regulators (LDOs) allow an input voltage that can be much closer to the output voltage (i.e., they waste less energy than conventional linear regulators). Entire linear regulators are available as integrated circuits. These chips come in either fixed or adjustable voltage types. Examples are the 723 general-purpose regulator and the 78xx/79xx series. Switching regulators Switching regulators rapidly switch a series device on and off. The duty cycle of the switch sets how much charge is transferred to the load. This is controlled by a feedback mechanism similar to that in a linear regulator. Because the series element is either fully conducting or switched off, it dissipates almost no power; this is what gives the switching design its efficiency. Switching regulators are also able to generate output voltages which are higher than the input, or of opposite polarity—something not possible with a linear design. In switched regulators, the pass transistor is used as a "controlled switch" and is operated in either the cutoff or the saturated state. Hence the power transmitted across the pass device is in discrete pulses rather than a steady current flow. Greater efficiency is achieved since the pass device is operated as a low-impedance switch. When the pass device is at cutoff, there is no current and it dissipates no power. Likewise, when the pass device is in saturation, a negligible voltage drop appears across it, so it dissipates only a small amount of average power while providing maximum current to the load. In either case, the power wasted in the pass device is very little and almost all the power is transmitted to the load. Thus the efficiency of a switched-mode power supply is remarkably high, in the range of 70–90%. Switched-mode regulators rely on pulse-width modulation to control the average value of the output voltage. The average value of a repetitive-pulse waveform depends on the area under the waveform. When the duty cycle is varied, the average voltage changes proportionally. Like linear regulators, nearly complete switching regulators are also available as integrated circuits. Unlike linear regulators, these usually require an inductor that acts as the energy storage element. The IC regulators combine the reference voltage source, error op-amp, and pass transistor with short-circuit current limiting and thermal-overload protection. Switching regulators are more prone to output noise and instability than linear regulators. However, they provide much better power efficiency than linear regulators. SCR regulators Regulators powered from AC power circuits can use silicon-controlled rectifiers (SCRs) as the series device.
Whenever the output voltage is below the desired value, the SCR is triggered, allowing electricity to flow into the load until the AC mains voltage passes through zero (ending the half cycle). SCR regulators have the advantages of being both very efficient and very simple, but because they cannot terminate an ongoing half cycle of conduction, they are not capable of very accurate voltage regulation in response to rapidly changing loads. An alternative is the SCR shunt regulator, which uses the regulator output as a trigger. Both series and shunt designs are noisy, but powerful, as the device has a low on-resistance. Combination or hybrid regulators Many power supplies use more than one regulating method in series. For example, the output from a switching regulator can be further regulated by a linear regulator. The switching regulator accepts a wide range of input voltages and efficiently generates a (somewhat noisy) voltage slightly above the ultimately desired output. That is followed by a linear regulator that generates exactly the desired voltage and eliminates nearly all the noise generated by the switching regulator. Other designs may use an SCR regulator as the "pre-regulator", followed by another type of regulator. An efficient way of creating a variable-voltage, accurate output power supply is to combine a multi-tapped transformer with an adjustable linear post-regulator. Example of linear regulators Transistor regulator In the simplest case, a common-collector amplifier (emitter follower) is used, with the base of the regulating transistor connected directly to the voltage reference. A simple transistor regulator will provide a relatively constant output voltage Uout for changes in the voltage Uin of the power source and for changes in load RL, provided that Uin exceeds Uout by a sufficient margin and that the power handling capacity of the transistor is not exceeded. The output voltage of the stabilizer is equal to the Zener diode voltage minus the base–emitter voltage of the transistor, UZ − UBE, where UBE is usually about 0.7 V for a silicon transistor, depending on the load current. If the output voltage drops for any external reason, such as an increase in the current drawn by the load (causing an increase in the collector–emitter voltage, as required by Kirchhoff's voltage law), the transistor's base–emitter voltage (UBE) increases, turning the transistor on further and delivering more current to increase the load voltage again. Rv provides a bias current for both the Zener diode and the transistor. The current in the diode is minimal when the load current is maximal. The circuit designer must choose a minimum voltage that can be tolerated across Rv, bearing in mind that the higher this voltage requirement is, the higher the required input voltage Uin, and hence the lower the efficiency of the regulator. On the other hand, lower values of Rv lead to higher power dissipation in the diode and to inferior regulator characteristics. Rv is given by Rv = min VR / (min ID + max IL / hFE), where min VR is the minimum voltage to be maintained across Rv, min ID is the minimum current to be maintained through the Zener diode, max IL is the maximum design load current, and hFE is the forward current gain of the transistor (IC/IB).
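As a worked illustration of the relations just given, the short Python sketch below plugs assumed example values into Uout = UZ − UBE and the Rv expression. The 5.6 V Zener, 9 V minimum input, 5 mA minimum Zener current, 100 mA maximum load, and hFE of 50 are invented figures, not taken from the article.

def output_voltage(u_zener, u_be=0.7):
    # Uout = UZ - UBE for the simple series-pass regulator
    return u_zener - u_be

def bias_resistor(u_in_min, u_zener, min_i_zener, max_i_load, h_fe):
    # Rv = min(VR) / (min(ID) + max(IL)/hFE), taking min(VR) as Uin(min) - UZ
    min_v_r = u_in_min - u_zener
    return min_v_r / (min_i_zener + max_i_load / h_fe)

uz, uin_min = 5.6, 9.0
rv = bias_resistor(uin_min, uz, min_i_zener=5e-3, max_i_load=100e-3, h_fe=50)
print(f"Uout ~ {output_voltage(uz):.1f} V, Rv ~ {rv:.0f} ohm")   # about 4.9 V and 486 ohm

Raising the margin assumed across Rv would raise the required input voltage, while shrinking Rv would raise Zener dissipation, which is the trade-off the text describes.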
Regulator with a differential amplifier The stability of the output voltage can be significantly increased by using a differential amplifier, possibly implemented as an operational amplifier. In this case, the operational amplifier drives the transistor with more current if the voltage at its inverting input drops below the output of the voltage reference at the non-inverting input. Using the voltage divider (R1, R2 and R3) allows an arbitrary output voltage between Uz and Uin to be chosen. Regulator specification The output voltage can only be held constant within specified limits. The regulation is specified by two measurements: Load regulation is the change in output voltage for a given change in load current (for example, "typically 15 mV, maximum 100 mV for load currents between 5 mA and 1.4 A, at some specified temperature and input voltage"). Line regulation or input regulation is the degree to which output voltage changes with input (supply) voltage changes—as a ratio of output to input change (for example, "typically 13 mV/V"), or the output voltage change over the entire specified input voltage range (for example, "plus or minus 2% for input voltages between 90 V and 260 V, 50–60 Hz"). Other important parameters are: Temperature coefficient of the output voltage is the change with temperature (perhaps averaged over a given temperature range). Initial accuracy of a voltage regulator (or simply "the voltage accuracy") reflects the error in output voltage for a fixed regulator without taking into account temperature or aging effects on output accuracy. Dropout voltage is the minimum difference between input voltage and output voltage for which the regulator can still supply the specified current. The input-output differential at which the voltage regulator will no longer maintain regulation is the dropout voltage. Further reduction in input voltage will result in reduced output voltage. This value is dependent on load current and junction temperature. Inrush current or input surge current or switch-on surge is the maximum, instantaneous input current drawn by an electrical device when first turned on. Inrush current usually lasts from a few milliseconds up to about half a second, but it is often very high, which makes it dangerous because it can degrade and burn components gradually (over months or years), especially if there is no inrush current protection. Alternating current transformers or electric motors in automatic voltage regulators may draw several times their normal full-load current for a few cycles of the input waveform when first energized or switched on. Power converters also often have inrush currents much higher than their steady-state currents, due to the charging current of the input capacitance. Absolute maximum ratings are defined for regulator components, specifying the continuous and peak output currents that may be used (sometimes internally limited), the maximum input voltage, maximum power dissipation at a given temperature, etc. Output noise (thermal white noise) and output dynamic impedance may be specified as graphs versus frequency, while output ripple noise (mains "hum" or switch-mode "hash" noise) may be given as peak-to-peak or RMS voltages, or in terms of their spectra.
Quiescent current in a regulator circuit is the current drawn internally, not available to the load, normally measured as the input current while no load is connected and hence a source of inefficiency (some linear regulators are, surprisingly, more efficient at very low current loads than switch-mode designs because of this). Transient response is the reaction of a regulator when a (sudden) change of the load current (called the load transient) or input voltage (called the line transient) occurs. Some regulators will tend to oscillate or have a slow response time which in some cases might lead to undesired results. This value is different from the regulation parameters, as that is the stable situation definition. The transient response shows the behaviour of the regulator on a change. This data is usually provided in the technical documentation of a regulator and is also dependent on output capacitance. Mirror-image insertion protection means that a regulator is designed for use when a voltage, usually not higher than the maximum input voltage of the regulator, is applied to its output pin while its input terminal is at a low voltage, volt-free or grounded. Some regulators can continuously withstand this situation. Others might only manage it for a limited time such as 60 seconds (usually specified in the data sheet). For instance, this situation can occur when a three terminal regulator is incorrectly mounted on a PCB, with the output terminal connected to the unregulated DC input and the input connected to the load. Mirror-image insertion protection is also important when a regulator circuit is used in battery charging circuits, when external power fails or is not turned on and the output terminal remains at battery voltage. See also Charge controller Constant current regulator DC-to-DC converter List of LM-series integrated circuits Third-brush dynamo Voltage comparator Voltage regulator module References Further reading Linear & Switching Voltage Regulator Handbook; ON Semiconductor; 118 pages; 2002; HB206/D.(Free PDF download) Analog circuits Regulator
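To show how the quiescent-current figure defined above feeds into efficiency, here is a small illustrative Python calculation using the usual first-order estimate for a series linear regulator, efficiency ≈ (Uout·Iload) / (Uin·(Iload + Iq)). The 5 V input, 3.3 V output, and 2 mA quiescent current are assumed example numbers, not values from the article.

def linear_efficiency(v_in, v_out, i_load, i_q):
    # First-order efficiency estimate for a series linear regulator
    return (v_out * i_load) / (v_in * (i_load + i_q))

for i_load in (1e-3, 10e-3, 100e-3, 1.0):    # load current in amperes
    eta = linear_efficiency(v_in=5.0, v_out=3.3, i_load=i_load, i_q=2e-3)
    print(f"Iload = {i_load*1000:7.1f} mA -> efficiency ~ {eta*100:4.1f} %")

At light loads the fixed quiescent current dominates the result, which is why, as noted above, a linear regulator with very low quiescent current can beat a switch-mode design at very low load currents even though it can never exceed the Uout/Uin ceiling.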
Voltage regulator
[ "Physics", "Engineering" ]
5,092
[ "Physical quantities", "Voltage regulation", "Analog circuits", "Electronic engineering", "Voltage", "Voltage stability" ]
624,254
https://en.wikipedia.org/wiki/%C4%BDubor%20Kres%C3%A1k
Ľubor Kresák (23 August 1927 in Topoľčany – 20 January 1994 in Bratislava) was a Slovak astronomer. He discovered two comets: the periodic comet 41P/Tuttle-Giacobini-Kresak and the non-periodic C/1954 M2 (Kresak-Peltier). He also suggested in 1978 that the Tunguska event was caused by a fragment of the periodic comet Encke. The asteroid 1849 Kresák was named in his honor. His wife Margita Kresáková was also an astronomer. References External links Publications by Ľ. Kresák in Astrophysics Data System 1927 births 1994 deaths Czechoslovak astronomers People from Topoľčany Tunguska event
Ľubor Kresák
[ "Physics" ]
147
[ "Unsolved problems in physics", "Tunguska event" ]
624,269
https://en.wikipedia.org/wiki/Zde%C5%88ka%20V%C3%A1vrov%C3%A1
Zdeňka Vávrová (born 1945) is a Czech astronomer. She co-discovered the periodic comet 134P/Kowal-Vávrová. She had observed it as an asteroid, which received the provisional designation 1983 JG, without seeing any cometary coma. However, later images by Charles T. Kowal showed a coma. The Minor Planet Center credits her with the discovery of 115 numbered minor planets. The Florian main-belt asteroid 3364 Zdenka, discovered by Antonín Mrkos in 1984, was named in her honor and for the 20 years she had participated in Kleť Observatory's minor planet astrometry program. The naming citation was published on 26 February 1994. List of discovered minor planets See also References External links Zdenka Vávrová - Czech and Slovak comet discoverers 1945 births Czechoslovak astronomers Discoverers of asteroids Discoverers of comets Living people Women astronomers
Zdeňka Vávrová
[ "Astronomy" ]
183
[ "Women astronomers", "Astronomers" ]
624,291
https://en.wikipedia.org/wiki/Bertrand%20paradox%20%28probability%29
The Bertrand paradox is a problem within the classical interpretation of probability theory. Joseph Bertrand introduced it in his work Calcul des probabilités (1889) as an example to show that the principle of indifference may not produce definite, well-defined results for probabilities if it is applied uncritically when the domain of possibilities is infinite. Bertrand's formulation of the problem The Bertrand paradox is generally presented as follows: Consider an equilateral triangle that is inscribed in a circle. Suppose a chord of the circle is chosen at random. What is the probability that the chord is longer than a side of the triangle? Bertrand gave three arguments (each using the principle of indifference), all apparently valid yet yielding different results: The "random endpoints" method: Choose two random points on the circumference of the circle and draw the chord joining them. To calculate the probability in question imagine the triangle rotated so its vertex coincides with one of the chord endpoints. Observe that if the other chord endpoint lies on the arc between the endpoints of the triangle side opposite the first point, the chord is longer than a side of the triangle. The length of the arc is one third of the circumference of the circle, therefore the probability that a random chord is longer than a side of the inscribed triangle is 1/3. The "random radial point" method: Choose a radius of the circle, choose a point on the radius and construct the chord through this point and perpendicular to the radius. To calculate the probability in question imagine the triangle rotated so a side is perpendicular to the radius. The chord is longer than a side of the triangle if the chosen point is nearer the center of the circle than the point where the side of the triangle intersects the radius. The side of the triangle bisects the radius, therefore the probability that a random chord is longer than a side of the inscribed triangle is 1/2. The "random midpoint" method: Choose a point anywhere within the circle and construct a chord with the chosen point as its midpoint. The chord is longer than a side of the inscribed triangle if the chosen point falls within a concentric circle of radius one half the radius of the larger circle. The area of the smaller circle is one fourth the area of the larger circle, therefore the probability that a random chord is longer than a side of the inscribed triangle is 1/4. These three selection methods differ as to the weight they give to chords which are diameters. This issue can be avoided by "regularizing" the problem so as to exclude diameters, without affecting the resulting probabilities. But as presented above, in method 1, each chord can be chosen in exactly one way, regardless of whether or not it is a diameter; in method 2, each diameter can be chosen in two ways, whereas each other chord can be chosen in only one way; and in method 3, each choice of midpoint corresponds to a single chord, except the center of the circle, which is the midpoint of all the diameters. Other selection methods have been found. In fact, there exists an infinite family of them. Classical solution The problem's classical solution (presented, for example, in Bertrand's own work) depends on the method by which a chord is chosen "at random". The argument is that if the method of random selection is specified, the problem will have a well-defined solution (determined by the principle of indifference).
The three solutions presented by Bertrand correspond to different selection methods, and in the absence of further information there is no reason to prefer one over another; accordingly, the problem as stated has no unique solution. Jaynes's solution using the "maximum ignorance" principle In his 1973 paper "The Well-Posed Problem", Edwin Jaynes proposed a solution to Bertrand's paradox based on the principle of "maximum ignorance"—that we should not use any information that is not given in the statement of the problem. Jaynes pointed out that Bertrand's problem does not specify the position or size of the circle and argued that therefore any definite and objective solution must be "indifferent" to size and position. In other words: the solution must be both scale and translation invariant. To illustrate: assume that chords are laid at random onto a circle with a diameter of 2, say by throwing straws onto it from far away and converting them to chords by extension/restriction. Now another circle with a smaller diameter (e.g., 1.1) is laid into the larger circle. Then the distribution of the chords on that smaller circle needs to be the same as the restricted distribution of chords on the larger circle (again using extension/restriction of the generating straws). Thus, if the smaller circle is moved around within the larger circle, the restricted distribution should not change. It can be seen very easily that there would be a change for method 3: the chord distribution on the smaller circle looks qualitatively different from the distribution on the larger circle. The same occurs for method 1, though it is harder to see in a graphical representation. Method 2 is the only one that is both scale invariant and translation invariant; method 3 is only scale invariant, and method 1 is neither. However, Jaynes did not just use invariances to accept or reject given methods: this would leave the possibility that there is another not yet described method that would meet his common-sense criteria. Jaynes used the integral equations describing the invariances to directly determine the probability distribution. In this problem, the integral equations indeed have a unique solution, and it is precisely what was called "method 2" above, the random radial point method. In a 2015 article, Alon Drory argued that Jaynes' principle can also yield Bertrand's other two solutions. Drory argues that the mathematical implementation of the above invariance properties is not unique, but depends on the underlying procedure of random selection that one uses (as mentioned above, Jaynes used a straw-throwing method to choose random chords). He shows that each of Bertrand's three solutions can be derived using rotational, scaling, and translational invariance, concluding that Jaynes' principle is just as subject to interpretation as the principle of indifference itself. For example, we may consider throwing a dart at the circle, and drawing the chord having the chosen point as its center. Then the unique distribution which is translation, rotation, and scale invariant is the one called "method 3" above. Likewise, "method 1" is the unique invariant distribution for a scenario where a spinner is used to select one endpoint of the chord, and then used again to select the orientation of the chord. Here the invariance in question consists of rotational invariance for each of the two spins.
It is also the unique scale and rotation invariant distribution for a scenario where a rod is placed vertically over a point on the circle's circumference, and allowed to drop to the horizontal position (conditional on it landing partly inside the circle). Physical experiments "Method 2" is the only solution that fulfills the transformation invariants that are present in certain physical systems—such as in statistical mechanics and gas physics—in the specific case of Jaynes's proposed experiment of throwing straws from a distance onto a small circle. Nevertheless, one can design other practical experiments that give answers according to the other methods. For example, in order to arrive at the solution of "method 1", the random endpoints method, one can affix a spinner to the center of the circle, and let the results of two independent spins mark the endpoints of the chord. In order to arrive at the solution of "method 3", one could cover the circle with molasses and mark the first point that a fly lands on as the midpoint of the chord. Several observers have designed experiments in order to obtain the different solutions and verified the results empirically. Notes Further reading External links Eponymous paradoxes Probability theory paradoxes Mathematical paradoxes
Bertrand paradox (probability)
[ "Mathematics" ]
1,639
[ "Probability theory paradoxes", "Mathematical problems", "Mathematical paradoxes" ]
624,313
https://en.wikipedia.org/wiki/Mating%20system
A mating system is a way in which a group is structured in relation to sexual behaviour. The precise meaning depends upon the context. With respect to animals, the term describes which males and females mate under which circumstances. Recognised systems include monogamy, polygamy (which includes polygyny, polyandry, and polygynandry), and promiscuity, all of which lead to different mate choice outcomes and thus these systems affect how sexual selection works in the species which practice them. In plants, the term refers to the degree and circumstances of outcrossing. In human sociobiology, the terms have been extended to encompass the formation of relationships such as marriage. In plants The primary mating systems in plants are outcrossing (cross-fertilisation), autogamy (self-fertilisation) and apomixis (asexual reproduction without fertilization, but only when arising by modification of sexual function). Mixed mating systems, in which plants use two or even all three mating systems, are not uncommon. A number of models have been used to describe the parameters of plant mating systems. The basic model is the mixed mating model, which is based on the assumption that every fertilisation is either self-fertilisation or completely random cross-fertilisation. More complex models relax this assumption; for example, the effective selfing model recognises that mating may be more common between pairs of closely related plants than between pairs of distantly related plants. In animals The following are some of the mating systems generally recognized in animals: Monogamy: One male and one female have an exclusive mating relationship. The term "pair bonding" often implies this. This is associated with one-male, one-female group compositions. There are two types of monogamy: type 1, which is facultative, and type 2, which is obligate. Facultative monogamy occurs when there are very low densities in a species. This means that mating occurs with only a single member of the opposite sex because males and females are very far apart. When a female needs aid from conspecifics in order to have a litter, this is obligate monogamy. However, in this case the habitat's carrying capacity is so small that only one female can breed within the habitat. Polygamy: Three types are recognized: Polygyny (the most common polygamous mating system in vertebrates so far studied): One male has an exclusive relationship with two or more females. This is associated with one-male, multi-female group compositions. Many perennial Vespula squamosa (southern yellowjacket) colonies are polygynous. Different types of polygyny exist, such as lek polygyny and resource defense polygyny. Grayling butterflies (Hipparchia semele) engage in resource defense polygyny, where females choose a territorial male based on the best oviposition site. Although most animals opt for only one of these strategies, some exhibit hybrid strategies, such as the bee species Xylocopa micans. Polyandry: One female has an exclusive relationship with two or more males. This is very rare and is associated with multi-male, multi-female group compositions. Genetic polyandry is found in some insect species such as Apis mellifera (the Western Honey Bee), in which a virgin queen will mate with multiple drones during her nuptial flight, whereas each drone dies immediately after mating once. The queen will then store the sperm collected from these multiple matings in her spermatheca to use to fertilize eggs throughout the course of her entire reproductive life. 
Polygynandry: Polygynandry is a slight variation of this, where two or more males have an exclusive relationship with two or more females; the numbers of males and females do not have to be equal, and in vertebrate species studied so far, the number of males is usually less. This is associated with multi-male, multi-female group compositions. Promiscuity: A member of one sex within the social group mates with any member of the opposite sex. This is associated with multi-male, multi-female group compositions. These mating relationships may or may not be associated with social relationships, in which the sexual partners stay together to become parenting partners. As the alternative term "pair bonding" implies, this is usual in monogamy. In many polyandrous systems, the males and the female stay together to rear the young. In polygynous systems where the number of females paired with each male is low, the male will often stay with one female to help rear the young, while the other females rear their young on their own. In polygynandry, each of the males may assist one female; if all adults help rear all the young, the system is more usually called "communal breeding". In highly polygynous systems, and in promiscuous systems, paternal care of young is rare, or there may be no parental care at all. These descriptions are idealized, and the social partnerships are often easier to observe than the mating relationships. In particular: the relationships are rarely exclusive for all individuals in a species. DNA fingerprinting studies have shown that even in pair-bonding, matings outside the pair (extra-pair copulations) occur with fair frequency, and a significant minority of offspring result from them. However, the offspring that are a result of extra-pair copulations usually exhibit more advantageous genes. These genes can be associated with improvements in appearance, mating, and the functioning of internal body systems. some species show different mating systems in different circumstances, for example in different parts of their geographical range, or under different conditions of food availability; mixtures of the simple systems described above may occur. Sexual conflict occurs between individuals of different sexes that have separate or conflicting requirements for optimal mating success. This conflict may lead to competitive adaptations and co-adaptations of one or both of the sexes to maintain mating processes that are beneficial to that sex. Intralocus sexual conflict and interlocus sexual conflict describe the genetic influence behind sexual conflict, and are presently recognized as the most basic forms of sexual conflict. In humans Compared to other vertebrates, where a species usually has a single mating system, humans display great variety. Humans also differ by having formal marriages which in some cultures involve negotiation and arrangement between elder relatives. Regarding sexual dimorphism (see the section about animals above), humans are in the intermediate group with moderate sex differences in body size but with relatively small testes, indicating relatively low sperm competition in socially monogamous and polygynous human societies. One estimate is that 83% of human societies are polygynous, 0.05% are polyandrous, and the rest are monogamous. Even the last group may at least in part be genetically polygynous. 
From an evolutionary standpoint, females are more prone to practice monogamy because their reproductive success is based on the resources they are able to acquire through reproduction rather than the quantity of offspring they produce. However, males are more likely to practice polygamy because their reproductive success is based on the amount of offspring they produce, rather than any kind of benefit from parental investment. Polygyny is associated with an increased sharing of subsistence provided by women. This is consistent with the theory that if women raise the children alone, men can concentrate on the mating effort. Polygyny is also associated with greater environmental variability in the form of variability of rainfall. This may increase the differences in the resources available to men. An important association is that polygyny is associated with a higher pathogen load in an area which may make having good genes in a male increasingly important. A high pathogen load also decreases the relative importance of sororal polygyny which may be because it becomes increasingly important to have genetic variability in the offspring (See Major histocompatibility complex and sexual selection). Virtually all the terms used to describe animal mating systems were adopted from social anthropology, where they had been devised to describe systems of marriage. This shows that human sexual behavior is unusually flexible since, in most animal species, one mating system dominates. While there are close analogies between animal mating systems and human marriage institutions, these analogies should not be pressed too far, because in human societies, marriages typically have to be recognized by the entire social group in some way, and there is no equivalent process in animal societies. The temptation to draw conclusions about what is "natural" for human sexual behavior from observations of animal mating systems should be resisted: a socio-biologist observing the kinds of behavior shown by humans in any other species would conclude that all known mating systems were natural for that species, depending on the circumstances or on individual differences. As culture increasingly affects human mating choices, ascertaining what is the 'natural' mating system of the human animal from a zoological perspective becomes increasingly difficult. Some clues can be taken from human anatomy, which is essentially unchanged from the prehistoric past: humans have a small relative size of testes to body mass in comparison to most primates; humans have a small ejaculate volume and sperm count in comparison to other primates; as compared to most primates, humans spend more time in copulation; as compared to most primates, humans copulate with lower frequency; the outward signs of estrus in women (i.e. 
higher body temperature, breast swelling, sugar cravings, etc.), are often perceived to be less obvious in comparison to the outward signs of ovulation in most other mammals; for most mammals, the estrous cycle and its outward signs bring on mating activity; the majority of female-initiated matings in humans coincides with estrus, but humans copulate throughout the reproductive cycle; after ejaculation/orgasm in males and females, humans release a hormone that has a sedative effect; Some have suggested that these anatomical factors signify some degree of sperm competition, although others have provided anatomical evidence to suggest that sperm competition risk in humans is low; humans have a small ejaculate volume and sperm count in comparison to other primates, even though levels of genetic and societal promiscuity are highly varied across cultures, Genetic causes and effects Monogamy has evolved multiple times in animals, with homologous brain structures predicting the mating and parental strategies used by them. These homologous structures were brought about by similar mechanisms. Even though there have been many different evolutionary pathways to get to monogamy, all the studied organisms express their genes very similarly in the fore and midbrain, implying a universal mechanism for the evolution of monogamy in vertebrates. While genetics is not the exclusive cause of mating systems within animals, it is influential in many animals, particularly rodents, which have been the most heavily researched. Certain rodents’ mating systems—monogamous, polygynous, or socially monogamous with frequent promiscuity—are correlated with suggested evolutionary phylogenies, where rodents more closely related genetically are more likely to use a similar mating system, suggesting an evolutionary basis. These differences in mating strategy can be traced back to a few significant alleles that affect behaviors that are heavily influential on mating system, such as the alleles responsible for the level of parental care, how animals choose their partner(s), and sexual competitiveness, among others, which are all at least partially influenced by genetics. While these genes may not perfectly correlate with the mating system that animals use, genetics is one factor that may lead to a species or population reproducing using one mating system over another, or even potentially multiple at different locations or points in time. Mating systems can also have large impacts on the genetics of a population, strongly affecting natural selection and speciation. In plover populations, polygamous species tend to speciate more slowly than monogamous species do. This is likely because polygamous animals tend to move larger distances to find mates, contributing to a high level of gene flow, which can genetically homogenize many nearby subpopulations. Monogamous animals, on the other hand, tend to stay closer to their starting location, not dispersing as much. Because monogamous animals don’t migrate as far, monogamous populations which are geographically closer together tend to reproductively isolate from each other more easily, and thus each subpopulation is more likely to diversify or speciate from the other nearby populations as compared to polygamous populations. In polygamous species, however, the male partner in polygynous species and female partner in polyandrous species often tend to spread further to look for mates, potentially to find more or better mates. 
The increased level of movement among populations leads to increased gene flow between populations, effectively making geographically distinct populations into genetically similar ones via interbreeding. This has been observed in some species of rodents, where generally promiscuous species were quickly differentiated into monogamous and polygamous taxa by a prominent introduction of monogamous behaviors in some populations of that species, showing the swift evolutionary effects different mating systems can have. Specifically, monogamous populations speciated up to 4.8 times faster and had lower extinction rates than non monogamous populations. Another way that monogamy has the potential to cause increased speciation is because individuals are more selective with partners and competition, causing different nearby populations of the same species to stop interbreeding as much, leading to speciation down the road. Another potential effect of polyandry in particular is increasing the quality of offspring and reducing the probability of reproductive failure. There are many possible reasons for this, one of the possibilities being that there is greater genetic variation in families because most offspring in a family will have either a different mother or father. This reduces the potential harm done by inbreeding, as siblings will be less closely related and more genetically diverse. Additionally, because of the increased genetic diversity among generations, the levels of reproductive fitness are also more variable, and so it is easier to select for positive traits more quickly, as the difference in fitness between members of the same generation would be greater. When many males are actively mating, polyandry can decrease the risk of extinction as well, as it can increase the effective population size. Increased effective population sizes are more stable and less prone to accumulating deleterious mutations due to genetic drift. In microorganisms Bacteria Mating in bacteria involves transfer of DNA from one cell to another and incorporation of the transferred DNA into the recipient bacteria's genome by homologous recombination. Transfer of DNA between bacterial cells can occur in three main ways. First, a bacterium can take up exogenous DNA released into the intervening medium from another bacterium by a process called transformation. DNA can also be transferred from one bacterium to another by the process of transduction, which is mediated by an infecting virus (bacteriophage). The third method of DNA transfer is conjugation, in which a plasmid mediates transfer through direct cell contact between cells. Transformation, unlike transduction or conjugation, depends on numerous bacterial gene products that specifically interact to perform this complex process, and thus transformation is clearly a bacterial adaptation for DNA transfer. In order for a bacterium to bind, take up and recombine donor DNA into its own chromosome, it must first enter a special physiological state termed natural competence. In Bacillus subtilis about 40 genes are required for the development of competence and DNA uptake. The length of DNA transferred during B. subtilis transformation can be as much as a third and up to the whole chromosome. Transformation appears to be common among bacterial species, and at least 60 species are known to have the natural ability to become competent for transformation. 
The development of competence in nature is usually associated with stressful environmental conditions, and seems to be an adaptation for facilitating repair of DNA damage in recipient cells. Archaea In several species of archaea, mating is mediated by formation of cellular aggregates. Halobacterium volcanii, an extreme halophilic archaeon, forms cytoplasmic bridges between cells that appear to be used for transfer of DNA from one cell to another in either direction. When the hyperthermophilic archaea Sulfolobus solfataricus and Sulfolobus acidocaldarius are exposed to the DNA damaging agents UV irradiation, bleomycin or mitomycin C, species-specific cellular aggregation is induced. Aggregation in S. solfataricus could not be induced by other physical stressors, such as pH or temperature shift, suggesting that aggregation is induced specifically by DNA damage. Ajon et al. showed that UV-induced cellular aggregation mediates chromosomal marker exchange with high frequency in S. acidocaldarius. Recombination rates exceeded those of uninduced cultures by up to three orders of magnitude. Frols et al. and Ajon et al. hypothesized that cellular aggregation enhances species-specific DNA transfer between Sulfolobus cells in order to provide increased repair of damaged DNA by means of homologous recombination. This response appears to be a primitive form of sexual interaction similar to the more well-studied bacterial transformation systems that are also associated with species specific DNA transfer between cells leading to homologous recombinational repair of DNA damage. Protists Protists are a large group of diverse eukaryotic microorganisms, mainly unicellular animals and plants, that do not form tissues. Eukaryotes emerged in evolution more than 1.5 billion years ago. The earliest eukaryotes were likely protists. Mating and sexual reproduction are widespread among extant eukaryotes. Based on a phylogenetic analysis, Dacks and Roger proposed that facultative sex was present in the common ancestor of all eukaryotes. However, to many biologists it seemed unlikely until recently, that mating and sex could be a primordial and fundamental characteristic of eukaryotes. A principal reason for this view was that mating and sex appeared to be lacking in certain pathogenic protists whose ancestors branched off early from the eukaryotic family tree. However, several of these protists are now known to be capable of, or to recently have had, the capability for meiosis and hence mating. To cite one example, the common intestinal parasite Giardia intestinalis was once considered to be a descendant of a protist lineage that predated the emergence of meiosis and sex. However, G. intestinalis was recently found to have a core set of genes that function in meiosis and that are widely present among sexual eukaryotes. These results suggested that G. intestinalis is capable of meiosis and thus mating and sexual reproduction. Furthermore, direct evidence for meiotic recombination, indicative of mating and sexual reproduction, was also found in G. intestinalis. Other protists for which evidence of mating and sexual reproduction has recently been described are parasitic protozoa of the genus Leishmania, Trichomonas vaginalis, and acanthamoeba. Protists generally reproduce asexually under favorable environmental conditions, but tend to reproduce sexually under stressful conditions, such as starvation or heat shock. Viruses Both animal viruses and bacterial viruses (bacteriophage) are able to undergo mating. 
When a cell is mixedly infected by two genetically marked viruses, recombinant virus progeny are often observed indicating that mating interaction had occurred at the DNA level. Another manifestation of mating between viral genomes is multiplicity reactivation (MR). MR is the process by which at least two virus genomes, each containing inactivating genome damage, interact with each other in an infected cell to form viable progeny viruses. The genes required for MR in bacteriophage T4 are largely the same as the genes required for allelic recombination. Examples of MR in animal viruses are described in the articles Herpes simplex virus, Influenza A virus, Adenoviridae, Simian virus 40, Vaccinia virus, and Reoviridae. See also References Further reading Ecology Ethology Fertility Sexual selection Sociobiology Heterosexuality
Mating system
[ "Biology" ]
4,178
[ "Evolutionary processes", "Behavior", "Ecology", "Behavioural sciences", "Sociobiology", "Ethology", "Sexual selection", "Mating systems", "Mating" ]
624,361
https://en.wikipedia.org/wiki/Autophagy
Autophagy (or autophagocytosis; from the Greek αὐτόφαγος, autóphagos, meaning "self-devouring", and κύτος, kýtos, meaning "hollow") is the natural, conserved degradation of the cell that removes unnecessary or dysfunctional components through a lysosome-dependent regulated mechanism. It allows the orderly degradation and recycling of cellular components. Although initially characterized as a primordial degradation pathway induced to protect against starvation, it has become increasingly clear that autophagy also plays a major role in the homeostasis of non-starved cells. Defects in autophagy have been linked to various human diseases, including neurodegeneration and cancer, and interest in modulating autophagy as a potential treatment for these diseases has grown rapidly. Four forms of autophagy have been identified: macroautophagy, microautophagy, chaperone-mediated autophagy (CMA), and crinophagy. In macroautophagy (the most thoroughly researched form of autophagy), cytoplasmic components (like mitochondria) are targeted and isolated from the rest of the cell within a double-membrane vesicle known as an autophagosome, which, in time, fuses with an available lysosome, bringing its specialty process of waste management and disposal; and eventually the contents of the vesicle (now called an autolysosome) are degraded and recycled. In crinophagy (the least well-known and researched form of autophagy), unnecessary secretory granules are degraded and recycled. In disease, autophagy has been seen as an adaptive response to stress, promoting survival of the cell; but in other cases, it appears to promote cell death and morbidity. In the extreme case of starvation, the breakdown of cellular components promotes cellular survival by maintaining cellular energy levels. The word "autophagy" was in existence and frequently used from the middle of the 19th century. In its present usage, the term autophagy was coined by Belgian biochemist Christian de Duve in 1963 based on his discovery of the functions of the lysosome. The identification of autophagy-related genes in yeast in the 1990s allowed researchers to deduce the mechanisms of autophagy, which eventually led to the award of the 2016 Nobel Prize in Physiology or Medicine to Japanese researcher Yoshinori Ohsumi. History Autophagy was first observed by Keith R. Porter and his student Thomas Ashford at the Rockefeller Institute. In January 1962 they reported an increased number of lysosomes in rat liver cells after the addition of glucagon, and that some displaced lysosomes towards the centre of the cell contained other cell organelles such as mitochondria. They called this autolysis after Christian de Duve and Alex B. Novikoff. However, Porter and Ashford wrongly interpreted their data as lysosome formation (ignoring the pre-existing organelles): in their view, lysosomes could not be cell organelles but arose from parts of the cytoplasm such as mitochondria, and hydrolytic enzymes were produced by microbodies. In 1963 Hruban, Spargo and colleagues published a detailed ultrastructural description of "focal cytoplasmic degradation", which referenced a 1955 German study of injury-induced sequestration. Hruban, Spargo and colleagues recognized three continuous stages of maturation of the sequestered cytoplasm to lysosomes, and found that the process was not limited to injury states but also functioned under physiological conditions for the "reutilization of cellular materials" and the "disposal of organelles" during differentiation. Inspired by this discovery, de Duve christened the phenomenon "autophagy". 
Unlike Porter and Ashford, de Duve conceived the term as a part of lysosomal function while describing the role of glucagon as a major inducer of cell degradation in the liver. With his student Russell Deter, he established that lysosomes are responsible for glucagon-induced autophagy. This was the first time lysosomes were established as the sites of intracellular autophagy. In the 1990s several groups of scientists independently discovered autophagy-related genes using the budding yeast. Notably, Yoshinori Ohsumi and Michael Thumm examined starvation-induced non-selective autophagy; in the meantime, Daniel J. Klionsky discovered the cytoplasm-to-vacuole targeting (CVT) pathway, which is a form of selective autophagy. They soon found that they were in fact looking at essentially the same pathway, just from different angles. Initially, the genes discovered by these and other yeast groups were given different names (APG, AUT, CVT, GSA, PAG, PAZ, and PDD). A unified nomenclature was advocated in 2003 by the yeast researchers to use ATG to denote autophagy genes. The 2016 Nobel Prize in Physiology or Medicine was awarded to Yoshinori Ohsumi, although some have pointed out that the award could have been more inclusive. The field of autophagy research experienced accelerated growth at the turn of the 21st century. Knowledge of ATG genes provided scientists with more convenient tools to dissect functions of autophagy in human health and disease. In 1999, a landmark discovery connecting autophagy with cancer was published by Beth Levine's group. To this date, the relationship between cancer and autophagy continues to be a main theme of autophagy research. The roles of autophagy in neurodegeneration and immune defense also received considerable attention. In 2003, the first Gordon Research Conference on autophagy was held at Waterville. In 2005, Daniel J. Klionsky launched Autophagy, a scientific journal dedicated to this field. The first Keystone Symposia on autophagy was held in 2007 at Monterey. In 2008, Carol A. Mercer created a BHMT fusion protein (GST-BHMT), which showed starvation-induced site-specific fragmentation in cell lines. The degradation of betaine homocysteine methyltransferase (BHMT), a metabolic enzyme, could be used to assess autophagy flux in mammalian cells. Macroautophagy, microautophagy, and chaperone-mediated autophagy are all mediated by autophagy-related genes and their associated enzymes. Macroautophagy is then divided into bulk and selective autophagy. Selective autophagy comprises the autophagy of specific organelles and structures: mitophagy, lipophagy, pexophagy, chlorophagy, ribophagy and others. Macroautophagy is the main pathway, used primarily to eradicate damaged cell organelles or unused proteins. First the phagophore engulfs the material that needs to be degraded, forming a double membrane known as an autophagosome around the organelle marked for destruction. The autophagosome then travels through the cytoplasm of the cell to a lysosome in mammals, or vacuoles in yeast and plants, and the two organelles fuse. Within the lysosome/vacuole, the contents of the autophagosome are degraded via acidic lysosomal hydrolases. Microautophagy, on the other hand, involves the direct engulfment of cytoplasmic material into the lysosome. This occurs by invagination, meaning the inward folding of the lysosomal membrane, or cellular protrusion. Chaperone-mediated autophagy, or CMA, is a very complex and specific pathway, which involves recognition by the hsc70-containing complex. 
This means that a protein must contain the recognition site for this hsc70 complex, which will allow it to bind to this chaperone, forming the CMA substrate/chaperone complex. This complex then moves to the lysosomal membrane-bound protein that will recognise and bind with the CMA receptor. Upon recognition, the substrate protein gets unfolded and it is translocated across the lysosome membrane with the assistance of the lysosomal hsc70 chaperone. CMA is significantly different from other types of autophagy because it translocates protein material in a one-by-one manner, and it is extremely selective about what material crosses the lysosomal barrier. Mitophagy is the selective degradation of mitochondria by autophagy. It often occurs in defective mitochondria following damage or stress. Mitophagy promotes the turnover of mitochondria and prevents the accumulation of dysfunctional mitochondria, which can lead to cellular degeneration. It is mediated by Atg32 (in yeast) and NIX and its regulator BNIP3 in mammals. Mitophagy is regulated by PINK1 and parkin proteins. The occurrence of mitophagy is not limited to the damaged mitochondria but also involves undamaged ones. Lipophagy is the degradation of lipids by autophagy, a function which has been shown to exist in both animal and fungal cells. The role of lipophagy in plant cells, however, remains elusive. In lipophagy the targets are lipid structures called lipid droplets (LDs), spherical "organelles" with a core of mainly triacylglycerols (TAGs) and a unilayer of phospholipids and membrane proteins. In animal cells the main lipophagic pathway is via the engulfment of LDs by the phagophore, i.e. macroautophagy. In fungal cells, on the other hand, microlipophagy constitutes the main pathway and is especially well studied in the budding yeast Saccharomyces cerevisiae. Lipophagy was first discovered in mice and published in 2009. Targeted interplay between bacterial pathogens and host autophagy Autophagy targets genus-specific proteins, so orthologous proteins which share sequence homology with each other are recognized as substrates by a particular autophagy targeting protein. There exists a complementarity of autophagy targeting proteins which potentially increases infection risk upon mutation. The lack of overlap among the targets of the 3 autophagy proteins and the large overlap in terms of the genera show that autophagy could target different sets of bacterial proteins from the same pathogen. On one hand, the redundancy in targeting the same genera is beneficial for robust pathogen recognition. But, on the other hand, the complementarity in the specific bacterial proteins could make the host more susceptible to chronic disorders and infections if the gene encoding one of the autophagy targeting proteins becomes mutated, and the autophagy system is overloaded or suffers other malfunctions. Moreover, autophagy targets virulence factors, and virulence factors responsible for more general functions such as nutrient acquisition and motility are recognized by multiple autophagy targeting proteins. Specialized virulence factors, such as autolysins and iron-sequestering proteins, are potentially recognized uniquely by a single autophagy targeting protein. The autophagy proteins CALCOCO2/NDP52 and MAP1LC3/LC3 may have evolved specifically to target pathogens or pathogenic proteins for autophagic degradation. SQSTM1/p62, by contrast, targets more generic bacterial proteins that contain a target motif but are not related to virulence. 
On the other hand, bacterial proteins from various pathogenic genera are also able to modulate autophagy. There are genus-specific patterns in the phases of autophagy that are potentially regulated by a given pathogen group. Some autophagy phases can only be modulated by particular pathogens, while some phases are modulated by multiple pathogen genera. Some of the interplay-related bacterial proteins have proteolytic and post-translational activity, such as phosphorylation and ubiquitination, and can interfere with the activity of autophagy proteins. Molecular biology ATG is short for "AuTophaGy"-related, which is applied to both genes and proteins related to the biological process of autophagy. There are about 16-20 conserved ATG genes coding for many core ATG proteins conserved from yeast to humans. ATG may be part of the protein name (such as ATG7) or part of the gene name (such as ATG7), although not all ATG proteins and genes follow this pattern (such as ULK1). To give specific examples, the ULK1 enzyme (kinase complex) induces autophagosome biogenesis, and ATG13 (Autophagy-related protein 13) is required for phagosome formation. Autophagy is executed by ATG genes. Prior to 2003, ten or more names were used, but after this point a unified nomenclature was devised by fungal autophagy researchers. The first autophagy genes were identified by genetic screens conducted in Saccharomyces cerevisiae. Following their identification, those genes were functionally characterized and their orthologs in a variety of different organisms were identified and studied. Today, thirty-six Atg proteins have been classified as especially important for autophagy, of which 18 belong to the core machinery. In mammals, amino acid sensing and additional signals such as growth factors and reactive oxygen species regulate the activity of the protein kinases mTOR and AMPK. These two kinases regulate autophagy through inhibitory phosphorylation of the Unc-51-like kinases ULK1 and ULK2 (mammalian homologues of Atg1). Induction of autophagy results in the dephosphorylation and activation of the ULK kinases. ULK is part of a protein complex containing Atg13, Atg101 and FIP200. ULK phosphorylates and activates Beclin-1 (mammalian homologue of Atg6), which is also part of a protein complex. The autophagy-inducible Beclin-1 complex contains the proteins PIK3R4(p150), Atg14L and the class III phosphatidylinositol 3-phosphate kinase (PI(3)K) Vps34. The active ULK and Beclin-1 complexes re-localize to the site of autophagosome initiation, the phagophore, where they both contribute to the activation of downstream autophagy components. Once active, VPS34 phosphorylates the lipid phosphatidylinositol to generate phosphatidylinositol 3-phosphate (PtdIns(3)P) on the surface of the phagophore. The generated PtdIns(3)P is used as a docking point for proteins harboring a PtdIns(3)P binding motif. WIPI2, a PtdIns(3)P binding protein of the WIPI (WD-repeat protein interacting with phosphoinositides) protein family, was recently shown to physically bind ATG16L1. Atg16L1 is a member of an E3-like protein complex involved in one of two ubiquitin-like conjugation systems essential for autophagosome formation. The FIP200 cis-Golgi-derived membranes fuse with ATG16L1-positive endosomal membranes to form the prophagophore termed HyPAS (hybrid pre-autophagosomal structure). ATG16L1 binding to WIPI2 mediates ATG16L1's activity. 
This leads to downstream conversion of prophagophore into ATG8-positive phagophore via a ubiquitin-like conjugation system. The first of the two ubiquitin-like conjugation systems involved in autophagy covalently binds the ubiquitin-like protein Atg12 to Atg5. The resulting conjugate protein then binds ATG16L1 to form an E3-like complex which functions as part of the second ubiquitin-like conjugation system. This complex binds and activates Atg3, which covalently attaches mammalian homologues of the ubiquitin-like yeast protein ATG8 (LC3A-C, GATE16, and GABARAPL1-3), the most studied being LC3 proteins, to the lipid phosphatidylethanolamine (PE) on the surface of autophagosomes. Lipidated LC3 contributes to the closure of autophagosomes, and enables the docking of specific cargos and adaptor proteins such as Sequestosome-1/p62. The completed autophagosome then fuses with a lysosome through the actions of multiple proteins, including SNAREs and UVRAG. Following the fusion LC3 is retained on the vesicle's inner side and degraded along with the cargo, while the LC3 molecules attached to the outer side are cleaved off by Atg4 and recycled. The contents of the autolysosome are subsequently degraded and their building blocks are released from the vesicle through the action of permeases. Sirtuin 1 (SIRT1) stimulates autophagy by preventing acetylation of proteins (via deacetylation) required for autophagy as demonstrated in cultured cells and embryonic and neonatal tissues. This function provides a link between sirtuin expression and the cellular response to limited nutrients due to caloric restriction. Functions Nutrient starvation Autophagy has roles in various cellular functions. One particular example is in yeasts, where the nutrient starvation induces a high level of autophagy. This allows unneeded proteins to be degraded and the amino acids recycled for the synthesis of proteins that are essential for survival. In higher eukaryotes, autophagy is induced in response to the nutrient depletion that occurs in animals at birth after severing off the trans-placental food supply, as well as that of nutrient starved cultured cells and tissues. Mutant yeast cells that have a reduced autophagic capability rapidly perish in nutrition-deficient conditions. Studies on the apg mutants suggest that autophagy via autophagic bodies is indispensable for protein degradation in the vacuoles under starvation conditions, and that at least 15 APG genes are involved in autophagy in yeast. A gene known as ATG7 has been implicated in nutrient-mediated autophagy, as mice studies have shown that starvation-induced autophagy was impaired in atg7-deficient mice. Infection Vesicular stomatitis virus is believed to be taken up by the autophagosome from the cytosol and translocated to the endosomes where detection takes place by a pattern recognition receptor called toll-like receptor 7, detecting single stranded RNA. Following activation of the toll-like receptor, intracellular signaling cascades are initiated, leading to induction of interferon and other antiviral cytokines. A subset of viruses and bacteria subvert the autophagic pathway to promote their own replication. Galectin-8 has recently been identified as an intracellular "danger receptor", able to initiate autophagy against intracellular pathogens. When galectin-8 binds to a damaged vacuole, it recruits an autophagy adaptor such as NDP52 leading to the formation of an autophagosome and bacterial degradation. 
Repair mechanism Autophagy degrades damaged organelles, cell membranes and proteins, and insufficient autophagy is thought to be one of the main reasons for the accumulation of damaged cells and aging. Autophagy and autophagy regulators are involved in response to lysosomal damage, often directed by galectins such as galectin-3 and galectin-8. Repair of damaged DNA involves the recruitment of enzymes to the damaged site, but these enzymes must be removed upon completion of the repair process. Topoisomerase I cleavage complex is employed in the processing of DNA damages (e.g. DNA-protein crosslinks) in vertebrates, and this complex is selectively degraded by autophagy, presumably after it is no longer needed. Programmed cell death One of the mechanisms of programmed cell death (PCD) is associated with the appearance of autophagosomes and depends on autophagy proteins. This form of cell death most likely corresponds to a process that has been morphologically defined as autophagic PCD. One question that constantly arises, however, is whether autophagic activity in dying cells is the cause of death or is actually an attempt to prevent it. Morphological and histochemical studies have not so far proved a causative relationship between the autophagic process and cell death. In fact, there have recently been strong arguments that autophagic activity in dying cells might actually be a survival mechanism. Studies of the metamorphosis of insects have shown cells undergoing a form of PCD that appears distinct from other forms; these have been proposed as examples of autophagic cell death. Recent pharmacological and biochemical studies have proposed that survival and lethal autophagy can be distinguished by the type and degree of regulatory signaling during stress particularly after viral infection. Although promising, these findings have not been examined in non-viral systems. Meiosis Mammalian fetal oocytes face several challenges to survival throughout the stages of meiotic prophase I prior to primordial follicle assembly. Each primordial follicle contains an immature primary oocyte. Before oocytes are enclosed into a primordial follicle, deficiencies of nutrients or growth factors might activate protective autophagy, but this can turn into death of the oocytes if starvation is prolonged. Exercise Autophagy is essential for basal homeostasis; it is also extremely important in maintaining muscle homeostasis during physical exercise. Autophagy at the molecular level is only partially understood. A study of mice shows that autophagy is important for the ever-changing demands of their nutritional and energy needs, particularly through the metabolic pathways of protein catabolism. In a 2012 study conducted by the University of Texas Southwestern Medical Center in Dallas, mutant mice (with a knock-in mutation of BCL2 phosphorylation sites to produce progeny that showed normal levels of basal autophagy yet were deficient in stress-induced autophagy) were tested to challenge this theory. Results showed that when compared to a control group, these mice illustrated a decrease in endurance and an altered glucose metabolism during acute exercise. Another study demonstrated that skeletal muscle fibers of collagen VI in knockout mice showed signs of degeneration due to an insufficiency of autophagy which led to an accumulation of damaged mitochondria and excessive cell death. 
Exercise-induced autophagy was unsuccessful however; but when autophagy was induced artificially post-exercise, the accumulation of damaged organelles in collagen VI deficient muscle fibres was prevented and cellular homeostasis was maintained. Both studies demonstrate that autophagy induction may contribute to the beneficial metabolic effects of exercise and that it is essential in the maintaining of muscle homeostasis during exercise, particularly in collagen VI fibers. Work at the Institute for Cell Biology, University of Bonn, showed that a certain type of autophagy, i.e. chaperone-assisted selective autophagy (CASA), is induced in contracting muscles and is required for maintaining the muscle sarcomere under mechanical tension. The CASA chaperone complex recognizes mechanically damaged cytoskeleton components and directs these components through a ubiquitin-dependent autophagic sorting pathway to lysosomes for disposal. This is necessary for maintaining muscle activity. Osteoarthritis Because autophagy decreases with age and age is a major risk factor for osteoarthritis, the role of autophagy in the development of this disease is suggested. Proteins involved in autophagy are reduced with age in both human and mouse articular cartilage. Mechanical injury to cartilage explants in culture also reduced autophagy proteins. Autophagy is constantly activated in normal cartilage but it is compromised with age and precedes cartilage cell death and structural damage. Thus autophagy is involved in a normal protective process (chondroprotection) in the joint. Cancer Cancer often occurs when several different pathways that regulate cell differentiation are disturbed. Autophagy plays an important role in cancer – both in protecting against cancer as well as potentially contributing to the growth of cancer. Autophagy can contribute to cancer by promoting survival of tumor cells that have been starved, or that degrade apoptotic mediators through autophagy: in such cases, use of inhibitors of the late stages of autophagy (such as chloroquine), on the cells that use autophagy to survive, increases the number of cancer cells killed by antineoplastic drugs. The role of autophagy in cancer is one that has been highly researched and reviewed. There is evidence that emphasizes the role of autophagy as both a tumor suppressor and a factor in tumor cell survival. Recent research has shown, however, that autophagy is more likely to be used as a tumor suppressor according to several models. Tumor suppressor Several experiments have been done with mice and varying Beclin1, a protein that regulates autophagy. When the Beclin1 gene was altered to be heterozygous (Beclin 1+/-), the mice were found to be tumor-prone. However, when Beclin1 was overexpressed, tumor development was inhibited. Care should be exercised when interpreting phenotypes of beclin mutants and attributing the observations to a defect in autophagy, however: Beclin1 is generally required for phosphatidylinositol 3- phosphate production and as such it affects numerous lysosomal and endosomal functions, including endocytosis and endocytic degradation of activated growth factor receptors. 
In support of the possibility that Beclin1 affects cancer development through an autophagy-independent pathway is the fact that core autophagy factors which are not known to affect other cellular processes and are definitely not known to affect cell proliferation and cell death, such as Atg7 or Atg5, show a much different phenotype when the respective gene is knocked out, which does not include tumor formation. In addition, full knockout of Beclin1 is embryonic lethal whereas knockout of Atg7 or Atg5 is not. Necrosis and chronic inflammation also has been shown to be limited through autophagy which helps protect against the formation of tumor cells. Colorectal cancer Colorectal cancer incidence is associated with a high-fat diet, and such a diet is linked to elevated levels of bile acids in the colon, particularly deoxycholic acid. Deoxycholic acid induces autophagy in non-cancer colon epithelial cells and this induction of autophagy contributes to cell survival when cells are stressed. Also autophagy is a survival pathway that is constitutively present in apoptosis-resistant colon cancer cells. The constitutive activation of autophagy in colon cancer cells, is thus a colon cancer cell survival strategy that needs to be overcome in colon cancer therapy. Mechanism of cell death Cells that undergo an extreme amount of stress experience cell death either through apoptosis or necrosis. Prolonged autophagy activation leads to a high turnover rate of proteins and organelles. A high rate above the survival threshold may kill cancer cells with a high apoptotic threshold. This technique can be utilized as a therapeutic cancer treatment. Tumor cell survival Alternatively, autophagy has also been shown to play a large role in tumor cell survival. In cancerous cells, autophagy is used as a way to deal with stress on the cell. Induction of autophagy by miRNA-4673, for example, is a pro-survival mechanism that improves the resistance of cancer cells to radiation. Once these autophagy related genes were inhibited, cell death was potentiated. The increase in metabolic energy is offset by autophagy functions. These metabolic stresses include hypoxia, nutrient deprivation, and an increase in proliferation. These stresses activate autophagy in order to recycle ATP and maintain survival of the cancerous cells. Autophagy has been shown to enable continued growth of tumor cells by maintaining cellular energy production. By inhibiting autophagy genes in these tumors cells, regression of the tumor and extended survival of the organs affected by the tumors were found. Furthermore, inhibition of autophagy has also been shown to enhance the effectiveness of anticancer therapies. Therapeutic target New developments in research have found that targeted autophagy may be a viable therapeutic solution in fighting cancer. As discussed above, autophagy plays both a role in tumor suppression and tumor cell survival. Thus, the qualities of autophagy can be used as a strategy for cancer prevention. The first strategy is to induce autophagy and enhance its tumor suppression attributes. The second strategy is to inhibit autophagy and thus induce apoptosis. The first strategy has been tested by looking at dose-response anti-tumor effects during autophagy-induced therapies. These therapies have shown that autophagy increases in a dose-dependent manner. This is directly related to the growth of cancer cells in a dose-dependent manner as well. These data support the development of therapies that will encourage autophagy. 
Secondly, inhibiting the protein pathways directly known to induce autophagy may also serve as an anticancer therapy. The second strategy is based on the idea that autophagy is a protein degradation system used to maintain homeostasis and the findings that inhibition of autophagy often leads to apoptosis. Inhibition of autophagy is riskier as it may lead to cell survival instead of the desired cell death. Negative regulators of autophagy Negative regulators of autophagy, such as mTOR, cFLIP, EGFR, (GAPR-1), and Rubicon are orchestrated to function within different stages of the autophagy cascade. The end-products of autophagic digestion may also serve as a negative-feedback regulatory mechanism to stop prolonged activity. The interface between inflammation and autophagy Regulators of autophagy control regulators of inflammation, and vice versa. Cells of vertebrate organisms normally activate inflammation to enhance the capacity of the immune system to clear infections and to initiate the processes that restore tissue structure and function. Therefore, it is critical to couple regulation of mechanisms for removal of cellular and bacterial debris to the principal factors that regulate inflammation: The degradation of cellular components by the lysosome during autophagy serves to recycle vital molecules and generate a pool of building blocks to help the cell respond to a changing microenvironment. Proteins that control inflammation and autophagy form a network that is critical for tissue functions, which is dysregulated in cancer: In cancer cells, aberrantly expressed and mutant proteins increase the dependence of cell survival on the "rewired" network of proteolytic systems that protects malignant cells from apoptotic proteins and from recognition by the immune system. This renders cancer cells vulnerable to intervention on regulators of autophagy. Type 2 diabetes Excessive activity of the crinophagy form of autophagy in the insulin-producing beta cells of the pancreas could reduce the quantity of insulin available for secretion, leading to type 2 diabetes. See also References Further reading External links Autophagy, a journal produced by Landes Bioscience and edited by DJ Klionsky LongevityMeme entry describing PubMed article on the effects of autophagy and lifespan Autophagolysosome on Drugs.com HADb, a Human Autophagy dedicated Database Autophagy DB, an autophagy database that covers all eukaryotes Self-Destructive Behavior in Cells May Hold Key to a Longer Life Exercise as Housecleaning for the Body The AIM center Cellular processes Programmed cell death Immunology Cell death
Autophagy
[ "Chemistry", "Biology" ]
6,712
[ "Signal transduction", "Senescence", "Immunology", "Cellular processes", "Programmed cell death" ]
624,406
https://en.wikipedia.org/wiki/Amagat%27s%20law
Amagat's law or the law of partial volumes describes the behaviour and properties of mixtures of ideal (as well as some cases of non-ideal) gases. It is of use in chemistry and thermodynamics. It is named after Emile Amagat. Overview Amagat's law states that the extensive volume of a gas mixture is equal to the sum of volumes of the component gases, if the temperature and the pressure remain the same: V(T, p) = V_1(T, p) + V_2(T, p) + ... + V_N(T, p). This is the experimental expression of volume as an extensive quantity. According to Amagat's law of partial volume, the total volume of a non-reacting mixture of gases at constant temperature and pressure should be equal to the sum of the individual partial volumes of the constituent gases. So if V_1, V_2, ..., V_N are considered to be the partial volumes of the components in the gaseous mixture, then the total volume V would be represented as V = V_1 + V_2 + ... + V_N. Both Amagat's and Dalton's law predict the properties of gas mixtures. Their predictions are the same for ideal gases. However, for real (non-ideal) gases, the results differ. Dalton's law of partial pressures assumes that the gases in the mixture are non-interacting (with each other) and each gas independently applies its own pressure, the sum of which is the total pressure. Amagat's law assumes that the volumes of the component gases (again at the same temperature and pressure) are additive; the interactions of the different gases are the same as the average interactions of the components. The interactions can be interpreted in terms of a second virial coefficient for the mixture. For two components, the second virial coefficient for the mixture can be expressed as B = x_1^2 B_{1,1} + 2 x_1 x_2 B_{1,2} + x_2^2 B_{2,2}, where the subscripts refer to components 1 and 2, the x_i are the mole fractions, and the B_{i,j} are the second virial coefficients. The cross term B_{1,2} of the mixture is given by B_{1,2} = 0 for Dalton's law and B_{1,2} = (B_{1,1} + B_{2,2})/2 for Amagat's law. When the volumes of each component gas (same temperature and pressure) are very similar, then Amagat's law becomes mathematically equivalent to Vegard's law for solid mixtures. Ideal gas mixture When Amagat's law is valid and the gas mixture is made of ideal gases, V_i = n_i R T / p, so that V_i / V = n_i / n = x_i, where: p is the pressure of the gas mixture, V_i is the volume of the i-th component of the gas mixture, V is the total volume of the gas mixture, n_i is the amount of substance of the i-th component of the gas mixture (in mol), n is the total amount of substance of the gas mixture (in mol), R is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant, T is the absolute temperature of the gas mixture (in K), x_i is the mole fraction of the i-th component of the gas mixture. It follows that the mole fraction and volume fraction are the same. This is true also for other equations of state. References Eponymous laws of physics Gas laws Gases
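As a concrete illustration of the ideal-gas case above, the short Python sketch below (an illustrative addition, not drawn from the article; the chosen species, amounts, temperature and pressure are arbitrary) computes the partial volumes of a two-component mixture from V_i = n_i R T / p and checks that they add up to the total volume and that each volume fraction equals the corresponding mole fraction.

# Illustrative check of Amagat's law for a hypothetical ideal two-component mixture:
# 2 mol N2 and 3 mol O2 at 300 K and 101325 Pa (all values chosen arbitrarily).
R = 8.314462618              # J/(mol*K), molar gas constant
T = 300.0                    # K
p = 101325.0                 # Pa
n = {"N2": 2.0, "O2": 3.0}   # amount of each component in mol

n_total = sum(n.values())
V_total = n_total * R * T / p                             # total volume of the mixture
partial_V = {k: nk * R * T / p for k, nk in n.items()}    # Amagat partial volumes

print(f"total volume: {V_total:.4f} m^3")
for k, Vk in partial_V.items():
    print(f"  V_{k} = {Vk:.4f} m^3, "
          f"volume fraction {Vk / V_total:.3f}, mole fraction {n[k] / n_total:.3f}")

# The partial volumes are additive (Amagat's law), and the volume fraction
# of each component equals its mole fraction, as stated in the text.
assert abs(sum(partial_V.values()) - V_total) < 1e-9

For real gases the same additivity only holds approximately, which is where the virial cross-term comparison between Dalton's and Amagat's laws discussed above becomes relevant.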
Amagat's law
[ "Physics", "Chemistry" ]
595
[ "Matter", "Phases of matter", "Gas laws", "Statistical mechanics", "Gases" ]
624,431
https://en.wikipedia.org/wiki/Lyudmila%20Karachkina
Lyudmila Georgievna Karachkina (Людмила Георгиевна Карачкина, born 3 September 1948, Rostov-on-Don) is an astronomer and discoverer of minor planets. In 1978 she began as a staff astronomer of the Institute for Theoretical Astronomy (ITA) in Leningrad. Her research at the Crimean Astrophysical Observatory (CrAO) then focused on astrometry and photometry of minor planets. The Minor Planet Center credits her with the discovery of 130 minor planets, including the Amor asteroid 5324 Lyapunov and the Trojan asteroid 3063 Makhaon. In 2004, she received a Ph.D. in astronomy from Odesa I. I. Mechnikov National University. Lyudmila Karachkina has two daughters, Maria and Renata. The inner main-belt asteroid 8019 Karachkina, discovered by German astronomers Lutz D. Schmadel and Freimut Börngen at Tautenburg on 14 October 1990, was named in her honor. On 23 November 1999, the minor planet 8089 Yukar was named after her husband, Yurij Vasil'evich Karachkin (b. 1940), a physics teacher at CrAO's school. List of discovered minor planets See also Tamara Smirnova, astronomer at ITA References 1948 births 20th-century Russian astronomers 21st-century Russian astronomers Discoverers of asteroids Living people Scientists from Rostov-on-Don Soviet astronomers Ukrainian astronomers Women astronomers 20th-century women scientists 21st-century women scientists
Lyudmila Karachkina
[ "Astronomy" ]
311
[ "Women astronomers", "Astronomers" ]
624,613
https://en.wikipedia.org/wiki/Phrase%20%28music%29
In music theory, a phrase is a unit of musical meter that has a complete musical sense of its own, built from figures, motifs, and cells, and combining to form melodies, periods and larger sections. Terms such as sentence and verse have been adopted into the vocabulary of music from linguistic syntax. Though the analogy between the musical and the linguistic phrase is often made, still the term "is one of the most ambiguous in music ... there is no consistency in applying these terms nor can there be ... only with melodies of a very simple type, especially those of some dances, can the terms be used with some consistency." John D. White defines a phrase as "the smallest musical unit that conveys a more or less complete musical thought. Phrases vary in length and are terminated at a point of full or partial repose, which is called a cadence." Edward Cone analyses the "typical musical phrase" as consisting of an "initial downbeat, a period of motion, and a point of arrival marked by a cadential downbeat". Charles Burkhart defines a phrase as "Any group of measures (including a group of one, or possibly even a fraction of one) that has some degree of structural completeness. What counts is the sense of completeness we hear in the pitches not the notation on the page. To be complete such a group must have an ending of some kind ... . Phrases are delineated by the tonal functions of pitch. They are not created by slur or by legato performance ... . A phrase is not pitches only but also has a rhythmic dimension, and further, each phrase in a work contributes to that work's large rhythmic organization." Duration or form In common practice, phrases are often four bars or measures long, culminating in a more or less definite cadence. A phrase will end with a weaker or stronger cadence, depending on whether it is an antecedent phrase or a consequent phrase, the first or second half of a period. However, the absolute span of the phrase (the term in today's use was coined by the German theorist Hugo Riemann) is as contestable as its counterpart in language, where there can even be one-word phrases (like "Stop!" or "Hi!"). Thus no strict line can be drawn between the terms of the 'phrase', the 'motif' or even the separate tone (as a one-tone, one-chord, or one-noise expression). In view of Gestalt theory, the term 'phrase' rather envelops any musical expression that is perceived as a consistent gestalt separate from others, however few or many beats (i.e., distinct musical events such as tones, chords, or noises) it may contain. A phrase-group is "a group of three or more phrases linked together without the two-part feeling of a period", or "a pair of consecutive phrases in which the first is a repetition of the second or in which, for whatever reason, the antecedent-consequent relationship is absent". Phrase rhythm is the rhythmic aspect of phrase construction and the relationships between phrases, and "is not at all a cut-and-dried affair, but the very lifeblood of music and capable of infinite variety. Discovering a work's phrase rhythm is a gateway to its understanding and to effective performance." The term was popularized by William Rothstein's Phrase Rhythm in Tonal Music. Techniques include overlap, lead-in, extension, expansion, reinterpretation and elision. A phrase member is one of the parts in a phrase separated into two by a pause or long note value, the second of which may repeat, sequence, or contrast with the first. 
A phrase segment "is a distinct portion of the phrase, but it is not a phrase either because it is not terminated by a cadence or because it seems too short to be relatively independent". See also Period (music) Strophe Melodic pattern Lick (music) References Sources Further reading How to Understand Music: A Concise Course in Musical Intelligence and Taste (1881) by William Smythe Babcock Mathews What We Hear in Music: A Course of Study in Music History and Appreciation (c. 1921) by Anne Shaw Faulkner Formal sections in music analysis Musical terminology Rhythm and meter
Phrase (music)
[ "Physics", "Technology" ]
894
[ "Physical quantities", "Time", "Formal sections in music analysis", "Rhythm and meter", "Spacetime", "Components" ]