An integrated circuit (IC), also known as a microchip or simply chip, is a set of electronic circuits, consisting of various electronic components (such as transistors, resistors, and capacitors) and their interconnections. These components are etched onto a small, flat piece ("chip") of semiconductor material, usually silicon. Integrated circuits are used in a wide range of electronic devices, including computers, smartphones, and televisions, to perform various functions such as processing and storing information. They have greatly impacted the field of electronics by enabling device miniaturization and enhanced functionality. Integrated circuits are orders of magnitude smaller, faster, and less expensive than those constructed of discrete components, allowing a large transistor count. The IC's mass production capability, reliability, and building-block approach to integrated circuit design have ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones, and other home appliances are now essential parts of the structure of modern societies, made possible by the small size and low cost of ICs such as modern computer processors and microcontrollers. Very-large-scale integration was made practical by technological advancements in semiconductor device fabrication. Since their origins in the 1960s, the size, speed, and capacity of chips have progressed enormously, driven by technical advances that fit more and more transistors on chips of the same size – a modern chip may have many billions of transistors in an area the size of a human fingernail. These advances, roughly following Moore's law, mean that the computer chips of today possess millions of times the capacity and thousands of times the speed of the computer chips of the early 1970s. ICs have three main advantages over circuits constructed out of discrete components: size, cost and performance. Size and cost are low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume comparatively little power owing to their small size and proximity. The main disadvantage of ICs is the high initial cost of designing them and the enormous capital cost of factory construction. This high initial cost means ICs are only commercially viable when high production volumes are anticipated. == Terminology == An integrated circuit is defined as: A circuit in which all or some of the circuit elements are inseparably associated and electrically interconnected so that it is considered to be indivisible for the purposes of construction and commerce. In strict usage, integrated circuit refers to the single-piece circuit construction originally known as a monolithic integrated circuit, which comprises a single piece of silicon. In general usage, circuits not meeting this strict definition are sometimes referred to as ICs, which are constructed using many different technologies, e.g. 3D IC, 2.5D IC, MCM, thin-film transistors, thick-film technologies, or hybrid integrated circuits. The choice of terminology frequently appears in discussions related to whether Moore's Law is obsolete. 
== History == An early attempt at combining several components in one device (like modern ICs) was the Loewe 3NF vacuum tube, first made in 1926. Unlike ICs, it was designed with the purpose of tax avoidance: in Germany, radio receivers were subject to a tax levied according to how many tube holders a receiver had. The 3NF allowed radio receivers to have a single tube holder. One million were manufactured, and they were "a first step in integration of radioelectronic devices". The device contained an amplifier, composed of three triodes, two capacitors and four resistors in a six-pin device. Radios with the Loewe 3NF were less expensive than other radios, showing one of the advantages of integration over using discrete components that would be seen decades later with ICs. Early concepts of an integrated circuit go back to 1949, when German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate in a three-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. An immediate commercial use of his patent has not been reported. Another early proponent of the concept was Geoffrey Dummer (1909–2002), a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. He gave many public presentations to propagate his ideas and unsuccessfully attempted to build such a circuit in 1956. Between 1953 and 1957, Sidney Darlington and Yasuo Tarui (Electrotechnical Laboratory) proposed similar chip designs where several transistors could share a common active area, but there was no electrical isolation to separate them from each other. The monolithic integrated circuit chip was enabled by the inventions of the planar process by Jean Hoerni and p–n junction isolation by Kurt Lehovec. Hoerni's invention was built on Carl Frosch and Lincoln Derick's work on surface protection and passivation by silicon dioxide masking and predeposition, as well as work by Fuller, Ditzenberger, and others on the diffusion of impurities into silicon. === The first integrated circuits === A precursor idea to the IC was to create small ceramic substrates (so-called micromodules), each containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy). However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC. Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working example of an integrated circuit on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated". The first customer for the new invention was the US Air Force. Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit. 
However, Kilby's invention was not a true monolithic integrated circuit chip since it had external gold-wire connections, which would have made it difficult to mass-produce. Half a year after Kilby, Robert Noyce at Fairchild Semiconductor invented the first true monolithic IC chip. Noyce's chip was more practical than Kilby's implementation: it was made of silicon, whereas Kilby's was made of germanium, and it was fabricated using the planar process, developed in early 1959 by his colleague Jean Hoerni, and included the critical on-chip aluminum interconnecting lines. Modern IC chips are based on Noyce's monolithic IC, rather than Kilby's. NASA's Apollo Program was the largest single consumer of integrated circuits between 1961 and 1965. === TTL integrated circuits === Transistor–transistor logic (TTL) was developed by James L. Buie in the early 1960s at TRW Inc. TTL became the dominant integrated circuit technology during the 1970s to early 1980s. Dozens of TTL integrated circuits were a standard method of construction for the processors of minicomputers and mainframe computers. Computers such as IBM 360 mainframes, PDP-11 minicomputers and the desktop Datapoint 2200 were built from bipolar integrated circuits, either TTL or the even faster emitter-coupled logic (ECL). === MOS integrated circuits === Nearly all modern IC chips are metal–oxide–semiconductor (MOS) integrated circuits, built from MOSFETs (metal–oxide–semiconductor field-effect transistors). The MOSFET, invented at Bell Labs between 1955 and 1960, made it possible to build high-density integrated circuits. In contrast to bipolar transistors, which required a number of steps for the p–n junction isolation of transistors on a chip, MOSFETs required no such steps and could be easily isolated from each other. This advantage for integrated circuits was pointed out by Dawon Kahng in 1961. The list of IEEE milestones includes the first integrated circuit by Kilby in 1958, Hoerni's planar process and Noyce's planar IC in 1959. The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS integrated circuit in 1964, a 120-transistor shift register developed by Robert Norman. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. Following the development of the self-aligned gate (silicon-gate) MOSFET by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC technology with self-aligned gates, the basis of all modern CMOS integrated circuits, was developed at Fairchild Semiconductor by Federico Faggin in 1968. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. This led to the inventions of the microprocessor and the microcontroller by the early 1970s. During the early 1970s, MOS integrated circuit technology enabled the very large-scale integration (VLSI) of more than 10,000 transistors on a single chip. At first, MOS-based computers only made sense when high density was required, such as in aerospace systems and pocket calculators. 
Until the early 1980s, computers built entirely from TTL, such as the 1970 Datapoint 2200, were much faster and more powerful than single-chip MOS microprocessors such as the 1972 Intel 8008. Advances in IC technology, primarily smaller features and larger chips, have allowed the number of MOS transistors in an integrated circuit to double every two years, a trend known as Moore's law. Moore originally stated that the count would double every year, but in 1975 he revised the claim to every two years. This increased capacity has been used to decrease cost and increase functionality. In general, as the feature size shrinks, almost every aspect of an IC's operation improves. The cost per transistor and the switching power consumption per transistor go down, while the memory capacity and speed go up, through the relationships defined by Dennard scaling (MOSFET scaling). Because speed, capacity, and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. Over the years, transistor sizes have decreased from tens of microns in the early 1970s to 10 nanometers in 2017, with a corresponding million-fold increase in transistors per unit area. As of 2016, typical chip areas range from a few square millimeters to around 600 mm2, with up to 25 million transistors per mm2. The expected shrinking of feature sizes and the needed progress in related areas was forecast for many years by the International Technology Roadmap for Semiconductors (ITRS). The final ITRS was issued in 2016, and it is being replaced by the International Roadmap for Devices and Systems. Initially, ICs were strictly electronic devices. The success of ICs has led to the integration of other technologies, in an attempt to obtain the same advantages of small size and low cost. These technologies include mechanical devices, optics, and sensors. Charge-coupled devices, and the closely related active-pixel sensors, are chips that are sensitive to light. They have largely replaced photographic film in scientific, medical, and consumer applications. Billions of these devices are now produced each year for applications such as cellphones, tablets, and digital cameras. Work in this sub-field of ICs was recognized with the Nobel Prize in Physics in 2009. Very small mechanical devices driven by electricity can be integrated onto chips, a technology known as microelectromechanical systems (MEMS). These devices were developed in the late 1980s and are used in a variety of commercial and military applications. Examples include DLP projectors, inkjet printers, and accelerometers and MEMS gyroscopes used to deploy automobile airbags. Since the early 2000s, the integration of optical functionality (optical computing) into silicon chips has been actively pursued in both academic research and industry, resulting in the successful commercialization of silicon-based integrated optical transceivers combining optical devices (modulators, detectors, routing) with CMOS-based electronics. Photonic integrated circuits, which use light, such as Lightelligence's PACE (Photonic Arithmetic Computing Engine), are also being developed, drawing on the emerging field of photonics. Integrated circuits are also being developed for sensor applications in medical implants or other bioelectronic devices. Special sealing techniques have to be applied in such biogenic environments to avoid corrosion or biodegradation of the exposed semiconductor materials. 
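As a rough, hedged illustration of the scaling trends described above (Moore's law doubling every two years, and the density gain implied by shrinking features from tens of microns to around 10 nm), the following sketch projects a transistor count forward and computes the implied area-density factor; the starting count and time span are assumed example values, not measurements.

```python
# Illustrative back-of-the-envelope only; starting count and years are assumptions.

def projected_transistor_count(initial_count: float, years: float,
                               doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward under a fixed doubling period (Moore's law)."""
    return initial_count * 2 ** (years / doubling_period_years)

# e.g. a 2,300-transistor chip (early-1970s scale) projected 50 years forward
print(f"{projected_transistor_count(2_300, 50):,.0f} transistors")

# Density scales roughly with the inverse square of the feature size:
feature_early_m, feature_modern_m = 10e-6, 10e-9   # ~10 um vs ~10 nm
print(f"~{(feature_early_m / feature_modern_m) ** 2:,.0f}x transistors per unit area")
```

The second print reproduces the roughly million-fold density increase mentioned above.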
As of 2018, the vast majority of all transistors are MOSFETs fabricated in a single layer on one side of a chip of silicon in a flat two-dimensional planar process. Researchers have produced prototypes of several promising alternatives, such as: various approaches to stacking several layers of transistors to make a three-dimensional integrated circuit (3D IC), including through-silicon vias, "monolithic 3D", stacked wire bonding, and other methodologies; transistors built from other materials, including graphene transistors, molybdenite transistors, carbon nanotube field-effect transistors, gallium nitride transistors, transistor-like nanowire electronic devices, and organic field-effect transistors; transistors fabricated over the entire surface of a small sphere of silicon; and modifications to the substrate, typically to make "flexible transistors" for a flexible display or other flexible electronics, possibly leading to a roll-away computer. As it becomes more difficult to manufacture ever smaller transistors, companies are using multi-chip modules/chiplets, three-dimensional integrated circuits, package on package, High Bandwidth Memory and through-silicon vias with die stacking to increase performance and reduce size, without having to reduce the size of the transistors. Such techniques are collectively known as advanced packaging. Advanced packaging is mainly divided into 2.5D and 3D packaging. 2.5D describes approaches such as multi-chip modules, while 3D describes approaches where dies are stacked in one way or another, such as package on package and High Bandwidth Memory. All approaches involve two or more dies in a single package. Alternatively, approaches such as 3D NAND stack multiple layers on a single die. Techniques have been demonstrated for including microfluidic cooling on integrated circuits to improve cooling performance, as well as Peltier thermoelectric coolers on solder bumps, or thermal solder bumps used exclusively for heat dissipation, in flip-chip assemblies. == Design == The cost of designing and developing a complex integrated circuit is quite high, normally in the multiple tens of millions of dollars. Therefore, it only makes economic sense to produce integrated circuit products with high production volume, so the non-recurring engineering (NRE) costs are spread across typically millions of production units. Modern semiconductor chips have billions of components, and are far too complex to be designed by hand. Software tools to help the designer are essential. Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems, including integrated circuits. The tools work together in a design flow that engineers use to design, verify, and analyze entire semiconductor chips. Some of the latest EDA tools use artificial intelligence (AI) to help engineers save time and improve chip performance. == Types == Integrated circuits can be broadly classified into analog, digital, and mixed-signal (analog and digital signaling on the same IC). Digital integrated circuits can contain billions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, use Boolean algebra to process "one" and "zero" signals. 
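As a minimal illustration of the Boolean processing just described, the sketch below expresses a 1-bit full adder and a small ripple-carry adder purely with Boolean operations, the kind of function a digital IC realizes in hardware with logic gates; the function names and the 4-bit width are illustrative choices, not taken from any particular device.

```python
# Illustrative only: Boolean "one"/"zero" processing as a digital IC performs it.

def full_adder(a: int, b: int, carry_in: int) -> tuple:
    """Return (sum_bit, carry_out) for one bit position."""
    sum_bit = a ^ b ^ carry_in                    # two XOR gates
    carry_out = (a & b) | (carry_in & (a ^ b))    # AND and OR gates
    return sum_bit, carry_out

def add_4bit(x: int, y: int) -> int:
    """Chain four full adders into a ripple-carry adder."""
    carry, result = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_4bit(0b0101, 0b0011))  # 5 + 3 -> prints 8
```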
Among the most advanced integrated circuits are the microprocessors or "cores", used in personal computers, cell-phones, etc. Several cores may be integrated together in a single IC or chip. Digital memory chips and application-specific integrated circuits (ASICs) are examples of other families of integrated circuits. In the 1980s, programmable logic devices were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a chip to be programmed to do various LSI-type functions such as logic gates, adders and registers. Programmability comes in various forms – devices that can be programmed only once, devices that can be erased and then re-programmed using UV light, devices that can be (re)programmed using flash memory, and field-programmable gate arrays (FPGAs) which can be programmed at any time, including during operation. Current FPGAs can (as of 2016) implement the equivalent of millions of gates and operate at frequencies up to 1 GHz. Analog ICs, such as sensors, power management circuits, and operational amplifiers (op-amps), process continuous signals, and perform analog functions such as amplification, active filtering, demodulation, and mixing. ICs can combine analog and digital circuits on a chip to create functions such as analog-to-digital converters and digital-to-analog converters. Such mixed-signal circuits offer smaller size and lower cost, but must account for signal interference. Prior to the late 1990s, radios could not be fabricated in the same low-cost CMOS processes as microprocessors. But since 1998, radio chips have been developed using RF CMOS processes. Examples include Intel's DECT cordless phone, or 802.11 (Wi-Fi) chips created by Atheros and other companies. Modern electronic component distributors often further sub-categorize integrated circuits: Digital ICs are categorized as logic ICs (such as microprocessors and microcontrollers), memory chips (such as MOS memory and floating-gate memory), interface ICs (level shifters, serializer/deserializer, etc.), power management ICs, and programmable devices. Analog ICs are categorized as linear integrated circuits and RF circuits (radio frequency circuits). Mixed-signal integrated circuits are categorized as data acquisition ICs (including A/D converters, D/A converters, digital potentiometers), clock/timing ICs, switched capacitor (SC) circuits, and RF CMOS circuits. Three-dimensional integrated circuits (3D ICs) are categorized into through-silicon via (TSV) ICs and Cu-Cu connection ICs. == Manufacturing == === Fabrication === The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state vacuum tube. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, monocrystalline silicon is the main substrate used for ICs although some III-V compounds of the periodic table such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals with minimal defects in semiconducting materials' crystal structure. Semiconductor ICs are fabricated in a planar process which includes three key process steps – photolithography, deposition (such as chemical vapor deposition), and etching. 
The main process steps are supplemented by doping and cleaning. More recent or high-performance ICs may use multi-gate FinFET or GAAFET transistors instead of planar ones, starting at the 22 nm node (Intel) or the 16/14 nm nodes. Mono-crystal silicon wafers are used in most applications (or for special applications, other semiconductors such as gallium arsenide are used). The wafer need not be entirely silicon. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulators or metal (typically aluminium or copper) tracks deposited on them. Dopants are impurities intentionally introduced to a semiconductor to modulate its electronic properties. Doping is the process of adding dopants to a semiconductor material. Integrated circuits are composed of many overlapping layers, each defined by photolithography, and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (called diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (doped polysilicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers. In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or metal) crosses a diffusion layer (this is called "the self-aligned gate"). Capacitive structures, in form very much like the parallel conducting plates of a traditional electrical capacitor, are formed according to the area of the "plates", with insulating material between the plates. Capacitors of a wide range of sizes are common on ICs. Meandering stripes of varying lengths are sometimes used to form on-chip resistors, though most logic circuits do not need any resistors. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance. More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators. Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar junction transistor devices. A random-access memory is the most regular type of integrated circuit, so the highest-density devices are memories; but even a microprocessor will have memory on the chip. Although the structures are intricate – with widths which have been shrinking for decades – the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process. Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminium (or gold) bond wires which are thermosonically bonded to pads, usually found around the edge of the die. 
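As a rough, illustrative estimate related to the dicing step just described, the sketch below applies a commonly quoted gross-dies-per-wafer approximation; the wafer diameter and die area are assumed example values, and defect losses and scribe-line width are ignored.

```python
# Rough approximation only: wafer area divided by die area, minus an edge-loss term.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Approximate gross dies per wafer, ignoring yield loss."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

# e.g. a 300 mm wafer and a 100 mm^2 die -> roughly 640 gross dies
print(dies_per_wafer(300, 100))
```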
Thermosonic bonding, first introduced by A. Coucoulas, provided a reliable means of forming these vital electrical connections to the outside world. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Industrial CT scanning can also be used. Test cost can account for over 25% of the cost of fabrication on lower-cost products, but can be negligible on low-yielding, larger, or higher-cost devices. As of 2022, a fabrication facility (commonly known as a semiconductor fab) can cost over US$12 billion to construct. The cost of a fabrication facility rises over time because of the increased complexity of new products; this is known as Rock's law. Such a facility features: wafers up to 300 mm in diameter (wider than a common dinner plate); as of 2022, 5 nm transistors; copper interconnects, in which copper wiring replaces aluminum; low-κ dielectric insulators; silicon on insulator (SOI); strained silicon in a process used by IBM known as strained silicon directly on insulator (SSDOI); and multigate devices such as tri-gate transistors. ICs can be manufactured either in-house by integrated device manufacturers (IDMs) or using the foundry model. IDMs are vertically integrated companies (like Intel and Samsung) that design, manufacture and sell their own ICs, and may offer design and/or manufacturing (foundry) services to other companies (the latter often to fabless companies). In the foundry model, fabless companies (like Nvidia) only design and sell ICs and outsource all manufacturing to pure play foundries such as TSMC. These foundries may offer IC design services. === Packaging === The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used by the military for their reliability and small size for many years. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic, which is commonly cresol-formaldehyde-novolac. In the 1980s, pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC) package – a carrier which occupies an area about 30–50% less than an equivalent DIP and is typically 70% thinner. This package has "gull wing" leads protruding from the two long sides and a lead spacing of 0.050 inches. In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline package (TSOP) packages became the most common for high pin count devices, though PGA packages are still used for high-end microprocessors. Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array packages, which allow for a much higher pin count than other package types, were developed in the 1990s. In an FCBGA package, the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery. BGA devices have the advantage of not needing a dedicated socket but are much harder to replace in case of device failure. 
Intel transitioned away from PGA to land grid array (LGA) and BGA beginning in 2004, with the last PGA socket released in 2014 for mobile platforms. As of 2018, AMD uses PGA packages on mainstream desktop processors and BGA packages on mobile processors, while its high-end desktop and server microprocessors use LGA packages. Electrical signals leaving the die must pass through the material electrically connecting the die to the package, through the conductive traces (paths) in the package, and through the leads connecting the package to the conductive traces on the printed circuit board. The materials and structures used in this path have very different electrical properties from those along signal paths within the same die. As a result, signals leaving the die require special design techniques to ensure they are not corrupted, and consume much more electric power than signals confined to the die itself. When multiple dies are put in one package, the result is a system in package, abbreviated SiP. A multi-chip module (MCM) is created by combining multiple dies on a small substrate, often made of ceramic. The distinction between a large MCM and a small printed circuit board is sometimes fuzzy. Packaged integrated circuits are usually large enough to include identifying information. Four common sections are the manufacturer's name or logo, the part number, a part production batch number and serial number, and a four-digit date-code to identify when the chip was manufactured. Extremely small surface-mount technology parts often bear only a number used in a manufacturer's lookup table to find the integrated circuit's characteristics. The manufacturing date is commonly represented as a two-digit year followed by a two-digit week code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or approximately in October 1983. == Intellectual property == The possibility of copying an integrated circuit by photographing each of its layers and preparing photomasks for its production on the basis of the photographs obtained is a reason for the introduction of legislation for the protection of layout designs. The US Semiconductor Chip Protection Act of 1984 established intellectual property protection for photomasks used to produce integrated circuits. A diplomatic conference held at Washington, D.C., in 1989 adopted a Treaty on Intellectual Property in Respect of Integrated Circuits, also called the Washington Treaty or IPIC Treaty. The treaty is currently not in force, but was partially integrated into the TRIPS agreement. There are several United States patents connected to the integrated circuit, which include patents by J.S. Kilby US3,138,743, US3,261,081, US3,434,015 and by R.F. Stewart US3,138,747. National laws protecting IC layout designs have been adopted in a number of countries, including Japan, the EC, the UK, Australia, and Korea. The UK enacted the Copyright, Designs and Patents Act, 1988, c. 48, § 213, after it initially took the position that its copyright law fully protected chip topographies. See British Leyland Motor Corp. v. Armstrong Patents Co. Criticisms of the adequacy of the UK copyright approach, as perceived by the US chip industry, are summarized in later chip-rights developments. Australia passed the Circuit Layouts Act of 1989 as a sui generis form of chip protection. Korea passed the Act Concerning the Layout-Design of Semiconductor Integrated Circuits in 1992. 
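As a small worked example of the four-digit package date code described in the Packaging section above (a two-digit year followed by a two-digit week, so that 8341 denotes week 41 of 1983), the sketch below decodes such a code; the century handling is an assumption made purely for illustration, since real markings rely on context to resolve the century.

```python
# Illustrative decoder for a YYWW package date code; century is an assumed default.

def decode_date_code(code: str, century: int = 1900) -> tuple:
    """Return (year, week) from a four-digit YYWW date code."""
    year = century + int(code[:2])
    week = int(code[2:])
    return year, week

print(decode_date_code("8341"))  # (1983, 41), i.e. approximately October 1983
```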
== Generations == In the early days of simple integrated circuits, the technology's large scale limited each chip to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. As metal–oxide–semiconductor (MOS) technology progressed, millions and then billions of MOS transistors could be placed on one chip, and good designs required thorough planning, giving rise to the field of electronic design automation, or EDA. Some SSI and MSI chips, like discrete transistors, are still mass-produced, both to maintain old equipment and build new devices that require only a few gates. The 7400 series of TTL chips, for example, has become a de facto standard and remains in production. === Small-scale integration (SSI) === The first integrated circuits contained only a few transistors. Early digital circuits containing tens of transistors provided a few logic gates, and early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The number of transistors in an integrated circuit has increased dramatically since then. The term "large scale integration" (LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical concept; that term gave rise to the terms "small-scale integration" (SSI), "medium-scale integration" (MSI), "very-large-scale integration" (VLSI), and "ultra-large-scale integration" (ULSI). The early integrated circuits were SSI. SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire development of the technology. Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertial guidance systems. Although the Apollo Guidance Computer led and motivated integrated-circuit technology, it was the Minuteman missile that forced it into mass-production. The Minuteman missile program and various other United States Navy programs accounted for the total $4 million integrated circuit market in 1962, and by 1968, U.S. Government spending on space and defense still accounted for 37% of the $312 million total production. The demand by the U.S. Government supported the nascent integrated circuit market until costs fell enough to allow IC firms to penetrate the industrial market and eventually the consumer market. The average price per integrated circuit dropped from $50 in 1962 to $2.33 in 1968. Integrated circuits began to appear in consumer products by the turn of the 1970s decade. A typical application was FM inter-carrier sound processing in television receivers. The first application MOS chips were small-scale integration (SSI) chips. Following Mohamed M. Atalla's proposal of the MOS integrated circuit chip in 1960, the earliest experimental MOS chip to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. The first practical application of MOS SSI chips was for NASA satellites. === Medium-scale integration (MSI) === The next step in the development of integrated circuits introduced devices which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI). MOSFET scaling technology made it possible to build high-density chips. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register he designed, with a then-incredible 120 MOS transistors on a single chip. 
The same year, General Microelectronics introduced the first commercial MOS integrated circuit chip, consisting of 120 p-channel MOS transistors. It was a 20-bit shift register, developed by Robert Norman and Frank Wanlass. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to chips with hundreds of MOSFETs on a chip by the late 1960s. === Large-scale integration (LSI) === Further development, driven by the same MOSFET scaling technology and economic factors, led to "large-scale integration" (LSI) by the mid-1970s, with tens of thousands of transistors per chip. The masks used to process and manufacture SSI, MSI and early LSI and VLSI devices (such as the microprocessors of the early 1970s) were mostly created by hand, often using Rubylith-tape or similar. For large or complex ICs (such as memories or processors), this was often done by specially hired professionals in charge of circuit layout, placed under the supervision of a team of engineers, who would also, along with the circuit designers, inspect and verify the correctness and completeness of each mask. Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, that began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors. === Very-large-scale integration (VLSI) === "Very-large-scale integration" (VLSI) is a development that started with hundreds of thousands of transistors in the early 1980s. As of 2023, maximum transistor counts continue to grow beyond 5.3 trillion transistors per chip. Multiple developments were required to achieve this increased density. Manufacturers moved to smaller MOSFET design rules and cleaner fabrication facilities. The path of process improvements was summarized by the International Technology Roadmap for Semiconductors (ITRS), which has since been succeeded by the International Roadmap for Devices and Systems (IRDS). Electronic design tools improved, making it practical to finish designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. The complexity and density of modern VLSI devices made it no longer feasible to check the masks or do the original design by hand. Instead, engineers use EDA tools to perform most functional verification work. In 1986, one-megabit random-access memory (RAM) chips were introduced, containing more than one million transistors. Microprocessor chips passed the million-transistor mark in 1989, and the billion-transistor mark in 2005. The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors. === ULSI, WSI, SoC and 3D-IC === To reflect further growth of the complexity, the term ULSI that stands for "ultra-large-scale integration" was proposed for chips of more than 1 million transistors. Wafer-scale integration (WSI) is a means of building very large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed. 
A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and whilst performance benefits can be had from integrating all needed components on one die, the cost of licensing and developing a one-die machine still outweighs that of having separate devices. With appropriate licensing, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging). Further, signal sources and destinations are physically closer on die, reducing the length of wiring and therefore latency, transmission power costs and waste heat from communication between modules on the same chip. This has led to an exploration of so-called Network-on-Chip (NoC) devices, which apply system-on-chip design methodologies to digital communication networks as opposed to traditional bus architectures. A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation. == Silicon labeling and graffiti == To allow identification during production, most silicon chips will have a serial number in one corner. It is also common to add the manufacturer's logo. Ever since ICs were created, some chip designers have used the silicon surface area for surreptitious, non-functional images or words. These artistic additions, often created with great attention to detail, showcase the designers' creativity and add a touch of personality to otherwise utilitarian components. These are sometimes referred to as chip art, silicon art, silicon graffiti or silicon doodling. == ICs and IC families == The 555 timer IC The operational amplifier 7400-series integrated circuits 4000-series integrated circuits, the CMOS counterpart to the 7400 series (see also: 74HC00 series) Intel 4004, generally regarded as the first commercially available microprocessor, which led to the 8008, the famous 8080 CPU, the 8086, 8088 (used in the original IBM PC), and the fully backward-compatible (with the 8088/8086) 80286, 80386/i386, i486, etc. The MOS Technology 6502 and Zilog Z80 microprocessors, used in many home computers of the early 1980s The Motorola 6800 series of computer-related chips, leading to the 68000 and 88000 series (the 68000 series was very successful and was used in the Apple Lisa and pre-PowerPC-based Macintosh, Commodore Amiga, Atari ST/TT/Falcon030, and NeXT families of computers, along with many models of workstations and servers from many manufacturers in the 80s, as well as many other systems and devices) The LM-series of analog integrated circuits == See also == Central processing unit Chip carrier CHIPS and Science Act Chipset Czochralski method Dark silicon Ion implantation Integrated injection logic Integrated passive devices Interconnect bottleneck Heat generation in integrated circuits High-temperature operating life Microelectronics Monolithic microwave integrated circuit Multi-threshold CMOS Silicon–germanium Sound chip SPICE Thermal simulations for integrated circuits Hybrot == References == == Further reading == Veendrick, H.J.M. 
(2025). Nanometer CMOS ICs, from Basics to ASICs. Springer. ISBN 978-3-031-64248-7. OCLC 1463505655. Baker, R.J. (2010). CMOS: Circuit Design, Layout, and Simulation (3rd ed.). Wiley-IEEE. ISBN 978-0-470-88132-3. OCLC 699889340. Marsh, Stephen P. (2006). Practical MMIC design. Artech House. ISBN 978-1-59693-036-0. OCLC 1261968369. Camenzind, Hans (2005). Designing Analog Chips (PDF). Virtual Bookworm. ISBN 978-1-58939-718-7. OCLC 926613209. Archived from the original (PDF) on 12 June 2017. Hans Camenzind invented the 555 timer Hodges, David; Jackson, Horace; Saleh, Resve (2003). Analysis and Design of Digital Integrated Circuits. McGraw-Hill. ISBN 978-0-07-228365-5. OCLC 840380650. Rabaey, J.M.; Chandrakasan, A.; Nikolic, B. (2003). Digital Integrated Circuits (2nd ed.). Pearson. ISBN 978-0-13-090996-1. OCLC 893541089. Mead, Carver; Conway, Lynn (1991). Introduction to VLSI systems. Addison Wesley Publishing Company. ISBN 978-0-201-04358-7. OCLC 634332043. == External links == Media related to Integrated circuits at Wikimedia Commons The first monolithic integrated circuits A large chart listing ICs by generic number including access to most of the datasheets for the parts. The History of the Integrated Circuit
Wikipedia/Monolithic_integrated_circuit
The purpose of electromechanical modeling is to model and simulate an electromechanical system, such that its physical parameters can be examined before the actual system is built. A major objective of electromechanical modeling is parameter estimation, using estimation theory coupled with physical experiments, followed by physical realization after proper evaluation of the stability criteria of the overall system. The theory-driven mathematical model can also be applied to other systems to judge the performance of the joint system as a whole. This is a well-known and proven technique for designing large control systems for industrial as well as academic multi-disciplinary complex systems. More recently, this technique has also been employed in MEMS technology. == Different types of mathematical modeling == The modeling of purely mechanical systems is mainly based on the Lagrangian, which is a function of the generalized coordinates and the associated velocities. If all forces are derivable from a potential, then the time behavior of the dynamical system is completely determined. For simple mechanical systems, the Lagrangian is defined as the difference of the kinetic energy and the potential energy. There exists a similar approach for electrical systems. By means of the electrical coenergy and well-defined power quantities, the equations of motion are uniquely defined. The currents of the inductors and the voltage drops across the capacitors play the role of the generalized coordinates. All constraints, for instance those imposed by Kirchhoff's laws, are eliminated from the considerations. After that, a suitable transfer function, which eventually governs the behavior of the system, is derived from the system parameters. In consequence, we have quantities (kinetic and potential energy, generalized forces) which determine the mechanical part and quantities (coenergy, powers) for the description of the electrical part. This offers a combination of the mechanical and electrical parts by means of an energy approach. As a result, an extended Lagrangian formulation is produced. == See also == Mechatronics Mechanical–electrical analogies == References == Dean C. Karnopp; Donald L. Margolis; Ronald C. Rosenberg (1999). System Dynamics: Modeling and Simulation of Mechatronic Systems. Wiley-Interscience. ISBN 0-471-33301-8. Sergey Edward Lyshevski (1999). Electromechanical Systems, Electric Machines, and Applied Mechatronics. CRC. ISBN 0-8493-2275-8. A.F.M. Sajidul Qadir (2013). Electro-Mechanical Modeling of SEDM (Separately Excited DC Motor) & Performance Improvement Using Different Industrial Controllers. Lulu.com. ISBN 978-1-304-22765-2.
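As a compact, hypothetical illustration of the energy-based Lagrangian approach described above, the sketch below derives the equation of motion of a simple LC circuit by treating the capacitor charge as the generalized coordinate; the symbols L and C are assumed parameters, and SymPy is used only for the symbolic bookkeeping, so this is an illustrative sketch rather than a full electromechanical model.

```python
# Minimal sketch, assuming SymPy is available; L and C are assumed symbolic parameters.
import sympy as sp

t = sp.symbols('t')
L, C = sp.symbols('L C', positive=True)   # inductance and capacitance (assumed)
q = sp.Function('q')(t)                   # generalized coordinate: capacitor charge
qdot = sp.diff(q, t)                      # generalized velocity: inductor current

T = sp.Rational(1, 2) * L * qdot**2       # magnetic ("kinetic") energy in the inductor
V = q**2 / (2 * C)                        # electric ("potential") energy in the capacitor
Lagrangian = T - V

# Euler-Lagrange equation: d/dt(dL/d(qdot)) - dL/dq = 0
eq = sp.Eq(sp.diff(sp.diff(Lagrangian, qdot), t) - sp.diff(Lagrangian, q), 0)
print(eq)   # L*q''(t) + q(t)/C = 0, the familiar LC-oscillator equation
```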
Wikipedia/Electromechanical_modeling
A flight control computer (FCC) is a primary component of the avionics system found in fly-by-wire aircraft. It is a specialized computer system that can create artificial flight characteristics and improve handling characteristics by automating a variety of in-flight tasks which reduce the workload on the cockpit flight crew. A flight control computer receives and processes data from a multitude of sensors throughout the aircraft. These sensors monitor variables such as airspeed, altitude, and attitude (the aircraft's orientation in three-dimensional space). Embedded within integrated avionics packages, it executes critical functions such as guidance, navigation. It also controls the plane's flight control surfaces, such as the ailerons, elevators, and rudder. A dedicated flight control computer handles high-level computational tasks, including routing, autopilot functions, and flight management. This computer interfaces with the avionics system and is responsible for displaying flight data on the cockpit's flight deck. The flight control system must be fault tolerant, and for that purpose there can exist several primary flight control computers (PFCC) and secondary flight control computers (SFCC), which monitors the data output from PFCC and in the case of failure, SFCC can take over the flight controls. In the Boeing 777 there are three primary flight control computers located in the aircraft's electronic equipment bay, responsible for computing and transmitting commands for normal mode flight control surfaces to maintain normal flight, including rudder, elevators, ailerons, flaperons, horizontal stabilizer, multi-functional spoilers, and ground spoilers. == References ==
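As a purely hypothetical sketch of the redundancy arrangement described above (multiple primary computers whose outputs are monitored, with a secondary able to take over on failure), the following code median-votes three primary channel commands and falls back to a secondary value when fewer than two channels agree; all names, values, and thresholds are illustrative assumptions and do not describe any real avionics system.

```python
# Illustrative failover/voting sketch only; tolerance and values are assumptions.
from statistics import median

def vote(primary_outputs, tolerance=0.5):
    """Median-vote the primary surface commands and flag channels that disagree."""
    m = median(primary_outputs)
    healthy = [abs(x - m) <= tolerance for x in primary_outputs]
    return m, healthy

def select_command(primary_outputs, secondary_output):
    """Use the voted primary command unless fewer than two channels agree."""
    command, healthy = vote(primary_outputs)
    if sum(healthy) >= 2:
        return command          # normal mode: primaries remain in control
    return secondary_output     # degraded mode: secondary takes over

# Example: one primary channel has failed high; the voted command is still used
print(select_command([2.1, 2.0, 9.7], secondary_output=2.05))  # prints 2.1
```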
Wikipedia/Flight_control_computer
This list includes well-known general theories in science and pre-scientific natural philosophy and natural history that have since been superseded by other scientific theories. Many discarded explanations were once supported by a scientific consensus, but replaced after more empirical information became available that identified flaws and prompted new theories which better explain the available data. Pre-modern explanations originated before the scientific method, with varying degrees of empirical support. Some scientific theories are discarded in their entirety, such as the replacement of the phlogiston theory by energy and thermodynamics. Some theories known to be incomplete or in some ways incorrect are still used. For example, Newtonian classical mechanics is accurate enough for practical calculations at everyday distances and velocities, and it is still taught in schools. The more complicated relativistic mechanics must be used for long distances and velocities nearing the speed of light, and quantum mechanics for very small distances and objects. Some aspects of discarded theories are reused in modern explanations. For example, miasma theory proposed that all diseases were transmitted by "bad air". The modern germ theory of disease has found that diseases are caused by microorganisms, which can be transmitted by a variety of routes, including touching a contaminated object, blood, and contaminated water. Malaria was discovered to be a mosquito-borne disease, explaining why avoiding the "bad air" near swamps prevented it. Increasing ventilation of fresh air, one of the remedies proposed by miasma theory, does remain useful in some circumstances to expel germs spread by airborne transmission, such as SARS-CoV-2. Some theories originate in, or are perpetuated by, pseudoscience, which claims to be both scientific and factual, but fails to follow the scientific method. Scientific theories are testable and make falsifiable predictions. Thus, it can be a mark of good science if a discipline has a growing list of superseded theories, and conversely, a lack of superseded theories can indicate problems in following the use of the scientific method. Fringe science includes theories that are not currently supported by a consensus in the mainstream scientific community, either because they never had sufficient empirical support, because they were previously mainstream but later disproven, or because they are preliminary theories also known as protoscience which go on to become mainstream after empirical confirmation. Some theories, such as Lysenkoism, race science or female hysteria have been generated for political rather than empirical reasons and promoted by force. == Science == === Discarded scientific theories === ==== Biology ==== Spontaneous generation – a principle regarding the spontaneous generation of complex life from inanimate matter, which held that this process was a commonplace and everyday occurrence, as distinguished from univocal generation, or reproduction from parent(s). Falsified by an experiment by Louis Pasteur: where apparently spontaneous generation of microorganisms occurred, it did not happen on repeating the process without access to unfiltered air; on then opening the apparatus to the atmosphere, bacterial growth started. Transmutation of species, Inheritance of acquired characteristics, Lysenkoism – first theories of evolution. 
Not supported by experiment, and rendered obsolete by Darwinian evolution and Mendelian genetics, combined in the modern synthesis which finds that genes in the form of DNA are the primary way parental characteristics are passed to descendants. Discoveries in epigenetics have shown that in some very limited ways, the life experiences of organisms can affect the development of their children. Vitalism – the theory that living things are alive because of some "vital force" independent of matter, as opposed to because of some appropriate assembly of matter. It was gradually discredited by the rise of organic chemistry, biochemistry, and molecular biology, fields that failed to discover any "vital force." Friedrich Wöhler's synthesis of urea from ammonium cyanate was only one step in a long road, not a great refutation. Maternal impression – the theory that the mother's thoughts created birth defects. No experimental support (a notion rather than a theory), and rendered obsolete by genetic theory (see also fetal origins of adult disease, genomic imprinting). Preformationism – the theory that all organisms have existed since the beginning of life, and that gametes contain a miniature but complete preformed individual, and in the case of humans, a homunculus. No support when microscopy became available. Rendered obsolete by cytology, discovery of DNA, and atomic theory. Recapitulation theory – the theory that "ontogeny recapitulates phylogeny". See Baer's laws of embryology. Telegony – the theory that an offspring can inherit characteristics from a previous mate of its mother's as well as its actual parents, often associated with racism. Out of Asia theory of human origin – The majority view is of a recent African origin of modern humans, although a multiregional origin of modern humans hypothesis has much support (which incorporates past evidence of Asian origins). Scientific racism – the theory that humanity consists of physically discrete superior or inferior races. Rendered obsolete by Human evolutionary genetics and modern anthropology. Germ line theory, explained immunoglobulin diversity by proposing that each antibody was encoded in a separate germline gene. ==== Chemistry ==== Energeticism – a theory that attempted to reinterpret all chemistry in terms of energy, rejecting the concept of atoms. Caloric theory – the theory that a self-repelling fluid called "caloric" was the substance of heat. Rendered obsolete by the mechanical theory of heat. Origin of the calorie's name, a unit of energy still used for nutrition in some countries. Classical elements – All matter was once thought composed of various combinations of classical elements (most famously air, earth, fire, and water). Antoine Lavoisier finally refuted this in his 1789 publication, Elements of Chemistry, which contained the first modern list of chemical elements. Electrochemical dualism – the theory that all molecules are salts composed of basic and acidic oxides Phlogiston theory – The theory that combustible goods contain a substance called "phlogiston" that entered air during combustion. Replaced by Lavoisier's work on oxidation. Point 2 of Dalton's Atomic Theory was rendered obsolete by discovery of isotopes, and point 3 by discovery of subatomic particles and nuclear reactions. Radical theory – the theory that organic compounds exist as combinations of radicals that can be exchanged in chemical reactions just as chemical elements can be interchanged in inorganic compounds. Vitalism – See section on Biology. 
Nascent state refers to the form of a chemical element (or sometimes compound) in the instance of their liberation or formation. Often encountered are atomic oxygen (Onasc) and nascent hydrogen (Hnasc), and chlorine (Clnasc) or bromine (Brnasc). Polywater, a hypothesized polymer form of water, the properties of which actually arose from contaminants such as sweat. ==== Physics ==== Corpuscularianism – theory that matter, gravity, light and magnetism is composed of tiny corpuscles Corpuscular theory of light Emission theory of vision – the belief that vision is caused by rays emanating from the eyes was superseded by the intro-mission approach and more complex theories of vision. Aristotelian physics – superseded by Newtonian physics. Ptolemy's law of refraction, replaced by Snell's law. Luminiferous aether – failed to be detected by the sufficiently sensitive Michelson–Morley experiment, made obsolete by Einstein's work. Caloric theory – Lavoisier's successor to phlogiston, discredited by Rumford's and Joule's work. Contact tension – a theory on the source of electricity. Vis viva – Gottfried Leibniz's elementary and limited early formulation of the principle of conservation of energy. Horror vacui/plenum – concept that nature 'abhors' the existence of vacuum. Imponderable fluid – various fluids used to explain the nature of heat and electricity in terms of undetectable fluids "Purely electrostatic" theories of the generation of voltage differences. Emitter theory – another now-obsolete theory of light propagation. Electromotive force § History – the original theory by Alessandro Volta misunderstood the active agent of a voltaic cell to be a new type of force acting on the charges generated merely from contact of the electrodes. Michael Faraday later correctly explained the active agent was chemical reactions. Line of force – pre-existing theory to field. Balance of nature – superseded by catastrophe theory and chaos theory. Progression of atomic theory Democritus, the originator of atomic theory, held that everything is composed of atoms that are indestructible. His claim that atoms are indestructible is not the reason it is superseded—as it was later scientists who identified the concept of atoms with particles, which later science showed are destructible. Democritus' theory is superseded because of his position that several kinds of atoms explain pure materials like water or iron, and characteristics that science now identifies with molecules rather than with indestructible primary particles. Democritus also held that between atoms, an empty space of a different nature than atoms allowed atoms to move. This view on space and matter persisted until Einstein described spacetime as being relative and connected to matter. John Dalton's model of the atom, which held that atoms are indivisible and indestructible (superseded by nuclear physics) and that all atoms of a given element are identical in mass (superseded by discovery of atomic isotopes). Plum pudding model of the atom—assuming the protons and electrons were mixed together in a single mass Rutherford model of the atom with an impenetrable nucleus orbited by electrons Bohr model with quantized orbits Electron cloud model following the development of quantum mechanics in 1925 and the eventual atomic orbital models derived from the quantum mechanical solution to the hydrogen atom ==== Astronomy and cosmology ==== Ptolemaic system – superseded by Nicolaus Copernicus' heliocentric model. 
Geocentric universe – superseded by Copernicus Copernican system – superseded by Tychonic system Heliocentric universe – made obsolete by discovery of the structure of the Milky Way and the redshift of most galaxies. Heliocentrism only applies to the selected Solar System, and only approximately, since the Sun's center is not at the Solar System's center of mass. Superseded by barycentric coordinates. Aristotelian Dynamics of the celestial spheres superseded by the Elliptic orbit and Kepler's laws of planetary motion Tychonic system – superseded by Newton's laws of motion Luminiferous aether theory Static Universe theory Steady state theory, a model developed by Hermann Bondi, Thomas Gold, and Fred Hoyle whereby the expanding universe was in a steady state, and had no beginning. It was a competitor of the Big Bang model until evidence supporting the Big Bang and falsifying the steady state was found. ==== Geography and climate ==== Buenaventura River Flat Earth theory, generally known to be false among educated people in various ancient and medieval societies Terra Australis, which technically is Antarctica, but the original idea was based on an unproven belief that land in the Northern hemisphere must have a Southern counterpart for balance. Hollow Earth theory The Open Polar Sea, an ice-free sea once supposed to surround the North Pole Rain follows the plow – the theory that human settlement increases rainfall in arid regions (only true to the extent that crop fields evapotranspirate more than barren wilderness) Island of California – the theory that California was not part of mainland North America but rather a large island Inland sea of Australia Pre-modern environmental determinism (as explanations for moral behavior, as opposed to modern theories such as factor endowments, state formation, and theories of the social effects of global warming) Climatic determinism Topographic determinism Moral geography Cultural acclimatization Global cooling Drainage divides as always being made up by hills and mountains. Ancient and medieval concepts surrounding the antipodes, including the related theories of antichthones and the alleged existence of a torrid zone ==== Geology ==== Abiogenic petroleum origin – While some petroleum or natural gas is almost certainly abiogenic, the vast majority has origins as living organisms Catastrophism was largely replaced by uniformitarianism and neocatastrophism Cryptoexplosion craters, now discarded in favour of impact craters and ordinary volcanism. Flood geology replaced by modern geology and stratigraphy Neptunism replaced by plutonism and volcanism Granitization, a discredited alternative to a magmatic origin of granites Monoglaciation, the idea that the Earth had a single ice age, replaced by polyglaciation, the idea that the Earth has gone through several periods of widespread ice cover. Oscillation theory of land-level rise and subsidence during deglaciation The following were superseded by plate tectonics: Elevation crater theory Expanding Earth theory (superseded by subduction) Contracting Earth Geosyncline theory Haarman's Oscillation theory Various lost landmasses including Lemuria ==== Psychology ==== Pure behaviorist explanations for language acquisition in infancy, falsified by the study of cognitive adaptations for language. Psychomotor patterning, a pseudoscientific approach to the treatment of intellectual disabilities, brain injury, learning disabilities, and other cognitive diseases. 
==== Medicine ==== Theory of the four bodily humours (see also Four temperaments) Heroic medicine – a therapeutic method derived from the belief in bodily humour imbalances as the cause of ailments. Miasma theory of disease – the theory that diseases are caused by "bad air". No experimental support, and rendered obsolete by the germ theory of disease. Phrenology – a theory of highly localised brain function popular in 19th century medicine. Homeopathy – a theory according to which a disease can be cured by infinitesimal doses of the substance that caused it Eclectic medicine – transformed into alternative medicine, and is no longer considered a scientific theory Physiognomy, related to phrenology, held that inner character was strongly correlated with physical appearance Tooth worm, an erroneous theory of the cause of dental caries, periodontitis, and toothaches === Obsolete branches of enquiry === Alchemy, which led to the development of chemistry Astrology, which led to the development of astronomy Phrenology, a pseudoscience Numerology, a pseudoscience === Theories now considered incomplete === These are theories that are no longer considered the most complete representation of reality, but that remain useful in particular domains or under certain conditions. For some theories, a more complete model is known, but for practical use, the coarser approximation provides good results with much less calculation. Newtonian mechanics was extended by the theory of relativity and by quantum mechanics. Relativistic corrections to Newtonian mechanics are negligible at velocities well below the speed of light, and quantum corrections are usually negligible at atomic or larger scales; Newtonian mechanics is entirely satisfactory in engineering and physics under most circumstances. The anomalous perihelion precession of Mercury was the first observational evidence that relativity was a more accurate model than Newtonian gravity. Classical electrodynamics is a very close approximation to quantum electrodynamics except at very small scales and low field strengths. The Bohr model of the atom was extended by the quantum mechanical model of the atom. The formula known as Newton's sine-square law of air resistance for the force of a fluid on a body was not actually formulated by Newton but by others using a method of calculation used by Newton; it has been found incorrect and not useful except for high-speed hypersonic flow. The once-popular cycle of erosion is now considered one of many possibilities for landscape evolution. The theory of continental drift was incorporated into and improved upon by plate tectonics. Rational choice theory as a model of human behavior Mendelian genetics, classical genetics, Boveri–Sutton chromosome theory – first genetic theories. Not invalidated as such, but subsumed into molecular genetics. == See also == Pseudoscience Scientific theory Philosophy of science Protoscience Fringe science Pathological science Paradigm shift History of evolutionary thought Creation–evolution controversy === Lists === List of common misconceptions, including those about scientific subjects List of discredited substances List of experiments List of topics characterized as pseudoscience List of incorrect mathematical proofs == Notes == == References == == External links == Media related to Obsolete scientific theories at Wikimedia Commons
Wikipedia/Superseded_theories_in_science
The theory of solar cells explains the process by which light energy in photons is converted into electric current when the photons strike a suitable semiconductor device. The theoretical studies are of practical use because they predict the fundamental limits of a solar cell, and give guidance on the phenomena that contribute to losses and solar cell efficiency. == Working explanation == Photons in sunlight hit the solar panel and are absorbed by semiconducting materials. Electrons (negatively charged) are knocked loose from their atoms as they are excited. Due to their special structure and the materials in solar cells, the electrons are only allowed to move in a single direction. The electronic structure of the materials is very important for the process to work, and often silicon incorporating small amounts of boron or phosphorus is used in different layers. An array of solar cells converts solar energy into a usable amount of direct current (DC) electricity. == Photogeneration of charge carriers == When a photon hits a piece of semiconductor, one of three things can happen: The photon can pass straight through the semiconductor; this (generally) happens for lower energy photons. The photon can reflect off the surface. The photon can be absorbed by the semiconductor if the photon energy is higher than the band gap value. This generates an electron-hole pair and sometimes heat, depending on the band structure. When a photon is absorbed, its energy is given to an electron in the crystal lattice. Usually this electron is in the valence band. The energy given to the electron by the photon "excites" it into the conduction band, where it is free to move around within the semiconductor. The network of covalent bonds that the electron was previously a part of now has one fewer electron. This is known as a hole, and it has positive charge. The presence of a missing covalent bond allows the bonded electrons of neighboring atoms to move into the "hole", leaving another hole behind, thus propagating holes throughout the lattice in the opposite direction to the movement of the negatively charged electrons. It can be said that photons absorbed in the semiconductor create electron-hole pairs. A photon only needs to have energy greater than that of the band gap in order to excite an electron from the valence band into the conduction band. However, the solar frequency spectrum approximates a black body spectrum at about 5,800 K, and as such, much of the solar radiation reaching the Earth is composed of photons with energies greater than the band gap of silicon (1.12 eV), which is close to the ideal value for a terrestrial solar cell (1.4 eV). These higher energy photons will be absorbed by a silicon solar cell, but the difference in energy between these photons and the silicon band gap is converted into heat (via lattice vibrations, called phonons) rather than into usable electrical energy. == The p–n junction == The most commonly known solar cell is configured as a large-area p–n junction made from silicon. As a simplification, one can imagine bringing a layer of n-type silicon into direct contact with a layer of p-type silicon. n-type doping produces mobile electrons (leaving behind positively charged donors) while p-type doping produces mobile holes (and negatively charged acceptors). In practice, p–n junctions of silicon solar cells are not made in this way, but rather by diffusing an n-type dopant into one side of a p-type wafer (or vice versa).
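As a concrete illustration of the absorption condition above, the following Python sketch checks whether photons at a few illustrative wavelengths exceed the 1.12 eV silicon band gap, and how much of their energy would be lost as heat. The constants are standard values and the wavelengths are arbitrary choices, not taken from the article.

```python
# Sketch: which photon wavelengths can create an electron-hole pair in silicon?
# Assumes E_photon = h*c / wavelength and the 1.12 eV silicon band gap cited above.
H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt
E_GAP_SI_EV = 1.12   # silicon band gap, eV

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given vacuum wavelength in nanometres."""
    return H * C / (wavelength_nm * 1e-9) / EV

for wl in (400, 700, 1000, 1200):  # illustrative wavelengths, nm
    e = photon_energy_ev(wl)
    print(f"{wl:5d} nm -> {e:.2f} eV  absorbed: {e > E_GAP_SI_EV}, "
          f"excess lost as heat: {max(e - E_GAP_SI_EV, 0):.2f} eV")
```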
If a piece of p-type silicon is placed in close contact with a piece of n-type silicon, then a diffusion of electrons occurs from the region of high electron concentration (the n-type side of the junction) into the region of low electron concentration (the p-type side of the junction). When the electrons diffuse into the p-type side, each one annihilates a hole, making that side net negatively charged (because the number of mobile positive holes is now less than the number of negative acceptors). Similarly, holes diffusing to the n-type side make it more positively charged. However (in the absence of an external circuit) this diffusion current of carriers does not go on indefinitely, because the charge build-up on either side of the junction produces an electric field that opposes further diffusion of more charges. Eventually, an equilibrium is reached where the net current is zero, leaving a region on either side of the junction where electrons and holes have diffused across the junction and annihilated each other, called the depletion region because it contains practically no mobile charge carriers. It is also known as the space charge region, although space charge extends a bit further in both directions than the depletion region. Once equilibrium is established, electron-hole pairs generated in the depletion region are separated by the electric field, with the electron attracted to the positive n-type side and holes to the negative p-type side, reducing the charge (and the electric field) built up by the diffusion just described. If the device is unconnected (or the external load is very high) then diffusion current would eventually restore the equilibrium charge by bringing the electron and hole back across the junction, but if the load connected is small enough, the electrons prefer to go around the external circuit in their attempt to restore equilibrium, doing useful work on the way. == Charge carrier separation == There are two causes of charge carrier motion and separation in a solar cell: drift of carriers, driven by the electric field, with electrons being pushed one way and holes the other way; and diffusion of carriers from zones of higher carrier concentration to zones of lower carrier concentration (following a gradient of chemical potential). These two "forces" may work one against the other at any given point in the cell. For instance, an electron moving through the junction from the p region to the n region (as in the diagram at the beginning of this article) is being pushed by the electric field against the concentration gradient. The same goes for a hole moving in the opposite direction. It is easiest to understand how a current is generated when considering electron-hole pairs that are created in the depletion zone, which is where there is a strong electric field. The electron is pushed by this field toward the n side and the hole toward the p side. (This is opposite to the direction of current in a forward-biased diode, such as a light-emitting diode in operation.) When the pair is created outside the space charge zone, where the electric field is smaller, diffusion also acts to move the carriers, but the junction still plays a role by sweeping any electrons that reach it from the p side to the n side, and by sweeping any holes that reach it from the n side to the p side, thereby creating a concentration gradient outside the space charge zone.
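The equilibrium just described is often quantified by the junction's built-in potential. The expression Vbi = (kT/q)·ln(NA·ND/ni²) used in the Python sketch below does not appear in the article; it and the doping values are standard textbook assumptions, shown only to give a sense of scale.

```python
# Sketch: built-in potential of a silicon p-n junction at room temperature.
# The formula V_bi = (kT/q) * ln(N_A * N_D / n_i**2) and all numbers below are
# textbook assumptions, not taken from the article.
import math

K_B = 1.381e-23   # Boltzmann constant, J/K
Q = 1.602e-19     # elementary charge, C
T = 300.0         # temperature, K
N_I = 1.0e10      # approximate intrinsic carrier concentration of silicon, cm^-3

def built_in_potential(n_acceptors_cm3: float, n_donors_cm3: float) -> float:
    """Return the built-in potential in volts for the given doping levels."""
    v_thermal = K_B * T / Q
    return v_thermal * math.log(n_acceptors_cm3 * n_donors_cm3 / N_I ** 2)

# Assumed doping: 1e16 cm^-3 acceptors (p side), 1e19 cm^-3 donors (n side).
print(f"V_bi ≈ {built_in_potential(1e16, 1e19):.2f} V")
```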
In thick solar cells there is very little electric field in the active region outside the space charge zone, so the dominant mode of charge carrier separation is diffusion. In these cells the diffusion length of minority carriers (the length that photo-generated carriers can travel before they recombine) must be large compared to the cell thickness. In thin film cells (such as amorphous silicon), the diffusion length of minority carriers is usually very short due to the existence of defects, and the dominant charge separation is therefore drift, driven by the electrostatic field of the junction, which extends to the whole thickness of the cell. Once the minority carrier enters the drift region, it is 'swept' across the junction and, at the other side of the junction, becomes a majority carrier. This reverse current is a generation current, fed both thermally and (if present) by the absorption of light. On the other hand, majority carriers are driven into the drift region by diffusion (resulting from the concentration gradient), which leads to the forward current; only the majority carriers with the highest energies (in the so-called Boltzmann tail; cf. Maxwell–Boltzmann statistics) can fully cross the drift region. Therefore, the carrier distribution in the whole device is governed by a dynamic equilibrium between reverse current and forward current. == Connection to an external load == Ohmic metal-semiconductor contacts are made to both the n-type and p-type sides of the solar cell, and the electrodes connected to an external load. Electrons that are created on the n-type side, or created on the p-type side, "collected" by the junction and swept onto the n-type side, may travel through the wire, power the load, and continue through the wire until they reach the p-type semiconductor-metal contact. Here, they recombine with a hole that was either created as an electron-hole pair on the p-type side of the solar cell, or a hole that was swept across the junction from the n-type side after being created there. The voltage measured is equal to the difference in the quasi Fermi levels of the majority carriers (electrons in the n-type portion and holes in the p-type portion) at the two terminals. == Equivalent circuit of a solar cell == An equivalent circuit model of an ideal solar cell's p–n junction uses an ideal current source (whose photogenerated current I L {\displaystyle I_{\text{L}}} increases with light intensity) in parallel with a diode (whose current I D {\displaystyle I_{\text{D}}} represents recombination losses). To account for resistive losses, a shunt resistance R SH {\displaystyle R_{\text{SH}}} and a series resistance R S {\displaystyle R_{\text{S}}} are added as lumped elements. The resulting output current I out {\displaystyle I_{\text{out}}} equals the photogenerated current minus the currents through the diode and shunt resistor: I out = I L − I D − I SH {\displaystyle I_{\text{out}}=I_{\text{L}}-I_{\text{D}}-I_{\text{SH}}} The junction voltage (across both the diode and shunt resistance) is: V j = V out + I out R S {\displaystyle V_{\text{j}}=V_{\text{out}}+I_{\text{out}}\,R_{\text{S}}} where V out {\displaystyle V_{\text{out}}} is the voltage across the output terminals. 
The leakage current I SH {\displaystyle I_{\text{SH}}} through the shunt resistor is proportional to the junction's voltage V j {\displaystyle V_{\text{j}}} , according to Ohm's law: I SH = V j R SH {\displaystyle I_{\text{SH}}={\frac {V_{\text{j}}}{R_{\text{SH}}}}} By the Shockley diode equation, the current diverted through the diode is: I D = I 0 { exp ⁡ [ V j n V T ] − 1 } {\displaystyle I_{\text{D}}=I_{0}\left\{\exp \left[{\frac {V_{\text{j}}}{nV_{\text{T}}}}\right]-1\right\}} where I0, reverse saturation current n, diode ideality factor (1 for an ideal diode) q, elementary charge k, Boltzmann constant T, absolute temperature V T = k T / q , {\displaystyle V_{\text{T}}=kT/q,} the thermal voltage. At 25 °C, V T ≈ 0.0259 {\displaystyle V_{\text{T}}\approx 0.0259} volt. Substituting these into the first equation produces the characteristic equation of a solar cell, which relates solar cell parameters to the output current and voltage: I out = I L − I 0 { exp ⁡ [ V out + I out R S n V T ] − 1 } − V out + I out R S R SH . {\displaystyle I_{\text{out}}=I_{\text{L}}-I_{0}\left\{\exp \left[{\frac {V_{\text{out}}+I_{\text{out}}R_{\text{S}}}{nV_{\text{T}}}}\right]-1\right\}-{\frac {V_{\text{out}}+I_{\text{out}}R_{\text{S}}}{R_{\text{SH}}}}.} An alternative derivation produces an equation similar in appearance, but with V out {\displaystyle V_{\text{out}}} on the left-hand side. The two alternatives are identities; that is, they yield precisely the same results. Since the parameters I0, n, RS, and RSH cannot be measured directly, the most common application of the characteristic equation is nonlinear regression to extract the values of these parameters on the basis of their combined effect on solar cell behavior. When RS is not zero, the above equation does not give I out {\displaystyle I_{\text{out}}} directly, but it can then be solved using the Lambert W function: I out = ( I L + I 0 ) − V out / R SH 1 + R S / R SH − n V T R S W ( I 0 R S n V T ( 1 + R S / R SH ) exp ⁡ ( V out n V T ( 1 − R S R S + R SH ) + ( I L + I 0 ) R S n V T ( 1 + R S / R SH ) ) ) {\displaystyle I_{\text{out}}={\frac {(I_{\text{L}}+I_{0})-V_{\text{out}}/R_{\text{SH}}}{1+R_{\text{S}}/R_{\text{SH}}}}-{\frac {nV_{\text{T}}}{R_{\text{S}}}}W\left({\frac {I_{0}R_{\text{S}}}{nV_{\text{T}}(1+R_{\text{S}}/R_{\text{SH}})}}\exp \left({\frac {V_{\text{out}}}{nV_{\text{T}}}}\left(1-{\frac {R_{\text{S}}}{R_{\text{S}}+R_{\text{SH}}}}\right)+{\frac {(I_{\text{L}}+I_{0})R_{\text{S}}}{nV_{\text{T}}(1+R_{\text{S}}/R_{\text{SH}})}}\right)\right)} When an external load is used with the cell, its resistance can simply be added to RS and V out {\displaystyle V_{\text{out}}} set to zero in order to find the current. When I 0 R S / n V T {\displaystyle I_{0}R_{\text{S}}/nV_{\text{T}}} is small, we can use the approximation x − 1 W ( x y ) → y {\displaystyle x^{-1}W\left(xy\right)\to y} as x → 0 {\displaystyle x\to 0} to produce something much easier to work with I out = ( I L + I 0 ) − V out / R SH 1 + R S / R SH − I 0 ( 1 + R S / R SH ) exp ⁡ ( V out n V T ( 1 − R S R S + R SH ) + ( I L + I 0 ) R S n V T ( 1 + R S / R SH ) ) . 
{\displaystyle I_{\text{out}}={\frac {(I_{\text{L}}+I_{0})-V_{\text{out}}/R_{\text{SH}}}{1+R_{\text{S}}/R_{\text{SH}}}}-{\frac {I_{0}}{(1+R_{\text{S}}/R_{\text{SH}})}}\exp \left({\frac {V_{\text{out}}}{nV_{\text{T}}}}\left(1-{\frac {R_{\text{S}}}{R_{\text{S}}+R_{\text{SH}}}}\right)+{\frac {(I_{\text{L}}+I_{0})R_{\text{S}}}{nV_{\text{T}}(1+R_{\text{S}}/R_{\text{SH}})}}\right).} Several further simplifications are now possible, such as when R S ≪ R SH {\displaystyle R_{\text{S}}\ll R_{\text{SH}}} which leads to I out = I L + I 0 − V out R SH − I 0 exp ⁡ ( V out + ( I L + I 0 ) R S n V T ) . {\displaystyle I_{\text{out}}=I_{\text{L}}+I_{0}-{\frac {V_{\text{out}}}{R_{\text{SH}}}}-I_{0}\exp \left({\frac {V_{\text{out}}+(I_{\text{L}}+I_{0})R_{\text{S}}}{nV_{\text{T}}}}\right).} When the current generated by the PV is large compared with the current in the shunt, i.e. I L ≫ V out / R SH {\displaystyle I_{\text{L}}\gg V_{\text{out}}/R_{\text{SH}}} (because the shunt resistance is large) there is an analytical solution for V out {\displaystyle V_{\text{out}}} for any I out {\displaystyle I_{\text{out}}} less than I L + I 0 {\displaystyle I_{\text{L}}+I_{0}} : V out = n V T ln ⁡ ( I L − I out I 0 + 1 ) − ( I L + I 0 ) R S . {\displaystyle V_{\text{out}}=nV_{\text{T}}\ln \left({\frac {I_{\text{L}}-I_{\text{out}}}{I_{0}}}+1\right)-(I_{\text{L}}+I_{0})R_{\text{S}}.} Otherwise one can solve for V out {\displaystyle V_{\text{out}}} using the Lambert W function: V = ( I L + I 0 ) R SH − I ( R S + R SH ) − n V T W ( I 0 R SH n V T exp ⁡ ( ( I L + I 0 − I ) R SH n V T ) ) {\displaystyle V=(I_{\text{L}}+I_{0})R_{\text{SH}}-I(R_{\text{S}}+R_{\text{SH}})-nV_{\text{T}}W\left({\frac {I_{0}R_{\text{SH}}}{nV_{\text{T}}}}\exp \left({\frac {(I_{\text{L}}+I_{0}-I)R_{\text{SH}}}{nV_{\text{T}}}}\right)\right)} However, when RSH is large it's better to solve the original equation numerically. The general form of the solution is a curve with I out {\displaystyle I_{\text{out}}} decreasing as V out {\displaystyle V_{\text{out}}} increases (see graphs lower down). The slope at small or negative V out {\displaystyle V_{\text{out}}} (where the W function is near zero) approaches − 1 / ( R S + R SH ) {\displaystyle -1/(R_{\text{S}}+R_{\text{SH}})} , whereas the slope at high V out {\displaystyle V_{\text{out}}} approaches − 1 / R S {\displaystyle -1/R_{\text{S}}} . Therefore for high optimum output power P out = I out V out {\displaystyle P_{\text{out}}=I_{\text{out}}V_{\text{out}}} , it is desirable to have R SH {\displaystyle R_{\text{SH}}} large and R S {\displaystyle R_{\text{S}}} should be small. === Open-circuit voltage and short-circuit current === When the cell is operated at open circuit, I out {\displaystyle I_{\text{out}}} = 0 and the voltage across the output terminals is defined as the open-circuit voltage. Assuming the shunt resistance is high enough to neglect the final term of the characteristic equation, the open-circuit voltage VOC is: V OC ≈ n k T q ln ⁡ ( I L I 0 + 1 ) . {\displaystyle V_{\text{OC}}\approx {\frac {nkT}{q}}\ln \left({\frac {I_{\text{L}}}{I_{0}}}+1\right).} Similarly, when the cell is operated at short circuit, V out {\displaystyle V_{\text{out}}} = 0 and the current I SC {\displaystyle I_{\text{SC}}} through the terminals is defined as the short-circuit current. It can be shown that for a high-quality solar cell (low RS and I0, and high RSH) the short-circuit current is: I SC ≈ I L . 
{\displaystyle I_{\text{SC}}\approx I_{\text{L}}.} It is not possible to extract any power from the device when operating at either open circuit or short circuit conditions. === Effect of physical size === The values of IL, I0, RS, and RSH are dependent upon the physical size of the solar cell. In comparing otherwise identical cells, a cell with twice the junction area of another will, in principle, have double the IL and I0 because it has twice the area where photocurrent is generated and across which diode current can flow. By the same argument, it will also have half the series resistance RS related to vertical current flow; however, for large-area silicon solar cells, the scaling of the series resistance encountered by lateral current flow is not easily predictable since it will depend crucially on the grid design (it is not clear what "otherwise identical" means in this respect). Depending on the shunt type, the larger cell may also have half the RSH because it has twice the area where shunts may occur; on the other hand, if shunts occur mainly at the perimeter, then RSH will decrease according to the change in circumference, not area. Since the changes in the currents are the dominating ones and are balancing each other, the open-circuit voltage is practically the same; VOC starts to depend on the cell size only if RSH becomes too low. To account for the dominance of the currents, the characteristic equation is frequently written in terms of current density, or current produced per unit cell area: J = J L − J 0 { exp ⁡ [ q ( V out + J r S ) n k T ] − 1 } − V out + J r S r SH {\displaystyle J=J_{\text{L}}-J_{0}\left\{\exp \left[{\frac {q(V_{\text{out}}+Jr_{\text{S}})}{nkT}}\right]-1\right\}-{\frac {V_{\text{out}}+Jr_{\text{S}}}{r_{\text{SH}}}}} where J, current density (ampere/cm2) JL, photogenerated current density (ampere/cm2) J0, reverse saturation current density (ampere/cm2) rS, specific series resistance (Ω·cm2) rSH, specific shunt resistance (Ω·cm2). This formulation has several advantages. One is that since cell characteristics are referenced to a common cross-sectional area they may be compared for cells of different physical dimensions. While this is of limited benefit in a manufacturing setting, where all cells tend to be the same size, it is useful in research and in comparing cells between manufacturers. Another advantage is that the density equation naturally scales the parameter values to similar orders of magnitude, which can make numerical extraction of them simpler and more accurate even with naive solution methods. There are practical limitations of this formulation. For instance, certain parasitic effects grow in importance as cell sizes shrink and can affect the extracted parameter values. Recombination and contamination of the junction tend to be greatest at the perimeter of the cell, so very small cells may exhibit higher values of J0 or lower values of RSH than larger cells that are otherwise identical. In such cases, comparisons between cells must be made cautiously and with these effects in mind. This approach should only be used for comparing solar cells with comparable layout. For instance, a comparison between roughly square solar cells like typical crystalline silicon solar cells and narrow but long solar cells like typical thin film solar cells can lead to wrong conclusions because of the different kinds of current paths and therefore the different influence of, for instance, a distributed series resistance contribution to rS.
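The characteristic equation above is implicit in the output current, but it is straightforward to solve numerically. The Python sketch below does so by simple root finding for assumed, purely illustrative parameter values (IL, I0, n, RS and RSH are not taken from any real cell) and compares the resulting open-circuit voltage with the approximation given above.

```python
# Sketch: solving the single-diode characteristic equation
#   I = I_L - I_0*(exp((V + I*R_S)/(n*V_T)) - 1) - (V + I*R_S)/R_SH
# for the output current at a given terminal voltage. Parameter values are illustrative.
import math
from scipy.optimize import brentq

I_L, I_0 = 3.0, 1e-9         # photocurrent and reverse saturation current, A (assumed)
N_IDEAL, V_T = 1.0, 0.0259   # ideality factor and thermal voltage at 25 degC, V
R_S, R_SH = 0.05, 100.0      # series and shunt resistance, ohm (assumed)

def residual(i_out: float, v_out: float) -> float:
    v_j = v_out + i_out * R_S  # junction voltage
    return I_L - I_0 * (math.exp(v_j / (N_IDEAL * V_T)) - 1.0) - v_j / R_SH - i_out

def current(v_out: float) -> float:
    """Output current at a given terminal voltage, found by root finding."""
    return brentq(residual, -5.0 * I_L, 5.0 * I_L, args=(v_out,))

i_sc = current(0.0)                            # short-circuit current, close to I_L
v_oc = brentq(lambda v: current(v), 0.0, 0.8)  # open-circuit voltage: current is zero
print(f"I_SC ≈ {i_sc:.3f} A, V_OC ≈ {v_oc:.3f} V")
print(f"approximation n*V_T*ln(I_L/I_0 + 1) = {N_IDEAL * V_T * math.log(I_L / I_0 + 1.0):.3f} V")
```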
Macro-architecture of the solar cells could result in different surface areas being placed in any fixed volume, particularly for thin film solar cells and flexible solar cells which may allow for highly convoluted folded structures. If volume is the binding constraint, then efficiency density based on surface area may be of less relevance. === Transparent conducting electrodes === Transparent conducting electrodes are essential components of solar cells. Such an electrode is either a continuous film of indium tin oxide or a conducting wire network, in which wires are charge collectors while voids between wires are transparent for light. An optimum wire-network density is essential for maximum solar cell performance, as a higher wire density blocks light transmittance while a lower wire density leads to high recombination losses because the charge carriers must travel farther. === Cell temperature === Temperature affects the characteristic equation in two ways: directly, via T in the exponential term, and indirectly via its effect on I0 (strictly speaking, temperature affects all of the terms, but these two far more significantly than the others). While increasing T reduces the magnitude of the exponent in the characteristic equation, the value of I0 increases exponentially with T. The net effect is to reduce VOC (the open-circuit voltage) linearly with increasing temperature. The magnitude of this reduction is inversely proportional to VOC; that is, cells with higher values of VOC suffer smaller reductions in voltage with increasing temperature. For most crystalline silicon solar cells the change in VOC with temperature is about −0.50%/°C, though the rate for the highest-efficiency crystalline silicon cells is around −0.35%/°C. By way of comparison, the rate for amorphous silicon solar cells is −0.20 to −0.30%/°C, depending on how the cell is made. The amount of photogenerated current IL increases slightly with increasing temperature because of an increase in the number of thermally generated carriers in the cell. This effect is slight, however: about 0.065%/°C for crystalline silicon cells and 0.09%/°C for amorphous silicon cells. The overall effect of temperature on cell efficiency can be computed using these factors in combination with the characteristic equation. However, since the change in voltage is much stronger than the change in current, the overall effect on efficiency tends to be similar to that on voltage. Most crystalline silicon solar cells decline in efficiency by 0.50%/°C and most amorphous cells decline by 0.15−0.25%/°C. The figure above shows I-V curves that might typically be seen for a crystalline silicon solar cell at various temperatures. === Series resistance === As series resistance increases, the voltage drop between the junction voltage and the terminal voltage becomes greater for the same current. The result is that the current-controlled portion of the I-V curve begins to sag toward the origin, producing a significant decrease in V out {\displaystyle V_{\text{out}}} and a slight reduction in ISC, the short-circuit current. Very high values of RS will also produce a significant reduction in ISC; in these regimes, series resistance dominates and the behavior of the solar cell resembles that of a resistor. These effects are shown for crystalline silicon solar cells in the I-V curves displayed in the figure to the right. Power lost through the series resistance is I out 2 R S {\displaystyle I_{\text{out}}^{2}R_{\text{S}}} .
During illumination when I D {\displaystyle I_{\text{D}}} and I SH {\displaystyle I_{\text{SH}}} are small relative to photocurrent I L {\displaystyle I_{\text{L}}} , power loss also increases quadratically with I L {\displaystyle I_{\text{L}}} . Series resistance losses are therefore most important at high illumination intensities. === Shunt resistance === As shunt resistance decreases, the current diverted through the shunt resistor increases for a given level of junction voltage. The result is that the voltage-controlled portion of the I-V curve begins to sag far from the origin, producing a significant decrease in I out {\displaystyle I_{\text{out}}} and a slight reduction in VOC. Very low values of RSH will produce a significant reduction in VOC. Much as in the case of a high series resistance, a badly shunted solar cell will take on operating characteristics similar to those of a resistor. These effects are shown for crystalline silicon solar cells in the I-V curves displayed in the figure to the right. === Reverse saturation current === If one assumes infinite shunt resistance, the characteristic equation can be solved for VOC: V OC = k T q ln ⁡ ( I SC I 0 + 1 ) . {\displaystyle V_{\text{OC}}={\frac {kT}{q}}\ln \left({\frac {I_{\text{SC}}}{I_{0}}}+1\right).} Thus, an increase in I0 produces a reduction in VOC proportional to the logarithm of the factor by which I0 increases. This explains mathematically the reason for the reduction in VOC that accompanies increases in temperature described above. The effect of reverse saturation current on the I-V curve of a crystalline silicon solar cell is shown in the figure to the right. Physically, reverse saturation current is a measure of the "leakage" of carriers across the p–n junction in reverse bias. This leakage is a result of carrier recombination in the neutral regions on either side of the junction. === Ideality factor === The ideality factor (also called the emissivity factor) is a fitting parameter that describes how closely the diode's behavior matches that predicted by theory, which assumes the p–n junction of the diode is an infinite plane and no recombination occurs within the space-charge region. A perfect match to theory is indicated when n = 1. When recombination in the space-charge region dominates other recombination, however, n = 2. The effect of changing ideality factor independently of all other parameters is shown for a crystalline silicon solar cell in the I-V curves displayed in the figure to the right. Most solar cells, which are quite large compared to conventional diodes, well approximate an infinite plane and will usually exhibit near-ideal behavior under standard test conditions (n ≈ 1). Under certain operating conditions, however, device operation may be dominated by recombination in the space-charge region. This is characterized by a significant increase in I0 as well as an increase in ideality factor to n ≈ 2. The latter tends to increase solar cell output voltage while the former acts to erode it. The net effect, therefore, is a combination of the increase in voltage shown for increasing n in the figure to the right and the decrease in voltage shown for increasing I0 in the figure above. Typically, I0 is the more significant factor and the result is a reduction in voltage. Sometimes, the ideality factor is observed to be greater than 2, which is generally attributed to the presence of a Schottky diode or a heterojunction in the solar cell.
The presence of a heterojunction offset reduces the collection efficiency of the solar cell and may contribute to low fill-factor. === Other models === While the above model is most common, other models have been proposed, like the d1MxP discrete model. == See also == Electromotive force § Solar cell == References == == External links == PV Lighthouse Equivalent Circuit Calculator Chemistry Explained — Solar Cells from chemistryexplained.com
Wikipedia/Theory_of_solar_cells
Counter-electromotive force (counter EMF, CEMF, back EMF) is the electromotive force (EMF) manifesting as a voltage that opposes the change in current which induced it. CEMF is the EMF caused by electromagnetic induction. == Details == For example, the voltage appearing across an inductor or coil is due to a change in current which causes a change in the magnetic field within the coil, and therefore a self-induced voltage. The polarity of the voltage at every moment opposes that of the change in applied voltage, so as to keep the current constant. The term back electromotive force is also commonly used to refer to the voltage that occurs in electric motors where there is relative motion between the armature and the magnetic field produced by the motor's field coils or permanent magnet field, the machine thus also acting as a generator while running as a motor. This effect is not due to the motor's inductance, which generates a voltage in opposition to a changing current via Faraday's law, but is a separate phenomenon. That is, while motor back-EMF is also a consequence of Faraday's law, it occurs even when the motor current is not changing, and arises from the geometry of an armature spinning in a magnetic field. This voltage is in series with and opposes the original applied voltage and is called "back-electromotive force" (by Lenz's law). With a lower overall voltage across the motor's internal resistance as the motor turns faster, the current flowing into the motor decreases. One practical application of this phenomenon is to indirectly measure motor speed and position, as the back-EMF is proportional to the rotational speed of the armature. In motor control and robotics, back-EMF often refers specifically to using the voltage generated by a spinning motor to infer its rotational speed, so that the motor can be controlled more precisely. To observe the effect of back-EMF of a motor, one can perform this simple exercise: with an incandescent light on, cause a large motor such as a drill press, saw, air conditioner compressor, or vacuum cleaner to start. The light may dim briefly as the motor starts. When the armature is not turning (called locked rotor) there is no back-EMF and the motor's current draw is quite high. If the motor's starting current is high enough, it will pull the line voltage down enough to cause noticeable dimming of the light. == References == == External links == Counter-electromotive-force in access control applications
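As a rough illustration of inferring motor speed from back-EMF, the Python sketch below uses the simple brushed DC motor relation Vsupply = I·Rarmature + ke·ω. This model and all of its numbers are assumptions chosen for illustration, not values from the article.

```python
# Sketch: estimating the speed of a brushed DC motor from its back-EMF.
# Model (an assumption, not from the article): V_supply = I * R_armature + k_e * omega.
K_E = 0.05        # back-EMF constant, V*s/rad (assumed)
R_ARMATURE = 1.2  # armature resistance, ohm (assumed)

def speed_from_back_emf(v_supply: float, current: float) -> float:
    """Return shaft speed in rad/s inferred from the supply voltage and measured current."""
    back_emf = v_supply - current * R_ARMATURE
    return back_emf / K_E

omega = speed_from_back_emf(v_supply=12.0, current=1.5)
rpm = omega * 60.0 / (2.0 * 3.141592653589793)
print(f"back-EMF ≈ {12.0 - 1.5 * R_ARMATURE:.1f} V, speed ≈ {omega:.0f} rad/s ({rpm:.0f} rpm)")
# With a locked rotor there is no back-EMF, so here the current would rise to V/R = 10 A.
```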
Wikipedia/Counter-electromotive_force
The minimum total potential energy principle is a fundamental concept used in physics and engineering. It dictates that at low temperatures a structure or body shall deform or displace to a position that (locally) minimizes the total potential energy, with the lost potential energy being converted into kinetic energy (specifically heat). == Some examples == A free proton and free electron will tend to combine to form the lowest energy state (the ground state) of a hydrogen atom, the most stable configuration. This is because that state's energy is 13.6 electron volts (eV) lower than when the two particles are separated by an infinite distance. The dissipation in this system takes the form of spontaneous emission of electromagnetic radiation, which increases the entropy of the surroundings. A rolling ball will end up stationary at the bottom of a hill, the point of minimum potential energy. The reason is that as it rolls downward under the influence of gravity, friction produced by its motion transfers energy in the form of heat to the surroundings with an attendant increase in entropy. A protein folds into the state of lowest potential energy. In this case, the dissipation takes the form of vibration of atoms within or adjacent to the protein. == Structural mechanics == The total potential energy, Π {\displaystyle {\boldsymbol {\Pi }}} , is the sum of the elastic strain energy, U, stored in the deformed body and the potential energy, V, associated with the applied forces: Π = U + V {\displaystyle {\boldsymbol {\Pi }}=\mathbf {U} +\mathbf {V} } (1) The total potential energy is stationary when an infinitesimal variation from such a position involves no change in energy: δ Π = δ ( U + V ) = 0 {\displaystyle \delta {\boldsymbol {\Pi }}=\delta (\mathbf {U} +\mathbf {V} )=0} (2) The principle of minimum total potential energy may be derived as a special case of the virtual work principle for elastic systems subject to conservative forces. The equality between external and internal virtual work (due to virtual displacements) is: ∫ S t δ u T T d S + ∫ V δ u T f d V = ∫ V δ ϵ T σ d V {\displaystyle \int _{S_{t}}\delta \mathbf {u} ^{T}\mathbf {T} dS+\int _{V}\delta \mathbf {u} ^{T}\mathbf {f} dV=\int _{V}\delta {\boldsymbol {\epsilon }}^{T}{\boldsymbol {\sigma }}dV} (3) where u {\displaystyle \mathbf {u} } = vector of displacements T {\displaystyle \mathbf {T} } = vector of distributed forces acting on the part S t {\displaystyle S_{t}} of the surface f {\displaystyle \mathbf {f} } = vector of body forces σ {\displaystyle {\boldsymbol {\sigma }}} = vector of stresses ϵ {\displaystyle {\boldsymbol {\epsilon }}} = vector of strains In the special case of elastic bodies, the right-hand-side of (3) can be taken to be the change, δ U {\displaystyle \delta \mathbf {U} } , of elastic strain energy U due to infinitesimal variations of real displacements. In addition, when the external forces are conservative forces, the left-hand-side of (3) can be seen as the change in the potential energy function V of the forces. The function V is defined as: V = − ∫ S t u T T d S − ∫ V u T f d V {\displaystyle \mathbf {V} =-\int _{S_{t}}\mathbf {u} ^{T}\mathbf {T} dS-\int _{V}\mathbf {u} ^{T}\mathbf {f} dV} where the minus sign implies a loss of potential energy as the force is displaced in its direction. With these two subsidiary conditions, equation (3) becomes: − δ V = δ U {\displaystyle -\delta \ \mathbf {V} =\delta \ \mathbf {U} } This leads to (2) as desired. The variational form of (2) is often used as the basis for developing the finite element method in structural mechanics. == References ==
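A minimal numerical illustration of the principle, not taken from the article: for a linear spring of stiffness k loaded by a constant force F, the total potential energy Π(u) = ½ k u² − F u is smallest at the familiar equilibrium u = F/k. The Python sketch below recovers that minimum by brute force; the stiffness and force values are arbitrary.

```python
# Sketch: minimum total potential energy for a linear spring under a constant force.
# Total potential energy: Pi(u) = 0.5*k*u**2 - F*u (strain energy plus load potential).
# The stiffness and force values are illustrative assumptions.
K_SPRING = 2000.0  # spring stiffness, N/m
FORCE = 50.0       # applied force, N

def total_potential_energy(u: float) -> float:
    return 0.5 * K_SPRING * u ** 2 - FORCE * u

# Crude one-dimensional minimization by scanning candidate displacements (0 to 100 mm).
candidates = [i * 1e-5 for i in range(10001)]
u_min = min(candidates, key=total_potential_energy)

print(f"numerical minimum at u ≈ {u_min * 1000:.2f} mm")
print(f"analytical equilibrium u = F/k = {FORCE / K_SPRING * 1000:.2f} mm")
```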
Wikipedia/Minimum_total_potential_energy_principle
In electrical engineering, a transformer is a passive component that transfers electrical energy from one electrical circuit to another circuit, or multiple circuits. A varying current in any coil of the transformer produces a varying magnetic flux in the transformer's core, which induces a varying electromotive force (EMF) across any other coils wound around the same core. Electrical energy can be transferred between separate coils without a metallic (conductive) connection between the two circuits. Faraday's law of induction, discovered in 1831, describes the induced voltage effect in any coil due to a changing magnetic flux encircled by the coil. Transformers are used to change AC voltage levels, such transformers being termed step-up or step-down type to increase or decrease voltage level, respectively. Transformers can also be used to provide galvanic isolation between circuits as well as to couple stages of signal-processing circuits. Since the invention of the first constant-potential transformer in 1885, transformers have become essential for the transmission, distribution, and utilization of alternating current electric power. A wide range of transformer designs is encountered in electronic and electric power applications. Transformers range in size from RF transformers less than a cubic centimeter in volume, to units weighing hundreds of tons used to interconnect the power grid. == Principles == Ideal transformer equations By Faraday's law of induction: V P = − N P dΦ/dt {\displaystyle V_{\text{P}}=-N_{\text{P}}{\frac {d\Phi }{dt}}} (eq. 1) V S = − N S dΦ/dt {\displaystyle V_{\text{S}}=-N_{\text{S}}{\frac {d\Phi }{dt}}} (eq. 2) where V {\displaystyle V} is the instantaneous voltage, N {\displaystyle N} is the number of turns in a winding, dΦ/dt is the derivative of the magnetic flux Φ through one turn of the winding over time (t), and subscripts P and S denote primary and secondary. Combining the ratio of eq. 1 & eq. 2: V P / V S = N P / N S = a {\displaystyle {\frac {V_{\text{P}}}{V_{\text{S}}}}={\frac {N_{\text{P}}}{N_{\text{S}}}}=a} (eq. 3) where for a step-up transformer a < 1 and for a step-down transformer a > 1. By the law of conservation of energy, apparent, real and reactive power are each conserved in the input and output: S = I P V P = I S V S {\displaystyle S=I_{\text{P}}V_{\text{P}}=I_{\text{S}}V_{\text{S}}} (eq. 4) where S {\displaystyle S} is apparent power and I {\displaystyle I} is current. Combining eq. 3 & eq. 4 gives the ideal transformer identity: V P / V S = I S / I P = N P / N S = √(L P / L S) = a {\displaystyle {\frac {V_{\text{P}}}{V_{\text{S}}}}={\frac {I_{\text{S}}}{I_{\text{P}}}}={\frac {N_{\text{P}}}{N_{\text{S}}}}={\sqrt {\frac {L_{\text{P}}}{L_{\text{S}}}}}=a} (eq. 5) where L P {\displaystyle L_{\text{P}}} is the primary winding self-inductance and L S {\displaystyle L_{\text{S}}} is the secondary winding self-inductance. By Ohm's law and the ideal transformer identity: Z L = V S / I S {\displaystyle Z_{\text{L}}={\frac {V_{\text{S}}}{I_{\text{S}}}}} (eq. 6) Z L ′ = V P / I P = a 2 Z L {\displaystyle Z'_{\text{L}}={\frac {V_{\text{P}}}{I_{\text{P}}}}=a^{2}Z_{\text{L}}} (eq. 7) where Z L {\displaystyle Z_{\text{L}}} is the load impedance of the secondary circuit & Z L ′ {\displaystyle Z'_{\text{L}}} is the apparent load or driving point impedance of the primary circuit, the superscript ′ {\displaystyle '} denoting a quantity referred to the primary. === Ideal transformer === An ideal transformer is linear, lossless and perfectly coupled. Perfect coupling implies infinitely high core magnetic permeability and winding inductance and zero net magnetomotive force (i.e. iPnP − iSnS = 0). A varying current in the transformer's primary winding creates a varying magnetic flux in the transformer core, which is also encircled by the secondary winding. This varying flux at the secondary winding induces a varying electromotive force or voltage in the secondary winding. This electromagnetic induction phenomenon is the basis of transformer action and, in accordance with Lenz's law, the secondary current so produced creates a flux equal and opposite to that produced by the primary winding. The windings are wound around a core of infinitely high magnetic permeability so that all of the magnetic flux passes through both the primary and secondary windings.
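The ideal-transformer relations above can be exercised numerically. A minimal Python sketch follows, with the turn counts, supply voltage and load chosen purely for illustration.

```python
# Sketch: ideal transformer relations
#   V_P / V_S = N_P / N_S = a,  I_S / I_P = a,  Z'_L = a**2 * Z_L
# All numbers are illustrative assumptions.
N_P, N_S = 2000, 100          # primary and secondary turns
V_P = 230.0                   # primary (supply) voltage, V rms
Z_LOAD = 5.0                  # load impedance on the secondary, ohm

a = N_P / N_S                 # turns ratio (a > 1: step-down)
V_S = V_P / a                 # secondary voltage
I_S = V_S / Z_LOAD            # secondary current into the load
I_P = I_S / a                 # primary current (apparent power conserved)
Z_REFERRED = a ** 2 * Z_LOAD  # load impedance as seen from the primary

print(f"a = {a:.0f}, V_S = {V_S:.1f} V, I_S = {I_S:.2f} A, I_P = {I_P:.3f} A")
print(f"referred load impedance Z'_L = {Z_REFERRED:.0f} ohm")
print(f"power check: {V_P * I_P:.1f} VA primary vs {V_S * I_S:.1f} VA secondary")
```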
With a voltage source connected to the primary winding and a load connected to the secondary winding, the transformer currents flow in the indicated directions and the core magnetomotive force cancels to zero. According to Faraday's law, since the same magnetic flux passes through both the primary and secondary windings in an ideal transformer, a voltage is induced in each winding proportional to its number of turns. The transformer winding voltage ratio is equal to the winding turns ratio. An ideal transformer is a reasonable approximation for a typical commercial transformer, with voltage ratio and winding turns ratio both being inversely proportional to the corresponding current ratio. The load impedance referred to the primary circuit is equal to the turns ratio squared times the secondary circuit load impedance. === Real transformer === ==== Deviations from ideal transformer ==== The ideal transformer model neglects many basic linear aspects of real transformers, including unavoidable losses and inefficiencies. (a) Core losses, collectively called magnetizing current losses, consisting of Hysteresis losses due to nonlinear magnetic effects in the transformer core, and Eddy current losses due to joule heating in the core that are proportional to the square of the transformer's applied voltage. (b) Unlike the ideal model, the windings in a real transformer have non-zero resistances and inductances associated with: Joule losses due to resistance in the primary and secondary windings Leakage flux that escapes from the core and passes through one winding only resulting in primary and secondary reactive impedance. (c) similar to an inductor, parasitic capacitance and self-resonance phenomenon due to the electric field distribution. Three kinds of parasitic capacitance are usually considered and the closed-loop equations are provided Capacitance between adjacent turns in any one layer; Capacitance between adjacent layers; Capacitance between the core and the layer(s) adjacent to the core; Inclusion of capacitance into the transformer model is complicated, and is rarely attempted; the 'real' transformer model's equivalent circuit shown below does not include parasitic capacitance. However, the capacitance effect can be measured by comparing open-circuit inductance, i.e. the inductance of a primary winding when the secondary circuit is open, to a short-circuit inductance when the secondary winding is shorted. ==== Leakage flux ==== The ideal transformer model assumes that all flux generated by the primary winding links all the turns of every winding, including itself. In practice, some flux traverses paths that take it outside the windings. Such flux is termed leakage flux, and results in leakage inductance in series with the mutually coupled transformer windings. Leakage flux results in energy being alternately stored in and discharged from the magnetic fields with each cycle of the power supply. It is not directly a power loss, but results in inferior voltage regulation, causing the secondary voltage not to be directly proportional to the primary voltage, particularly under heavy load. Transformers are therefore normally designed to have very low leakage inductance. In some applications increased leakage is desired, and long magnetic paths, air gaps, or magnetic bypass shunts may deliberately be introduced in a transformer design to limit the short-circuit current it will supply. 
Leaky transformers may be used to supply loads that exhibit negative resistance, such as electric arcs, mercury- and sodium- vapor lamps and neon signs or for safely handling loads that become periodically short-circuited such as electric arc welders.: 485  Air gaps are also used to keep a transformer from saturating, especially audio-frequency transformers in circuits that have a DC component flowing in the windings. A saturable reactor exploits saturation of the core to control alternating current. Knowledge of leakage inductance is also useful when transformers are operated in parallel. It can be shown that if the percent impedance and associated winding leakage reactance-to-resistance (X/R) ratio of two transformers were the same, the transformers would share the load power in proportion to their respective ratings. However, the impedance tolerances of commercial transformers are significant. Also, the impedance and X/R ratio of different capacity transformers tends to vary. ==== Equivalent circuit ==== Referring to the diagram, a practical transformer's physical behavior may be represented by an equivalent circuit model, which can incorporate an ideal transformer. Winding joule losses and leakage reactance are represented by the following series loop impedances of the model: Primary winding: RP, XP Secondary winding: RS, XS. In normal course of circuit equivalence transformation, RS and XS are in practice usually referred to the primary side by multiplying these impedances by the turns ratio squared, (NP/NS) 2 = a2. Core loss and reactance is represented by the following shunt leg impedances of the model: Core or iron losses: RC Magnetizing reactance: XM. RC and XM are collectively termed the magnetizing branch of the model. Core losses are caused mostly by hysteresis and eddy current effects in the core and are proportional to the square of the core flux for operation at a given frequency.: 142–143  The finite permeability core requires a magnetizing current IM to maintain mutual flux in the core. Magnetizing current is in phase with the flux, the relationship between the two being non-linear due to saturation effects. However, all impedances of the equivalent circuit shown are by definition linear and such non-linearity effects are not typically reflected in transformer equivalent circuits.: 142  With sinusoidal supply, core flux lags the induced EMF by 90°. With open-circuited secondary winding, magnetizing branch current I0 equals transformer no-load current. The resulting model, though sometimes termed 'exact' equivalent circuit based on linearity assumptions, retains a number of approximations. Analysis may be simplified by assuming that magnetizing branch impedance is relatively high and relocating the branch to the left of the primary impedances. This introduces error but allows combination of primary and referred secondary resistances and reactance by simple summation as two series impedances. Transformer equivalent circuit impedance and transformer ratio parameters can be derived from the following tests: open-circuit test, short-circuit test, winding resistance test, and transformer ratio test. 
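The open-circuit and short-circuit tests mentioned above are typically reduced to equivalent-circuit parameters with a few lines of arithmetic. The Python sketch below follows the usual textbook procedure; the "measured" readings are invented for illustration.

```python
# Sketch: reducing the open-circuit and short-circuit tests to equivalent-circuit
# parameters. The procedure is the usual textbook one; the readings are invented.
import math

# Open-circuit test (rated voltage applied, secondary open): magnetizing branch.
V_OC, I_OC, P_OC = 230.0, 0.45, 35.0       # volts, amperes, watts (assumed readings)
R_C = V_OC ** 2 / P_OC                     # core-loss resistance
Q_OC = math.sqrt((V_OC * I_OC) ** 2 - P_OC ** 2)
X_M = V_OC ** 2 / Q_OC                     # magnetizing reactance

# Short-circuit test (reduced voltage, secondary shorted): combined series impedance
# (primary plus referred secondary, which this test alone cannot separate).
V_SC, I_SC, P_SC = 12.0, 4.3, 28.0         # assumed readings
Z_EQ = V_SC / I_SC
R_EQ = P_SC / I_SC ** 2                    # R_P + a**2 * R_S
X_EQ = math.sqrt(Z_EQ ** 2 - R_EQ ** 2)    # X_P + a**2 * X_S

print(f"R_C ≈ {R_C:.0f} ohm, X_M ≈ {X_M:.0f} ohm")
print(f"R_eq ≈ {R_EQ:.2f} ohm, X_eq ≈ {X_EQ:.2f} ohm")
```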
=== Transformer EMF equation === If the flux in the core is purely sinusoidal, the relationship for either winding between its rms voltage Erms of the winding, and the supply frequency f, number of turns N, core cross-sectional area A in m2 and peak magnetic flux density Bpeak in Wb/m2 or T (tesla) is given by the universal EMF equation: E rms = 2 π f N A B peak 2 ≈ 4.44 f N A B peak {\displaystyle E_{\text{rms}}={\frac {2\pi fNAB_{\text{peak}}}{\sqrt {2}}}\approx 4.44fNAB_{\text{peak}}} === Polarity === A dot convention is often used in transformer circuit diagrams, nameplates or terminal markings to define the relative polarity of transformer windings. Positively increasing instantaneous current entering the primary winding's 'dot' end induces positive polarity voltage exiting the secondary winding's 'dot' end. Three-phase transformers used in electric power systems will have a nameplate that indicate the phase relationships between their terminals. This may be in the form of a phasor diagram, or using an alpha-numeric code to show the type of internal connection (wye or delta) for each winding. === Effect of frequency === The EMF of a transformer at a given flux increases with frequency. By operating at higher frequencies, transformers can be physically more compact because a given core is able to transfer more power without reaching saturation and fewer turns are needed to achieve the same impedance. However, properties such as core loss and conductor skin effect also increase with frequency. Aircraft and military equipment employ 400 Hz power supplies which reduce core and winding weight. Conversely, frequencies used for some railway electrification systems were much lower (e.g. 16.7 Hz and 25 Hz) than normal utility frequencies (50–60 Hz) for historical reasons concerned mainly with the limitations of early electric traction motors. Consequently, the transformers used to step-down the high overhead line voltages were much larger and heavier for the same power rating than those required for the higher frequencies. Operation of a transformer at its designed voltage but at a higher frequency than intended will lead to reduced magnetizing current. At a lower frequency, the magnetizing current will increase. Operation of a large transformer at other than its design frequency may require assessment of voltages, losses, and cooling to establish if safe operation is practical. Transformers may require protective relays to protect the transformer from overvoltage at higher than rated frequency. One example is in traction transformers used for electric multiple unit and high-speed train service operating across regions with different electrical standards. The converter equipment and traction transformers have to accommodate different input frequencies and voltage (ranging from as high as 50 Hz down to 16.7 Hz and rated up to 25 kV). At much higher frequencies the transformer core size required drops dramatically: a physically small transformer can handle power levels that would require a massive iron core at mains frequency. The development of switching power semiconductor devices made switch-mode power supplies viable, to generate a high frequency, then change the voltage level with a small transformer. Transformers for higher frequency applications such as SMPS typically use core materials with much lower hysteresis and eddy-current losses than those for 50/60 Hz. Primary examples are iron-powder and ferrite cores. 
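The universal EMF equation lends itself to a quick calculation of the rms EMF per winding or, rearranged, of the turns needed for a given voltage. In the Python sketch below the core area and peak flux density are illustrative assumptions; the 400 Hz case echoes the reduction in turns at higher frequency noted under Effect of frequency above.

```python
# Sketch: the universal transformer EMF equation  E_rms ≈ 4.44 * f * N * A * B_peak.
# Core area, peak flux density and voltages are illustrative assumptions.
import math

AREA = 25e-4   # core cross-section, m^2 (25 cm^2)
B_PEAK = 1.5   # peak flux density, tesla

def emf_rms(freq_hz: float, turns: float, core_area_m2: float, b_peak_t: float) -> float:
    return 2 * math.pi * freq_hz * turns * core_area_m2 * b_peak_t / math.sqrt(2)

def turns_required(e_rms: float, freq_hz: float, core_area_m2: float, b_peak_t: float) -> float:
    return e_rms * math.sqrt(2) / (2 * math.pi * freq_hz * core_area_m2 * b_peak_t)

print(f"E_rms for 250 turns at 50 Hz: {emf_rms(50, 250, AREA, B_PEAK):.0f} V")
print(f"turns needed for 230 V at 50 Hz:  {turns_required(230, 50, AREA, B_PEAK):.0f}")
print(f"turns needed for 230 V at 400 Hz: {turns_required(230, 400, AREA, B_PEAK):.0f}")
```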
The lower frequency-dependent losses of such iron-powder and ferrite cores often come at the expense of saturation flux density. For instance, ferrite saturation occurs at a substantially lower flux density than laminated iron. Large power transformers are vulnerable to insulation failure due to transient voltages with high-frequency components, such as caused in switching or by lightning. === Energy losses === Transformer energy losses are dominated by winding and core losses. Transformers' efficiency tends to improve with increasing transformer capacity. The efficiency of typical distribution transformers is between about 98 and 99 percent. As transformer losses vary with load, it is often useful to tabulate no-load loss, full-load loss, half-load loss, and so on. Hysteresis and eddy current losses are constant at all load levels and dominate at no load, while winding loss increases as load increases. The no-load loss can be significant, so that even an idle transformer constitutes a drain on the electrical supply. Designing energy efficient transformers for lower loss requires a larger core, good-quality silicon steel, or even amorphous steel for the core and thicker wire, increasing initial cost. The choice of construction represents a trade-off between initial cost and operating cost. Transformer losses arise from: Winding joule losses Current flowing through a winding's conductor causes joule heating due to the resistance of the wire. As frequency increases, skin effect and proximity effect cause the winding's resistance and, hence, losses to increase. Core losses Hysteresis losses Each time the magnetic field is reversed, a small amount of energy is lost due to hysteresis within the core, caused by motion of the magnetic domains within the steel. According to Steinmetz's formula, the heat energy due to hysteresis is given by W h ≈ η β max 1.6 {\displaystyle W_{\text{h}}\approx \eta \beta _{\text{max}}^{1.6}} and hysteresis loss is thus given by P h ≈ W h f ≈ η f β max 1.6 {\displaystyle P_{\text{h}}\approx {W}_{\text{h}}f\approx \eta {f}\beta _{\text{max}}^{1.6}} where f is the frequency, η is the hysteresis coefficient and βmax is the maximum flux density, the empirical exponent of which varies from about 1.4 to 1.8 but is often given as 1.6 for iron. For more detailed analysis, see Magnetic core and Steinmetz's equation. Eddy current losses Eddy currents are induced in the conductive metal transformer core by the changing magnetic field, and this current flowing through the resistance of the iron dissipates energy as heat in the core. The eddy current loss is a complex function of the square of supply frequency and inverse square of the material thickness. Eddy current losses can be reduced by making the core of a stack of laminations (thin plates) electrically insulated from each other, rather than a solid block; all transformers operating at low frequencies use laminated or similar cores. Magnetostriction related transformer hum Magnetic flux in a ferromagnetic material, such as the core, causes it to physically expand and contract slightly with each cycle of the magnetic field, an effect known as magnetostriction, the frictional energy of which produces an audible noise known as mains hum or "transformer hum". This transformer hum is especially objectionable in transformers supplied at power frequencies and in high-frequency flyback transformers associated with television CRTs.
Stray losses: Leakage inductance is by itself largely lossless, since energy supplied to its magnetic fields is returned to the supply with the next half-cycle. However, any leakage flux that intercepts nearby conductive materials such as the transformer's support structure will give rise to eddy currents and be converted to heat. Radiative losses: There are also radiative losses due to the oscillating magnetic field, but these are usually small. Mechanical vibration and audible noise transmission: In addition to magnetostriction, the alternating magnetic field causes fluctuating forces between the primary and secondary windings. This energy incites vibration transmission in interconnected metalwork, thus amplifying audible transformer hum. == Construction == === Cores === Closed-core transformers are constructed in 'core form' or 'shell form'. When windings surround the core, the transformer is core form; when windings are surrounded by the core, the transformer is shell form. Shell form design may be more prevalent than core form design for distribution transformer applications due to the relative ease in stacking the core around winding coils. Core form design tends to, as a general rule, be more economical, and therefore more prevalent, than shell form design for high voltage power transformer applications at the lower end of their voltage and power rating ranges (less than or equal to, nominally, 230 kV or 75 MVA). At higher voltage and power ratings, shell form transformers tend to be more prevalent. Shell form design tends to be preferred for extra-high voltage and higher MVA applications because, though more labor-intensive to manufacture, shell form transformers are characterized as having inherently better kVA-to-weight ratio, better short-circuit strength characteristics and higher immunity to transit damage. ==== Laminated steel cores ==== Transformers for use at power or audio frequencies typically have cores made of high permeability silicon steel. The steel has a permeability many times that of free space and the core thus serves to greatly reduce the magnetizing current and confine the flux to a path which closely couples the windings. Early transformer developers soon realized that cores constructed from solid iron resulted in prohibitive eddy current losses, and their designs mitigated this effect with cores consisting of bundles of insulated iron wires. Later designs constructed the core by stacking layers of thin steel laminations, a principle that has remained in use. Each lamination is insulated from its neighbors by a thin non-conducting layer of insulation. The transformer universal EMF equation can be used to calculate the core cross-sectional area for a preferred level of magnetic flux. The effect of laminations is to confine eddy currents to highly elliptical paths that enclose little flux, and so reduce their magnitude. Thinner laminations reduce losses, but are more laborious and expensive to construct. Thin laminations are generally used on high-frequency transformers, with some types of very thin steel lamination able to operate up to 10 kHz. One common design of laminated core is made from interleaved stacks of E-shaped steel sheets capped with I-shaped pieces, leading to its name of E-I transformer. Such a design tends to exhibit more losses, but is very economical to manufacture. The cut-core or C-core type is made by winding a steel strip around a rectangular form and then bonding the layers together.
It is then cut in two, forming two C shapes, and the core is assembled by binding the two C halves together with a steel strap. They have the advantage that the flux is always oriented parallel to the metal grains, reducing reluctance. A steel core's remanence means that it retains a static magnetic field when power is removed. When power is then reapplied, the residual field will cause a high inrush current until the effect of the remaining magnetism is reduced, usually after a few cycles of the applied AC waveform. Overcurrent protection devices such as fuses must be selected to allow this harmless inrush to pass. On transformers connected to long, overhead power transmission lines, induced currents due to geomagnetic disturbances during solar storms can cause saturation of the core and operation of transformer protection devices. Distribution transformers can achieve low no-load losses by using cores made with low-loss high-permeability silicon steel or amorphous (non-crystalline) metal alloy. The higher initial cost of the core material is offset over the life of the transformer by its lower losses at light load. ==== Solid cores ==== Powdered iron cores are used in circuits such as switch-mode power supplies that operate above mains frequencies and up to a few tens of kilohertz. These materials combine high magnetic permeability with high bulk electrical resistivity. For frequencies extending beyond the VHF band, cores made from non-conductive magnetic ceramic materials called ferrites are common. Some radio-frequency transformers also have movable cores (sometimes called 'slugs') which allow adjustment of the coupling coefficient (and bandwidth) of tuned radio-frequency circuits. ==== Toroidal cores ==== Toroidal transformers are built around a ring-shaped core, which, depending on operating frequency, is made from a long strip of silicon steel or permalloy wound into a coil, powdered iron, or ferrite. A strip construction ensures that the grain boundaries are optimally aligned, improving the transformer's efficiency by reducing the core's reluctance. The closed ring shape eliminates air gaps inherent in the construction of an E-I core. The cross-section of the ring is usually square or rectangular, but more expensive cores with circular cross-sections are also available. The primary and secondary coils are often wound concentrically to cover the entire surface of the core. This minimizes the length of wire needed and provides screening to minimize the core's magnetic field from generating electromagnetic interference. Toroidal transformers are more efficient than the cheaper laminated E-I types for a similar power level. Other advantages compared to E-I types include smaller size (about half), lower weight (about half), less mechanical hum (making them superior in audio amplifiers), lower exterior magnetic field (about one tenth), low off-load losses (making them more efficient in standby circuits), single-bolt mounting, and greater choice of shapes. The main disadvantages are higher cost and limited power capacity (see Classification parameters below). Because of the lack of a residual gap in the magnetic path, toroidal transformers also tend to exhibit higher inrush current compared to laminated E-I types. Ferrite toroidal cores are used at higher frequencies, typically from a few tens of kilohertz to hundreds of megahertz, to reduce losses, physical size, and weight of inductive components.
A drawback of toroidal transformer construction is the higher labor cost of winding. This is because it is necessary to pass the entire length of a coil winding through the core aperture each time a single turn is added to the coil. As a consequence, toroidal transformers rated more than a few kVA are uncommon. Relatively few toroids are offered with power ratings above 10 kVA, and practically none above 25 kVA. Small distribution transformers may achieve some of the benefits of a toroidal core by splitting it and forcing it open, then inserting a bobbin containing primary and secondary windings. ==== Air cores ==== A transformer can be produced by placing the windings near each other, an arrangement termed an "air-core" transformer. An air-core transformer eliminates loss due to hysteresis in the core material. The magnetizing inductance is drastically reduced by the lack of a magnetic core, resulting in large magnetizing currents and losses if used at low frequencies. Air-core transformers are unsuitable for use in power distribution, but are frequently employed in radio-frequency applications. Air cores are also used for resonant transformers such as tesla coils, where they can achieve reasonably low loss despite the low magnetizing inductance. === Windings === The electrical conductor used for the windings depends upon the application, but in all cases the individual turns must be electrically insulated from each other to ensure that the current travels throughout every turn. For small transformers, in which currents are low and the potential difference between adjacent turns is small, the coils are often wound from enameled magnet wire. Larger power transformers may be wound with copper rectangular strip conductors insulated by oil-impregnated paper and blocks of pressboard. High-frequency transformers operating in the tens to hundreds of kilohertz often have windings made of braided Litz wire to minimize the skin-effect and proximity effect losses. Large power transformers use multiple-stranded conductors as well, since even at low power frequencies non-uniform distribution of current would otherwise exist in high-current windings. Each strand is individually insulated, and the strands are arranged so that at certain points in the winding, or throughout the whole winding, each portion occupies different relative positions in the complete conductor. The transposition equalizes the current flowing in each strand of the conductor, and reduces eddy current losses in the winding itself. The stranded conductor is also more flexible than a solid conductor of similar size, aiding manufacture. The windings of signal transformers minimize leakage inductance and stray capacitance to improve high-frequency response. Coils are split into sections, and those sections interleaved between the sections of the other winding. Power-frequency transformers may have taps at intermediate points on the winding, usually on the higher voltage winding side, for voltage adjustment. Taps may be manually reconnected, or a manual or automatic switch may be provided for changing taps. Automatic on-load tap changers are used in electric power transmission or distribution, on equipment such as arc furnace transformers, or for automatic voltage regulators for sensitive loads. Audio-frequency transformers, used for the distribution of audio to public address loudspeakers, have taps to allow adjustment of impedance to each speaker. 
A center-tapped transformer is often used in the output stage of an audio power amplifier in a push-pull circuit. Modulation transformers in AM transmitters are very similar. === Cooling === It is a rule of thumb that the life expectancy of electrical insulation is halved for about every 7 °C to 10 °C increase in operating temperature (an instance of the application of the Arrhenius equation). Small dry-type and liquid-immersed transformers are often self-cooled by natural convection and radiation heat dissipation. As power ratings increase, transformers are often cooled by forced-air cooling, forced-oil cooling, water-cooling, or combinations of these. Large transformers are filled with transformer oil that both cools and insulates the windings. Transformer oil is often a highly refined mineral oil that cools the windings and insulation by circulating within the transformer tank. The mineral oil and paper insulation system has been extensively studied and used for more than 100 years. It is estimated that 50% of power transformers will survive 50 years of use, that the average age of failure of power transformers is about 10 to 15 years, and that about 30% of power transformer failures are due to insulation and overloading failures. Prolonged operation at elevated temperature degrades insulating properties of winding insulation and dielectric coolant, which not only shortens transformer life but can ultimately lead to catastrophic transformer failure. With a great body of empirical study as a guide, transformer oil testing including dissolved gas analysis provides valuable maintenance information. Building regulations in many jurisdictions require indoor liquid-filled transformers to either use dielectric fluids that are less flammable than oil, or be installed in fire-resistant rooms. Air-cooled dry transformers can be more economical where they eliminate the cost of a fire-resistant transformer room. The tank of liquid-filled transformers often has radiators through which the liquid coolant circulates by natural convection or fins. Some large transformers employ electric fans for forced-air cooling, pumps for forced-liquid cooling, or have heat exchangers for water-cooling. An oil-immersed transformer may be equipped with a Buchholz relay, which, depending on severity of gas accumulation due to internal arcing, is used to either trigger an alarm or de-energize the transformer. Oil-immersed transformer installations usually include fire protection measures such as walls, oil containment, and fire-suppression sprinkler systems. Polychlorinated biphenyls (PCBs) have properties that once favored their use as a dielectric coolant, though concerns over their environmental persistence led to a widespread ban on their use. Today, non-toxic, stable silicone-based oils, or fluorinated hydrocarbons may be used where the expense of a fire-resistant liquid offsets additional building cost for a transformer vault. However, the long life span of transformers can mean that the potential for exposure can be high long after banning. Some transformers are gas-insulated. Their windings are enclosed in sealed, pressurized tanks and often cooled by nitrogen or sulfur hexafluoride gas. Experimental power transformers in the 500–1,000 kVA range have been built with liquid nitrogen or helium cooled superconducting windings, which eliminates winding losses without affecting core losses. 
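The rule of thumb quoted at the start of this Cooling section (insulation life roughly halves for every 7 °C to 10 °C rise in operating temperature) can be turned into a rough life estimator. The sketch below is only that heuristic; the 8 °C halving interval and the example temperature rises are assumed values chosen for illustration, not data from this article.

```python
def relative_insulation_life(temp_rise_c: float, halving_interval_c: float = 8.0) -> float:
    """Life multiplier relative to the reference hot-spot temperature, using the
    'life halves every ~7-10 C' rule of thumb (an Arrhenius-style approximation)."""
    return 0.5 ** (temp_rise_c / halving_interval_c)

# Running 16 C hotter than the reference (assumed figure) cuts expected life to a quarter:
print(relative_insulation_life(16))   # 0.25
# Running 8 C cooler roughly doubles it:
print(relative_insulation_life(-8))   # 2.0
```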
=== Insulation === Insulation must be provided between the individual turns of the windings, between the windings, between windings and core, and at the terminals of the winding. Inter-turn insulation of small transformers may be a layer of insulating varnish on the wire. Layer of paper or polymer films may be inserted between layers of windings, and between primary and secondary windings. A transformer may be coated or dipped in a polymer resin to improve the strength of windings and protect them from moisture or corrosion. The resin may be impregnated into the winding insulation using combinations of vacuum and pressure during the coating process, eliminating all air voids in the winding. In the limit, the entire coil may be placed in a mold, and resin cast around it as a solid block, encapsulating the windings. Large oil-filled power transformers use windings wrapped with insulating paper, which is impregnated with oil during assembly of the transformer. Oil-filled transformers use highly refined mineral oil to insulate and cool the windings and core. Construction of oil-filled transformers requires that the insulation covering the windings be thoroughly dried of residual moisture before the oil is introduced. Drying may be done by circulating hot air around the core, by circulating externally heated transformer oil, or by vapor-phase drying (VPD) where an evaporated solvent transfers heat by condensation on the coil and core. For small transformers, resistance heating by injection of current into the windings is used. === Bushings === Larger transformers are provided with high-voltage insulated bushings made of polymers or porcelain. A large bushing can be a complex structure since it must provide careful control of the electric field gradient without letting the transformer leak oil. == Classification parameters == Transformers can be classified in many ways, such as the following: Power rating: From a fraction of a volt-ampere (VA) to over a thousand MVA. Duty of a transformer: Continuous, short-time, intermittent, periodic, varying. Frequency range: Power-frequency, audio-frequency, or radio-frequency. Voltage class: From a few volts to hundreds of kilovolts. Cooling type: Dry or liquid-immersed; self-cooled, forced air-cooled;forced oil-cooled, water-cooled. Application: power supply, impedance matching, output voltage and current stabilizer, pulse, circuit isolation, power distribution, rectifier, arc furnace, amplifier output, etc.. Basic magnetic form: Core form, shell form, concentric, sandwich. Constant-potential transformer descriptor: Step-up, step-down, isolation. General winding configuration: By IEC vector group, two-winding combinations of the phase designations delta, wye or star, and zigzag; autotransformer, Scott-T Rectifier phase-shift winding configuration: 2-winding, 6-pulse; 3-winding, 12-pulse; . . ., n-winding, [n − 1]·6-pulse; polygon; etc. K-factor: A measure of how well the transformer can withstand harmonic loads. == Applications == Various specific electrical application designs require a variety of transformer types. Although they all share the basic characteristic transformer principles, they are customized in construction or electrical properties for certain installation requirements or circuit conditions. In electric power transmission, transformers allow transmission of electric power at high voltages, which reduces the loss due to heating of the wires. This allows generating plants to be located economically at a distance from electrical consumers. 
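To make the last point concrete, the sketch below shows why stepping up the transmission voltage cuts resistive line loss: for a fixed power delivered, the line current falls in proportion to the voltage and the I²R loss falls with its square. The 10 MW load, 5 Ω line resistance and the voltage levels are assumed, illustrative figures (unity power factor assumed), not values taken from this article.

```python
def line_loss_w(power_w: float, line_voltage_v: float, line_resistance_ohm: float) -> float:
    """I^2 * R loss in the line for a given delivered power (unity power factor assumed)."""
    current = power_w / line_voltage_v
    return current ** 2 * line_resistance_ohm

P = 10e6   # 10 MW delivered (assumed)
R = 5.0    # 5 ohm total line resistance (assumed)
for kv in (11, 33, 132):
    loss = line_loss_w(P, kv * 1e3, R)
    print(f"{kv:>3} kV: loss ~ {loss / 1e3:,.0f} kW ({100 * loss / P:.2f} % of delivered power)")
# Stepping up from 11 kV to 132 kV reduces the loss by a factor of (132/11)^2 = 144.
```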
All but a tiny fraction of the world's electrical power has passed through a series of transformers by the time it reaches the consumer. In many electronic devices, a transformer is used to convert voltage from the distribution wiring to convenient values for the circuit requirements, either directly at the power line frequency or through a switch mode power supply. Signal and audio transformers are used to couple stages of amplifiers and to match devices such as microphones and record players to the input of amplifiers. Audio transformers allowed telephone circuits to carry on a two-way conversation over a single pair of wires. A balun transformer converts a signal that is referenced to ground to a signal that has balanced voltages to ground, such as between external cables and internal circuits. Isolation transformers prevent leakage of current into the secondary circuit and are used in medical equipment and at construction sites. Resonant transformers are used for coupling between stages of radio receivers, or in high-voltage Tesla coils. == History == === Discovery of induction === Electromagnetic induction, the principle of the operation of the transformer, was discovered independently by Michael Faraday in 1831 and Joseph Henry in 1832. Only Faraday furthered his experiments to the point of working out the equation describing the relationship between EMF and magnetic flux now known as Faraday's law of induction: |\mathcal{E}| = \left|\frac{\mathrm{d}\Phi_\text{B}}{\mathrm{d}t}\right|, where |\mathcal{E}| is the magnitude of the EMF in volts and ΦB is the magnetic flux through the circuit in webers. Faraday performed early experiments on induction between coils of wire, including winding a pair of coils around an iron ring, thus creating the first toroidal closed-core transformer. However, he only applied individual pulses of current to his transformer, and never discovered the relation between the turns ratio and EMF in the windings. === Induction coils === The first type of transformer to see wide use was the induction coil, invented by the Irish Catholic priest Nicholas Callan of Maynooth College, Ireland, in 1836. He was one of the first researchers to realize that the more turns the secondary winding has in relation to the primary winding, the larger the induced secondary EMF will be. Induction coils evolved from scientists' and inventors' efforts to get higher voltages from batteries. Since batteries produce direct current (DC) rather than AC, induction coils relied upon vibrating electrical contacts that regularly interrupted the current in the primary to create the flux changes necessary for induction. Between the 1830s and the 1870s, efforts to build better induction coils, mostly by trial and error, slowly revealed the basic principles of transformers. === First alternating current transformers === By the 1870s, efficient generators producing alternating current (AC) were available, and it was found AC could power an induction coil directly, without an interrupter. In 1876, Russian engineer Pavel Yablochkov invented a lighting system based on a set of induction coils where the primary windings were connected to a source of AC. The secondary windings could be connected to several 'electric candles' (arc lamps) of his own design. The coils Yablochkov employed functioned essentially as transformers.
In 1878, the Ganz factory, Budapest, Hungary, began producing equipment for electric lighting and, by 1883, had installed over fifty systems in Austria-Hungary. Their AC systems used arc and incandescent lamps, generators, and other equipment. In 1882, Lucien Gaulard and John Dixon Gibbs first exhibited a device with an initially widely criticized laminated plate open iron core called a 'secondary generator' in London, then sold the idea to the Westinghouse company in the United States in 1886. They also exhibited the invention in Turin, Italy in 1884, where it was highly successful and adopted for an electric lighting system. Their open-core device used a fixed 1:1 ratio to supply a series circuit for the utilization load (lamps). However, the voltage of their system was controlled by moving the iron core in or out. ==== Early series circuit transformer distribution ==== Induction coils with open magnetic circuits are inefficient at transferring power to loads. Until about 1880, the paradigm for AC power transmission from a high voltage supply to a low voltage load was a series circuit. Open-core transformers with a ratio near 1:1 were connected with their primaries in series to allow use of a high voltage for transmission while presenting a low voltage to the lamps. The inherent flaw in this method was that turning off a single lamp (or other electric device) affected the voltage supplied to all others on the same circuit. Many adjustable transformer designs were introduced to compensate for this problematic characteristic of the series circuit, including those employing methods of adjusting the core or bypassing the magnetic flux around part of a coil. Efficient, practical transformer designs did not appear until the 1880s, but within a decade, the transformer would be instrumental in the war of the currents, and in seeing AC distribution systems triumph over their DC counterparts, a position in which they have remained dominant ever since. === Closed-core transformers and parallel power distribution === In the autumn of 1884, Károly Zipernowsky, Ottó Bláthy and Miksa Déri (ZBD), three Hungarian engineers associated with the Ganz Works, had determined that open-core devices were impracticable, as they were incapable of reliably regulating voltage. The Ganz factory had also in the autumn of 1884 made delivery of the world's first five high-efficiency AC transformers, the first of these units having been shipped on September 16, 1884. This first unit had been manufactured to the following specifications: 1,400 W, 40 Hz, 120:72 V, 11.6:19.4 A, ratio 1.67:1, one-phase, shell form. In their joint 1885 patent applications for novel transformers (later called ZBD transformers), they described two designs with closed magnetic circuits where copper windings were either wound around an iron wire ring core or surrounded by an iron wire core. The two designs were the first application of the two basic transformer constructions in common use to this day, termed "core form" or "shell form" . In both designs, the magnetic flux linking the primary and secondary windings traveled almost entirely within the confines of the iron core, with no intentional path through air (see Toroidal cores below). The new transformers were 3.4 times more efficient than the open-core bipolar devices of Gaulard and Gibbs. 
The ZBD patents included two other major interrelated innovations: one concerning the use of parallel connected, instead of series connected, utilization loads, the other concerning the ability to have high turns ratio transformers such that the supply network voltage could be much higher (initially 1,400 to 2,000 V) than the voltage of utilization loads (100 V initially preferred). When employed in parallel connected electric distribution systems, closed-core transformers finally made it technically and economically feasible to provide electric power for lighting in homes, businesses and public spaces. Bláthy had suggested the use of closed cores, Zipernowsky had suggested the use of parallel shunt connections, and Déri had performed the experiments; in early 1885, the three engineers also eliminated the problem of eddy current losses with the invention of the lamination of electromagnetic cores. Transformers today are designed on the principles discovered by the three engineers. They also popularized the word 'transformer' to describe a device for altering the EMF of an electric current, although the term had already been in use by 1882. In 1886, the ZBD engineers designed, and the Ganz factory supplied electrical equipment for, the world's first power station that used AC generators to power a parallel connected common electrical network, the steam-powered Rome-Cerchi power plant. === Westinghouse improvements === Building on the advancement of AC technology in Europe, George Westinghouse founded Westinghouse Electric in Pittsburgh, Pennsylvania, on January 8, 1886. The new firm became active in developing alternating current (AC) electric infrastructure throughout the United States. The Edison Electric Light Company held an option on the US rights for the ZBD transformers, requiring Westinghouse to pursue alternative designs on the same principles. George Westinghouse had bought Gaulard and Gibbs' patents for $50,000 in February 1886. He assigned to William Stanley the task of redesigning the Gaulard and Gibbs transformer for commercial use in the United States. Stanley's first patented design was for induction coils with single cores of soft iron and adjustable gaps to regulate the EMF present in the secondary winding. This design was first used commercially in the US in 1886 but Westinghouse was intent on improving the Stanley design to make it (unlike the ZBD type) easy and cheap to produce. Westinghouse, Stanley and associates soon developed a core that was easier to manufacture, consisting of a stack of thin 'E‑shaped' iron plates insulated by thin sheets of paper or other insulating material. Pre-wound copper coils could then be slid into place, and straight iron plates laid in to create a closed magnetic circuit. Westinghouse obtained a patent for the new low-cost design in 1887. === Other early transformer designs === In 1889, Russian-born engineer Mikhail Dolivo-Dobrovolsky developed the first three-phase transformer at the Allgemeine Elektricitäts-Gesellschaft ('General Electricity Company') in Germany. In 1891, Nikola Tesla invented the Tesla coil, an air-cored, dual-tuned resonant transformer for producing very high voltages at high frequency. Audio frequency transformers ("repeating coils") were used by early experimenters in the development of the telephone. == See also == == Notes == == References ==
== External links ==
Wikipedia/Electrical_transformer
In organic chemistry, a functional group is any substituent or moiety in a molecule that causes the molecule's characteristic chemical reactions. The same functional group will undergo the same or similar chemical reactions regardless of the rest of the molecule's composition. This enables systematic prediction of chemical reactions and behavior of chemical compounds and the design of chemical synthesis. The reactivity of a functional group can be modified by other functional groups nearby. Functional group interconversion can be used in retrosynthetic analysis to plan organic synthesis. A functional group is a group of atoms in a molecule with distinctive chemical properties, regardless of the other atoms in the molecule. The atoms in a functional group are linked to each other and to the rest of the molecule by covalent bonds. For repeating units of polymers, functional groups attach to their nonpolar core of carbon atoms and thus add chemical character to carbon chains. Functional groups can also be charged, e.g. in carboxylate salts (−COO−), which turns the molecule into a polyatomic ion or a complex ion. Functional groups binding to a central atom in a coordination complex are called ligands. Complexation and solvation are also caused by specific interactions of functional groups. In the common rule of thumb "like dissolves like", it is the shared or mutually well-interacting functional groups which give rise to solubility. For example, sugar dissolves in water because both share the hydroxyl functional group (−OH) and hydroxyls interact strongly with each other. Plus, when functional groups are more electronegative than atoms they attach to, the functional groups will become polar, and the otherwise nonpolar molecules containing these functional groups become polar and so become soluble in some aqueous environment. Combining the names of functional groups with the names of the parent alkanes generates what is termed a systematic nomenclature for naming organic compounds. In traditional nomenclature, the first carbon atom after the carbon that attaches to the functional group is called the alpha carbon; the second, beta carbon, the third, gamma carbon, etc. If there is another functional group at a carbon, it may be named with the Greek letter, e.g., the gamma-amine in gamma-aminobutyric acid is on the third carbon of the carbon chain attached to the carboxylic acid group. IUPAC conventions call for numeric labeling of the position, e.g. 4-aminobutanoic acid. In traditional names various qualifiers are used to label isomers, for example, isopropanol (IUPAC name: propan-2-ol) is an isomer of n-propanol (propan-1-ol). The term moiety has some overlap with the term "functional group". However, a moiety is an entire "half" of a molecule, which can be not only a single functional group, but also a larger unit consisting of multiple functional groups. For example, an "aryl moiety" may be any group containing an aromatic ring, regardless of how many functional groups the said aryl has. == Table of common functional groups == The following is a list of common functional groups. In the formulas, the symbols R and R' usually denote an attached hydrogen, or a hydrocarbon side chain of any length, but may sometimes refer to any group of atoms. === Hydrocarbons === Hydrocarbons are a class of molecule that is defined by functional groups called hydrocarbyls that contain only carbon and hydrogen, but vary in the number and order of double bonds. Each one differs in type (and scope) of reactivity. 
There are also a large number of branched or ring alkanes that have specific names, e.g., tert-butyl, bornyl, cyclohexyl, etc. There are several functional groups that contain an alkene such as vinyl group, allyl group, or acrylic group. Hydrocarbons may form charged structures: positively charged carbocations or negative carbanions. Carbocations are often named with the suffix -ium. Examples are tropylium and triphenylmethyl cations and the cyclopentadienyl anion. === Groups containing halogen === Haloalkanes are a class of molecule that is defined by a carbon–halogen bond. This bond can be relatively weak (in the case of an iodoalkane) or quite stable (as in the case of a fluoroalkane). In general, with the exception of fluorinated compounds, haloalkanes readily undergo nucleophilic substitution reactions or elimination reactions. The substitution on the carbon, the acidity of an adjacent proton, the solvent conditions, etc. all can influence the outcome of the reactivity. === Groups containing oxygen === Compounds that contain C–O bonds each possess differing reactivity based upon the location and hybridization of the C–O bond, owing to the electron-withdrawing effect of sp-hybridized oxygen (carbonyl groups) and the donating effects of sp2-hybridized oxygen (alcohol groups). === Groups containing nitrogen === Compounds that contain nitrogen in this category may contain C-O bonds, such as in the case of amides. === Groups containing sulfur === Compounds that contain sulfur exhibit unique chemistry due to sulfur's ability to form more bonds than oxygen, its lighter analogue on the periodic table. Substitutive nomenclature (marked as prefix in table) is preferred over functional class nomenclature (marked as suffix in table) for sulfides, disulfides, sulfoxides and sulfones. === Groups containing phosphorus === Compounds that contain phosphorus exhibit unique chemistry due to the ability of phosphorus to form more bonds than nitrogen, its lighter analogue on the periodic table. === Groups containing boron === Compounds containing boron exhibit unique chemistry due to their having partially filled octets and therefore acting as Lewis acids. === Groups containing metals === Note: Fluorine is too electronegative to be bonded to magnesium; it forms an ionic salt instead. === Names of radicals or moieties === These names are used to refer to the moieties themselves or to radical species, and also to form the names of halides and substituents in larger molecules. When the parent hydrocarbon is saturated, the suffix ("-yl", "-ylidene", or "-ylidyne") replaces "-ane" (e.g. "ethane" becomes "ethyl"); otherwise, the suffix replaces only the final "-e" (e.g. "ethyne" becomes "ethynyl"). When used to refer to moieties, multiple single bonds differ from a single multiple bond. For example, a methylene bridge (methanediyl) has two single bonds, whereas a methylidene group (methylidene) has one double bond. Suffixes can be combined, as in methylidyne (triple bond) vs. methylylidene (single bond and double bond) vs. methanetriyl (three single bonds). There are some retained names, such as methylene for methanediyl, 1,x-phenylene for phenyl-1,x-diyl (where x is 2, 3, or 4), carbyne for methylidyne, and trityl for triphenylmethyl. == See also == Category:Functional groups Group contribution method == References == == External links == IUPAC Blue Book (organic nomenclature) "IUPAC ligand abbreviations" (PDF). IUPAC. 2 April 2004. Archived from the original (PDF) on 27 September 2007. Retrieved 25 February 2015.
Functional group video
Wikipedia/Functional_groups
A carbohydrate () is a biomolecule composed of carbon (C), hydrogen (H), and oxygen (O) atoms. The typical hydrogen-to-oxygen atomic ratio is 2:1, analogous to that of water, and is represented by the empirical formula Cm(H2O)n (where m and n may differ). This formula does not imply direct covalent bonding between hydrogen and oxygen atoms; for example, in CH2O, hydrogen is covalently bonded to carbon, not oxygen. While the 2:1 hydrogen-to-oxygen ratio is characteristic of many carbohydrates, exceptions exist. For instance, uronic acids and deoxy-sugars like fucose deviate from this precise stoichiometric definition. Conversely, some compounds conforming to this definition, such as formaldehyde and acetic acid, are not classified as carbohydrates. The term is predominantly used in biochemistry, functioning as a synonym for saccharide (from Ancient Greek σάκχαρον (sákkharon) 'sugar'), a group that includes sugars, starch, and cellulose. The saccharides are divided into four chemical groups: monosaccharides, disaccharides, oligosaccharides, and polysaccharides. Monosaccharides and disaccharides, the smallest (lower molecular weight) carbohydrates, are commonly referred to as sugars. While the scientific nomenclature of carbohydrates is complex, the names of the monosaccharides and disaccharides very often end in the suffix -ose, which was originally taken from the word glucose (from Ancient Greek γλεῦκος (gleûkos) 'wine, must'), and is used for almost all sugars (e.g., fructose (fruit sugar), sucrose (cane or beet sugar), ribose, lactose (milk sugar)). Carbohydrates perform numerous roles in living organisms. Polysaccharides serve as an energy store (e.g., starch and glycogen) and as structural components (e.g., cellulose in plants and chitin in arthropods and fungi). The 5-carbon monosaccharide ribose is an important component of coenzymes (e.g., ATP, FAD and NAD) and the backbone of the genetic molecule known as RNA. The related deoxyribose is a component of DNA. Saccharides and their derivatives include many other important biomolecules that play key roles in the immune system, fertilization, preventing pathogenesis, blood clotting, and development. Carbohydrates are central to nutrition and are found in a wide variety of natural and processed foods. Starch is a polysaccharide and is abundant in cereals (wheat, maize, rice), potatoes, and processed food based on cereal flour, such as bread, pizza or pasta. Sugars appear in human diet mainly as table sugar (sucrose, extracted from sugarcane or sugar beets), lactose (abundant in milk), glucose and fructose, both of which occur naturally in honey, many fruits, and some vegetables. Table sugar, milk, or honey is often added to drinks and many prepared foods such as jam, biscuits and cakes. Cellulose, a polysaccharide found in the cell walls of all plants, is one of the main components of insoluble dietary fiber. Although it is not digestible by humans, cellulose and insoluble dietary fiber generally help maintain a healthy digestive system by facilitating bowel movements. Other polysaccharides contained in dietary fiber include resistant starch and inulin, which feed some bacteria in the microbiota of the large intestine, and are metabolized by these bacteria to yield short-chain fatty acids. == Terminology == In scientific literature, the term "carbohydrate" has many synonyms, like "sugar" (in the broad sense), "saccharide", "ose", "glucide", "hydrate of carbon" or "polyhydroxy compounds with aldehyde or ketone". 
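Before moving on, the empirical formula Cm(H2O)n introduced at the start of this article can be expressed as a simple screen: a 2:1 hydrogen-to-oxygen ratio with at least three carbons (the biochemical usage discussed below excludes one- and two-carbon compounds). The sketch below applies that naive test to a few molecular formulas and shows the exceptions already mentioned; the helper function and its name are illustrative conventions, not a standard library API, and the screen is purely stoichiometric, not a real structural classification.

```python
def fits_carbohydrate_formula(c: int, h: int, o: int) -> bool:
    """Naive Cm(H2O)n screen: a 2:1 H-to-O ratio with at least three carbons.
    Real classification is structural (polyhydroxy aldehydes/ketones and derivatives)."""
    return c >= 3 and o > 0 and h == 2 * o

tests = {
    "glucose C6H12O6":     (6, 12, 6),    # fits, and is a carbohydrate
    "sucrose C12H22O11":   (12, 22, 11),  # fits; literally C12(H2O)11
    "deoxyribose C5H10O4": (5, 10, 4),    # a real carbohydrate (deoxy sugar) that fails the screen
    "acetic acid C2H4O2":  (2, 4, 2),     # matches the 2:1 ratio but is screened out by the carbon rule
    "formaldehyde CH2O":   (1, 2, 1),     # matches the ratio but has only one carbon
}
for name, (c, h, o) in tests.items():
    verdict = "fits" if fits_carbohydrate_formula(c, h, o) else "does not fit"
    print(f"{name}: {verdict} the Cm(H2O)n pattern")
```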
Some of these terms, especially "carbohydrate" and "sugar", are also used with other meanings. In food science and in many informal contexts, the term "carbohydrate" often means any food that is particularly rich in the complex carbohydrate starch (such as cereals, bread and pasta) or simple carbohydrates, such as sugar (found in candy, jams, and desserts). This informality is sometimes confusing since it confounds chemical structure and digestibility in humans. The term "carbohydrate" (or "carbohydrate by difference") refers also to dietary fiber, which is a carbohydrate, but, unlike sugars and starches, fibers are not hydrolyzed by human digestive enzymes. Fiber generally contributes little food energy in humans, but is often included in the calculation of total food energy. The fermentation of soluble fibers by gut microflora can yield short-chain fatty acids, and soluble fiber is estimated to provide about 2 kcal/g. == History == The history of carbohydrates dates back around 10,000 years, to the cultivation of sugarcane in Papua New Guinea during the Neolithic agricultural revolution. The term "carbohydrate" was first proposed by the German chemist Carl Schmidt in 1844. In 1856, glycogen, a form of carbohydrate storage in animal livers, was discovered by French physiologist Claude Bernard. == Structure == Formerly the name "carbohydrate" was used in chemistry for any compound with the formula Cm(H2O)n. Following this definition, some chemists considered formaldehyde (CH2O) to be the simplest carbohydrate, while others claimed that title for glycolaldehyde. Today, the term is generally understood in the biochemistry sense, which excludes compounds with only one or two carbons and includes many biological carbohydrates which deviate from this formula. While the above representative formulas would seem to capture the commonly known carbohydrates, ubiquitous and abundant carbohydrates often deviate from this. For example, carbohydrates often display chemical groups such as: N-acetyl (e.g., chitin), sulfate (e.g., glycosaminoglycans), carboxylic acid and deoxy modifications (e.g., fucose and sialic acid). Natural saccharides are generally built of simple carbohydrates called monosaccharides with general formula (CH2O)n where n is three or more. A typical monosaccharide has the structure H–(CHOH)x(C=O)–(CHOH)y–H, that is, an aldehyde or ketone with many hydroxyl groups added, usually one on each carbon atom that is not part of the aldehyde or ketone functional group. Examples of monosaccharides are glucose, fructose, and glyceraldehyde. However, some biological substances commonly called "monosaccharides" do not conform to this formula (e.g., uronic acids and deoxy-sugars such as fucose) and there are many chemicals that do conform to this formula but are not considered to be monosaccharides (e.g., formaldehyde CH2O and inositol (CH2O)6). The open-chain form of a monosaccharide often coexists with a closed ring form where the aldehyde/ketone carbonyl group carbon (C=O) and hydroxyl group (–OH) react forming a hemiacetal with a new C–O–C bridge. Monosaccharides can be linked together into what are called polysaccharides (or oligosaccharides) in a large variety of ways. Many carbohydrates contain one or more modified monosaccharide units that have had one or more groups replaced or removed.
For example, deoxyribose, a component of DNA, is a modified version of ribose; chitin is composed of repeating units of N-acetyl glucosamine, a nitrogen-containing form of glucose. == Division == Carbohydrates are polyhydroxy aldehydes, ketones, alcohols, acids, their simple derivatives and their polymers having linkages of the acetal type. They may be classified according to their degree of polymerization, and may be divided initially into three principal groups, namely sugars, oligosaccharides and polysaccharides. == Monosaccharides == Monosaccharides are the simplest carbohydrates in that they cannot be hydrolyzed to smaller carbohydrates. They are aldehydes or ketones with two or more hydroxyl groups. The general chemical formula of an unmodified monosaccharide is (C•H2O)n, literally a "carbon hydrate". Monosaccharides are important fuel molecules as well as building blocks for nucleic acids. The smallest monosaccharides, for which n=3, are dihydroxyacetone and D- and L-glyceraldehyde. === Classification of monosaccharides === Monosaccharides are classified according to three different characteristics: the placement of the carbonyl group, the number of carbon atoms, and the chiral handedness. If the carbonyl group is an aldehyde, the monosaccharide is an aldose; if the carbonyl group is a ketone, the monosaccharide is a ketose. Monosaccharides with three carbon atoms are called trioses, those with four are called tetroses, five are called pentoses, six are hexoses, and so on. These two systems of classification are often combined. For example, glucose is an aldohexose (a six-carbon aldehyde), ribose is an aldopentose (a five-carbon aldehyde), and fructose is a ketohexose (a six-carbon ketone). Each carbon atom bearing a hydroxyl group (−OH), with the exception of the first and last carbons, is asymmetric, making it a stereocenter with two possible configurations (R or S). Because of this asymmetry, a number of isomers may exist for any given monosaccharide formula. Using the Le Bel–van't Hoff rule, the aldohexose D-glucose, for example, has the formula (C·H2O)6, of which four of its six carbon atoms are stereogenic, making D-glucose one of 2⁴ = 16 possible stereoisomers. In the case of glyceraldehyde, an aldotriose, there is one pair of possible stereoisomers, which are enantiomers and epimers. 1,3-Dihydroxyacetone, the ketose corresponding to the aldose glyceraldehyde, is a symmetric molecule with no stereocenters. The assignment of D or L is made according to the orientation of the asymmetric carbon furthest from the carbonyl group: in a standard Fischer projection if the hydroxyl group is on the right the molecule is a D sugar, otherwise it is an L sugar. The "D-" and "L-" prefixes should not be confused with "d-" or "l-", which indicate the direction that the sugar rotates plane polarized light. This usage of "d-" and "l-" is no longer followed in carbohydrate chemistry. === Ring-straight chain isomerism === The aldehyde or ketone group of a straight-chain monosaccharide will react reversibly with a hydroxyl group on a different carbon atom to form a hemiacetal or hemiketal, forming a heterocyclic ring with an oxygen bridge between two carbon atoms. Rings with five and six atoms are called furanose and pyranose forms, respectively, and exist in equilibrium with the straight-chain form.
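The stereoisomer count quoted above (2⁴ = 16 for D-glucose) generalizes directly from the number of stereocenters. The short sketch below counts stereocenters for an open-chain aldose or 2-ketose of a given length and applies the 2ⁿ upper bound from the Le Bel–van't Hoff rule; the function names are illustrative, and the count ignores ring (anomeric) forms, which are discussed next.

```python
def stereocenters(carbons: int, kind: str) -> int:
    """Stereocenters in an open-chain monosaccharide.
    Aldoses: every interior carbon (C2..C[n-1]) bears an -OH and is a stereocenter.
    2-Ketoses: the carbonyl sits at C2, so only C3..C[n-1] are stereocenters."""
    if kind == "aldose":
        return max(carbons - 2, 0)
    if kind == "ketose":
        return max(carbons - 3, 0)
    raise ValueError("kind must be 'aldose' or 'ketose'")

def max_stereoisomers(carbons: int, kind: str) -> int:
    """Le Bel-van't Hoff upper bound: 2 ** (number of stereocenters)."""
    return 2 ** stereocenters(carbons, kind)

print(max_stereoisomers(6, "aldose"))  # 16 (aldohexoses such as glucose: 2^4)
print(max_stereoisomers(3, "aldose"))  # 2  (D- and L-glyceraldehyde)
print(max_stereoisomers(3, "ketose"))  # 1  (dihydroxyacetone has no stereocenter)
```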
During the conversion from straight-chain form to the cyclic form, the carbon atom containing the carbonyl oxygen, called the anomeric carbon, becomes a stereogenic center with two possible configurations: the oxygen atom may take a position either above or below the plane of the ring. The resulting possible pair of stereoisomers is called anomers. In the α anomer, the -OH substituent on the anomeric carbon rests on the opposite side (trans) of the ring from the CH2OH side branch. The alternative form, in which the CH2OH substituent and the anomeric hydroxyl are on the same side (cis) of the plane of the ring, is called the β anomer. === Use in living organisms === Monosaccharides are the major fuel source for metabolism, and glucose is an energy-rich molecule utilized to generate ATP in almost all living organisms. Glucose is a high-energy substrate produced in plants through photosynthesis by combining energy-poor water and carbon dioxide in an endothermic reaction fueled by solar energy. When monosaccharides are not immediately needed, they are often converted to more space-efficient (i.e., less water-soluble) forms, often polysaccharides. In animals, glucose circulating in the blood is a major metabolic substrate and is oxidized in the mitochondria to produce ATP for performing useful cellular work. In humans and other animals, serum glucose levels must be regulated carefully to maintain glucose within acceptable limits and prevent the deleterious effects of hypo- or hyperglycemia. Hormones such as insulin and glucagon serve to keep glucose levels in balance: insulin stimulates glucose uptake into the muscle and fat cells when glucose levels are high, whereas glucagon helps to raise glucose levels if they dip too low by stimulating hepatic glucose synthesis. In many animals, including humans, this storage form is glycogen, especially in liver and muscle cells. In plants, starch is used for the same purpose. The most abundant carbohydrate, cellulose, is a structural component of the cell wall of plants and many forms of algae. Ribose is a component of RNA. Deoxyribose is a component of DNA. Lyxose is a component of lyxoflavin found in the human heart. Ribulose and xylulose occur in the pentose phosphate pathway. Galactose, a component of milk sugar lactose, is found in galactolipids in plant cell membranes and in glycoproteins in many tissues. Mannose occurs in human metabolism, especially in the glycosylation of certain proteins. Fructose, or fruit sugar, is found in many plants; in humans it is absorbed directly from the intestines during digestion, metabolized in the liver, and also found in semen. Trehalose, a major sugar of insects, is rapidly hydrolyzed into two glucose molecules to support continuous flight. == Disaccharides == Two joined monosaccharides are called a disaccharide, the simplest kind of polysaccharide. Examples include sucrose and lactose. They are composed of two monosaccharide units bound together by a covalent bond known as a glycosidic linkage formed via a dehydration reaction, resulting in the loss of a hydrogen atom from one monosaccharide and a hydroxyl group from the other. The formula of unmodified disaccharides is C12H22O11. Although there are numerous kinds of disaccharides, a handful of disaccharides are particularly notable. Sucrose is the most abundant disaccharide, and the main form in which carbohydrates are transported in plants. It is composed of one D-glucose molecule and one D-fructose molecule.
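The dehydration reaction and the C12H22O11 formula just mentioned can be checked with simple element bookkeeping: joining two hexoses releases one water molecule. The sketch below does that arithmetic for glucose plus fructose; the dictionary-based formula representation is only an illustrative convention.

```python
from collections import Counter

GLUCOSE = Counter({"C": 6, "H": 12, "O": 6})
FRUCTOSE = Counter({"C": 6, "H": 12, "O": 6})
WATER = Counter({"H": 2, "O": 1})

def condense(a: Counter, b: Counter) -> Counter:
    """Glycosidic condensation: combine two monosaccharides and remove one H2O."""
    combined = a + b
    combined.subtract(WATER)
    return +combined  # drop any zero counts

sucrose = condense(GLUCOSE, FRUCTOSE)
print(dict(sucrose))  # {'C': 12, 'H': 22, 'O': 11} -> C12H22O11, as stated above
```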
The systematic name for sucrose, O-α-D-glucopyranosyl-(1→2)-D-fructofuranoside, indicates four things: its monosaccharides, glucose and fructose; their ring types, glucose being a pyranose and fructose a furanose; how they are linked together, with the oxygen on carbon number 1 (C1) of α-D-glucose linked to the C2 of D-fructose; and, through the -oside suffix, that the anomeric carbon of both monosaccharides participates in the glycosidic bond. Lactose, a disaccharide composed of one D-galactose molecule and one D-glucose molecule, occurs naturally in mammalian milk. The systematic name for lactose is O-β-D-galactopyranosyl-(1→4)-D-glucopyranose. Other notable disaccharides include maltose (two D-glucoses linked α-1,4) and cellobiose (two D-glucoses linked β-1,4). Disaccharides can be classified into two types: reducing and non-reducing disaccharides. If the anomeric carbon of one unit remains free as a hemiacetal (not tied up in the glycosidic bond), the disaccharide is a reducing disaccharide, or biose; if both anomeric carbons participate in the bond, as in sucrose, the disaccharide is non-reducing. == Oligosaccharides and polysaccharides == === Oligosaccharides === Oligosaccharides are saccharide polymers composed of three to ten units of monosaccharides, connected via glycosidic linkages, similar to disaccharides. They are usually linked to lipids or amino acids by a glycosidic linkage through oxygen or nitrogen to form glycolipids and glycoproteins, though some, like the raffinose series and the fructooligosaccharides, do not. They have roles in cell recognition and cell adhesion. === Polysaccharides === == Nutrition == Carbohydrate consumed in food yields 3.87 kilocalories of energy per gram for simple sugars, and 3.57 to 4.12 kilocalories per gram for complex carbohydrate in most other foods. Relatively high levels of carbohydrate are associated with processed foods or refined foods made from plants, including sweets, cookies and candy, table sugar, honey, soft drinks, breads and crackers, jams and fruit products, pastas and breakfast cereals. Refined carbohydrates from processed foods such as white bread or rice, soft drinks, and desserts are readily digestible, and many are known to have a high glycemic index, which reflects a rapid assimilation of glucose. By contrast, the digestion of whole, unprocessed, fiber-rich foods such as beans, peas, and whole grains produces a slower and steadier release of glucose and energy into the body. Animal-based foods generally have the lowest carbohydrate levels, although milk does contain a high proportion of lactose. Organisms typically cannot metabolize all types of carbohydrate to yield energy. Glucose is a nearly universal and accessible source of energy. Many organisms also have the ability to metabolize other monosaccharides and disaccharides but glucose is often metabolized first. In Escherichia coli, for example, the lac operon will express enzymes for the digestion of lactose when it is present, but if both lactose and glucose are present, the lac operon is repressed, resulting in the glucose being used first (see: Diauxie). Polysaccharides are also common sources of energy. Many organisms can easily break down starches into glucose; most organisms, however, cannot metabolize cellulose or other polysaccharides such as chitin and arabinoxylans. These carbohydrate types can be metabolized by some bacteria and protists. Ruminants and termites, for example, use microorganisms to process cellulose, fermenting it to caloric short-chain fatty acids. Even though humans lack the enzymes to digest fiber, dietary fiber represents an important dietary element for humans.
Fibers promote healthy digestion, help regulate postprandial glucose and insulin levels, reduce cholesterol levels, and promote satiety. The Institute of Medicine recommends that American and Canadian adults get between 45 and 65% of dietary energy from whole-grain carbohydrates. The Food and Agriculture Organization and World Health Organization jointly recommend that national dietary guidelines set a goal of 55–75% of total energy from carbohydrates, but only 10% directly from sugars (their term for simple carbohydrates). A 2017 Cochrane Systematic Review concluded that there was insufficient evidence to support the claim that whole grain diets can affect cardiovascular disease. === Classification === The term complex carbohydrate was first used in the U.S. Senate Select Committee on Nutrition and Human Needs publication Dietary Goals for the United States (1977) where it was intended to distinguish sugars from other carbohydrates (which were perceived to be nutritionally superior). However, the report put "fruit, vegetables and whole-grains" in the complex carbohydrate column, despite the fact that these may contain sugars as well as polysaccharides. The standard usage, however, is to classify carbohydrates chemically: simple if they are sugars (monosaccharides and disaccharides) and complex if they are polysaccharides (or oligosaccharides). Carbohydrates are sometimes divided into "available carbohydrates", which are absorbed in the small intestine and "unavailable carbohydrates", which pass to the large intestine, where they are subject to fermentation by the gastrointestinal microbiota. ==== Glycemic index ==== The glycemic index (GI) and glycemic load concepts characterize the potential for carbohydrates in food to raise blood glucose compared to a reference food (generally pure glucose). Expressed numerically as GI, carbohydrate-containing foods can be grouped as high-GI (score more than 70), moderate-GI (56–69), or low-GI (less than 55) relative to pure glucose (GI=100). Consumption of carbohydrate-rich, high-GI foods causes an abrupt increase in blood glucose concentration that declines rapidly following the meal, whereas low-GI foods with lower carbohydrate content produces a lower blood glucose concentration that returns gradually after the meal. Glycemic load is a measure relating the quality of carbohydrates in a food (low- vs. high-carbohydrate content – the GI) by the amount of carbohydrates in a single serving of that food. === Health effects of dietary carbohydrate restriction === Low-carbohydrate diets may miss the health advantages – such as increased intake of dietary fiber and phytochemicals – afforded by high-quality plant foods such as legumes and pulses, whole grains, fruits, and vegetables. A "meta-analysis, of moderate quality," included as adverse effects of the diet halitosis, headache and constipation. Carbohydrate-restricted diets can be as effective as low-fat diets in helping achieve weight loss over the short term when overall calorie intake is reduced. An Endocrine Society scientific statement said that "when calorie intake is held constant [...] body-fat accumulation does not appear to be affected by even very pronounced changes in the amount of fat vs carbohydrate in the diet." In the long term, low-carbohydrate diets do not appear to confer a "metabolic advantage," and effective weight loss or maintenance depends on the level of calorie restriction, not the ratio of macronutrients in a diet. 
Diet advocates reason that carbohydrates cause undue fat accumulation by increasing blood insulin levels; however, a more balanced diet that restricts refined carbohydrates can also reduce serum glucose and insulin levels and may also suppress lipogenesis and promote fat oxidation. However, as far as energy expenditure itself is concerned, the claim that low-carbohydrate diets have a "metabolic advantage" is not supported by clinical evidence. Further, it is not clear how low-carbohydrate dieting affects cardiovascular health, although two reviews showed that carbohydrate restriction may improve lipid markers of cardiovascular disease risk. Carbohydrate-restricted diets are no more effective than a conventional healthy diet in preventing the onset of type 2 diabetes, but for people with type 2 diabetes, they are a viable option for losing weight or helping with glycemic control. There is limited evidence to support routine use of low-carbohydrate dieting in managing type 1 diabetes. The American Diabetes Association recommends that people with diabetes should adopt a generally healthy diet, rather than a diet focused on carbohydrate or other macronutrients. An extreme form of low-carbohydrate diet – the ketogenic diet – is established as a medical diet for treating epilepsy. Through celebrity endorsement during the early 21st century, it became a fad diet as a means of weight loss, but with risks of undesirable side effects, such as low energy levels and increased hunger, insomnia, nausea, and gastrointestinal discomfort. The British Dietetic Association named it one of the "top 5 worst celeb diets to avoid in 2018".
== Sources ==
Most dietary carbohydrates contain glucose, either as their only building block (as in the polysaccharides starch and glycogen), or together with another monosaccharide (as in the heterodisaccharides sucrose and lactose). Unbound glucose is one of the main ingredients of honey. Glucose is extremely abundant and has been isolated from a variety of natural sources across the world, including male cones of the coniferous tree Wollemia nobilis in Rome, the roots of Ilex asprella plants in China, and straws from rice in California.
== Metabolism ==
Carbohydrate metabolism is the series of biochemical processes responsible for the formation, breakdown and interconversion of carbohydrates in living organisms. The most important carbohydrate is glucose, a simple sugar (monosaccharide) that is metabolized by nearly all known organisms. Glucose and other carbohydrates are part of a wide variety of metabolic pathways across species: plants synthesize carbohydrates from carbon dioxide and water by photosynthesis, storing the absorbed energy internally, often in the form of starch or lipids. Plant components are consumed by animals and fungi, and used as fuel for cellular respiration. Oxidation of one gram of carbohydrate yields approximately 16 kJ (4 kcal) of energy, while the oxidation of one gram of lipids yields about 38 kJ (9 kcal). The human body stores between 300 and 500 g of carbohydrates depending on body weight, with skeletal muscle contributing a large portion of the storage. Energy obtained from metabolism (e.g., oxidation of glucose) is usually stored temporarily within cells in the form of ATP.
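These figures permit a rough estimate of the energy held in the body's carbohydrate stores. A minimal Python sketch using the approximate yields quoted above (16 kJ/g for carbohydrate, 38 kJ/g for lipid) and the standard 4.184 kJ/kcal conversion:

```python
KJ_PER_G_CARB = 16  # approximate oxidation yield per gram of carbohydrate
KJ_PER_G_FAT = 38   # approximate oxidation yield per gram of lipid
KJ_PER_KCAL = 4.184

for grams in (300, 500):  # range of whole-body carbohydrate stores quoted above
    kj = grams * KJ_PER_G_CARB
    print(f"{grams} g stored carbohydrate ~ {kj} kJ ({kj / KJ_PER_KCAL:.0f} kcal)")
# 300 g -> 4800 kJ (~1147 kcal); 500 g -> 8000 kJ (~1912 kcal)

# Gram for gram, lipid stores roughly 2.4 times as much energy:
print(KJ_PER_G_FAT / KJ_PER_G_CARB)  # 2.375
```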
Organisms capable of anaerobic and aerobic respiration metabolize glucose; in aerobic respiration, glucose is oxidized with oxygen to release energy, with carbon dioxide and water as byproducts.
=== Catabolism ===
Catabolism is the metabolic reaction which cells undergo to break down larger molecules, extracting energy. There are two major metabolic pathways of monosaccharide catabolism: glycolysis and the citric acid cycle. In glycolysis, oligo- and polysaccharides are cleaved first to smaller monosaccharides by enzymes called glycoside hydrolases. The monosaccharide units can then enter into monosaccharide catabolism. An investment of 2 ATP is required in the early steps of glycolysis to phosphorylate glucose to glucose 6-phosphate (G6P) and fructose 6-phosphate (F6P) to fructose 1,6-bisphosphate (FBP), thereby pushing the reaction forward irreversibly. In some cases, as with humans, not all carbohydrate types are usable, as the necessary digestive and metabolic enzymes are not present.
== Carbohydrate chemistry ==
Carbohydrate chemistry is a large and economically important branch of organic chemistry. Some of the main organic reactions that involve carbohydrates are:
Amadori rearrangement
Carbohydrate acetalisation
Carbohydrate digestion
Cyanohydrin reaction
Koenigs–Knorr reaction
Lobry de Bruyn–Van Ekenstein transformation
Nef reaction
Wohl degradation
Tipson–Cohen reaction
Ferrier rearrangement
Ferrier II reaction
== Chemical synthesis ==
Carbohydrate synthesis is a sub-field of organic chemistry concerned specifically with the generation of natural and unnatural carbohydrate structures. This can include the synthesis of monosaccharide residues or structures containing more than one monosaccharide, known as oligosaccharides. Selective formation of glycosidic linkages and selective reactions of hydroxyl groups are very important, and the usage of protecting groups is extensive. Common reactions for glycosidic bond formation are as follows:
Chemical glycosylation
Fischer glycosidation
Koenigs–Knorr reaction
Crich beta-mannosylation
Some common protection methods are:
Carbohydrate acetalisation
Trimethylsilyl
Benzyl ether
p-Methoxybenzyl ether
== See also ==
Bioplastic
Carbohydrate NMR
Gluconeogenesis – a process by which glucose can be synthesized from non-carbohydrate sources
Glycobiology
Glycogen
Glycoinformatics
Glycolipid
Glycome
Glycomics
Glycosyl
Macromolecule
Saccharic acid
== References ==
== Further reading ==
"Composition of Foods Raw, Processed, Prepared" (PDF). United States Department of Agriculture. September 2015. Archived (PDF) from the original on October 31, 2016. Retrieved October 30, 2016.
== External links ==
Carbohydrates, including interactive models and animations (Requires MDL Chime)
IUPAC-IUBMB Joint Commission on Biochemical Nomenclature (JCBN): Carbohydrate Nomenclature
Carbohydrates detailed
Carbohydrates and Glycosylation – The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Functional Glycomics Gateway, a collaboration between the Consortium for Functional Glycomics and Nature Publishing Group
Wikipedia/Carbohydrates
Macromolecular Chemistry and Physics is a biweekly peer-reviewed scientific journal covering polymer science. It publishes full papers, talents, trends, and highlights in all areas of polymer science, from chemistry to physical chemistry, physics, and materials science. == History == Macromolecular Chemistry and Physics was established in 1947 as Die Makromolekulare Chemie/Macromolecular Chemistry by Hermann Staudinger and obtained its current title in 1994. According to the Journal Citation Reports, the journal has a 2021 impact factor of 2.996. == See also == Macromolecular Rapid Communications, 1979 Macromolecular Theory and Simulations, 1992 Macromolecular Materials and Engineering, 2000 Macromolecular Bioscience, 2001 Macromolecular Reaction Engineering, 2007 == References == == External links == Official website
Wikipedia/Macromolecular_Chemistry_and_Physics
DNA repair is a collection of processes by which a cell identifies and corrects damage to the DNA molecules that encode its genome. A weakened capacity for DNA repair is a risk factor for the development of cancer. DNA is constantly modified in cells, by internal metabolic by-products and by external agents such as ionizing radiation, ultraviolet light, and medicines, resulting in spontaneous DNA damage involving tens of thousands of individual molecular lesions per cell per day. DNA modifications can also be programmed. Molecular lesions can cause structural damage to the DNA molecule, and can alter or eliminate the cell's capacity for transcription and gene expression. Other lesions may induce potentially harmful mutations in the cell's genome, which affect the survival of its daughter cells following mitosis. Consequently, DNA repair as part of the DNA damage response (DDR) is constantly active. When normal repair processes fail and apoptosis does not occur, irreparable DNA damage may persist; this is a risk factor for cancer. The rate of DNA repair within a cell depends on various factors, including the cell type, the age of the cell, and the extracellular environment. A cell that has accumulated a large amount of DNA damage, or one that can no longer effectively repair its DNA, may enter one of three possible states:
an irreversible state of dormancy, known as senescence
apoptosis, a form of programmed cell death
unregulated division, which can lead to the formation of a cancerous tumor
The DNA repair ability of a cell is vital to the integrity of its genome and thus to the normal functionality of that organism. Many genes that were initially shown to influence life span have turned out to be involved in DNA damage repair and protection. The 2015 Nobel Prize in Chemistry was awarded to Tomas Lindahl, Paul Modrich, and Aziz Sancar for their work on the molecular mechanisms of DNA repair processes.
== DNA damage ==
DNA damage, due to environmental factors and normal metabolic processes inside the cell, occurs at a rate of 10,000 to 1,000,000 molecular lesions per cell per day. While this constitutes at most only 0.03% of the human genome's approximately 3.2 billion bases, unrepaired lesions in critical genes (such as tumor suppressor genes) can impede a cell's ability to carry out its function, appreciably increase the likelihood of tumor formation, and contribute to tumor heterogeneity. The vast majority of DNA damage affects the primary structure of the double helix; that is, the bases themselves are chemically modified. These modifications can in turn disrupt the molecule's regular helical structure by introducing non-native chemical bonds or bulky adducts that do not fit in the standard double helix. Unlike proteins and RNA, DNA usually lacks tertiary structure and therefore damage or disturbance does not occur at that level. DNA is, however, supercoiled and wound around "packaging" proteins called histones (in eukaryotes), and both superstructures are vulnerable to the effects of DNA damage.
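As a quick arithmetic check on the figures above, against a genome of roughly 3.2 billion base pairs:

```python
GENOME_BASES = 3.2e9  # approximate size of the human genome

for lesions_per_day in (1e4, 1e6):
    fraction = lesions_per_day / GENOME_BASES
    print(f"{lesions_per_day:.0e} lesions/day = {fraction:.5%} of the genome")
# 1e+04 lesions/day = 0.00031% of the genome
# 1e+06 lesions/day = 0.03125% of the genome  (the 'at most ~0.03%' quoted above)
```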
=== Sources ===
DNA damage can be subdivided into two main types:
endogenous damage, such as attack by reactive oxygen species produced from normal metabolic byproducts (spontaneous mutation), especially the process of oxidative deamination; this category also includes replication errors
exogenous damage, caused by external agents such as: ultraviolet (UV) radiation (200–400 nm) from the sun or other artificial light sources; other radiation frequencies, including X-rays and gamma rays, and particles such as electrons, neutrons, or alpha particles; hydrolysis or thermal disruption; certain plant toxins; human-made mutagenic chemicals, especially aromatic compounds that act as DNA intercalating agents; and viruses
The replication of damaged DNA before cell division can lead to the incorporation of wrong bases opposite damaged ones. Daughter cells that inherit these wrong bases carry mutations from which the original DNA sequence is unrecoverable (except in the rare case of a back mutation, for example, through gene conversion).
=== Types ===
There are several types of damage to DNA due to endogenous cellular processes:
oxidation of bases [e.g. 8-oxo-7,8-dihydroguanine (8-oxoG)] and generation of DNA strand interruptions from reactive oxygen species
alkylation of bases (usually methylation), such as formation of 7-methylguanosine, 1-methyladenine, or O6-methylguanine
hydrolysis of bases, such as deamination, depurination, and depyrimidination
"bulky adduct formation" (e.g., benzo[a]pyrene diol epoxide-dG adduct, aristolactam I-dA adduct)
mismatch of bases, due to errors in DNA replication, in which the wrong DNA base is stitched into place in a newly forming DNA strand, or a DNA base is skipped over or mistakenly inserted
monoadduct damage, caused by a change in a single nitrogenous base of DNA
diadduct damage
Damage caused by exogenous agents comes in many forms. Some examples are:
Absorption of UV light directly by DNA induces photochemical reactions, leading to the formation of pyrimidine dimers, and photoionization, provoking oxidative damage. UV-A light creates mostly free radicals; the damage caused by free radicals is called indirect DNA damage.
Ionizing radiation, such as that created by radioactive decay or in cosmic rays, causes breaks in DNA strands. Intermediate-level ionizing radiation may induce irreparable DNA damage (leading to the replicational and transcriptional errors needed for neoplasia, or possibly triggering viral interactions), leading to premature aging and cancer.
Thermal disruption at elevated temperature increases the rate of depurination (loss of purine bases from the DNA backbone) and of single-strand breaks. For example, hydrolytic depurination is seen in thermophilic bacteria, which grow in hot springs at 40–80 °C. The rate of depurination (300 purine residues per genome per generation) is too high in these species to be repaired by the normal repair machinery, hence the possibility of an adaptive response cannot be ruled out.
Industrial chemicals such as vinyl chloride and hydrogen peroxide, and environmental chemicals such as the polycyclic aromatic hydrocarbons found in smoke, soot and tar, create a huge diversity of DNA adducts (ethanoates, oxidized bases, alkylated phosphodiesters and DNA crosslinks, to name a few).
UV damage, alkylation/methylation, X-ray damage and oxidative damage are examples of induced damage. Spontaneous damage can include the loss of a base, deamination, sugar ring puckering and tautomeric shift.
Constitutive (spontaneous) DNA damage caused by endogenous oxidants can be detected as a low level of histone H2AX phosphorylation in untreated cells.
=== Nuclear versus mitochondrial ===
In eukaryotic cells, DNA is found in two cellular locations – inside the nucleus and inside the mitochondria. Nuclear DNA (nDNA) exists as chromatin during non-replicative stages of the cell cycle and is condensed into aggregate structures known as chromosomes during cell division. In either state, the DNA is highly compacted and wound around bead-like proteins called histones. Whenever a cell needs to express the genetic information encoded in its nDNA, the required chromosomal region is unraveled, genes located therein are expressed, and then the region is condensed back to its resting conformation. Mitochondrial DNA (mtDNA) is located inside mitochondria, exists in multiple copies, and is also tightly associated with a number of proteins to form a complex known as the nucleoid. Inside mitochondria, reactive oxygen species (ROS), or free radicals, byproducts of the constant production of adenosine triphosphate (ATP) via oxidative phosphorylation, create a highly oxidative environment that is known to damage mtDNA. A critical enzyme in counteracting the toxicity of these species is superoxide dismutase, which is present in both the mitochondria and cytoplasm of eukaryotic cells.
=== Senescence and apoptosis ===
Senescence, an irreversible process in which the cell no longer divides, is a protective response to the shortening of the chromosome ends, called telomeres. The telomeres are long regions of repetitive noncoding DNA that cap chromosomes and undergo partial degradation each time a cell undergoes division (see Hayflick limit). In contrast, quiescence is a reversible state of cellular dormancy that is unrelated to genome damage (see cell cycle). Senescence may serve as a functional alternative to apoptosis in cases where the organism requires the physical presence of a cell for spatial reasons; both serve as "last resort" mechanisms to prevent a cell with damaged DNA from replicating inappropriately in the absence of pro-growth cellular signaling. Unregulated cell division can lead to the formation of a tumor (see cancer), which is potentially lethal to an organism. Therefore, the induction of senescence and apoptosis is considered to be part of a strategy of protection against cancer.
=== Mutation ===
It is important to distinguish between DNA damage and mutation, the two major types of error in DNA. DNA damage and mutation are fundamentally different. Damage results in physical abnormalities in the DNA, such as single- and double-strand breaks, 8-hydroxydeoxyguanosine residues, and polycyclic aromatic hydrocarbon adducts. DNA damage can be recognized by enzymes, and thus can be correctly repaired if redundant information, such as the undamaged sequence in the complementary DNA strand or in a homologous chromosome, is available for copying. If a cell retains DNA damage, transcription of a gene can be prevented, and thus translation into a protein will also be blocked. Replication may also be blocked or the cell may die. In contrast to DNA damage, a mutation is a change in the base sequence of the DNA. A mutation cannot be recognized by enzymes once the base change is present in both DNA strands, and thus a mutation cannot be repaired. At the cellular level, mutations can cause alterations in protein function and regulation.
Mutations are replicated when the cell replicates. In a population of cells, mutant cells will increase or decrease in frequency according to the effects of the mutation on the ability of the cell to survive and reproduce. Although distinctly different from each other, DNA damage and mutation are related because DNA damage often causes errors of DNA synthesis during replication or repair; these errors are a major source of mutation. Given these properties of DNA damage and mutation, it can be seen that DNA damage is a special problem in non-dividing or slowly-dividing cells, where unrepaired damage will tend to accumulate over time. On the other hand, in rapidly dividing cells, unrepaired DNA damage that does not kill the cell by blocking replication will tend to cause replication errors and thus mutation. The great majority of mutations that are not neutral in their effect are deleterious to a cell's survival. Thus, in a population of cells composing a tissue with replicating cells, mutant cells will tend to be lost. However, infrequent mutations that provide a survival advantage will tend to clonally expand at the expense of neighboring cells in the tissue. This advantage to the cell is disadvantageous to the whole organism because such mutant cells can give rise to cancer. Thus, DNA damage in frequently dividing cells, because it gives rise to mutations, is a prominent cause of cancer. In contrast, DNA damage in infrequently-dividing cells is likely a prominent cause of aging.
== Mechanisms ==
Cells cannot function if DNA damage corrupts the integrity and accessibility of essential information in the genome (but cells remain superficially functional when non-essential genes are missing or damaged). Depending on the type of damage inflicted on the DNA's double helical structure, a variety of repair strategies have evolved to restore lost information. If possible, cells use the unmodified complementary strand of the DNA or the sister chromatid as a template to recover the original information. Without access to a template, cells use an error-prone recovery mechanism known as translesion synthesis as a last resort. Damage to DNA alters the spatial configuration of the helix, and such alterations can be detected by the cell. Once damage is localized, specific DNA repair molecules bind at or near the site of damage, inducing other molecules to bind and form a complex that enables the actual repair to take place.
=== Direct reversal ===
Cells are known to eliminate three types of damage to their DNA by chemically reversing it. These mechanisms do not require a template, since the types of damage they counteract can occur in only one of the four bases. Such direct reversal mechanisms are specific to the type of damage incurred and do not involve breakage of the phosphodiester backbone. The formation of pyrimidine dimers upon irradiation with UV light results in an abnormal covalent bond between adjacent pyrimidine bases. The photoreactivation process directly reverses this damage by the action of the enzyme photolyase, whose activation is obligately dependent on energy absorbed from blue/UV light (300–500 nm wavelength) to promote catalysis. Photolyase, an ancient enzyme present in bacteria, fungi, and most animals, no longer functions in humans, who instead use nucleotide excision repair to repair damage from UV irradiation.
Another type of damage, methylation of guanine bases, is directly reversed by the enzyme methyl guanine methyl transferase (MGMT), the bacterial equivalent of which is called ogt. This is an expensive process because each MGMT molecule can be used only once; that is, the reaction is stoichiometric rather than catalytic. A generalized response to methylating agents in bacteria is known as the adaptive response and confers a level of resistance to alkylating agents upon sustained exposure by upregulation of alkylation repair enzymes. The third type of DNA damage reversed by cells is certain methylation of the bases cytosine and adenine.
=== Single-strand damage ===
When only one of the two strands of a double helix has a defect, the other strand can be used as a template to guide the correction of the damaged strand. In order to repair damage to one of the two paired molecules of DNA, there exist a number of excision repair mechanisms that remove the damaged nucleotide and replace it with an undamaged nucleotide complementary to that found in the undamaged DNA strand.
Base excision repair (BER): damaged single bases or nucleotides are most commonly repaired by removing the base or the nucleotide involved and then inserting the correct base or nucleotide. In base excision repair, a glycosylase enzyme removes the damaged base from the DNA by cleaving the bond between the base and the deoxyribose. These enzymes remove a single base to create an apurinic or apyrimidinic site (AP site). Enzymes called AP endonucleases nick the damaged DNA backbone at the AP site. DNA polymerase then removes the damaged region using its 5' to 3' exonuclease activity and correctly synthesizes the new strand using the complementary strand as a template. The gap is then sealed by the enzyme DNA ligase.
Nucleotide excision repair (NER): bulky, helix-distorting damage, such as pyrimidine dimerization caused by UV light, is usually repaired by a three-step process. First the damage is recognized; then a 12–24 nucleotide-long segment of the damaged strand is excised by endonucleases that cut on either side of the lesion, upstream and downstream of the damage site; and the removed DNA region is then resynthesized. NER is a highly evolutionarily conserved repair mechanism and is used in nearly all eukaryotic and prokaryotic cells. In prokaryotes, NER is mediated by Uvr proteins. In eukaryotes, many more proteins are involved, although the general strategy is the same.
Mismatch repair systems are present in essentially all cells to correct errors that are not corrected by proofreading. These systems consist of at least two proteins. One detects the mismatch, and the other recruits an endonuclease that cleaves the newly synthesized DNA strand close to the region of damage. In E. coli, the proteins involved are the Mut class proteins: MutS, MutL, and MutH. In most eukaryotes, the analog for MutS is MSH and the analog for MutL is MLH. MutH is only present in bacteria. This is followed by removal of the damaged region by an exonuclease, resynthesis by DNA polymerase, and nick sealing by DNA ligase.
=== Double-strand breaks ===
Double-strand breaks, in which both strands in the double helix are severed, are particularly hazardous to the cell because they can lead to genome rearrangements.
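The template-guided logic shared by these single-strand repair pathways can be caricatured in a few lines before turning to double-strand breaks. A toy Python sketch; it ignores strand polarity and all of the enzymatic steps above (glycosylase, endonuclease, polymerase, ligase), and shows only how the undamaged complementary strand dictates the restored base:

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def excision_repair(damaged: str, template: str, lesion_index: int) -> str:
    """Replace the damaged base with the Watson-Crick complement of the
    base at the aligned position on the undamaged template strand."""
    correct_base = COMPLEMENT[template[lesion_index]]
    return damaged[:lesion_index] + correct_base + damaged[lesion_index + 1:]

# 'X' marks a damaged base at position 3; the aligned complementary strand
# is intact, so the original sequence is fully recoverable.
damaged_strand  = "ATGXGCA"
template_strand = "TACGCGT"
print(excision_repair(damaged_strand, template_strand, 3))  # ATGCGCA
```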
When a double-strand break is accompanied by a cross-linkage joining the two strands at the same point, neither strand can be used as a template for the repair mechanisms, so that the cell will not be able to complete mitosis when it next divides, and will either die or, in rare cases, undergo a mutation. Three mechanisms exist to repair double-strand breaks (DSBs): non-homologous end joining (NHEJ), microhomology-mediated end joining (MMEJ), and homologous recombination (HR):
In NHEJ, DNA ligase IV, a specialized DNA ligase that forms a complex with the cofactor XRCC4, directly joins the two ends. To guide accurate repair, NHEJ relies on short homologous sequences called microhomologies present on the single-stranded tails of the DNA ends to be joined. If these overhangs are compatible, repair is usually accurate. NHEJ can also introduce mutations during repair. Loss of damaged nucleotides at the break site can lead to deletions, and joining of nonmatching termini forms insertions or translocations. NHEJ is especially important before the cell has replicated its DNA, since there is no template available for repair by homologous recombination. There are "backup" NHEJ pathways in higher eukaryotes. Besides its role as a genome caretaker, NHEJ is required for joining hairpin-capped double-strand breaks induced during V(D)J recombination, the process that generates diversity in B-cell and T-cell receptors in the vertebrate immune system.
MMEJ starts with short-range end resection by the MRE11 nuclease on either side of a double-strand break to reveal microhomology regions. In further steps, poly (ADP-ribose) polymerase 1 (PARP1) is required and may act in an early step of MMEJ. There is pairing of microhomology regions followed by recruitment of flap structure-specific endonuclease 1 (FEN1) to remove overhanging flaps. This is followed by recruitment of XRCC1–LIG3 to the site for ligating the DNA ends, leading to an intact DNA. MMEJ is always accompanied by a deletion, so that MMEJ is a mutagenic pathway for DNA repair (a sequence-level sketch of this joining step follows below).
HR requires the presence of an identical or nearly identical sequence to be used as a template for repair of the break. The enzymatic machinery responsible for this repair process is nearly identical to the machinery responsible for chromosomal crossover during meiosis. This pathway allows a damaged chromosome to be repaired using a sister chromatid (available in G2 after DNA replication) or a homologous chromosome as a template. DSBs caused by the replication machinery attempting to synthesize across a single-strand break or unrepaired lesion cause collapse of the replication fork and are typically repaired by recombination.
In an in vitro system, MMEJ occurred in mammalian cells at levels of 10–20% of HR when both HR and NHEJ mechanisms were also available. The extremophile Deinococcus radiodurans has a remarkable ability to survive DNA damage from ionizing radiation and other sources. At least two copies of the genome, with random DNA breaks, can form DNA fragments through annealing. Partially overlapping fragments are then used for synthesis of homologous regions through a moving D-loop that can continue extension until complementary partner strands are found. In the final step, there is crossover by means of RecA-dependent homologous recombination. Topoisomerases introduce both single- and double-strand breaks in the course of changing the DNA's state of supercoiling, which is especially common in regions near an open replication fork.
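The microhomology-guided joining at the heart of MMEJ can be sketched at the sequence level. This toy Python model skips resection, flap trimming and ligation (the MRE11/FEN1/XRCC1–LIG3 steps above) and reproduces only the sequence outcome: the two ends anneal at a shared microhomology, and one copy of the microhomology plus everything between the two copies is lost, which is why MMEJ always leaves a deletion:

```python
def mmej_join(left: str, right: str, min_hom: int = 5, max_hom: int = 25) -> str:
    """Anneal two break ends at their longest shared microhomology
    (a suffix of the left end that recurs in the right end)."""
    for k in range(max_hom, min_hom - 1, -1):
        probe = left[-k:]              # candidate microhomology on the left end
        pos = right.find(probe)
        if pos != -1:
            # Keep one copy of the microhomology; bases between the two
            # copies are deleted, yielding the characteristic MMEJ scar.
            return left + right[pos + k:]
    raise ValueError("no microhomology of sufficient length found")

# Two ends sharing the 6-nt microhomology GATTCA
left_end  = "ATGGCGTACGATTCA"
right_end = "TTGGATTCACCGTAA"
print(mmej_join(left_end, right_end))  # ATGGCGTACGATTCACCGTAA (with deletion)
```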
Breaks introduced by topoisomerases are not considered DNA damage because they are a natural intermediate in the topoisomerase biochemical mechanism and are immediately repaired by the enzymes that created them. Another type of DNA double-strand break originates from DNA heat-sensitive or heat-labile sites. These DNA sites are not initial DSBs; however, they convert to DSBs after treatment at elevated temperature. Ionizing irradiation can induce a highly complex form of DNA damage known as clustered damage, which consists of different types of DNA lesions at various locations in the DNA helix. Some of these closely spaced lesions can probably convert to DSBs on exposure to high temperatures, but the exact nature of these lesions and their interactions is not yet known.
=== Translesion synthesis ===
Translesion synthesis (TLS) is a DNA damage tolerance process that allows the DNA replication machinery to replicate past DNA lesions such as thymine dimers or AP sites. It involves switching out regular DNA polymerases for specialized translesion polymerases (i.e. DNA polymerase IV or V, from the Y polymerase family), often with larger active sites that can facilitate the insertion of bases opposite damaged nucleotides. The polymerase switching is thought to be mediated by, among other factors, the post-translational modification of the replication processivity factor PCNA. Translesion synthesis polymerases often have low fidelity (high propensity to insert wrong bases) on undamaged templates relative to regular polymerases. However, many are extremely efficient at inserting correct bases opposite specific types of damage. For example, Pol η mediates error-free bypass of lesions induced by UV irradiation, whereas Pol ι introduces mutations at these sites. Pol η is known to add the first adenine across the T^T photodimer using Watson-Crick base pairing, and the second adenine is added in its syn conformation using Hoogsteen base pairing. From a cellular perspective, risking the introduction of point mutations during translesion synthesis may be preferable to resorting to more drastic mechanisms of DNA repair, which may cause gross chromosomal aberrations or cell death. In short, the process involves specialized polymerases either bypassing or repairing lesions at locations of stalled DNA replication. For example, human DNA polymerase eta can bypass complex DNA lesions like the guanine-thymine intra-strand crosslink G[8,5-Me]T, although it can cause targeted and semi-targeted mutations. Paromita Raychaudhury and Ashis Basu studied the toxicity and mutagenesis of the same lesion in Escherichia coli by replicating a G[8,5-Me]T-modified plasmid in E. coli with specific DNA polymerase knockouts. Viability was very low in a strain lacking pol II, pol IV, and pol V, the three SOS-inducible DNA polymerases, indicating that translesion synthesis is conducted primarily by these specialized DNA polymerases. A bypass platform is provided to these polymerases by proliferating cell nuclear antigen (PCNA). Under normal circumstances, PCNA bound to polymerases replicates the DNA. At a site of lesion, PCNA is ubiquitinated, or modified, by the RAD6/RAD18 proteins to provide a platform for the specialized polymerases to bypass the lesion and resume DNA replication. After translesion synthesis, extension is required. This extension can be carried out by a replicative polymerase if the TLS is error-free, as in the case of Pol η; yet if TLS results in a mismatch, a specialized polymerase is needed to extend it: Pol ζ.
Pol ζ is unique in that it can extend terminal mismatches, whereas more processive polymerases cannot. So when a lesion is encountered, the replication fork will stall, PCNA will switch from a processive polymerase to a TLS polymerase such as Pol ι to fix the lesion, then PCNA may switch to Pol ζ to extend the mismatch, and finally PCNA will switch to the processive polymerase to continue replication.
== Global response to DNA damage ==
Cells exposed to ionizing radiation, ultraviolet light or chemicals are prone to acquire multiple sites of bulky DNA lesions and double-strand breaks. Moreover, DNA damaging agents can damage other biomolecules such as proteins, carbohydrates, lipids, and RNA. The accumulation of damage, specifically double-strand breaks or adducts that stall replication forks, is among the known stimulation signals for a global response to DNA damage. The global response to damage is an act directed toward the cells' own preservation and triggers multiple pathways of macromolecular repair, lesion bypass, tolerance, or apoptosis. The common features of the global response are induction of multiple genes, cell cycle arrest, and inhibition of cell division.
=== Initial steps ===
The packaging of eukaryotic DNA into chromatin presents a barrier to all DNA-based processes that require recruitment of enzymes to their sites of action. To allow DNA repair, the chromatin must be remodeled. In eukaryotes, ATP-dependent chromatin remodeling complexes and histone-modifying enzymes are two predominant factors employed to accomplish this remodeling process. Chromatin relaxation occurs rapidly at the site of DNA damage. In one of the earliest steps, the stress-activated protein kinase c-Jun N-terminal kinase (JNK) phosphorylates SIRT6 on serine 10 in response to double-strand breaks or other DNA damage. This post-translational modification facilitates the mobilization of SIRT6 to DNA damage sites, and is required for efficient recruitment of poly (ADP-ribose) polymerase 1 (PARP1) to DNA break sites and for efficient repair of DSBs. PARP1 protein starts to appear at DNA damage sites in less than a second, with half-maximum accumulation within 1.6 seconds after the damage occurs. PARP1 synthesizes polymeric adenosine diphosphate ribose (poly (ADP-ribose), or PAR) chains on itself. Next the chromatin remodeler ALC1 quickly attaches to the product of PARP1 action, a poly-ADP ribose chain, and completes arrival at the DNA damage within 10 seconds of the occurrence of the damage. About half of the maximum chromatin relaxation, presumably due to action of ALC1, occurs by 10 seconds. This then allows recruitment of the DNA repair enzyme MRE11, to initiate DNA repair, within 13 seconds. γH2AX, the phosphorylated form of H2AX, is also involved in the early steps leading to chromatin decondensation after DNA double-strand breaks. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. γH2AX (H2AX phosphorylated on serine 139) can be detected as soon as 20 seconds after irradiation of cells (with DNA double-strand break formation), and half-maximum accumulation of γH2AX occurs in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break. γH2AX does not, itself, cause chromatin decondensation, but within 30 seconds of irradiation, RNF8 protein can be detected in association with γH2AX.
RNF8 mediates extensive chromatin decondensation, through its subsequent interaction with CHD4, a component of the nucleosome remodeling and deacetylase complex NuRD. DDB2 occurs in a heterodimeric complex with DDB1. This complex further complexes with the ubiquitin ligase protein CUL4A and with PARP1. This larger complex rapidly associates with UV-induced damage within chromatin, with half-maximum association completed in 40 seconds. The PARP1 protein, attached to both DDB1 and DDB2, then PARylates (creates a poly-ADP ribose chain on) DDB2, which attracts the DNA remodeling protein ALC1. Action of ALC1 relaxes the chromatin at the site of UV damage to DNA. This relaxation allows other proteins in the nucleotide excision repair pathway to enter the chromatin and repair UV-induced cyclobutane pyrimidine dimer damages. After rapid chromatin remodeling, cell cycle checkpoints are activated to allow DNA repair to occur before the cell cycle progresses. First, two kinases, ATM and ATR, are activated within 5 or 6 minutes after DNA is damaged. This is followed by phosphorylation of the cell cycle checkpoint protein Chk1, initiating its function, about 10 minutes after DNA is damaged.
=== DNA damage response ===
In the DNA damage response (DDR), cell cycle checkpoints are activated. Checkpoint activation pauses the cell cycle and gives the cell time to repair the damage before continuing to divide. DNA damage checkpoints occur at the G1/S and G2/M boundaries. An intra-S checkpoint also exists. Checkpoint activation is controlled by two master kinases, ATM and ATR. ATM responds to DNA double-strand breaks and disruptions in chromatin structure, whereas ATR primarily responds to stalled replication forks. These kinases phosphorylate downstream targets in a signal transduction cascade, eventually leading to cell cycle arrest. A class of checkpoint mediator proteins including BRCA1, MDC1, and 53BP1 has also been identified. These proteins seem to be required for transmitting the checkpoint activation signal to downstream proteins. The DNA damage checkpoint is a signal transduction pathway that blocks cell cycle progression in G1, G2 and metaphase and slows down the rate of S phase progression when DNA is damaged. It leads to a pause in the cell cycle, allowing the cell time to repair the damage before continuing to divide. Checkpoint proteins can be separated into four groups: phosphatidylinositol 3-kinase (PI3K)-like protein kinases, the proliferating cell nuclear antigen (PCNA)-like group, two serine/threonine (S/T) kinases, and their adaptors. Central to all DNA damage-induced checkpoint responses is a pair of large protein kinases belonging to the first group, the PI3K-like protein kinases: the ATM (ataxia telangiectasia mutated) and ATR (ataxia telangiectasia and Rad3-related) kinases, whose sequence and functions have been well conserved in evolution. The entire DNA damage response requires either ATM or ATR because they have the ability to bind to the chromosomes at the site of DNA damage, together with accessory proteins that are platforms on which DNA damage response components and DNA repair complexes can be assembled. An important downstream target of ATM and ATR is p53, as it is required for inducing apoptosis following DNA damage. The cyclin-dependent kinase inhibitor p21 is induced by both p53-dependent and p53-independent mechanisms and can arrest the cell cycle at the G1/S and G2/M checkpoints by deactivating cyclin/cyclin-dependent kinase complexes.
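The recruitment figures quoted above are reported as times to half-maximal accumulation. Assuming, purely for illustration, that accumulation follows a saturating exponential (an assumption of this sketch, not a claim of the underlying studies), the quoted half-times translate into accumulation curves like these:

```python
def fraction_recruited(t_seconds: float, t_half_seconds: float) -> float:
    """Illustrative saturating exponential: f(t) = 1 - 2**(-t / t_half),
    so that f(t_half) = 0.5, matching a 'time to half-maximum'."""
    return 1.0 - 2.0 ** (-t_seconds / t_half_seconds)

# Half-maximum times quoted in the text: PARP1 1.6 s, DDB2 complex 40 s,
# gamma-H2AX about 60 s.
half_times = {"PARP1": 1.6, "DDB2 complex": 40.0, "gamma-H2AX": 60.0}
for factor, t_half in half_times.items():
    curve = [round(fraction_recruited(t, t_half), 2) for t in (10, 60, 300)]
    print(factor, "at 10/60/300 s:", curve)
# PARP1 is essentially saturated within seconds, while the DDB2 complex
# and gamma-H2AX approach their plateaus over tens of seconds to minutes.
```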
=== The prokaryotic SOS response ===
The SOS response comprises the changes in gene expression in Escherichia coli and other bacteria in response to extensive DNA damage. The prokaryotic SOS system is regulated by two key proteins: LexA and RecA. The LexA homodimer is a transcriptional repressor that binds to operator sequences commonly referred to as SOS boxes. In Escherichia coli it is known that LexA regulates transcription of approximately 48 genes, including the lexA and recA genes. The SOS response is known to be widespread in the Bacteria domain, but it is mostly absent in some bacterial phyla, like the Spirochetes. The most common cellular signals activating the SOS response are regions of single-stranded DNA (ssDNA), arising from stalled replication forks or double-strand breaks, which are processed by DNA helicase to separate the two DNA strands. In the initiation step, RecA protein binds to ssDNA in an ATP hydrolysis driven reaction, creating RecA–ssDNA filaments. RecA–ssDNA filaments activate the autoprotease activity of LexA, which ultimately leads to cleavage of the LexA dimer and subsequent LexA degradation. The loss of the LexA repressor induces transcription of the SOS genes and allows for further signal induction, inhibition of cell division and an increase in levels of proteins responsible for damage processing (a toy model of this regulatory switch appears below). In Escherichia coli, SOS boxes are 20-nucleotide-long sequences near promoters with palindromic structure and a high degree of sequence conservation. In other classes and phyla, the sequence of SOS boxes varies considerably, with different length and composition, but it is always highly conserved and one of the strongest short signals in the genome. The high information content of SOS boxes permits differential binding of LexA to different promoters and allows for timing of the SOS response. The lesion repair genes are induced at the beginning of the SOS response. The error-prone translesion polymerases, for example UmuCD'2 (also called DNA polymerase V), are induced later on as a last resort. Once the DNA damage is repaired or bypassed using polymerases or through recombination, the amount of single-stranded DNA in cells is decreased, and the lowered amount of RecA filaments decreases the cleavage activity of the LexA homodimer, which then binds to the SOS boxes near promoters and restores normal gene expression.
=== Eukaryotic transcriptional responses to DNA damage ===
Eukaryotic cells exposed to DNA damaging agents also activate important defensive pathways by inducing multiple proteins involved in DNA repair, cell cycle checkpoint control, protein trafficking and degradation. Such a genome-wide transcriptional response is very complex and tightly regulated, thus allowing a coordinated global response to damage. Exposure of the yeast Saccharomyces cerevisiae to DNA damaging agents results in overlapping but distinct transcriptional profiles. Similarities to the environmental shock response indicate that a general global stress response pathway exists at the level of transcriptional activation. In contrast, different human cell types respond to damage differently, indicating an absence of a common global response. The probable explanation for this difference between yeast and human cells may be in the heterogeneity of mammalian cells. In an animal, different types of cells are distributed among different organs that have evolved different sensitivities to DNA damage.
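At its coarsest, the SOS circuit described above reduces to a derepression switch. A deliberately crude boolean sketch in Python; the threshold and input levels are arbitrary illustrative numbers, and real SOS induction is graded and time-ordered rather than binary:

```python
def sos_state(ssdna_level: float, threshold: float = 1.0) -> dict:
    """Toy on/off abstraction of the LexA/RecA SOS circuit."""
    reca_filaments = ssdna_level > threshold   # ssDNA nucleates RecA filaments
    lexa_intact = not reca_filaments           # filaments trigger LexA self-cleavage
    return {
        "RecA filaments": reca_filaments,
        "LexA bound to SOS boxes": lexa_intact,
        "SOS genes expressed": not lexa_intact,
    }

print(sos_state(0.2))  # undamaged cell: LexA represses, SOS genes off
print(sos_state(5.0))  # stalled forks / resected breaks: SOS genes on
```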
In general, the global response to DNA damage involves expression of multiple genes responsible for postreplication repair, homologous recombination, nucleotide excision repair, the DNA damage checkpoint, global transcriptional activation, genes controlling mRNA decay, and many others. A large amount of damage to a cell leaves it with an important decision: undergo apoptosis and die, or survive at the cost of living with a modified genome. An increase in tolerance to damage can lead to an increased rate of survival that will allow a greater accumulation of mutations. Yeast Rev1 and human polymerase η are members of the Y family of translesion DNA polymerases present during the global response to DNA damage and are responsible for enhanced mutagenesis during a global response to DNA damage in eukaryotes.
== Aging ==
=== Pathological effects of poor DNA repair ===
Experimental animals with genetic deficiencies in DNA repair often show decreased life span and increased cancer incidence. For example, mice deficient in the dominant NHEJ pathway and in telomere maintenance mechanisms get lymphoma and infections more often, and, as a consequence, have shorter lifespans than wild-type mice. In a similar manner, mice deficient in a key repair and transcription protein that unwinds DNA helices have premature onset of aging-related diseases and consequent shortening of lifespan. However, not every DNA repair deficiency creates exactly the predicted effects; mice deficient in the NER pathway exhibited shortened life span without correspondingly higher rates of mutation. The maximum life spans of mice, naked mole-rats and humans are respectively ~3, ~30 and ~129 years. Of these, the shortest-lived species, the mouse, expresses DNA repair genes, including core genes in several DNA repair pathways, at a lower level than do humans and naked mole-rats. Furthermore, several DNA repair pathways in humans and naked mole-rats are up-regulated compared to mouse. These observations suggest that elevated DNA repair facilitates greater longevity. If the rate of DNA damage exceeds the capacity of the cell to repair it, the accumulation of errors can overwhelm the cell and result in early senescence, apoptosis, or cancer. Inherited diseases associated with faulty DNA repair functioning result in premature aging, increased sensitivity to carcinogens and correspondingly increased cancer risk (see below). On the other hand, organisms with enhanced DNA repair systems, such as Deinococcus radiodurans, the most radiation-resistant known organism, exhibit remarkable resistance to the double-strand break-inducing effects of radioactivity, likely due to enhanced efficiency of DNA repair and especially NHEJ.
=== Longevity and caloric restriction ===
A number of individual genes have been identified as influencing variations in life span within a population of organisms. The effects of these genes are strongly dependent on the environment, in particular, on the organism's diet. Caloric restriction reproducibly results in extended lifespan in a variety of organisms, likely via nutrient sensing pathways and decreased metabolic rate. The molecular mechanisms by which such restriction results in lengthened lifespan are as yet unclear; however, the behavior of many genes known to be involved in DNA repair is altered under conditions of caloric restriction.
Several agents reported to have anti-aging properties have been shown to attenuate the constitutive level of mTOR signaling, evidence of a reduction of metabolic activity, and concurrently to reduce the constitutive level of DNA damage induced by endogenously generated reactive oxygen species. For example, increasing the gene dosage of the gene SIR-2, which regulates DNA packaging in the nematode worm Caenorhabditis elegans, can significantly extend lifespan. The mammalian homolog of SIR-2 is known to induce downstream DNA repair factors involved in NHEJ, an activity that is especially promoted under conditions of caloric restriction. Caloric restriction has been closely linked to the rate of base excision repair in the nuclear DNA of rodents, although similar effects have not been observed in mitochondrial DNA. The C. elegans gene AGE-1, an upstream effector of DNA repair pathways, confers dramatically extended life span under free-feeding conditions but leads to a decrease in reproductive fitness under conditions of caloric restriction. This observation supports the pleiotropy theory of the biological origins of aging, which suggests that genes conferring a large survival advantage early in life will be selected for even if they carry a corresponding disadvantage late in life.
== Medicine and DNA repair modulation ==
=== Hereditary DNA repair disorders ===
Defects in the NER mechanism are responsible for several genetic disorders, including:
Xeroderma pigmentosum: hypersensitivity to sunlight/UV, resulting in increased skin cancer incidence and premature aging
Cockayne syndrome: hypersensitivity to UV and chemical agents
Trichothiodystrophy: sensitive skin, brittle hair and nails
Mental retardation often accompanies the latter two disorders, suggesting increased vulnerability of developmental neurons. Other DNA repair disorders include:
Werner's syndrome: premature aging and retarded growth
Bloom's syndrome: sunlight hypersensitivity, high incidence of malignancies (especially leukemias)
Ataxia telangiectasia: sensitivity to ionizing radiation and some chemical agents
All of the above diseases are often called "segmental progerias" ("accelerated aging diseases") because those affected appear elderly and experience aging-related diseases at an abnormally young age, while not manifesting all the symptoms of old age. Other diseases associated with reduced DNA repair function include Fanconi anemia, hereditary breast cancer and hereditary colon cancer.
== Cancer ==
Because of inherent limitations in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer. There are at least 34 inherited human DNA repair gene mutations that increase cancer risk. Many of these mutations cause DNA repair to be less effective than normal. In particular, hereditary nonpolyposis colorectal cancer (HNPCC) is strongly associated with specific mutations in the DNA mismatch repair pathway. BRCA1 and BRCA2, two important genes whose mutations confer a hugely increased risk of breast cancer on carriers, are both associated with a large number of DNA repair pathways, especially NHEJ and homologous recombination. Cancer therapy procedures such as chemotherapy and radiotherapy work by overwhelming the capacity of the cell to repair DNA damage, resulting in cell death. Cells that are most rapidly dividing – most typically cancer cells – are preferentially affected.
The side-effect is that other non-cancerous but rapidly dividing cells, such as progenitor cells in the gut, skin, and hematopoietic system, are also affected. Modern cancer treatments attempt to localize the DNA damage to cells and tissues only associated with cancer, either by physical means (concentrating the therapeutic agent in the region of the tumor) or by biochemical means (exploiting a feature unique to cancer cells in the body). In the context of therapies targeting DNA damage response genes, the latter approach has been termed 'synthetic lethality'. Perhaps the most well-known of these 'synthetic lethality' drugs is the poly(ADP-ribose) polymerase 1 (PARP1) inhibitor olaparib, which was approved by the Food and Drug Administration in 2015 for the treatment of BRCA-defective ovarian cancer in women. Tumor cells with partial loss of the DNA damage response (specifically, homologous recombination repair) are dependent on another mechanism – single-strand break repair – which relies, in part, on the PARP1 gene product. Olaparib is combined with chemotherapeutics to inhibit single-strand break repair induced by DNA damage caused by the co-administered chemotherapy. Tumor cells relying on this residual DNA repair mechanism are unable to repair the damage and hence are not able to survive and proliferate, whereas normal cells can repair the damage with the functioning homologous recombination mechanism (a boolean caricature of this logic follows below). Many other drugs for use against other residual DNA repair mechanisms commonly found in cancer are currently under investigation. However, synthetic lethality therapeutic approaches have been questioned due to emerging evidence of acquired resistance, achieved through rewiring of DNA damage response pathways and reversion of previously inhibited defects.
=== DNA repair defects in cancer ===
Studies have shown that the DNA damage response acts as a barrier to the malignant transformation of preneoplastic cells. Early studies showed an elevated DNA damage response in cell-culture models with oncogene activation and in preneoplastic colon adenomas. DNA damage response mechanisms trigger cell-cycle arrest, and attempt to repair DNA lesions or promote cell death/senescence if repair is not possible. Replication stress is observed in preneoplastic cells due to increased proliferation signals from oncogenic mutations. Replication stress is characterized by: increased replication initiation/origin firing; increased transcription and collisions of transcription-replication complexes; nucleotide deficiency; and an increase in reactive oxygen species (ROS). Replication stress, along with the selection for inactivating mutations in DNA damage response genes in the evolution of the tumor, leads to downregulation and/or loss of some DNA damage response mechanisms, and hence loss of DNA repair and/or senescence/programmed cell death. In experimental mouse models, loss of DNA damage response-mediated cell senescence was observed after using a short hairpin RNA (shRNA) to inhibit the double-strand break response kinase ataxia telangiectasia mutated (ATM), leading to increased tumor size and invasiveness. Humans born with inherited defects in DNA repair mechanisms (for example, Li-Fraumeni syndrome) have a higher cancer risk. The prevalence of DNA damage response mutations differs across cancer types; for example, 30% of breast invasive carcinomas have mutations in genes involved in homologous recombination.
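The synthetic-lethality reasoning above can be caricatured as a two-variable truth table. A toy Python sketch; survival of a real cell is of course not a boolean function of two inputs:

```python
def cell_survives(hr_functional: bool, parp_inhibited: bool) -> bool:
    """A cell tolerates everyday single-strand lesions if it can either
    repair them directly (PARP1-dependent single-strand break repair) or
    rescue the resulting collapsed forks by homologous recombination."""
    ssb_repair_ok = not parp_inhibited
    return ssb_repair_ok or hr_functional

for label, hr_ok in (("normal cell", True), ("BRCA-defective tumor cell", False)):
    outcome = "survives" if cell_survives(hr_ok, parp_inhibited=True) else "dies"
    print(f"{label} + PARP inhibitor -> {outcome}")
# normal cell + PARP inhibitor -> survives
# BRCA-defective tumor cell + PARP inhibitor -> dies
```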
In cancer, downregulation is observed across all DNA damage response mechanisms (base excision repair (BER), nucleotide excision repair (NER), DNA mismatch repair (MMR), homologous recombination repair (HR), non-homologous end joining (NHEJ) and translesion DNA synthesis (TLS)). As well as mutations to DNA damage repair genes, mutations also arise in the genes responsible for arresting the cell cycle to allow sufficient time for DNA repair to occur, and some genes are involved in both DNA damage repair and cell cycle checkpoint control, for example ATM and checkpoint kinase 2 (CHEK2) – a tumor suppressor that is often absent or downregulated in non-small cell lung cancer.
=== Epigenetic DNA repair defects in cancer ===
Classically, cancer has been viewed as a set of diseases driven by progressive genetic abnormalities that include mutations in tumour-suppressor genes and oncogenes, and by chromosomal aberrations. However, it has become apparent that cancer is also driven by epigenetic alterations. Epigenetic alterations refer to functionally relevant modifications to the genome that do not involve a change in the nucleotide sequence. Examples of such modifications are changes in DNA methylation (hypermethylation and hypomethylation) and histone modification, changes in chromosomal architecture (caused by inappropriate expression of proteins such as HMGA2 or HMGA1) and changes caused by microRNAs. Each of these epigenetic alterations serves to regulate gene expression without altering the underlying DNA sequence. These changes usually remain through cell divisions, last for multiple cell generations, and can be considered to be epimutations (equivalent to mutations). While large numbers of epigenetic alterations are found in cancers, the epigenetic alterations in DNA repair genes, causing reduced expression of DNA repair proteins, appear to be particularly important. Such alterations are thought to occur early in progression to cancer and to be a likely cause of the genetic instability characteristic of cancers. Reduced expression of DNA repair genes causes deficient DNA repair. When DNA repair is deficient, DNA damages remain in cells at a higher-than-usual level, and these excess damages cause increased frequencies of mutation or epimutation. Mutation rates increase substantially in cells defective in DNA mismatch repair or in homologous recombinational repair (HRR). Chromosomal rearrangements and aneuploidy also increase in HRR-defective cells. Higher levels of DNA damage not only cause increased mutation, but also cause increased epimutation. During repair of DNA double-strand breaks, or repair of other DNA damages, incompletely cleared sites of repair can cause epigenetic gene silencing. Deficient expression of DNA repair proteins due to an inherited mutation can cause an increased risk of cancer. Individuals with an inherited impairment in any of 34 DNA repair genes (see article DNA repair-deficiency disorder) have an increased risk of cancer, with some defects causing up to a 100% lifetime chance of cancer (e.g. p53 mutations). However, such germline mutations (which cause highly penetrant cancer syndromes) are the cause of only about 1 percent of cancers.
=== Frequencies of epimutations in DNA repair genes ===
Deficiencies in DNA repair enzymes are occasionally caused by a newly arising somatic mutation in a DNA repair gene, but are much more frequently caused by epigenetic alterations that reduce or silence expression of DNA repair genes.
For example, in a series of 113 colorectal cancers examined sequentially, only four had a missense mutation in the DNA repair gene MGMT, while the majority had reduced MGMT expression due to methylation of the MGMT promoter region (an epigenetic alteration). Five different studies found that between 40% and 90% of colorectal cancers have reduced MGMT expression due to methylation of the MGMT promoter region. Similarly, out of 119 cases of mismatch repair-deficient colorectal cancers that lacked DNA repair gene PMS2 expression, PMS2 was deficient in 6 due to mutations in the PMS2 gene, while in 103 cases PMS2 expression was deficient because its pairing partner MLH1 was repressed due to promoter methylation (PMS2 protein is unstable in the absence of MLH1). In the other 10 cases, loss of PMS2 expression was likely due to epigenetic overexpression of the microRNA miR-155, which down-regulates MLH1 (these proportions are tallied in the sketch below). In a further example, epigenetic defects were found in various cancers (e.g. breast, ovarian, colorectal and head and neck). Two or three deficiencies in the expression of ERCC1, XPF or PMS2 occur simultaneously in the majority of the 49 colon cancers evaluated by Facista et al. The chart in this section shows some frequent DNA damaging agents, examples of DNA lesions they cause, and the pathways that deal with these DNA damages. At least 169 enzymes are either directly employed in DNA repair or influence DNA repair processes. Of these, 83 are directly employed in repairing the 5 types of DNA damages illustrated in the chart. Some of the more well-studied genes central to these repair processes are shown in the chart. The gene designations shown in red, gray or cyan indicate genes frequently epigenetically altered in various types of cancers. Wikipedia articles on each of the genes highlighted by red, gray or cyan describe the epigenetic alteration(s) and the cancer(s) in which these epimutations are found. Review articles, and broad experimental survey articles, also document most of these epigenetic DNA repair deficiencies in cancers. Red-highlighted genes are frequently reduced or silenced by epigenetic mechanisms in various cancers. When these genes have low or absent expression, DNA damages can accumulate. Replication errors past these damages (see translesion synthesis) can lead to increased mutations and, ultimately, cancer. Epigenetic repression of DNA repair genes in accurate DNA repair pathways appears to be central to carcinogenesis. The two gray-highlighted genes, RAD51 and BRCA2, are required for homologous recombinational repair. They are sometimes epigenetically over-expressed and sometimes under-expressed in certain cancers. As indicated in the Wikipedia articles on RAD51 and BRCA2, such cancers ordinarily have epigenetic deficiencies in other DNA repair genes. These repair deficiencies would likely cause increased unrepaired DNA damages. The over-expression of RAD51 and BRCA2 seen in these cancers may reflect selective pressures for compensatory RAD51 or BRCA2 over-expression and increased homologous recombinational repair to at least partially deal with such excess DNA damages. In those cases where RAD51 or BRCA2 are under-expressed, this would itself lead to increased unrepaired DNA damages. Replication errors past these damages (see translesion synthesis) could cause increased mutations and cancer, so that under-expression of RAD51 or BRCA2 would be carcinogenic in itself. Cyan-highlighted genes are in the microhomology-mediated end joining (MMEJ) pathway and are up-regulated in cancer.
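As a tally of the PMS2 series quoted above (6 mutational cases, 103 MLH1-promoter-methylation cases and 10 miR-155 cases out of 119):

```python
causes = {
    "PMS2 mutation": 6,
    "MLH1 promoter methylation": 103,
    "miR-155 over-expression": 10,
}
total = sum(causes.values())  # 119 PMS2-deficient colorectal cancers
for cause, n in causes.items():
    print(f"{cause}: {n}/{total} = {n / total:.0%}")
# PMS2 mutation: 6/119 = 5%
# MLH1 promoter methylation: 103/119 = 87%
# miR-155 over-expression: 10/119 = 8%
# i.e., roughly 95% of the expression losses in this series were epigenetic.
```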
MMEJ is an additional error-prone inaccurate repair pathway for double-strand breaks. In MMEJ repair of a double-strand break, an homology of 5–25 complementary base pairs between both paired strands is sufficient to align the strands, but mismatched ends (flaps) are usually present. MMEJ removes the extra nucleotides (flaps) where strands are joined, and then ligates the strands to create an intact DNA double helix. MMEJ almost always involves at least a small deletion, so that it is a mutagenic pathway. FEN1, the flap endonuclease in MMEJ, is epigenetically increased by promoter hypomethylation and is over-expressed in the majority of cancers of the breast, prostate, stomach, neuroblastomas, pancreas, and lung. PARP1 is also over-expressed when its promoter region ETS site is epigenetically hypomethylated, and this contributes to progression to endometrial cancer and BRCA-mutated serous ovarian cancer. Other genes in the MMEJ pathway are also over-expressed in a number of cancers (see MMEJ for summary), and are also shown in cyan. === Genome-wide distribution of DNA repair in human somatic cells === Differential activity of DNA repair pathways across various regions of the human genome causes mutations to be very unevenly distributed within tumor genomes. In particular, the gene-rich, early-replicating regions of the human genome exhibit lower mutation frequencies than the gene-poor, late-replicating heterochromatin. One mechanism underlying this involves the histone modification H3K36me3, which can recruit mismatch repair proteins, thereby lowering mutation rates in H3K36me3-marked regions. Another important mechanism concerns nucleotide excision repair, which can be recruited by the transcription machinery, lowering somatic mutation rates in active genes and other open chromatin regions. == Epigenetic alterations due to DNA repair == Damage to DNA is very common and is constantly being repaired. Epigenetic alterations can accompany DNA repair of oxidative damage or double-strand breaks. In human cells, oxidative DNA damage occurs about 10,000 times a day and DNA double-strand breaks occur about 10 to 50 times a cell cycle in somatic replicating cells (see DNA damage (naturally occurring)). The selective advantage of DNA repair is to allow the cell to survive in the face of DNA damage. The selective advantage of epigenetic alterations that occur with DNA repair is not clear. === Repair of oxidative DNA damage can alter epigenetic markers === In the steady state (with endogenous damages occurring and being repaired), there are about 2,400 oxidatively damaged guanines that form 8-oxo-2'-deoxyguanosine (8-OHdG) in the average mammalian cell DNA. 8-OHdG constitutes about 5% of the oxidative damages commonly present in DNA. The oxidized guanines do not occur randomly among all guanines in DNA. There is a sequence preference for the guanine at a methylated CpG site (a cytosine followed by guanine along its 5' → 3' direction and where the cytosine is methylated (5-mCpG)). A 5-mCpG site has the lowest ionization potential for guanine oxidation. Oxidized guanine has mispairing potential and is mutagenic. Oxoguanine glycosylase (OGG1) is the primary enzyme responsible for the excision of the oxidized guanine during DNA repair. OGG1 finds and binds to an 8-OHdG within a few seconds. However, OGG1 does not immediately excise 8-OHdG. 
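Stepping back briefly to the MMEJ pathway described above: its core steps (finding a short microhomology shared by the two broken ends, trimming the mismatched flaps, and ligating) can be sketched as a toy procedure. The sequences, the fixed 5–25 bp homology window, and the function name below are illustrative assumptions, not a model of the actual enzymology.

```python
def mmej_join(left_end, right_end, min_hom=5, max_hom=25):
    """Toy sketch of microhomology-mediated end joining (MMEJ).

    left_end / right_end stand for the single-stranded ends exposed at a
    double-strand break. The function looks for the longest shared stretch
    (microhomology) of 5-25 bases, trims the non-matching 'flaps', and joins
    the ends - always deleting the sequence between the two homologies.
    """
    for k in range(max_hom, min_hom - 1, -1):      # prefer the longest homology
        for i in range(len(left_end) - k + 1):
            seed = left_end[i:i + k]
            j = right_end.find(seed)
            if j != -1:
                joined = left_end[:i + k] + right_end[j + k:]   # repaired duplex
                deleted = left_end[i + k:] + right_end[:j]      # bases lost in repair
                return joined, deleted
    return None, None                               # no usable microhomology found

# Hypothetical broken ends sharing the 6-base microhomology "GATTCA"
left  = "AACCGGATTCATTG"      # ...GATTCA followed by a 3-base flap "TTG"
right = "CCTGATTCAGGGTTA"     # 3 extra bases "CCT" before GATTCA

product, lost = mmej_join(left, right)
print(product)   # AACCGGATTCAGGGTTA  - rejoined, but with a deletion
print(lost)      # TTGCCT             - bases removed during repair
```

The obligatory deletion returned by the sketch is the reason the text calls MMEJ a mutagenic pathway.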
In HeLa cells half maximum removal of 8-OHdG occurs in 30 minutes, and in irradiated mice, the 8-OHdGs induced in the mouse liver are removed with a half-life of 11 minutes. When OGG1 is present at an oxidized guanine within a methylated CpG site it recruits TET1 to the 8-OHdG lesion (see Figure). This allows TET1 to demethylate an adjacent methylated cytosine. Demethylation of cytosine is an epigenetic alteration. As an example, when human mammary epithelial cells were treated with H2O2 for six hours, 8-OHdG increased about 3.5-fold in DNA and this caused about 80% demethylation of the 5-methylcytosines in the genome. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene into messenger RNA. In cells treated with H2O2, one particular gene was examined, BACE1. The methylation level of the BACE1 CpG island was reduced (an epigenetic alteration) and this allowed about 6.5 fold increase of expression of BACE1 messenger RNA. While six-hour incubation with H2O2 causes considerable demethylation of 5-mCpG sites, shorter times of H2O2 incubation appear to promote other epigenetic alterations. Treatment of cells with H2O2 for 30 minutes causes the mismatch repair protein heterodimer MSH2-MSH6 to recruit DNA methyltransferase 1 (DNMT1) to sites of some kinds of oxidative DNA damage. This could cause increased methylation of cytosines (epigenetic alterations) at these locations. Jiang et al. treated HEK 293 cells with agents causing oxidative DNA damage, (potassium bromate (KBrO3) or potassium chromate (K2CrO4)). Base excision repair (BER) of oxidative damage occurred with the DNA repair enzyme polymerase beta localizing to oxidized guanines. Polymerase beta is the main human polymerase in short-patch BER of oxidative DNA damage. Jiang et al. also found that polymerase beta recruited the DNA methyltransferase protein DNMT3b to BER repair sites. They then evaluated the methylation pattern at the single nucleotide level in a small region of DNA including the promoter region and the early transcription region of the BRCA1 gene. Oxidative DNA damage from bromate modulated the DNA methylation pattern (caused epigenetic alterations) at CpG sites within the region of DNA studied. In untreated cells, CpGs located at −189, −134, −29, −19, +16, and +19 of the BRCA1 gene had methylated cytosines (where numbering is from the messenger RNA transcription start site, and negative numbers indicate nucleotides in the upstream promoter region). Bromate treatment-induced oxidation resulted in the loss of cytosine methylation at −189, −134, +16 and +19 while also leading to the formation of new methylation at the CpGs located at −80, −55, −21 and +8 after DNA repair was allowed. === Homologous recombinational repair alters epigenetic markers === At least four articles report the recruitment of DNA methyltransferase 1 (DNMT1) to sites of DNA double-strand breaks. During homologous recombinational repair (HR) of the double-strand break, the involvement of DNMT1 causes the two repaired strands of DNA to have different levels of methylated cytosines. One strand becomes frequently methylated at about 21 CpG sites downstream of the repaired double-strand break. The other DNA strand loses methylation at about six CpG sites that were previously methylated downstream of the double-strand break, as well as losing methylation at about five CpG sites that were previously methylated upstream of the double-strand break. 
When the chromosome is replicated, this gives rise to one daughter chromosome that is heavily methylated downstream of the previous break site and one that is unmethylated in the region both upstream and downstream of the previous break site. With respect to the gene that was broken by the double-strand break, half of the progeny cells express that gene at a high level and in the other half of the progeny cells expression of that gene is repressed. When clones of these cells were maintained for three years, the new methylation patterns were maintained over that time period. In mice with a CRISPR-mediated homology-directed recombination insertion in their genome there were a large number of increased methylations of CpG sites within the double-strand break-associated insertion. === Non-homologous end joining can cause some epigenetic marker alterations === Non-homologous end joining (NHEJ) repair of a double-strand break can cause a small number of demethylations of pre-existing cytosine DNA methylations downstream of the repaired double-strand break. Further work by Allen et al. showed that NHEJ of a DNA double-strand break in a cell could give rise to some progeny cells having repressed expression of the gene harboring the initial double-strand break and some progeny having high expression of that gene due to epigenetic alterations associated with NHEJ repair. The frequency of epigenetic alterations causing repression of a gene after an NHEJ repair of a DNA double-strand break in that gene may be about 0.9%. == Evolution == The basic processes of DNA repair are highly conserved among both prokaryotes and eukaryotes and even among bacteriophages (viruses which infect bacteria); however, more complex organisms with more complex genomes have correspondingly more complex repair mechanisms. The ability of a large number of protein structural motifs to catalyze relevant chemical reactions has played a significant role in the elaboration of repair mechanisms during evolution. For an extremely detailed review of hypotheses relating to the evolution of DNA repair, see. The fossil record indicates that single-cell life began to proliferate on the planet at some point during the Precambrian period, although exactly when recognizably modern life first emerged is unclear. Nucleic acids became the sole and universal means of encoding genetic information, requiring DNA repair mechanisms that in their basic form have been inherited by all extant life forms from their common ancestor. The emergence of Earth's oxygen-rich atmosphere (known as the "oxygen catastrophe") due to photosynthetic organisms, as well as the presence of potentially damaging free radicals in the cell due to oxidative phosphorylation, necessitated the evolution of DNA repair mechanisms that act specifically to counter the types of damage induced by oxidative stress. The mechanism by which this came about, however, is unclear. === Rate of evolutionary change === On some occasions, DNA damage is not repaired or is repaired by an error-prone mechanism that results in a change from the original sequence. When this occurs, mutations may propagate into the genomes of the cell's progeny. Should such an event occur in a germ line cell that will eventually produce a gamete, the mutation has the potential to be passed on to the organism's offspring. The rate of evolution in a particular species (or, in a particular gene) is a function of the rate of mutation. 
As a consequence, the rate and accuracy of DNA repair mechanisms have an influence over the process of evolutionary change. DNA damage protection and repair do not influence the rate of adaptation that occurs through gene regulation or through recombination and selection of alleles. On the other hand, DNA damage repair and protection do influence the rate of accumulation of irreparable, advantageous, code-expanding, inheritable mutations, and slow down the evolutionary mechanism for expansion of the genome of organisms with new functionalities. The tension between evolvability and mutation repair and protection needs further investigation. == Technology == A gene-editing technology based on clustered regularly interspaced short palindromic repeats and an associated nuclease (shortened to CRISPR-Cas9) was developed in 2012. The technology allows anyone with molecular biology training to alter the genes of any species with precision, by inducing DNA damage at a specific point and then exploiting the cell's DNA repair mechanisms to insert new genes. It is cheaper, more efficient, and more precise than earlier gene-editing technologies. With the help of CRISPR-Cas9, scientists can edit parts of a genome by removing, adding, or altering sections of a DNA sequence. == See also == == References == == External links == Media related to DNA repair at Wikimedia Commons Roswell Park Cancer Institute DNA Repair Lectures A comprehensive list of Human DNA Repair Genes 3D structures of some DNA repair enzymes Machado CR, Menck CF (December 1997). "Human DNA repair diseases: From genome instability to cancer". Braz. J. Genet. 20 (4): 755–762. doi:10.1590/S0100-84551997000400032. DNA repair special interest group DNA Repair Archived 12 February 2018 at the Wayback Machine DNA Damage and DNA Repair Segmental Progeria Hakem R (February 2008). "DNA-damage repair; the good, the bad, and the ugly". EMBO J. 27 (4): 589–605. doi:10.1038/emboj.2008.15. PMC 2262034. PMID 18285820. Morales ME, Derbes RS, Ade CM, Ortego JC, Stark J, Deininger PL, et al. (2016). "Heavy Metal Exposure Influences Double Strand Break DNA Repair Outcomes". PLOS ONE. 11 (3): e0151367. Bibcode:2016PLoSO..1151367M. doi:10.1371/journal.pone.0151367. PMC 4788447. PMID 26966913.
Wikipedia/DNA_repair
Protein biosynthesis, or protein synthesis, is a core biological process, occurring inside cells, balancing the loss of cellular proteins (via degradation or export) through the production of new proteins. Proteins perform a number of critical functions as enzymes, structural proteins or hormones. Protein synthesis is a very similar process for both prokaryotes and eukaryotes but there are some distinct differences. Protein synthesis can be divided broadly into two phases: transcription and translation. During transcription, a section of DNA encoding a protein, known as a gene, is converted into a molecule called messenger RNA (mRNA). This conversion is carried out by enzymes, known as RNA polymerases, in the nucleus of the cell. In eukaryotes, this mRNA is initially produced in a premature form (pre-mRNA) which undergoes post-transcriptional modifications to produce mature mRNA. The mature mRNA is exported from the cell nucleus via nuclear pores to the cytoplasm of the cell for translation to occur. During translation, the mRNA is read by ribosomes which use the nucleotide sequence of the mRNA to determine the sequence of amino acids. The ribosomes catalyze the formation of covalent peptide bonds between the encoded amino acids to form a polypeptide chain. Following translation the polypeptide chain must fold to form a functional protein; for example, to function as an enzyme the polypeptide chain must fold correctly to produce a functional active site. To adopt a functional three-dimensional shape, the polypeptide chain must first form a series of smaller underlying structures called secondary structures. The polypeptide chain in these secondary structures then folds to produce the overall 3D tertiary structure. Once correctly folded, the protein can undergo further maturation through different post-translational modifications, which can alter the protein's ability to function, its location within the cell (e.g. cytoplasm or nucleus) and its ability to interact with other proteins. Protein biosynthesis has a key role in disease as changes and errors in this process, through underlying DNA mutations or protein misfolding, are often the underlying causes of a disease. DNA mutations change the subsequent mRNA sequence, which then alters the mRNA encoded amino acid sequence. Mutations can cause the polypeptide chain to be shorter by generating a stop sequence which causes early termination of translation. Alternatively, a mutation in the mRNA sequence changes the specific amino acid encoded at that position in the polypeptide chain. This amino acid change can impact the protein's ability to function or to fold correctly. Misfolded proteins have a tendency to form dense protein clumps, which are often implicated in diseases, particularly neurological disorders including Alzheimer's and Parkinson's disease. == Transcription == Transcription occurs in the nucleus using DNA as a template to produce mRNA. In eukaryotes, this mRNA molecule is known as pre-mRNA as it undergoes post-transcriptional modifications in the nucleus to produce a mature mRNA molecule. However, in prokaryotes post-transcriptional modifications are not required so the mature mRNA molecule is immediately produced by transcription. Initially, an enzyme known as a helicase acts on the molecule of DNA. DNA has an antiparallel, double helix structure composed of two, complementary polynucleotide strands, held together by hydrogen bonds between the base pairs. 
The helicase disrupts the hydrogen bonds, causing a region of DNA – corresponding to a gene – to unwind, separating the two DNA strands and exposing a series of bases. Despite DNA being a double-stranded molecule, only one of the strands acts as a template for pre-mRNA synthesis; this strand is known as the template strand. The other DNA strand (which is complementary to the template strand) is known as the coding strand. Both DNA and RNA have intrinsic directionality, meaning there are two distinct ends of the molecule. This property of directionality is due to the asymmetrical underlying nucleotide subunits, with a phosphate group on one side of the pentose sugar and a base on the other. The five carbons in the pentose sugar are numbered from 1' (where ' means prime) to 5'. Therefore, the phosphodiester bonds connecting the nucleotides are formed by joining the hydroxyl group on the 3' carbon of one nucleotide to the phosphate group on the 5' carbon of another nucleotide. Hence, the coding strand of DNA runs in a 5' to 3' direction and the complementary, template DNA strand runs in the opposite direction from 3' to 5'. The enzyme RNA polymerase binds to the exposed template strand and reads from the gene in the 3' to 5' direction. Simultaneously, the RNA polymerase synthesizes a single strand of pre-mRNA in the 5'-to-3' direction by catalysing the formation of phosphodiester bonds between activated nucleotides (free in the nucleus) that are capable of complementary base pairing with the template strand. Behind the moving RNA polymerase the two strands of DNA rejoin, so only 12 base pairs of DNA are exposed at one time. RNA polymerase builds the pre-mRNA molecule at a rate of 20 nucleotides per second, enabling the production of thousands of pre-mRNA molecules from the same gene in an hour. Despite the fast rate of synthesis, the RNA polymerase enzyme contains its own proofreading mechanism. The proofreading mechanism allows the RNA polymerase to remove incorrect nucleotides (which are not complementary to the template strand of DNA) from the growing pre-mRNA molecule through an excision reaction. When RNA polymerase reaches a specific DNA sequence which terminates transcription, RNA polymerase detaches and pre-mRNA synthesis is complete. The pre-mRNA molecule synthesized is complementary to the template DNA strand and shares the same nucleotide sequence as the coding DNA strand. However, there is one crucial difference in the nucleotide composition of DNA and mRNA molecules. DNA is composed of the bases: guanine, cytosine, adenine and thymine (G, C, A and T). RNA is also composed of four bases: guanine, cytosine, adenine and uracil. In RNA molecules, the DNA base thymine is replaced by uracil, which is able to base pair with adenine. Therefore, in the pre-mRNA molecule, all complementary bases which would be thymine in the coding DNA strand are replaced by uracil. === Post-transcriptional modifications === Once transcription is complete, the pre-mRNA molecule undergoes post-transcriptional modifications to produce a mature mRNA molecule. There are 3 key steps within post-transcriptional modifications: addition of a 5' cap to the 5' end of the pre-mRNA molecule, addition of a 3' poly(A) tail to the 3' end of the pre-mRNA molecule, and removal of introns via RNA splicing. The 5' cap is added to the 5' end of the pre-mRNA molecule and is composed of a guanine nucleotide modified through methylation.
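Before moving on to the remaining processing steps, the template-strand reading and the thymine-to-uracil substitution described above can be illustrated with a short sketch; the nine-base gene fragment and the function name are hypothetical.

```python
# Watson-Crick pairing used when RNA polymerase reads the template strand:
# template base -> RNA base (note that U, not T, pairs with adenine in RNA)
PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5: str) -> str:
    """Build pre-mRNA 5'->3' from a template strand given 3'->5'."""
    return "".join(PAIRING[base] for base in template_3_to_5)

# Hypothetical gene fragment
coding_strand   = "ATGGCTTTC"     # 5' -> 3'
template_strand = "TACCGAAAG"     # 3' -> 5', complementary to the coding strand

pre_mrna = transcribe(template_strand)
print(pre_mrna)                   # AUGGCUUUC

# The transcript matches the coding strand except that every T is now a U:
print(pre_mrna == coding_strand.replace("T", "U"))   # True
```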
The purpose of the 5' cap is to prevent break down of mature mRNA molecules before translation, the cap also aids binding of the ribosome to the mRNA to start translation and enables mRNA to be differentiated from other RNAs in the cell. In contrast, the 3' Poly(A) tail is added to the 3' end of the mRNA molecule and is composed of 100–200 adenine bases. These distinct mRNA modifications enable the cell to detect that the full mRNA message is intact if both the 5' cap and 3' tail are present. This modified pre-mRNA molecule then undergoes the process of RNA splicing. Genes are composed of a series of introns and exons, introns are nucleotide sequences which do not encode a protein while, exons are nucleotide sequences that directly encode a protein. Introns and exons are present in both the underlying DNA sequence and the pre-mRNA molecule, therefore, to produce a mature mRNA molecule encoding a protein, splicing must occur. During splicing, the intervening introns are removed from the pre-mRNA molecule by a multi-protein complex known as a spliceosome (composed of over 150 proteins and RNA). This mature mRNA molecule is then exported into the cytoplasm through nuclear pores in the envelope of the nucleus. == Translation == During translation, ribosomes synthesize polypeptide chains from mRNA template molecules. In eukaryotes, translation occurs in the cytoplasm of the cell, where the ribosomes are located either free floating or attached to the endoplasmic reticulum. In prokaryotes, which lack a nucleus, the processes of both transcription and translation occur in the cytoplasm. Ribosomes are complex molecular machines, made of a mixture of protein and ribosomal RNA, arranged into two subunits (a large and a small subunit), which surround the mRNA molecule. The ribosome reads the mRNA molecule in a 5'-3' direction and uses it as a template to determine the order of amino acids in the polypeptide chain. To translate the mRNA molecule, the ribosome uses small molecules, known as transfer RNAs (tRNA), to deliver the correct amino acids to the ribosome. Each tRNA is composed of 70–80 nucleotides and adopts a characteristic cloverleaf structure due to the formation of hydrogen bonds between the nucleotides within the molecule. There are around 60 different types of tRNAs, each tRNA binds to a specific sequence of three nucleotides (known as a codon) within the mRNA molecule and delivers a specific amino acid. The ribosome initially attaches to the mRNA at the start codon (AUG) and begins to translate the molecule. The mRNA nucleotide sequence is read in triplets; three adjacent nucleotides in the mRNA molecule correspond to a single codon. Each tRNA has an exposed sequence of three nucleotides, known as the anticodon, which are complementary in sequence to a specific codon that may be present in mRNA. For example, the first codon encountered is the start codon composed of the nucleotides AUG. The correct tRNA with the anticodon (complementary 3 nucleotide sequence UAC) binds to the mRNA using the ribosome. This tRNA delivers the correct amino acid corresponding to the mRNA codon, in the case of the start codon, this is the amino acid methionine. The next codon (adjacent to the start codon) is then bound by the correct tRNA with complementary anticodon, delivering the next amino acid to ribosome. The ribosome then uses its peptidyl transferase enzymatic activity to catalyze the formation of the covalent peptide bond between the two adjacent amino acids. 
The ribosome then moves along the mRNA molecule to the third codon. The ribosome then releases the first tRNA molecule, as only two tRNA molecules can be brought together by a single ribosome at one time. The next complementary tRNA with the correct anticodon complementary to the third codon is selected, delivering the next amino acid to the ribosome, where it is covalently joined to the growing polypeptide chain. This process continues with the ribosome moving along the mRNA molecule, adding up to 15 amino acids per second to the polypeptide chain. Behind the first ribosome, up to 50 additional ribosomes can bind to the mRNA molecule forming a polysome; this enables simultaneous synthesis of multiple identical polypeptide chains. Termination of the growing polypeptide chain occurs when the ribosome encounters a stop codon (UAA, UAG, or UGA) in the mRNA molecule. When this occurs, no tRNA can recognise it and a release factor induces the release of the complete polypeptide chain from the ribosome. Dr. Har Gobind Khorana, a scientist originating from India, decoded the RNA sequences for about 20 amino acids. He was awarded the Nobel Prize in 1968, along with two other scientists, for his work. == Protein folding == Once synthesis of the polypeptide chain is complete, the polypeptide chain folds to adopt a specific structure which enables the protein to carry out its functions. The basic form of protein structure is known as the primary structure, which is simply the polypeptide chain, i.e. a sequence of covalently bonded amino acids. The primary structure of a protein is encoded by a gene. Therefore, any changes to the sequence of the gene can alter the primary structure of the protein and all subsequent levels of protein structure, ultimately changing the overall structure and function. The primary structure of a protein (the polypeptide chain) can then fold or coil to form the secondary structure of the protein. The most common types of secondary structure are the alpha helix and the beta sheet; these are small structures produced by hydrogen bonds forming within the polypeptide chain. This secondary structure then folds to produce the tertiary structure of the protein. The tertiary structure is the protein's overall 3D structure, which is made of different secondary structures folding together. In the tertiary structure, key protein features, e.g. the active site, are folded and formed, enabling the protein to function. Finally, some proteins may adopt a complex quaternary structure. Most proteins are made of a single polypeptide chain; however, some proteins are composed of multiple polypeptide chains (known as subunits) which fold and interact to form the quaternary structure. Hence, the overall protein is a multi-subunit complex composed of multiple folded polypeptide chain subunits, e.g. haemoglobin. == Post-translation events == There are events that follow protein biosynthesis, such as proteolysis and protein folding. Proteolysis refers to the cleavage of proteins by proteases and the breakdown of proteins into amino acids by the action of enzymes. == Post-translational modifications == When protein folding into the mature, functional 3D state is complete, it is not necessarily the end of the protein maturation pathway. A folded protein can still undergo further processing through post-translational modifications.
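Before looking at those modifications in detail, the translation steps described earlier (locating the AUG start codon, reading codon by codon, and terminating at a stop codon) can be tied together in a compact sketch; the mRNA sequence and the deliberately reduced codon table are illustrative assumptions, not the full genetic code.

```python
# A tiny subset of the standard genetic code, enough for this example.
CODON_TABLE = {
    "AUG": "Met",                               # start codon, inserts methionine
    "GCU": "Ala", "UUC": "Phe", "AAA": "Lys",
    "UAA": None, "UAG": None, "UGA": None,      # stop codons: no tRNA, a release factor acts
}

def translate(mrna: str) -> list[str]:
    """Read the mRNA in triplets from the first AUG until a stop codon."""
    start = mrna.find("AUG")
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:                  # stop codon reached
            break
        peptide.append(amino_acid)              # peptide bond to the growing chain
    return peptide

# Hypothetical mature mRNA with a short 5' untranslated region before AUG
print(translate("GGAUGGCUUUCAAAUAAGG"))         # ['Met', 'Ala', 'Phe', 'Lys']
```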
There are over 200 known types of post-translational modification, these modifications can alter protein activity, the ability of the protein to interact with other proteins and where the protein is found within the cell e.g. in the cell nucleus or cytoplasm. Through post-translational modifications, the diversity of proteins encoded by the genome is expanded by 2 to 3 orders of magnitude. There are four key classes of post-translational modification: Cleavage Addition of chemical groups Addition of complex molecules Formation of intramolecular bonds === Cleavage === Cleavage of proteins is an irreversible post-translational modification carried out by enzymes known as proteases. These proteases are often highly specific and cause hydrolysis of a limited number of peptide bonds within the target protein. The resulting shortened protein has an altered polypeptide chain with different amino acids at the start and end of the chain. This post-translational modification often alters the proteins function, the protein can be inactivated or activated by the cleavage and can display new biological activities. === Addition of chemical groups === Following translation, small chemical groups can be added onto amino acids within the mature protein structure. Examples of processes which add chemical groups to the target protein include methylation, acetylation and phosphorylation. Methylation is the reversible addition of a methyl group onto an amino acid catalyzed by methyltransferase enzymes. Methylation occurs on at least 9 of the 20 common amino acids, however, it mainly occurs on the amino acids lysine and arginine. One example of a protein which is commonly methylated is a histone. Histones are proteins found in the nucleus of the cell. DNA is tightly wrapped round histones and held in place by other proteins and interactions between negative charges in the DNA and positive charges on the histone. A highly specific pattern of amino acid methylation on the histone proteins is used to determine which regions of DNA are tightly wound and unable to be transcribed and which regions are loosely wound and able to be transcribed. Histone-based regulation of DNA transcription is also modified by acetylation. Acetylation is the reversible covalent addition of an acetyl group onto a lysine amino acid by the enzyme acetyltransferase. The acetyl group is removed from a donor molecule known as acetyl coenzyme A and transferred onto the target protein. Histones undergo acetylation on their lysine residues by enzymes known as histone acetyltransferase. The effect of acetylation is to weaken the charge interactions between the histone and DNA, thereby making more genes in the DNA accessible for transcription. The final, prevalent post-translational chemical group modification is phosphorylation. Phosphorylation is the reversible, covalent addition of a phosphate group to specific amino acids (serine, threonine and tyrosine) within the protein. The phosphate group is removed from the donor molecule ATP by a protein kinase and transferred onto the hydroxyl group of the target amino acid, this produces adenosine diphosphate as a byproduct. This process can be reversed and the phosphate group removed by the enzyme protein phosphatase. Phosphorylation can create a binding site on the phosphorylated protein which enables it to interact with other proteins and generate large, multi-protein complexes. 
Alternatively, phosphorylation can change the level of protein activity by altering the ability of the protein to bind its substrate. === Addition of complex molecules === Post-translational modifications can incorporate more complex, large molecules into the folded protein structure. One common example of this is glycosylation, the addition of a polysaccharide molecule, which is widely considered to be most common post-translational modification. In glycosylation, a polysaccharide molecule (known as a glycan) is covalently added to the target protein by glycosyltransferases enzymes and modified by glycosidases in the endoplasmic reticulum and Golgi apparatus. Glycosylation can have a critical role in determining the final, folded 3D structure of the target protein. In some cases glycosylation is necessary for correct folding. N-linked glycosylation promotes protein folding by increasing solubility and mediates the protein binding to protein chaperones. Chaperones are proteins responsible for folding and maintaining the structure of other proteins. There are broadly two types of glycosylation, N-linked glycosylation and O-linked glycosylation. N-linked glycosylation starts in the endoplasmic reticulum with the addition of a precursor glycan. The precursor glycan is modified in the Golgi apparatus to produce complex glycan bound covalently to the nitrogen in an asparagine amino acid. In contrast, O-linked glycosylation is the sequential covalent addition of individual sugars onto the oxygen in the amino acids serine and threonine within the mature protein structure. === Formation of covalent bonds === Many proteins produced within the cell are secreted outside the cell to function as extracellular proteins. Extracellular proteins are exposed to a wide variety of conditions. To stabilize the 3D protein structure, covalent bonds are formed either within the protein or between the different polypeptide chains in the quaternary structure. The most prevalent type is a disulfide bond (also known as a disulfide bridge). A disulfide bond is formed between two cysteine amino acids using their side chain chemical groups containing a Sulphur atom, these chemical groups are known as thiol functional groups. Disulfide bonds act to stabilize the pre-existing structure of the protein. Disulfide bonds are formed in an oxidation reaction between two thiol groups and therefore, need an oxidizing environment to react. As a result, disulfide bonds are typically formed in the oxidizing environment of the endoplasmic reticulum catalyzed by enzymes called protein disulfide isomerases. Disulfide bonds are rarely formed in the cytoplasm as it is a reducing environment. == Role of protein synthesis in disease == Many diseases are caused by mutations in genes, due to the direct connection between the DNA nucleotide sequence and the amino acid sequence of the encoded protein. Changes to the primary structure of the protein can result in the protein mis-folding or malfunctioning. Mutations within a single gene have been identified as a cause of multiple diseases, including sickle cell disease, known as single gene disorders. === Sickle cell disease === Sickle cell disease is a group of diseases caused by a mutation in a subunit of hemoglobin, a protein found in red blood cells responsible for transporting oxygen. The most dangerous of the sickle cell diseases is known as sickle cell anemia. 
Sickle cell anemia is the most common homozygous recessive single gene disorder, meaning the affected individual must carry a mutation in both copies of the affected gene (one inherited from each parent) to experience the disease. Hemoglobin has a complex quaternary structure and is composed of four polypeptide subunits – two A subunits and two B subunits. Patients with sickle cell anemia have a missense or substitution mutation in the gene encoding the hemoglobin B subunit polypeptide chain. A missense mutation means the nucleotide mutation alters the overall codon triplet such that a different amino acid is paired with the new codon. In the case of sickle cell anemia, the most common missense mutation is a single nucleotide mutation from thymine to adenine in the hemoglobin B subunit gene. This changes codon 6 from encoding the amino acid glutamic acid to encoding valine. This change in the primary structure of the hemoglobin B subunit polypeptide chain alters the functionality of the hemoglobin multi-subunit complex in low oxygen conditions. When red blood cells unload oxygen into the tissues of the body, the mutated haemoglobin protein starts to stick together to form a semi-solid structure within the red blood cell. This distorts the shape of the red blood cell, resulting in the characteristic "sickle" shape, and reduces cell flexibility. This rigid, distorted red blood cell can accumulate in blood vessels creating a blockage. The blockage prevents blood flow to tissues and can lead to tissue death which causes great pain to the individual. === Cancer === Cancers form as a result of gene mutations as well as improper protein translation. In addition to cancer cells proliferating abnormally, they suppress the expression of anti-apoptotic or pro-apoptotic genes or proteins. Most cancer cells see a mutation in the signaling protein Ras, which functions as an on/off signal transductor in cells. In cancer cells, the RAS protein becomes persistently active, thus promoting the proliferation of the cell due to the absence of any regulation. Additionally, most cancer cells carry two mutant copies of the regulator gene p53, which acts as a gatekeeper for damaged genes and initiates apoptosis in malignant cells. In its absence, the cell cannot initiate apoptosis or signal for other cells to destroy it. As the tumor cells proliferate, they either remain confined to one area and are called benign, or become malignant cells that migrate to other areas of the body. Oftentimes, these malignant cells secrete proteases that break apart the extracellular matrix of tissues. This then allows the cancer to enter its terminal stage called Metastasis, in which the cells enter the bloodstream or the lymphatic system to travel to a new part of the body. == See also == Central dogma of molecular biology Genetic code == References == == External links == A more advanced video detailing the different types of post-translational modifications and their chemical structures A useful video visualising the process of converting DNA to protein via transcription and translation Video visualising the process of protein folding from the non-functional primary structure to a mature, folded 3D protein structure with reference to the role of mutations and protein mis-folding in disease
Wikipedia/Protein_synthesis
Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, electric and magnetic circuits. The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon. The modern form of the equations in their most common formulation is credited to Oliver Heaviside. Maxwell's equations may be combined to demonstrate how fluctuations in electromagnetic fields (waves) propagate at a constant speed in vacuum, c (299792458 m/s). Known as electromagnetic radiation, these waves occur at various wavelengths to produce a spectrum of radiation from radio waves to gamma rays. In partial differential equation form and a coherent system of units, Maxwell's microscopic equations can be written as (top to bottom: Gauss's law, Gauss's law for magnetism, Faraday's law, Ampère-Maxwell law) ∇ ⋅ E = ρ ε 0 ∇ ⋅ B = 0 ∇ × E = − ∂ B ∂ t ∇ × B = μ 0 ( J + ε 0 ∂ E ∂ t ) {\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} \,\,\,&={\frac {\rho }{\varepsilon _{0}}}\\\nabla \cdot \mathbf {B} \,\,\,&=0\\\nabla \times \mathbf {E} &=-{\frac {\partial \mathbf {B} }{\partial t}}\\\nabla \times \mathbf {B} &=\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\end{aligned}}} With E {\displaystyle \mathbf {E} } the electric field, B {\displaystyle \mathbf {B} } the magnetic field, ρ {\displaystyle \rho } the electric charge density and J {\displaystyle \mathbf {J} } the current density. ε 0 {\displaystyle \varepsilon _{0}} is the vacuum permittivity and μ 0 {\displaystyle \mu _{0}} the vacuum permeability. The equations have two major variants: The microscopic equations have universal applicability but are unwieldy for common calculations. They relate the electric and magnetic fields to total charge and total current, including the complicated charges and currents in materials at the atomic scale. The macroscopic equations define two new auxiliary fields that describe the large-scale behaviour of matter without having to consider atomic-scale charges and quantum phenomena like spins. However, their use requires experimentally determined parameters for a phenomenological description of the electromagnetic response of materials. The term "Maxwell's equations" is often also used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric and magnetic scalar potentials are preferred for explicitly solving the equations as a boundary value problem, analytical mechanics, or for use in quantum mechanics. The covariant formulation (on spacetime rather than space and time separately) makes the compatibility of Maxwell's equations with special relativity manifest. Maxwell's equations in curved spacetime, commonly used in high-energy and gravitational physics, are compatible with general relativity. 
In fact, Albert Einstein developed special and general relativity to accommodate the invariant speed of light, a consequence of Maxwell's equations, with the principle that only relative movement has physical consequences. The publication of the equations marked the unification of a theory for previously separately described phenomena: magnetism, electricity, light, and associated radiation. Since the mid-20th century, it has been understood that Maxwell's equations do not give an exact description of electromagnetic phenomena, but are instead a classical limit of the more precise theory of quantum electrodynamics. == History of the equations == == Conceptual descriptions == === Gauss's law === Gauss's law describes the relationship between an electric field and electric charges: an electric field points away from positive charges and towards negative charges, and the net outflow of the electric field through a closed surface is proportional to the enclosed charge, including bound charge due to polarization of material. The coefficient of the proportion is the permittivity of free space. === Gauss's law for magnetism === Gauss's law for magnetism states that electric charges have no magnetic analogues, called magnetic monopoles; no north or south magnetic poles exist in isolation. Instead, the magnetic field of a material is attributed to a dipole, and the net outflow of the magnetic field through a closed surface is zero. Magnetic dipoles may be represented as loops of current or inseparable pairs of equal and opposite "magnetic charges". Precisely, the total magnetic flux through a Gaussian surface is zero, and the magnetic field is a solenoidal vector field. === Faraday's law === The Maxwell–Faraday version of Faraday's law of induction describes how a time-varying magnetic field corresponds to the negative curl of an electric field. In integral form, it states that the work per unit charge required to move a charge around a closed loop equals the rate of change of the magnetic flux through the enclosed surface. The electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field and generates an electric field in a nearby wire. === Ampère–Maxwell law === The original law of Ampère states that magnetic fields relate to electric current. Maxwell's addition states that magnetic fields also relate to changing electric fields, which Maxwell called displacement current. The integral form states that electric and displacement currents are associated with a proportional magnetic field along any enclosing curve. Maxwell's modification of Ampère's circuital law is important because the laws of Ampère and Gauss must otherwise be adjusted for static fields. As a consequence, it predicts that a rotating magnetic field occurs with a changing electric field. A further consequence is the existence of self-sustaining electromagnetic waves which travel through empty space. The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents, matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics. 
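The Faraday's law description above can be given a small numerical illustration: for a fixed loop in a sinusoidally varying magnetic field, the induced EMF is the negative time derivative of the flux through the loop. The field amplitude, frequency and loop area in the sketch are arbitrary illustrative values.

```python
import sympy as sp

t = sp.symbols('t', real=True)

# Illustrative (made-up) numbers: 0.1 T field amplitude, 50 Hz, 0.01 m^2 loop area
B0, f, A = 0.1, 50, 0.01

Phi_B = B0 * sp.cos(2 * sp.pi * f * t) * A   # magnetic flux through the fixed loop
emf   = -sp.diff(Phi_B, t)                   # Faraday's law: EMF = -dPhi_B/dt

print(sp.simplify(emf))                      # 0.1*pi*sin(100*pi*t), peak about 0.314 V
print(float(emf.subs(t, 1 / (4 * f))))       # evaluate at a flux zero-crossing: ~0.314 V
```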
== Formulation in terms of electric and magnetic fields (microscopic or in vacuum version) == In the electric and magnetic field formulation there are four equations that determine the fields for given charge and current distribution. A separate law of nature, the Lorentz force law, describes how the electric and magnetic fields act on charged particles and currents. By convention, a version of this law in the original equations by Maxwell is no longer included. The vector calculus formalism below, the work of Oliver Heaviside, has become standard. It is rotationally invariant, and therefore mathematically more transparent than Maxwell's original 20 equations in x, y and z components. The relativistic formulations are more symmetric and Lorentz invariant. For the same equations expressed using tensor calculus or differential forms (see § Alternative formulations). The differential and integral formulations are mathematically equivalent; both are useful. The integral formulation relates fields within a region of space to fields on the boundary and can often be used to simplify and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential equations are purely local and are a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis. === Key to the notation === Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated. The equations introduce the electric field, E, a vector field, and the magnetic field, B, a pseudovector field, each generally having a time and location dependence. The sources are the total electric charge density (total charge per unit volume), ρ, and the total electric current density (total current per unit area), J. The universal constants appearing in the equations (the first two ones explicitly only in the SI formulation) are: the permittivity of free space, ε0, and the permeability of free space, μ0, and the speed of light, c = ( ε 0 μ 0 ) − 1 / 2 {\displaystyle c=({\varepsilon _{0}\mu _{0}})^{-1/2}} ==== Differential equations ==== In the differential equations, the nabla symbol, ∇, denotes the three-dimensional gradient operator, del, the ∇⋅ symbol (pronounced "del dot") denotes the divergence operator, the ∇× symbol (pronounced "del cross") denotes the curl operator. ==== Integral equations ==== In the integral equations, Ω is any volume with closed boundary surface ∂Ω, and Σ is any surface with closed boundary curve ∂Σ, The equations are a little easier to interpret with time-independent surfaces and volumes. Time-independent surfaces and volumes are "fixed" and do not change over a given time interval. For example, since the surface is time-independent, we can bring the differentiation under the integral sign in Faraday's law: d d t ∬ Σ B ⋅ d S = ∬ Σ ∂ B ∂ t ⋅ d S , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\iint _{\Sigma }\mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }{\frac {\partial \mathbf {B} }{\partial t}}\cdot \mathrm {d} \mathbf {S} \,,} Maxwell's equations can be formulated with possibly time-dependent surfaces and volumes by using the differential version and using Gauss' and Stokes' theorems as appropriate. 
∫ ∂ Ω {\displaystyle {\vphantom {\int }}_{\scriptstyle \partial \Omega }} is a surface integral over the boundary surface ∂Ω, with the loop indicating the surface is closed ∭ Ω {\displaystyle \iiint _{\Omega }} is a volume integral over the volume Ω, ∮ ∂ Σ {\displaystyle \oint _{\partial \Sigma }} is a line integral around the boundary curve ∂Σ, with the loop indicating the curve is closed. ∬ Σ {\displaystyle \iint _{\Sigma }} is a surface integral over the surface Σ, The total electric charge Q enclosed in Ω is the volume integral over Ω of the charge density ρ (see the "macroscopic formulation" section below): Q = ∭ Ω ρ d V , {\displaystyle Q=\iiint _{\Omega }\rho \ \mathrm {d} V,} where dV is the volume element. The net magnetic flux ΦB is the surface integral of the magnetic field B passing through a fixed surface, Σ: Φ B = ∬ Σ B ⋅ d S , {\displaystyle \Phi _{B}=\iint _{\Sigma }\mathbf {B} \cdot \mathrm {d} \mathbf {S} ,} The net electric flux ΦE is the surface integral of the electric field E passing through Σ: Φ E = ∬ Σ E ⋅ d S , {\displaystyle \Phi _{E}=\iint _{\Sigma }\mathbf {E} \cdot \mathrm {d} \mathbf {S} ,} The net electric current I is the surface integral of the electric current density J passing through Σ: I = ∬ Σ J ⋅ d S , {\displaystyle I=\iint _{\Sigma }\mathbf {J} \cdot \mathrm {d} \mathbf {S} ,} where dS denotes the differential vector element of surface area S, normal to surface Σ. (Vector area is sometimes denoted by A rather than S, but this conflicts with the notation for magnetic vector potential). === Formulation in the SI === === Formulation in the Gaussian system === The definitions of charge, electric field, and magnetic field can be altered to simplify theoretical calculation, by absorbing dimensioned factors of ε0 and μ0 into the units (and thus redefining these). With a corresponding change in the values of the quantities for the Lorentz force law this yields the same physics, i.e. trajectories of charged particles, or work done by an electric motor. These definitions are often preferred in theoretical and high energy physics where it is natural to take the electric and magnetic field with the same units, to simplify the appearance of the electromagnetic tensor: the Lorentz covariant object unifying electric and magnetic field would then contain components with uniform unit and dimension.: vii  Such modified definitions are conventionally used with the Gaussian (CGS) units. Using these definitions, colloquially "in Gaussian units", the Maxwell equations become: The equations simplify slightly when a system of quantities is chosen in the speed of light, c, is used for nondimensionalization, so that, for example, seconds and lightseconds are interchangeable, and c = 1. Further changes are possible by absorbing factors of 4π. This process, called rationalization, affects whether Coulomb's law or Gauss's law includes such a factor (see Heaviside–Lorentz units, used mainly in particle physics). == Relationship between differential and integral formulations == The equivalence of the differential and integral formulations are a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem. 
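Before deriving this equivalence in general, the integral form of Gauss's law written above can be checked numerically for a single point charge: the flux of its Coulomb field through a closed sphere should equal the enclosed charge divided by ε0, regardless of where inside the surface the charge sits. The charge value, its off-centre position, and the quadrature grid in the sketch below are arbitrary illustrative choices.

```python
import numpy as np

eps_0 = 8.8541878128e-12                 # vacuum permittivity, F/m
q     = 1e-9                             # hypothetical 1 nC point charge
r_q   = np.array([0.1, 0.0, 0.05])       # charge placed off-centre, but inside the sphere
R     = 0.5                              # radius of the closed Gaussian surface, m

# Parametrize the sphere and approximate the surface integral by a Riemann sum.
n = 600
theta = np.linspace(0.0, np.pi, n)
phi   = np.linspace(0.0, 2.0 * np.pi, n)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dth, dph = theta[1] - theta[0], phi[1] - phi[0]

# Outward unit normal of the sphere (centred on the origin) and the surface points.
normal = np.stack([np.sin(TH) * np.cos(PH),
                   np.sin(TH) * np.sin(PH),
                   np.cos(TH)], axis=-1)
points = R * normal

# Coulomb field of the point charge evaluated on the surface.
d = points - r_q
E = q / (4.0 * np.pi * eps_0) * d / np.linalg.norm(d, axis=-1, keepdims=True) ** 3

# Phi_E = closed-surface integral of E . dS, with dS = normal * R^2 sin(theta) dtheta dphi
flux = np.sum(np.sum(E * normal, axis=-1) * R**2 * np.sin(TH)) * dth * dph

print(flux, q / eps_0)    # the two agree (about 113 V*m) up to the coarseness of the grid
```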
=== Flux and divergence === According to the (purely mathematical) Gauss divergence theorem, the electric flux through the boundary surface ∂Ω can be rewritten as ∮ ∂ Ω E ⋅ d S = ∭ Ω ∇ ⋅ E d V {\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {E} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {E} \,\mathrm {d} V} The integral version of Gauss's equation can thus be rewritten as ∭ Ω ( ∇ ⋅ E − ρ ε 0 ) d V = 0 {\displaystyle \iiint _{\Omega }\left(\nabla \cdot \mathbf {E} -{\frac {\rho }{\varepsilon _{0}}}\right)\,\mathrm {d} V=0} Since Ω is arbitrary (e.g. an arbitrary small ball with arbitrary center), this is satisfied if and only if the integrand is zero everywhere. This is the differential equations formulation of Gauss equation up to a trivial rearrangement. Similarly rewriting the magnetic flux in Gauss's law for magnetism in integral form gives ∮ ∂ Ω B ⋅ d S = ∭ Ω ∇ ⋅ B d V = 0. {\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {B} \,\mathrm {d} V=0.} which is satisfied for all Ω if and only if ∇ ⋅ B = 0 {\displaystyle \nabla \cdot \mathbf {B} =0} everywhere. === Circulation and curl === By the Kelvin–Stokes theorem we can rewrite the line integrals of the fields around the closed boundary curve ∂Σ to an integral of the "circulation of the fields" (i.e. their curls) over a surface it bounds, i.e. ∮ ∂ Σ B ⋅ d ℓ = ∬ Σ ( ∇ × B ) ⋅ d S , {\displaystyle \oint _{\partial \Sigma }\mathbf {B} \cdot \mathrm {d} {\boldsymbol {\ell }}=\iint _{\Sigma }(\nabla \times \mathbf {B} )\cdot \mathrm {d} \mathbf {S} ,} Hence the Ampère–Maxwell law, the modified version of Ampère's circuital law, in integral form can be rewritten as ∬ Σ ( ∇ × B − μ 0 ( J + ε 0 ∂ E ∂ t ) ) ⋅ d S = 0. {\displaystyle \iint _{\Sigma }\left(\nabla \times \mathbf {B} -\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)\cdot \mathrm {d} \mathbf {S} =0.} Since Σ can be chosen arbitrarily, e.g. as an arbitrary small, arbitrary oriented, and arbitrary centered disk, we conclude that the integrand is zero if and only if the Ampère–Maxwell law in differential equations form is satisfied. The equivalence of Faraday's law in differential and integral form follows likewise. The line integrals and curls are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field. == Charge conservation == The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the Ampère–Maxwell law has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives: 0 = ∇ ⋅ ( ∇ × B ) = ∇ ⋅ ( μ 0 ( J + ε 0 ∂ E ∂ t ) ) = μ 0 ( ∇ ⋅ J + ε 0 ∂ ∂ t ∇ ⋅ E ) = μ 0 ( ∇ ⋅ J + ∂ ρ ∂ t ) {\displaystyle 0=\nabla \cdot (\nabla \times \mathbf {B} )=\nabla \cdot \left(\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +\varepsilon _{0}{\frac {\partial }{\partial t}}\nabla \cdot \mathbf {E} \right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +{\frac {\partial \rho }{\partial t}}\right)} i.e., ∂ ρ ∂ t + ∇ ⋅ J = 0. 
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0.} By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary: d d t Q Ω = d d t ∭ Ω ρ d V = − {\displaystyle {\frac {d}{dt}}Q_{\Omega }={\frac {d}{dt}}\iiint _{\Omega }\rho \mathrm {d} V=-} ∮ ∂ Ω J ⋅ d S = − I ∂ Ω . {\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {J} \cdot {\rm {d}}\mathbf {S} =-I_{\partial \Omega }.} In particular, in an isolated system the total charge is conserved. == Vacuum equations, electromagnetic waves and speed of light == In a region with no charges (ρ = 0) and no currents (J = 0), such as in vacuum, Maxwell's equations reduce to: ∇ ⋅ E = 0 , ∇ × E + ∂ B ∂ t = 0 , ∇ ⋅ B = 0 , ∇ × B − μ 0 ε 0 ∂ E ∂ t = 0. {\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} &=0,&\nabla \times \mathbf {E} +{\frac {\partial \mathbf {B} }{\partial t}}=0,\\\nabla \cdot \mathbf {B} &=0,&\nabla \times \mathbf {B} -\mu _{0}\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}=0.\end{aligned}}} Taking the curl (∇×) of the curl equations, and using the curl of the curl identity we obtain μ 0 ε 0 ∂ 2 E ∂ t 2 − ∇ 2 E = 0 , μ 0 ε 0 ∂ 2 B ∂ t 2 − ∇ 2 B = 0. {\displaystyle {\begin{aligned}\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\\\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.\end{aligned}}} The quantity μ 0 ε 0 {\displaystyle \mu _{0}\varepsilon _{0}} has the dimension (T/L)2. Defining c = ( μ 0 ε 0 ) − 1 / 2 {\displaystyle c=(\mu _{0}\varepsilon _{0})^{-1/2}} , the equations above have the form of the standard wave equations 1 c 2 ∂ 2 E ∂ t 2 − ∇ 2 E = 0 , 1 c 2 ∂ 2 B ∂ t 2 − ∇ 2 B = 0. {\displaystyle {\begin{aligned}{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\\{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.\end{aligned}}} Already during Maxwell's lifetime, it was found that the known values for ε 0 {\displaystyle \varepsilon _{0}} and μ 0 {\displaystyle \mu _{0}} give c ≈ 2.998 × 10 8 m/s {\displaystyle c\approx 2.998\times 10^{8}~{\text{m/s}}} , then already known to be the speed of light in free space. This led him to propose that light and radio waves were propagating electromagnetic waves, since amply confirmed. In the old SI system of units, the values of μ 0 = 4 π × 10 − 7 {\displaystyle \mu _{0}=4\pi \times 10^{-7}} and c = 299 792 458 m/s {\displaystyle c=299\,792\,458~{\text{m/s}}} are defined constants, (which means that by definition ε 0 = 8.854 187 8... × 10 − 12 F/m {\displaystyle \varepsilon _{0}=8.854\,187\,8...\times 10^{-12}~{\text{F/m}}} ) that define the ampere and the metre. In the new SI system, only c keeps its defined value, and the electron charge gets a defined value. In materials with relative permittivity, εr, and relative permeability, μr, the phase velocity of light becomes v p = 1 μ 0 μ r ε 0 ε r , {\displaystyle v_{\text{p}}={\frac {1}{\sqrt {\mu _{0}\mu _{\text{r}}\varepsilon _{0}\varepsilon _{\text{r}}}}},} which is usually less than c. In addition, E and B are perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. 
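A quick numerical check of the relation c = 1/√(μ0ε0) discussed above, and of the reduced phase velocity in a linear medium, is given below; the relative permittivity chosen for the material is an illustrative round number, not a tabulated constant.

```python
import math

mu_0  = 4 * math.pi * 1e-7        # old-SI defined vacuum permeability, H/m
eps_0 = 8.8541878128e-12          # vacuum permittivity, F/m

c = 1.0 / math.sqrt(mu_0 * eps_0)
print(f"c   = {c:,.0f} m/s")      # ~299,792,458 m/s, the speed of light

# Phase velocity in a linear material with relative permittivity eps_r and
# relative permeability mu_r (illustrative values only).
eps_r, mu_r = 2.25, 1.0
v_p = 1.0 / math.sqrt(mu_0 * mu_r * eps_0 * eps_r)
print(f"v_p = {v_p:,.0f} m/s")    # ~199,861,639 m/s, i.e. about 2c/3
```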
The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's modification of Ampère's circuital law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at velocity c. == Macroscopic formulation == The above equations are the microscopic version of Maxwell's equations, expressing the electric and the magnetic fields in terms of the (possibly atomic-level) charges and currents present. This is sometimes called the "general" form, but the macroscopic version below is equally general, the difference being one of bookkeeping. The microscopic version is sometimes called "Maxwell's equations in vacuum": this refers to the fact that the material medium is not built into the structure of the equations, but appears only in the charge and current terms. The microscopic version was introduced by Lorentz, who tried to use it to derive the macroscopic properties of bulk matter from its microscopic constituents.: 5  "Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more similar to those that Maxwell introduced himself. In the macroscopic equations, the influence of bound charge Qb and bound current Ib is incorporated into the displacement field D and the magnetizing field H, while the equations depend only on the free charges Qf and free currents If. This reflects a splitting of the total electric charge Q and current I (and their densities ρ and J) into free and bound parts: Q = Q f + Q b = ∭ Ω ( ρ f + ρ b ) d V = ∭ Ω ρ d V , I = I f + I b = ∬ Σ ( J f + J b ) ⋅ d S = ∬ Σ J ⋅ d S . {\displaystyle {\begin{aligned}Q&=Q_{\text{f}}+Q_{\text{b}}=\iiint _{\Omega }\left(\rho _{\text{f}}+\rho _{\text{b}}\right)\,\mathrm {d} V=\iiint _{\Omega }\rho \,\mathrm {d} V,\\I&=I_{\text{f}}+I_{\text{b}}=\iint _{\Sigma }\left(\mathbf {J} _{\text{f}}+\mathbf {J} _{\text{b}}\right)\cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }\mathbf {J} \cdot \mathrm {d} \mathbf {S} .\end{aligned}}} The cost of this splitting is that the additional fields D and H need to be determined through phenomenological constituent equations relating these fields to the electric field E and the magnetic field B, together with the bound charge and current. See below for a detailed description of the differences between the microscopic equations, dealing with total charge and current including material contributions, useful in air/vacuum; and the macroscopic equations, dealing with free charge and current, practical to use within materials. === Bound charge and current === When an electric field is applied to a dielectric material its molecules respond by forming microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. For example, if every molecule responds the same, similar to that shown in the figure, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of the polarization P of the material, its dipole moment per unit volume. If P is uniform, a macroscopic separation of charge is produced only at the surfaces where P enters and leaves the material. 
For non-uniform P, a charge is also produced in the bulk. Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the components of the atoms, most notably their electrons. The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual charge is traveling a large distance. These bound currents can be described using the magnetization M. The very complicated and granular bound charges and bound currents, therefore, can be represented on the macroscopic scale in terms of P and M, which average these charges and currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also sufficiently small that they vary with location in the material. As such, Maxwell's macroscopic equations ignore many details on a fine scale that can be unimportant to understanding matters on a gross scale by calculating fields that are averaged over some suitable volume. === Auxiliary fields, polarization and magnetization === The definitions of the auxiliary fields are: D ( r , t ) = ε 0 E ( r , t ) + P ( r , t ) , H ( r , t ) = 1 μ 0 B ( r , t ) − M ( r , t ) , {\displaystyle {\begin{aligned}\mathbf {D} (\mathbf {r} ,t)&=\varepsilon _{0}\mathbf {E} (\mathbf {r} ,t)+\mathbf {P} (\mathbf {r} ,t),\\\mathbf {H} (\mathbf {r} ,t)&={\frac {1}{\mu _{0}}}\mathbf {B} (\mathbf {r} ,t)-\mathbf {M} (\mathbf {r} ,t),\end{aligned}}} where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. The macroscopic bound charge density ρb and bound current density Jb in terms of polarization P and magnetization M are then defined as ρ b = − ∇ ⋅ P , J b = ∇ × M + ∂ P ∂ t . {\displaystyle {\begin{aligned}\rho _{\text{b}}&=-\nabla \cdot \mathbf {P} ,\\\mathbf {J} _{\text{b}}&=\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}.\end{aligned}}} If we define the total, bound, and free charge and current density by ρ = ρ b + ρ f , J = J b + J f , {\displaystyle {\begin{aligned}\rho &=\rho _{\text{b}}+\rho _{\text{f}},\\\mathbf {J} &=\mathbf {J} _{\text{b}}+\mathbf {J} _{\text{f}},\end{aligned}}} and use the defining relations above to eliminate D, and H, the "macroscopic" Maxwell's equations reproduce the "microscopic" equations. === Constitutive relations === In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between displacement field D and the electric field E, as well as the magnetizing field H and the magnetic field B. Equivalently, we have to specify the dependence of the polarization P (hence the bound charge) and the magnetization M (hence the bound current) on the applied electric and magnetic field. The equations specifying this response are called constitutive relations. For real-world materials, the constitutive relations are rarely simple, except approximately, and usually determined by experiment. 
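Before turning to the constitutive relations, here is a small numerical sketch of the bound-charge relation ρb = −∇·P defined above, for a one-dimensional slab with (nearly) uniform polarization. The slab width, the smoothing length of its edges and the value of P0 are arbitrary illustrative choices.

```python
# Sketch: bound charge from a 1-D polarization profile, rho_b = -dP/dx.
# A slab with uniform P inside shows essentially no bulk charge and two
# opposite-signed surface layers carrying about +/- P0 per unit area.
import numpy as np

x = np.linspace(-2.0, 2.0, 4001)                   # metres
dx = x[1] - x[0]

P0 = 1e-6                                          # C/m^2, illustrative
# slab occupying |x| < 1, with smoothed edges so the derivative is well defined
P = P0 * 0.5 * (np.tanh((x + 1) / 0.05) - np.tanh((x - 1) / 0.05))

rho_b = -np.gradient(P, dx)                        # bound charge density, C/m^3

print("bulk |rho_b| (|x| < 0.8):", np.abs(rho_b[np.abs(x) < 0.8]).max())
print("surface charge near x = +1:", np.trapz(rho_b[x > 0], x[x > 0]))   # ~ +P0
print("surface charge near x = -1:", np.trapz(rho_b[x < 0], x[x < 0]))   # ~ -P0
```

The positive layer sits on the face that P points toward, matching the qualitative picture of displaced molecular charges described above.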
See the main article on constitutive relations for a fuller description.: 44–45  For materials without polarization and magnetization, the constitutive relations are (by definition): 2  D = ε 0 E , H = 1 μ 0 B , {\displaystyle \mathbf {D} =\varepsilon _{0}\mathbf {E} ,\quad \mathbf {H} ={\frac {1}{\mu _{0}}}\mathbf {B} ,} where ε0 is the permittivity of free space and μ0 the permeability of free space. Since there is no bound charge, the total and the free charge and current are equal. An alternative viewpoint on the microscopic equations is that they are the macroscopic equations together with the statement that vacuum behaves like a perfect linear "material" without additional polarization and magnetization. More generally, for linear materials the constitutive relations are: 44–45  D = ε E , H = 1 μ B , {\displaystyle \mathbf {D} =\varepsilon \mathbf {E} ,\quad \mathbf {H} ={\frac {1}{\mu }}\mathbf {B} ,} where ε is the permittivity and μ the permeability of the material. For the displacement field D the linear approximation is usually excellent because for all but the most extreme electric fields or temperatures obtainable in the laboratory (high power pulsed lasers) the interatomic electric fields of materials of the order of 1011 V/m are much higher than the external field. For the magnetizing field H {\displaystyle \mathbf {H} } , however, the linear approximation can break down in common materials like iron leading to phenomena like hysteresis. Even the linear case can have various complications, however. For homogeneous materials, ε and μ are constant throughout the material, while for inhomogeneous materials they depend on location within the material (and perhaps time).: 463  For isotropic materials, ε and μ are scalars, while for anisotropic materials (e.g. due to crystal structure) they are tensors.: 421 : 463  Materials are generally dispersive, so ε and μ depend on the frequency of any incident EM waves.: 625 : 397  Even more generally, in the case of non-linear materials (see for example nonlinear optics), D and P are not necessarily proportional to E, similarly H or M is not necessarily proportional to B. In general D and H depend on both E and B, on location and time, and possibly other physical quantities. In applications one also has to describe how the free currents and charge density behave in terms of E and B possibly coupled to other physical quantities like pressure, and the mass, number density, and velocity of charge-carrying particles. E.g., the original equations given by Maxwell (see History of Maxwell's equations) included Ohm's law in the form J f = σ E . {\displaystyle \mathbf {J} _{\text{f}}=\sigma \mathbf {E} .} == Alternative formulations == Following are some of the several other mathematical formalisms of Maxwell's equations, with the columns separating the two homogeneous Maxwell equations from the two inhomogeneous ones. Each formulation has versions directly in terms of the electric and magnetic fields, and indirectly in terms of the electrical potential φ and the vector potential A. Potentials were introduced as a convenient way to solve the homogeneous equations, but it was thought that all observable physics was contained in the electric and magnetic fields (or relativistically, the Faraday tensor). The potentials play a central role in quantum mechanics, however, and act quantum mechanically with observable consequences even when the electric and magnetic fields vanish (Aharonov–Bohm effect). Each table describes one formalism. 
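As a concrete check of the remark that potentials were introduced to solve the homogeneous equations, the sketch below builds E = −∇φ − ∂A/∂t and B = ∇×A from arbitrary symbolic potentials and verifies that Gauss's law for magnetism and Faraday's law then hold identically. The symbols phi, Ax, Ay, Az and the helper functions are introduced here only for this illustration.

```python
# Sketch (sympy): fields derived from potentials satisfy the two homogeneous
# Maxwell equations identically, for *any* smooth phi and A.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
phi = sp.Function('phi')(t, x, y, z)
A = sp.Matrix([sp.Function(name)(t, x, y, z) for name in ('Ax', 'Ay', 'Az')])
r = (x, y, z)

def grad(f):
    return sp.Matrix([sp.diff(f, xi) for xi in r])

def div(V):
    return sum(sp.diff(V[i], r[i]) for i in range(3))

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

E = -grad(phi) - sp.diff(A, t)
B = curl(A)

print(sp.simplify(div(B)))                    # 0: Gauss's law for magnetism
print(sp.simplify(curl(E) + sp.diff(B, t)))   # zero vector: Faraday's law
```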
See the main article for details of each formulation. The direct spacetime formulations make manifest that the Maxwell equations are relativistically invariant, where space and time are treated on equal footing. Because of this symmetry, the electric and magnetic fields are treated on equal footing and are recognized as components of the Faraday tensor. This reduces the four Maxwell equations to two, which simplifies the equations, although we can no longer use the familiar vector formulation. Maxwell equations in formulation that do not treat space and time manifestly on the same footing have Lorentz invariance as a hidden symmetry. This was a major source of inspiration for the development of relativity theory. Indeed, even the formulation that treats space and time separately is not a non-relativistic approximation and describes the same physics by simply renaming variables. For this reason the relativistic invariant equations are usually called the Maxwell equations as well. Each table below describes one formalism. In the tensor calculus formulation, the electromagnetic tensor Fαβ is an antisymmetric covariant order 2 tensor; the four-potential, Aα, is a covariant vector; the current, Jα, is a vector; the square brackets, [ ], denote antisymmetrization of indices; ∂α is the partial derivative with respect to the coordinate, xα. In Minkowski space coordinates are chosen with respect to an inertial frame; (xα) = (ct, x, y, z), so that the metric tensor used to raise and lower indices is ηαβ = diag(1, −1, −1, −1). The d'Alembert operator on Minkowski space is ◻ = ∂α∂α as in the vector formulation. In general spacetimes, the coordinate system xα is arbitrary, the covariant derivative ∇α, the Ricci tensor, Rαβ and raising and lowering of indices are defined by the Lorentzian metric, gαβ and the d'Alembert operator is defined as ◻ = ∇α∇α. The topological restriction is that the second real cohomology group of the space vanishes (see the differential form formulation for an explanation). This is violated for Minkowski space with a line removed, which can model a (flat) spacetime with a point-like monopole on the complement of the line. In the differential form formulation on arbitrary space times, F = ⁠1/2⁠Fαβ‍dxα ∧ dxβ is the electromagnetic tensor considered as a 2-form, A = Aαdxα is the potential 1-form, J = − J α ⋆ d x α {\displaystyle J=-J_{\alpha }{\star }\mathrm {d} x^{\alpha }} is the current 3-form, d is the exterior derivative, and ⋆ {\displaystyle {\star }} is the Hodge star on forms defined (up to its orientation, i.e. its sign) by the Lorentzian metric of spacetime. In the special case of 2-forms such as F, the Hodge star ⋆ {\displaystyle {\star }} depends on the metric tensor only for its local scale. This means that, as formulated, the differential form field equations are conformally invariant, but the Lorenz gauge condition breaks conformal invariance. The operator ◻ = ( − ⋆ d ⋆ d − d ⋆ d ⋆ ) {\displaystyle \Box =(-{\star }\mathrm {d} {\star }\mathrm {d} -\mathrm {d} {\star }\mathrm {d} {\star })} is the d'Alembert–Laplace–Beltrami operator on 1-forms on an arbitrary Lorentzian spacetime. The topological condition is again that the second real cohomology group is 'trivial' (meaning that its form follows from a definition). By the isomorphism with the second de Rham cohomology this condition means that every closed 2-form is exact. Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations. 
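To make the tensor bookkeeping concrete, the sketch below packs E and B into the antisymmetric field tensor Fαβ and checks one of its Lorentz invariants. The sign convention used (SI units, metric signature (+,−,−,−), F0i = Ei/c, Fij = −εijk Bk) is only one of several in use, and the field values are arbitrary illustrative numbers.

```python
# Sketch (numpy): E and B as the six independent components of the
# antisymmetric tensor F_{mu nu}, in one common sign convention.
import numpy as np

c = 299_792_458.0
E = np.array([1.0e3, -2.0e3, 0.5e3])        # V/m, illustrative
B = np.array([1.0e-4, 0.0, 3.0e-4])         # T,   illustrative

F = np.zeros((4, 4))
F[0, 1:] = E / c                            # F_{0i} =  E_i / c
F[1:, 0] = -E / c
F[1, 2], F[1, 3], F[2, 3] = -B[2], B[1], -B[0]    # F_{ij} = -eps_{ijk} B_k
F[2, 1], F[3, 1], F[3, 2] = B[2], -B[1], B[0]

assert np.allclose(F, -F.T)                 # antisymmetry: 6 independent entries

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric
F_up = eta @ F @ eta                        # raise both indices
invariant = np.einsum('ab,ab->', F, F_up)
print(invariant, 2 * (B @ B - (E @ E) / c**2))    # the two numbers agree
```

In this convention the scalar FμνF^μν equals 2(B² − E²/c²); a Lorentz boost mixes E and B but leaves this combination unchanged.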
Historically, a quaternionic formulation was used. == Solutions == Maxwell's equations are partial differential equations that relate the electric and magnetic fields to each other and to the electric charges and currents. Often, the charges and currents are themselves dependent on the electric and magnetic fields via the Lorentz force equation and the constitutive relations. These all form a set of coupled partial differential equations which are often very difficult to solve: the solutions encompass all the diverse phenomena of classical electromagnetism. Some general remarks follow. As for any differential equation, boundary conditions and initial conditions are necessary for a unique solution. For example, even with no charges and no currents anywhere in spacetime, there are the obvious solutions for which E and B are zero or constant, but there are also non-trivial solutions corresponding to electromagnetic waves. In some cases, Maxwell's equations are solved over the whole of space, and boundary conditions are given as asymptotic limits at infinity. In other cases, Maxwell's equations are solved in a finite region of space, with appropriate conditions on the boundary of that region, for example an artificial absorbing boundary representing the rest of the universe, or periodic boundary conditions, or walls that isolate a small region from the outside world (as with a waveguide or cavity resonator). Jefimenko's equations (or the closely related Liénard–Wiechert potentials) are the explicit solution to Maxwell's equations for the electric and magnetic fields created by any given distribution of charges and currents. It assumes specific initial conditions to obtain the so-called "retarded solution", where the only fields present are the ones created by the charges. However, Jefimenko's equations are unhelpful in situations when the charges and currents are themselves affected by the fields they create. Numerical methods for differential equations can be used to compute approximate solutions of Maxwell's equations when exact solutions are impossible. These include the finite element method and finite-difference time-domain method. For more details, see Computational electromagnetics. == Overdetermination of Maxwell's equations == Maxwell's equations seem overdetermined, in that they involve six unknowns (the three components of E and B) but eight equations (one for each of the two Gauss's laws, three vector components each for Faraday's and Ampère's circuital laws). (The currents and charges are not unknowns, being freely specifiable subject to charge conservation.) This is related to a certain limited kind of redundancy in Maxwell's equations: It can be proven that any system satisfying Faraday's law and Ampère's circuital law automatically also satisfies the two Gauss's laws, as long as the system's initial condition does, and assuming conservation of charge and the nonexistence of magnetic monopoles. This explanation was first introduced by Julius Adams Stratton in 1941. Although it is possible to simply ignore the two Gauss's laws in a numerical algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead to ever-increasing violations of those laws. By introducing dummy variables characterizing these violations, the four equations become not overdetermined after all. The resulting formulation can lead to more accurate algorithms that take all four laws into account. 
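The finite-difference time-domain method mentioned above can be illustrated in one dimension with a few lines of code. This is a toy sketch in normalised units with the "magic" time step c·Δt = Δx, a hard Gaussian source at the left edge, and no absorbing boundaries; grid size, step count and source parameters are arbitrary.

```python
# Sketch: minimal 1-D FDTD update of Maxwell's curl equations on a staggered
# (Yee-style) grid.  Toy illustration, not production code.
import numpy as np

SIZE, STEPS = 400, 300
imp0 = 377.0                       # impedance of free space, ohms
ez = np.zeros(SIZE)                # E_z at integer grid points
hy = np.zeros(SIZE)                # H_y at staggered half-points

for t in range(STEPS):
    hy[:-1] += (ez[1:] - ez[:-1]) / imp0       # Faraday's law update
    ez[1:] += (hy[1:] - hy[:-1]) * imp0        # Ampere-Maxwell law update
    ez[0] = np.exp(-((t - 30.0) / 10.0) ** 2)  # hard Gaussian source at the edge

# with c*dt = dx the pulse advances one cell per step; its peak is now near
# cell 270 with amplitude close to 1
print("peak |E_z| =", float(np.abs(ez).max()), "at cell", int(np.abs(ez).argmax()))
```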
Both identities ∇ ⋅ ∇ × B ≡ 0 , ∇ ⋅ ∇ × E ≡ 0 {\displaystyle \nabla \cdot \nabla \times \mathbf {B} \equiv 0,\nabla \cdot \nabla \times \mathbf {E} \equiv 0} , which reduce eight equations to six independent ones, are the true reason of overdetermination. Equivalently, the overdetermination can be viewed as implying conservation of electric and magnetic charge, as they are required in the derivation described above but implied by the two Gauss's laws. For linear algebraic equations, one can make 'nice' rules to rewrite the equations and unknowns. The equations can be linearly dependent. But in differential equations, and especially partial differential equations (PDEs), one needs appropriate boundary conditions, which depend in not so obvious ways on the equations. Even more, if one rewrites them in terms of vector and scalar potential, then the equations are underdetermined because of gauge fixing. == Maxwell's equations as the classical limit of QED == Maxwell's equations and the Lorentz force law (along with the rest of classical electromagnetism) are extraordinarily successful at explaining and predicting a variety of phenomena. However, they do not account for quantum effects, and so their domain of applicability is limited. Maxwell's equations are thought of as the classical limit of quantum electrodynamics (QED). Some observed electromagnetic phenomena cannot be explained with Maxwell's equations if the source of the electromagnetic fields are the classical distributions of charge and current. These include photon–photon scattering and many other phenomena related to photons or virtual photons, "nonclassical light" and quantum entanglement of electromagnetic fields (see Quantum optics). E.g. quantum cryptography cannot be described by Maxwell theory, not even approximately. The approximate nature of Maxwell's equations becomes more and more apparent when going into the extremely strong field regime (see Euler–Heisenberg Lagrangian) or to extremely small distances. Finally, Maxwell's equations cannot explain any phenomenon involving individual photons interacting with quantum matter, such as the photoelectric effect, Planck's law, the Duane–Hunt law, and single-photon light detectors. However, many such phenomena may be explained using a halfway theory of quantum matter coupled to a classical electromagnetic field, either as external field or with the expected value of the charge current and density on the right hand side of Maxwell's equations. This is known as semiclassical theory or self-field QED and was initially discovered by de Broglie and Schrödinger and later fully developed by E.T. Jaynes and A.O. Barut. == Variations == Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are relatively scarce because the standard equations have stood the test of time remarkably well. === Magnetic monopoles === Maxwell's equations posit that there is electric charge, but no magnetic charge (also called magnetic monopoles), in the universe. Indeed, magnetic charge has never been observed, despite extensive searches, and may not exist. If they did exist, both Gauss's law for magnetism and Faraday's law would need to be modified, and the resulting four equations would be fully symmetric under the interchange of electric and magnetic fields.: 273–275  == See also == == Explanatory notes == == References == == Further reading == Imaeda, K. 
(1995), "Biquaternionic Formulation of Maxwell's Equations and their Solutions", in Ablamowicz, Rafał; Lounesto, Pertti (eds.), Clifford Algebras and Spinor Structures, Springer, pp. 265–280, doi:10.1007/978-94-015-8422-7_16, ISBN 978-90-481-4525-6 === Historical publications === On Faraday's Lines of Force – 1855/56. Maxwell's first paper (Part 1 & 2) – Compiled by Blaze Labs Research (PDF). On Physical Lines of Force – 1861. Maxwell's 1861 paper describing magnetic lines of force – Predecessor to 1873 Treatise. James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459–512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.) A Dynamical Theory Of The Electromagnetic Field – 1865. Maxwell's 1865 paper describing his 20 equations, link from Google Books. J. Clerk Maxwell (1873), "A Treatise on Electricity and Magnetism": Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 1 – 1873 – Posner Memorial Collection – Carnegie Mellon University. Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 2 – 1873 – Posner Memorial Collection – Carnegie Mellon University. Developments before the theory of relativity Larmor Joseph (1897). "On a dynamical theory of the electric and luminiferous medium. Part 3, Relations with material media" . Phil. Trans. R. Soc. 190: 205–300. Lorentz Hendrik (1899). "Simplified theory of electrical and optical phenomena in moving systems" . Proc. Acad. Science Amsterdam. I: 427–443. Lorentz Hendrik (1904). "Electromagnetic phenomena in a system moving with any velocity less than that of light" . Proc. Acad. Science Amsterdam. IV: 669–678. Henri Poincaré (1900) "La théorie de Lorentz et le Principe de Réaction" (in French), Archives Néerlandaises, V, 253–278. Henri Poincaré (1902) "La Science et l'Hypothèse" (in French). Henri Poincaré (1905) "Sur la dynamique de l'électron" (in French), Comptes Rendus de l'Académie des Sciences, 140, 1504–1508. Catt, Walton and Davidson. "The History of Displacement Current" Archived 2008-05-06 at the Wayback Machine. Wireless World, March 1979. == External links == "Maxwell equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994] maxwells-equations.com — An intuitive tutorial of Maxwell's equations. The Feynman Lectures on Physics Vol. II Ch. 18: The Maxwell Equations Wikiversity Page on Maxwell's Equations === Modern treatments === Electromagnetism (ch. 11), B. Crowell, Fullerton College Lecture series: Relativity and electromagnetism, R. Fitzpatrick, University of Texas at Austin Electromagnetic waves from Maxwell's equations on Project PHYSNET. MIT Video Lecture Series (36 × 50 minute lectures) (in .mp4 format) – Electricity and Magnetism Taught by Professor Walter Lewin. === Other === Silagadze, Z. K. (2002). "Feynman's derivation of Maxwell equations and extra dimensions". Annales de la Fondation Louis de Broglie. 27: 241–256. arXiv:hep-ph/0106235. Nature Milestones: Photons – Milestone 2 (1861) Maxwell's equations
Wikipedia/Maxwell_Equations
In mathematics, the covariant derivative is a way of specifying a derivative along tangent vectors of a manifold. Alternatively, the covariant derivative is a way of introducing and working with a connection on a manifold by means of a differential operator, to be contrasted with the approach given by a principal connection on the frame bundle – see affine connection. In the special case of a manifold isometrically embedded into a higher-dimensional Euclidean space, the covariant derivative can be viewed as the orthogonal projection of the Euclidean directional derivative onto the manifold's tangent space. In this case the Euclidean derivative is broken into two parts, the extrinsic normal component (dependent on the embedding) and the intrinsic covariant derivative component. The name is motivated by the importance of changes of coordinate in physics: the covariant derivative transforms covariantly under a general coordinate transformation, that is, linearly via the Jacobian matrix of the transformation. This article presents an introduction to the covariant derivative of a vector field with respect to a vector field, both in a coordinate-free language and using a local coordinate system and the traditional index notation. The covariant derivative of a tensor field is presented as an extension of the same concept. The covariant derivative generalizes straightforwardly to a notion of differentiation associated to a connection on a vector bundle, also known as a Koszul connection. == History == Historically, at the turn of the 20th century, the covariant derivative was introduced by Gregorio Ricci-Curbastro and Tullio Levi-Civita in the theory of Riemannian and pseudo-Riemannian geometry. Ricci and Levi-Civita (following ideas of Elwin Bruno Christoffel) observed that the Christoffel symbols used to define the curvature could also provide a notion of differentiation which generalized the classical directional derivative of vector fields on a manifold. This new derivative – the Levi-Civita connection – was covariant in the sense that it satisfied Riemann's requirement that objects in geometry should be independent of their description in a particular coordinate system. It was soon noted by other mathematicians, prominent among these being Hermann Weyl, Jan Arnoldus Schouten, and Élie Cartan, that a covariant derivative could be defined abstractly without the presence of a metric. The crucial feature was not a particular dependence on the metric, but that the Christoffel symbols satisfied a certain precise second-order transformation law. This transformation law could serve as a starting point for defining the derivative in a covariant manner. Thus the theory of covariant differentiation forked off from the strictly Riemannian context to include a wider range of possible geometries. In the 1940s, practitioners of differential geometry began introducing other notions of covariant differentiation in general vector bundles which were, in contrast to the classical bundles of interest to geometers, not part of the tensor analysis of the manifold. By and large, these generalized covariant derivatives had to be specified ad hoc by some version of the connection concept. In 1950, Jean-Louis Koszul unified these new ideas of covariant differentiation in a vector bundle by means of what is known today as a Koszul connection or a connection on a vector bundle. 
Using ideas from Lie algebra cohomology, Koszul successfully converted many of the analytic features of covariant differentiation into algebraic ones. In particular, Koszul connections eliminated the need for awkward manipulations of Christoffel symbols (and other analogous non-tensorial objects) in differential geometry. Thus they quickly supplanted the classical notion of covariant derivative in many post-1950 treatments of the subject. == Motivation == The covariant derivative is a generalization of the directional derivative from vector calculus. As with the directional derivative, the covariant derivative is a rule, ∇ u v {\displaystyle \nabla _{\mathbf {u} }{\mathbf {v} }} , which takes as its inputs: (1) a vector, u, defined at a point P, and (2) a vector field v defined in a neighborhood of P. The output is the vector ∇ u v ( P ) {\displaystyle \nabla _{\mathbf {u} }{\mathbf {v} }(P)} , also at the point P. The primary difference from the usual directional derivative is that ∇ u v {\displaystyle \nabla _{\mathbf {u} }{\mathbf {v} }} must, in a certain precise sense, be independent of the manner in which it is expressed in a coordinate system. A vector may be described as a list of numbers in terms of a basis, but as a geometrical object the vector retains its identity regardless of how it is described. For a geometric vector written in components with respect to one basis, when the basis is changed the components transform according to a change of basis formula, with the coordinates undergoing a covariant transformation. The covariant derivative is required to transform, under a change in coordinates, by a covariant transformation in the same way as a basis does (hence the name). In the case of Euclidean space, one usually defines the directional derivative of a vector field in terms of the difference between two vectors at two nearby points. In such a system one translates one of the vectors to the origin of the other, keeping it parallel, then takes their difference within the same vector space. With a Cartesian (fixed orthonormal) coordinate system "keeping it parallel" amounts to keeping the components constant. This ordinary directional derivative on Euclidean space is the first example of a covariant derivative. Next, one must take into account changes of the coordinate system. For example, if the Euclidean plane is described by polar coordinates, "keeping it parallel" does not amount to keeping the polar components constant under translation, since the coordinate grid itself "rotates". Thus, the same covariant derivative written in polar coordinates contains extra terms that describe how the coordinate grid itself rotates, or how in more general coordinates the grid expands, contracts, twists, interweaves, etc. Consider the example of a particle moving along a curve γ(t) in the Euclidean plane. In polar coordinates, γ may be written in terms of its radial and angular coordinates by γ(t) = (r(t), θ(t)). A vector at a particular time t (for instance, a constant acceleration of the particle) is expressed in terms of ( e r , e θ ) {\displaystyle (\mathbf {e} _{r},\mathbf {e} _{\theta })} , where e r {\displaystyle \mathbf {e} _{r}} and e θ {\displaystyle \mathbf {e} _{\theta }} are unit tangent vectors for the polar coordinates, serving as a basis to decompose a vector in terms of radial and tangential components. At a slightly later time, the new basis in polar coordinates appears slightly rotated with respect to the first set. 
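The extra terms produced by the rotating polar basis can be made explicit with a short symbolic computation. The sketch below differentiates the curve through its Cartesian embedding and projects the acceleration back onto (e_r, e_θ); the correction terms −rθ̇² and 2ṙθ̇ that appear are exactly what the Christoffel symbols, introduced next, encode. The curve and variable names are chosen only for the illustration.

```python
# Sketch (sympy): acceleration of a curve gamma(t) = (r(t), theta(t)) in the
# plane, computed via the Cartesian embedding and re-expressed in the polar
# basis (e_r, e_theta).
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
th = sp.Function('theta')(t)

X = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])        # Cartesian position
e_r = sp.Matrix([sp.cos(th), sp.sin(th)])              # radial unit vector
e_th = sp.Matrix([-sp.sin(th), sp.cos(th)])            # tangential unit vector

a = sp.diff(X, t, 2)                                   # ordinary 2nd derivative

print(sp.simplify(a.dot(e_r)))    # radial part:      r'' - r*(theta')**2
print(sp.simplify(a.dot(e_th)))   # tangential part:  r*theta'' + 2*r'*theta'
```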
The covariant derivative of the basis vectors (the Christoffel symbols) serve to express this change. In a curved space, such as the surface of the Earth (regarded as a sphere), the translation of tangent vectors between different points is not well defined, and its analog, parallel transport, depends on the path along which the vector is translated. A vector on a globe on the equator at point Q is directed to the north. Suppose we transport the vector (keeping it parallel) first along the equator to the point P, then drag it along a meridian to the N pole, and finally transport it along another meridian back to Q. Then we notice that the parallel-transported vector along a closed circuit does not return as the same vector; instead, it has another orientation. This would not happen in Euclidean space and is caused by the curvature of the surface of the globe. The same effect occurs if we drag the vector along an infinitesimally small closed surface subsequently along two directions and then back. This infinitesimal change of the vector is a measure of the curvature, and can be defined in terms of the covariant derivative. === Remarks === The definition of the covariant derivative does not use the metric in space. However, for each metric there is a unique torsion-free covariant derivative called the Levi-Civita connection such that the covariant derivative of the metric is zero. The properties of a derivative imply that ∇ v u {\displaystyle \nabla _{\mathbf {v} }\mathbf {u} } depends on the values of u in a neighborhood of a point p in the same way as e.g. the derivative of a scalar function f along a curve at a given point p depends on the values of f in a neighborhood of p. The information in a neighborhood of a point p in the covariant derivative can be used to define parallel transport of a vector. Also the curvature, torsion, and geodesics may be defined only in terms of the covariant derivative or other related variation on the idea of a linear connection. == Informal definition using an embedding into Euclidean space == Suppose an open subset U of a d-dimensional Riemannian manifold M is embedded into Euclidean space ( R n , ⟨ ⋅ , ⋅ ⟩ ) {\displaystyle (\mathbb {R} ^{n},\langle \cdot ,\cdot \rangle )} via a twice continuously-differentiable (C2) mapping Ψ → : R d ⊃ U → R n {\displaystyle {\vec {\Psi }}:\mathbb {R} ^{d}\supset U\to \mathbb {R} ^{n}} such that the tangent space at Ψ → ( p ) {\displaystyle {\vec {\Psi }}(p)} is spanned by the vectors { ∂ Ψ → ∂ x i | p : i ∈ { 1 , … , d } } {\displaystyle \left\{\left.{\frac {\partial {\vec {\Psi }}}{\partial x^{i}}}\right|_{p}:i\in \{1,\dots ,d\}\right\}} and the scalar product ⟨ ⋅ , ⋅ ⟩ {\displaystyle \left\langle \cdot ,\cdot \right\rangle } on R n {\displaystyle \mathbb {R} ^{n}} is compatible with the metric on M: g i j = ⟨ ∂ Ψ → ∂ x i , ∂ Ψ → ∂ x j ⟩ . {\displaystyle g_{ij}=\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{i}}},{\frac {\partial {\vec {\Psi }}}{\partial x^{j}}}\right\rangle .} (Since the manifold metric is always assumed to be regular, the compatibility condition implies linear independence of the partial derivative tangent vectors.) For a tangent vector field, V → = v j ∂ Ψ → ∂ x j {\displaystyle {\vec {V}}=v^{j}{\frac {\partial {\vec {\Psi }}}{\partial x^{j}}}} , one has ∂ V → ∂ x i = ∂ ∂ x i ( v j ∂ Ψ → ∂ x j ) = ∂ v j ∂ x i ∂ Ψ → ∂ x j + v j ∂ 2 Ψ → ∂ x i ∂ x j . 
{\displaystyle {\frac {\partial {\vec {V}}}{\partial x^{i}}}={\frac {\partial }{\partial x^{i}}}\left(v^{j}{\frac {\partial {\vec {\Psi }}}{\partial x^{j}}}\right)={\frac {\partial v^{j}}{\partial x^{i}}}{\frac {\partial {\vec {\Psi }}}{\partial x^{j}}}+v^{j}{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{i}\,\partial x^{j}}}.} The last term is not tangential to M, but can be expressed as a linear combination of the tangent space base vectors using the Christoffel symbols as linear factors plus a vector orthogonal to the tangent space: v j ∂ 2 Ψ → ∂ x i ∂ x j = v j Γ k i j ∂ Ψ → ∂ x k + n → . {\displaystyle v^{j}{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{i}\,\partial x^{j}}}=v^{j}{\Gamma ^{k}}_{ij}{\frac {\partial {\vec {\Psi }}}{\partial x^{k}}}+{\vec {n}}.} In the case of the Levi-Civita connection, the covariant derivative ∇ e i V → {\displaystyle \nabla _{\mathbf {e} _{i}}{\vec {V}}} , also written ∇ i V → {\displaystyle \nabla _{i}{\vec {V}}} , is defined as the orthogonal projection of the usual derivative onto tangent space: ∇ e i V → := ∂ V → ∂ x i − n → = ( ∂ v k ∂ x i + v j Γ k i j ) ∂ Ψ → ∂ x k . {\displaystyle \nabla _{\mathbf {e} _{i}}{\vec {V}}:={\frac {\partial {\vec {V}}}{\partial x^{i}}}-{\vec {n}}=\left({\frac {\partial v^{k}}{\partial x^{i}}}+v^{j}{\Gamma ^{k}}_{ij}\right){\frac {\partial {\vec {\Psi }}}{\partial x^{k}}}.} From here it may be computationally convenient to obtain a relation between the Christoffel symbols for the Levi-Civita connection and the metric. To do this we first note that, since the vector n → {\displaystyle {\vec {n}}} in the previous equation is orthogonal to the tangent space, ⟨ ∂ 2 Ψ → ∂ x i ∂ x j , ∂ Ψ → ∂ x l ⟩ = ⟨ Γ k i j ∂ Ψ → ∂ x k + n → , ∂ Ψ → ∂ x l ⟩ = ⟨ ∂ Ψ → ∂ x k , ∂ Ψ → ∂ x l ⟩ Γ k i j = g k l Γ k i j . {\displaystyle \left\langle {\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{i}\,\partial x^{j}}},{\frac {\partial {\vec {\Psi }}}{\partial x^{l}}}\right\rangle =\left\langle {\Gamma ^{k}}_{ij}{\frac {\partial {\vec {\Psi }}}{\partial x^{k}}}+{\vec {n}},{\frac {\partial {\vec {\Psi }}}{\partial x^{l}}}\right\rangle =\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{k}}},{\frac {\partial {\vec {\Psi }}}{\partial x^{l}}}\right\rangle {\Gamma ^{k}}_{ij}=g_{kl}\,{\Gamma ^{k}}_{ij}.} Then, since the partial derivative of a component g a b {\displaystyle g_{ab}} of the metric with respect to a coordinate x c {\displaystyle x^{c}} is ∂ g a b ∂ x c = ∂ ∂ x c ⟨ ∂ Ψ → ∂ x a , ∂ Ψ → ∂ x b ⟩ = ⟨ ∂ 2 Ψ → ∂ x c ∂ x a , ∂ Ψ → ∂ x b ⟩ + ⟨ ∂ Ψ → ∂ x a , ∂ 2 Ψ → ∂ x c ∂ x b ⟩ , {\displaystyle {\frac {\partial g_{ab}}{\partial x^{c}}}={\frac {\partial }{\partial x^{c}}}\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{a}}},{\frac {\partial {\vec {\Psi }}}{\partial x^{b}}}\right\rangle =\left\langle {\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{c}\,\partial x^{a}}},{\frac {\partial {\vec {\Psi }}}{\partial x^{b}}}\right\rangle +\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{a}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{c}\,\partial x^{b}}}\right\rangle ,} any triplet i , j , k {\displaystyle i,j,k} of indices yields a system of equations { ∂ g j k ∂ x i = ⟨ ∂ Ψ → ∂ x j , ∂ 2 Ψ → ∂ x k ∂ x i ⟩ + ⟨ ∂ Ψ → ∂ x k , ∂ 2 Ψ → ∂ x i ∂ x j ⟩ ∂ g k i ∂ x j = ⟨ ∂ Ψ → ∂ x i , ∂ 2 Ψ → ∂ x j ∂ x k ⟩ + ⟨ ∂ Ψ → ∂ x k , ∂ 2 Ψ → ∂ x i ∂ x j ⟩ ∂ g i j ∂ x k = ⟨ ∂ Ψ → ∂ x i , ∂ 2 Ψ → ∂ x j ∂ x k ⟩ + ⟨ ∂ Ψ → ∂ x j , ∂ 2 Ψ → ∂ x k ∂ x i ⟩ . 
{\displaystyle \left\{{\begin{alignedat}{2}{\frac {\partial g_{jk}}{\partial x^{i}}}=&&\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{j}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{k}\partial x^{i}}}\right\rangle &+\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{k}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{i}\partial x^{j}}}\right\rangle \\{\frac {\partial g_{ki}}{\partial x^{j}}}=&\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{i}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{j}\partial x^{k}}}\right\rangle &&+\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{k}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{i}\partial x^{j}}}\right\rangle \\{\frac {\partial g_{ij}}{\partial x^{k}}}=&\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{i}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{j}\partial x^{k}}}\right\rangle &+\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{j}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{k}\partial x^{i}}}\right\rangle &&.\end{alignedat}}\right.} (Here the symmetry of the scalar product has been used and the order of partial differentiations have been swapped.) Adding the first two equations and subtracting the third, we obtain ∂ g j k ∂ x i + ∂ g k i ∂ x j − ∂ g i j ∂ x k = 2 ⟨ ∂ Ψ → ∂ x k , ∂ 2 Ψ → ∂ x i ∂ x j ⟩ . {\displaystyle {\frac {\partial g_{jk}}{\partial x^{i}}}+{\frac {\partial g_{ki}}{\partial x^{j}}}-{\frac {\partial g_{ij}}{\partial x^{k}}}=2\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{k}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{i}\,\partial x^{j}}}\right\rangle .} Thus the Christoffel symbols for the Levi-Civita connection are related to the metric by g k l Γ k i j = 1 2 ( ∂ g j l ∂ x i + ∂ g l i ∂ x j − ∂ g i j ∂ x l ) . {\displaystyle g_{kl}{\Gamma ^{k}}_{ij}={\frac {1}{2}}\left({\frac {\partial g_{jl}}{\partial x^{i}}}+{\frac {\partial g_{li}}{\partial x^{j}}}-{\frac {\partial g_{ij}}{\partial x^{l}}}\right).} If g is nondegenerate then Γ k i j {\displaystyle {\Gamma ^{k}}_{ij}} can be solved for directly as Γ k i j = 1 2 g k l ( ∂ g j l ∂ x i + ∂ g l i ∂ x j − ∂ g i j ∂ x l ) . {\displaystyle {\Gamma ^{k}}_{ij}={\frac {1}{2}}g^{kl}\left({\frac {\partial g_{jl}}{\partial x^{i}}}+{\frac {\partial g_{li}}{\partial x^{j}}}-{\frac {\partial g_{ij}}{\partial x^{l}}}\right).} For a very simple example that captures the essence of the description above, draw a circle on a flat sheet of paper. Travel around the circle at a constant speed. The derivative of your velocity, your acceleration vector, always points radially inward. Roll this sheet of paper into a cylinder. Now the (Euclidean) derivative of your velocity has a component that sometimes points inward toward the axis of the cylinder depending on whether you're near a solstice or an equinox. (At the point of the circle when you are moving parallel to the axis, there is no inward acceleration. Conversely, at a point (1/4 of a circle later) when the velocity is along the cylinder's bend, the inward acceleration is maximum.) This is the (Euclidean) normal component. The covariant derivative component is the component parallel to the cylinder's surface, and is the same as that before you rolled the sheet into a cylinder. == Formal definition == A covariant derivative is a (Koszul) connection on the tangent bundle and other tensor bundles: it differentiates vector fields in a way analogous to the usual differential on functions. 
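Before developing the formal definition further, the embedding picture above can be checked symbolically. The sketch below takes the standard parametrization of the unit sphere, computes the induced metric gij = ⟨∂iΨ, ∂jΨ⟩, and evaluates the Christoffel symbols from the formula just derived; the familiar values Γ^θ_φφ = −sin θ cos θ and Γ^φ_θφ = cot θ come out. The parametrization and variable names are choices made for this illustration.

```python
# Sketch (sympy): induced metric and Christoffel symbols of the unit sphere
# from its embedding Psi(theta, phi) in R^3, using
#   Gamma^k_ij = (1/2) g^{kl} (d_i g_jl + d_j g_li - d_l g_ij).
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]

Psi = sp.Matrix([sp.sin(theta) * sp.cos(phi),
                 sp.sin(theta) * sp.sin(phi),
                 sp.cos(theta)])

J = Psi.jacobian(x)                  # columns are the tangent vectors dPsi/dx^i
g = sp.simplify(J.T * J)             # induced metric: diag(1, sin(theta)**2)
g_inv = g.inv()

def christoffel(k, i, j):
    return sp.simplify(sp.Rational(1, 2) * sum(
        g_inv[k, l] * (sp.diff(g[j, l], x[i]) + sp.diff(g[l, i], x[j])
                       - sp.diff(g[i, j], x[l]))
        for l in range(2)))

print(christoffel(0, 1, 1))   # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(christoffel(1, 0, 1))   # Gamma^phi_{theta phi} =  cos(theta)/sin(theta)
```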
The definition extends to a differentiation on the dual of vector fields (i.e. covector fields) and to arbitrary tensor fields, in a unique way that ensures compatibility with the tensor product and trace operations (tensor contraction). === Functions === Given a point p ∈ M {\displaystyle p\in M} of the manifold M, a real function f : M → R {\displaystyle f:M\to \mathbb {R} } on the manifold and a tangent vector v ∈ T p M {\displaystyle \mathbf {v} \in T_{p}M} , the covariant derivative of f at p along v is the scalar at p, denoted ( ∇ v f ) p {\displaystyle \left(\nabla _{\mathbf {v} }f\right)_{p}} , that represents the principal part of the change in the value of f when the argument of f is changed by the infinitesimal displacement vector v. (This is the differential of f evaluated against the vector v.) Formally, there is a differentiable curve ϕ : [ − 1 , 1 ] → M {\displaystyle \phi :[-1,1]\to M} such that ϕ ( 0 ) = p {\displaystyle \phi (0)=p} and ϕ ′ ( 0 ) = v {\displaystyle \phi '(0)=\mathbf {v} } , and the covariant derivative of f at p is defined by ( ∇ v f ) p = ( f ∘ ϕ ) ′ ( 0 ) = lim t → 0 f ( ϕ ( t ) ) − f ( p ) t . {\displaystyle \left(\nabla _{\mathbf {v} }f\right)_{p}=\left(f\circ \phi \right)^{\prime }\left(0\right)=\lim _{t\to 0}{\frac {f(\phi \left(t\right))-f(p)}{t}}.} When v : M → T p M {\displaystyle \mathbf {v} :M\to T_{p}M} is a vector field on M, the covariant derivative ∇ v f : M → R {\displaystyle \nabla _{\mathbf {v} }f:M\to \mathbb {R} } is the function that associates with each point p in the common domain of f and v the scalar ( ∇ v f ) p {\displaystyle \left(\nabla _{\mathbf {v} }f\right)_{p}} . For a scalar function f and vector field v, the covariant derivative ∇ v f {\displaystyle \nabla _{\mathbf {v} }f} coincides with the Lie derivative L v ( f ) {\displaystyle L_{v}(f)} , and with the exterior derivative d f ( v ) {\displaystyle df(v)} . === Vector fields === Given a point p of the manifold M, a vector field u : M → T p M {\displaystyle \mathbf {u} :M\to T_{p}M} defined in a neighborhood of p and a tangent vector v ∈ T p M {\displaystyle \mathbf {v} \in T_{p}M} , the covariant derivative of u at p along v is the tangent vector at p, denoted ( ∇ v u ) p {\displaystyle (\nabla _{\mathbf {v} }\mathbf {u} )_{p}} , such that the following properties hold (for any tangent vectors v, x and y at p, vector fields u and w defined in a neighborhood of p, scalar values g and h at p, and scalar function f defined in a neighborhood of p): ( ∇ v u ) p {\displaystyle \left(\nabla _{\mathbf {v} }\mathbf {u} \right)_{p}} is linear in v {\displaystyle \mathbf {v} } so ( ∇ g x + h y u ) p = g ( p ) ( ∇ x u ) p + h ( p ) ( ∇ y u ) p {\displaystyle \left(\nabla _{g\mathbf {x} +h\mathbf {y} }\mathbf {u} \right)_{p}=g(p)\left(\nabla _{\mathbf {x} }\mathbf {u} \right)_{p}+h(p)\left(\nabla _{\mathbf {y} }\mathbf {u} \right)_{p}} ( ∇ v u ) p {\displaystyle \left(\nabla _{\mathbf {v} }\mathbf {u} \right)_{p}} is additive in u {\displaystyle \mathbf {u} } so: ( ∇ v [ u + w ] ) p = ( ∇ v u ) p + ( ∇ v w ) p {\displaystyle \left(\nabla _{\mathbf {v} }\left[\mathbf {u} +\mathbf {w} \right]\right)_{p}=\left(\nabla _{\mathbf {v} }\mathbf {u} \right)_{p}+\left(\nabla _{\mathbf {v} }\mathbf {w} \right)_{p}} ( ∇ v u ) p {\displaystyle (\nabla _{\mathbf {v} }\mathbf {u} )_{p}} obeys the product rule; i.e., where ∇ v f {\displaystyle \nabla _{\mathbf {v} }f} is defined above, ( ∇ v [ f u ] ) p = f ( p ) ( ∇ v u ) p + ( ∇ v f ) p u p . 
{\displaystyle \left(\nabla _{\mathbf {v} }\left[f\mathbf {u} \right]\right)_{p}=f(p)\left(\nabla _{\mathbf {v} }\mathbf {u} )_{p}+(\nabla _{\mathbf {v} }f\right)_{p}\mathbf {u} _{p}.} Note that ( ∇ v u ) p {\displaystyle \left(\nabla _{\mathbf {v} }\mathbf {u} \right)_{p}} depends not only on the value of u at p but also on values of u in a neighborhood of p, because the last property, the product rule, involves the directional derivative of f (by the vector v). If u and v are both vector fields defined over a common domain, then ∇ v u {\displaystyle \nabla _{\mathbf {v} }\mathbf {u} } denotes the vector field whose value at each point p of the domain is the tangent vector ( ∇ v u ) p {\displaystyle \left(\nabla _{\mathbf {v} }\mathbf {u} \right)_{p}} . === Covector fields === Given a field of covectors (or one-form) α {\displaystyle \alpha } defined in a neighborhood of p, its covariant derivative ( ∇ v α ) p {\displaystyle (\nabla _{\mathbf {v} }\alpha )_{p}} is defined in a way to make the resulting operation compatible with tensor contraction and the product rule. That is, ( ∇ v α ) p {\displaystyle (\nabla _{\mathbf {v} }\alpha )_{p}} is defined as the unique one-form at p such that the following identity is satisfied for all vector fields u in a neighborhood of p ( ∇ v α ) p ( u p ) = ∇ v [ α ( u ) ] p − α p [ ( ∇ v u ) p ] . {\displaystyle \left(\nabla _{\mathbf {v} }\alpha \right)_{p}\left(\mathbf {u} _{p}\right)=\nabla _{\mathbf {v} }\left[\alpha \left(\mathbf {u} \right)\right]_{p}-\alpha _{p}\left[\left(\nabla _{\mathbf {v} }\mathbf {u} \right)_{p}\right].} The covariant derivative of a covector field along a vector field v is again a covector field. === Tensor fields === Once the covariant derivative is defined for fields of vectors and covectors it can be defined for arbitrary tensor fields by imposing the following identities for every pair of tensor fields φ {\displaystyle \varphi } and ψ {\displaystyle \psi } in a neighborhood of the point p: ∇ v ( φ ⊗ ψ ) p = ( ∇ v φ ) p ⊗ ψ ( p ) + φ ( p ) ⊗ ( ∇ v ψ ) p , {\displaystyle \nabla _{\mathbf {v} }\left(\varphi \otimes \psi \right)_{p}=\left(\nabla _{\mathbf {v} }\varphi \right)_{p}\otimes \psi (p)+\varphi (p)\otimes \left(\nabla _{\mathbf {v} }\psi \right)_{p},} and for φ {\displaystyle \varphi } and ψ {\displaystyle \psi } of the same valence ∇ v ( φ + ψ ) p = ( ∇ v φ ) p + ( ∇ v ψ ) p . {\displaystyle \nabla _{\mathbf {v} }(\varphi +\psi )_{p}=(\nabla _{\mathbf {v} }\varphi )_{p}+(\nabla _{\mathbf {v} }\psi )_{p}.} The covariant derivative of a tensor field along a vector field v is again a tensor field of the same type. Explicitly, let T be a tensor field of type (p, q). Consider T to be a differentiable multilinear map of smooth sections α1, α2, ..., αq of the cotangent bundle T∗M and of sections X1, X2, ..., Xp of the tangent bundle TM, written T(α1, α2, ..., X1, X2, ...) into R. 
The covariant derivative of T along Y is given by the formula ( ∇ Y T ) ( α 1 , α 2 , … , X 1 , X 2 , … ) = ∇ Y ( T ( α 1 , α 2 , … , X 1 , X 2 , … ) ) − T ( ∇ Y α 1 , α 2 , … , X 1 , X 2 , … ) − T ( α 1 , ∇ Y α 2 , … , X 1 , X 2 , … ) − ⋯ − T ( α 1 , α 2 , … , ∇ Y X 1 , X 2 , … ) − T ( α 1 , α 2 , … , X 1 , ∇ Y X 2 , … ) − ⋯ {\displaystyle {\begin{aligned}(\nabla _{Y}T)\left(\alpha _{1},\alpha _{2},\ldots ,X_{1},X_{2},\ldots \right)=&{}\nabla _{Y}\left(T\left(\alpha _{1},\alpha _{2},\ldots ,X_{1},X_{2},\ldots \right)\right)\\&{}-T\left(\nabla _{Y}\alpha _{1},\alpha _{2},\ldots ,X_{1},X_{2},\ldots \right)-T\left(\alpha _{1},\nabla _{Y}\alpha _{2},\ldots ,X_{1},X_{2},\ldots \right)-\cdots \\&{}-T\left(\alpha _{1},\alpha _{2},\ldots ,\nabla _{Y}X_{1},X_{2},\ldots \right)-T\left(\alpha _{1},\alpha _{2},\ldots ,X_{1},\nabla _{Y}X_{2},\ldots \right)-\cdots \end{aligned}}} == Coordinate description == Given coordinate functions x i , i = 0 , 1 , 2 , … , {\displaystyle x^{i},\ i=0,1,2,\dots ,} any tangent vector can be described by its components in the basis e i = ∂ ∂ x i . {\displaystyle \mathbf {e} _{i}={\frac {\partial }{\partial x^{i}}}.} The covariant derivative of a basis vector along a basis vector is again a vector and so can be expressed as a linear combination Γ k e k {\displaystyle \Gamma ^{k}\mathbf {e} _{k}} . To specify the covariant derivative it is enough to specify the covariant derivative of each basis vector field e i {\displaystyle \mathbf {e} _{i}} along e j {\displaystyle \mathbf {e} _{j}} . ∇ e j e i = Γ k i j e k , {\displaystyle \nabla _{\mathbf {e} _{j}}\mathbf {e} _{i}={\Gamma ^{k}}_{ij}\mathbf {e} _{k},} the coefficients Γ i j k {\displaystyle \Gamma _{ij}^{k}} are the components of the connection with respect to a system of local coordinates. In the theory of Riemannian and pseudo-Riemannian manifolds, the components of the Levi-Civita connection with respect to a system of local coordinates are called Christoffel symbols. Then using the rules in the definition, we find that for general vector fields v = v j e j {\displaystyle \mathbf {v} =v^{j}\mathbf {e} _{j}} and u = u i e i {\displaystyle \mathbf {u} =u^{i}\mathbf {e} _{i}} we get ∇ v u = ∇ v j e j u i e i = v j ∇ e j u i e i = v j u i ∇ e j e i + v j e i ∇ e j u i = v j u i Γ k i j e k + v j ∂ u i ∂ x j e i {\displaystyle {\begin{aligned}\nabla _{\mathbf {v} }\mathbf {u} &=\nabla _{v^{j}\mathbf {e} _{j}}u^{i}\mathbf {e} _{i}\\&=v^{j}\nabla _{\mathbf {e} _{j}}u^{i}\mathbf {e} _{i}\\&=v^{j}u^{i}\nabla _{\mathbf {e} _{j}}\mathbf {e} _{i}+v^{j}\mathbf {e} _{i}\nabla _{\mathbf {e} _{j}}u^{i}\\&=v^{j}u^{i}{\Gamma ^{k}}_{ij}\mathbf {e} _{k}+v^{j}{\partial u^{i} \over \partial x^{j}}\mathbf {e} _{i}\end{aligned}}} so ∇ v u = ( v j u i Γ k i j + v j ∂ u k ∂ x j ) e k . {\displaystyle \nabla _{\mathbf {v} }\mathbf {u} =\left(v^{j}u^{i}{\Gamma ^{k}}_{ij}+v^{j}{\partial u^{k} \over \partial x^{j}}\right)\mathbf {e} _{k}.} The first term in this formula is responsible for "twisting" the coordinate system with respect to the covariant derivative and the second for changes of components of the vector field u. In particular ∇ e j u = ∇ j u = ( ∂ u i ∂ x j + u k Γ i k j ) e i {\displaystyle \nabla _{\mathbf {e} _{j}}\mathbf {u} =\nabla _{j}\mathbf {u} =\left({\frac {\partial u^{i}}{\partial x^{j}}}+u^{k}{\Gamma ^{i}}_{kj}\right)\mathbf {e} _{i}} In words: the covariant derivative is the usual derivative along the coordinates with correction terms which tell how the coordinates change. 
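The component formula just given can be transcribed almost literally. The sketch below uses plane polar coordinates, whose only nonzero Christoffel symbols for the Levi-Civita connection are Γ^r_θθ = −r and Γ^θ_rθ = Γ^θ_θr = 1/r (standard values, stated here rather than derived), together with an illustrative vector field chosen for the example.

```python
# Sketch (sympy): (nabla_j u)^i = d u^i / d x^j + Gamma^i_{kj} u^k
# in plane polar coordinates (r, theta).
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
n = 2

Gamma = [[[0] * n for _ in range(n)] for _ in range(n)]   # Gamma[upper][lower][lower]
Gamma[0][1][1] = -r                                       # Gamma^r_{theta theta}
Gamma[1][0][1] = Gamma[1][1][0] = 1 / r                   # Gamma^theta_{r theta}

u = [r * sp.cos(th), sp.sin(th) / r]                      # illustrative vector field

def cov_deriv(u, j, i):
    """i-th component of the covariant derivative of u along e_j."""
    return sp.simplify(sp.diff(u[i], x[j])
                       + sum(Gamma[i][k][j] * u[k] for k in range(n)))

for j in range(n):
    for i in range(n):
        print(f"(nabla_{x[j]} u)^({x[i]}) =", cov_deriv(u, j, i))
```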
For covectors similarly we have ∇ e j θ = ( ∂ θ i ∂ x j − θ k Γ k i j ) e ∗ i {\displaystyle \nabla _{\mathbf {e} _{j}}{\mathbf {\theta } }=\left({\frac {\partial \theta _{i}}{\partial x^{j}}}-\theta _{k}{\Gamma ^{k}}_{ij}\right){\mathbf {e} ^{*}}^{i}} where e ∗ i ( e j ) = δ i j {\displaystyle {\mathbf {e} ^{*}}^{i}(\mathbf {e} _{j})={\delta ^{i}}_{j}} . The covariant derivative of a type (r, s) tensor field along e c {\displaystyle e_{c}} is given by the expression: ( ∇ e c T ) a 1 … a r b 1 … b s = ∂ ∂ x c T a 1 … a r b 1 … b s + Γ a 1 d c T d a 2 … a r b 1 … b s + ⋯ + Γ a r d c T a 1 … a r − 1 d b 1 … b s − Γ d b 1 c T a 1 … a r d b 2 … b s − ⋯ − Γ d b s c T a 1 … a r b 1 … b s − 1 d . {\displaystyle {\begin{aligned}{(\nabla _{e_{c}}T)^{a_{1}\ldots a_{r}}}_{b_{1}\ldots b_{s}}={}&{\frac {\partial }{\partial x^{c}}}{T^{a_{1}\ldots a_{r}}}_{b_{1}\ldots b_{s}}\\&+\,{\Gamma ^{a_{1}}}_{dc}{T^{da_{2}\ldots a_{r}}}_{b_{1}\ldots b_{s}}+\cdots +{\Gamma ^{a_{r}}}_{dc}{T^{a_{1}\ldots a_{r-1}d}}_{b_{1}\ldots b_{s}}\\&-\,{\Gamma ^{d}}_{b_{1}c}{T^{a_{1}\ldots a_{r}}}_{db_{2}\ldots b_{s}}-\cdots -{\Gamma ^{d}}_{b_{s}c}{T^{a_{1}\ldots a_{r}}}_{b_{1}\ldots b_{s-1}d}.\end{aligned}}} Or, in words: take the partial derivative of the tensor and add: + Γ a i d c {\displaystyle +{\Gamma ^{a_{i}}}_{dc}} for every upper index a i {\displaystyle a_{i}} , and − Γ d b i c {\displaystyle -{\Gamma ^{d}}_{b_{i}c}} for every lower index b i {\displaystyle b_{i}} . If instead of a tensor, one is trying to differentiate a tensor density (of weight +1), then one also adds a term − Γ d d c T a 1 … a r b 1 … b s . {\displaystyle -{\Gamma ^{d}}_{dc}{T^{a_{1}\ldots a_{r}}}_{b_{1}\ldots b_{s}}.} If it is a tensor density of weight W, then multiply that term by W. For example, − g {\textstyle {\sqrt {-g}}} is a scalar density (of weight +1), so we get: ( − g ) ; c = ( − g ) , c − − g Γ d d c {\displaystyle \left({\sqrt {-g}}\right)_{;c}=\left({\sqrt {-g}}\right)_{,c}-{\sqrt {-g}}\,{\Gamma ^{d}}_{dc}} where semicolon ";" indicates covariant differentiation and comma "," indicates partial differentiation. Incidentally, this particular expression is equal to zero, because the covariant derivative of a function solely of the metric is always zero. == Notation == In textbooks on physics, the covariant derivative is sometimes simply stated in terms of its components in this equation. Often a notation is used in which the covariant derivative is given with a semicolon, while a normal partial derivative is indicated by a comma. 
In this notation we write the same as: ∇ e j v = d e f v s ; j e s v i ; j = v i , j + v k Γ i k j {\displaystyle \nabla _{e_{j}}\mathbf {v} \ {\stackrel {\mathrm {def} }{=}}\ {v^{s}}_{;j}\mathbf {e} _{s}\;\;\;\;\;\;{v^{i}}_{;j}={v^{i}}_{,j}+v^{k}{\Gamma ^{i}}_{kj}} In case two or more indexes appear after the semicolon, all of them must be understood as covariant derivatives: ∇ e k ( ∇ e j v ) = d e f v s ; j k e s {\displaystyle \nabla _{e_{k}}\left(\nabla _{e_{j}}\mathbf {v} \right)\ {\stackrel {\mathrm {def} }{=}}\ {v^{s}}_{;jk}\mathbf {e} _{s}} In some older texts (notably Adler, Bazin & Schiffer, Introduction to General Relativity), the covariant derivative is denoted by a double pipe and the partial derivative by single pipe: ∇ e j v = d e f v i | | j = v i | j + v k Γ i k j {\displaystyle \nabla _{e_{j}}\mathbf {v} \ {\stackrel {\mathrm {def} }{=}}\ {v^{i}}_{||j}={v^{i}}_{|j}+v^{k}{\Gamma ^{i}}_{kj}} == Covariant derivative by field type == For a scalar field ϕ {\displaystyle \phi \,} , covariant differentiation is simply partial differentiation: ϕ ; a ≡ ∂ a ϕ {\displaystyle \phi _{;a}\equiv \partial _{a}\phi } For a contravariant vector field λ a {\displaystyle \lambda ^{a}} , we have: λ a ; b ≡ ∂ b λ a + Γ a b c λ c {\displaystyle {\lambda ^{a}}_{;b}\equiv \partial _{b}\lambda ^{a}+{\Gamma ^{a}}_{bc}\lambda ^{c}} For a covariant vector field λ a {\displaystyle \lambda _{a}} , we have: λ a ; c ≡ ∂ c λ a − Γ b c a λ b {\displaystyle \lambda _{a;c}\equiv \partial _{c}\lambda _{a}-{\Gamma ^{b}}_{ca}\lambda _{b}} For a type (2,0) tensor field τ a b {\displaystyle \tau ^{ab}} , we have: τ a b ; c ≡ ∂ c τ a b + Γ a c d τ d b + Γ b c d τ a d {\displaystyle {\tau ^{ab}}_{;c}\equiv \partial _{c}\tau ^{ab}+{\Gamma ^{a}}_{cd}\tau ^{db}+{\Gamma ^{b}}_{cd}\tau ^{ad}} For a type (0,2) tensor field τ a b {\displaystyle \tau _{ab}} , we have: τ a b ; c ≡ ∂ c τ a b − Γ d c a τ d b − Γ d c b τ a d {\displaystyle \tau _{ab;c}\equiv \partial _{c}\tau _{ab}-{\Gamma ^{d}}_{ca}\tau _{db}-{\Gamma ^{d}}_{cb}\tau _{ad}} For a type (1,1) tensor field τ a b {\displaystyle {\tau ^{a}}_{b}} , we have: τ a b ; c ≡ ∂ c τ a b + Γ a c d τ d b − Γ d c b τ a d {\displaystyle {\tau ^{a}}_{b;c}\equiv \partial _{c}{\tau ^{a}}_{b}+{\Gamma ^{a}}_{cd}{\tau ^{d}}_{b}-{\Gamma ^{d}}_{cb}{\tau ^{a}}_{d}} The notation above is meant in the sense τ a b ; c ≡ ( ∇ e c τ ) a b {\displaystyle {\tau ^{ab}}_{;c}\equiv \left(\nabla _{\mathbf {e} _{c}}\tau \right)^{ab}} == Properties == In general, covariant derivatives do not commute. By example, the covariant derivatives of vector field λ a ; b c ≠ λ a ; c b {\displaystyle \lambda _{a;bc}\neq \lambda _{a;cb}} . The Riemann tensor R d a b c {\displaystyle {R^{d}}_{abc}} is defined such that: λ a ; b c − λ a ; c b = R d a b c λ d {\displaystyle \lambda _{a;bc}-\lambda _{a;cb}={R^{d}}_{abc}\lambda _{d}} or, equivalently, λ a ; b c − λ a ; c b = − R a d b c λ d {\displaystyle {\lambda ^{a}}_{;bc}-{\lambda ^{a}}_{;cb}=-{R^{a}}_{dbc}\lambda ^{d}} The covariant derivative of a (2,0)-tensor field fulfills: τ a b ; c d − τ a b ; d c = − R a e c d τ e b − R b e c d τ a e {\displaystyle {\tau ^{ab}}_{;cd}-{\tau ^{ab}}_{;dc}=-{R^{a}}_{ecd}\tau ^{eb}-{R^{b}}_{ecd}\tau ^{ae}} The latter can be shown by taking (without loss of generality) that τ a b = λ a μ b {\displaystyle \tau ^{ab}=\lambda ^{a}\mu ^{b}} . 
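As a consistency check of the component rules above, the following sketch applies the (0,2) rule to the metric of the Euclidean plane in polar coordinates and confirms that g_ab;c = 0, the metric-compatibility property of the Levi-Civita connection mentioned in the Remarks. The coordinates and helper functions are chosen for the illustration.

```python
# Sketch (sympy): tau_ab;c = d_c tau_ab - Gamma^d_{ca} tau_db - Gamma^d_{cb} tau_ad
# applied to the polar-coordinate metric of the plane; every component vanishes.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
n = 2

g = sp.Matrix([[1, 0], [0, r**2]])
g_inv = g.inv()

def Gamma(k, i, j):
    # Christoffel symbols of the Levi-Civita connection from the metric
    return sp.Rational(1, 2) * sum(
        g_inv[k, l] * (sp.diff(g[j, l], x[i]) + sp.diff(g[l, i], x[j])
                       - sp.diff(g[i, j], x[l])) for l in range(n))

def cov_deriv_02(tau, a, b, c):
    return sp.simplify(sp.diff(tau[a, b], x[c])
                       - sum(Gamma(d, c, a) * tau[d, b] for d in range(n))
                       - sum(Gamma(d, c, b) * tau[a, d] for d in range(n)))

print(all(cov_deriv_02(g, a, b, c) == 0
          for a in range(n) for b in range(n) for c in range(n)))   # True
```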
== Derivative along a curve == Since the covariant derivative ∇ X T {\displaystyle \nabla _{X}T} of a tensor field T at a point p depends only on the value of the vector field X at p one can define the covariant derivative along a smooth curve γ ( t ) {\displaystyle \gamma (t)} in a manifold: D t T = ∇ γ ˙ ( t ) T . {\displaystyle D_{t}T=\nabla _{{\dot {\gamma }}(t)}T.} Note that the tensor field T only needs to be defined on the curve γ ( t ) {\displaystyle \gamma (t)} for this definition to make sense. In particular, γ ˙ ( t ) {\displaystyle {\dot {\gamma }}(t)} is a vector field along the curve γ {\displaystyle \gamma } itself. If ∇ γ ˙ ( t ) γ ˙ ( t ) {\displaystyle \nabla _{{\dot {\gamma }}(t)}{\dot {\gamma }}(t)} vanishes then the curve is called a geodesic of the covariant derivative. If the covariant derivative is the Levi-Civita connection of a positive-definite metric then the geodesics for the connection are precisely the geodesics of the metric that are parametrized by arc length. The derivative along a curve is also used to define the parallel transport along the curve. Sometimes the covariant derivative along a curve is called absolute or intrinsic derivative. == Relation to Lie derivative == A covariant derivative introduces an extra geometric structure on a manifold that allows vectors in neighboring tangent spaces to be compared: there is no canonical way to compare vectors from different tangent spaces because there is no canonical coordinate system. There is however another generalization of directional derivatives which is canonical: the Lie derivative, which evaluates the change of one vector field along the flow of another vector field. Thus, one must know both vector fields in a neighborhood, not merely at a single point. The covariant derivative on the other hand introduces its own change for vectors in a given direction, and it only depends on the vector direction at a single point, rather than a vector field in a neighborhood of a point. In other words, the covariant derivative is linear (over C∞(M)) in the direction argument, while the Lie derivative is linear in neither argument. Note that the antisymmetrized covariant derivative ∇uv − ∇vu, and the Lie derivative Luv differ by the torsion of the connection, so that if a connection is torsion free, then its antisymmetrization is the Lie derivative. == See also == == Notes == == References == Kobayashi, Shoshichi; Nomizu, Katsumi (1996). Foundations of Differential Geometry, Vol. 1 (New ed.). Wiley Interscience. ISBN 0-471-15733-3. I.Kh. Sabitov (2001) [1994], "Covariant differentiation", Encyclopedia of Mathematics, EMS Press Sternberg, Shlomo (1964). Lectures on Differential Geometry. Prentice-Hall. Spivak, Michael (1999). A Comprehensive Introduction to Differential Geometry (Volume Two). Publish or Perish, Inc.
Wikipedia/Covariant_differential
In physics, the Bardeen–Cooper–Schrieffer (BCS) theory (named after John Bardeen, Leon Cooper, and John Robert Schrieffer) is the first microscopic theory of superconductivity since Heike Kamerlingh Onnes's 1911 discovery. The theory describes superconductivity as a microscopic effect caused by a condensation of Cooper pairs. The theory is also used in nuclear physics to describe the pairing interaction between nucleons in an atomic nucleus. It was proposed by Bardeen, Cooper, and Schrieffer in 1957; they received the Nobel Prize in Physics for this theory in 1972. == History == Rapid progress in the understanding of superconductivity gained momentum in the mid-1950s. It began with the 1948 paper, "On the Problem of the Molecular Theory of Superconductivity", where Fritz London proposed that the phenomenological London equations may be consequences of the coherence of a quantum state. In 1953, Brian Pippard, motivated by penetration experiments, proposed that this would modify the London equations via a new scale parameter called the coherence length. John Bardeen then argued in the 1955 paper, "Theory of the Meissner Effect in Superconductors", that such a modification naturally occurs in a theory with an energy gap. The key ingredient was Leon Cooper's calculation of the bound states of electrons subject to an attractive force in his 1956 paper, "Bound Electron Pairs in a Degenerate Fermi Gas". In 1957 Bardeen and Cooper assembled these ingredients and constructed such a theory, the BCS theory, with Robert Schrieffer. The theory was first published in April 1957 in the letter, "Microscopic theory of superconductivity". The demonstration that the phase transition is second order, that it reproduces the Meissner effect and the calculations of specific heats and penetration depths appeared in the December 1957 article, "Theory of superconductivity". They received the Nobel Prize in Physics in 1972 for this theory. In 1986, high-temperature superconductivity was discovered in La-Ba-Cu-O, at temperatures up to 30 K. Following experiments determined more materials with transition temperatures up to about 130 K, considerably above the previous limit of about 30 K. It is experimentally very well known that the transition temperature strongly depends on pressure. In general, it is believed that BCS theory alone cannot explain this phenomenon and that other effects are in play. These effects are still not yet fully understood; it is possible that they even control superconductivity at low temperatures for some materials. == Overview == At sufficiently low temperatures, electrons near the Fermi surface become unstable against the formation of Cooper pairs. Cooper showed such binding will occur in the presence of an attractive potential, no matter how weak. In conventional superconductors, an attraction is generally attributed to an electron-lattice interaction. The BCS theory, however, requires only that the potential be attractive, regardless of its origin. In the BCS framework, superconductivity is a macroscopic effect which results from the condensation of Cooper pairs. These have some bosonic properties, and bosons, at sufficiently low temperature, can form a large Bose–Einstein condensate. Superconductivity was simultaneously explained by Nikolay Bogolyubov, by means of the Bogoliubov transformations. 
In many superconductors, the attractive interaction between electrons (necessary for pairing) is brought about indirectly by the interaction between the electrons and the vibrating crystal lattice (the phonons). Roughly speaking the picture is the following: An electron moving through a conductor will attract nearby positive charges in the lattice. This deformation of the lattice causes another electron, with opposite spin, to move into the region of higher positive charge density. The two electrons then become correlated. Because there are a lot of such electron pairs in a superconductor, these pairs overlap very strongly and form a highly collective condensate. In this "condensed" state, the breaking of one pair will change the energy of the entire condensate - not just a single electron, or a single pair. Thus, the energy required to break any single pair is related to the energy required to break all of the pairs (or more than just two electrons). Because the pairing increases this energy barrier, kicks from oscillating atoms in the conductor (which are small at sufficiently low temperatures) are not enough to affect the condensate as a whole, or any individual "member pair" within the condensate. Thus the electrons stay paired together and resist all kicks, and the electron flow as a whole (the current through the superconductor) will not experience resistance. Thus, the collective behavior of the condensate is a crucial ingredient necessary for superconductivity. === Details === BCS theory starts from the assumption that there is some attraction between electrons, which can overcome the Coulomb repulsion. In most materials (in low temperature superconductors), this attraction is brought about indirectly by the coupling of electrons to the crystal lattice (as explained above). However, the results of BCS theory do not depend on the origin of the attractive interaction. For instance, Cooper pairs have been observed in ultracold gases of fermions where a homogeneous magnetic field has been tuned to their Feshbach resonance. The original results of BCS (discussed below) described an s-wave superconducting state, which is the rule among low-temperature superconductors but is not realized in many unconventional superconductors such as the d-wave high-temperature superconductors. Extensions of BCS theory exist to describe these other cases, although they are insufficient to completely describe the observed features of high-temperature superconductivity. BCS is able to give an approximation for the quantum-mechanical many-body state of the system of (attractively interacting) electrons inside the metal. This state is now known as the BCS state. In the normal state of a metal, electrons move independently, whereas in the BCS state, they are bound into Cooper pairs by the attractive interaction. The BCS formalism is based on the reduced potential for the electrons' attraction. Within this potential, a variational ansatz for the wave function is proposed. This ansatz was later shown to be exact in the dense limit of pairs. Note that the continuous crossover between the dilute and dense regimes of attracting pairs of fermions is still an open problem, which now attracts a lot of attention within the field of ultracold gases. 
=== Underlying evidence ===
The HyperPhysics website pages at Georgia State University summarize some key background to BCS theory as follows:

Evidence of a band gap at the Fermi level (described as "a key piece in the puzzle"): the existence of a critical temperature and critical magnetic field implied a band gap, and suggested a phase transition, but single electrons are forbidden from condensing to the same energy level by the Pauli exclusion principle. The site comments that "a drastic change in conductivity demanded a drastic change in electron behavior". Conceivably, pairs of electrons might act like bosons instead, which are bound by different condensate rules and do not have the same limitation.

Isotope effect on the critical temperature, suggesting lattice interactions: the Debye frequency of phonons in a lattice is proportional to the inverse of the square root of the mass of the lattice ions. It was shown that the superconducting transition temperature of mercury indeed showed the same dependence, by substituting the most abundant natural mercury isotope, 202Hg, with a different isotope, 198Hg.

An exponential rise in heat capacity near the critical temperature for some superconductors: an exponential increase in heat capacity near the critical temperature also suggests an energy band gap for the superconducting material. As superconducting vanadium is warmed toward its critical temperature, its heat capacity increases greatly within a very few degrees; this suggests an energy gap being bridged by thermal energy.

The lessening of the measured energy gap towards the critical temperature: this suggests that some kind of binding energy exists but is gradually weakened as the temperature increases toward the critical temperature. A binding energy implies two or more particles or other entities that are bound together in the superconducting state. This helped to support the idea of bound particles – specifically electron pairs – and, together with the above, helped to paint a general picture of paired electrons and their lattice interactions.

== Implications ==
BCS derived several important theoretical predictions that are independent of the details of the interaction, since the quantitative predictions mentioned below hold for any sufficiently weak attraction between the electrons, and this last condition is fulfilled for many low-temperature superconductors – the so-called weak-coupling case. These have been confirmed in numerous experiments: The electrons are bound into Cooper pairs, and these pairs are correlated due to the Pauli exclusion principle for the electrons from which they are constructed. Therefore, in order to break a pair, one has to change the energies of all other pairs. This means there is an energy gap for single-particle excitation, unlike in the normal metal (where the state of an electron can be changed by adding an arbitrarily small amount of energy). This energy gap is highest at low temperatures but vanishes at the transition temperature when superconductivity ceases to exist. The BCS theory gives an expression that shows how the gap grows with the strength of the attractive interaction and the (normal phase) single-particle density of states at the Fermi level. Furthermore, it describes how the density of states is changed on entering the superconducting state, where there are no electronic states any more at the Fermi level. The energy gap is most directly observed in tunneling experiments and in reflection of microwaves from superconductors.
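The isotope-effect scaling noted in the evidence above – the Debye frequency, and with it the transition temperature, varying as the inverse square root of the ionic mass – can be illustrated in a few lines of Python. This is only a sketch: the nominal transition temperature below is an illustrative figure for mercury, not a measured value for either isotope.

```python
# BCS-style isotope effect: Tc is proportional to M**-0.5, so substituting a
# lighter isotope raises the transition temperature slightly.
tc_202 = 4.15            # K, nominal Tc taken for 202Hg (illustrative value)
m_202, m_198 = 202.0, 198.0

tc_198 = tc_202 * (m_202 / m_198) ** 0.5
print(f"Predicted Tc for 198Hg: {tc_198:.3f} K "
      f"({100 * (tc_198 / tc_202 - 1):.1f}% above the 202Hg value)")
```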
BCS theory predicts the dependence of the value of the energy gap Δ at temperature T on the critical temperature Tc. The ratio between the value of the energy gap at zero temperature and the value of the superconducting transition temperature (expressed in energy units) takes the universal value $\Delta(T=0)=1.764\,k_{\mathrm{B}}T_{\mathrm{c}}$, independent of material. Near the critical temperature the relation asymptotes to $\Delta(T\to T_{\mathrm{c}})\approx 3.06\,k_{\mathrm{B}}T_{\mathrm{c}}\sqrt{1-(T/T_{\mathrm{c}})}$, which is of the form suggested the previous year by M. J. Buckingham, based on the fact that the superconducting phase transition is second order, that the superconducting phase has a mass gap, and on Blevins, Gordy and Fairbank's experimental results of the previous year on the absorption of millimeter waves by superconducting tin. Due to the energy gap, the specific heat of the superconductor is suppressed strongly (exponentially) at low temperatures, there being no thermal excitations left. However, before reaching the transition temperature, the specific heat of the superconductor becomes even higher than that of the normal conductor (measured immediately above the transition), and the ratio of these two values is found to be universally given by 2.5. BCS theory correctly predicts the Meissner effect, i.e. the expulsion of a magnetic field from the superconductor and the variation of the penetration depth (the extent of the screening currents flowing below the metal's surface) with temperature. It also describes the variation of the critical magnetic field (above which the superconductor can no longer expel the field but becomes normal conducting) with temperature. BCS theory relates the value of the critical field at zero temperature to the value of the transition temperature and the density of states at the Fermi level. In its simplest form, BCS gives the superconducting transition temperature Tc in terms of the electron-phonon coupling potential V and the Debye cutoff energy ED: $k_{\mathrm{B}}T_{\mathrm{c}}=1.134\,E_{\mathrm{D}}\,e^{-1/N(0)V}$, where N(0) is the electronic density of states at the Fermi level. For more details, see Cooper pairs. The BCS theory reproduces the isotope effect, which is the experimental observation that for a given superconducting material, the critical temperature is inversely proportional to the square root of the mass of the isotope used in the material. The isotope effect was reported by two groups on 24 March 1950, who discovered it independently working with different mercury isotopes, although a few days before publication they learned of each other's results at the ONR conference in Atlanta. The two groups are Emanuel Maxwell, and C. A. Reynolds, B. Serin, W. H. Wright, and L. B. Nesbitt. The choice of isotope ordinarily has little effect on the electrical properties of a material, but does affect the frequency of lattice vibrations. This effect suggests that superconductivity is related to vibrations of the lattice. This is incorporated into BCS theory, where lattice vibrations yield the binding energy of electrons in a Cooper pair. Little–Parks experiment – one of the first indications of the importance of the Cooper-pairing principle.
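The three quantitative relations quoted above – the zero-temperature gap, the near-Tc asymptote, and the weak-coupling expression for Tc – are straightforward to evaluate numerically. The following Python sketch does so; the aluminium transition temperature and the Debye-energy/coupling values are illustrative inputs, not results from the BCS papers.

```python
import math

k_B = 8.617333e-5                     # Boltzmann constant, eV/K

def gap_at_zero(tc):
    """Zero-temperature gap Delta(0) = 1.764 kB Tc, in eV."""
    return 1.764 * k_B * tc

def gap_near_tc(t, tc):
    """Gap just below Tc: Delta ~ 3.06 kB Tc sqrt(1 - T/Tc), in eV."""
    return 3.06 * k_B * tc * math.sqrt(1.0 - t / tc)

def tc_from_coupling(debye_energy_ev, n0_v):
    """Weak-coupling Tc from kB Tc = 1.134 ED exp(-1/(N(0)V)), in K."""
    return 1.134 * debye_energy_ev * math.exp(-1.0 / n0_v) / k_B

tc = 1.2                              # K, roughly the Tc of aluminium
print(f"Delta(0)      ~ {gap_at_zero(tc) * 1e6:.0f} micro-eV")
print(f"Delta(0.9 Tc) ~ {gap_near_tc(0.9 * tc, tc) * 1e6:.0f} micro-eV")
print(f"Tc(ED = 30 meV, N(0)V = 0.25) ~ {tc_from_coupling(0.030, 0.25):.1f} K")
```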
== See also == Magnesium diboride, considered a BCS superconductor Quasiparticle Little–Parks effect, one of the first indications of the importance of the Cooper pairing principle. == References == === Primary sources === Cooper, Leon N. (1956). "Bound Electron Pairs in a Degenerate Fermi Gas". Physical Review. 104 (4): 1189–1190. Bibcode:1956PhRv..104.1189C. doi:10.1103/PhysRev.104.1189. Bardeen, J.; Cooper, L. N.; Schrieffer, J. R. (1957). "Microscopic Theory of Superconductivity". Physical Review. 106 (1): 162–164. Bibcode:1957PhRv..106..162B. doi:10.1103/PhysRev.106.162. Bardeen, J.; Cooper, L. N.; Schrieffer, J. R. (1957). "Theory of Superconductivity". Physical Review. 108 (5): 1175–1204. Bibcode:1957PhRv..108.1175B. doi:10.1103/PhysRev.108.1175. == Further reading == John Robert Schrieffer, Theory of Superconductivity, (1964), ISBN 0-7382-0120-0 Michael Tinkham, Introduction to Superconductivity, ISBN 0-486-43503-2 Pierre-Gilles de Gennes, Superconductivity of Metals and Alloys, ISBN 0-7382-0101-4. Cooper, Leon N; Feldman, Dmitri, eds. (2010). BCS: 50 Years (book). World Scientific. ISBN 978-981-4304-64-1. Schmidt, Vadim Vasil'evich. The physics of superconductors: Introduction to fundamentals and applications. Springer Science & Business Media, 2013. == External links == Hyperphysics page on BCS Dance analogy Archived 2011-06-29 at the Wayback Machine of BCS theory as explained by Bob Schrieffer (audio recording) Mean-Field Theory: Hartree-Fock and BCS in E. Pavarini, E. Koch, J. van den Brink, and G. Sawatzky: Quantum materials: Experiments and Theory, Jülich 2016, ISBN 978-3-95806-159-0
Wikipedia/Bcs_theory
In electronics, diode modelling refers to the mathematical models used to approximate the actual behaviour of real diodes to enable calculations and circuit analysis. A diode's I-V curve is nonlinear. A very accurate, but complicated, physical model composes the I-V curve from three exponentials with slightly different steepnesses (i.e. ideality factors), which correspond to different recombination mechanisms in the device; at very large and very tiny currents the curve can be continued by linear segments (i.e. resistive behaviour). In a relatively good approximation a diode is modelled by the single-exponential Shockley diode law. This nonlinearity still complicates calculations in circuits involving diodes, so even simpler models are often used. This article discusses the modelling of p-n junction diodes, but the techniques may be generalized to other solid-state diodes.

== Large-signal modelling ==
=== Shockley diode model ===
The Shockley diode equation relates the diode current $I$ of a p-n junction diode to the diode voltage $V_D$. This relationship is the diode I-V characteristic:
$$I=I_S\left(e^{\frac{V_D}{nV_{\text{T}}}}-1\right),$$
where $I_S$ is the saturation current or scale current of the diode (the magnitude of the current that flows for negative $V_D$ in excess of a few $V_{\text{T}}$, typically $10^{-12}$ A). The scale current is proportional to the cross-sectional area of the diode. Continuing with the symbols: $V_{\text{T}}$ is the thermal voltage ($kT/q$, about 26 mV at normal temperatures), and $n$ is known as the diode ideality factor (for silicon diodes $n$ is approximately 1 to 2). When $V_D \gg nV_{\text{T}}$ the formula can be simplified to:
$$I\approx I_S\cdot e^{\frac{V_D}{nV_{\text{T}}}}.$$
This expression is, however, only an approximation of a more complex I-V characteristic. Its applicability is particularly limited in the case of ultra-shallow junctions, for which better analytical models exist.

==== Diode-resistor circuit example ====
To illustrate the complications in using this law, consider the problem of finding the voltage across the diode in Figure 1. Because the current flowing through the diode is the same as the current throughout the entire circuit, we can lay down another equation. By Kirchhoff's laws, the current flowing in the circuit is
$$I={\frac{V_S-V_D}{R}}.$$
These two equations determine the diode current and the diode voltage. To solve these two equations, we could substitute the current $I$ from the second equation into the first equation, and then try to rearrange the resulting equation to get $V_D$ in terms of $V_S$. A difficulty with this method is that the diode law is nonlinear. Nonetheless, a formula expressing $I$ directly in terms of $V_S$ without involving $V_D$ can be obtained using the Lambert W-function, which is the inverse function of $f(w)=we^{w}$, that is, $w=W(f)$. This solution is discussed next.

=== Explicit solution ===
An explicit expression for the diode current can be obtained in terms of the Lambert W-function (also called the Omega function).
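Before turning to the closed-form route, the two simultaneous equations above (the Shockley law and Kirchhoff's law for the resistor) can simply be handed to a numerical root-finder. The sketch below assumes illustrative component values and uses SciPy's bracketing solver; it is not taken from the article itself.

```python
import numpy as np
from scipy.optimize import brentq

I_S, n, V_T = 1e-12, 1.0, 0.02585     # A, ideality factor, thermal voltage (V)
V_S, R = 5.0, 1000.0                  # supply voltage (V) and series resistance (ohm)

def mismatch(v_d):
    """Diode current minus resistor current; zero at the operating point."""
    return I_S * (np.exp(v_d / (n * V_T)) - 1.0) - (V_S - v_d) / R

v_d = brentq(mismatch, 0.0, V_S)      # the root is bracketed between 0 and V_S
i = (V_S - v_d) / R
print(f"V_D ~ {v_d:.3f} V, I ~ {i * 1e3:.2f} mA")   # roughly 0.57 V and 4.4 mA here
```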
A guide to these manipulations follows. A new variable $w$ is introduced as
$$w={\frac{I_S R}{nV_{\text{T}}}}\left({\frac{I}{I_S}}+1\right).$$
Following the substitutions $I/I_S=e^{V_D/nV_{\text{T}}}-1$:
$$we^{w}={\frac{I_S R}{nV_{\text{T}}}}\,e^{\frac{V_D}{nV_{\text{T}}}}\,e^{{\frac{I_S R}{nV_{\text{T}}}}\left({\frac{I}{I_S}}+1\right)}$$
and $V_D=V_S-IR$:
$$we^{w}={\frac{I_S R}{nV_{\text{T}}}}\,e^{\frac{V_S}{nV_{\text{T}}}}\,e^{\frac{-IR}{nV_{\text{T}}}}\,e^{\frac{IR}{nV_{\text{T}}}}\,e^{\frac{I_S R}{nV_{\text{T}}}},$$
rearrangement of the diode law in terms of $w$ becomes:
$$we^{w}={\frac{I_S R}{nV_{\text{T}}}}\,e^{\frac{V_S+I_S R}{nV_{\text{T}}}},$$
which using the Lambert $W$-function becomes
$$w=W\left({\frac{I_S R}{nV_{\text{T}}}}e^{\frac{V_S+I_S R}{nV_{\text{T}}}}\right).$$
The final explicit solution is
$$I={\frac{nV_{\text{T}}}{R}}W\left({\frac{I_S R}{nV_{\text{T}}}}e^{\frac{V_S+I_S R}{nV_{\text{T}}}}\right)-I_S.$$
With the approximations (valid for the most common values of the parameters) $I_S R\ll V_S$ and $I/I_S\gg 1$, this solution becomes
$$I\approx {\frac{nV_{\text{T}}}{R}}W\left({\frac{I_S R}{nV_{\text{T}}}}e^{\frac{V_S}{nV_{\text{T}}}}\right).$$
Once the current is determined, the diode voltage can be found using either of the other equations. For large $x$, $W(x)$ can be approximated by $W(x)=\ln x-\ln \ln x+o(1)$. For common physical parameters and resistances, ${\frac{I_S R}{nV_{\text{T}}}}e^{\frac{V_S}{nV_{\text{T}}}}$ will be on the order of $10^{40}$.

==== Iterative solution ====
The diode voltage $V_D$ can be found in terms of $V_S$ for any particular set of values by an iterative method using a calculator or computer. The diode law is rearranged by dividing by $I_S$ and adding 1. The diode law becomes
$$e^{\frac{V_D}{nV_{\text{T}}}}={\frac{I}{I_S}}+1.$$
By taking natural logarithms of both sides the exponential is removed, and the equation becomes
$${\frac{V_D}{nV_{\text{T}}}}=\ln \left({\frac{I}{I_S}}+1\right).$$
For any $I$, this equation determines $V_D$. However, $I$ also must satisfy the Kirchhoff's law equation, given above. This expression is substituted for $I$ to obtain
$${\frac{V_D}{nV_{\text{T}}}}=\ln \left({\frac{V_S-V_D}{RI_S}}+1\right),$$
or
$$V_D=nV_{\text{T}}\ln \left({\frac{V_S-V_D}{RI_S}}+1\right).$$
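The closed-form Lambert-W expression and the logarithmic rearrangement just obtained can both be evaluated numerically. A sketch follows, using the same illustrative component values as before; note that for very large source voltages the exponential inside the W argument can overflow a double, which is one practical limit of the closed form.

```python
import numpy as np
from scipy.special import lambertw

I_S, n, V_T = 1e-12, 1.0, 0.02585
V_S, R = 5.0, 1000.0

# Closed form: I = (n*V_T/R) * W( (I_S*R/(n*V_T)) * exp((V_S + I_S*R)/(n*V_T)) ) - I_S
arg = (I_S * R / (n * V_T)) * np.exp((V_S + I_S * R) / (n * V_T))
i_w = (n * V_T / R) * lambertw(arg).real - I_S

# Fixed-point iteration of V_D = n*V_T*ln((V_S - V_D)/(R*I_S) + 1)
v_d = 0.6                              # almost any initial guess converges here
for _ in range(20):
    v_d = n * V_T * np.log((V_S - v_d) / (R * I_S) + 1.0)

print(f"Lambert-W solution: I ~ {i_w * 1e3:.3f} mA")
print(f"Iterated solution:  V_D ~ {v_d:.4f} V, I ~ {(V_S - v_d) / R * 1e3:.3f} mA")
```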
The voltage of the source $V_S$ is a known given value, but $V_D$ is on both sides of the equation, which forces an iterative solution: a starting value for $V_D$ is guessed and put into the right side of the equation. Carrying out the various operations on the right side, we come up with a new value for $V_D$. This new value now is substituted on the right side, and so forth. If this iteration converges, the values of $V_D$ become closer and closer together as the process continues, and we can stop iteration when the accuracy is sufficient. Once $V_D$ is found, $I$ can be found from the Kirchhoff's law equation. Sometimes an iterative procedure depends critically on the first guess. In this example, almost any first guess will do, say $V_D = 600\ \text{mV}$. Sometimes an iterative procedure does not converge at all: in this problem an iteration based on the exponential function does not converge, and that is why the equations were rearranged to use a logarithm. Finding a convergent iterative formulation is an art, and every problem is different.

==== Graphical solution ====
Graphical analysis is a simple way to derive a numerical solution to the transcendental equations describing the diode. As with most graphical methods, it has the advantage of easy visualization. By plotting the I-V curves, it is possible to obtain an approximate solution to any arbitrary degree of accuracy. This process is the graphical equivalent of the two previous approaches, which are more amenable to computer implementation. This method plots the two current-voltage equations on a graph, and the point of intersection of the two curves satisfies both equations, giving the value of the current flowing through the circuit and the voltage across the diode. The figure illustrates such a method.

=== Piecewise linear model ===
In practice, the graphical method is complicated and impractical for complex circuits. Another method of modelling a diode is called piecewise linear (PWL) modelling. In mathematics, this means taking a function and breaking it down into several linear segments. This method is used to approximate the diode characteristic curve as a series of linear segments. The real diode is modelled as 3 components in series: an ideal diode, a voltage source and a resistor. The figure shows a real diode I-V curve being approximated by a two-segment piecewise linear model. Typically the sloped line segment would be chosen tangent to the diode curve at the Q-point. Then the slope of this line is given by the reciprocal of the small-signal resistance of the diode at the Q-point.

==== Mathematically idealized diode ====
Firstly, consider a mathematically idealized diode. In such an ideal diode, if the diode is reverse biased, the current flowing through it is zero. This ideal diode starts conducting at 0 V, and for any positive voltage an infinite current flows and the diode acts like a short circuit. The I-V characteristics of an ideal diode are shown below:

==== Ideal diode in series with voltage source ====
Now consider the case when we add a voltage source in series with the diode in the form shown below: When forward biased, the ideal diode is simply a short circuit and when reverse biased, an open circuit.
If the anode of the diode is connected to 0 V, the voltage at the cathode will be at $V_t$ and so the potential at the cathode will be greater than the potential at the anode and the diode will be reverse biased. In order to get the diode to conduct, the voltage at the anode will need to be taken to $V_t$. This circuit approximates the cut-in voltage present in real diodes. The combined I-V characteristic of this circuit is shown below: The Shockley diode model can be used to predict the approximate value of $V_t$:
$$\begin{aligned}&I=I_S\left(e^{\frac{V_D}{n\cdot V_{\text{T}}}}-1\right)\\\Leftrightarrow {}&\ln \left(1+{\frac{I}{I_S}}\right)={\frac{V_D}{n\cdot V_{\text{T}}}}\\\Leftrightarrow {}&V_D=n\cdot V_{\text{T}}\ln \left(1+{\frac{I}{I_S}}\right)\approx n\cdot V_{\text{T}}\ln \left({\frac{I}{I_S}}\right)\\\Leftrightarrow {}&V_D\approx n\cdot V_{\text{T}}\cdot \ln 10\cdot \log_{10}\left({\frac{I}{I_S}}\right)\end{aligned}$$
Using $n=1$ and $T=25\ \text{°C}$:
$$V_D\approx 0.05916\cdot \log_{10}\left({\frac{I}{I_S}}\right)$$
Typical values of the saturation current at room temperature are: $I_S=10^{-12}\ \text{A}$ for silicon diodes; $I_S=10^{-6}\ \text{A}$ for germanium diodes. As the variation of $V_D$ goes with the logarithm of the ratio $I/I_S$, its value varies very little for a big variation of the ratio. The use of base-10 logarithms makes it easier to think in orders of magnitude. For a current of 1.0 mA: $V_D\approx 0.53\ \text{V}$ for silicon diodes (9 orders of magnitude); $V_D\approx 0.18\ \text{V}$ for germanium diodes (3 orders of magnitude). For a current of 100 mA: $V_D\approx 0.65\ \text{V}$ for silicon diodes (11 orders of magnitude); $V_D\approx 0.30\ \text{V}$ for germanium diodes (5 orders of magnitude). Values of 0.6 or 0.7 volts are commonly used for silicon diodes.

==== Diode with voltage source and current-limiting resistor ====
The last thing needed is a resistor to limit the current, as shown below: The I-V characteristic of the final circuit looks like this: The real diode now can be replaced with the combined ideal diode, voltage source and resistor, and the circuit then is modelled using just linear elements. If the sloped-line segment is tangent to the real diode curve at the Q-point, this approximate circuit has the same small-signal circuit at the Q-point as the real diode.

==== Dual PWL-diodes or 3-Line PWL model ====
When more accuracy is desired in modelling the diode's turn-on characteristic, the model can be enhanced by doubling-up the standard PWL-model. This model uses two piecewise-linear diodes in parallel, as a way to model a single diode more accurately.

== Small-signal modelling ==
=== Resistance ===
Using the Shockley equation, the small-signal diode resistance $r_D$ of the diode can be derived about some operating point (Q-point) where the DC bias current is $I_Q$ and the Q-point applied voltage is $V_Q$.
To begin, the diode small-signal conductance $g_D$ is found, that is, the change in current in the diode caused by a small change in voltage across the diode, divided by this voltage change, namely:
$$g_D=\left.{\frac{dI}{dV}}\right|_{Q}={\frac{I_S}{n\cdot V_{\text{T}}}}e^{\frac{V_Q}{n\cdot V_{\text{T}}}}\approx {\frac{I_Q}{n\cdot V_{\text{T}}}}.$$
The latter approximation assumes that the bias current $I_Q$ is large enough so that the factor of 1 in the parentheses of the Shockley diode equation can be ignored. This approximation is accurate even at rather small voltages, because the thermal voltage $V_{\text{T}}\approx 25\ \text{mV}$ at 300 K, so $V_Q/V_{\text{T}}$ tends to be large, meaning that the exponential is very large. Noting that the small-signal resistance $r_D$ is the reciprocal of the small-signal conductance just found, the diode resistance is independent of the ac current, but depends on the dc current, and is given as
$$r_D={\frac{n\cdot V_{\text{T}}}{I_Q}}.$$

=== Capacitance ===
The charge in the diode carrying current $I_Q$ is known to be
$$Q=I_Q\tau_F+Q_J,$$
where $\tau_F$ is the forward transit time of charge carriers: The first term in the charge is the charge in transit across the diode when the current $I_Q$ flows. The second term is the charge stored in the junction itself when it is viewed as a simple capacitor; that is, as a pair of electrodes with opposite charges on them. It is the charge stored on the diode by virtue of simply having a voltage across it, regardless of any current it conducts. In a similar fashion as before, the diode capacitance is the change in diode charge with diode voltage:
$$C_D={\frac{dQ}{dV_Q}}={\frac{dI_Q}{dV_Q}}\tau_F+{\frac{dQ_J}{dV_Q}}\approx {\frac{I_Q}{V_{\text{T}}}}\tau_F+C_J,$$
where $C_J={\frac{dQ_J}{dV_Q}}$ is the junction capacitance and the first term is called the diffusion capacitance, because it is related to the current diffusing through the junction.

== Variation of forward voltage with temperature ==
The Shockley diode equation has an exponential of $V_D/(kT/q)$, which would lead one to expect that the forward-voltage increases with temperature. In fact, this is generally not the case: as temperature rises, the saturation current $I_S$ rises, and this effect dominates. So as the diode becomes hotter, the forward-voltage (for a given current) decreases. Here is some detailed experimental data, which shows this for a 1N4005 silicon diode. In fact, some silicon diodes are used as temperature sensors; for example, the CY7 series from OMEGA has a forward voltage of 1.02 V in liquid nitrogen (77 K), 0.54 V at room temperature, and 0.29 V at 100 °C. In addition, there is a small change of the material parameter bandgap with temperature. For LEDs, this bandgap change also shifts their colour: they move towards the blue end of the spectrum when cooled.
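A quick numerical reading of the small-signal relations above, before returning to the thermal behaviour: the bias current, transit time and junction capacitance in this sketch are illustrative assumptions, not data for any particular device.

```python
n, V_T = 1.0, 0.02585        # ideality factor and thermal voltage (V)
I_Q = 1e-3                   # A, assumed DC bias current at the Q-point
tau_F = 4e-9                 # s, assumed forward transit time
C_J = 4e-12                  # F, assumed junction capacitance at this bias

r_D = n * V_T / I_Q                    # small-signal resistance n*V_T/I_Q
C_D = (I_Q / V_T) * tau_F + C_J        # diffusion plus junction capacitance

print(f"r_D ~ {r_D:.1f} ohm")          # about 26 ohm at 1 mA
print(f"C_D ~ {C_D * 1e12:.0f} pF")    # dominated by the diffusion term here
```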
Since the diode forward-voltage drops as its temperature rises, this can lead to thermal runaway due to current hogging when paralleled in bipolar-transistor circuits (since the base-emitter junction of a BJT acts as a diode), where a reduction in the base-emitter forward voltage leads to an increase in collector power-dissipation, which in turn reduces the required base-emitter forward voltage even further. == See also == Bipolar junction transistor Semiconductor device modelling == References ==
Wikipedia/Diode_modelling
A network analyzer is an instrument that measures the network parameters of electrical networks. Today, network analyzers commonly measure s-parameters because reflection and transmission of electrical networks are easy to measure at high frequencies, but there are other network parameter sets such as y-parameters, z-parameters, and h-parameters. Network analyzers are often used to characterize two-port networks such as amplifiers and filters, but they can be used on networks with an arbitrary number of ports.

== Overview ==
Network analyzers are used mostly at high frequencies; operating frequencies can range from 1 Hz to 1.5 THz. Special types of network analyzers cover the lower end of this range and can be used, for example, for the stability analysis of open loops or for the measurement of audio and ultrasonic components. The two basic types of network analyzers are the scalar network analyzer (SNA), which measures amplitude properties only, and the vector network analyzer (VNA), which measures both amplitude and phase properties. A VNA is a form of RF network analyzer widely used for RF design applications. A VNA may also be called a gain–phase meter or an automatic network analyzer. An SNA is functionally identical to a spectrum analyzer in combination with a tracking generator. As of 2007, VNAs are the most common type of network analyzers, and so references to an unqualified "network analyzer" most often mean a VNA. Prominent VNA manufacturers include Keysight, Anritsu, Advantest, Rohde & Schwarz, Siglent, Copper Mountain Technologies and OMICRON Lab. In recent years, entry-level devices and do-it-yourself designs have also become available, some for less than $100, mainly from the amateur radio sector. Although these have significantly reduced features compared to professional devices and offer only a limited range of functions, they are often sufficient for private users, especially for studies and for hobby applications up to the single-digit gigahertz range. Another category of network analyzer is the microwave transition analyzer (MTA) or large-signal network analyzer (LSNA), which measure both amplitude and phase of the fundamental and harmonics. The MTA was commercialized before the LSNA, but was lacking some of the user-friendly calibration features now available with the LSNA.

== Architecture ==
The basic architecture of a network analyzer involves a signal generator, a test set, one or more receivers and a display. In some setups, these units are distinct instruments. Most VNAs have two test ports, permitting measurement of four S-parameters ($S_{11}, S_{21}, S_{12}, S_{22}$), but instruments with more than two ports are available commercially.

=== Signal generator ===
The network analyzer needs a test signal, and a signal generator or signal source will provide one. Older network analyzers did not have their own signal generator, but had the ability to control a stand-alone signal generator using, for example, a GPIB connection. Nearly all modern network analyzers have a built-in signal generator. High-performance network analyzers have two built-in sources. Two built-in sources are useful for applications such as mixer test, where one source provides the RF signal and another the LO, or amplifier intermodulation testing, where two tones are required for the test.

=== Test set ===
The test set takes the signal generator output and routes it to the device under test, and it routes the signal to be measured to the receivers.
It often splits off a reference channel for the incident wave. In a SNA, the reference channel may go to a diode detector (receiver) whose output is sent to the signal generator's automatic level control. The result is better control of the signal generator's output and better measurement accuracy. In a VNA, the reference channel goes to the receivers; it is needed to serve as a phase reference. Directional couplers or two resistor power dividers are used for signal separation. Some microwave test sets include the front end mixers for the receivers (e.g., test sets for HP 8510). === Receiver === The receivers make the measurements. A network analyzer will have one or more receivers connected to its test ports. The reference test port is usually labeled R, and the primary test ports are A, B, C, ... Some analyzers will dedicate a separate receiver to each test port, but others share one or two receivers among the ports. The R receiver may be less sensitive than the receivers used on the test ports. For the SNA, the receiver only measures the magnitude of the signal. A receiver can be a detector diode that operates at the test frequency. The simplest SNA will have a single test port, but more accurate measurements are made when a reference port is also used. The reference port will compensate for amplitude variations in the test signal at the measurement plane. It is possible to share a single detector and use it for both the reference port and the test port by making two measurement passes. For the VNA, the receiver measures both the magnitude and the phase of the signal. It needs a reference channel (R) to determine the phase, so a VNA needs at least two receivers. The usual method down converts the reference and test channels to make the measurements at a lower frequency. The phase may be measured with a quadrature detector. A VNA requires at least two receivers, but some will have three or four receivers to permit simultaneous measurement of different parameters. There are some VNA architectures (six-port reflectometer) that infer phase and magnitude from just power measurements. === Processor and display === With the processed RF signal available from the receiver / detector section it is necessary to display the signal in a format that can be interpreted. With the levels of processing that are available today, some very sophisticated solutions are available in RF network analyzers. Here the reflection and transmission data is formatted to enable the information to be interpreted as easily as possible. Most RF network analyzers incorporate features including linear and logarithmic sweeps, linear and log formats, polar plots, Smith charts, etc. Trace markers, limit lines and pass/fail criteria are also added in many instances. == S-parameter measurement with vector network analyzer == A VNA is a test system that enables the RF performance of radio frequency and microwave devices to be characterised in terms of network scattering parameters, or S parameters. The diagram shows the essential parts of a typical 2-port vector network analyzer (VNA). The two ports of the device under test (DUT) are denoted port 1 (P1) and port 2 (P2). The test port connectors provided on the VNA itself are precision types which will normally have to be extended and connected to P1 and P2 using precision cables 1 and 2, PC1 and PC2 respectively and suitable connector adaptors A1 and A2 respectively. 
The test frequency is generated by a variable frequency CW source and its power level is set using a variable attenuator. The position of switch SW1 sets the direction that the test signal passes through the DUT. Initially consider that SW1 is at position 1 so that the test signal is incident on the DUT at P1 which is appropriate for measuring S 11 {\displaystyle S_{11}\,} and S 21 {\displaystyle S_{21}\,} . The test signal is fed by SW1 to the common port of splitter 1, one arm (the reference channel) feeding a reference receiver for P1 (RX REF1) and the other (the test channel) connecting to P1 via the directional coupler DC1, PC1 and A1. The third port of DC1 couples off the power reflected from P1 via A1 and PC1, then feeding it to test receiver 1 (RX TEST1). Similarly, signals leaving P2 pass via A2, PC2 and DC2 to RX TEST2. RX REF1, RX TEST1, RX REF2 and RXTEST2 are known as coherent receivers as they share the same reference oscillator, and they are capable of measuring the test signal's amplitude and phase at the test frequency. All of the complex receiver output signals are fed to a processor which does the mathematical processing and displays the chosen parameters and format on the phase and amplitude display. The instantaneous value of phase includes both the temporal and spatial parts, but the former is removed by virtue of using 2 test channels, one as a reference and the other for measurement. When SW1 is set to position 2, the test signals are applied to P2, the reference is measured by RX REF2, reflections from P2 are coupled off by DC2 and measured by RX TEST2 and signals leaving P1 are coupled off by DC1 and measured by RX TEST1. This position is appropriate for measuring S 22 {\displaystyle S_{22}\,} and S 12 {\displaystyle S_{12}\,} . == Calibration and error correction == A network analyzer, like most electronic instruments requires periodic calibration; typically this is performed once per year and is performed by the manufacturer or by a 3rd party in a calibration laboratory. When the instrument is calibrated, a sticker will usually be attached, stating the date it was calibrated and when the next calibration is due. A calibration certificate will be issued. A vector network analyzer achieves highly accurate measurements by correcting for the systematic errors in the instrument, the characteristics of cables, adapters and test fixtures. The process of error correction, although commonly just called calibration, is an entirely different process, and may be performed by an engineer several times in an hour. Sometimes it is called user-calibration, to indicate the difference from periodic calibration by a manufacturer. A network analyzer has connectors on its front panel, but the measurements are seldom made at the front panel. Usually some test cables will connect from the front panel to the device under test (DUT). The length of those cables will introduce a time delay and corresponding phase shift (affecting VNA measurements); the cables will also introduce some attenuation (affecting SNA and VNA measurements). The same is true for cables and couplers inside the network analyzer. All these factors will change with temperature. Calibration usually involves measuring known standards and using those measurements to compensate for systematic errors, but there are methods which do not require known standards. Only systematic errors can be corrected. Random errors, such as connector repeatability cannot be corrected by the user calibration. 
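Once the complex S-parameters have been measured as described above, they are usually re-expressed as more familiar figures of merit. The short sketch below shows the standard conversions to return loss, VSWR and insertion gain; the S-parameter values themselves are made-up illustrations, not measured data.

```python
import numpy as np

s11 = 0.18 * np.exp(1j * np.deg2rad(-35.0))   # illustrative reflection coefficient
s21 = 0.89 * np.exp(1j * np.deg2rad(120.0))   # illustrative transmission coefficient

return_loss_db = -20.0 * np.log10(abs(s11))   # larger means better matched
vswr = (1.0 + abs(s11)) / (1.0 - abs(s11))
insertion_db = 20.0 * np.log10(abs(s21))      # negative values indicate loss

print(f"Return loss {return_loss_db:.1f} dB, VSWR {vswr:.2f}, |S21| {insertion_db:.2f} dB")
```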
However, some portable vector network analyzers, designed for lower-accuracy measurement outdoors using batteries, do attempt some correction for temperature by measuring the internal temperature of the network analyzer. The first steps, prior to starting the user calibration, are:

Visually inspect the connectors for any problems such as bent pins or parts which are obviously off-centre. These should not be used, as mating damaged connectors with good connectors will often result in damaging the good connector.
Clean the connectors with compressed air at less than 60 psi. If necessary clean the connectors with isopropyl alcohol and allow to dry.
Gauge the connectors to determine that there are not any gross mechanical problems. Connector gauges with resolutions of 0.001" to 0.0001" will usually be included in the better quality calibration kits.
Tighten the connectors to the specified torque. A torque wrench will be supplied with all but the cheapest calibration kits.

There are several different methods of calibration.

SOLT, which is an acronym for short, open, load, through, is the simplest method. As the name suggests, this requires access to known standards with a short circuit, open circuit, a precision load (usually 50 ohms) and a through connection. It is best if the test ports have the same type of connector (N, 3.5 mm, etc.), but of a different gender, so the through just requires that the test ports be connected together. SOLT is suitable for coaxial measurements, where it is possible to obtain the short, open, load and through. The SOLT calibration method is less suitable for waveguide measurements, where it is difficult to obtain an open circuit or a load, or for measurements on non-coaxial test fixtures, where the same problems with finding suitable standards exist.

TRL (through-reflect-line calibration): This technique is useful for microwave, noncoaxial environments such as fixture, wafer probing, or waveguide. TRL uses a transmission line, significantly longer in electrical length than the through line, of known length and impedance as one standard. TRL also requires a high-reflection standard (usually a short or open) whose impedance does not have to be well characterized, but it must be electrically the same for both test ports. Sometimes manufacturers, such as Anritsu, call the TRL calibration an LRL calibration, where the first L now stands for 'Line'.

The simplest calibration that can be performed on a network analyzer is a transmission measurement. This gives no phase information, and so gives similar data to a scalar network analyzer. The simplest calibration that can be performed on a network analyzer whilst providing phase information is a 1-port calibration (S11 or S22, but not both). This accounts for the three systematic errors which appear in 1-port reflectivity measurements:

Directivity—error resulting from the portion of the source signal that never reaches the DUT.
Source match—errors resulting from multiple internal reflections between the source and the DUT.
Reflection tracking—error resulting from all frequency dependence of test leads, connections, etc.

In a typical 1-port reflection calibration, the user measures three known standards, usually an open, a short and a known load. From these three measurements the network analyzer can account for the three errors above. A more complex calibration is a full 2-port reflectivity and transmission calibration. For two ports there are 12 possible systematic errors analogous to the three above.
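The one-port error model just described – directivity, source match and reflection tracking, determined from an open, a short and a load – can be written out in a few lines. The sketch below assumes perfectly ideal standards, whereas real calibration kits model delay and fringing effects as discussed below; the error-term values are invented for illustration only.

```python
import numpy as np

# One-port model: measured reflection Gm = Ed + Er*Ga / (1 - Es*Ga),
# with Ed = directivity, Es = source match, Er = reflection tracking.

def solve_error_terms(gamma_std, gamma_meas):
    """Solve Ed, Es, Er from three (known, measured) reflection pairs."""
    # Rearranged as Ed - (Ed*Es - Er)*Ga + Es*(Ga*Gm) = Gm: linear in the unknowns.
    A = np.array([[1.0, -ga, ga * gm] for ga, gm in zip(gamma_std, gamma_meas)],
                 dtype=complex)
    ed, delta, es = np.linalg.solve(A, np.array(gamma_meas, dtype=complex))
    return ed, es, ed * es - delta

def correct(gm, ed, es, er):
    """Recover the actual reflection coefficient from a raw measurement."""
    return (gm - ed) / (er + es * (gm - ed))

gamma_std = [1.0, -1.0, 0.0]                                      # ideal open, short, load
ed0, es0, er0 = 0.05 + 0.02j, 0.10 - 0.03j, 0.9 * np.exp(-0.4j)   # invented error terms
gamma_meas = [ed0 + er0 * g / (1 - es0 * g) for g in gamma_std]   # simulated raw readings

ed, es, er = solve_error_terms(gamma_std, gamma_meas)
raw = ed0 + er0 * 0.2 / (1 - es0 * 0.2)              # DUT with true reflection 0.2
print(abs(correct(raw, ed, es, er)))                 # recovers ~0.2
```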
The most common method for correcting for these involves measuring a short, load and open standard on each of the two ports, as well as transmission between the two ports. It is impossible to make a perfect short circuit, as there will always be some inductance in the short. It is impossible to make a perfect open circuit, as there will always be some fringing capacitance. A modern network analyzer will have data stored about the devices in a calibration kit (Keysight Technologies 2006). For the open-circuit, this will be some electrical delay (typically tens of picoseconds) and a fringing capacitance which will be frequency dependent. The capacitance is normally specified in terms of a polynomial, with the coefficients specific to each standard. A short will have some delay and a frequency-dependent inductance, although the inductance is normally considered insignificant below about 6 GHz. The definitions for a number of standards used in Keysight calibration kits can be found at http://na.support.keysight.com/pna/caldefs/stddefs.html The definitions of the standards for a particular calibration kit will often change depending on the frequency range of the network analyzer. If a calibration kit works to 9 GHz, but a particular network analyzer has a maximum frequency of operation of 3 GHz, then the capacitance of the open standard can be approximated more closely up to 3 GHz using a different set of coefficients than are necessary to work up to 9 GHz. In some calibration kits, the data for the male connectors is different from that for the females, so the user needs to specify the gender of the connector. In other calibration kits (e.g. Keysight 85033E 9 GHz 3.5 mm), the male and female have identical characteristics, so there is no need for the user to specify the gender. For gender-less connectors, like APC-7, this issue does not arise. Most network analyzers have the ability to use a user-defined calibration kit, so if a user has a particular calibration kit whose details are not in the firmware of the network analyzer, the data about the kit can be loaded into the network analyzer and the kit can then be used. Typically the calibration data can be entered on the instrument front panel or loaded from a medium such as a floppy disk or USB stick, or over a bus such as USB or GPIB. The more expensive calibration kits will usually include a torque wrench to tighten connectors properly and a connector gauge to ensure there are no gross errors in the connectors.

=== Automated calibration fixtures ===
A calibration using a mechanical calibration kit may take a significant amount of time. Not only must the operator sweep through all the frequencies of interest, but the operator must also disconnect and reconnect the various standards (Keysight Technologies 2003, p. 9). To avoid that work, network analyzers can employ automated calibration standards (Keysight Technologies 2003). The operator connects one box to the network analyzer. The box has a set of standards inside and some switches that have already been characterized. The network analyzer can read the characterization and control the configuration using a digital bus such as USB.

== Network analyzer verification kits ==
Many verification kits are available to verify that the network analyzer is performing to specification. These typically consist of transmission lines with an air dielectric and attenuators.
The Keysight 85055A verification kit includes a 10 cm airline, stepped impedance airline, 20 dB and 50 dB attenuators with data on the devices measured by the manufacturer and stored on both a floppy disk and USB flash drive. Older versions of the 85055A have the data stored on tape and floppy disks rather than on USB drives. Verification kits are also manufactured for other transmission lines such as waveguide which contain a known through mismatch and attenuations. The Flann verification kit includes 5 mismatches using a decrease in waveguide height to provide a known VSWR and 2 attenuators of differing attenuation levels. == Noise figure measurements == The three major manufacturers of VNAs, Keysight, Anritsu, and Rohde & Schwarz, all produce models which permit the use of noise figure measurements. The vector error correction permits higher accuracy than is possible with other forms of commercial noise figure meters. == See also == Bode plotter Electrical measurements Network analyzer (AC power) Vector signal analyzer Smith chart == Notes == == References == Keysight Technologies (June 9, 2003), Electronic vs. Mechanical Calibration Kits: Calibration Methods and Accuracy (PDF), White Paper, Keysight Technologies Keysight Technologies (December 3, 2019), Specifying Calibration Standards for the Keysight 8510 Network Analyzer (PDF), Application Note 8510-5B, Keysight Technologies Dunsmore, Joel P. (September 2012), Handbook of Microwave Component Measurements: with Advanced VNA Techniques, Wiley, ISBN 978-1-1199-7955-5 == External links == Network Analyzer Basics Archived 2020-02-04 at the Wayback Machine (PDF, 5.69 MB), from Keysight Primer on Vector Network Analysis (PDF, 123 KB), from Anritsu Large-Signal Network Analysis (PDF, 3.73 MB), by Jan Verspecht Homebrew VNA by Paul Kiciak, N2PK Measuring Frequency Response (PDF, 961 KB), by Ray Ridley RF vector network analyzer basics RF Fundamentals for Vector Network Analyzers
Wikipedia/Network_analyzer_(electrical)
From 1929 to the late 1960s, large alternating current power systems were modelled and studied on AC network analyzers (also called alternating current network calculators or AC calculating boards) or transient network analyzers. These special-purpose analog computers were an outgrowth of the DC calculating boards used in the very earliest power system analysis. By the middle of the 1950s, fifty network analyzers were in operation. AC network analyzers were much used for power-flow studies, short circuit calculations, and system stability studies, but were ultimately replaced by numerical solutions running on digital computers. While the analyzers could provide real-time simulation of events, with no concerns about numeric stability of algorithms, the analyzers were costly, inflexible, and limited in the number of buses and lines that could be simulated. Eventually powerful digital computers replaced analog network analyzers for practical calculations, but analog physical models for studying electrical transients are still in use. == Calculating methods == As AC power systems became larger at the start of the 20th century, with more interconnected devices, the problem of calculating the expected behavior of the systems became more difficult. Manual methods were only practical for systems of a few sources and nodes. The complexity of practical problems made manual calculation techniques too laborious or inaccurate to be useful. Many mechanical aids to calculation were developed to solve problems relating to network power systems. DC calculating boards used resistors and DC sources to represent an AC network. A resistor was used to model the inductive reactance of a circuit, while the actual series resistance of the circuit was neglected. The principle disadvantage was the inability to model complex impedances. However, for short-circuit fault studies, the effect of the resistance component was usually small. DC boards served to produce results accurate to around 20% error, sufficient for some purposes. Artificial lines were used to analyze transmission lines. These carefully constructed replicas of the distributed inductance, capacitance and resistance of a full-size line were used to investigate propagation of impulses in lines and to validate theoretical calculations of transmission line properties. An artificial line was made by winding layers of wire around a glass cylinder, with interleaved sheets of tin foil, to give the model proportionally the same distributed inductance and capacitance as the full-size line. Later, lumped-element approximations of transmission lines were found to give adequate precision for many calculations. Laboratory investigations of the stability of multiple-machine systems were constrained by the use of direct-operated indicating instruments (voltmeters, ammeters, and wattmeters). To ensure that the instruments negligibly loaded the model system, the machine power level used was substantial. Some workers in the 1920s used three-phase model generators rated up to 600 kVA and 2300 volts to represent a power system. General Electric developed model systems using generators rated at 3.75 kVA. It was difficult to keep multiple generators in synchronism, and the size and cost of the units was a constraint. 
While transmission lines and loads could be accurately scaled down to laboratory representations, rotating machines could not be accurately miniaturized and keep the same dynamic characteristics as full-sized prototypes; the ratio of machine inertia to machine frictional loss did not scale. == Scale model == A network analyzer system was essentially a scale model of the electrical properties of a specific power system. Generators, transmission lines, and loads were represented by miniature electrical components with scale values in proportion to the modeled system. Model components were interconnected with flexible cords to represent the schematic diagram of the modeled system. Instead of using miniature rotating machines, accurately calibrated phase-shifting transformers were built to simulate electrical machines. These were all energized by the same source (at local power frequency or from a motor-generator set) and so inherently maintained synchronism. The phase angle and terminal voltage of each simulated generator could be set using rotary scales on each phase-shifting transformer unit. Using the per-unit system allowed values to be conveniently interpreted without additional calculation. To reduce the size of the model components, the network analyzer often was energized at a higher frequency than the 50 Hz or 60 Hz utility frequency. The operating frequency was chosen to be high enough to allow high-quality inductors and capacitors to be made, and to be compatible with the available indicating instruments, but not so high that stray capacitance would affect results. Many systems used either 440 Hz, or 480 Hz, provided by a motor-generator set, to reduce size of model components. Some systems used 10 kHz, using capacitors and inductors similar to those used in radio electronics. Model circuits were energized at relatively low voltages to allow for safe measurement with adequate precision. The model base quantities varied by manufacturer and date of design; as amplified indicating instruments became more common, lower base quantities were feasible. Model voltages and currents started off around 200 volts and 0.5 amperes in the MIT analyzer, which still allowed directly driven (but especially sensitive) instruments to be used to measure model parameters. The later machines used as little as 50 volts and 50 mA, used with amplified indicating instruments. By use of the per-unit system, model quantities could be readily transformed into the actual system quantities of voltage, current, power or impedance. A watt measured in the model might correspond to hundreds of kilowatts or megawatts in the modeled system. One hundred volts measured on the model might correspond to one per-unit, which could represent, say, 230,000 volts on a transmission line or 11,000 volts in a distribution system. Typically, results accurate to around 2% of measurement could be obtained. Model components were single-phase devices, but using the symmetrical components method, unbalanced three-phase systems could be studied as well. A complete network analyzer was a system that filled a large room; one model was described as four bays of equipment, spanning a U-shaped arrangement 26 feet (8 metres) across. Companies such as General Electric and Westinghouse could provide consulting services based on their analyzers; but some large electrical utilities operated their own analyzers. 
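The per-unit bookkeeping described above amounts to simple scaling between model and system base quantities. A minimal sketch follows, with base values chosen purely for illustration; they are not taken from any particular analyzer or study.

```python
MODEL_V_BASE = 50.0         # volts on the board corresponding to 1.0 per unit
SYSTEM_V_BASE = 230_000.0   # volts on the modelled transmission line at 1.0 per unit
MODEL_I_BASE = 0.05         # amperes on the board at 1.0 per unit
SYSTEM_I_BASE = 400.0       # amperes on the real line at 1.0 per unit

def to_per_unit(value, base):
    return value / base

def to_system(per_unit, base):
    return per_unit * base

v_model = 47.5                                    # volts read on the model
i_model = 0.042                                   # amperes read on the model
v_pu = to_per_unit(v_model, MODEL_V_BASE)         # 0.95 per unit
i_pu = to_per_unit(i_model, MODEL_I_BASE)         # 0.84 per unit
print(f"{to_system(v_pu, SYSTEM_V_BASE) / 1e3:.1f} kV, "
      f"{to_system(i_pu, SYSTEM_I_BASE):.0f} A on the full-scale system")
```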
The use of network analyzers allowed quick solutions to difficult calculation problems, and allowed problems to be analyzed that would otherwise be uneconomic to compute using manual calculations. Although expensive to build and operate, network analyzers often repaid their costs in reduced calculation time and expedited project schedules. For example, a stability study might indicate if a transmission line should have larger or differently spaced conductors to preserve stability margin during system faults; potentially saving many miles of cable and thousands of insulators. Network analyzers did not directly simulate the dynamic effects of load application to machine dynamics (torque angle, and others). Instead, the analyzer would be used to solve dynamic problems in a stepwise fashion, first calculating a load flow, then adjusting the phase angle of the machine in response to its power flow, and re-calculating the power flow. In use, the system to be modelled would be represented as a single line diagram and all the impedances of lines and machines would be scaled to model values on the analyzer. A plugging diagram would be prepared to show the interconnections to be made between the model elements. The circuit elements would be interconnected by patch cables. The model system would be energized, and measurements taken at the points of interest in the model; these could be scaled up to the values in the full-scale system. == The MIT network analyzer == The network analyzer installed at Massachusetts Institute of Technology (MIT) grew out of a 1924 thesis project by Hugh H. Spencer and Harold Locke Hazen, investigating a power system modelling concept proposed by Vannevar Bush. Instead of miniature rotating machines, each generator was represented by a transformer with adjustable voltage and phase, all fed from a common source. This eliminated a significant source of the poor accuracy of models with miniature rotating machines. The 1925 publication of this thesis attracted the attention at General Electric, where Robert Doherty was interested in modelling problems of system stability. He asked Hazen to verify that the model could accurately reproduce the behavior of machines during load changes. Design and construction was carried out jointly by General Electric and MIT. When first demonstrated in June 1929, the system had eight phase-shifting transformers to represent synchronous machines. Other elements included 100 variable line resistors, 100 variable reactors, 32 fixed capacitors, and 40 adjustable load units. The analyzer was described in a 1930 paper by H.L Hazen, O.R. Schurig and M.F. Gardner. The base quantities for the analyzer were 200 volts, and 0.5 amperes. Sensitive portable thermocouple-type instruments were used for measurement. The analyzer occupied four large panels, arranged in a U-shape, with tables in front of each section to hold measuring instruments. While primarily conceived as an educational tool, the analyzer saw considerable use by outside firms, who would pay to use the device. American Gas and Electric Company, the Tennessee Valley Authority, and many other organizations studied problems on the MIT analyzer in its first decade of operation. In 1940 the system was moved and expanded to handle more complex systems. By 1953 the MIT analyzer was beginning to fall behind the state of the art. Digital computers were first used on power system problems as early as "Whirlwind" in 1949. 
Unlike most of the forty other analyzers in service by that point, the MIT instrument was energized at 60 Hz, not 440 or 480 Hz, making its components large and expansion to new types of problems difficult. Many utility customers had bought their own network analyzers. The MIT system was dismantled and sold to the Puerto Rico Water Resources Authority in 1954. == Commercial manufacturers == By 1947, fourteen network analyzers had been built at a total cost of about two million US dollars. General Electric built two full-scale network analyzers for its own work and for services to its clients. Westinghouse built systems for its internal use and provided more than 20 analyzers to utility and university clients. After the Second World War, analyzers were known to be in use in France, the UK, Australia, Japan, and the Soviet Union. Later models had improvements such as centralized control of switching, central measurement bays, and chart recorders to automatically provide permanent records of results. General Electric's Model 307 was a miniaturized AC network analyzer with four generator units and a single electronically amplified metering unit. It was targeted at utility companies needing to solve problems too large for hand computation but not worth the expense of renting time on a full-size analyzer. Like the Iowa State College analyzer, it used a system frequency of 10 kHz instead of 60 Hz or 480 Hz, allowing much smaller radio-style capacitors and inductors to be used to model power system components. The 307 was cataloged from 1957 and had a list of about 20 utility, educational, and government customers. In 1959 its list price was $8,590. In 1953, the Metropolitan Edison Company and a group of six other electrical companies purchased a new Westinghouse AC network analyzer for installation at the Franklin Institute in Philadelphia. The system, described as the largest ever built, cost $400,000. In Japan, network analyzers were installed starting in 1951. The Yokogawa Electric Company introduced a model energized at 3980 Hz starting in 1956. == Other applications == === Transient analyzer === A "transient network analyzer" was an analog model of a transmission system especially adapted to study high-frequency transient surges (such as those due to lightning or switching), instead of AC power frequency currents. Similarly to an AC network analyzer, they represented apparatus and lines with scaled inductances and resistances. A synchronously driven switch repeatedly applied a transient impulse to the model system, and the response at any point could be observed on an oscilloscope or recorded on an oscillograph. Some transient analyzers are still in use for research and education, sometimes combined with digital protective relays or recording instruments. === Anacom === The Westinghouse Anacom was an AC-energized electrical analog computer system used extensively for problems in mechanical design, structural elements, lubrication oil flow, and various transient problems including those due to lightning surges in electric power transmission systems. The excitation frequency of the computer could be varied. The Westinghouse Anacom constructed in 1948 was used up to the early 1990s for engineering calculations; its original cost was $500,000. The system was periodically updated and expanded; by the 1980s the Anacom could be run through many simulation cases unattended, under the control of a digital computer that automatically set up initial conditions and recorded the results. 
Westinghouse built a replica Anacom for Northwestern University and sold an Anacom to ABB; twenty or thirty similar computers by other makers were used around the world. === Physics and chemistry === Since the multiple elements of the AC network analyzer formed a powerful analog computer, problems in physics and chemistry were occasionally modeled on it in the late 1940s (by such researchers as Gabriel Kron of General Electric), prior to the ready availability of general-purpose digital computers. Another application was water flow in water distribution systems. The forces and displacements of a mechanical system could be readily modelled with the voltages and currents of a network analyzer, which allowed easy adjustment of properties such as the stiffness of a spring by, for example, changing the value of a capacitor. === Structures === The David Taylor Model Basin operated an AC network analyzer from the late 1950s until the mid-1960s. The system was used on problems in ship design. An electrical analog of the structural properties of a proposed ship, shaft, or other structure could be built and tested for its vibrational modes. Unlike AC analyzers used for power systems work, the exciting frequency was made continuously variable so that mechanical resonance effects could be investigated. == Decline and obsolescence == Even during the Depression and the Second World War, many network analyzers were constructed because of their great value in solving calculation problems related to electric power transmission. By the mid-1950s, about thirty analyzers were available in the United States, representing an oversupply. Institutions such as MIT could no longer justify operating analyzers, as paying clients barely covered operating expenses. Once digital computers of adequate performance became available, the solution methods developed on analog network analyzers were migrated to the digital realm, where plugboards, switches and meter pointers were replaced with punch cards and printouts. The same general-purpose digital computer hardware that ran network studies could easily be dual-tasked with business functions such as payroll. Analog network analyzers faded from general use for load-flow and fault studies, although some persisted in transient studies for a while longer. Analog analyzers were dismantled and then sold off to other utilities, donated to engineering schools, or scrapped. The fate of a few analyzers illustrates the trend. The analyzer purchased by American Electric Power was replaced by digital systems in 1961 and donated to Virginia Tech. The Westinghouse network analyzer purchased by the State Electricity Commission of Victoria, Australia in 1950 was taken out of utility service in 1967 and donated to the Engineering department at Monash University; but by 1985, even instructional use of the analyzer was no longer practical and the system was finally dismantled. One factor contributing to the obsolescence of analog models was the increasing complexity of interconnected power systems. Even a large analyzer could only represent a few machines, and perhaps a few score lines and busses. Digital computers routinely handled systems with thousands of busses and transmission lines. == See also == Network analyzer (electrical) Power system protection Differential analyser Prospective short-circuit current == References == == External links == [1] Lee Allen Mayo, Simulation without replication (thesis), University of Notre Dame, 2011, pp. 
52–101 discusses use of network analyzers for theoretical calculations
Wikipedia/Network_analyzer_(AC_power)
Mathematical methods are integral to the study of electronics. == Mathematics in electronics engineering == Mathematical Methods in Electronics Engineering involves applying mathematical principles to analyze, design, and optimize electronic circuits and systems. Key areas include:
Linear Algebra: Used to solve systems of linear equations that arise in circuit analysis. Applications include network theory and the analysis of electrical circuits using matrices and vector spaces.
Calculus: Essential for understanding changes in electronic signals. Used in the analysis of dynamic systems and control systems. Integral calculus is used in analyzing waveforms and signals.
Differential Equations: Applied to model and analyze the behavior of circuits over time. Used in the study of filters, oscillators, and transient responses of circuits.
Complex Numbers and Complex Analysis: Important for circuit analysis and impedance calculations. Used in signal processing and to solve problems involving sinusoidal signals.
Probability and Statistics: Used in signal processing and communication systems to handle noise and random signals, and in reliability analysis of electronic components.
Fourier and Laplace Transforms: Crucial for analyzing signals and systems. Fourier transforms are used for frequency analysis and signal processing. Laplace transforms are used for solving differential equations and analyzing system stability.
Numerical Methods: Employed for simulating and solving complex circuits that cannot be solved analytically. Used in computer-aided design tools for electronic circuit design.
Vector Calculus: Applied in electromagnetic field theory. Important for understanding the behavior of electromagnetic waves and fields in electronic devices.
Optimization: Techniques used to design efficient circuits and systems. Applications include minimizing power consumption and maximizing signal integrity.
These methods are integral to systematically analyzing and improving the performance and functionality of electronic devices and systems. == Mathematical methods applied in foundational electrical laws and theorems == A number of fundamental electrical laws and theorems apply to all electrical networks. These include:
Faraday's law of induction: Any change in the magnetic environment of a coil of wire will cause a voltage (emf) to be "induced" in the coil.
Gauss's Law: The total of the electric flux out of a closed surface is equal to the charge enclosed divided by the permittivity.
Kirchhoff's Current Law: The sum of all currents entering a node is equal to the sum of all currents leaving the node, or the sum of total current at a junction is zero.
Kirchhoff's voltage law: The directed sum of the electrical potential differences around a circuit must be zero.
Ohm's Law: The voltage across a resistor is the product of its resistance and the current flowing through it, at constant temperature.
Norton's Theorem: Any two-terminal collection of voltage sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor.
Thévenin's Theorem: Any two-terminal combination of voltage sources and resistors is electrically equivalent to a single voltage source in series with a single resistor.
Millman's Theorem: The voltage on the ends of branches in parallel is equal to the sum of the currents flowing in every branch divided by the total equivalent conductance. 
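Several of the topics above — linear algebra, Kirchhoff's current law, and Ohm's law — come together in the node-voltage method of circuit analysis. As a hedged illustration only, the following Python sketch solves a small, made-up resistor network (a 10 V source feeding two nodes through three resistors); the component values are arbitrary and chosen so the result can be checked by hand.

```python
import numpy as np

# Node-voltage analysis of a small illustrative network: a 10 V source drives
# node 1 through R1, R2 links node 1 to node 2, and R3 ties node 2 to ground.
V_SRC = 10.0                              # volts
R1, R2, R3 = 1_000.0, 2_000.0, 2_000.0    # ohms

# Kirchhoff's current law at each node, with branch currents from Ohm's law,
# gives the linear system G @ v = i in the unknown node voltages.
G = np.array([
    [1/R1 + 1/R2, -1/R2],
    [-1/R2,        1/R2 + 1/R3],
])
i = np.array([V_SRC / R1, 0.0])           # equivalent source current injected at node 1

v = np.linalg.solve(G, i)
print(v)   # [8. 4.] -> node 1 at 8 V, node 2 at 4 V for these example values
```

The same matrix formulation scales directly to much larger networks, which is why linear algebra appears first in the list above.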
== Analytical methods == In addition to the foundational principles and theorems, several analytical methods are integral to the study of electronics:
Network analysis (electrical circuits): Essential for comprehending capacitor and inductor behavior under changing voltage inputs, particularly significant in fields such as signal processing, power electronics, and control systems. This entails solving intricate networks of resistors through techniques such as the node-voltage and mesh-current methods.
Signal analysis: Involves Fourier analysis, the Nyquist–Shannon sampling theorem, and information theory, essential for understanding and manipulating signals in various systems.
These methods build on the foundational laws and theorems, providing insights and tools for the analysis and design of complex electronic systems. == See also == Introduction to Electronics (Georgia Tech); University of California, Santa Cruz Electrical Engineering curriculum (UCSC Catalog); University of California, Berkeley Electrical Engineering curriculum (Berkeley Academic Guide) == References ==
Wikipedia/Mathematical_methods_in_electronics
Optics is the branch of physics that studies the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behaviour of visible, ultraviolet, and infrared light. Light is a type of electromagnetic radiation, and other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties. Most optical phenomena can be accounted for by using the classical electromagnetic description of light, however complete electromagnetic descriptions of light are often difficult to apply in practice. Practical optics is usually done using simplified models. The most common of these, geometric optics, treats light as a collection of rays that travel in straight lines and bend when they pass through or reflect from surfaces. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. Historically, the ray-based model of light was developed first, followed by the wave model of light. Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. Some phenomena depend on light having both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics. When considering light's particle-like properties, the light is modelled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields, photography, and medicine (particularly ophthalmology and optometry, in which it is called physiological optics). Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, lenses, telescopes, microscopes, lasers, and fibre optics. == History == Optics began with the development of lenses by the ancient Egyptians and Mesopotamians. The earliest known lenses, made from polished crystal, often quartz, date from as early as 2000 BC from Crete (Archaeological Museum of Heraclion, Greece). Lenses from Rhodes date around 700 BC, as do Assyrian lenses such as the Nimrud lens. The ancient Romans and Greeks filled glass spheres with water to make lenses. These practical developments were followed by the development of theories of light and vision by ancient Greek and Indian philosophers, and the development of geometrical optics in the Greco-Roman world. The word optics comes from the ancient Greek word ὀπτική, optikē 'appearance, look'. Greek philosophy on optics broke down into two opposing theories on how vision worked, the intromission theory and the emission theory. The intromission approach saw vision as coming from objects casting off copies of themselves (called eidola) that were captured by the eye. With many propagators including Democritus, Epicurus, Aristotle and their followers, this theory seems to have some contact with modern theories of what vision really is, but it remained only speculation lacking any experimental foundation. Plato first articulated the emission theory, the idea that visual perception is accomplished by rays emitted by the eyes. He also commented on the parity reversal of mirrors in Timaeus. 
Some hundred years later, Euclid (4th–3rd century BC) wrote a treatise entitled Optics where he linked vision to geometry, creating geometrical optics. He based his work on Plato's emission theory wherein he described the mathematical rules of perspective and described the effects of refraction qualitatively, although he questioned that a beam of light from the eye could instantaneously light up the stars every time someone blinked. Euclid stated the principle of shortest trajectory of light, and considered multiple reflections on flat and spherical mirrors. Ptolemy, in his treatise Optics, held an extramission-intromission theory of vision: the rays (or flux) from the eye formed a cone, the vertex being within the eye, and the base defining the visual field. The rays were sensitive, and conveyed information back to the observer's intellect about the distance and orientation of surfaces. He summarized much of Euclid and went on to describe a way to measure the angle of refraction, though he failed to notice the empirical relationship between it and the angle of incidence. Plutarch (1st–2nd century AD) described multiple reflections on spherical mirrors and discussed the creation of magnified and reduced images, both real and imaginary, including the case of chirality of the images. During the Middle Ages, Greek ideas about optics were resurrected and extended by writers in the Muslim world. One of the earliest of these was Al-Kindi (c. 801–873) who wrote on the merits of Aristotelian and Euclidean ideas of optics, favouring the emission theory since it could better quantify optical phenomena. In 984, the Persian mathematician Ibn Sahl wrote the treatise "On burning mirrors and lenses", correctly describing a law of refraction equivalent to Snell's law. He used this law to compute optimum shapes for lenses and curved mirrors. In the early 11th century, Alhazen (Ibn al-Haytham) wrote the Book of Optics (Kitab al-manazir) in which he explored reflection and refraction and proposed a new system for explaining vision and light based on observation and experiment. He rejected the "emission theory" of Ptolemaic optics with its rays being emitted by the eye, and instead put forward the idea that light reflected in all directions in straight lines from all points of the objects being viewed and then entered the eye, although he was unable to correctly explain how the eye captured the rays. Alhazen's work was largely ignored in the Arabic world but it was anonymously translated into Latin around 1200 A.D. and further summarised and expanded on by the Polish monk Witelo making it a standard text on optics in Europe for the next 400 years. In the 13th century in medieval Europe, English bishop Robert Grosseteste wrote on a wide range of scientific topics, and discussed light from four different perspectives: an epistemology of light, a metaphysics or cosmogony of light, an etiology or physics of light, and a theology of light, basing it on the works of Aristotle and Platonism. Grosseteste's most famous disciple, Roger Bacon, wrote works citing a wide range of recently translated optical and philosophical works, including those of Alhazen, Aristotle, Avicenna, Averroes, Euclid, al-Kindi, Ptolemy, Tideus, and Constantine the African. Bacon was able to use parts of glass spheres as magnifying glasses to demonstrate that light reflects from objects rather than being released from them. The first wearable eyeglasses were invented in Italy around 1286. 
This was the start of the optical industry of grinding and polishing lenses for these "spectacles", first in Venice and Florence in the thirteenth century, and later in the spectacle making centres in both the Netherlands and Germany. Spectacle makers created improved types of lenses for the correction of vision based more on empirical knowledge gained from observing the effects of the lenses rather than using the rudimentary optical theory of the day (theory which for the most part could not even adequately explain how spectacles worked). This practical development, mastery, and experimentation with lenses led directly to the invention of the compound optical microscope around 1595, and the refracting telescope in 1608, both of which appeared in the spectacle making centres in the Netherlands. In the early 17th century, Johannes Kepler expanded on geometric optics in his writings, covering lenses, reflection by flat and curved mirrors, the principles of pinhole cameras, inverse-square law governing the intensity of light, and the optical explanations of astronomical phenomena such as lunar and solar eclipses and astronomical parallax. He was also able to correctly deduce the role of the retina as the actual organ that recorded images, finally being able to scientifically quantify the effects of different types of lenses that spectacle makers had been observing over the previous 300 years. After the invention of the telescope, Kepler set out the theoretical basis on how they worked and described an improved version, known as the Keplerian telescope, using two convex lenses to produce higher magnification. Optical theory progressed in the mid-17th century with treatises written by philosopher René Descartes, which explained a variety of optical phenomena including reflection and refraction by assuming that light was emitted by objects which produced it. This differed substantively from the ancient Greek emission theory. In the late 1660s and early 1670s, Isaac Newton expanded Descartes's ideas into a corpuscle theory of light, famously determining that white light was a mix of colours that can be separated into its component parts with a prism. In 1690, Christiaan Huygens proposed a wave theory for light based on suggestions that had been made by Robert Hooke in 1664. Hooke himself publicly criticised Newton's theories of light and the feud between the two lasted until Hooke's death. In 1704, Newton published Opticks and, at the time, partly because of his success in other areas of physics, he was generally considered to be the victor in the debate over the nature of light. Newtonian optics was generally accepted until the early 19th century when Thomas Young and Augustin-Jean Fresnel conducted experiments on the interference of light that firmly established light's wave nature. Young's famous double slit experiment showed that light followed the superposition principle, which is a wave-like property not predicted by Newton's corpuscle theory. This work led to a theory of diffraction for light and opened an entire area of study in physical optics. Wave optics was successfully unified with electromagnetic theory by James Clerk Maxwell in the 1860s. The next development in optical theory came in 1899 when Max Planck correctly modelled blackbody radiation by assuming that the exchange of energy between light and matter only occurred in discrete amounts he called quanta. 
In 1905, Albert Einstein published the theory of the photoelectric effect that firmly established the quantization of light itself. In 1913, Niels Bohr showed that atoms could only emit discrete amounts of energy, thus explaining the discrete lines seen in emission and absorption spectra. The understanding of the interaction between light and matter that followed from these developments not only formed the basis of quantum optics but also was crucial for the development of quantum mechanics as a whole. The ultimate culmination, the theory of quantum electrodynamics, explains all optics and electromagnetic processes in general as the result of the exchange of real and virtual photons. Quantum optics gained practical importance with the inventions of the maser in 1953 and of the laser in 1960. Following the work of Paul Dirac in quantum field theory, George Sudarshan, Roy J. Glauber, and Leonard Mandel applied quantum theory to the electromagnetic field in the 1950s and 1960s to gain a more detailed understanding of photodetection and the statistics of light. == Classical optics == Classical optics is divided into two main branches: geometrical (or ray) optics and physical (or wave) optics. In geometrical optics, light is considered to travel in straight lines, while in physical optics, light is considered as an electromagnetic wave. Geometrical optics can be viewed as an approximation of physical optics that applies when the wavelength of the light used is much smaller than the size of the optical elements in the system being modelled. === Geometrical optics === Geometrical optics, or ray optics, describes the propagation of light in terms of "rays" which travel in straight lines, and whose paths are governed by the laws of reflection and refraction at interfaces between different media. These laws were discovered empirically as far back as 984 AD and have been used in the design of optical components and instruments from then until the present day. They can be summarised as follows: When a ray of light hits the boundary between two transparent materials, it is divided into a reflected and a refracted ray. The law of reflection says that the reflected ray lies in the plane of incidence, and the angle of reflection equals the angle of incidence. The law of refraction says that the refracted ray lies in the plane of incidence, and the sine of the angle of incidence divided by the sine of the angle of refraction is a constant: sin ⁡ θ 1 sin ⁡ θ 2 = n , {\displaystyle {\frac {\sin {\theta _{1}}}{\sin {\theta _{2}}}}=n,} where n is a constant for any two materials and a given wavelength of light. If the first material is air or vacuum, n is the refractive index of the second material. The laws of reflection and refraction can be derived from Fermat's principle which states that the path taken between two points by a ray of light is the path that can be traversed in the least time. ==== Approximations ==== Geometric optics is often simplified by making the paraxial approximation, or "small angle approximation". The mathematical behaviour then becomes linear, allowing optical components and systems to be described by simple matrices. This leads to the techniques of Gaussian optics and paraxial ray tracing, which are used to find basic properties of optical systems, such as approximate image and object positions and magnifications. ==== Reflections ==== Reflections can be divided into two types: specular reflection and diffuse reflection. 
Specular reflection describes the gloss of surfaces such as mirrors, which reflect light in a simple, predictable way. This allows for the production of reflected images that can be associated with an actual (real) or extrapolated (virtual) location in space. Diffuse reflection describes non-glossy materials, such as paper or rock. The reflections from these surfaces can only be described statistically, with the exact distribution of the reflected light depending on the microscopic structure of the material. Many diffuse reflectors are described or can be approximated by Lambert's cosine law, which describes surfaces that have equal luminance when viewed from any angle. Glossy surfaces can give both specular and diffuse reflection. In specular reflection, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal, a line perpendicular to the surface at the point where the ray hits. The incident and reflected rays and the normal lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal. This is known as the Law of Reflection. For flat mirrors, the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror. The image size is the same as the object size. The law also implies that mirror images are parity inverted, which we perceive as a left-right inversion. Images formed from reflection in two (or any even number of) mirrors are not parity inverted. Corner reflectors produce reflected rays that travel back in the direction from which the incident rays came. This is called retroreflection. Mirrors with curved surfaces can be modelled by ray tracing and using the law of reflection at each point on the surface. For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space. In particular, spherical mirrors exhibit spherical aberration. Curved mirrors can form images with a magnification greater than or less than one, and the magnification can be negative, indicating that the image is inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real and can be projected onto a screen. ==== Refractions ==== Refraction occurs when light travels through an area of space that has a changing index of refraction; this principle allows for lenses and the focusing of light. The simplest case of refraction occurs when there is an interface between a uniform medium with index of refraction n1 and another medium with index of refraction n2. In such situations, Snell's Law describes the resulting deflection of the light ray: n 1 sin ⁡ θ 1 = n 2 sin ⁡ θ 2 {\displaystyle n_{1}\sin \theta _{1}=n_{2}\sin \theta _{2}} where θ1 and θ2 are the angles between the normal (to the interface) and the incident and refracted waves, respectively. The index of refraction of a medium is related to the speed, v, of light in that medium by n = c / v , {\displaystyle n=c/v,} where c is the speed of light in vacuum. Snell's Law can be used to predict the deflection of light rays as they pass through linear media as long as the indexes of refraction and the geometry of the media are known. 
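As a numerical companion to Snell's law as stated above, the short Python sketch below computes the refraction angle at a single interface and reports when no refracted ray exists (total internal reflection, discussed shortly). The refractive indices used — about 1.5 for a generic glass and 1.0 for air — are illustrative assumptions rather than values quoted in this article.

```python
import math

def refraction_angle(n1: float, n2: float, theta1_deg: float):
    """Refracted angle in degrees from Snell's law, n1*sin(t1) = n2*sin(t2),
    or None when the ray is totally internally reflected."""
    s = n1 / n2 * math.sin(math.radians(theta1_deg))
    if abs(s) > 1.0:
        return None                      # no transmitted ray
    return math.degrees(math.asin(s))

# Illustrative values: light inside glass (n about 1.5) meeting a boundary with air.
print(refraction_angle(1.5, 1.0, 30.0))  # ~48.6 degrees: ray bends away from the normal
print(refraction_angle(1.5, 1.0, 45.0))  # None: beyond the ~41.8 degree critical angle
```

The same relation, applied surface by surface, underlies ray tracing through more complicated geometries.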
For example, the propagation of light through a prism results in the light ray being deflected depending on the shape and orientation of the prism. In most materials, the index of refraction varies with the frequency of the light, known as dispersion. Taking this into account, Snell's Law can be used to predict how a prism will disperse light into a spectrum. The discovery of this phenomenon when passing light through a prism is famously attributed to Isaac Newton. Some media have an index of refraction which varies gradually with position and, therefore, light rays in the medium are curved. This effect is responsible for mirages seen on hot days: a change in index of refraction air with height causes light rays to bend, creating the appearance of specular reflections in the distance (as if on the surface of a pool of water). Optical materials with varying indexes of refraction are called gradient-index (GRIN) materials. Such materials are used to make gradient-index optics. For light rays travelling from a material with a high index of refraction to a material with a low index of refraction, Snell's law predicts that there is no θ2 when θ1 is large. In this case, no transmission occurs; all the light is reflected. This phenomenon is called total internal reflection and allows for fibre optics technology. As light travels down an optical fibre, it undergoes total internal reflection allowing for essentially no light to be lost over the length of the cable. ===== Lenses ===== A device that produces converging or diverging light rays due to refraction is known as a lens. Lenses are characterized by their focal length: a converging lens has positive focal length, while a diverging lens has negative focal length. Smaller focal length indicates that the lens has a stronger converging or diverging effect. The focal length of a simple lens in air is given by the lensmaker's equation. Ray tracing can be used to show how images are formed by a lens. For a thin lens in air, the location of the image is given by the simple equation 1 S 1 + 1 S 2 = 1 f , {\displaystyle {\frac {1}{S_{1}}}+{\frac {1}{S_{2}}}={\frac {1}{f}},} where S1 is the distance from the object to the lens, S2 is the distance from the lens to the image, and f is the focal length of the lens. In the sign convention used here, the object and image distances are positive if the object and image are on opposite sides of the lens. Incoming parallel rays are focused by a converging lens onto a spot one focal length from the lens, on the far side of the lens. This is called the rear focal point of the lens. Rays from an object at a finite distance are focused further from the lens than the focal distance; the closer the object is to the lens, the further the image is from the lens. With diverging lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at a spot one focal length in front of the lens. This is the lens's front focal point. Rays from an object at a finite distance are associated with a virtual image that is closer to the lens than the focal point, and on the same side of the lens as the object. The closer the object is to the lens, the closer the virtual image is to the lens. As with mirrors, upright images produced by a single lens are virtual, while inverted images are real. Lenses suffer from aberrations that distort images. 
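Before turning to aberrations, a brief numerical sketch of the thin-lens equation quoted above may be helpful. The Python function below returns the image distance and, using the common convention m = −S2/S1 (a convention assumed here, not stated in the text), the lateral magnification; the focal length and object distances are arbitrary example figures.

```python
def thin_lens_image(s1: float, f: float):
    """Image distance s2 and magnification for a thin lens in air, from
    1/s1 + 1/s2 = 1/f with object and image distances positive on opposite sides."""
    if s1 == f:
        return float("inf"), float("inf")   # object at the focal point: rays emerge parallel
    s2 = 1.0 / (1.0 / f - 1.0 / s1)
    magnification = -s2 / s1                # negative magnification = inverted image
    return s2, magnification

# Illustrative numbers for a converging lens of 50 mm focal length.
print(thin_lens_image(200.0, 50.0))   # ~(66.7, -0.33): real, inverted, reduced image
print(thin_lens_image(30.0, 50.0))    # (-75.0, 2.5): virtual, upright, magnified image
```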
Monochromatic aberrations occur because the geometry of the lens does not perfectly direct rays from each object point to a single point on the image, while chromatic aberration occurs because the index of refraction of the lens varies with the wavelength of the light. === Physical optics === In physical optics, light is considered to propagate as waves. This model predicts phenomena such as interference and diffraction, which are not explained by geometric optics. The speed of light waves in air is approximately 3.0×108 m/s (exactly 299,792,458 m/s in vacuum). The wavelength of visible light waves varies between 400 and 700 nm, but the term "light" is also often applied to infrared (0.7–300 μm) and ultraviolet radiation (10–400 nm). The wave model can be used to make predictions about how an optical system will behave without requiring an explanation of what is "waving" in what medium. Until the middle of the 19th century, most physicists believed in an "ethereal" medium in which the light disturbance propagated. The existence of electromagnetic waves was predicted in 1865 by Maxwell's equations. These waves propagate at the speed of light and have varying electric and magnetic fields which are orthogonal to one another, and also to the direction of propagation of the waves. Light waves are now generally treated as electromagnetic waves except when quantum mechanical effects have to be considered. ==== Modelling and design of optical systems using physical optics ==== Many simplified approximations are available for analysing and designing optical systems. Most of these use a single scalar quantity to represent the electric field of the light wave, rather than using a vector model with orthogonal electric and magnetic vectors. The Huygens–Fresnel equation is one such model. This was derived empirically by Fresnel in 1815, based on Huygens' hypothesis that each point on a wavefront generates a secondary spherical wavefront, which Fresnel combined with the principle of superposition of waves. The Kirchhoff diffraction equation, which is derived using Maxwell's equations, puts the Huygens-Fresnel equation on a firmer physical foundation. Examples of the application of Huygens–Fresnel principle can be found in the articles on diffraction and Fraunhofer diffraction. More rigorous models, involving the modelling of both electric and magnetic fields of the light wave, are required when dealing with materials whose electric and magnetic properties affect the interaction of light with the material. For instance, the behaviour of a light wave interacting with a metal surface is quite different from what happens when it interacts with a dielectric material. A vector model must also be used to model polarised light. Numerical modeling techniques such as the finite element method, the boundary element method and the transmission-line matrix method can be used to model the propagation of light in systems which cannot be solved analytically. Such models are computationally demanding and are normally only used to solve small-scale problems that require accuracy beyond that which can be achieved with analytical solutions. All of the results from geometrical optics can be recovered using the techniques of Fourier optics which apply many of the same mathematical and analytical techniques used in acoustic engineering and signal processing. Gaussian beam propagation is a simple paraxial physical optics model for the propagation of coherent radiation such as laser beams. 
This technique partially accounts for diffraction, allowing accurate calculations of the rate at which a laser beam expands with distance, and the minimum size to which the beam can be focused. Gaussian beam propagation thus bridges the gap between geometric and physical optics. ==== Superposition and interference ==== In the absence of nonlinear effects, the superposition principle can be used to predict the shape of interacting waveforms through the simple addition of the disturbances. This interaction of waves to produce a resulting pattern is generally termed "interference" and can result in a variety of outcomes. If two waves of the same wavelength and frequency are in phase, both the wave crests and wave troughs align. This results in constructive interference and an increase in the amplitude of the wave, which for light is associated with a brightening of the waveform in that location. Alternatively, if the two waves of the same wavelength and frequency are out of phase, then the wave crests will align with wave troughs and vice versa. This results in destructive interference and a decrease in the amplitude of the wave, which for light is associated with a dimming of the waveform at that location. See below for an illustration of this effect. Since the Huygens–Fresnel principle states that every point of a wavefront is associated with the production of a new disturbance, it is possible for a wavefront to interfere with itself constructively or destructively at different locations producing bright and dark fringes in regular and predictable patterns. Interferometry is the science of measuring these patterns, usually as a means of making precise determinations of distances or angular resolutions. The Michelson interferometer was a famous instrument which used interference effects to accurately measure the speed of light. The appearance of thin films and coatings is directly affected by interference effects. Antireflective coatings use destructive interference to reduce the reflectivity of the surfaces they coat, and can be used to minimise glare and unwanted reflections. The simplest case is a single layer with a thickness of one-fourth the wavelength of incident light. The reflected wave from the top of the film and the reflected wave from the film/material interface are then exactly 180° out of phase, causing destructive interference. The waves are only exactly out of phase for one wavelength, which would typically be chosen to be near the centre of the visible spectrum, around 550 nm. More complex designs using multiple layers can achieve low reflectivity over a broad band, or extremely low reflectivity at a single wavelength. Constructive interference in thin films can create a strong reflection of light in a range of wavelengths, which can be narrow or broad depending on the design of the coating. These films are used to make dielectric mirrors, interference filters, heat reflectors, and filters for colour separation in colour television cameras. This interference effect is also what causes the colourful rainbow patterns seen in oil slicks. ==== Diffraction and optical resolution ==== Diffraction is the process by which light interference is most commonly observed. The effect was first described in 1665 by Francesco Maria Grimaldi, who also coined the term from the Latin diffringere 'to break into pieces'. 
Later that century, Robert Hooke and Isaac Newton also described phenomena now known to be diffraction in Newton's rings while James Gregory recorded his observations of diffraction patterns from bird feathers. The first physical optics model of diffraction that relied on the Huygens–Fresnel principle was developed in 1803 by Thomas Young in his interference experiments with the interference patterns of two closely spaced slits. Young showed that his results could only be explained if the two slits acted as two unique sources of waves rather than corpuscles. In 1815 and 1818, Augustin-Jean Fresnel firmly established the mathematics of how wave interference can account for diffraction. The simplest physical models of diffraction use equations that describe the angular separation of light and dark fringes due to light of a particular wavelength (λ). In general, the equation takes the form m λ = d sin ⁡ θ {\displaystyle m\lambda =d\sin \theta } where d is the separation between two wavefront sources (in the case of Young's experiments, it was two slits), θ is the angular separation between the central fringe and the m-th order fringe, where the central maximum is m = 0. This equation is modified slightly to take into account a variety of situations such as diffraction through a single gap, diffraction through multiple slits, or diffraction through a diffraction grating that contains a large number of slits at equal spacing. More complicated models of diffraction require working with the mathematics of Fresnel or Fraunhofer diffraction. X-ray diffraction makes use of the fact that atoms in a crystal have regular spacing at distances that are on the order of one angstrom. To see diffraction patterns, x-rays with similar wavelengths to that spacing are passed through the crystal. Since crystals are three-dimensional objects rather than two-dimensional gratings, the associated diffraction pattern varies in two directions according to Bragg reflection, with the associated bright spots occurring in unique patterns and d being twice the spacing between atoms. Diffraction effects limit the ability of an optical detector to optically resolve separate light sources. In general, light that is passing through an aperture will experience diffraction and the best images that can be created (as described in diffraction-limited optics) appear as a central spot with surrounding bright rings, separated by dark nulls; this pattern is known as an Airy pattern, and the central bright lobe as an Airy disk. The size of such a disk is given by sin ⁡ θ = 1.22 λ D {\displaystyle \sin \theta =1.22{\frac {\lambda }{D}}} where θ is the angular resolution, λ is the wavelength of the light, and D is the diameter of the lens aperture. If the angular separation of the two points is significantly less than the Airy disk angular radius, then the two points cannot be resolved in the image, but if their angular separation is much greater than this, distinct images of the two points are formed and they can therefore be resolved. Rayleigh defined the somewhat arbitrary "Rayleigh criterion" that two points whose angular separation is equal to the Airy disk radius (measured to first null, that is, to the first place where no light is seen) can be considered to be resolved. It can be seen that the greater the diameter of the lens or its aperture, the finer the resolution. Interferometry, with its ability to mimic extremely large baseline apertures, allows for the greatest angular resolution possible. 
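The two relations just given — the fringe equation mλ = d sin θ and the Rayleigh criterion for the Airy disk — lend themselves to quick numerical checks. The Python sketch below evaluates both; the 550 nm wavelength matches the mid-visible value mentioned earlier, while the 10 μm slit spacing and 100 mm aperture are invented example figures.

```python
import math

WAVELENGTH = 550e-9   # metres; a mid-visible wavelength

def fringe_angle(order: int, slit_spacing: float, wavelength: float = WAVELENGTH):
    """Angle in degrees of the m-th bright fringe, from m*lambda = d*sin(theta);
    returns None when no such order exists."""
    s = order * wavelength / slit_spacing
    return math.degrees(math.asin(s)) if abs(s) <= 1.0 else None

def rayleigh_limit(aperture_diameter: float, wavelength: float = WAVELENGTH):
    """Angular resolution in radians, from sin(theta) = 1.22 * lambda / D."""
    return math.asin(1.22 * wavelength / aperture_diameter)

print(fringe_angle(1, 10e-6))   # ~3.15 degrees for the first-order fringe
print(rayleigh_limit(0.1))      # ~6.7e-6 rad for a 100 mm aperture, roughly 1.4 arcseconds
```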
For astronomical imaging, the atmosphere prevents optimal resolution from being achieved in the visible spectrum due to the atmospheric scattering and dispersion which cause stars to twinkle. Astronomers refer to this effect as the quality of astronomical seeing. Techniques known as adaptive optics have been used to eliminate the atmospheric disruption of images and achieve results that approach the diffraction limit. ==== Dispersion and scattering ==== Refractive processes take place in the physical optics limit, where the wavelength of light is similar to other distances, as a kind of scattering. The simplest type of scattering is Thomson scattering which occurs when electromagnetic waves are deflected by single particles. In the limit of Thomson scattering, in which the wavelike nature of light is evident, light is dispersed independent of the frequency, in contrast to Compton scattering which is frequency-dependent and strictly a quantum mechanical process, involving the nature of light as particles. In a statistical sense, elastic scattering of light by numerous particles much smaller than the wavelength of the light is a process known as Rayleigh scattering while the similar process for scattering by particles that are similar or larger in wavelength is known as Mie scattering with the Tyndall effect being a commonly observed result. A small proportion of light scattering from atoms or molecules may undergo Raman scattering, wherein the frequency changes due to excitation of the atoms and molecules. Brillouin scattering occurs when the frequency of light changes due to local changes with time and movements of a dense material. Dispersion occurs when different frequencies of light have different phase velocities, due either to material properties (material dispersion) or to the geometry of an optical waveguide (waveguide dispersion). The most familiar form of dispersion is a decrease in index of refraction with increasing wavelength, which is seen in most transparent materials. This is called "normal dispersion". It occurs in all dielectric materials, in wavelength ranges where the material does not absorb light. In wavelength ranges where a medium has significant absorption, the index of refraction can increase with wavelength. This is called "anomalous dispersion". The separation of colours by a prism is an example of normal dispersion. At the surfaces of the prism, Snell's law predicts that light incident at an angle θ to the normal will be refracted at an angle arcsin(sin (θ) / n). Thus, blue light, with its higher refractive index, is bent more strongly than red light, resulting in the well-known rainbow pattern. Material dispersion is often characterised by the Abbe number, which gives a simple measure of dispersion based on the index of refraction at three specific wavelengths. Waveguide dispersion is dependent on the propagation constant. Both kinds of dispersion cause changes in the group characteristics of the wave, the features of the wave packet that change with the same frequency as the amplitude of the electromagnetic wave. "Group velocity dispersion" manifests as a spreading-out of the signal "envelope" of the radiation and can be quantified with a group dispersion delay parameter: D = 1 v g 2 d v g d λ {\displaystyle D={\frac {1}{v_{\mathrm {g} }^{2}}}{\frac {dv_{\mathrm {g} }}{d\lambda }}} where vg is the group velocity. 
For a uniform medium, the group velocity is v g = c ( n − λ d n d λ ) − 1 {\displaystyle v_{\mathrm {g} }=c\left(n-\lambda {\frac {dn}{d\lambda }}\right)^{-1}} where n is the index of refraction and c is the speed of light in a vacuum. This gives a simpler form for the dispersion delay parameter: D = − λ c d 2 n d λ 2 . {\displaystyle D=-{\frac {\lambda }{c}}\,{\frac {d^{2}n}{d\lambda ^{2}}}.} If D is less than zero, the medium is said to have positive dispersion or normal dispersion. If D is greater than zero, the medium has negative dispersion. If a light pulse is propagated through a normally dispersive medium, the result is the higher frequency components slow down more than the lower frequency components. The pulse therefore becomes positively chirped, or up-chirped, increasing in frequency with time. This causes the spectrum coming out of a prism to appear with red light the least refracted and blue/violet light the most refracted. Conversely, if a pulse travels through an anomalously (negatively) dispersive medium, high-frequency components travel faster than the lower ones, and the pulse becomes negatively chirped, or down-chirped, decreasing in frequency with time. The result of group velocity dispersion, whether negative or positive, is ultimately temporal spreading of the pulse. This makes dispersion management extremely important in optical communications systems based on optical fibres, since if dispersion is too high, a group of pulses representing information will each spread in time and merge, making it impossible to extract the signal. ==== Polarisation ==== Polarisation is a general property of waves that describes the orientation of their oscillations. For transverse waves such as many electromagnetic waves, it describes the orientation of the oscillations in the plane perpendicular to the wave's direction of travel. The oscillations may be oriented in a single direction (linear polarisation), or the oscillation direction may rotate as the wave travels (circular or elliptical polarisation). Circularly polarised waves can rotate rightward or leftward in the direction of travel, and which of those two rotations is present in a wave is called the wave's chirality. The typical way to consider polarisation is to keep track of the orientation of the electric field vector as the electromagnetic wave propagates. The electric field vector of a plane wave may be arbitrarily divided into two perpendicular components labeled x and y (with z indicating the direction of travel). The shape traced out in the x-y plane by the electric field vector is a Lissajous figure that describes the polarisation state. The following figures show some examples of the evolution of the electric field vector (blue), with time (the vertical axes), at a particular point in space, along with its x and y components (red/left and green/right), and the path traced by the vector in the plane (purple): The same evolution would occur when looking at the electric field at a particular time while evolving the point in space, along the direction opposite to propagation. In the leftmost figure above, the x and y components of the light wave are in phase. In this case, the ratio of their strengths is constant, so the direction of the electric vector (the vector sum of these two components) is constant. Since the tip of the vector traces out a single line in the plane, this special case is called linear polarisation. The direction of this line depends on the relative amplitudes of the two components. 
In the middle figure, the two orthogonal components have the same amplitudes and are 90° out of phase. In this case, one component is zero when the other component is at maximum or minimum amplitude. There are two possible phase relationships that satisfy this requirement: the x component can be 90° ahead of the y component or it can be 90° behind the y component. In this special case, the electric vector traces out a circle in the plane, so this polarisation is called circular polarisation. The rotation direction in the circle depends on which of the two-phase relationships exists and corresponds to right-hand circular polarisation and left-hand circular polarisation. In all other cases, where the two components either do not have the same amplitudes and/or their phase difference is neither zero nor a multiple of 90°, the polarisation is called elliptical polarisation because the electric vector traces out an ellipse in the plane (the polarisation ellipse). This is shown in the above figure on the right. Detailed mathematics of polarisation is done using Jones calculus and is characterised by the Stokes parameters. ===== Changing polarisation ===== Media that have different indexes of refraction for different polarisation modes are called birefringent. Well known manifestations of this effect appear in optical wave plates/retarders (linear modes) and in Faraday rotation/optical rotation (circular modes). If the path length in the birefringent medium is sufficient, plane waves will exit the material with a significantly different propagation direction, due to refraction. For example, this is the case with macroscopic crystals of calcite, which present the viewer with two offset, orthogonally polarised images of whatever is viewed through them. It was this effect that provided the first discovery of polarisation, by Erasmus Bartholinus in 1669. In addition, the phase shift, and thus the change in polarisation state, is usually frequency dependent, which, in combination with dichroism, often gives rise to bright colours and rainbow-like effects. In mineralogy, such properties, known as pleochroism, are frequently exploited for the purpose of identifying minerals using polarisation microscopes. Additionally, many plastics that are not normally birefringent will become so when subject to mechanical stress, a phenomenon which is the basis of photoelasticity. Non-birefringent methods, to rotate the linear polarisation of light beams, include the use of prismatic polarisation rotators which use total internal reflection in a prism set designed for efficient collinear transmission. Media that reduce the amplitude of certain polarisation modes are called dichroic, with devices that block nearly all of the radiation in one mode known as polarising filters or simply "polarisers". Malus' law, which is named after Étienne-Louis Malus, says that when a perfect polariser is placed in a linear polarised beam of light, the intensity, I, of the light that passes through is given by I = I 0 cos 2 ⁡ θ i , {\displaystyle I=I_{0}\cos ^{2}\theta _{\mathrm {i} },} where I0 is the initial intensity, and θi is the angle between the light's initial polarisation direction and the axis of the polariser. A beam of unpolarised light can be thought of as containing a uniform mixture of linear polarisations at all possible angles. Since the average value of cos2 θ is 1/2, the transmission coefficient becomes I I 0 = 1 2 . 
{\displaystyle {\frac {I}{I_{0}}}={\frac {1}{2}}\,.} In practice, some light is lost in the polariser and the actual transmission of unpolarised light will be somewhat lower than this, around 38% for Polaroid-type polarisers but considerably higher (>49.9%) for some birefringent prism types. In addition to birefringence and dichroism in extended media, polarisation effects can also occur at the (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected, with the ratio depending on the angle of incidence and the angle of refraction. In this way, physical optics recovers Brewster's angle. When light reflects from a thin film on a surface, interference between the reflections from the film's surfaces can produce polarisation in the reflected and transmitted light. ===== Natural light ===== Most sources of electromagnetic radiation contain a large number of atoms or molecules that emit light. The orientation of the electric fields produced by these emitters may not be correlated, in which case the light is said to be unpolarised. If there is partial correlation between the emitters, the light is partially polarised. If the polarisation is consistent across the spectrum of the source, partially polarised light can be described as a superposition of a completely unpolarised component, and a completely polarised one. One may then describe the light in terms of the degree of polarisation, and the parameters of the polarisation ellipse. Light reflected by shiny transparent materials is partly or fully polarised, except when the light is normal (perpendicular) to the surface. It was this effect that allowed the mathematician Étienne-Louis Malus to make the measurements that allowed for his development of the first mathematical models for polarised light. Polarisation occurs when light is scattered in the atmosphere. The scattered light produces the brightness and colour in clear skies. This partial polarisation of scattered light can be taken advantage of using polarising filters to darken the sky in photographs. Optical polarisation is principally of importance in chemistry due to circular dichroism and optical rotation (circular birefringence) exhibited by optically active (chiral) molecules. == Modern optics == Modern optics encompasses the areas of optical science and engineering that became popular in the 20th century. These areas of optical science typically relate to the electromagnetic or quantum properties of light but do include other topics. A major subfield of modern optics, quantum optics, deals with specifically quantum mechanical properties of light. Quantum optics is not just theoretical; some modern devices, such as lasers, have principles of operation that depend on quantum mechanics. Light detectors, such as photomultipliers and channeltrons, respond to individual photons. Electronic image sensors, such as CCDs, exhibit shot noise corresponding to the statistics of individual photon events. Light-emitting diodes and photovoltaic cells, too, cannot be understood without quantum mechanics. In the study of these devices, quantum optics often overlaps with quantum electronics. Specialty areas of optics research include the study of how light interacts with specific materials as in crystal optics and metamaterials. 
Other research focuses on the phenomenology of electromagnetic waves as in singular optics, non-imaging optics, non-linear optics, statistical optics, and radiometry. Additionally, computer engineers have taken an interest in integrated optics, machine vision, and photonic computing as possible components of the "next generation" of computers. Today, the pure science of optics is called optical science or optical physics to distinguish it from applied optical sciences, which are referred to as optical engineering. Prominent subfields of optical engineering include illumination engineering, photonics, and optoelectronics with practical applications like lens design, fabrication and testing of optical components, and image processing. Some of these fields overlap, with nebulous boundaries between the subjects' terms that mean slightly different things in different parts of the world and in different areas of industry. A professional community of researchers in nonlinear optics has developed in the last several decades due to advances in laser technology. === Lasers === A laser is a device that emits light, a kind of electromagnetic radiation, through a process called stimulated emission. The term laser is an acronym for 'Light Amplification by Stimulated Emission of Radiation'. Laser light is usually spatially coherent, which means that the light either is emitted in a narrow, low-divergence beam, or can be converted into one with the help of optical components such as lenses. Because the microwave equivalent of the laser, the maser, was developed first, devices that emit microwave and radio frequencies are usually called masers. The first working laser was demonstrated on 16 May 1960 by Theodore Maiman at Hughes Research Laboratories. When first invented, they were called "a solution looking for a problem". Since then, lasers have become a multibillion-dollar industry, finding utility in thousands of highly varied applications. The first application of lasers visible in the daily lives of the general population was the supermarket barcode scanner, introduced in 1974. The laserdisc player, introduced in 1978, was the first successful consumer product to include a laser, but the compact disc player was the first laser-equipped device to become truly common in consumers' homes, beginning in 1982. These optical storage devices use a semiconductor laser less than a millimetre wide to scan the surface of the disc for data retrieval. Fibre-optic communication relies on lasers to transmit large amounts of information at the speed of light. Other common applications of lasers include laser printers and laser pointers. Lasers are used in medicine in areas such as bloodless surgery, laser eye surgery, and laser capture microdissection and in military applications such as missile defence systems, electro-optical countermeasures (EOCM), and lidar. Lasers are also used in holograms, bubblegrams, laser light shows, and laser hair removal. === Kapitsa–Dirac effect === The Kapitsa–Dirac effect causes beams of particles to diffract as the result of meeting a standing wave of light. Light can be used to position matter using various phenomena (see optical tweezers). == Applications == Optics is part of everyday life. The ubiquity of visual systems in biology indicates the central role optics plays as the science of one of the five senses. Many people benefit from eyeglasses or contact lenses, and optics are integral to the functioning of many consumer goods including cameras. 
Rainbows and mirages are examples of optical phenomena. Optical communication provides the backbone for both the Internet and modern telephony. === Human eye === The human eye functions by focusing light onto a layer of photoreceptor cells called the retina, which forms the inner lining of the back of the eye. The focusing is accomplished by a series of transparent media. Light entering the eye passes first through the cornea, which provides much of the eye's optical power. The light then continues through the fluid just behind the cornea—the anterior chamber, then passes through the pupil. The light then passes through the lens, which focuses the light further and allows adjustment of focus. The light then passes through the main body of fluid in the eye—the vitreous humour, and reaches the retina. The cells in the retina line the back of the eye, except for where the optic nerve exits; this results in a blind spot. There are two types of photoreceptor cells, rods and cones, which are sensitive to different aspects of light. Rod cells are sensitive to the intensity of light over a wide frequency range, thus are responsible for black-and-white vision. Rod cells are not present on the fovea, the area of the retina responsible for central vision, and are not as responsive as cone cells to spatial and temporal changes in light. There are, however, twenty times more rod cells than cone cells in the retina because the rod cells are present across a wider area. Because of their wider distribution, rods are responsible for peripheral vision. In contrast, cone cells are less sensitive to the overall intensity of light, but come in three varieties that are sensitive to different frequency-ranges and thus are used in the perception of colour and photopic vision. Cone cells are highly concentrated in the fovea and have a high visual acuity meaning that they are better at spatial resolution than rod cells. Since cone cells are not as sensitive to dim light as rod cells, most night vision is limited to rod cells. Likewise, since cone cells are in the fovea, central vision (including the vision needed to do most reading, fine detail work such as sewing, or careful examination of objects) is done by cone cells. Ciliary muscles around the lens allow the eye's focus to be adjusted. This process is known as accommodation. The near point and far point define the nearest and farthest distances from the eye at which an object can be brought into sharp focus. For a person with normal vision, the far point is located at infinity. The near point's location depends on how much the muscles can increase the curvature of the lens, and how inflexible the lens has become with age. Optometrists, ophthalmologists, and opticians usually consider an appropriate near point to be closer than normal reading distance—approximately 25 cm. Defects in vision can be explained using optical principles. As people age, the lens becomes less flexible and the near point recedes from the eye, a condition known as presbyopia. Similarly, people suffering from hyperopia cannot decrease the focal length of their lens enough to allow for nearby objects to be imaged on their retina. Conversely, people who cannot increase the focal length of their lens enough to allow for distant objects to be imaged on the retina suffer from myopia and have a far point that is considerably closer than infinity. A condition known as astigmatism results when the cornea is not spherical but instead is more curved in one direction. 
This causes horizontally extended objects to be focused on different parts of the retina than vertically extended objects, and results in distorted images. All of these conditions can be corrected using corrective lenses. For presbyopia and hyperopia, a converging lens provides the extra curvature necessary to bring the near point closer to the eye while for myopia a diverging lens provides the curvature necessary to send the far point to infinity. Astigmatism is corrected with a cylindrical surface lens that curves more strongly in one direction than in another, compensating for the non-uniformity of the cornea. The optical power of corrective lenses is measured in diopters, a value equal to the reciprocal of the focal length measured in metres; with a positive focal length corresponding to a converging lens and a negative focal length corresponding to a diverging lens. For lenses that correct for astigmatism as well, three numbers are given: one for the spherical power, one for the cylindrical power, and one for the angle of orientation of the astigmatism. ==== Visual effects ==== Optical illusions (also called visual illusions) are characterized by visually perceived images that differ from objective reality. The information gathered by the eye is processed in the brain to give a percept that differs from the object being imaged. Optical illusions can be the result of a variety of phenomena including physical effects that create images that are different from the objects that make them, the physiological effects on the eyes and brain of excessive stimulation (e.g. brightness, tilt, colour, movement), and cognitive illusions where the eye and brain make unconscious inferences. Cognitive illusions include some which result from the unconscious misapplication of certain optical principles. For example, the Ames room, Hering, Müller-Lyer, Orbison, Ponzo, Sander, and Wundt illusions all rely on the suggestion of the appearance of distance by using converging and diverging lines, in the same way that parallel light rays (or indeed any set of parallel lines) appear to converge at a vanishing point at infinity in two-dimensionally rendered images with artistic perspective. This suggestion is also responsible for the famous moon illusion where the moon, despite having essentially the same angular size, appears much larger near the horizon than it does at zenith. This illusion so confounded Ptolemy that he incorrectly attributed it to atmospheric refraction when he described it in his treatise, Optics. Another type of optical illusion exploits broken patterns to trick the mind into perceiving symmetries or asymmetries that are not present. Examples include the café wall, Ehrenstein, Fraser spiral, Poggendorff, and Zöllner illusions. Related, but not strictly illusions, are patterns that occur due to the superimposition of periodic structures. For example, transparent tissues with a grid structure produce shapes known as moiré patterns, while the superimposition of periodic transparent patterns comprising parallel opaque lines or curves produces line moiré patterns. ==== Optical instruments ==== Single lenses have a variety of applications including photographic lenses, corrective lenses, and magnifying glasses while single mirrors are used in parabolic reflectors and rear-view mirrors. Combining a number of mirrors, prisms, and lenses produces compound optical instruments which have practical uses. 
For example, a periscope is simply two plane mirrors aligned to allow for viewing around obstructions. The most famous compound optical instruments in science are the microscope and the telescope which were both invented by the Dutch in the late 16th century. Microscopes were first developed with just two lenses: an objective lens and an eyepiece. The objective lens is essentially a magnifying glass and was designed with a very small focal length while the eyepiece generally has a longer focal length. This has the effect of producing magnified images of close objects. Generally, an additional source of illumination is used since magnified images are dimmer due to the conservation of energy and the spreading of light rays over a larger surface area. Modern microscopes, known as compound microscopes have many lenses in them (typically four) to optimize the functionality and enhance image stability. A slightly different variety of microscope, the comparison microscope, looks at side-by-side images to produce a stereoscopic binocular view that appears three dimensional when used by humans. The first telescopes, called refracting telescopes, were also developed with a single objective and eyepiece lens. In contrast to the microscope, the objective lens of the telescope was designed with a large focal length to avoid optical aberrations. The objective focuses an image of a distant object at its focal point which is adjusted to be at the focal point of an eyepiece of a much smaller focal length. The main goal of a telescope is not necessarily magnification, but rather the collection of light which is determined by the physical size of the objective lens. Thus, telescopes are normally indicated by the diameters of their objectives rather than by the magnification which can be changed by switching eyepieces. Because the magnification of a telescope is equal to the focal length of the objective divided by the focal length of the eyepiece, smaller focal-length eyepieces cause greater magnification. Since crafting large lenses is much more difficult than crafting large mirrors, most modern telescopes are reflecting telescopes, that is, telescopes that use a primary mirror rather than an objective lens. The same general optical considerations apply to reflecting telescopes that applied to refracting telescopes, namely, the larger the primary mirror, the more light collected, and the magnification is still equal to the focal length of the primary mirror divided by the focal length of the eyepiece. Professional telescopes generally do not have eyepieces and instead place an instrument (often a charge-coupled device) at the focal point instead. === Photography === The optics of photography involves both lenses and the medium in which the electromagnetic radiation is recorded, whether it be a plate, film, or charge-coupled device. Photographers must consider the reciprocity of the camera and the shot which is summarized by the relation Exposure ∝ ApertureArea × ExposureTime × SceneLuminance In other words, the smaller the aperture (giving greater depth of focus), the less light coming in, so the length of time has to be increased (leading to possible blurriness if motion occurs). An example of the use of the law of reciprocity is the Sunny 16 rule which gives a rough estimate for the settings needed to estimate the proper exposure in daylight. 
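As a rough numerical illustration of the reciprocity relation and the Sunny 16 rule just mentioned, the following Python sketch (not part of the article; the function names are invented, and the f-number N it uses is defined in the next paragraph) computes the suggested f/16 shutter time for a given film speed and lists exposure-equivalent aperture and shutter pairs, using the fact that exposure scales with aperture area (proportional to 1/N^2) times exposure time.

def sunny16_shutter_time(iso):
    """Suggested shutter time in seconds at f/16 in bright sunlight (Sunny 16 rule)."""
    return 1.0 / iso

def equivalent_exposures(n_base, t_base, stops):
    """List (f-number, shutter time) pairs giving the same exposure.
    Exposure is proportional to aperture area times time, i.e. t / N**2."""
    pairs = []
    for k in stops:
        n = n_base / (2 ** (k / 2))   # opening one stop divides N by sqrt(2)...
        t = t_base / (2 ** k)         # ...so the shutter time must be halved
        pairs.append((round(n, 1), t))
    return pairs

print(sunny16_shutter_time(100))                      # 0.01 s, i.e. 1/100 s at f/16
print(equivalent_exposures(16, 1 / 100, range(4)))    # f/16, f/11.3, f/8, f/5.7 with matching times

Halving the f-number quadruples the aperture area, so the shutter time must be quartered to hold the exposure constant, which is exactly what the listed pairs show.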
A camera's aperture is measured by a unitless number called the f-number or f-stop, f/#, often notated as N {\displaystyle N}, and given by f/# = N = f/D {\displaystyle f/\#=N={\frac {f}{D}}\ } where f {\displaystyle f} is the focal length, and D {\displaystyle D} is the diameter of the entrance pupil. By convention, "f/#" is treated as a single symbol, and specific values of f/# are written by replacing the number sign with the value. The two ways to increase the f-stop are to either decrease the diameter of the entrance pupil or change to a longer focal length (in the case of a zoom lens, this can be done by simply adjusting the lens). Higher f-numbers also have a larger depth of field due to the lens approaching the limit of a pinhole camera, which is able to focus all images perfectly, regardless of distance, but requires very long exposure times. The field of view that the lens will provide changes with the focal length of the lens. There are three basic classifications based on the relationship of the focal length of the lens to the diagonal size of the film or sensor of the camera: Normal lens: angle of view of about 50° (called normal because this angle is considered roughly equivalent to human vision) and a focal length approximately equal to the diagonal of the film or sensor. Wide-angle lens: angle of view wider than 60° and focal length shorter than a normal lens. Long focus lens: angle of view narrower than a normal lens. This is any lens with a focal length longer than the diagonal measure of the film or sensor. The most common type of long focus lens is the telephoto lens, a design that uses a special telephoto group to be physically shorter than its focal length. Modern zoom lenses may have some or all of these attributes. The absolute value for the exposure time required depends on how sensitive to light the medium being used is (measured by the film speed, or, for digital media, by the quantum efficiency). Early photography used media that had very low light sensitivity, and so exposure times had to be long even for very bright shots. As technology has improved, so has the light sensitivity of film cameras and digital cameras. Other results from physical and geometrical optics apply to camera optics. For example, the maximum resolution capability of a particular camera set-up is determined by the diffraction limit associated with the pupil size and given, roughly, by the Rayleigh criterion. === Atmospheric optics === The unique optical properties of the atmosphere cause a wide range of spectacular optical phenomena. The blue colour of the sky is a direct result of Rayleigh scattering, which redirects higher-frequency (blue) sunlight back into the field of view of the observer. Because blue light is scattered more easily than red light, the sun takes on a reddish hue when it is observed through a thick atmosphere, as during a sunrise or sunset. Additional particulate matter in the sky can scatter different colours at different angles, creating colourful glowing skies at dusk and dawn. Scattering off ice crystals and other particles in the atmosphere is responsible for halos, afterglows, coronas, rays of sunlight, and sun dogs. The variation in these kinds of phenomena is due to different particle sizes and geometries. Mirages are optical phenomena in which light rays are bent due to thermal variations in the refractive index of air, producing displaced or heavily distorted images of distant objects.
Other dramatic optical phenomena associated with this include the Novaya Zemlya effect where the sun appears to rise earlier than predicted with a distorted shape. A spectacular form of refraction occurs with a temperature inversion called the Fata Morgana where objects on the horizon or even beyond the horizon, such as islands, cliffs, ships or icebergs, appear elongated and elevated, like "fairy tale castles". Rainbows are the result of a combination of internal reflection and dispersive refraction of light in raindrops. A single reflection off the backs of an array of raindrops produces a rainbow with an angular size on the sky that ranges from 40° to 42° with red on the outside. Double rainbows are produced by two internal reflections with angular size of 50.5° to 54° with violet on the outside. Because rainbows are seen with the sun 180° away from the centre of the rainbow, rainbows are more prominent the closer the sun is to the horizon. == See also == Ion optics Important publications in optics List of optical topics List of textbooks in electromagnetism == References == === Works cited === === Further reading === Born, Max; Wolf, Emil (2002). Principles of Optics. Cambridge University Press. ISBN 978-1-139-64340-5. Fowles, Grant R. (1975). Introduction to Modern Optics (4th ed.). Addison-Wesley Longman. Lipson, Stephen G.; Lipson, Henry; Tannhauser, David Stefan (1995). Optical Physics. Cambridge University Press. ISBN 978-0-521-43631-1. Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th, Illustrated ed.). Belmont, California: Thomson-Brooks/Cole. ISBN 978-0-534-40842-8. Tipler, Paul A.; Mosca, Gene (2004). Physics for Scientists and Engineers: Electricity, Magnetism, Light, and Elementary Modern Physics. Vol. 2. W. H. Freeman. ISBN 978-0-7167-0810-0. == External links == Relevant discussions Optics on In Our Time at the BBC Textbooks and tutorials Light and Matter – an open-source textbook, containing a treatment of optics in ch. 28–32 Optics2001 – Optics library and community Fundamental Optics – Melles Griot Technical Guide Physics of Light and Optics – Brigham Young University Undergraduate Book Optics for PV – a step-by-step introduction to classical optics Further reading Optics and photonics: Physics enhancing our lives by Institute of Physics publications Societies
Wikipedia/optics
"A Dynamical Theory of the Electromagnetic Field" is a paper by James Clerk Maxwell on electromagnetism, published in 1865. Physicist Freeman Dyson called the publishing of the paper the "most important event of the nineteenth century in the history of the physical sciences". The paper was key in establishing the classical theory of electromagnetism. Maxwell derives an electromagnetic wave equation with a velocity for light in close agreement with measurements made by experiment, and also deduces that light is an electromagnetic wave. == Publication == Following standard procedure for the time, the paper was first read to the Royal Society on 8 December 1864, having been sent by Maxwell to the society on 27 October. It then underwent peer review, being sent to William Thomson (later Lord Kelvin) on 24 December 1864. It was then sent to George Gabriel Stokes, the Society's physical sciences secretary, on 23 March 1865. It was approved for publication in the Philosophical Transactions of the Royal Society on 15 June 1865, by the Committee of Papers (essentially the society's governing council) and sent to the printer the following day (16 June). During this period, Philosophical Transactions was only published as a bound volume once a year, and would have been prepared for the society's anniversary day on 30 November (the exact date is not recorded). However, the printer would have prepared and delivered to Maxwell offprints, for the author to distribute as he wished, soon after 16 June. == Maxwell's original equations == In part III of the paper, which is entitled "General Equations of the Electromagnetic Field", Maxwell formulated twenty equations which were to become known as Maxwell's equations, until this term became applied instead to a vectorized set of four equations selected in 1884, which had all appeared in his 1861 paper "On Physical Lines of Force". Heaviside's versions of Maxwell's equations are distinct by virtue of the fact that they are written in modern vector notation. They actually only contain one of the original eight—equation "G" (Gauss's Law). Another of Heaviside's four equations is an amalgamation of Maxwell's law of total currents (equation "A") with Ampère's circuital law (equation "C"). This amalgamation, which Maxwell himself had actually originally made at equation (112) in "On Physical Lines of Force", is the one that modifies Ampère's Circuital Law to include Maxwell's displacement current. === Heaviside's equations === Eighteen of Maxwell's twenty original equations can be vectorized into six equations, labeled (A) to (F) below, each of which represents a group of three original equations in component form. The 19th and 20th of Maxwell's component equations appear as (G) and (H) below, making a total of eight vector equations. These are listed below in Maxwell's original order, designated by the letters that Maxwell assigned to them in his 1865 paper. (A) The law of total currents (B) Definition of the magnetic potential (C) Ampère's circuital law (D) The Lorentz force and Faraday's law of induction (E) The electric elasticity equation (F) Ohm's law (G) Gauss's law (H) Equation of continuity of charge Notation Maxwell did not consider completely general materials; his initial formulation used linear, isotropic, nondispersive media with permittivity ϵ and permeability μ, although he also discussed the possibility of anisotropic materials. 
Gauss's law for magnetism (∇⋅ B = 0) is not included in the above list, but follows directly from equation (B) by taking divergences (because the divergence of the curl is zero). Substituting (A) into (C) yields the familiar differential form of the Maxwell-Ampère law. Equation (D) implicitly contains the Lorentz force law and the differential form of Faraday's law of induction. For a static magnetic field, ∂ A / ∂ t {\displaystyle \partial \mathbf {A} /\partial t} vanishes, and the electric field E becomes conservative and is given by −∇ϕ, so that (D) reduces to This is simply the Lorentz force law on a per-unit-charge basis — although Maxwell's equation (D) first appeared at equation (77) in "On Physical Lines of Force" in 1861, 34 years before Lorentz derived his force law, which is now usually presented as a supplement to the four "Maxwell's equations". The cross-product term in the Lorentz force law is the source of the so-called motional emf in electric generators (see also Moving magnet and conductor problem). Where there is no motion through the magnetic field — e.g., in transformers — we can drop the cross-product term, and the force per unit charge (called f) reduces to the electric field E, so that Maxwell's equation (D) reduces to Taking curls, noting that the curl of a gradient is zero, we obtain which is the differential form of Faraday's law. Thus the three terms on the right side of equation (D) may be described, from left to right, as the motional term, the transformer term, and the conservative term. In deriving the electromagnetic wave equation, Maxwell considers the situation only from the rest frame of the medium, and accordingly drops the cross-product term. But he still works from equation (D), in contrast to modern textbooks which tend to work from Faraday's law (see below). The constitutive equations (E) and (F) are now usually written in the rest frame of the medium as D = ϵE and J = σE. Maxwell's equation (G), as printed in the 1865 paper, requires his e to mean minus the charge density (if his f, g, h are the components of D), whereas his equation (H) requires his e to mean plus the charge density (if his p, q, r are the components of J). John W. Arthur: 7, 8  concludes that the sign of e in (G) is wrong, and observes: 8  that this sign is corrected in Maxwell's subsequent Treatise. Arthur speculates that the sign confusion may have arisen from the analogy between momentum and the magnetic vector potential (Maxwell's "electromagnetic momentum"), in which positive mass corresponds to negative charge: 4 . Arthur: 3  also lists some corresponding equations from Maxwell's earlier paper of 1861-2, and notes that the signs do not always match the later ones. The earlier signs (1861-2) are correct if F, G, H are the components of −A while f, g, h are the components of −D. == Maxwell – electromagnetic light wave == In part VI of "A Dynamical Theory of the Electromagnetic Field", subtitled "Electromagnetic theory of light", Maxwell uses the correction to Ampère's Circuital Law made in part III of his 1862 paper, "On Physical Lines of Force", which is defined as displacement current, to derive the electromagnetic wave equation. He obtained a wave equation with a speed in close agreement to experimental determinations of the speed of light. He commented, The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws. 
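The equations labelled (A) to (H) above, and the reduced forms of equation (D) referred to in this discussion, do not survive as typeset formulas in this text. The following is a reconstruction, in the modern vector notation usually given for them and under Maxwell's assumption of linear, isotropic media; it is a sketch for orientation rather than Maxwell's original component notation, and the sign of the charge density in (G) is subject to the caveat noted above.
(A) Total currents: {\displaystyle \mathbf {J} _{\text{tot}}=\mathbf {J} +{\frac {\partial \mathbf {D} }{\partial t}}}
(B) Magnetic potential: {\displaystyle \mu \mathbf {H} =\nabla \times \mathbf {A} }
(C) Ampère's circuital law: {\displaystyle \nabla \times \mathbf {H} =\mathbf {J} _{\text{tot}}}
(D) Electromotive force: {\displaystyle \mathbf {E} =\mu \,\mathbf {v} \times \mathbf {H} -{\frac {\partial \mathbf {A} }{\partial t}}-\nabla \phi }
(E) Electric elasticity: {\displaystyle \mathbf {E} ={\frac {1}{\epsilon }}\mathbf {D} }
(F) Ohm's law: {\displaystyle \mathbf {E} ={\frac {1}{\sigma }}\mathbf {J} }
(G) Gauss's law: {\displaystyle \nabla \cdot \mathbf {D} =\rho }
(H) Continuity of charge: {\displaystyle \nabla \cdot \mathbf {J} =-{\frac {\partial \rho }{\partial t}}}
Under the reductions described above, (D) becomes, for a static magnetic field, the Lorentz force per unit charge {\displaystyle \mathbf {f} =\mathbf {E} +\mathbf {v} \times \mathbf {B} }; with no motion through the field it becomes {\displaystyle \mathbf {E} =-{\frac {\partial \mathbf {A} }{\partial t}}-\nabla \phi }, and taking curls (the curl of a gradient being zero) gives the differential form of Faraday's law, {\displaystyle \nabla \times \mathbf {E} =-{\frac {\partial \mathbf {B} }{\partial t}}}.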
Maxwell's derivation of the electromagnetic wave equation has been replaced in modern physics by a much less cumbersome method which combines the corrected version of Ampère's Circuital Law with Faraday's law of electromagnetic induction. === Modern equation methods === To obtain the electromagnetic wave equation in a vacuum using the modern method, we begin with the modern 'Heaviside' form of Maxwell's equations. Using (SI units) in a vacuum, these equations are If we take the curl of the curl equations we obtain If we note the vector identity where V {\displaystyle \mathbf {V} } is any vector function of space, we recover the wave equations where is the speed of light in free space. == Legacy and impact == Of this paper and Maxwell's related works, fellow physicist Richard Feynman said: "From the long view of this history of mankind – seen from, say, 10,000 years from now – there can be little doubt that the most significant event of the 19th century will be judged as Maxwell's discovery of the laws of electromagnetism." Albert Einstein used Maxwell's equations as the starting point for his special theory of relativity, presented in The Electrodynamics of Moving Bodies, one of Einstein's 1905 Annus Mirabilis papers. In it is stated: the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good and Any ray of light moves in the "stationary" system of co-ordinates with the determined velocity c, whether the ray be emitted by a stationary or by a moving body. Maxwell's equations can also be derived by extending general relativity into five physical dimensions. == See also == A Treatise on Electricity and Magnetism Gauge theory == References == == Further reading == Maxwell, James C.; Torrance, Thomas F. (March 1996). A Dynamical Theory of the Electromagnetic Field. Eugene, OR: Wipf and Stock. ISBN 1-57910-015-5. Niven, W. D. (1952). The Scientific Papers of James Clerk Maxwell. Vol. 1. New York: Dover. Johnson, Kevin (May 2002). "The electromagnetic field". James Clerk Maxwell – The Great Unknown. Archived from the original on September 15, 2008. Retrieved September 7, 2009. Darrigol, Olivier (2000). Electromagnetism from Ampère to Einstein. Oxford University Press. ISBN 978-0198505945 Katz, Randy H. (February 22, 1997). "'Look Ma, No Wires': Marconi and the Invention of Radio". History of Communications Infrastructures. Retrieved Sep 7, 2009.
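The vacuum derivation sketched in the 'Modern equation methods' subsection above likewise lost its typeset formulas. Written out under the stated assumptions (SI units, free space), the steps are, as a reconstruction rather than the paper's own notation:
{\displaystyle \nabla \cdot \mathbf {E} =0,\quad \nabla \cdot \mathbf {B} =0,\quad \nabla \times \mathbf {E} =-{\frac {\partial \mathbf {B} }{\partial t}},\quad \nabla \times \mathbf {B} =\mu _{0}\epsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}}
Taking the curl of the two curl equations gives {\displaystyle \nabla \times (\nabla \times \mathbf {E} )=-\mu _{0}\epsilon _{0}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}} and {\displaystyle \nabla \times (\nabla \times \mathbf {B} )=-\mu _{0}\epsilon _{0}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}}.
With the vector identity {\displaystyle \nabla \times (\nabla \times \mathbf {V} )=\nabla (\nabla \cdot \mathbf {V} )-\nabla ^{2}\mathbf {V} } and the vanishing divergences, these become the wave equations {\displaystyle \nabla ^{2}\mathbf {E} =\mu _{0}\epsilon _{0}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}} and {\displaystyle \nabla ^{2}\mathbf {B} =\mu _{0}\epsilon _{0}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}}, with propagation speed {\displaystyle c=1/{\sqrt {\mu _{0}\epsilon _{0}}}}.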
Wikipedia/A_dynamical_theory_of_the_electromagnetic_field
Cyclopædia: or, an Universal Dictionary of Arts and Sciences is a British encyclopedia prepared by Ephraim Chambers and first published in 1728. Six more editions appeared between 1728 and 1751, and there was a Supplement in 1753. The Cyclopædia was one of the first general encyclopedias produced in English. == Noteworthy features == The title page of the first edition summarizes the author’s aims: Cyclopædia: or, An Universal Dictionary of Arts and Sciences; containing the Definitions of the Terms, and Accounts of the Things ſignify'd thereby, in the several Arts, both Liberal and Mechanical, and the ſeveral Sciences, Human and Divine: the Figures, Kinds, Properties, Productions, Preparations, and Uſes, of Things Natural and Artificial; the Riſe, Progreſs, and State of Things Ecclesiastical, Civil, Military, and Commercial: with the ſeveral Syſtems, Sects, Opinions, &c. among Philoſophers, Divines, Mathematicians, Phyſicians, Antiquaries, Criticks, &c. The Whole intended as a Course of Antient and Modern Learning. The first edition included numerous cross-references meant to connect articles scattered by the use of alphabetical order, a dedication to the king, George II, and a philosophical preface at the beginning of Volume 1. Among other things, the preface gives an analysis of forty-seven divisions of knowledge, with classed lists of the articles belonging to each, intended to serve as a table of contents and also as a directory indicating the order in which the articles should be read. == Printing history == A second edition appeared in 1738 in two volumes in folio, with 2,466 pages. This edition was supposedly retouched and amended in a thousand places, with a few added articles and some enlarged articles. Chambers was prevented from doing more because booksellers were alarmed by a bill in Parliament containing a clause to oblige publishers of all improved editions of books to print their improvements separately. The bill, after passing the House of Commons, was unexpectedly thrown out by the House of Lords; but fearing that it might be revived, the booksellers thought it best to retreat, though more than twenty sheets had been printed. Five other editions were published in London from 1739 to 1751–1752. An edition was also published in Dublin in 1742; this and the London editions were all two volumes in folio. An Italian translation appearing in Venice, 1748–1749, 4to, nine volumes, was the first complete Italian encyclopaedia. When Chambers was in France in 1739, he rejected very favorable proposals to publish an edition there dedicated to Louis XV. Chambers' work was carefully done and popular. However, it had defects and omissions, as he was well aware; by his death on 15 May 1740, he had collected and arranged materials for seven new volumes. George Lewis Scott was employed by the booksellers to select articles for the press and to supply others, but he left before the job was finished. The job was then given to John Hill. The Supplement was published in London in 1753 in two folio volumes with 3307 pages and 12 plates. Hill was a botanist, and the botanical part, which had been weak in the Cyclopaedia, was the best. Abraham Rees, a nonconformist minister, published a revised and enlarged edition in 1778–1788, with the supplement and improvements incorporated. It was published in London as a folio of five volumes, 5,010 pages ( not paginated), and 159 plates. It was published in 418 numbers at 6d. each. Rees claimed to have added more than 4,400 new articles. 
At the end, he gave an index of articles, classed under 100 heads, numbering about 57,000 and filling 80 pages. The heads, with 39 cross references, were arranged alphabetically. == Precursors == Among the precursors of Chambers's Cyclopaedia was John Harris's Lexicon Technicum of 1704 (later editions from 1708 to 1744). By its title and content, it was "An Universal English Dictionary of Arts and Sciences: Explaining not only the Terms of Art, but the Arts Themselves." While Harris's work is often classified as a technical dictionary, it also took material from Newton and Halley, among others. == Successors == Chambers's Cyclopaedia in turn became the inspiration for the landmark Encyclopédie of Denis Diderot and Jean le Rond d'Alembert, which owed its inception to a proposed French translation of Chambers's work begun in 1744 by John Mills, assisted by Gottfried Sellius. The later Chambers's Encyclopaedia (1860–1868) had no connection to Ephraim Chambers's work but was the product of Robert Chambers and his brother William. == References == == Further reading == == External links == Chambers' Cyclopaedia, 1728, 2 volumes, with the 1753 supplement, 2 volumes; digitized by the University of Wisconsin Digital Collections Center. Chambers' Cyclopaedia, 1728, 2 volumes, articles are categorized. Searchable 4th edition (1741), digitized at the University of Chicago Library as part of The ARTFL Project. Cyclopaedia, or, An Universal Dictionary of Arts and Sciences: Containing an Explication of the Terms, and an Account of the Things Signified Thereby, in the Several Arts, Both Liberal and Mechanical, and the Several Sciences, Human and Divine sixth edition, 2 volumes; London: Printed for W. Innys et al., 1750
Wikipedia/Cyclopaedia,_or_an_Universal_Dictionary_of_Arts_and_Sciences
Eye surgery, also known as ophthalmic surgery or ocular surgery, is surgery performed on the eye or its adnexa. Eye surgery is part of ophthalmology and is performed by an ophthalmologist or eye surgeon. The eye is a fragile organ, and requires due care before, during, and after a surgical procedure to minimize or prevent further damage. An eye surgeon is responsible for selecting the appropriate surgical procedure for the patient, and for taking the necessary safety precautions. Mentions of eye surgery can be found in several ancient texts dating back as early as 1800 BC, with cataract treatment starting in the fifth century BC. It continues to be a widely practiced class of surgery, with various techniques having been developed for treating eye problems. == Preparation and precautions == Since the eye is heavily supplied by nerves, anesthesia is essential. Local anesthesia is most commonly used. Topical anesthesia using lidocaine topical gel is often used for quick procedures. Since topical anesthesia requires cooperation from the patient, general anesthesia is often used for children, traumatic eye injuries, or major orbitotomies, and for apprehensive patients. The physician administering anesthesia, or a nurse anesthetist or anesthetist assistant with expertise in anesthesia of the eye, monitors the patient's cardiovascular status. Sterile precautions are taken to prepare the area for surgery and lower the risk of infection. These precautions include the use of antiseptics, such as povidone-iodine, and sterile drapes, gowns, and gloves. == Laser eye surgery == Although the terms laser eye surgery and refractive surgery are commonly used as if they were interchangeable, this is not the case. Lasers may be used to treat nonrefractive conditions (e.g. to seal a retinal tear). Laser eye surgery or laser corneal surgery is a medical procedure that uses a laser to reshape the surface of the eye to correct myopia (short-sightedness), hypermetropia (long-sightedness), and astigmatism (uneven curvature of the eye's surface). Importantly, refractive surgery is not compatible with everyone, and people may find on occasion that eyewear is still needed after surgery. Recent developments also include procedures that can change eye color from brown to blue. Before proceeding with laser surgery, the eye specialist needs to certify that the patient is a suitable candidate for the surgery and there are several factors to be considered before doing laser surgery. == Cataract surgery == A cataract is an opacification or cloudiness of the eye's crystalline lens due to aging, disease, or trauma that typically prevents light from forming a clear image on the retina. If visual loss is significant, surgical removal of the lens may be warranted, with lost optical power usually replaced with a plastic intraocular lens. Owing to the high prevalence of cataracts, cataract extraction is the most common eye surgery. Rest after surgery is recommended. == Glaucoma surgery == Glaucoma is a group of diseases affecting the optic nerve that results in vision loss and is frequently characterized by raised intraocular pressure. Many types of glaucoma surgery exist, and variations or combinations of those types can facilitate the escape of excess aqueous humor from the eye to lower intraocular pressure, and a few that lower it by decreasing the production of aqueous humor. 
=== Canaloplasty === Canaloplasty is an advanced, nonpenetrating procedure designed to enhance drainage through the eye's natural drainage system to provide sustained reduction of intraocular pressure. Canaloplasty uses microcatheter technology in a simple and minimally invasive procedure. To perform a canaloplasty, an ophthalmologist creates a tiny incision to gain access to a canal in the eye. A microcatheter circumnavigates the canal around the iris, enlarging the main drainage channel and its smaller collector channels through the injection of a sterile, gel-like material called viscoelastic. The catheter is then removed and a suture is placed within the canal and tightened. By opening up the canal, the pressure inside the eye can be reduced. == Refractive surgery == Refractive surgery aims to correct errors of refraction in the eye, reducing or eliminating the need for corrective lenses. Keratomileusis is a method of reshaping the corneal surface to change its optical power. A disc of the cornea is shaved off, quickly frozen, lathe-ground, then returned to its original power. Automated lamellar keratoplasty Laser-assisted in situ keratomileusis (LASIK) Laser assisted subepithelial keratomileusis (LASEK), a.k.a. Epi-LASIK Photorefractive keratectomy Laser thermal keratoplasty Conductive keratoplasty uses radio-frequency waves to shrink corneal collagen. It is used to treat mild to moderate hyperopia. Limbal relaxing incisions can correct minor astigmatism Astigmatic keratotomy, arcuate keratotomy, or transverse keratotomy Radial keratotomy Hexagonal keratotomy Epikeratophakia is the removal of the corneal epithelium and replacement with a lathe-cut corneal button. Intracorneal rings or corneal ring segments Implantable contact lenses Presbyopia reversal Anterior ciliary sclerotomy Scleral reinforcement surgery for the mitigation of degenerative myopia Small incision lenticule extraction == Corneal surgery == Corneal surgery includes most refractive surgery, as well as: Corneal transplant surgery is used to remove a cloudy/diseased cornea and replace it with a clear donor cornea. Penetrating keratoplasty Keratoprosthesis Phototherapeutic keratectomy Pterygium excision Corneal tattooing Osteo-odonto-keratoprosthesis is surgery in which support for an artificial cornea is created from a tooth and its surrounding jawbone. This is a still-experimental procedure used for patients with severely damaged eyes, generally from burns. Eye color-change surgery through an iris implant, known as Brightocular, or the stripping away the top layer of eye pigment, known as the stroma procedure == Vitreoretinal surgery == Vitreoretinal surgery includes: Vitrectomy Anterior vitrectomy is the removal of the front portion of vitreous tissue. It is used for preventing or treating vitreous loss during cataract or corneal surgery, or to remove misplaced vitreous in conditions such as aphakia pupillary block glaucoma. Pars plana vitrectomy or trans pars plana vitrectomy is a procedure to remove vitreous opacities and membranes through a pars plana incision. It is frequently combined with other intraocular procedures for the treatment of giant retinal tears, tractional retinal detachments, and posterior vitreous detachments. Pan retinal photocoagulation is a type of photocoagulation therapy used in the treatment of diabetic retinopathy. Retinal detachment repair Ignipuncture is an obsolete procedure that involves cauterization of the retina with a very hot, pointed instrument. 
A scleral buckle is used in the repair of a retinal detachment to indent or "buckle" the sclera inward, usually by sewing a piece of preserved sclera or silicone rubber to its surface. Laser photocoagulation, or photocoagulation therapy, is the use of a laser to seal a retinal tear. Pneumatic retinopexy Retinal cryopexy, or retinal cryotherapy, is a procedure that uses intense cold to induce a chorioretinal scar and to destroy retinal or choroidal tissue. Macular hole repair Partial lamellar sclerouvectomy Partial lamellar sclerocyclochoroidectomy Partial lamellar sclerochoroidectomy Posterior sclerotomy is an opening made into the vitreous through the sclera, as for a detached retina or the removal of a foreign body. Radial optic neurotomy Macular translocation surgery through 360° retinotomy through scleral imbrication technique == Eye muscle surgery == With about 1.2 million procedures each year, extraocular muscle surgery is the third-most common eye surgery in the United States. Eye muscle surgery typically corrects strabismus and includes: Loosening or weakening procedures Recession involves moving the insertion of a muscle posteriorly towards its origin. Myectomy Myotomy Tenectomy Tenotomy Tightening or strengthening procedures Resection Tucking Advancement is the movement of an eye muscle from its original place of attachment on the eyeball to a more forward position. Transposition or repositioning procedures Adjustable suture surgery is a method of reattaching an extraocular muscle by means of a stitch that can be shortened or lengthened within the first postoperative day, to obtain better ocular alignment. == Oculoplastic surgery == Oculoplastic surgery, or oculoplastics, is the subspecialty of ophthalmology that deals with the reconstruction of the eye and associated structures. Oculoplastic surgeons perform procedures such as the repair of droopy eyelids (blepharoplasty), repair of tear duct obstructions, orbital fracture repairs, removal of tumors in and around the eyes, and facial rejuvenation procedures including laser skin resurfacing, eye lifts, brow lifts, and even facelifts. Common procedures are: === Eyelid surgery === Blepharoplasty (eyelift) is plastic surgery of the eyelids to remove excessive skin or subcutaneous fat. East Asian blepharoplasty, also known as double eyelid surgery, is used to create a double eyelid crease for patients who have a single crease (monolid). Ptosis repair for droopy eyelid Ectropion repair Entropion repair Canthal resection A canthectomy is the surgical removal of tissue at the junction of the upper and lower eyelids. Cantholysis is the surgical division of the canthus. Canthopexy A canthoplasty is plastic surgery at the canthus. A canthorrhaphy is suturing of the outer canthus to shorten the palpebral fissure. A canthotomy is the surgical division of the canthus, usually the outer canthus. A lateral canthotomy is the surgical division of the outer canthus. Epicanthoplasty Tarsorrhaphy is a procedure in which the eyelids are partially sewn together to narrow the opening (i.e. palpebral fissure). === Orbital surgery === Orbital reconstruction or ocular prosthetics (false eyes) Orbital decompression is used for Graves' disease, a condition (often associated with overactive thyroid problems) in which the eye muscles swell. Because the eye socket is bone, the swelling cannot be accommodated and as a result, the eye is pushed forward into a protruded position.
In some patients, this is very pronounced. Orbital decompression involves removing some bone from the eye socket to open up one or more sinuses and so make space for the swollen tissue, allowing the eye to move back into a normal position. === Other oculoplastic surgery === Botox injections Ultrapeel microdermabrasion Endoscopic forehead and browlift Face lift (rhytidectomy) Liposuction of the face and neck Browplasty == Surgery involving the lacrimal apparatus == A dacryocystorhinostomy or dacryocystorhinotomy is a procedure to restore the flow of tears into the nose from the lacrimal sac when the nasolacrimal duct does not function. Canaliculodacryocystostomy is a surgical correction for a congenitally blocked tear duct in which the closed segment is excised and the open end is joined to the lacrimal sac. Canaliculotomy involves slitting of the lacrimal punctum and canaliculus for the relief of epiphora. A dacryoadenectomy is the surgical removal of a lacrimal gland. A dacryocystectomy is the surgical removal of a part of the lacrimal sac. A dacryocystostomy is an incision into the lacrimal sac, usually to promote drainage. A dacryocystotomy is an incision into the lacrimal sac. == Eye removal == An enucleation is the removal of the eye, leaving the eye muscles and remaining orbital contents intact. An evisceration is the removal of the eye's contents, leaving the scleral shell intact. It is usually performed to reduce pain in a blind eye. An exenteration is the removal of the entire orbital contents, including the eye, extraocular muscles, fat, and connective tissues; usually for malignant orbital tumors. == Other surgery == Many of these described procedures are historical and are not recommended due to a risk of complications. In particular, these include operations done on the ciliary body in an attempt to control glaucoma, since much safer surgeries for glaucoma, including lasers, nonpenetrating surgery, guarded filtration surgery, and seton valve implants, have been developed. A ciliarotomy is a surgical division of the ciliary zone in the treatment of glaucoma. A ciliectomy is the surgical removal of part of the ciliary body, or the surgical removal of part of a margin of an eyelid containing the roots of the eyelashes. A ciliotomy is a surgical section of the ciliary nerves. A conjunctivoantrostomy is an opening made from the inferior conjunctival cul-de-sac into the maxillary sinus for the treatment of epiphora. Conjunctivoplasty is plastic surgery of the conjunctiva. A conjunctivorhinostomy is a surgical correction of the total obstruction of a lacrimal canaliculus by which the conjunctiva is anastomosed with the nasal cavity to improve tear flow. A corectomedialysis, or coretomedialysis, is an excision of a small portion of the iris at its junction with the ciliary body to form an artificial pupil. A corectomy, or coretomy, is any surgical cutting operation on the iris at the pupil. A corelysis is a surgical detachment of adhesions of the iris to the capsule of the crystalline lens or cornea. A coremorphosis is the surgical formation of an artificial pupil. A coreplasty, or coreoplasty, is plastic surgery of the iris, usually for the formation of an artificial pupil. A coreoplasy, or laser pupillomydriasis, is any procedure that changes the size or shape of the pupil. A cyclectomy is an excision of a portion of the ciliary body. A cyclotomy, or cyclicotomy, is a surgical incision of the ciliary body, usually for the relief of glaucoma.
A cycloanemization is a surgical obliteration of the long ciliary arteries in the treatment of glaucoma. An iridectomesodialysis is the formation of an artificial pupil by detaching and excising a portion of the iris at its periphery. An iridodialysis, sometimes known as a coredialysis, is a localized separation or tearing away of the iris from its attachment to the ciliary body. An iridencleisis, or corenclisis, is a surgical procedure for glaucoma in which a portion of the iris is incised and incarcerated in a limbal incision. (Subdivided into basal iridencleisis and total iridencleisis.) An iridesis is a surgical procedure in which a portion of the iris is brought through and incarcerated in a corneal incision in order to reposition the pupil. An iridocorneosclerectomy is the surgical removal of a portion of the iris, the cornea, and the sclera. An iridocyclectomy is the surgical removal of the iris and the ciliary body. An iridocystectomy is the surgical removal of a portion of the iris to form an artificial pupil. An iridosclerectomy is the surgical removal of a portion of the sclera and a portion of the iris in the region of the limbus for the treatment of glaucoma. An iridosclerotomy is the surgical puncture of the sclera and the margin of the iris for the treatment of glaucoma. A rhinommectomy is the surgical removal of a portion of the internal canthus. A trepanotrabeculectomy is used in the treatment of chronic open- and chronic closed-angle glaucoma. == References ==
Wikipedia/Laser_eye_surgery
Photographic plates preceded film as the primary medium for capturing images in photography. These plates, made of metal or glass and coated with a light-sensitive emulsion, were integral to early photographic processes such as heliography, daguerreotypes, and photogravure. Glass plates, thinner than standard window glass, became widely used in the late 19th century for their clarity and reliability. Although largely replaced by film during the 20th century, plates continued to be used for specialised scientific and medical purposes until the late 20th century. == History == Glass plates were far superior to film for research-quality imaging because they were stable and less likely to bend or distort, especially in large-format frames for wide-field imaging. Early plates used the wet collodion process. The wet plate process was replaced late in the 19th century by gelatin dry plates. A view camera nicknamed "The Mammoth" weighing 1,400 pounds (640 kg) was built by George R. Lawrence in 1899, specifically to photograph "The Alton Limited" train owned by the Chicago & Alton Railway. It took photographs on glass plates measuring 8 feet (2.4 m) × 4.5 feet (1.4 m). Glass plate photographic material largely faded from the consumer market in the early years of the 20th century, as more convenient and less fragile films were increasingly adopted. However, photographic plates were reportedly still being used by one photography business in London until the 1970s, and by one in Bradford called the Belle Vue Studio that closed in 1975. They were in wide use for professional astrophotography as late as the 1990s. Workshops on the use of glass plate photography as an alternative medium or for artistic use are still being conducted in the early 21st century. == Scientific uses == === Astronomy === Many famous astronomical surveys were taken using photographic plates, including the first Palomar Observatory Sky Survey (POSS) of the 1950s, the follow-up POSS-II survey of the 1990s, and the UK Schmidt Telescope survey of southern declinations. A number of observatories, including Harvard College and Sonneberg, maintain large archives of photographic plates, which are used primarily for historical research on variable stars. Many solar system objects were discovered by using photographic plates, superseding earlier visual methods. Discovery of minor planets using photographic plates was pioneered by Max Wolf beginning with his discovery of 323 Brucia in 1891. The first natural satellite discovered using photographic plates was Phoebe in 1898. Pluto was discovered using photographic plates in a blink comparator; its moon Charon was discovered 48 years later in 1978 by U.S. Naval Observatory astronomer James W. Christy by carefully examining a bulge in Pluto's image on a photographic plate. Glass-backed plates, rather than film, were generally used in astronomy because they do not shrink or deform noticeably in the development process or under environmental changes. Several important applications of astrophotography, including astronomical spectroscopy and astrometry, continued using plates until digital imaging improved to the point where it could outmatch photographic results. Kodak and other manufacturers discontinued production of most kinds of plates as the market for them dwindled between 1980 and 2000, terminating most remaining astronomical use, including for sky surveys. 
=== Physics === Photographic plates were also an important tool in early high-energy physics, as they are blackened by ionizing radiation. Ernest Rutherford was one of the first to study the absorption, in various materials, of the rays produced in radioactive decay, by using photographic plates to measure the intensity of the rays. Development of particle detection optimised nuclear emulsions in the 1930s and 1940s, first in physics laboratories, then by commercial manufacturers, enabled the discovery and measurement of both the pi-meson and K-meson, in 1947 and 1949, initiating a flood of new particle discoveries in the second half of the 20th century. === Electron microscopy === Photographic emulsions were originally coated on thin glass plates for imaging with electron microscopes, which provided a more rigid, stable and flatter plane compared to plastic films. Beginning in the 1970s, high-contrast, fine grain emulsions coated on thicker plastic films manufactured by Kodak, Ilford and DuPont replaced glass plates. These films have largely been replaced by digital imaging technologies. == Medical imaging == The sensitivity of certain types of photographic plates to ionizing radiation (usually X-rays) is also useful in medical imaging and material science applications, although they have been largely replaced with reusable and computer readable image plate detectors and other types of X-ray detectors. == Decline == The earliest flexible films of the late 1880s were sold for amateur use in medium-format cameras. The plastic was not of very high optical quality and tended to curl and otherwise not provide as desirably flat a support surface as a sheet of glass. Initially, a transparent plastic base was more expensive to produce than glass. Quality was eventually improved, manufacturing costs came down, and most amateurs gladly abandoned plates for films. After large-format high quality cut films for professional photographers were introduced in the late 1910s, the use of plates for ordinary photography of any kind became increasingly rare. The persistent use of plates in astronomical and other scientific applications started to decline in the early 1980s as they were gradually replaced by charge-coupled devices (CCDs), which also provide outstanding dimensional stability. CCD cameras have several advantages over glass plates, including high efficiency, linear light response, and simplified image acquisition and processing. However, even the largest CCD formats (e.g., 8192 × 8192 pixels) still do not have the detecting area and resolution of most photographic plates, which has forced modern survey cameras to use large CCD arrays to obtain the same coverage. The manufacture of photographic plates has been discontinued by Kodak, Agfa and other widely known traditional makers. Eastern European sources have subsequently catered to the minimal remaining demand, practically all of it for use in holography, which requires a recording medium with a large surface area and a submicroscopic level of resolution that currently (2014) available electronic image sensors cannot provide. In the realm of traditional photography, a small number of historical process enthusiasts make their own wet or dry plates from raw materials and use them in vintage large-format cameras. == Preservation == Several institutions have established archives to preserve photographic plates and prevent their valuable historical information from being lost. The emulsion on the plate can deteriorate. 
In addition, the glass plate medium is fragile and prone to cracking if not stored correctly. === Historical archives === The United States Library of Congress has a large collection of both wet and dry plate photographic negatives, dating from 1855 through 1900, over 7,500 of which have been digitized from the period 1861 to 1865. The George Eastman Museum holds an extensive collection of photographic plates. In 1955, wet plate negatives measuring 4 feet 6 inches (1.37 m) × 3 feet 2 inches (0.97 m) were reported to have been discovered in 1951 as part of the Holtermann Collection. These purportedly were the largest glass negatives discovered at that time. These images were taken in 1875 by Charles Bayliss and formed the "Shore Tower" panorama of Sydney Harbour. Albumen contact prints made from these negatives are in the holdings of the Holtermann Collection, the negatives are listed among the current holdings of the Collection. === Scientific archives === Preservation of photographic plates is a particular need in astronomy, where changes often occur slowly and the plates represent irreplaceable records of the sky and astronomical objects that extend back over 100 years. The method of digitization of astronomical plates enables free and easy access to those unique astronomical data and it is one of the most popular approaches to preserve them. This approach was applied at the Baldone Astrophysical Observatory where about 22,000 glass and film plates of the Schmidt Telescope were scanned and cataloged. Another astronomical plate archive is the Astronomical Photographic Data Archive (APDA) at the Pisgah Astronomical Research Institute (PARI). APDA was created in response to recommendations of a group of international scientists who gathered in 2007 to discuss how to best preserve astronomical plates (see the Osborn and Robbins reference listed under Further reading). The discussions revealed that some observatories no longer could maintain their plate collections and needed a place to archive them. APDA is dedicated to housing and cataloging unwanted plates, with the goal to eventually catalog the plates and create a database of images that can be accessed via the Internet by the global community of scientists, researchers, and students. APDA now has a collection of more than 404,000 photographic images from over 40 observatories that are housed in a secure building with environmental control. The facility possesses several plate scanners, including two high-precision ones, GAMMA I and GAMMA II, built for NASA and the Space Telescope Science Institute (STScI) and used by a team under the leadership of the late Barry Lasker to develop the Guide Star Catalog and Digitized Sky Survey that are used to guide and direct the Hubble Space Telescope. APDA's networked storage system can store and analyze more than 100 terabytes of data. A historical collection of photographic plates from Mt. Wilson Observatory is available at the Carnegie Observatories. Metadata is available via a searchable database, while a portion of the plates has been digitized. == See also == Camera Film base Photographic film Mammoth plate == References == == Further reading == Peter Kroll, Constanze La Dous, Hans-Jürgen Bräuer: "Treasure Hunting in Astronomical Plate Archives." (Proceedings of the international Workshop held at Sonneberg Observatory, March 4 to 6, 1999.) 
Verlag Harri Deutsch, Frankfurt am Main (1999), ISBN 3-8171-1599-7 Wayne Osborn, Lee Robbins: "Preserving Astronomy's Photographic Legacy: Current State and the Future of North American Astronomical Plates." Astronomical Society of the Pacific Conference Series, Vol. 410 (2009), ISBN 978-1-58381-700-1 Pisgah Astronomical Research Institute (PARI) Astronomical Photographic Data Archive (APDA) https://www.pari.edu/research/adpa/ == External links == Carnegie Observatories The Sonneberg Plates Archiv (Sonneberg Observatory) The Harvard College Observatory Plate Stacks APDA @ PARI Pisgah Astronomical Research Institute Astronomical Photographic Data Archive (PARI APDA) (Archive from Aug 2012) Capturing Oregon's Frontier Documentary produced by Oregon Public Broadcasting Video demonstration of collodion wet plate preparation and photographic image creation Course on field wet-plate photography Information on creation of wet-plate photographs
Wikipedia/Photographic_plates
In physics, ray tracing is a method for calculating the path of waves or particles through a system with regions of varying propagation velocity, absorption characteristics, and reflecting surfaces. Under these circumstances, wavefronts may bend, change direction, or reflect off surfaces, complicating analysis. Historically, ray tracing involved analytic solutions for the ray trajectories. In modern applied physics and engineering physics, the term also encompasses numerical solutions to the Eikonal equation. For example, ray-marching involves repeatedly advancing idealized narrow beams called rays through the medium by discrete amounts. Simple problems can be analyzed by propagating a few rays using simple mathematics. More detailed analysis can be performed by using a computer to propagate many rays. When applied to problems of electromagnetic radiation, ray tracing often relies on approximate solutions to Maxwell's equations, such as geometric optics, which are valid as long as the light waves propagate through and around objects whose dimensions are much greater than the light's wavelength. Ray theory can describe interference by accumulating the phase during ray tracing (e.g., complex-valued Fresnel coefficients and Jones calculus). It can also be extended to describe edge diffraction, with modifications such as the geometric theory of diffraction, which enables tracing diffracted rays. More complicated phenomena require methods such as physical optics or wave theory. == Technique == Ray tracing works by assuming that the particle or wave can be modeled as a large number of very narrow beams (rays), and that there exists some distance, possibly very small, over which such a ray is locally straight. The ray tracer will advance the ray over this distance, and then use a local derivative of the medium to calculate the ray's new direction. From this location, a new ray is sent out and the process is repeated until a complete path is generated. If the simulation includes solid objects, the ray may be tested for intersection with them at each step, making adjustments to the ray's direction if a collision is found. Other properties of the ray may be altered as the simulation advances as well, such as intensity, wavelength, or polarization. This process is repeated with as many rays as are necessary to understand the behavior of the system. == Uses == === Astronomy === Ray tracing is being increasingly used in astronomy to simulate realistic images of the sky. Unlike conventional simulations, ray tracing does not use the expected or calculated point spread function (PSF) of a telescope; instead, it traces the journey of each photon from its entry into the upper atmosphere to its collision with the detector. Most of the dispersion and distortion, arising mainly from the atmosphere, the optics and the detector, are taken into account. While this method of simulating images is inherently slow, advances in CPU and GPU capabilities have somewhat mitigated this problem. It can also be used in designing telescopes. Notable examples include the Large Synoptic Survey Telescope, where this kind of ray tracing was first used with PhoSim to create simulated images. === Radio signals === One particular form of ray tracing is radio signal ray tracing, which traces radio signals, modeled as rays, through the ionosphere, where they are refracted and/or reflected back to the Earth.
This form of ray tracing involves the integration of differential equations that describe the propagation of electromagnetic waves through dispersive and anisotropic media such as the ionosphere. An example of physics-based radio signal ray tracing is shown to the right. Radio communicators use ray tracing to help determine the precise behavior of radio signals as they propagate through the ionosphere. The image at the right illustrates the complexity of the situation. Unlike optical ray tracing, where the medium between objects typically has a constant refractive index, signal ray tracing must deal with the complexities of a spatially varying refractive index, where changes in ionospheric electron densities influence the refractive index and hence the ray trajectories. Two sets of signals are broadcast at two different elevation angles. When the main signal penetrates into the ionosphere, the magnetic field splits the signal into two component waves which are separately ray traced through the ionosphere. The ordinary wave (red) component follows a path completely independent of the extraordinary wave (green) component. === Ocean acoustics === Sound velocity in the ocean varies with depth due to changes in density and temperature, reaching a local minimum near a depth of 800–1000 meters. This local minimum, called the SOFAR channel, acts as a waveguide, as sound tends to bend towards it. Ray tracing may be used to calculate the path of sound through the ocean up to very large distances, incorporating the effects of the SOFAR channel, as well as reflections and refractions off the ocean surface and bottom. From this, locations of high and low signal intensity may be computed, which are useful in the fields of ocean acoustics, underwater acoustic communication, and acoustic thermometry. === Optical design === Ray tracing may be used in the design of lenses and optical systems, such as in cameras, microscopes, telescopes, and binoculars, and its application in this field dates back to the 1900s. Geometric ray tracing is used to describe the propagation of light rays through a lens system or optical instrument, allowing the image-forming properties of the system to be modeled. The following effects can be integrated into a ray tracer in a straightforward fashion: dispersion (which leads to chromatic aberration); polarization; crystal optics; the Fresnel equations; laser light effects; and thin-film interference (optical coatings, soap bubbles), which can be used to calculate the reflectivity of a surface. For the application of lens design, two special cases of wave interference are important to account for. In a focal point, rays from a point light source meet again and may constructively or destructively interfere with each other. Within a very small region near this point, incoming light may be approximated by plane waves which inherit their direction from the rays. The optical path length from the light source is used to compute the phase. The derivative of the ray's position in the focal region with respect to the source position is used to obtain the width of the ray, and from that the amplitude of the plane wave. The result is the point spread function, whose Fourier transform is the optical transfer function. From this, the Strehl ratio can also be calculated. The other special case to consider is that of the interference of wavefronts, which are approximated as planes. However, when the rays come close together or even cross, the wavefront approximation breaks down.
Interference of spherical waves is usually not combined with ray tracing, so diffraction at an aperture cannot be calculated. However, these limitations can be resolved by an advanced modeling technique called Field Tracing. Field Tracing combines geometric optics with physical optics, making it possible to overcome the limitations posed by interference and diffraction in optical design. Ray tracing techniques are used to optimize the design of an instrument by minimizing aberrations, whether for photography, for longer-wavelength applications such as microwave or even radio systems, or for shorter wavelengths such as ultraviolet and X-ray optics. Before the advent of the computer, ray tracing calculations were performed by hand using trigonometry and logarithmic tables. The optical formulas of many classic photographic lenses were optimized by roomfuls of people, each of whom handled a small part of the large calculation. Now they are worked out in optical design software. A simple version of ray tracing known as ray transfer matrix analysis is often used in the design of optical resonators used in lasers. The basic principles of the most frequently used algorithm can be found in Spencer and Murty's fundamental paper: "General ray tracing Procedure". ==== Focal-plane ray tracing ==== Focal-plane ray tracing is a technique in which the direction of a ray after a lens is determined from the lens focal plane and the point at which the ray crosses that plane. It utilizes the fact that rays from a point on the front focal plane of a positive lens will be parallel right after the lens, and that rays directed toward a point on the back (rear) focal plane of a negative lens will also be parallel after the lens. In each case, the direction of the parallel rays after the lens is determined by a ray appearing to cross the lens nodal points (or the lens center for a thin lens). === Seismology === In seismology, geophysicists use ray tracing to aid in earthquake location and tomographic reconstruction of the Earth's interior. Seismic wave velocity varies within and beneath Earth's crust, causing these waves to bend and reflect. Ray tracing may be used to compute paths through a geophysical model, following them back to their source, such as an earthquake, or deducing the properties of the intervening material. In particular, the discovery of the seismic shadow zone (illustrated at right) allowed scientists to deduce the presence of Earth's molten core. === General relativity === In general relativity, where gravitational lensing can occur, the geodesics of the light rays arriving at the observer are integrated backwards in time until they hit the region of interest. Image synthesis under this technique can be viewed as an extension of the usual ray tracing in computer graphics. An example of such synthesis is found in the 2014 film Interstellar. === Laser Plasma Interactions === In laser-plasma physics, ray tracing can be used to simplify the calculations of laser propagation inside a plasma. Analytic solutions for ray trajectories in simple plasma density profiles are well established; however, researchers in laser-plasma physics often rely on ray-marching techniques because of the complexity of the plasma density, temperature, and flow profiles, which are often obtained from computational fluid dynamics simulations.
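All of the applications above rest on the stepwise procedure described in the Technique section: advance the ray by a short, locally straight segment, then update its direction from the local properties of the medium. The following minimal sketch (Python; the Gaussian index profile, step sizes and function names are illustrative assumptions, not taken from any particular code) marches a ray through a two-dimensional medium whose refractive index varies with depth, the same scheme that makes sound rays bend towards the SOFAR channel discussed above.

```python
import numpy as np

def grad_n(pos, n_func, eps=1e-6):
    """Central-difference gradient of the refractive-index field at 'pos'."""
    g = np.zeros(2)
    for i in range(2):
        dp = np.zeros(2)
        dp[i] = eps
        g[i] = (n_func(pos + dp) - n_func(pos - dp)) / (2 * eps)
    return g

def trace_ray(pos, direction, n_func, ds=0.01, n_steps=5000):
    """March a ray through a medium with spatially varying refractive index n(x, y).

    A simple first-order scheme for the geometric-optics ray equation
    d/ds (n dr/ds) = grad n: advance along a locally straight segment,
    nudge the direction by the local index gradient, then renormalize.
    """
    direction = direction / np.linalg.norm(direction)
    path = [pos.copy()]
    for _ in range(n_steps):
        pos = pos + ds * direction                          # locally straight step
        direction = direction + ds * grad_n(pos, n_func) / n_func(pos)
        direction = direction / np.linalg.norm(direction)   # keep a unit tangent vector
        path.append(pos.copy())
    return np.array(path)

# Toy SOFAR-like channel: the sound speed has a minimum at y = 0, so the
# acoustic "refractive index" (inversely proportional to the speed) peaks there.
n_profile = lambda p: 1.0 + 0.05 * np.exp(-(p[1] / 0.5) ** 2)

path = trace_ray(np.array([0.0, 0.3]), np.array([1.0, 0.0]), n_profile)
print(path[-1])   # the traced ray remains close to the channel axis y = 0
```

Because the assumed index peaks on the channel axis, the traced ray oscillates about y = 0 instead of escaping, the waveguiding behaviour described for the SOFAR channel; swapping n_profile for an ionospheric or seismic velocity model changes only the medium, not the marching loop.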
== See also == Atmospheric optics ray-tracing codes Atmospheric refraction Gradient-index optics List of ray tracing software Ocean acoustic tomography Ray tracing (graphics) Ray transfer matrix analysis == References ==
Wikipedia/Ray_tracing_(physics)
The transmission-line matrix (TLM) method is a space and time discretising method for computation of electromagnetic fields. It is based on the analogy between the electromagnetic field and a mesh of transmission lines. The TLM method allows the computation of complex three-dimensional electromagnetic structures and has proven to be one of the most powerful time-domain methods along with the finite difference time domain (FDTD) method. The TLM was first explored by British electrical engineer Raymond Beurle while working at English Electric Valve Company in Chelmsford. After he had been appointed professor of electrical engineering at the University of Nottingham in 1963 he jointly authored an article, "Numerical solution of 2-dimensional scattering problems using a transmission-line matrix", with Peter B. Johns in 1971. == Basic principle == The TLM method is based on Huygens' model of wave propagation and scattering and the analogy between field propagation and transmission lines. Therefore, it considers the computational domain as a mesh of transmission lines, interconnected at nodes. In the figure on the right is considered a simple example of a 2D TLM mesh with a voltage pulse of amplitude 1 V incident on the central node. This pulse will be partially reflected and transmitted according to the transmission-line theory. If we assume that each line has a characteristic impedance Z {\displaystyle Z} , then the incident pulse sees effectively three transmission lines in parallel with a total impedance of Z / 3 {\displaystyle Z/3} . The reflection coefficient and the transmission coefficient are given by R = Z / 3 − Z Z / 3 + Z = − 0.5 {\displaystyle R={\frac {Z/3-Z}{Z/3+Z}}=-0.5} T = 2 ( Z / 3 ) Z / 3 + Z = 0.5 {\displaystyle T={\frac {2(Z/3)}{Z/3+Z}}=0.5} The energy injected into the node by the incident pulse and the total energy of the scattered pulses are correspondingly E I = v i Δ t = 1 ( 1 / Z ) Δ t = Δ t / Z {\displaystyle E_{I}=vi\,\Delta t=1\left(1/Z\right)\Delta t=\Delta t/Z} E S = [ 0.5 2 + 0.5 2 + 0.5 2 + ( − 0.5 ) 2 ] ( Δ t / Z ) = Δ t / Z {\displaystyle E_{S}=\left[0.5^{2}+0.5^{2}+0.5^{2}+(-0.5)^{2}\right](\Delta t/Z)=\Delta t/Z} Therefore, the energy conservation law is fulfilled by the model. The next scattering event excites the neighbouring nodes according to the principle described above. It can be seen that every node turns into a secondary source of spherical wave. These waves combine to form the overall waveform. This is in accordance with Huygens principle of light propagation. In order to show the TLM schema we will use time and space discretisation. The time-step will be denoted with Δ t {\displaystyle \Delta t} and the space discretisation intervals with Δ x {\displaystyle \Delta x} , Δ y {\displaystyle \Delta y} and Δ z {\displaystyle \Delta z} . The absolute time and space will therefore be t = k Δ t {\displaystyle t=k\,\Delta t} , x = l Δ x {\displaystyle x=l\,\Delta x} , y = m Δ y {\displaystyle y=m\,\Delta y} , z = n Δ z {\displaystyle z=n\,\Delta z} , where k = 0 , 1 , 2 , … {\displaystyle k=0,1,2,\ldots } is the time instant and m , n , l {\displaystyle m,n,l} are the cell coordinates. In case Δ x = Δ y = Δ z {\displaystyle \Delta x=\Delta y=\Delta z} the value Δ l {\displaystyle \Delta l} will be used, which is the lattice constant. In this case the following holds: Δ t = Δ l c 0 , {\displaystyle \Delta t={\frac {\Delta l}{c_{0}}},} where c 0 {\displaystyle c_{0}} is the free space speed of light. 
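The numbers in this example are easy to reproduce. The short Python sketch below (purely illustrative, not part of any TLM package) evaluates the reflection and transmission coefficients seen by a pulse incident on a node where three lines of impedance Z are connected in parallel, checks that the scattered energy equals the injected energy, and verifies that the 4×4 scattering matrix of the 2D series node quoted in the next section is unitary, which is the matrix form of the same energy-conservation statement.

```python
import numpy as np

Z = 1.0                      # characteristic impedance of each line (arbitrary units)
Z_node = Z / 3.0             # three lines in parallel, as seen by the incident pulse

R = (Z_node - Z) / (Z_node + Z)    # reflection coefficient   -> -0.5
T = 2.0 * Z_node / (Z_node + Z)    # transmission coefficient ->  0.5
print(R, T)

# Energy balance relative to the injected energy (Delta t / Z) of a unit pulse:
# one reflected pulse of amplitude R plus three transmitted pulses of amplitude T.
E_scattered = R**2 + 3 * T**2
print(E_scattered)                 # 1.0 -> energy is conserved at the node

# Scattering matrix of the 2D series node (given in the next section); S S^T = I
# expresses the same conservation law for arbitrary combinations of incident pulses.
S = 0.5 * np.array([[ 1,  1,  1, -1],
                    [ 1,  1, -1,  1],
                    [ 1, -1,  1,  1],
                    [-1,  1,  1,  1]])
print(np.allclose(S @ S.T, np.eye(4)))   # True
```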
== The 2D TLM node == === The scattering matrix of an 2D TLM node === If we consider an electromagnetic field distribution in which the only non-zero components are E x {\displaystyle E_{x}} , E y {\displaystyle E_{y}} and H z {\displaystyle H_{z}} (i.e. a TE-mode distribution), then Maxwell's equations in Cartesian coordinates reduce to ∂ H z ∂ y = ε ∂ E x ∂ t {\displaystyle {\frac {\partial {H_{z}}}{\partial {y}}}=\varepsilon {\frac {\partial {E_{x}}}{\partial {t}}}} − ∂ H z ∂ x = ε ∂ E y ∂ t {\displaystyle -{\frac {\partial {H_{z}}}{\partial {x}}}=\varepsilon {\frac {\partial {E_{y}}}{\partial {t}}}} ∂ E y ∂ x − ∂ E x ∂ y = − μ ∂ H z ∂ t {\displaystyle {\frac {\partial {E_{y}}}{\partial {x}}}-{\frac {\partial {E_{x}}}{\partial {y}}}=-\mu {\frac {\partial {H_{z}}}{\partial {t}}}} We can combine these equations to obtain ∂ 2 H z ∂ x 2 + ∂ 2 H z ∂ y 2 = μ ε ∂ 2 H z ∂ t 2 {\displaystyle {\frac {\partial ^{2}H_{z}}{\partial {x}^{2}}}+{\frac {\partial ^{2}{H_{z}}}{\partial {y}^{2}}}=\mu \varepsilon {\frac {\partial ^{2}{H_{z}}}{\partial {t}^{2}}}} The figure on the right presents a structure referred to as a series node. It describes a block of space dimensions Δ x {\displaystyle \Delta x} , Δ y {\displaystyle \Delta y} and Δ z {\displaystyle \Delta z} that consists of four ports. L ′ {\displaystyle L'} and C ′ {\displaystyle C'} are the distributed inductance and capacitance of the transmission lines. It is possible to show that a series node is equivalent to a TE-wave, more precisely the mesh current I, the x-direction voltages (ports 1 and 3) and the y-direction voltages (ports 2 and 4) may be related to the field components H z {\displaystyle H_{z}} , E x {\displaystyle E_{x}} and E y {\displaystyle E_{y}} . If the voltages on the ports are considered, L x = L y {\displaystyle L_{x}=L_{y}} , and the polarity from the above figure holds, then the following is valid − V 1 + V 2 + V 3 − V 4 = 2 L ′ Δ l ∂ I ∂ t {\displaystyle -V_{1}+V_{2}+V_{3}-V_{4}=2L'\,\Delta l{\frac {\partial {I}}{\partial {t}}}} where Δ x = Δ y = Δ l {\displaystyle \Delta x=\Delta y=\Delta l} . ( V 3 − V 1 ) − ( V 4 − V 2 ) = 2 L ′ Δ l ∂ I ∂ t {\displaystyle \left(V_{3}-V_{1}\right)-\left(V_{4}-V_{2}\right)=2L'\,\Delta l{\frac {\partial I}{\partial t}}} [ E x ( y + Δ y ) − E x ( y ) ] Δ x − [ E y ( x + Δ x ) − E y ( x ) ] Δ y = 2 L ′ Δ l ∂ I ∂ t {\displaystyle \left[E_{x}(y+\Delta y)-E_{x}(y)\right]\,\Delta x-[E_{y}(x+\Delta x)-E_{y}(x)]\Delta y=2L'\,\Delta l{\frac {\partial {I}}{\partial {t}}}} and dividing both sides by Δ x Δ y {\displaystyle \Delta x\Delta y} E x ( y + Δ y ) − E x ( y ) Δ y − E y ( x + Δ x ) − E y ( x ) Δ x = 2 L ′ Δ l ∂ I ∂ t 1 Δ x Δ y {\displaystyle {\frac {E_{x}(y+\Delta y)-E_{x}(y)}{\Delta y}}-{\frac {E_{y}(x+\Delta x)-E_{y}(x)}{\Delta x}}=2L'\,\Delta l{\frac {\partial {I}}{\partial {t}}}{\frac {1}{\Delta x\,\Delta y}}} Since Δ x = Δ y = Δ z = Δ l {\displaystyle \Delta x=\Delta y=\Delta z=\Delta l} and substituting I = H z Δ z {\displaystyle I=H_{z}\,\Delta z} gives Δ E x Δ y − Δ E y Δ x = 2 L ′ ∂ H z ∂ t {\displaystyle {\frac {\Delta E_{x}}{\Delta y}}-{\frac {\Delta E_{y}}{\Delta x}}=2L'{\frac {\partial H_{z}}{\partial t}}} This reduces to Maxwell's equations when Δ l → 0 {\displaystyle \Delta l\rightarrow 0} . 
Similarly, using the conditions across the capacitors on ports 1 and 4, it can be shown that the corresponding two other Maxwell equations are the following: ∂ H z ∂ y = C ′ ∂ E x ∂ t {\displaystyle {\frac {\partial {H_{z}}}{\partial {y}}}=C'{\frac {\partial {E_{x}}}{\partial {t}}}} − ∂ H z ∂ x = C ′ ∂ E y ∂ t {\displaystyle -{\frac {\partial {H_{z}}}{\partial {x}}}=C'{\frac {\partial {E_{y}}}{\partial {t}}}} Having these results, it is possible to compute the scattering matrix of a shunt node. The incident voltage pulse on port 1 at time-step k is denoted as k V 1 i {\displaystyle _{k}V_{1}^{i}} . Replacing the four line segments from the above figure with their Thevenin equivalent it is possible to show that the following equation for the reflected voltage pulse holds: k V 1 r = 0.5 ( k V 1 i + k V 2 i + k V 3 i − k V 4 i ) {\displaystyle _{k}V_{1}^{r}=0.5\left(_{k}V_{1}^{i}+_{k}V_{2}^{i}+_{k}V_{3}^{i}-_{k}V_{4}^{i}\right)} If all incident waves as well as all reflected waves are collected in one vector, then this equation may be written down for all ports in matrix form: k V r = S k V i {\displaystyle _{k}\mathbf {V} ^{r}=\mathbf {S} _{k}\mathbf {V} ^{i}} where k V i {\displaystyle _{k}\mathbf {V} ^{i}} and k V r {\displaystyle _{k}\mathbf {V} ^{r}} are the incident and the reflected pulse amplitude vectors. For a series node the scattering matrix S has the following form S = 1 2 [ 1 1 1 − 1 1 1 − 1 1 1 − 1 1 1 − 1 1 1 1 ] {\displaystyle \mathbf {S} ={\frac {1}{2}}\left[{\begin{array}{cccc}1&1&1&-1\\1&1&-1&1\\1&-1&1&1\\-1&1&1&1\end{array}}\right]} === Connection between TLM nodes === In order to describe the connection between adjacent nodes by a mesh of series nodes, look at the figure on the right. As the incident pulse in timestep k+1 on a node is the scattered pulse from an adjacent node in timestep k, the following connection equations are derived: k + 1 V 1 i ( x , y ) = k + 1 V 3 r ( x , y − 1 ) {\displaystyle _{k+1}V_{1}^{i}(x,y)=_{k+1}V_{3}^{r}(x,y-1)} k + 1 V 2 i ( x , y ) = k + 1 V 4 r ( x − 1 , y ) {\displaystyle _{k+1}V_{2}^{i}(x,y)=_{k+1}V_{4}^{r}(x-1,y)} k + 1 V 3 i ( x , y ) = k + 1 V 1 r ( x , y + 1 ) {\displaystyle _{k+1}V_{3}^{i}(x,y)=_{k+1}V_{1}^{r}(x,y+1)} k + 1 V 4 i ( x , y ) = k + 1 V 2 r ( x + 1 , y ) {\displaystyle _{k+1}V_{4}^{i}(x,y)=_{k+1}V_{2}^{r}(x+1,y)} By modifying the scattering matrix S {\displaystyle {\textbf {S}}} inhomogeneous and lossy materials can be modelled. By adjusting the connection equations it is possible to simulate different boundaries. === The shunt TLM node === Apart from the series node, described above there is also the shunt TLM node, which represents a TM-mode field distribution. The only non-zero components of such wave are H x {\displaystyle H_{x}} , H y {\displaystyle H_{y}} , and E z {\displaystyle E_{z}} . With similar considerations as for the series node the scattering matrix of the shunt node can be derived. == 3D TLM models == Most problems in electromagnetics require a three-dimensional grid. As we now have structures that describe TE and TM-field distributions, intuitively it seems possible to define a combination of shunt and series nodes providing a full description of the electromagnetic field. Such attempts have been made, but because of the complexity of the resulting structures they proved to be not very useful. Using the analogy that was presented above leads to calculation of the different field components at physically separated points. 
This causes difficulties in providing simple and efficient boundary definitions. A solution to these problems was provided by Johns in 1987, when he proposed the structure known as the symmetrical condensed node (SCN), presented in the figure on the right. It consists of 12 ports because two field polarisations are to be assigned to each of the 6 sides of a mesh cell. The topology of the SCN cannot be analysed using Thevenin equivalent circuits. More general energy and charge conservation principles are to be used. The electric and the magnetic fields on the sides of the SCN node number (l,m,n) at time instant k may be summarised in 12-dimensional vectors k E l , m , n = k [ E 1 , E 2 , … , E 11 , E 12 ] l , m , n T {\displaystyle _{k}\mathbf {E} _{l,m,n}=_{k}\left[E_{1},E_{2},\ldots ,E_{11},E_{12}\right]_{l,m,n}^{T}} k H l , m , n = k [ H 1 , H 2 , … , H 11 , H 12 ] l , m , n T {\displaystyle _{k}\mathbf {H} _{l,m,n}=_{k}\left[H_{1},H_{2},\ldots ,H_{11},H_{12}\right]_{l,m,n}^{T}} They can be linked with the incident and scattered amplitude vectors via k a l , m , n = 1 2 Z F k E l , m , n + Z F 2 k H l , m , n {\displaystyle _{k}\mathbf {a} _{l,m,n}={\frac {1}{2{\sqrt {Z_{F}}}}}{_{k}\mathbf {E} }_{l,m,n}+{\frac {\sqrt {Z_{F}}}{2}}{_{k}\mathbf {H} }_{l,m,n}} k b l , m , n = 1 2 Z F k E l , m , n − Z F 2 k H l , m , n {\displaystyle _{k}\mathbf {b} _{l,m,n}={\frac {1}{2{\sqrt {Z_{F}}}}}{_{k}\mathbf {E} }_{l,m,n}-{\frac {\sqrt {Z_{F}}}{2}}{_{k}\mathbf {H} }_{l,m,n}} where Z F = μ ε {\displaystyle Z_{F}={\sqrt {\frac {\mu }{\varepsilon }}}} is the field impedance, k a l , m , n {\displaystyle _{k}\mathbf {a} _{l,m,n}} is the vector of the amplitudes of the incident waves to the node, and k b l , m , n {\displaystyle _{k}\mathbf {b} _{l,m,n}} is the vector of the scattered amplitudes. The relation between the incident and scattered waves is given by the matrix equation k b l , m , n = S k a l , m , n {\displaystyle _{k}\mathbf {b} _{l,m,n}=\mathbf {S} _{k}\mathbf {a} _{l,m,n}} The scattering matrix S can be calculated. For the symmetrical condensed node with ports defined as in the figure the following result is obtained S = [ 0 S 0 S 0 T S 0 T 0 S 0 S 0 S 0 T 0 ] {\displaystyle \mathbf {S} =\left[{\begin{array}{ccc}0&\mathbf {S} _{0}&\mathbf {S} _{0}^{T}\\\mathbf {S} _{0}^{T}&0&\mathbf {S} _{0}\\\mathbf {S} _{0}&\mathbf {S} _{0}^{T}&0\end{array}}\right]} where the following matrix was used S 0 = 1 2 [ 0 0 1 − 1 0 0 − 1 1 1 1 0 0 1 1 0 0 ] {\displaystyle \mathbf {S} _{0}={\frac {1}{2}}\left[{\begin{array}{cccc}0&0&1&-1\\0&0&-1&1\\1&1&0&0\\1&1&0&0\end{array}}\right]} The connection between different SCNs is done in the same manner as for the 2D nodes. == Open-sourced code implementation of 3D-TLM == The George Green Institute for Electromagnetics Research (GGIEMR) has open-sourced an efficient implementation of 3D-TLM, capable of parallel computation by means of MPI named GGITLM and available online. == References == C. Christopoulos, The Transmission Line Modeling Method: TLM, Piscataway, NY, IEEE Press, 1995. ISBN 978-0-19-856533-8 Russer, P., Electromagnetics, Microwave Circuit and Antenna Design for Communications Engineering, Second edition, Artec House, Boston, 2006, ISBN 978-1-58053-907-4 P. B. Johns and M.O'Brien. "Use of the transmission line modelling (t.l.m) method to solve nonlinear lumped networks", The Radio Electron and Engineer. 1980. J. L. 
Herring, Developments in the Transmission-Line Modelling Method for Electromagnetic Compatibility Studies, PhD thesis, University of Nottingham, 1993. Mansour Ahmadian, Transmission Line Matrix (TLM) modelling of medical ultrasound PhD thesis, University of Edinburgh 2001
Wikipedia/Transmission-line_matrix_method
Diffraction topography (short: "topography") is an imaging technique based on Bragg diffraction. Diffraction topographic images ("topographies") record the intensity profile of a beam of X-rays (or, sometimes, neutrons) diffracted by a crystal. A topography thus represents a two-dimensional spatial intensity mapping (image) of the X-rays diffracted in a specific direction, so regions which diffract substantially will appear brighter than those which do not. This is equivalent to the spatial fine structure of a Laue reflection. Topographs often reveal the irregularities in a non-ideal crystal lattice. X-ray diffraction topography is one variant of X-ray imaging, making use of diffraction contrast rather than the absorption contrast usually used in radiography and computed tomography (CT). Topography is exploited to a lesser extent with neutrons, and is the same concept as dark field imaging in an electron microscope. Topography is used for monitoring crystal quality and visualizing defects in many different crystalline materials. It has proved helpful e.g. when developing new crystal growth methods, for monitoring growth and the crystal quality achieved, and for iteratively optimizing growth conditions. In many cases, topography can be applied without preparing or otherwise damaging the sample; it is therefore one variant of non-destructive testing. == History == After the discovery of x-rays by Wilhelm Röntgen in 1895, and of the principles of X-ray diffraction by Laue and the Bragg family, it took several decades for the benefits of diffraction imaging to be fully recognized, and the first useful experimental techniques to be developed. The first systematic reports of laboratory topography techniques date from the early 1940s. In the 1950s and 1960s, topographic investigations played a role in detecting the nature of defects and improving crystal growth methods for germanium and (later) silicon as materials for semiconductor microelectronics. For a more detailed account of the historical development of topography, see J.F. Kelly – "A brief history of X-ray diffraction topography". From about the 1970s on, topography profited from the advent of synchrotron x-ray sources, which provided considerably more intense x-ray beams, allowing shorter exposure times, better contrast, higher spatial resolution, and investigation of smaller samples or rapidly changing phenomena. Initial applications of topography were mainly in the field of metallurgy, controlling the growth of better crystals of various metals. Topography was later extended to semiconductors, and generally to materials for microelectronics. A related field is the investigation of materials and devices for X-ray optics, such as monochromator crystals made of silicon, germanium or diamond, which need to be checked for defects prior to being used. Extensions of topography to organic crystals are somewhat more recent. Topography is applied today not only to volume crystals of any kind, including semiconductor wafers, but also to thin layers, entire electronic devices, and organic materials such as protein crystals. == Basic principle of topography == The basic working principle of diffraction topography is as follows: An incident, spatially extended beam (mostly of X-rays, or neutrons) impinges on a sample. The beam may be either monochromatic, i.e. consist of a single wavelength of X-rays or neutrons, or polychromatic, i.e. composed of a mixture of wavelengths ("white beam" topography).
Furthermore, the incident beam may be either parallel, consisting only of "rays" propagating along nearly the same direction, or divergent/convergent, containing several more strongly different directions of propagation. When the beam hits the crystalline sample, Bragg diffraction occurs, i.e. the incident wave is reflected by the atoms on certain lattice planes of the sample, if it hits those planes at the right Bragg angle. Diffraction from the sample can take place either in reflection geometry (Bragg case), with the beam entering and leaving through the same surface, or in transmission geometry (Laue case). Diffraction gives rise to a diffracted beam, which will leave the sample and propagate along a direction differing from the incident direction by the scattering angle 2 θ B {\displaystyle 2\theta _{B}} . The cross section of the diffracted beam may or may not be identical to that of the incident beam. In the case of strongly asymmetric reflections, the beam size (in the diffraction plane) is considerably expanded or compressed, with expansion occurring if the incidence angle is much smaller than the exit angle, and vice versa. Independently of this beam expansion, the relation of sample size to image size is given by the exit angle alone: the apparent lateral size of sample features parallel to the exit surface is downscaled in the image by the projection effect of the exit angle. A homogeneous sample (with a regular crystal lattice) would yield a homogeneous intensity distribution in the topograph (a "flat" image with no contrast). Intensity modulations (topographic contrast) arise from irregularities in the crystal lattice, originating from various kinds of defects such as: voids and inclusions in the crystal; phase boundaries (regions of different crystallographic phase, polytype, ...); defective areas and non-crystalline (amorphous) areas or inclusions; cracks and surface scratches; stacking faults; dislocations and dislocation bundles; grain boundaries and domain walls; growth striations; point defects or defect clusters; crystal deformation; and strain fields. In many cases of defects such as dislocations, topography is not directly sensitive to the defects themselves (the atomic structure of the dislocation core), but predominantly to the strain field surrounding the defect region. == Theory == Theoretical descriptions of contrast formation in X-ray topography are largely based on the dynamical theory of diffraction. This framework is helpful in the description of many aspects of topographic image formation: entrance of an X-ray wavefield into a crystal, propagation of the wavefield inside the crystal, interaction of the wavefield with crystal defects, altering of wavefield propagation by local lattice strains, diffraction, multiple scattering, absorption. The theory is therefore often helpful in the interpretation of topographic images of crystal defects. The exact nature of a defect often cannot be deduced directly from the observed image (i.e., a "backwards calculation" is problematic). Instead, one has to make assumptions about the structure of the defect, deduce a hypothetical image from the assumed structure ("forward calculation", based on theory), and compare with the experimental image. If the match between both is not good enough, the assumptions have to be varied until sufficient correspondence is reached. Theoretical calculations, and in particular numerical simulations by computer based on this theory, are thus a valuable tool for the interpretation of topographic images.
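The purely geometric relations used above (Bragg's law and the exit-angle projection) can be illustrated with a short calculation. The numbers in the Python sketch below, a Si(111) lattice-plane spacing, a Cu Kα wavelength and a 1 mm surface feature, are illustrative assumptions chosen for the example, not values taken from the article.

```python
import numpy as np

d_spacing = 3.1356e-10     # assumed Si(111) lattice-plane spacing, metres
wavelength = 1.5406e-10    # assumed Cu K-alpha wavelength, metres

# Bragg's law: lambda = 2 d sin(theta_B)
theta_B = np.arcsin(wavelength / (2 * d_spacing))
print(np.degrees(theta_B))        # ~14.2 deg; the diffracted beam leaves at 2*theta_B ~ 28.4 deg

# Exit-angle projection: a feature of length L lying in the exit surface appears
# foreshortened in the topograph by the sine of the exit angle (for a symmetric
# Bragg-case reflection the exit angle equals theta_B).
L_feature = 1.0e-3                # 1 mm feature on the sample surface
print(L_feature * np.sin(theta_B))  # apparent size in the image plane, ~0.25 mm
```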
=== Contrast mechanisms === The topographic image of a uniform crystal with a perfectly regular lattice, illuminated by a homogeneous beam, is uniform (no contrast). Contrast arises when distortions of the lattice (defects, tilted crystallites, strain) occur; when the crystal is composed of several different materials or phases; or when the thickness of the crystal changes across the image domain. ==== Structure factor contrast ==== The diffraction from a crystalline material, and thus the intensity of the diffracted beam, changes with the type and number of atoms inside the crystal unit cell. This fact is quantitatively expressed by the structure factor. Different materials have different structure factors, and similarly for different phases of the same material (e.g. for materials crystallizing in several different space groups). In samples composed of a mixture of materials/phases in spatially adjacent domains, the geometry of these domains can be resolved by topography. This is true, for example, also for twinned crystals, ferroelectric domains, and many others. ==== Orientation contrast ==== When a crystal is composed of crystallites with varying lattice orientation, topographic contrast arises: In plane-wave topography, only selected crystallites will be in diffracting position, thus yielding diffracted intensity only in some parts of the image. Upon sample rotation, these will disappear, and other crystallites will appear in the new topograph as strongly diffracting. In white-beam topography, all misoriented crystallites will be diffracting simultaneously (each at a different wavelength). However, the exit angles of the respective diffracted beams will differ, leading to overlapping regions of enhanced intensity as well as to shadows in the image, thus again giving rise to contrast. While in the case of tilted crystallites, domain walls, grain boundaries etc. orientation contrast occurs on a macroscopic scale, it can also be generated more locally around defects, e.g. due to curved lattice planes around a dislocation core. ==== Extinction contrast ==== Another type of topographic contrast, extinction contrast, is slightly more complex. While the two above variants are explicable in simple terms based on geometrical theory (basically, the Bragg law) or kinematical theory of X-ray diffraction, extinction contrast can be understood based on dynamical theory. Qualitatively, extinction contrast arises e.g. when the thickness of a sample, compared to the respective extinction length (Bragg case) or Pendelloesung length (Laue case), changes across the image. In this case, diffracted beams from areas of different thickness, having suffered different degrees of extinction, are recorded within the same image, giving rise to contrast. Topographists have systematically investigated this effect by studying wedge-shaped samples, of linearly varying thickness, allowing to directly record in one image the dependence of diffracted intensity on sample thickness as predicted by dynamical theory. In addition to mere thickness changes, extinction contrast also arises when parts of a crystal are diffracting with different strengths, or when the crystal contains deformed (strained) regions. 
The governing quantity for an overall theory of extinction contrast in deformed crystals is called the effective misorientation Δ θ ( r → ) = 1 h → ⋅ cos ⁡ θ B ∂ ∂ s h → [ h → ⋅ u → ( r → ) ] {\displaystyle \Delta \theta ({\vec {r}})={\frac {1}{{\vec {h}}\cdot \cos \theta _{B}}}{\frac {\partial }{\partial s_{\vec {h}}}}\left[{\vec {h}}\cdot {\vec {u}}({\vec {r}})\right]} where u → ( r → ) {\displaystyle {\vec {u}}({\vec {r}})} is the displacement vector field, and s → 0 {\displaystyle {\vec {s}}_{0}} and s → h {\displaystyle {\vec {s}}_{h}} are the directions of the incident and diffracted beam, respectively. In this way, different kinds of disturbances are "translated" into equivalent misorientation values, and contrast formation can be understood analogously to orientation contrast. For instance, a compressively strained material requires larger Bragg angles for diffraction at unchanged wavelength. To compensate for this and to reach diffraction conditions, the sample needs to be rotated, similarly as in the case of lattice tilts. A simplified and more "transparent" formula taking into account the combined effect of tilts and strains onto contrast is the following: Δ θ ( r → ) = − tan ⁡ θ B Δ d d ( r → ) ± Δ φ ( r → ) {\displaystyle \Delta \theta ({\vec {r}})=-\tan \theta _{B}{\frac {\Delta d}{d}}({\vec {r}})\pm \Delta \varphi ({\vec {r}})} ==== Visibility of defects; types of defect images ==== To discuss the visibility of defects in topographic images according to theory, consider the exemplary case of a single dislocation: It will give rise to contrast in topography only if the lattice planes involved in diffraction are distorted in some way by the existence of the dislocation. This is true in the case of an edge dislocation if the scattering vector of the Bragg reflection used is parallel to the Burgers vector of the dislocation, or at least has a component in the plane perpendicular to the dislocation line, but not if it is parallel to the dislocation line. In the case of a screw dislocation, the scattering vector has to have a component along the Burgers vector, which is now parallel to dislocation line. As a rule of thumb, a dislocation will be invisible in a topograph if the vector product g ⋅ b {\displaystyle \mathbf {g} \cdot \mathbf {b} } is zero. (A more precise rule will have to distinguish between screw and edge dislocations and to also take the direction of the dislocation line l {\displaystyle l} into account. If a defect is visible, often there occurs not only one, but several distinct images of it on the topograph. Theory predicts three images of single defects: The so-called direct image, the kinematical image, and the intermediary image. === Spatial resolution; limiting effects === The spatial resolution achievable in topographic images can be limited by one or several of three factors: the resolution (grain or pixel size) of the detector, the experimental geometry, and intrinsic diffraction effects. First, the spatial resolution of an image can obviously not be better than the grain size (in the case of film) or the pixel size (in the case of digital detectors) with which it was recorded. This is the reason why topography requires high-resolution X-ray films or CCD cameras with the smallest pixel sizes available today. Secondly, resolution can be additionally blurred by a geometric projection effect. 
If one point of the sample is a "hole" in an otherwise opaque mask, then the X-ray source, of finite lateral size S, is imaged through the hole onto a finite image domain given by the formula Δ x = S ⋅ d D = S D ⋅ d {\displaystyle \Delta x=S\cdot {\frac {d}{D}}={\frac {S}{D}}\cdot d} where Δx is the spread of the image of one sample point in the image plane, D is the source-to-sample distance, and d is the sample-to-image distance. (For example, with a source size S = 0.1 mm, a source-to-sample distance D = 10 m and a sample-to-detector distance d = 10 cm, the geometric blur is Δx = 1 μm.) The ratio S/D corresponds to the angle (in radians) under which the source appears from the position of the sample (the angular source size, equivalent to the incident divergence at one sample point). The achievable resolution is thus best for small sources, large sample distances, and small detector distances. This is why the detector (film) needed to be placed very close to the sample in the early days of topography; only at synchrotrons, with their small S and (very) large D, could larger values of d finally be afforded, introducing much more flexibility into topography experiments. Thirdly, even with perfect detectors and ideal geometric conditions, the visibility of special contrast features, such as the images of single dislocations, can be additionally limited by diffraction effects. A dislocation in a perfect crystal matrix gives rise to contrast only in those regions where the local orientation of the crystal lattice differs from the average orientation by more than about the Darwin width of the Bragg reflection used. A quantitative description is provided by the dynamical theory of X-ray diffraction. As a result, and somewhat counter-intuitively, the widths of dislocation images become narrower when the associated rocking curves are large. Thus, strong reflections of low diffraction order are particularly appropriate for topographic imaging. They permit topographists to obtain narrow, well-resolved images of dislocations, and to separate single dislocations even when the dislocation density in a material is rather high. In more unfavourable cases (weak, high-order reflections, higher photon energies), dislocation images become broad, diffuse, and overlap for high and medium dislocation densities. Highly ordered, strongly diffracting materials – like minerals or semiconductors – are generally unproblematic, whereas e.g. protein crystals are particularly challenging for topographic imaging. Apart from the Darwin width of the reflection, the width of single dislocation images may additionally depend on the Burgers vector of the dislocation, i.e. both its length and its orientation (relative to the scattering vector), and, in plane wave topography, on the angular departure from the exact Bragg angle. The latter dependence follows a reciprocity law: dislocation images become narrower in inverse proportion to the angular distance from the exact Bragg angle. So-called weak-beam conditions are thus favourable in order to obtain narrow dislocation images. == Experimental realization – instrumentation == To conduct a topographic experiment, three groups of instruments are required: an x-ray source, potentially including appropriate x-ray optics; a sample stage with sample manipulator (diffractometer); and a two-dimensionally resolving detector (most often X-ray film or camera). === X-ray source === The x-ray beam used for topography is generated by an x-ray source, typically either a laboratory x-ray tube (fixed or rotating) or a synchrotron source. The latter offers advantages due to its higher beam intensity, lower divergence, and its continuous wavelength spectrum.
X-ray tubes are still useful, however, due to easier access and continuous availability, and are often used for initial screening of samples and/or training of new staff. For white-beam topography, not much more is required: most often, a set of slits to precisely define the beam shape and a (well polished) vacuum exit window will suffice. For those topography techniques requiring a monochromatic x-ray beam, an additional crystal monochromator is mandatory. A typical configuration at synchrotron sources is a combination of two silicon crystals, both with surfaces oriented parallel to [111]-lattice planes, in geometrically opposite orientation. This guarantees relatively high intensity, good wavelength selectivity (about 1 part in 10000) and the possibility of changing the target wavelength without having to change the beam position ("fixed exit"). === Sample stage === To place the sample under investigation into the x-ray beam, a sample holder is required. While in white-beam techniques a simple fixed holder is sometimes sufficient, experiments with monochromatic techniques typically require one or more degrees of freedom of rotational motion. Samples are therefore placed on a diffractometer, allowing the sample to be oriented about one, two or three axes. If the sample needs to be displaced, e.g. in order to scan its surface through the beam in several steps, additional translational degrees of freedom are required. === Detector === After scattering by the sample, the profile of the diffracted beam needs to be detected by a two-dimensionally resolving X-ray detector. The classical "detector" is X-ray sensitive film, with nuclear plates as a traditional alternative. The first step beyond these "offline" detectors were the so-called image plates, although limited in readout speed and spatial resolution. Since about the mid-1990s, CCD cameras have emerged as a practical alternative, offering many advantages such as fast online readout and the possibility of recording entire image series in place. X-ray sensitive CCD cameras, especially those with spatial resolution in the micrometer range, are now well established as electronic detectors for topography. A promising further option for the future may be pixel detectors, although their limited spatial resolution may restrict their usefulness for topography. General criteria for judging the practical usefulness of detectors for topography applications include spatial resolution, sensitivity, dynamic range ("color depth", in black-white mode), readout speed, weight (important for mounting on diffractometer arms), and price. === Systematic overview of techniques and imaging conditions === The manifold topographic techniques can be categorized according to several criteria. One of them is the distinction between restricted-beam techniques on the one hand (such as section topography or pinhole topography) and extended-beam techniques on the other hand, which use the full width and intensity of the incoming beam. Another, independent distinction is between integrated-wave topography, making use of the full spectrum of incoming X-ray wavelengths and divergences, and plane-wave (monochromatic) topography, which is more selective in both wavelengths and divergence. Integrated-wave topography can be realized as either single-crystal or double-crystal topography. Further distinctions include the one between topography in reflection geometry (Bragg case) and in transmission geometry (Laue case).
For a full discussion and a graphical hierarchy of topographic techniques, see [1]. == Experimental techniques I – Some classical topographic techniques == The following is an exemplary list of some of the most important experimental techniques for topography: === White-beam === White-beam topography uses the full bandwidth of X-ray wavelengths in the incoming beam, without any wavelength filtering (no monochromator). The technique is particularly useful in combination with synchrotron radiation sources, due to their wide and continuous wavelength spectrum. In contrast to the monochromatic case, in which accurate sample adjustment is often necessary in order to reach diffraction conditions, the Bragg equation is always and automatically fulfilled in the case of a white X-ray beam: Whatever the angle at which the beam hits a specific lattice plane, there is always one wavelength in the incident spectrum for which the Bragg angle is fulfilled just at this precise angle (on condition that the spectrum is wide enough). White-beam topography is therefore a very simple and fast technique. Disadvantages include the high X-ray dose, possibly leading to radiation damage to the sample, and the necessity to carefully shield the experiment. White-beam topography produces a pattern of several diffraction spots, each spot being related to one specific lattice plane in the crystal. This pattern, typically recorded on X-ray film, corresponds to a Laue pattern and shows the symmetry of the crystal lattice. The fine structure of each single spot (topograph) is related to defects and distortions in the sample. The distance between spots, and the details of contrast within one single spot, depend on the distance between sample and film; this distance is therefore an important degree of freedom for white-beam topography experiments. Deformation of the crystal will cause variation in the size of the diffraction spot. For a cylindrically bent crystal the Bragg planes in the crystal lattice will lie on Archimedean spirals (with the exception of those orientated tangentially and radially to the curvature of the bend, which are respectively cylindrical and planar), and the degree of curvature can be determined in a predictable way from the length of the spots and the geometry of the set-up. White-beam topographs are useful for fast and comprehensive visualization of crystal defect and distortions. They are, however, rather difficult to analyze in any quantitative way, and even a qualitative interpretation often requires considerable experience and time. === Plane-wave topography === Plane-wave topography is in some sense the opposite of white-beam topography, making use of monochromatic (single-wavelength) and parallel incident beam. In order to achieve diffraction conditions, the sample under study must be precisely aligned. The contrast observed strongly depends on the exact position of the angular working point on the rocking curve of the sample, i.e. on the angular distance between the actual sample rotation position and the theoretical position of the Bragg peak. A sample rotation stage is therefore an essential instrumental precondition for controlling and varying the contrast conditions. === Section topography === While the above techniques use a spatially extended, wide incident beam, section topography is based on a narrow beam on the order of some 10 micrometers (in one or, in the case of pinhole topography with a pencil beam, in both lateral dimensions). 
Section topographs therefore investigate only a restricted volume of the sample. On its path through the crystal, the beam is diffracted at different depths, each one contributing to image formation on a different location on the detector (film). Section topography can therefore be used for depth-resolved defect analysis. In section topography, even perfect crystals display fringes. The technique is very sensitive to crystalline defects and strain, as these distort the fringe pattern in the topograph. Quantitative analysis can be performed with the help of image simulation by computer algorithms, usually based on the Takagi-Taupin equations. An enlarged synchrotron X-ray transmission section topograph on the right shows a diffraction image of the section of a sample having a gallium nitride (GaN) layer grown by metal-organic vapour phase epitaxy on sapphire wafer. Both the epitaxial GaN layer and the sapphire substrate show numerous defects. The GaN layer actually consists of about 20 micrometers wide small-angle grains connected to each other. Strain in the epitaxial layer and substrate is visible as elongated streaks parallel to the diffraction vector direction. The defects on the underside of the sapphire wafer section image are surface defects on the unpolished backside of the sapphire wafer. Between the sapphire and GaN the defects are interfacial defects. === Projection topography === The setup for projection topography (also called "traverse" topography") is essentially identical to section topography, the difference being that both sample and film are now scanned laterally (synchronously) with respect to the narrow incident beam. A projection topograph therefore corresponds to the superposition of many adjacent section topographs, able to investigate not just a restricted portion, but the entire volume of a crystal. The technique is rather simple and has been in routine use at "Lang cameras" in many research laboratories. === Berg-Barrett === Berg-Barrett topography uses a narrow incident beam that is reflected from the surface of the sample under study under conditions of high asymmetry (grazing incidence, steep exit). To achieve sufficient spatial resolution, the detector (film) needs to be placed rather close to the sample surface. Berg-Barrett topography is another routine technique in many X-ray laboratories. == Experimental techniques II – Advanced topographic techniques == === Topography at synchrotron sources === The advent of synchrotron X-ray sources has been beneficial to X-ray topography techniques. Several of the properties of synchrotron radiation are advantageous also for topography applications: The high collimation (more precisely the small angular source size) allows to reach higher geometrical resolution in topographs, even at larger sample-to-detector distances. The continuous wavelength spectrum facilitates white-beam topography. The high beam intensities available at synchrotrons make it possible to investigate small sample volumes, to work at weaker reflections or further off Bragg-conditions (weak beam conditions), and to achieve shorter exposure times. Finally, the discrete time structure of synchrotron radiation permits topographists to use stroboscopic methods to efficiently visualize time-dependent, periodically recurrent structures (such as acoustic waves on crystal surfaces). === Neutron topography === Diffraction topography with neutron radiation has been in use for several decades, mainly at research reactors with high neutron beam intensities. 
Neutron topography can make use of contrast mechanisms that are partially different from the X-ray case, and thus serve e.g. to visualize magnetic structures. However, due to the comparatively low neutron intensities, neutron topography requires long exposure times. Its use is therefore rather limited in practice. Literature: Schlenker, M.; Baruchel, J.; Perrier de la Bâthie, R.; Wilson, S. A. (1975). "Neutron-diffraction section topography: Observing crystal slices before cutting them". Journal of Applied Physics. 46 (7). AIP Publishing: 2845–2848. Bibcode:1975JAP....46.2845S. doi:10.1063/1.322029. ISSN 0021-8979. Dudley, M.; Baruchel, J.; Sherwood, J. N. (1990-06-01). "Neutron topography as a tool for studying reactive organic crystals: a feasibility study". Journal of Applied Crystallography. 23 (3). International Union of Crystallography (IUCr): 186–198. Bibcode:1990JApCr..23..186D. doi:10.1107/s0021889890000371. ISSN 0021-8898. === Topography applied to organic crystals === Topography is "classically" applied to inorganic crystals, such a metals and semiconductors. However, it is nowadays applied more and more often also to organic crystals, most notably proteins. Topographic investigations can help to understand and optimize crystal growth processes also for proteins. Numerous studies have been initiated in the last 5–10 years, using both white-beam and plane-wave topography. Although considerable progress has been achieved, topography on protein crystals remains a difficult discipline: Due to large unit cells, small structure factors and high disorder, diffracted intensities are weak. Topographic imaging therefore requires long exposure times, which may lead to radiation damage of the crystals, generating in the first place the defects which are then imaged. In addition, the low structure factors lead to small Darwin widths and thus to broad dislocation images, i.e. rather low spatial resolution. Nevertheless, in some cases, protein crystals were reported to be perfect enough to achieve images of single dislocations. Literature: Stojanoff, V.; Siddons, D. P. (1996-05-01). "X-ray topography of a lysozyme crystal". Acta Crystallographica Section A. 52 (3). International Union of Crystallography (IUCr): 498–499. Bibcode:1996AcCrA..52..498S. doi:10.1107/s0108767395014553. ISSN 0108-7673. Izumi, Kunihide; Sawamura, Sinzo; Ataka, Mitsuo (1996). "X-ray topography of lysozyme crystals". Journal of Crystal Growth. 168 (1–4). Elsevier BV: 106–111. Bibcode:1996JCrGr.168..106I. doi:10.1016/0022-0248(96)00367-3. ISSN 0022-0248. Stojanoff, V.; Siddons, D. P.; Monaco, L. A.; Vekilov, P.; Rosenberger, F. (1997-09-01). "X-ray Topography of Tetragonal Lysozyme Grown by the Temperature-Controlled Technique". Acta Crystallographica Section D. 53 (5). International Union of Crystallography (IUCr): 588–595. Bibcode:1997AcCrD..53..588S. doi:10.1107/s0907444997005763. ISSN 0907-4449. PMID 15299890. Izumi, Kunihide; Taguchi, Ken; Kobayashi, Yoko; Tachibana, Masaru; Kojima, Kenichi; Ataka, Mitsuo (1999). "Screw dislocation lines in lysozyme crystals observed by Laue topography using synchrotron radiation". Journal of Crystal Growth. 206 (1–2). Elsevier BV: 155–158. Bibcode:1999JCrGr.206..155I. doi:10.1016/s0022-0248(99)00344-9. ISSN 0022-0248. Lorber, B.; Sauter, C.; Ng, J.D.; Zhu, D.W.; Giegé, R.; Vidal, O.; Robert, M.C.; Capelle, B. (1999). "Characterization of protein and virus crystals by quasi-planar wave X-ray topography: a comparison between crystals grown in solution and in agarose gel". 
Journal of Crystal Growth. 204 (3). Elsevier BV: 357–368. Bibcode:1999JCrGr.204..357L. doi:10.1016/s0022-0248(99)00184-0. ISSN 0022-0248. Capelle, B.; Epelboin, Y.; Härtwig, J.; Moraleda, A. B.; Otálora, F.; Stojanoff, V. (2004-01-17). "Characterization of dislocations in protein crystals by means of synchrotron double-crystal topography". Journal of Applied Crystallography. 37 (1). International Union of Crystallography (IUCr): 67–71. doi:10.1107/s0021889803024415. hdl:10261/18789. ISSN 0021-8898. Lübbert, Daniel; Meents, Alke; Weckert, Edgar (2004-05-21). "Accurate rocking-curve measurements on protein crystals grown in a homogeneous magnetic field of 2.4 T". Acta Crystallographica Section D. 60 (6). International Union of Crystallography (IUCr): 987–998. doi:10.1107/s0907444904005268. ISSN 0907-4449. PMID 15159557. Lovelace, Jeffrey J.; Murphy, Cameron R.; Bellamy, Henry D.; Brister, Keith; Pahl, Reinhard; Borgstahl, Gloria E. O. (2005-05-13). "Advances in digital topography for characterizing imperfections in protein crystals". Journal of Applied Crystallography. 38 (3). International Union of Crystallography (IUCr): 512–519. doi:10.1107/s0021889805009234. ISSN 0021-8898. === Topography on thin layered structures === Not only volume crystals can be imaged by topography, but also crystalline layers on a foreign substrate. For very thin layers, the scattering volume and thus the diffracted intensities are very low. In these cases, topographic imaging is therefore a rather demanding task, unless incident beams with very high intensities are available. == Experimental techniques III – Special techniques and recent developments == === Reticulography === A relatively new topography-related technique (first published in 1996) is the so-called reticulography. Based on white-beam topography, the new aspect consists in placing a fine-scaled metallic grid ("reticule") between sample and detector. The metallic grid lines are highly absorbing, producing dark lines in the recorded image. While for flat, homgeneous sample the image of the grid is rectilinear, just as the grid itself, strongly deformed grid images may occur in the case of tilted or strained sample. The deformation results from Bragg angle changes (and thus different directions of propagation of the diffracted beams) due to lattice parameter differences (or tilted crystallites) in the sample. The grid serves to split the diffracted beam into an array of microbeams, and to backtrace the propagation of each individual microbeam onto the sample surface. By recording reticulographic images at several sample-to-detector distances, and appropriate data processing, local distributions of misorientation across the sample surface can be derived. Lang, A. R.; Makepeace, A. P. W. (1996-11-01). "Reticulography: a simple and sensitive technique for mapping misorientations in single crystals". Journal of Synchrotron Radiation. 3 (6). International Union of Crystallography (IUCr): 313–315. Bibcode:1996JSynR...3..313L. doi:10.1107/s0909049596010515. ISSN 0909-0495. PMID 16702698. Lang, A. R.; Makepeace, A. P. W. (1999-12-01). "Synchrotron X-ray reticulographic measurement of lattice deformations associated with energetic ion implantation in diamond". Journal of Applied Crystallography. 32 (6). International Union of Crystallography (IUCr): 1119–1126. Bibcode:1999JApCr..32.1119L. doi:10.1107/s0021889899010924. ISSN 0021-8898. 
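The backtracing idea behind reticulography can be sketched numerically. In the toy Python example below, all positions and distances are invented for illustration: the shadow of each grid line is located in images recorded at two sample-to-detector distances, the propagation direction of the corresponding microbeam follows from the two positions, and the spread of these directions across the grid maps the local deflection of the diffracted beam, which the technique then relates to local misorientation (up to the geometric factors discussed in the papers cited above).

```python
import numpy as np

def microbeam_angles(x_near, x_far, d_near, d_far):
    """In-plane propagation angle of each microbeam, from the positions (m)
    of its grid-line shadow recorded at two detector distances (m)."""
    return np.arctan2(x_far - x_near, d_far - d_near)

# Invented shadow positions of five grid lines at detector distances 0.10 m and 0.30 m;
# the middle lines are slightly displaced, mimicking a locally misoriented region.
x_near = np.array([0.00, 1.00, 2.00, 3.00, 4.00]) * 1e-3
x_far  = np.array([0.00, 1.02, 2.05, 3.02, 4.00]) * 1e-3
angles = microbeam_angles(x_near, x_far, d_near=0.10, d_far=0.30)

# Deflection of each microbeam relative to the mean beam direction, in arcseconds;
# a flat, homogeneous sample would give zeros everywhere.
delta = np.degrees(angles - angles.mean()) * 3600
print(delta)
```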
=== Digital topography === The use of electronic detectors such as X-ray CCD cameras, replacing traditional X-ray film, facilitates topography in many ways. CCDs achieve online readout in (almost) real-time, sparing experimentalists the need to develop films in a dark room. Drawbacks with respect to films are the limited dynamic range and, above all, the moderate spatial resolution of commercial CCD cameras, making the development of dedicated CCD cameras necessary for high-resolution imaging. A further, decisive advantage of digital topography is the possibility to record series of images without changing detector position, thanks to online readout. This makes it possible, without complicated image registration procedures, to observe time-dependent phenomena, to perform kinetic studies, to investigate processes of device degradation and radiation damage, and to realize sequential topography (see below). === Time-resolved (stroboscopic) topography; Imaging of surface acoustic waves === To image time-dependent, periodically fluctuating phenomena, topography can be combined with stroboscopic exposure techniques. In this way, one selected phase of a sinusoidally varying movement is selectively imaged as a "snapshot". First applications were in the field of surface acoustic waves on semiconductor surfaces. Literature: Zolotoyabko, E.; Shilo, D.; Sauer, W.; Pernot, E.; Baruchel, J. (1998-10-19). "Visualization of 10 μm surface acoustic waves by stroboscopic x-ray topography". Applied Physics Letters. 73 (16). AIP Publishing: 2278–2280. Bibcode:1998ApPhL..73.2278Z. doi:10.1063/1.121701. ISSN 0003-6951. Sauer, W.; Streibl, M.; Metzger, T. H.; Haubrich, A. G. C.; Manus, S.; Wixforth, A.; Peisl, J.; Mazuelas, A.; Härtwig, J.; Baruchel, J. (1999-09-20). "X-ray imaging and diffraction from surface phonons on GaAs". Applied Physics Letters. 75 (12). AIP Publishing: 1709–1711. Bibcode:1999ApPhL..75.1709S. doi:10.1063/1.124797. ISSN 0003-6951. === Topo-tomography; 3D dislocation distributions === By combining topographic image formation with tomographic image reconstruction, distributions of defects can be resolved in three dimensions. Unlike "classical" computed tomography (CT), image contrast is not based on differences in absorption (absorption contrast), but on the usual contrast mechanisms of topography (diffraction contrast). In this way, three-dimensional distributions of dislocations in crystals have been imaged. Literature: Ludwig, W.; Cloetens, P.; Härtwig, J.; Baruchel, J.; Hamelin, B.; Bastie, P. (2001-09-25). "Three-dimensional imaging of crystal defects by 'topo-tomography'". Journal of Applied Crystallography. 34 (5). International Union of Crystallography (IUCr): 602–607. doi:10.1107/s002188980101086x. ISSN 0021-8898. === Sequential topography / Rocking Curve Imaging === Plane-wave topography can be made to extract an additional wealth of information from a sample by recording not just one image, but an entire sequence of topographs all along the sample's rocking curve. By following the diffracted intensity in one pixel across the entire sequence of images, local rocking curves from very small areas of sample surface can be reconstructed. Although the required post-processing and numerical analysis are sometimes moderately demanding, the effort is often compensated by very comprehensive information on the sample's local properties.
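As a sketch of the kind of per-pixel post-processing just described, the Python fragment below reduces a stack of topographs, one per rocking angle, to maps of integrated intensity, rocking-curve position and rocking-curve width using simple moments. The array layout, variable names and moment-based estimators are assumptions chosen for illustration; they are not the interface of any particular rocking-curve-imaging package.

```python
import numpy as np

def rocking_curve_maps(stack, angles):
    """Reduce a sequential-topography series to per-pixel maps.

    stack  : array (n_angles, ny, nx), one topograph per rocking angle
    angles : array (n_angles,), rocking angle of each exposure

    Returns integrated intensity (local scattering power), curve centroid
    (local lattice tilt) and RMS curve width (local lattice perfection).
    """
    stack = np.asarray(stack, dtype=float)
    theta = np.asarray(angles, dtype=float)[:, None, None]

    integrated = stack.sum(axis=0)
    safe = np.where(integrated > 0, integrated, np.inf)   # avoid 0/0 in empty pixels
    centroid = (stack * theta).sum(axis=0) / safe
    variance = (stack * (theta - centroid) ** 2).sum(axis=0) / safe
    return integrated, centroid, np.sqrt(np.maximum(variance, 0.0))

# Synthetic test: every pixel has a Gaussian rocking curve whose position
# drifts linearly across the field of view (an artificial lattice tilt).
angles = np.linspace(-0.01, 0.01, 41)                     # degrees
tilt = np.linspace(-1e-3, 1e-3, 32)[None, :] * np.ones((32, 1))
stack = np.exp(-(((angles[:, None, None] - tilt) / 2e-3) ** 2))

intensity, position, width = rocking_curve_maps(stack, angles)
print(position[16, 0], position[16, -1])   # recovers the imposed tilt gradient
```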
Quantities that become quantitatively measurable in this way include local scattering power, local lattice tilts (crystallite misorientation), and local lattice quality and perfection. Spatial resolution is, in many cases, essentially given by the detector pixel size. The technique of sequential topography, in combination with appropriate data analysis methods also called rocking curve imaging, constitutes a method of microdiffraction imaging, i.e. a combination of X-ray imaging with X-ray diffractometry. Literature: Lübbert, D; Baumbach, T; Härtwig, J; Boller, E; Pernot, E (2000). "μm-resolved high resolution X-ray diffraction imaging for semiconductor quality control". Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms. 160 (4). Elsevier BV: 521–527. Bibcode:2000NIMPB.160..521L. doi:10.1016/s0168-583x(99)00619-9. ISSN 0168-583X. Hoszowska, J; Freund, A K; Boller, E; Sellschop, J P F; Level, G; Härtwig, J; Burns, R C; Rebak, M; Baruchel, J (2001-05-03). "Characterization of synthetic diamond crystals by spatially resolved rocking curve measurements". Journal of Physics D: Applied Physics. 34 (10A). IOP Publishing: A47 – A51. Bibcode:2001JPhD...34A..47H. doi:10.1088/0022-3727/34/10a/311. ISSN 0022-3727. Mikulík, P; Lübbert, D; Korytár, D; Pernot, P; Baumbach, T (2003-04-22). "Synchrotron area diffractometry as a tool for spatial high-resolution three-dimensional lattice misorientation mapping". Journal of Physics D: Applied Physics. 36 (10A). IOP Publishing: A74 – A78. Bibcode:2003JPhD...36A..74M. doi:10.1088/0022-3727/36/10a/315. ISSN 0022-3727. Lovelace, Jeffrey J.; Murphy, Cameron R.; Pahl, Reinhard; Brister, Keith; Borgstahl, Gloria E. O. (2006-05-10). "Tracking reflections through cryogenic cooling with topography". Journal of Applied Crystallography. 39 (3). International Union of Crystallography (IUCr): 425–432. doi:10.1107/s0021889806012763. ISSN 0021-8898. === MAXIM === The "MAXIM" (MAterials X-ray IMaging) method is another approach combining diffraction analysis with spatial resolution. It can be viewed as serial topography with additional angular resolution in the exit beam. In contrast to the Rocking Curve Imaging method, it is better suited to more highly disturbed (polycrystalline) materials with lower crystalline perfection. The difference on the instrumental side is that MAXIM uses an array of slits / small channels (a so-called "multi-channel plate" (MCP), the two-dimensional equivalent of a Soller slit system) as an additional X-ray optical element between sample and CCD detector. These channels transmit intensity only in specific, parallel directions, and thus guarantee a one-to-one relation between detector pixels and points on the sample surface, which would otherwise not be given in the case of materials with high strain and/or a strong mosaicity. The spatial resolution of the method is limited by a combination of detector pixel size and channel plate periodicity, which in the ideal case are identical. The angular resolution is mostly given by the aspect ratio (length over width) of the MCP channels. Literature: Wroblewski, T.; Geier, S.; Hessmer, R.; Schreck, M.; Rauschenbach, B. (1995). "X-ray imaging of polycrystalline materials". Review of Scientific Instruments. 66 (6). AIP Publishing: 3560–3562. Bibcode:1995RScI...66.3560W. doi:10.1063/1.1145469. ISSN 0034-6748. Wroblewski, T.; Clauß, O.; Crostack, H.-A.; Ertel, A.; Fandrich, F.; Genzel, Ch.; Hradil, K.; Ternes, W.; Woldt, E. (1999).
"A new diffractometer for materials science and imaging at HASYLAB beamline G3". Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 428 (2–3). Elsevier BV: 570–582. Bibcode:1999NIMPA.428..570W. doi:10.1016/s0168-9002(99)00144-8. ISSN 0168-9002. Pyzalla, A.; Wang, L.; Wild, E.; Wroblewski, T. (2001). "Changes in microstructure, texture and residual stresses on the surface of a rail resulting from friction and wear". Wear. 251 (1–12). Elsevier BV: 901–907. doi:10.1016/s0043-1648(01)00748-7. ISSN 0043-1648. == Literature == Books (chronological order): Tanner, Brian: X-ray diffraction topography. Pergamon Press (1976).ISBN 0080196926. Authier, André and Lagomarsino, Stefano and Tanner, Brian K. (editors): X-Ray and Neutron Dynamical Diffraction – Theory and Applications. Plenum Press / Kluwer Academic Publishers (1996). ISBN 0-306-45501-3. Bowen, Keith and Tanner, Brian: High Resolution X-Ray Diffractometry and Topography. Taylor and Francis (1998). ISBN 0-85066-758-5. Authier, André (2003). Dynamical theory of X-ray diffraction. IUCr monographs on crystallography. Vol. 11 (2 ed.). Oxford University Press. ISBN 0-19-852892-2. Reviews Lang, A. R.: Techniques and interpretation in X-ray topography. In: Diffraction and Imaging Techniques in Materials Science (edited by Amelinckx S., Gevers R. and Van Landuyt J.) 2nd ed. rev. (1978), pp 623–714. Amsterdam: North Holland. Klapper, Helmut: X-ray topography of organic crystals. In: Crystals: Growth, Properties and Applications, vol. 13 (1991), pp 109–162. Berlin-Heidelberg: Springer. Lang, A. R.: Topography. In: International Tables for Crystallography, Vol. C (1992), Section 2.7, p. 113. Kluwer, Dordrecht. Tuomi, T: Synchrotron X-ray topography of electronic materials. Journal of Synchrotron Radiation (2002) 9, 174-178. Baruchel, J. and Härtwig, J. and Pernot-Rejmánková, P.: Present state and perspectives of synchrotron radiation diffraction imaging. Journal of Synchrotron Radiation (2002) 9, 107-114. Selected original articles (chronological order): X-ray topography Barrett, Charles S. (1931-08-15). "Laue Spots From Perfect, Imperfect, and Oscillating Crystals". Physical Review. 38 (4). American Physical Society (APS): 832–833. Bibcode:1931PhRv...38..832B. doi:10.1103/physrev.38.832. ISSN 0031-899X. Berg, Wolfgang (1931). "Über eine röntgenographische Methode zur Untersuchung von Gitterstörungen an Kristallen". Die Naturwissenschaften (in German). 19 (19). Springer Science and Business Media LLC: 391–396. Bibcode:1931NW.....19..391B. doi:10.1007/bf01522358. ISSN 0028-1042. S2CID 36422396. Borrmann, G. (1941). "Über Extinktionsdiagramme der Röntgenstrahlen von Quarz". Physikalische Zeitschrift. 42: 157-162. Guinier, A.; Tennevin, J. (1949-06-02). "Sur deux variantes de la méthode de Laue et leurs applications". Acta Crystallographica. 2 (3). International Union of Crystallography (IUCr): 133–138. Bibcode:1949AcCry...2..133G. doi:10.1107/s0365110x49000370. ISSN 0365-110X. Bond, W.L.; Andrus, J. (1952). "Structural imperfections in quartz crystals". American Mineralogist. 37: 622–632. Lang, A.R (1957). "A method for the examination of crystal sections using penetrating characteristic X radiation". Acta Metallurgica. 5 (7). Elsevier BV: 358–364. doi:10.1016/0001-6160(57)90002-0. ISSN 0001-6160. Lang, A. R. (1957-12-01). "Abstracts of Papers: Point-by.point X-ray diffraction studies of imperfections in melt-grown crystals". Acta Crystallographica. 10 (12). 
International Union of Crystallography (IUCr): 839. doi:10.1107/s0365110x57002649. ISSN 0365-110X. Lang, A. R. (1958). "Direct Observation of Individual Dislocations by X-Ray Diffraction". Journal of Applied Physics. 29 (3). AIP Publishing: 597–598. Bibcode:1958JAP....29..597L. doi:10.1063/1.1723234. ISSN 0021-8979. Lang, A. R. (1959-03-10). "The projection topograph: a new method in X-ray diffraction microradiography". Acta Crystallographica. 12 (3). International Union of Crystallography (IUCr): 249–250. Bibcode:1959AcCry..12..249L. doi:10.1107/s0365110x59000706. ISSN 0365-110X. T. Tuomi, K. Naukkarinen, E. Laurila, P. Rabe: Rapid high resolution X-ray topography with synchrotron radiation. Acta Polytechnica Scandinavica, Ph. Incl. Nucleonics Series No. 100, (1973), 1-8. Tuomi, T.; Naukkarinen, K.; Rabe, P. (1974-09-16). "Use of synchrotron radiation in X-ray diffraction topography". Physica Status Solidi A. 25 (1). Wiley: 93–106. Bibcode:1974PSSAR..25...93T. doi:10.1002/pssa.2210250106. ISSN 0031-8965. Klapper, H. (1975-04-01). "The influence of elastic anisotropy on the X-ray topographic image width of pure screw dislocations". Journal of Applied Crystallography. 8 (2). International Union of Crystallography (IUCr): 204. Bibcode:1975JApCr...8..204K. doi:10.1107/s0021889875010163. ISSN 0021-8898. Hart, M. (1975-08-01). "Synchrotron radiation – its application to high-speed, high-resolution X-ray diffraction topography". Journal of Applied Crystallography. 8 (4). International Union of Crystallography (IUCr): 436–444. Bibcode:1975JApCr...8..436H. doi:10.1107/s002188987501093x. ISSN 0021-8898. Klapper, H. (1976-08-01). "The influence of elastic anisotropy on the X-ray topographic image width of pure screw dislocations". Journal of Applied Crystallography. 9 (4). International Union of Crystallography (IUCr): 310–317. Bibcode:1976JApCr...9..310K. doi:10.1107/s0021889876011400. ISSN 0021-8898. Tanner, B. K.; Midgley, D.; Safa, M. (1977-08-01). "Dislocation contrast in X-ray synchrotron topographs". Journal of Applied Crystallography. 10 (4). International Union of Crystallography (IUCr): 281–286. Bibcode:1977JApCr..10..281T. doi:10.1107/s0021889877013491. ISSN 0021-8898. Fisher, G. R.; Barnes, P.; Kelly, J. F. (1993-10-01). "Dislocation contrast in white-radiation synchrotron topography of silicon carbide". Journal of Applied Crystallography. 26 (5). International Union of Crystallography (IUCr): 677–682. Bibcode:1993JApCr..26..677F. doi:10.1107/s0021889893004017. ISSN 0021-8898. Lang, A R (1993-04-14). "The early days of high-resolution X-ray topography". Journal of Physics D: Applied Physics. 26 (4A). IOP Publishing: A1 – A8. Bibcode:1993JPhD...26....1L. doi:10.1088/0022-3727/26/4a/001. ISSN 0022-3727. Zontone, F.; Mancini, L.; Barrett, R.; Baruchel, J.; Härtwig, J.; Epelboin, Y. (1996-07-01). "New Features of Dislocation Images in Third-Generation Synchrotron Radiation Topographs". Journal of Synchrotron Radiation. 3 (4). International Union of Crystallography (IUCr): 173–184. Bibcode:1996JSynR...3..173Z. doi:10.1107/s0909049596002269. ISSN 0909-0495. PMID 16702676. Baruchel, José; Cloetens, Peter; Härtwig, Jürgen; Ludwig, Wolfgang; Mancini, Lucia; Pernot, Petra; Schlenker, Michel (2000-05-01). "Phase imaging using highly coherent X-rays: radiography, tomography, diffraction topography". Journal of Synchrotron Radiation. 7 (3). International Union of Crystallography (IUCr): 196–201. doi:10.1107/s0909049500002995. ISSN 0909-0495. PMID 16609195. 
Special applications: Kelly, J.F.; Barnes, P.; Fisher, G.R. (1995). "The use of synchrotron edge topography to study polytype nearest neighbour relationships in SiC" (PDF). Radiation Physics and Chemistry. 45 (3). Elsevier BV: 509–522. Bibcode:1995RaPC...45..509K. doi:10.1016/0969-806x(94)00101-o. ISSN 0969-806X. Wieteska, K.; Wierzchowski, W.; Graeff, W.; Turos, A.; Grötzschel, R. (2000-09-01). "Characterization of implanted semiconductors by means of white-beam and plane-wave synchrotron topography". Journal of Synchrotron Radiation. 7 (5). International Union of Crystallography (IUCr): 318–325. doi:10.1107/s0909049500009420. ISSN 0909-0495. PMID 16609215. Altin, D.; Härtwig, J.; Köhler, R.; Ludwig, W.; Ohler, M.; Klein, H. (2002-08-31). "X-ray diffraction topography using a diffractometer with a bendable monochromator at a synchrotron radiation source". Journal of Synchrotron Radiation. 9 (5). International Union of Crystallography (IUCr): 282–286. doi:10.1107/s0909049502010294. ISSN 0909-0495. PMID 12200570. Instrumentation and beamlines for topography: Espeso, José I.; Cloetens, Peter; Baruchel, José; Härtwig, Jürgen; Mairs, Trevor; Biasci, Jean Claude; Marot, Gérard; Salomé-Pateyron, Murielle; Schlenker, Michel (1998-09-01). "Conserving the Coherence and Uniformity of Third-Generation Synchrotron Radiation Beams: the Case of ID19, a 'Long' Beamline at the ESRF". Journal of Synchrotron Radiation. 5 (5). International Union of Crystallography (IUCr): 1243–1249. Bibcode:1998JSynR...5.1243E. doi:10.1107/s0909049598002271. ISSN 0909-0495. PMID 16687829. == See also == Crystallography Dynamical theory of x-ray diffraction High energy X-rays X-ray diffraction X-ray imaging X-ray optics == References == == External links == Topography: Introductions and tutorials on the web "A Brief History of X-Ray Diffraction Topography" by J.F. Kelly, University of London (GB) "X-ray topography - practice guide" by D. Black, G. Long, NIST (USA) "X-ray topography": Introduction from PTB, Braunschweig (Germany) Chapter from script on "defects in crystals" by Prof. H. Foell, University of Kiel (Germany) "Characterization of crystalline materials by X-Ray topography" - Introduction by Y. Epelboin, Paris-Jussieu (France) "X-ray diffraction imaging (X-ray topography) - An Overview about Techniques and Applications" by J. Haertwig, ESRF, Grenoble (France) The same, slightly different format Topography beamlines at synchrotrons: Imaging Beamline (ID19) of the European Synchrotron ESRF, Grenoble (France) TOPO Beamline at ANKA, Karlsruhe (Germany) Beamline F1 of HASYLAB at DESY, Hamburg (Germany) National Synchrotron Light Source (NSLS), Kansas (USA) SPring-8, near Himeji (Japan)
Wikipedia/Diffraction_topography
A lens is a transmissive optical device that focuses or disperses a light beam by means of refraction. A simple lens consists of a single piece of transparent material, while a compound lens consists of several simple lenses (elements), usually arranged along a common axis. Lenses are made from materials such as glass or plastic and are ground, polished, or molded to the required shape. A lens can focus light to form an image, unlike a prism, which refracts light without focusing. Devices that similarly focus or disperse waves and radiation other than visible light are also called "lenses", such as microwave lenses, electron lenses, acoustic lenses, or explosive lenses. Lenses are used in various imaging devices such as telescopes, binoculars, and cameras. They are also used as visual aids in glasses to correct defects of vision such as myopia and hypermetropia. == History == The word lens comes from lēns, the Latin name of the lentil (a seed of a lentil plant), because a double-convex lens is lentil-shaped. The lentil also gives its name to a geometric figure. Some scholars argue that the archeological evidence indicates that there was widespread use of lenses in antiquity, spanning several millennia. The so-called Nimrud lens is a rock crystal artifact dated to the 7th century BCE which may or may not have been used as a magnifying glass, or a burning glass. Others have suggested that certain Egyptian hieroglyphs depict "simple glass meniscal lenses". The oldest certain reference to the use of lenses is from Aristophanes' play The Clouds (424 BCE) mentioning a burning-glass. Pliny the Elder (1st century) confirms that burning-glasses were known in the Roman period. Pliny also has the earliest known reference to the use of a corrective lens when he mentions that Nero was said to watch the gladiatorial games using an emerald (presumably concave to correct for nearsightedness, though the reference is vague). Both Pliny and Seneca the Younger (3 BC–65 AD) described the magnifying effect of a glass globe filled with water. Ptolemy (2nd century) wrote a book on Optics, which however survives only in the Latin translation of an incomplete and very poor Arabic translation. The book was, however, received by medieval scholars in the Islamic world, and commented upon by Ibn Sahl (10th century), who was in turn improved upon by Alhazen (Book of Optics, 11th century). The Arabic translation of Ptolemy's Optics became available in Latin translation in the 12th century (Eugenius of Palermo 1154). Between the 11th and 13th century "reading stones" were invented. These were primitive plano-convex lenses initially made by cutting a glass sphere in half. The medieval (11th or 12th century) rock crystal Visby lenses may or may not have been intended for use as burning glasses. Spectacles were invented as an improvement of the "reading stones" of the high medieval period in Northern Italy in the second half of the 13th century. This was the start of the optical industry of grinding and polishing lenses for spectacles, first in Venice and Florence in the late 13th century, and later in the spectacle-making centres in both the Netherlands and Germany. Spectacle makers created improved types of lenses for the correction of vision based more on empirical knowledge gained from observing the effects of the lenses (probably without the knowledge of the rudimentary optical theory of the day). 
The practical development and experimentation with lenses led to the invention of the compound optical microscope around 1595, and the refracting telescope in 1608, both of which appeared in the spectacle-making centres in the Netherlands. With the invention of the telescope and microscope there was a great deal of experimentation with lens shapes in the 17th and early 18th centuries by those trying to correct chromatic errors seen in lenses. Opticians tried to construct lenses of varying forms of curvature, wrongly assuming errors arose from defects in the spherical figure of their surfaces. Optical theory on refraction and experimentation was showing no single-element lens could bring all colours to a focus. This led to the invention of the compound achromatic lens by Chester Moore Hall in England in 1733, an invention also claimed by fellow Englishman John Dollond in a 1758 patent. Developments in transatlantic commerce were the impetus for the construction of modern lighthouses in the 18th century, which utilize a combination of elevated sightlines, lighting sources, and lenses to provide navigational aid overseas. With maximal distance of visibility needed in lighthouses, conventional convex lenses would need to be significantly sized which would negatively affect the development of lighthouses in terms of cost, design, and implementation. Fresnel lens were developed that considered these constraints by featuring less material through their concentric annular sectioning. They were first fully implemented into a lighthouse in 1823. == Construction of simple lenses == Most lenses are spherical lenses: their two surfaces are parts of the surfaces of spheres. Each surface can be convex (bulging outwards from the lens), concave (depressed into the lens), or planar (flat). The line joining the centres of the spheres making up the lens surfaces is called the axis of the lens. Typically the lens axis passes through the physical centre of the lens, because of the way they are manufactured. Lenses may be cut or ground after manufacturing to give them a different shape or size. The lens axis may then not pass through the physical centre of the lens. Toric or sphero-cylindrical lenses have surfaces with two different radii of curvature in two orthogonal planes. They have a different focal power in different meridians. This forms an astigmatic lens. An example is eyeglass lenses that are used to correct astigmatism in someone's eye. === Types of simple lenses === Lenses are classified by the curvature of the two optical surfaces. A lens is biconvex (or double convex, or just convex) if both surfaces are convex. If both surfaces have the same radius of curvature, the lens is equiconvex. A lens with two concave surfaces is biconcave (or just concave). If one of the surfaces is flat, the lens is plano-convex or plano-concave depending on the curvature of the other surface. A lens with one convex and one concave side is convex-concave or meniscus. Convex-concave lenses are most commonly used in corrective lenses, since the shape minimizes some aberrations. For a biconvex or plano-convex lens in a lower-index medium, a collimated beam of light passing through the lens converges to a spot (a focus) behind the lens. In this case, the lens is called a positive or converging lens. For a thin lens in air, the distance from the lens to the spot is the focal length of the lens, which is commonly represented by f in diagrams and equations. 
An extended hemispherical lens is a special type of plano-convex lens, in which the lens's curved surface is a full hemisphere and the lens is much thicker than the radius of curvature. Another extreme case of a thick convex lens is a ball lens, whose shape is completely round. When used in novelty photography it is often called a "lensball". A ball-shaped lens has the advantage of being omnidirectional, but for most optical glass types, its focal point lies close to the ball's surface. Because of the ball's curvature extremes compared to the lens size, optical aberration is much worse than thin lenses, with the notable exception of chromatic aberration. For a biconcave or plano-concave lens in a lower-index medium, a collimated beam of light passing through the lens is diverged (spread); the lens is thus called a negative or diverging lens. The beam, after passing through the lens, appears to emanate from a particular point on the axis in front of the lens. For a thin lens in air, the distance from this point to the lens is the focal length, though it is negative with respect to the focal length of a converging lens. The behavior reverses when a lens is placed in a medium with higher refractive index than the material of the lens. In this case a biconvex or plano-convex lens diverges light, and a biconcave or plano-concave one converges it. Convex-concave (meniscus) lenses can be either positive or negative, depending on the relative curvatures of the two surfaces. A negative meniscus lens has a steeper concave surface (with a shorter radius than the convex surface) and is thinner at the centre than at the periphery. Conversely, a positive meniscus lens has a steeper convex surface (with a shorter radius than the concave surface) and is thicker at the centre than at the periphery. An ideal thin lens with two surfaces of equal curvature (also equal in the sign) would have zero optical power (as its focal length becomes infinity as shown in the lensmaker's equation), meaning that it would neither converge nor diverge light. All real lenses have a nonzero thickness, however, which makes a real lens with identical curved surfaces slightly positive. To obtain exactly zero optical power, a meniscus lens must have slightly unequal curvatures to account for the effect of the lens' thickness. === For a spherical surface === For a single refraction for a circular boundary, the relation between object and its image in the paraxial approximation is given by n 1 u + n 2 v = n 2 − n 1 R {\displaystyle {\frac {n_{1}}{u}}+{\frac {n_{2}}{v}}={\frac {n_{2}-n_{1}}{R}}} where R is the radius of the spherical surface, n2 is the refractive index of the material of the surface, n1 is the refractive index of medium (the medium other than the spherical surface material), u {\textstyle u} is the on-axis (on the optical axis) object distance from the line perpendicular to the axis toward the refraction point on the surface (which height is h), and v {\textstyle v} is the on-axis image distance from the line. Due to paraxial approximation where the line of h is close to the vertex of the spherical surface meeting the optical axis on the left, u {\textstyle u} and v {\textstyle v} are also considered distances with respect to the vertex. Moving v {\textstyle v} toward the right infinity leads to the first or object focal length f 0 {\textstyle f_{0}} for the spherical surface. Similarly, u {\textstyle u} toward the left infinity leads to the second or image focal length f i {\displaystyle f_{i}} . 
f 0 = n 1 n 2 − n 1 R , f i = n 2 n 2 − n 1 R {\displaystyle {\begin{aligned}f_{0}&={\frac {n_{1}}{n_{2}-n_{1}}}R,\\f_{i}&={\frac {n_{2}}{n_{2}-n_{1}}}R\end{aligned}}} Applying this equation on the two spherical surfaces of a lens and approximating the lens thickness to zero (so a thin lens) leads to the lensmaker's formula. ==== Derivation ==== Applying Snell's law on the spherical surface, n 1 sin ⁡ i = n 2 sin ⁡ r . {\displaystyle n_{1}\sin i=n_{2}\sin r\,.} Also in the diagram, tan ⁡ ( i − θ ) = h u tan ⁡ ( θ − r ) = h v sin ⁡ θ = h R {\displaystyle {\begin{aligned}\tan(i-\theta )&={\frac {h}{u}}\\\tan(\theta -r)&={\frac {h}{v}}\\\sin \theta &={\frac {h}{R}}\end{aligned}}} , and using small angle approximation (paraxial approximation) and eliminating i, r, and θ, n 2 v + n 1 u = n 2 − n 1 R . {\displaystyle {\frac {n_{2}}{v}}+{\frac {n_{1}}{u}}={\frac {n_{2}-n_{1}}{R}}\,.} === Lensmaker's equation === The (effective) focal length f {\displaystyle f} of a spherical lens in air or vacuum for paraxial rays can be calculated from the lensmaker's equation: 1 f = ( n − 1 ) [ 1 R 1 − 1 R 2 + ( n − 1 ) d n R 1 R 2 ] , {\displaystyle {\frac {1}{\ f\ }}=\left(n-1\right)\left[\ {\frac {1}{\ R_{1}\ }}-{\frac {1}{\ R_{2}\ }}+{\frac {\ \left(n-1\right)\ d~}{\ n\ R_{1}\ R_{2}\ }}\ \right]\ ,} where n {\textstyle \ n\ } is the refractive index of the lens material; R 1 {\textstyle \ R_{1}\ } is the (signed, see below) radius of curvature of the lens surface closer to the light source; R 2 {\textstyle \ R_{2}\ } is the radius of curvature of the lens surface farther from the light source; and d {\textstyle \ d\ } is the thickness of the lens (the distance along the lens axis between the two surface vertices). The focal length f {\textstyle \ f\ } is with respect to the principal planes of the lens, and the locations of the principal planes h 1 {\textstyle \ h_{1}\ } and h 2 {\textstyle \ h_{2}\ } with respect to the respective lens vertices are given by the following formulas, where it is a positive value if it is right to the respective vertex. h 1 = − ( n − 1 ) f d n R 2 {\displaystyle \ h_{1}=-\ {\frac {\ \left(n-1\right)f\ d~}{\ n\ R_{2}\ }}\ } h 2 = − ( n − 1 ) f d n R 1 {\displaystyle \ h_{2}=-\ {\frac {\ \left(n-1\right)f\ d~}{\ n\ R_{1}\ }}\ } The focal length f {\displaystyle \ f\ } is positive for converging lenses, and negative for diverging lenses. The reciprocal of the focal length, 1 f , {\textstyle \ {\tfrac {1}{\ f\ }}\ ,} is the optical power of the lens. If the focal length is in metres, this gives the optical power in dioptres (reciprocal metres). Lenses have the same focal length when light travels from the back to the front as when light goes from the front to the back. Other properties of the lens, such as the aberrations are not the same in both directions. ==== Sign convention for radii of curvature R1 and R2 ==== The signs of the lens' radii of curvature indicate whether the corresponding surfaces are convex or concave. The sign convention used to represent this varies, but in this article a positive R indicates a surface's center of curvature is further along in the direction of the ray travel (right, in the accompanying diagrams), while negative R means that rays reaching the surface have already passed the center of curvature. Consequently, for external lens surfaces as diagrammed above, R1 > 0 and R2 < 0 indicate convex surfaces (used to converge light in a positive lens), while R1 < 0 and R2 > 0 indicate concave surfaces. 
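As a concrete illustration of the lensmaker's equation and the principal-plane formulas, the short Python sketch below evaluates them for an ordinary biconvex lens. The numerical inputs (n = 1.5, R1 = +100 mm, R2 = −100 mm, d = 10 mm) are arbitrary example values chosen to respect the sign convention just described.

```python
def lensmaker(n, R1, R2, d):
    """Effective focal length and principal-plane positions of a thick
    spherical lens in air, using this article's sign convention
    (R > 0 when the centre of curvature lies to the right of the surface)."""
    inv_f = (n - 1.0) * (1.0 / R1 - 1.0 / R2 + (n - 1.0) * d / (n * R1 * R2))
    f = 1.0 / inv_f
    h1 = -(n - 1.0) * f * d / (n * R2)   # front principal plane, from the front vertex
    h2 = -(n - 1.0) * f * d / (n * R1)   # back principal plane, from the back vertex
    return f, h1, h2

# Biconvex example: R1 > 0, R2 < 0, all lengths in millimetres.
f, h1, h2 = lensmaker(n=1.5, R1=100.0, R2=-100.0, d=10.0)
print(f"f  = {f:.3f} mm")                 # ~101.695 mm (converging, f > 0)
print(f"h1 = {h1:.3f} mm, h2 = {h2:.3f} mm")
# Distance from the back vertex to the rear focal point (back focal distance):
print(f"BFD = {h2 + f:.3f} mm")           # ~98.305 mm
```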
The reciprocal of the radius of curvature is called the curvature. A flat surface has zero curvature, and its radius of curvature is infinite. ==== Sign convention for other parameters ==== This convention is used in this article. Other conventions such as the Cartesian sign convention change the form of the equations. ==== Thin lens approximation ==== If d is small compared to R1 and R2 then the thin lens approximation can be made. For a lens in air, f  is then given by 1 f ≈ ( n − 1 ) [ 1 R 1 − 1 R 2 ] . {\displaystyle \ {\frac {1}{\ f\ }}\approx \left(n-1\right)\left[\ {\frac {1}{\ R_{1}\ }}-{\frac {1}{\ R_{2}\ }}\ \right]~.} ==== Derivation ==== The spherical thin lens equation in paraxial approximation is derived here with respect to the right figure. The 1st spherical lens surface (which meets the optical axis at V 1 {\textstyle \ V_{1}\ } as its vertex) images an on-axis object point O to the virtual image I, which can be described by the following equation, n 1 u + n 2 v ′ = n 2 − n 1 R 1 . {\displaystyle \ {\frac {\ n_{1}\ }{\ u\ }}+{\frac {\ n_{2}\ }{\ v'\ }}={\frac {\ n_{2}-n_{1}\ }{\ R_{1}\ }}~.} For the imaging by second lens surface, by taking the above sign convention, u ′ = − v ′ + d {\textstyle \ u'=-v'+d\ } and n 2 − v ′ + d + n 1 v = n 1 − n 2 R 2 . {\displaystyle \ {\frac {n_{2}}{\ -v'+d\ }}+{\frac {\ n_{1}\ }{\ v\ }}={\frac {\ n_{1}-n_{2}\ }{\ R_{2}\ }}~.} Adding these two equations yields n 1 u + n 1 v = ( n 2 − n 1 ) ( 1 R 1 − 1 R 2 ) + n 2 d ( v ′ − d ) v ′ . {\displaystyle \ {\frac {\ n_{1}\ }{u}}+{\frac {\ n_{1}\ }{v}}=\left(n_{2}-n_{1}\right)\left({\frac {1}{\ R_{1}\ }}-{\frac {1}{\ R_{2}\ }}\right)+{\frac {\ n_{2}\ d\ }{\ \left(\ v'-d\ \right)\ v'\ }}~.} For the thin lens approximation where d → 0 , {\displaystyle \ d\rightarrow 0\ ,} the 2nd term of the RHS (Right Hand Side) is gone, so n 1 u + n 1 v = ( n 2 − n 1 ) ( 1 R 1 − 1 R 2 ) . {\displaystyle \ {\frac {\ n_{1}\ }{u}}+{\frac {\ n_{1}\ }{v}}=\left(n_{2}-n_{1}\right)\left({\frac {1}{\ R_{1}\ }}-{\frac {1}{\ R_{2}\ }}\right)~.} The focal length f {\displaystyle \ f\ } of the thin lens is found by limiting u → − ∞ , {\displaystyle \ u\rightarrow -\infty \ ,} n 1 f = ( n 2 − n 1 ) ( 1 R 1 − 1 R 2 ) → 1 f = ( n 2 n 1 − 1 ) ( 1 R 1 − 1 R 2 ) . {\displaystyle \ {\frac {\ n_{1}\ }{\ f\ }}=\left(n_{2}-n_{1}\right)\left({\frac {1}{\ R_{1}\ }}-{\frac {1}{\ R_{2}\ }}\right)\rightarrow {\frac {1}{\ f\ }}=\left({\frac {\ n_{2}\ }{\ n_{1}\ }}-1\right)\left({\frac {1}{\ R_{1}\ }}-{\frac {1}{\ R_{2}\ }}\right)~.} So, the Gaussian thin lens equation is 1 u + 1 v = 1 f . {\displaystyle \ {\frac {1}{\ u\ }}+{\frac {1}{\ v\ }}={\frac {1}{\ f\ }}~.} For the thin lens in air or vacuum where n 1 = 1 {\textstyle \ n_{1}=1\ } can be assumed, f {\textstyle \ f\ } becomes 1 f = ( n − 1 ) ( 1 R 1 − 1 R 2 ) {\displaystyle \ {\frac {1}{\ f\ }}=\left(n-1\right)\left({\frac {1}{\ R_{1}\ }}-{\frac {1}{\ R_{2}\ }}\right)\ } where the subscript of 2 in n 2 {\textstyle \ n_{2}\ } is dropped. == Imaging properties == As mentioned above, a positive or converging lens in air focuses a collimated beam travelling along the lens axis to a spot (known as the focal point) at a distance f from the lens. Conversely, a point source of light placed at the focal point is converted into a collimated beam by the lens. These two cases are examples of image formation in lenses. In the former case, an object at an infinite distance (as represented by a collimated beam of waves) is focused to an image at the focal point of the lens. 
In the latter, an object at the focal length distance from the lens is imaged at infinity. The plane perpendicular to the lens axis situated at a distance f from the lens is called the focal plane. === Lens equation === For paraxial rays, if the distances from an object to a spherical thin lens (a lens of negligible thickness) and from the lens to the image are S1 and S2 respectively, the distances are related by the (Gaussian) thin lens formula: 1 f = 1 S 1 + 1 S 2 . {\displaystyle {1 \over f}={1 \over S_{1}}+{1 \over S_{2}}\,.} The right figure shows how the image of an object point can be found by using three rays; the first ray parallelly incident on the lens and refracted toward the second focal point of it, the second ray crossing the optical center of the lens (so its direction does not change), and the third ray toward the first focal point and refracted to the direction parallel to the optical axis. This is a simple ray tracing method easily used. Two rays among the three are sufficient to locate the image point. By moving the object along the optical axis, it is shown that the second ray determines the image size while other rays help to locate the image location. The lens equation can also be put into the "Newtonian" form: f 2 = x 1 x 2 , {\displaystyle f^{2}=x_{1}x_{2}\,,} where x 1 = S 1 − f {\displaystyle x_{1}=S_{1}-f} and x 2 = S 2 − f . {\displaystyle x_{2}=S_{2}-f\,.} x 1 {\textstyle x_{1}} is positive if it is left to the front focal point F 1 {\textstyle F_{1}} , and x 2 {\textstyle x_{2}} is positive if it is right to the rear focal point F 2 {\textstyle F_{2}} . Because f 2 {\textstyle f^{2}} is positive, an object point and the corresponding imaging point made by a lens are always in opposite sides with respect to their respective focal points. ( x 1 {\textstyle x_{1}} and x 2 {\textstyle x_{2}} are either positive or negative.) This Newtonian form of the lens equation can be derived by using a similarity between triangles P1PO1F1 and L3L2F1 and another similarity between triangles L1L2F2 and P2P02F2 in the right figure. The similarities give the following equations and combining these results gives the Newtonian form of the lens equation. y 1 x 1 = | y 2 | f y 1 f = | y 2 | x 2 {\displaystyle {\begin{array}{lcr}{\frac {y_{1}}{x_{1}}}={\frac {\left\vert y_{2}\right\vert }{f}}\\{\frac {y_{1}}{f}}={\frac {\left\vert y_{2}\right\vert }{x_{2}}}\end{array}}} The above equations also hold for thick lenses (including a compound lens made by multiple lenses, that can be treated as a thick lens) in air or vacuum (which refractive index can be treated as 1) if S 1 {\textstyle S_{1}} , S 2 {\textstyle S_{2}} , and f {\textstyle f} are with respect to the principal planes of the lens ( f {\textstyle f} is the effective focal length in this case). This is because of triangle similarities like the thin lens case above; similarity between triangles P1PO1F1 and L3H1F1 and another similarity between triangles L1'H2F2 and P2P02F2 in the right figure. If distances S1 or S2 pass through a medium other than air or vacuum, then a more complicated analysis is required. If an object is placed at a distance S1 > f from a positive lens of focal length f, we will find an image at a distance S2 according to this formula. If a screen is placed at a distance S2 on the opposite side of the lens, an image is formed on it. This sort of image, which can be projected onto a screen or image sensor, is known as a real image. 
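A short numerical check makes the equivalence of the Gaussian and Newtonian forms explicit; the focal length and object distance used below are arbitrary example values.

```python
def image_distance(f, S1):
    """Gaussian thin-lens equation, 1/f = 1/S1 + 1/S2, solved for S2."""
    return 1.0 / (1.0 / f - 1.0 / S1)

f, S1 = 50.0, 200.0            # focal length and object distance in mm
S2 = image_distance(f, S1)     # 66.667 mm: a real image behind the lens

# Newtonian form: x1 * x2 = f^2, with x1 = S1 - f and x2 = S2 - f.
x1, x2 = S1 - f, S2 - f
print(f"S2 = {S2:.3f} mm")
print(f"x1*x2 = {x1 * x2:.1f}, f^2 = {f ** 2:.1f}")   # both 2500.0
```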
This is the principle of the camera, and also of the human eye, in which the retina serves as the image sensor. The focusing adjustment of a camera adjusts S2, as using an image distance different from that required by this formula produces a defocused (fuzzy) image for an object at a distance of S1 from the camera. Put another way, modifying S2 causes objects at a different S1 to come into perfect focus. In some cases, S2 is negative, indicating that the image is formed on the opposite side of the lens from where those rays are being considered. Since the diverging light rays emanating from the lens never come into focus, and those rays are not physically present at the point where they appear to form an image, this is called a virtual image. Unlike real images, a virtual image cannot be projected on a screen, but appears to an observer looking through the lens as if it were a real object at the location of that virtual image. Likewise, it appears to a subsequent lens as if it were an object at that location, so that second lens could again focus that light into a real image, S1 then being measured from the virtual image location behind the first lens to the second lens. This is exactly what the eye does when looking through a magnifying glass. The magnifying glass creates a (magnified) virtual image behind the magnifying glass, but those rays are then re-imaged by the lens of the eye to create a real image on the retina. Using a positive lens of focal length f, a virtual image results when S1 < f, the lens thus being used as a magnifying glass (rather than if S1 ≫ f as for a camera). Using a negative lens (f < 0) with a real object (S1 > 0) can only produce a virtual image (S2 < 0), according to the above formula. It is also possible for the object distance S1 to be negative, in which case the lens sees a so-called virtual object. This happens when the lens is inserted into a converging beam (being focused by a previous lens) before the location of its real image. In that case even a negative lens can project a real image, as is done by a Barlow lens. For a given lens with the focal length f, the minimum distance between an object and the real image is 4f (S1 = S2 = 2f). This is derived by letting L = S1 + S2, expressing S2 in terms of S1 by the lens equation (or expressing S1 in terms of S2), and equating the derivative of L with respect to S1 (or S2) to zero. (Note that L has no limit in increasing so its extremum is only the minimum, at which the derivate of L is zero.) === Magnification === The linear magnification of an imaging system using a single lens is given by M = − S 2 S 1 = f f − S 1 = − f x 1 {\displaystyle M=-{\frac {S_{2}}{S_{1}}}={\frac {f}{f-S_{1}}}\ =-{\frac {f}{x_{1}}}} where M is the magnification factor defined as the ratio of the size of an image compared to the size of the object. The sign convention here dictates that if M is negative, as it is for real images, the image is upside-down with respect to the object. For virtual images M is positive, so the image is upright. This magnification formula provides two easy ways to distinguish converging (f > 0) and diverging (f < 0) lenses: For an object very close to the lens (0 < S1 < |f|), a converging lens would form a magnified (bigger) virtual image, whereas a diverging lens would form a demagnified (smaller) image; For an object very far from the lens (S1 > |f| > 0), a converging lens would form an inverted image, whereas a diverging lens would form an upright image. 
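These two rules follow directly from M = f/(f − S1); the sketch below evaluates the four combinations, using arbitrary example values of f and S1.

```python
def magnification(f, S1):
    """Linear magnification of a single thin lens, M = f / (f - S1)."""
    return f / (f - S1)

cases = [
    ("converging, close object (S1 < |f|)",   +50.0,  25.0),
    ("diverging, close object (S1 < |f|)",    -50.0,  25.0),
    ("converging, distant object (S1 > |f|)", +50.0, 200.0),
    ("diverging, distant object (S1 > |f|)",  -50.0, 200.0),
]

for label, f, S1 in cases:
    M = magnification(f, S1)
    orientation = "upright" if M > 0 else "inverted"
    print(f"{label}: M = {M:+.2f} ({orientation})")
# Prints M = +2.00, +0.67, -0.33, +0.20, matching the two rules above.
```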
Linear magnification M is not always the most useful measure of magnifying power. For instance, when characterizing a visual telescope or binoculars that produce only a virtual image, one would be more concerned with the angular magnification—which expresses how much larger a distant object appears through the telescope compared to the naked eye. In the case of a camera one would quote the plate scale, which compares the apparent (angular) size of a distant object to the size of the real image produced at the focus. The plate scale is the reciprocal of the focal length of the camera lens; lenses are categorized as long-focus lenses or wide-angle lenses according to their focal lengths. Using an inappropriate measurement of magnification can be formally correct but yield a meaningless number. For instance, using a magnifying glass of 5 cm focal length, held 20 cm from the eye and 5 cm from the object, produces a virtual image at infinity of infinite linear size: M = ∞. But the angular magnification is 5, meaning that the object appears 5 times larger to the eye than without the lens. When taking a picture of the moon using a camera with a 50 mm lens, one is not concerned with the linear magnification M ≈ −50 mm / 380000 km = −1.3×10−10. Rather, the plate scale of the camera is about 1°/mm, from which one can conclude that the 0.5 mm image on the film corresponds to an angular size of the moon seen from earth of about 0.5°. In the extreme case where an object is an infinite distance away, S1 = ∞, S2 = f and M = −f/∞ = 0, indicating that the object would be imaged to a single point in the focal plane. In fact, the diameter of the projected spot is not actually zero, since diffraction places a lower limit on the size of the point spread function. This is called the diffraction limit. === Table for thin lens imaging properties === == Aberrations == Lenses do not form perfect images, and always introduce some degree of distortion or aberration that makes the image an imperfect replica of the object. Careful design of the lens system for a particular application minimizes the aberration. Several types of aberration affect image quality, including spherical aberration, coma, and chromatic aberration. === Spherical aberration === Spherical aberration occurs because spherical surfaces are not the ideal shape for a lens, but are by far the simplest shape to which glass can be ground and polished, and so are often used. Spherical aberration causes beams parallel to, but laterally distant from, the lens axis to be focused in a slightly different place than beams close to the axis. This manifests itself as a blurring of the image. Spherical aberration can be minimised with normal lens shapes by carefully choosing the surface curvatures for a particular application. For instance, a plano-convex lens, which is used to focus a collimated beam, produces a sharper focal spot when used with the convex side towards the beam source. === Coma === Coma, or comatic aberration, derives its name from the comet-like appearance of the aberrated image. Coma occurs when an object off the optical axis of the lens is imaged, where rays pass through the lens at an angle to the axis θ. Rays that pass through the centre of a lens of focal length f are focused at a point with distance f tan θ from the axis. Rays passing through the outer margins of the lens are focused at different points, either further from the axis (positive coma) or closer to the axis (negative coma). 
In general, a bundle of parallel rays passing through the lens at a fixed distance from the centre of the lens are focused to a ring-shaped image in the focal plane, known as a comatic circle (see each circle of the image in the below figure). The sum of all these circles results in a V-shaped or comet-like flare. As with spherical aberration, coma can be minimised (and in some cases eliminated) by choosing the curvature of the two lens surfaces to match the application. Lenses in which both spherical aberration and coma are minimised are called bestform lenses. === Chromatic aberration === Chromatic aberration is caused by the dispersion of the lens material—the variation of its refractive index, n, with the wavelength of light. Since, from the formulae above, f is dependent upon n, it follows that light of different wavelengths is focused to different positions. Chromatic aberration of a lens is seen as fringes of colour around the image. It can be minimised by using an achromatic doublet (or achromat) in which two materials with differing dispersion are bonded together to form a single lens. This reduces the amount of chromatic aberration over a certain range of wavelengths, though it does not produce perfect correction. The use of achromats was an important step in the development of the optical microscope. An apochromat is a lens or lens system with even better chromatic aberration correction, combined with improved spherical aberration correction. Apochromats are much more expensive than achromats. Different lens materials may also be used to minimise chromatic aberration, such as specialised coatings or lenses made from the crystal fluorite. This naturally occurring substance has the highest known Abbe number, indicating that the material has low dispersion. === Other types of aberration === Other kinds of aberration include field curvature, barrel and pincushion distortion, and astigmatism. === Aperture diffraction === Even if a lens is designed to minimize or eliminate the aberrations described above, the image quality is still limited by the diffraction of light passing through the lens' finite aperture. A diffraction-limited lens is one in which aberrations have been reduced to the point where the image quality is primarily limited by diffraction under the design conditions. == Compound lenses == Simple lenses are subject to the optical aberrations discussed above. In many cases these aberrations can be compensated for to a great extent by using a combination of simple lenses with complementary aberrations. A compound lens is a collection of simple lenses of different shapes and made of materials of different refractive indices, arranged one after the other with a common axis. In a multiple-lens system, if the purpose of the system is to image an object, then the system design can be such that each lens treats the image made by the previous lens as an object, and produces the new image of it, so the imaging is cascaded through the lenses. As shown above, the Gaussian lens equation for a spherical lens is derived such that the 2nd surface of the lens images the image made by the 1st lens surface. For multi-lens imaging, 3rd lens surface (the front surface of the 2nd lens) can image the image made by the 2nd surface, and 4th surface (the back surface of the 2nd lens) can also image the image made by the 3rd surface. This imaging cascade by each lens surface justifies the imaging cascade by each lens. 
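The cascade argument can be checked numerically by applying the single-surface relation n1/u + n2/v = (n2 − n1)/R to each surface in turn, letting the image formed by one surface act as the object of the next. The sketch below does this for an illustrative biconvex lens (n = 1.5, R1 = +100 mm, R2 = −100 mm, d = 10 mm; arbitrary example values) with the object at infinity, and recovers the back focal distance that the lensmaker's equation predicts for the same lens.

```python
import math

def refract_at_surface(n_in, n_out, u, R):
    """Paraxial refraction at one spherical surface:
    n_in/u + n_out/v = (n_out - n_in)/R, solved for the image distance v.
    u > 0 for a real object to the left of the vertex; u = math.inf means
    incoming collimated light."""
    lhs = (n_out - n_in) / R - (0.0 if math.isinf(u) else n_in / u)
    return n_out / lhs

n, R1, R2, d = 1.5, 100.0, -100.0, 10.0       # biconvex lens in air, mm

# Surface 1: air -> glass, object at infinity.
v1 = refract_at_surface(1.0, n, math.inf, R1)
# Surface 2: glass -> air; the intermediate image is the object for this
# surface, at object distance u2 = d - v1 (negative here: a virtual object
# lying to the right of the second vertex).
u2 = d - v1
v2 = refract_at_surface(n, 1.0, u2, R2)

print(f"intermediate image: {v1:.3f} mm behind surface 1")   # 300.000 mm
print(f"final image:        {v2:.3f} mm behind surface 2")   # 98.305 mm (the BFD)
```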
For a two-lens system the object distances of each lens can be denoted as s o 1 {\textstyle s_{o1}} and s o 2 {\textstyle s_{o2}} , and the image distances as s i 1 {\textstyle s_{i1}} and s i 2 {\textstyle s_{i2}} . If the lenses are thin, each satisfies the thin lens formula 1 f j = 1 s o j + 1 s i j , {\displaystyle {\frac {1}{f_{j}}}={\frac {1}{s_{oj}}}+{\frac {1}{s_{ij}}},} If the distance between the two lenses is d {\displaystyle d} , then s o 2 = d − s i 1 {\textstyle s_{o2}=d-s_{i1}} . (The 2nd lens images the image of the first lens.) FFD (Front Focal Distance) is defined as the distance between the front (left) focal point of an optical system and its nearest optical surface vertex. If an object is located at the front focal point of the system, then its image made by the system is located infinitely far away to the right (i.e., light rays from the object are collimated after the system). To do this, the image of the 1st lens is located at the focal point of the 2nd lens, i.e., s i 1 = d − f 2 {\displaystyle s_{i1}=d-f_{2}} . So, the thin lens formula for the 1st lens becomes 1 f 1 = 1 F F D + 1 d − f 2 → F F D = f 1 ( d − f 2 ) d − ( f 1 + f 2 ) . {\displaystyle {\frac {1}{f_{1}}}={\frac {1}{FFD}}+{\frac {1}{d-f_{2}}}\rightarrow FFD={\frac {f_{1}(d-f_{2})}{d-(f_{1}+f_{2})}}.} BFD (Back Focal Distance) is similarly defined as the distance between the back (right) focal point of an optical system and its nearest optical surface vertex. If an object is located infinitely far away from the system (to the left), then its image made by the system is located at the back focal point. In this case, the 1st lens images the object at its focal point. So, the thin lens formula for the 2nd lens becomes 1 f 2 = 1 B F D + 1 d − f 1 → B F D = f 2 ( d − f 1 ) d − ( f 1 + f 2 ) . {\displaystyle {\frac {1}{f_{2}}}={\frac {1}{BFD}}+{\frac {1}{d-f_{1}}}\rightarrow BFD={\frac {f_{2}(d-f_{1})}{d-(f_{1}+f_{2})}}.} The simplest case is that of thin lenses placed in contact ( d = 0 {\displaystyle d=0} ). Then the combined focal length f of the lenses is given by 1 f = 1 f 1 + 1 f 2 . {\displaystyle {\frac {1}{f}}={\frac {1}{f_{1}}}+{\frac {1}{f_{2}}}\,.} Since 1/f is the power of a lens with focal length f, it can be seen that the powers of thin lenses in contact are additive. The general case of multiple thin lenses in contact is 1 f = ∑ k = 1 N 1 f k {\displaystyle {\frac {1}{f}}=\sum _{k=1}^{N}{\frac {1}{f_{k}}}} where N {\textstyle N} is the number of lenses. If two thin lenses are separated in air by some distance d, then the focal length for the combined system is given by 1 f = 1 f 1 + 1 f 2 − d f 1 f 2 . {\displaystyle {\frac {1}{f}}={\frac {1}{f_{1}}}+{\frac {1}{f_{2}}}-{\frac {d}{f_{1}f_{2}}}\,.} As d tends to zero, the focal length of the system tends to the value of f given for thin lenses in contact. It can be shown that the same formula works for thick lenses if d is taken as the distance between their principal planes. If the separation distance between two lenses is equal to the sum of their focal lengths (d = f1 + f2), then the FFD and BFD are infinite. This corresponds to a pair of lenses that transforms a parallel (collimated) beam into another collimated beam. This type of system is called an afocal system, since it produces no net convergence or divergence of the beam. Two lenses at this separation form the simplest type of optical telescope. Although the system does not alter the divergence of a collimated beam, it does alter the (transverse) width of the beam.
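The FFD, BFD and combined-focal-length formulas above can be evaluated directly; the two focal lengths and the separation in the sketch below are arbitrary example values, and the d = 0 case reproduces the additive-power rule for lenses in contact.

```python
def two_thin_lenses(f1, f2, d):
    """FFD, BFD and combined focal length of two thin lenses in air separated
    by d, per the formulas above. The denominators vanish for an afocal
    system (d = f1 + f2), where FFD and BFD become infinite."""
    denom = d - (f1 + f2)
    ffd = f1 * (d - f2) / denom
    bfd = f2 * (d - f1) / denom
    f = 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2))
    return ffd, bfd, f

# Hypothetical pair: f1 = 100 mm, f2 = 50 mm, separated by 75 mm.
ffd, bfd, f = two_thin_lenses(100.0, 50.0, 75.0)
print(f"FFD = {ffd:.2f} mm, BFD = {bfd:.2f} mm, f = {f:.2f} mm")
# FFD = -33.33 mm, BFD = 16.67 mm, f = 66.67 mm

# Lenses in contact (d = 0): the powers simply add.
_, _, f_contact = two_thin_lenses(100.0, 50.0, 0.0)
print(f"in contact: f = {f_contact:.2f} mm")      # 33.33 mm
```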
The magnification of such a telescope is given by M = − f 2 f 1 , {\displaystyle M=-{\frac {f_{2}}{f_{1}}}\,,} which is the ratio of the output beam width to the input beam width. Note the sign convention: a telescope with two convex lenses (f1 > 0, f2 > 0) produces a negative magnification, indicating an inverted image. A convex plus a concave lens (f1 > 0 > f2) produces a positive magnification and the image is upright. For further information on simple optical telescopes, see Refracting telescope § Refracting telescope designs. == Non spherical types == Cylindrical lenses have curvature along only one axis. They are used to focus light into a line, or to convert the elliptical light from a laser diode into a round beam. They are also used in motion picture anamorphic lenses. Aspheric lenses have at least one surface that is neither spherical nor cylindrical. The more complicated shapes allow such lenses to form images with less aberration than standard simple lenses, but they are more difficult and expensive to produce. These were formerly complex to make and often extremely expensive, but advances in technology have greatly reduced the manufacturing cost for such lenses. A Fresnel lens has its optical surface broken up into narrow rings, allowing the lens to be much thinner and lighter than conventional lenses. Durable Fresnel lenses can be molded from plastic and are inexpensive. Lenticular lenses are arrays of microlenses that are used in lenticular printing to make images that have an illusion of depth or that change when viewed from different angles. Bifocal lens has two or more, or a graduated, focal lengths ground into the lens. A gradient index lens has flat optical surfaces, but has a radial or axial variation in index of refraction that causes light passing through the lens to be focused. An axicon has a conical optical surface. It images a point source into a line along the optic axis, or transforms a laser beam into a ring. Diffractive optical elements can function as lenses. Superlenses are made from negative index metamaterials and claim to produce images at spatial resolutions exceeding the diffraction limit. The first superlenses were made in 2004 using such a metamaterial for microwaves. Improved versions have been made by other researchers. As of 2014 the superlens has not yet been demonstrated at visible or near-infrared wavelengths. A prototype flat ultrathin lens, with no curvature has been developed. == Uses == A single convex lens mounted in a frame with a handle or stand is a magnifying glass. Lenses are used as prosthetics for the correction of refractive errors such as myopia, hypermetropia, presbyopia, and astigmatism. (See corrective lens, contact lens, eyeglasses, intraocular lens.) Most lenses used for other purposes have strict axial symmetry; eyeglass lenses are only approximately symmetric. They are usually shaped to fit in a roughly oval, not circular, frame; the optical centres are placed over the eyeballs; their curvature may not be axially symmetric to correct for astigmatism. Sunglasses' lenses are designed to attenuate light; sunglass lenses that also correct visual impairments can be custom made. Other uses are in imaging systems such as monoculars, binoculars, telescopes, microscopes, cameras and projectors. Some of these instruments produce a virtual image when applied to the human eye; others produce a real image that can be captured on photographic film or an optical sensor, or can be viewed on a screen. 
In these devices lenses are sometimes paired up with curved mirrors to make a catadioptric system where the lens's spherical aberration corrects the opposite aberration in the mirror (such as Schmidt and meniscus correctors). Convex lenses produce an image of an object at infinity at their focus; if the sun is imaged, much of the visible and infrared light incident on the lens is concentrated into the small image. A large lens creates enough intensity to burn a flammable object at the focal point. Since ignition can be achieved even with a poorly made lens, lenses have been used as burning-glasses for at least 2400 years. A modern application is the use of relatively large lenses to concentrate solar energy on relatively small photovoltaic cells, harvesting more energy without the need to use larger and more expensive cells. Radio astronomy and radar systems often use dielectric lenses, commonly called a lens antenna to refract electromagnetic radiation into a collector antenna. Lenses can become scratched and abraded. Abrasion-resistant coatings are available to help control this. == See also == == Notes == == References == == Bibliography == Hecht, Eugene (1987). Optics (2nd ed.). Addison Wesley. ISBN 978-0-201-11609-0. Chapters 5 & 6. Hecht, Eugene (2002). Optics (4th ed.). Addison Wesley. ISBN 978-0-321-18878-6. Greivenkamp, John E. (2004). Field Guide to Geometrical Optics. SPIE Field Guides vol. FG01. SPIE. ISBN 978-0-8194-5294-8. == External links == A chapter from an online textbook on refraction and lenses Archived 17 December 2009 at the Wayback Machine Thin Spherical Lenses Archived 13 March 2020 at the Wayback Machine (.pdf) on Project PHYSNET Archived 14 May 2017 at the Wayback Machine. Lens article at digitalartform.com Article on Ancient Egyptian lenses Archived 25 May 2022 at the Wayback Machine FDTD Animation of Electromagnetic Propagation through Convex Lens (on- and off-axis) Video on YouTube The Use of Magnifying Lenses in the Classical World Archived 13 November 2017 at the Wayback Machine Henker, Otto (1911). "Lens" . Encyclopædia Britannica. Vol. 16 (11th ed.). pp. 421–427. (with 21 diagrams) === Simulations === Learning by Simulations Archived 21 January 2010 at the Wayback Machine – Concave and Convex Lenses OpticalRayTracer Archived 6 October 2010 at the Wayback Machine – Open source lens simulator (downloadable java) Animations demonstrating lens Archived 4 April 2012 at the Wayback Machine by QED
Wikipedia/Lensmaker's_equation
In optics, the corpuscular theory of light states that light is made up of small discrete particles called "corpuscles" (little particles) which travel in a straight line with a finite velocity and possess impetus. This notion was based on an alternate description of atomism of the time period. Isaac Newton laid the foundations for this theory through his work in optics. This early conception of the particle theory of light was an early forerunner to the modern understanding of the photon. This theory came to dominate the conceptions of light in the eighteenth century, displacing the previously prominent vibration theories, where light was viewed as "pressure" of the medium between the source and the receiver, first championed by René Descartes, and later in a more refined form by Christiaan Huygens. In part correct, being able to successfully explain refraction, reflection, rectilinear propagation and to a lesser extent diffraction, the theory would fall out of favor in the early nineteenth century, as the wave theory of light amassed new experimental evidence. The modern understanding of light is the concept of wave-particle duality. == Mechanical philosophy == In the early 17th century, natural philosophers began to develop new ways to understand nature gradually replacing Aristotelianism, which had been for centuries the dominant scientific theory, during the process known as the Scientific Revolution. Various European philosophers adopted what came to be known as mechanical philosophy sometime between around 1610 to 1650, which described the universe and its contents as a kind of large-scale mechanism, a philosophy that explained the universe is made with matter and motion. This mechanical philosophy was based on Epicureanism, and the work of Leucippus and his pupil Democritus and their atomism, in which everything in the universe, including a person's body, mind, soul and even thoughts, was made of atoms; very small particles of moving matter. During the early part of the 17th century, the atomistic portion of mechanical philosophy was largely developed by Gassendi, René Descartes and other atomists. == Pierre Gassendi's atomist matter theory == The core of Pierre Gassendi's philosophy is his atomist matter theory. In his work, Syntagma Philosophicum, ("Philosophical Treatise"), published posthumously in 1658, Gassendi tried to explain aspects of matter and natural phenomena of the world in terms of atoms and the void. He took Epicurean atomism and modified it to be compatible with Christian theology, by suggesting God created a finite number of indivisible and moving atoms, and has a continuing divine relationship to creation (of matter). Gassendi thought that atoms move in an empty space, classically known as the void, which contradicts the Aristotelian view that the universe is fully made of matter. Gassendi also suggests that information gathered by the human senses has a material form, especially in the case of vision. == Corpuscular theories == Corpuscular theories, or corpuscularianism, are similar to the theories of atomism, except that in atomism the atoms were supposed to be indivisible, whereas corpuscles could in principle be divided. Corpuscles are single, infinitesimally small, particles that have shape, size, color, and other physical properties that alter their functions and effects in phenomena in the mechanical and biological sciences. This later led to the modern idea that compounds have secondary properties different from the elements of those compounds. 
Gassendi asserts that corpuscles are particles that carry other substances and are of different types. These corpuscles are also emissions from various sources such as solar entities, animals, or plants. Robert Boyle was a strong proponent of corpuscularianism and used the theory to exemplify the differences between a vacuum and a plenum, by which he aimed to further support his mechanical philosophy and overall atomist theory. About a half-century after Gassendi, Isaac Newton used existing corpuscular theories to develop his particle theory of the physics of light. == Isaac Newton == Isaac Newton worked on optics throughout his research career, conducting various experiments and developing hypotheses to explain his results. He dismissed Descartes' theory of light because he rejected Descartes’ understanding of space, which derived from it. With the publication of Opticks in 1704, Newton for the first time took a clear position supporting a corpuscular interpretation, though it would fall on his followers to systemise the theory. In the 1718 edition of Opticks, Newton added several uncertain hypotheses about the nature of light, formulated as queries. In query (Qu.) 16, he wondered whether the way a quavering motion of a finger pressing against the bottom of the eye causes the sensation of circles of colour is similar to how light affects the retina, and whether the independent continuation of the induced sensation for about a second indicates a vibrating nature of the motions in the eye. In Qu. 17, Newton compared the vibrations to the waves propagating in concentric circles after a stone has been thrown in water, and to "the Vibrations or Tremors excited in the Air by percussion". He therefore proposed that light rays would similarly excite waves of vibrations in a reflecting or refracting medium, which in turn could overtake the rays of light and alternately accelerate and retard them. Newton then suggested in Qu. 18 and Qu. 19 that light propagates through vacuum via a very subtle "Aethereal Medium", just like heat was thought to spread. Although the previous hypotheses describe wave-like aspects of light, Newton still believed in particle-like properties. In Qu. 28, he asked: "Are not all Hypotheses erroneous in which Light is supposed to consist in Pression or Motion propagated through a fluid Medium." He did not believe the arguments explained the proposed new modifications of rays, and stressed how pression and motion would not propagate through fluid in straight lines beyond obstacles as light rays do. In Qu. 29, he wondered: "Are not the Rays of Light very small Bodies emitted from shining Substances? For such Bodies will pass through uniform Mediums in right Lines without bending into the Shadow, which is the Nature of the Rays of Light. They will also be capable of several Properties, and be able to conserve their Properties unchanged in passing through several Mediums, which is another Condition of the Rays of Light." He connected these properties to several effects of the interaction of light rays with matter and vacuum. Newton's corpuscular theory was an elaboration of his view of reality as interactions of material points through forces. Note Albert Einstein's description of Newton's conception of physical reality: [Newton's] physical reality is characterised by concepts of space, time, the material point and force (interaction between material points). Physical events are to be thought of as movements according to the law of material points in space. 
The material point is the only representative of reality in so far as it is subject to change. The concept of the material point is obviously due to observable bodies; one conceived of the material point on the analogy of movable bodies by omitting characteristics of extension, form, spatial locality, and all their 'inner' qualities, retaining only inertia, translation, and the additional concept of force. == Polarization == The fact that light could be polarized was for the first time qualitatively explained by Newton using the particle theory. Étienne-Louis Malus in 1810 created a mathematical particle theory of polarization. Jean-Baptiste Biot in 1812 showed that this theory explained all known phenomena of light polarization. At that time polarization was considered proof of the particle theory. Nowadays, polarisation is considered a property of waves and may only manifest in transverse waves. Longitudinal waves may not be polarised. == End of corpuscular theory == The dominance of Newtonian natural philosophy in the eighteenth century was one of the decisive factors ensuring the prevalence of the corpuscular theory of light. Newtonians maintained that the corpuscles of light were projectiles that travelled from the source to the receiver with a finite speed. In this description, the propagation of light is transportation of matter. However by the turn of the century, beginning with Thomas Young's double-slit experiment in 1801, more evidence in the form of novel experiments on diffraction, interference, and polarization showcased issues with the theory. A wave theory based on Young, Augustin-Jean Fresnel and François Arago's work would materialise in a novel wave theory of light. == Quantum mechanics == The notions of light as a particle resurfaced in the 20th century with the photoelectric effect. In 1905, Albert Einstein explained this effect by introducing the concept of light quanta or photons. Quantum particles are considered to have wave–particle duality. In quantum field theory, photons are explained as excitations of the electromagnetic field using second quantization. == See also == Corpuscularianism Speed of gravity Photon Philosophy of physics Opticks by Isaac Newton The Skeptical Chemist by Robert Boyle == References == == External links == Observing the quantum behavior of light in an undergraduate laboratory JJ Thorn et al.: Am. J. Phys. 72, 1210-1219 (2004) Opticks, or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light. Sir Isaack Newton. 1704. Project Gutenberg book released 23 August 2010. Pierre Gassendi. Fisher, Saul. 2009. Stanford Encyclopedia of Philosophy. Isaac Newton. Smith, George. 2007. Stanford Encyclopedia of Philosophy. Robert Boyle. MacIntosh, J.J. 2010. Stanford Encyclopedia of Philosophy. YouTube video. Physics - Newton's corpuscular theory of light - Science. elearnin. Uploaded 5 Jan 2013. Robert Hooke's Critique of Newton's Theory of Light and Colors (delivered 1672) Robert Hooke. Thomas Birch, The History of the Royal Society, vol. 3 (London: 1757), pp. 10–15. Newton Project, University of Sussex. Corpuscule or Wave. Arman Kashef. 2022. Xaporia: The Free and Independent Blog.
Wikipedia/Corpuscle_theory_of_light
A camera lens, photographic lens or photographic objective is an optical lens or assembly of lenses (compound lens) used in conjunction with a camera body and mechanism to make images of objects either on photographic film or on other media capable of storing an image chemically or electronically. There is no major difference in principle between a lens used for a still camera, a video camera, a telescope, a microscope, or other apparatus, but the details of design and construction are different. A lens might be permanently fixed to a camera, or it might be interchangeable with lenses of different focal lengths, apertures, and other properties. While in principle a simple convex lens will suffice, in practice a compound lens made up of a number of optical lens elements is required to correct (as much as possible) the many optical aberrations that arise. Some aberrations will be present in any lens system. It is the job of the lens designer to balance these and produce a design that is suitable for photographic use and possibly mass production. == Theory of operation == Typical rectilinear lenses can be thought of as "improved" pinhole "lenses". As shown, a pinhole "lens" is simply a small aperture that blocks most rays of light, ideally selecting one ray to the object for each point on the image sensor. Pinhole lenses have a few severe limitations: A pinhole camera with a large aperture is blurry because each pixel is essentially the shadow of the aperture stop, so its size is no smaller than the size of the aperture (third image). Here a pixel is the area of the detector exposed to light from a point on the object. Making the pinhole smaller improves resolution (up to a limit), but reduces the amount of light captured. At a certain point, shrinking the hole does not improve the resolution because of the diffraction limit. Beyond this limit, making the hole smaller makes the image blurrier as well as darker. Practical lenses can be thought of as an answer to the question: "how can a pinhole lens be modified to admit more light and give a smaller spot size?". A first step is to put a simple convex lens at the pinhole with a focal length equal to the distance to the film plane (assuming the camera will take pictures of distant objects). This allows the pinhole to be opened up significantly (fourth image) because a thin convex lens bends light rays in proportion to their distance to the axis of the lens, with rays striking the center of the lens passing straight through. The geometry is almost the same as with a simple pinhole lens, but rather than being illuminated by single rays of light, each image point is illuminated by a focused "pencil" of light rays. From the front of the camera, the small hole (the aperture), would be seen. The virtual image of the aperture as seen from the world is known as the lens's entrance pupil; ideally, all rays of light leaving a point on the object that enter the entrance pupil will be focused to the same point on the image sensor/film (provided the object point is in the field of view). If one were inside the camera, one would see the lens acting as a projector. The virtual image of the aperture from inside the camera is the lens's exit pupil. In this simple case, the aperture, entrance pupil, and exit pupil are all in the same place because the only optical element is in the plane of the aperture, but in general these three will be in different places. Practical photographic lenses include more lens elements. 
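The pinhole trade-off just described (too large a hole blurs geometrically, too small a hole blurs through diffraction) can be made concrete with a rough numerical sketch. This is a simplified model rather than anything from the text: the geometric blur is taken to be the hole diameter, the diffraction blur is approximated by the Airy-disk diameter 2.44·λ·f/d, and the wavelength and pinhole-to-image distance are assumed example values.

```python
# Rough sketch of the pinhole trade-off: geometric blur grows with the hole
# diameter d, diffraction blur shrinks with it. All numbers are assumed
# example values (green light, 50 mm from pinhole to film/sensor).
import math

WAVELENGTH = 550e-9   # m, green light (assumed)
DISTANCE = 0.050      # m, pinhole-to-image distance (assumed)

def blur_spot(d: float) -> float:
    """Approximate total blur diameter for a pinhole of diameter d (metres)."""
    geometric = d                                    # shadow of the aperture
    diffraction = 2.44 * WAVELENGTH * DISTANCE / d   # Airy-disk diameter
    return math.hypot(geometric, diffraction)        # simple combination

# Scan candidate diameters from 0.05 mm to 1 mm and keep the best one.
candidates = [i * 1e-6 for i in range(50, 1001)]
best = min(candidates, key=blur_spot)
print(f"Best pinhole diameter ~ {best * 1e3:.2f} mm, "
      f"blur spot ~ {blur_spot(best) * 1e3:.3f} mm")
```

With these assumptions the best hole is roughly a quarter of a millimetre across, and the blur cannot be pushed much below that, which is why the next step in the text is to replace the bare hole with a lens.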
The additional elements allow lens designers to reduce various aberrations, but the principle of operation remains the same: pencils of rays are collected at the entrance pupil and focused down from the exit pupil onto the image plane. == Construction == A camera lens may be made from a number of elements: from one, as in the Box Brownie's meniscus lens, to over 20 in the more complex zooms. These elements may themselves comprise a group of lenses cemented together. The front element is critical to the performance of the whole assembly. In all modern lenses the surface is coated to reduce abrasion, flare, and surface reflectance, and to adjust color balance. To minimize aberration, the curvature is usually set so that the angle of incidence and the angle of refraction are equal. In a prime lens this is easy, but in a zoom there is always a compromise. The lens usually is focused by adjusting the distance from the lens assembly to the image plane, or by moving elements of the lens assembly. To improve performance, some lenses have a cam system that adjusts the distance between the groups as the lens is focused. Manufacturers call this different things: Nikon calls it CRC (close range correction); Canon calls it a floating system; and Hasselblad and Mamiya call it FLE (floating lens element). Glass is the most common material used to construct lens elements, due to its good optical properties and resistance to scratching. Other materials are also used, such as quartz glass, fluorite, plastics like acrylic (Plexiglass), and even germanium and meteoritic glass. Plastics allow the manufacturing of strongly aspherical lens elements which are difficult or impossible to manufacture in glass, and which simplify or improve lens manufacturing and performance. Plastics are not used for the outermost elements of all but the cheapest lenses as they scratch easily. Molded plastic lenses have been used for the cheapest disposable cameras for many years, and have acquired a bad reputation: manufacturers of quality optics tend to use euphemisms such as "optical resin". However many modern, high performance (and high priced) lenses from popular manufacturers include molded or hybrid aspherical elements, so it is not true that all lenses with plastic elements are of low photographic quality. The 1951 USAF resolution test chart is one way to measure the resolving power of a lens. The quality of the material, coatings, and build affect the resolution. Lens resolution is ultimately limited by diffraction, and very few photographic lenses approach this resolution. Ones that do are called "diffraction limited" and are usually extremely expensive. Today, most lenses are multi-coated in order to minimize lens flare and other unwanted effects. Some lenses have a UV coating to keep out the ultraviolet light that could taint color. Most modern optical cements for bonding glass elements also block UV light, negating the need for a UV filter. However, this leaves an avenue for lens fungus to attack if lenses are not cared for appropriately. UV photographers must go to great lengths to find lenses with no cement or coatings. A lens will most often have an aperture adjustment mechanism, usually an iris diaphragm, to regulate the amount of light that passes. In early camera models a rotating plate or slider with different sized holes was used. These Waterhouse stops may still be found on modern, specialized lenses. 
A shutter, to regulate the time during which light may pass, may be incorporated within the lens assembly (for better quality imagery), within the camera, or even, rarely, in front of the lens. Some cameras with leaf shutters in the lens omit the aperture, and the shutter does double duty. == Aperture and focal length == The two fundamental parameters of an optical lens are the focal length and the maximum aperture. The lens' focal length determines the magnification of the image projected onto the image plane, and the aperture the light intensity of that image. For a given photographic system the focal length determines the angle of view, short focal lengths giving a wider field of view than longer focal length lenses. A wider aperture, identified by a smaller f-number, allows using a faster shutter speed for the same exposure. The camera equation, or G#, is the ratio of the radiance reaching the camera sensor to the irradiance on the focal plane of the camera lens. The maximum usable aperture of a lens is specified as the focal ratio or f-number, defined as the lens's focal length divided by the effective aperture (or entrance pupil), a dimensionless number. The lower the f-number, the higher light intensity at the focal plane. Larger apertures (smaller f-numbers) provide a much shallower depth of field than smaller apertures, other conditions being equal. Practical lens assemblies may also contain mechanisms to deal with measuring light, secondary apertures for flare reduction, and mechanisms to hold the aperture open until the instant of exposure to allow SLR cameras to focus with a brighter image with shallower depth of field, theoretically allowing better focus accuracy. Focal lengths are usually specified in millimetres (mm), but older lenses might be marked in centimetres (cm) or inches. For a given film or sensor size, specified by the length of the diagonal, a lens may be classified as a: Normal lens: angle of view of the diagonal about 50° and a focal length approximately equal to the image diagonal. Wide-angle lens: angle of view wider than 60° and focal length shorter than normal. Long-focus lens: any lens with a focal length longer than the diagonal measure of the film or sensor. Angle of view is narrower. The most common type of long-focus lens is the telephoto lens, a design that uses special optical configurations to make the lens shorter than its focal length. A side effect of using lenses of different focal lengths is the different distances from which a subject can be framed, resulting in a different perspective. Photographs can be taken of a person stretching out a hand with a wideangle, a normal lens, and a telephoto, which contain exactly the same image size by changing the distance from the subject. But the perspective will be different. With the wideangle, the hands will be exaggeratedly large relative to the head. As the focal length increases, the emphasis on the outstretched hand decreases. However, if pictures are taken from the same distance, and enlarged and cropped to contain the same view, the pictures will have identical perspective. A moderate long-focus (telephoto) lens is often recommended for portraiture because the perspective corresponding to the longer shooting distance is considered to look more flattering. The widest aperture lens in history of photography is believed to be the Carl Zeiss Planar 50mm f/0.7, which was designed and made specifically for the NASA Apollo lunar program to capture the far side of the Moon in 1966. 
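As a rough numerical illustration of the relations just described (f-number as focal length divided by entrance-pupil diameter, and angle of view from the format diagonal), the sketch below applies them to the 50 mm f/0.7 lens mentioned above. The full-frame 43.3 mm diagonal is an assumed example format, and the arithmetic ignores the distinction between focal length and image distance.

```python
# Sketch of the aperture and angle-of-view arithmetic described above, using
# the 50 mm f/0.7 lens mentioned in the text and a full-frame sensor diagonal
# (43.3 mm) as an assumed example format.
import math

focal_length_mm = 50.0
f_number = 0.7
sensor_diagonal_mm = 43.3          # 36 x 24 mm frame, assumed

pupil_diameter_mm = focal_length_mm / f_number   # entrance-pupil diameter
angle_of_view_deg = 2 * math.degrees(
    math.atan(sensor_diagonal_mm / (2 * focal_length_mm)))

print(f"Entrance pupil ~ {pupil_diameter_mm:.1f} mm wide")
print(f"Diagonal angle of view ~ {angle_of_view_deg:.1f} degrees "
      f"(close to the ~50 degrees of a 'normal' lens)")
```

An entrance pupil more than 70 mm across on a 50 mm lens shows how extreme that design was.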
Three of these lenses were purchased by filmmaker Stanley Kubrick in order to film scenes in his 1975 film Barry Lyndon, using candlelight as the sole light source. == Number of elements == The complexity of a lens (the number of elements and their degree of asphericity) depends upon the angle of view, the maximum aperture, and the intended price point, among other variables. An extreme wideangle lens of large aperture must be of very complex construction to correct for optical aberrations, which are worse at the edge of the field and when the edge of a large lens is used for image-forming. A long-focus lens of small aperture can be of very simple construction to attain comparable image quality: a doublet (two elements) will often suffice. Some older cameras were fitted with convertible lenses (German: Satzobjektiv) of normal focal length. The front element could be unscrewed, leaving a lens of twice the focal length, and half the angle of view and half the aperture. The simpler half-lens was of adequate quality for the narrow angle of view and small relative aperture. This required the bellows to be extended to twice the normal length. Good-quality lenses with maximum aperture no greater than f/2.8 and fixed, normal, focal length need at least three (triplet) or four elements (the trade name "Tessar" derives from the Greek tessera, meaning "four"). The widest-range zooms often have fifteen or more. The reflection of light at each of the many interfaces between different optical media (air, glass, plastic) seriously degraded the contrast and color saturation of early lenses, particularly zoom lenses, especially where the lens was directly illuminated by a light source. The introduction of optical coatings, and advances in coating technology over the years, have resulted in major improvements, and modern high-quality zoom lenses give images of quite acceptable contrast, although zoom lenses with many elements will transmit less light than lenses made with fewer elements (all other factors such as aperture, focal length, and coatings being equal). == Lens mounts == Many single-lens reflex cameras and some rangefinder cameras have detachable lenses. A few other types do as well, notably the Mamiya TLR cameras and SLR medium format cameras (RZ67, RB67, 645-1000s); other companies that produce medium format equipment, such as Bronica, Hasselblad and Fuji, have similar camera systems that allow interchangeable lenses, as do mirrorless interchangeable-lens cameras. The lenses attach to the camera using a lens mount, which contains mechanical linkages and often also electrical contacts between the lens and camera body. The lens mount design is an important issue for compatibility between cameras and lenses. There is no universal standard for lens mounts, and each major camera maker typically uses its own proprietary design, incompatible with other makers. A few older manual focus lens mount designs, such as the Leica M39 lens mount for rangefinders, the M42 lens mount for early SLRs, and the Pentax K mount, are found across multiple brands, but this is not common today. A few mount designs, such as the Olympus/Kodak Four Thirds System mount for DSLRs, have also been licensed to other makers. Most large-format cameras take interchangeable lenses as well, which are usually mounted in a lensboard or on the front standard. The most common interchangeable lens mounts on the market today include the Canon EF, EF-S and EF-M autofocus lens mounts.
Others include the Nikon F manual and autofocus mounts, the Olympus/Kodak Four Thirds and Olympus/Panasonic Micro Four Thirds digital-only mounts, the Pentax K mount and autofocus variants, the Sony Alpha mount (derived from the Minolta mount) and the Sony E digital-only mount. == Types of lenses == === "Close-up" or macro === A macro lens used in macro or "close-up" photography (not to be confused with the compositional term close up) is any lens that produces an image on the focal plane (i.e., film or a digital sensor) that is one quarter of life size (1:4) to the same size (1:1) as the subject being imaged. There is no official standard that defines a macro lens (usually a prime lens), but a 1:1 ratio is typically considered "true" macro. Magnification from life size upward (2:1, 3:1, etc.) is called "micro" photography. This configuration is generally used to image very small subjects at close range. A macro lens may be of any focal length, the actual focal length being determined by its practical use, considering the required magnification ratio, access to the subject, and illumination. It can be a special lens corrected optically for close-up work, or it can be any lens modified (with adapters or spacers, also known as "extension tubes") to bring the focal plane "forward" for very close photography. Depending on the camera-to-subject distance and aperture, the depth of field can be very narrow, limiting the linear depth of the area that will be in focus. Lenses are usually stopped down to give a greater depth of field. === Zoom === Some lenses, called zoom lenses, have a focal length that varies as internal elements are moved, typically by rotating the barrel or pressing a button which activates an electric motor. Commonly, the lens may zoom from moderate wide-angle, through normal, to moderate telephoto; or from normal to extreme telephoto. The zoom range is limited by manufacturing constraints; the ideal of a lens of large maximum aperture which will zoom from extreme wideangle to extreme telephoto is not attainable. Zoom lenses are widely used for small-format cameras of all types: still and cine cameras with fixed or interchangeable lenses. Bulk and price limit their use for larger film sizes. Motorized zoom lenses may also have the focus, iris, and other functions motorized. === Special-purpose === Apochromat (apo) lenses have added correction for chromatic aberration. Process lenses have extreme correction for aberrations of geometry (pincushion distortion, barrel distortion) and are generally intended for use at a specific distance and at small aperture. Enlarger lenses are made to be used with photographic enlargers (specialised projectors), rather than cameras. Lenses for aerial photography. Shift lenses allow the lens to be raised or lowered relative to the film or sensor plane to correct or exaggerate perspective distortion. Fisheye lenses: extreme wide-angle lenses with an angle of view of up to 180 degrees or more, with very noticeable (and intended) distortion. Stereoscopic lenses, to produce pairs of photographs which give a 3-dimensional effect when viewed with an appropriate viewer. Soft-focus lenses which give a soft, but not out-of-focus, image and have an imperfection-removing effect popular among portrait and fashion photographers. Infrared lenses Ultraviolet lenses Swivel lenses rotate while attached to a camera body to give unique perspectives and camera angles.
Shift lenses and tilt/shift lenses (collectively perspective control lenses) allow special control of perspective on SLR cameras by mimicking view camera movements. telecentric lenses (or orthographic lenses) make any object appear as the same size regardless of their distance from the lens. == History and technical development == == Lens designs == Some notable photographic optical lens designs are: Angenieux retrofocus Cooke triplet Double-Gauss Goerz Dagor Leitz Elmar Rapid Rectilinear Zeiss Sonnar Zeiss Planar Zeiss Tessar == See also == Anti-fogging treatment of optical surfaces Large format lens Lens (optics) Lens hood Lens cover Lenses for SLR and DSLR cameras Optical train Teleconverter Teleside converter William Taylor (inventor) == References == == Sources == Kingslake, Rudolf (1989). A History of the Photographic Lens. Boston: Academic Press. ISBN 978-0-12-408640-1. Guy, N. K. (2012). The Lens: A Practical Guide for the Creative Photographer. Rocky Nook. ISBN 978-1-933952-97-0. == External links == Photo.net Lens Tutorial optical glass
Wikipedia/Photographic_lens
In perceptual psychology, unconscious inference (German: unbewusster Schluss), also referred to as unconscious conclusion, is a term coined in 1867 by the German physicist and polymath Hermann von Helmholtz to describe an involuntary, pre-rational and reflex-like mechanism which is part of the formation of visual impressions. While precursory notions have been identified in the writings of Thomas Hobbes, Robert Hooke, and Francis North (especially in connection with auditory perception) as well as in Francis Bacon's Novum Organum, Helmholtz's theory was long ignored or even dismissed by philosophy and psychology. It has since received new attention from modern research, and the work of recent scholars has approached Helmholtz's view. Elaborate theoretical frameworks concerning unconscious inference have persisted for a thousand years, originating with Ibn al-Haytham, ca. 1030. These theories have enjoyed widespread acceptance for nearly four centuries, beginning with René Descartes' contributions in 1637. In the third and final volume of his Handbuch der physiologischen Optik (1856–1867, translated as Treatise on Physiological Optics in 1920-1925, available here), Helmholtz discussed the psychological effects of visual perception. His first example is that of the illusion of the Sun rotating around the Earth: Every evening apparently before our eyes the sun goes down behind the stationary horizon, although we are well aware that the sun is fixed and the horizon moves. == Optical illusions == We are unable to do away with such optical illusions by convincing ourselves rationally that our eyes have played tricks on us: obstinately and unswervingly, the mechanism follows its own rule and thus wields an imperious mastery over the human mind. While optical illusions are the most obvious instances of unconscious inference, people's perceptions of each other are similarly influenced by such unintended, unconscious conclusions. Helmholtz's second example refers to theatrical performance, arguing that the strong emotional effect of a play results mainly from the viewers' inability to doubt the visual impressions generated by unconscious inference: An actor who cleverly portrays an old man is for us an old man there on the stage, so long as we let the immediate impression sway us, and do not forcibly recall that the programme states that the person moving about there is the young actor with whom we are acquainted. We consider him as being angry or in pain according as he shows us one or the other mode of countenance and demeanour. He arouses fright or sympathy in us [...]; and the deep-seated conviction that all this is only show and play does not hinder our emotions at all, provided the actor does not cease to play his part. On the contrary, a fictitious tale of this sort, which we seem to enter into ourselves, grips and tortures us more than a similar true story would do when we read it in a dry documentary report. The mere sight of another person is sufficient to produce an emotional attitude without any reasonable basis whatsoever, yet highly resilient against all rational criticism. Obviously, the impression is based on the spontaneous, spurious attribution of traits - a process we can hardly avoid, for the human eye, so to speak, is incapable of doubt and thus cannot ward off the impression. 
The formation of visual impressions, Helmholtz realized, is achieved primarily by unconscious judgments, the results of which "can never once be elevated to the plane of conscious judgments" and thus "lack the purifying and scrutinizing work of conscious thinking". In spite of this, the results of unconscious judgments are so impervious to conscious control, so resistant to contradiction that they are "impossible to get rid of" and "the effect of them cannot be overcome". So whatever impressions this unconscious inference process leads to, they strike "our consciousness as a foreign and overpowering force of nature". The reason, Helmholtz suggested, lies in the way visual sensory impressions are processed neurologically. The higher cortical centres responsible for conscious deliberation are not involved in the formation of visual impressions. However, as the process is spontaneous and automatic, we are unable to account for just how we arrived at our judgments. Through our eyes, we necessarily perceive things as real, for the results of the unconscious conclusions are interpretations which "are urged on our consciousness, so to speak, as if an external power had constrained us, over which our will has no control". In recognizing these attitude-formation mechanisms underlying the human processing of nonverbal cues, Helmholtz anticipated developments in science by more than a century. As Daniel Gilbert has pointed out, "Helmholtz presaged many current thinkers not only by postulating the existence of such [unconscious inferential] operations, but also by describing their general features". At the same time, he added, it is "probably fair to say that Helmholtz's ideas about the social inference process have exerted no impact whatsoever on social psychology". Indeed, psychologists have largely felt that Helmholtz had fallen prey to an error in reasoning. As Edwin G. Boring summed up the debate, "Since an inference is ostensibly a conscious process and can therefore be neither unconscious nor immediate, [Helmholtz's] view was rejected as self-contradictory". However, several recent authors have since approached Helmholtz's conception under a variety of headings, such as "snap judgments", "nonconscious social information processing", "spontaneous trait inference", "people as flexible interpreters", and "unintended thought". Siegfried Frey has pointed out the revolutionary quality of Helmholtz's proposition that it is from the perceiver, not the actor, whence springs the meaning-attribution process performed when we interpret a nonverbal stimulus: By failing to distinguish appearance from reality, the psychology of expression merely perpetuated a fallacy deeply ingrained in everyday language: with unswerving belief in our perceptions, we routinely call the other person’s expression what is, in plain truth, our own impression of her or him. == Influences in current computer science and psychology == === The Helmholtz machine === Work in computer science has made use of Helmholtz's ideas of unconscious inference by suggesting the cortex contains a generative model of the world. They develop a statistical method for discovering the structure inherent in a set of patterns: Following Helmholtz, we view the human perceptual system as a statistical inference engine whose function is to infer the probable causes of sensory input. We show that a device of this kind can learn how to perform these inferences without requiring a teacher to label each sensory input vector with its underlying causes. 
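A toy numerical example may help make the phrase "statistical inference engine whose function is to infer the probable causes of sensory input" concrete. The sketch below is not the Helmholtz machine itself (which uses learned recognition and generative networks); it is a hand-written generative model with two hidden causes, inverted with Bayes' rule, and every probability in it is an assumed illustrative value.

```python
# Toy illustration of "perception as inference": given a generative model of
# how hidden causes produce sensory data, invert it with Bayes' rule to infer
# the probable cause of an observation. All numbers are assumed for illustration.

# Prior probability of each hidden cause (e.g. "shadow" vs "dark paint").
prior = {"shadow": 0.7, "dark_paint": 0.3}

# Generative model: probability of observing a "dark patch" under each cause.
likelihood_dark_patch = {"shadow": 0.8, "dark_paint": 0.95}

def infer(observation_likelihood, prior):
    """Posterior over hidden causes given the likelihood of the observation."""
    unnormalised = {c: observation_likelihood[c] * prior[c] for c in prior}
    evidence = sum(unnormalised.values())
    return {c: p / evidence for c, p in unnormalised.items()}

posterior = infer(likelihood_dark_patch, prior)
for cause, p in posterior.items():
    print(f"P({cause} | dark patch) = {p:.2f}")
```

The Helmholtz machine replaces these hand-written tables with multi-layer networks learned without labelled causes, but the direction of the computation, from sensory evidence back to probable hidden causes, is the same.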
=== Free energy principle === The Free energy principle provides an explanation for embodied perception in neuroscience and tries to explain how biological systems maintain order by restricting themselves to a limited number of states or beliefs about hidden states in their environment. A biological system performs active inference in sampling action outcomes to maximise the evidence for its model of the world: The notion that self-organising biological systems – like a cell or brain – can be understood as minimising variational free energy is based upon Helmholtz’s observations on unconscious inference and subsequent treatments in psychology and machine learning. == Notes == == References == Bacon, Francis (1620). Novum Organum Scientiarum. London: Bill. Boring, Edwin G. (1942). Sensation and Perception in the History of Experimental Psychology. New York: Appleton-Century Co. Boring, Edwin G. (1950). A history of experimental psychology. New York: Appleton-Century-Crofts). Frey, Siegfried (1998). "Prejudice and inferential communication: a new look at an old problem". In Eibl-Eibesfeldt, Irenäus; Salter, Frank Kemp (eds.). Indoctrinability, ideology, and warfare. Evolutionary perspectives. New York: Berghahn Books. pp. 189–217. Frey, Siegfried (2005). Die Macht des Bildes. Der Einfluß nonverbaler Kommunikation auf Kultur und Politik. Bern: Huber. ISBN 3-456-84174-4. Gilbert, Daniel (1989). "Thinking lightly about others: Automatic components of the social inference process". In Uleman, J. S.; Bargh, J. A. (eds.). Unintended thought. New York: Guilford. pp. 189–211. von Helmholtz, Hermann (1867). Handbuch der physiologischen Optik. Vol. 3. Leipzig: Voss. Quotations are from the English translation produced by Optical Society of America (1924–25): Treatise on Physiological Optics. Kassler, Jamie Croy (2004). The beginnings of the modern philosophy of music in England. Francis North's "A philosophical essay of musick" (1677) with comments of Isaac Newton, Roger North and in the Philosophical transactions. Aldershot: Ashgate. Lewicki, Pawel (1986). Nonconscious social information processing. New York: Academic Press. ISBN 0-12-446120-4. Newman, L. S.; Moskowitz, G. B.; Uleman, J. S. (1996), Zanna, M. P. (ed.), "People as flexible interpreters: Evidence and issues from spontaneous trait inference", Advances in Experimental Social Psychology, 28, San Diego, CA: Academic Press: 211–279, doi:10.1016/S0065-2601(08)60239-7, ISBN 9780120152285 Newman, L. S.; Uleman, J. S. (1989). "Spontaneous trait inference". In Uleman, J. S.; Bargh, J. A. (eds.). Unintended thought. New York: Guilford. pp. 155–188. Schneider, David J.; Hastorf, Albert H.; Ellsworth, Phoebe C. (1979). Person perception. Reading, Mass.: Addison-Wesley. ISBN 0-201-06768-4. Uleman, J. S.; Bargh, J. A., eds. (1989). Unintended thought. New York: Guilford. Universität Duisburg-Essen: Designing virtual humans for Web 2.0 based learning processes - Unconscious judgments.
Wikipedia/Unconscious_inference
Bloodless surgery is a non-invasive surgical method developed by orthopedic surgeon Adolf Lorenz, who was known as "the bloodless surgeon of Vienna". His medical practice was a consequence of his severe allergy to carbolic acid routinely used in operating rooms of the era. His condition forced him to become a "dry surgeon". Contemporary usage of the term refers to both invasive and noninvasive medical techniques and protocols. The expression does not mean surgery that makes no use of blood or blood transfusion. Rather, it refers to surgery performed without transfusion of allogeneic blood. Champions of bloodless surgery do, however, transfuse products made from allogeneic blood (blood from other people) and they also make use of pre-donated blood for autologous transfusion (blood pre-donated by the patient). Interest in bloodless surgery has arisen for several reasons. Jehovah's Witnesses reject blood transfusions on religious grounds; others may be concerned about bloodborne diseases, such as hepatitis and AIDS. == History == During the early 1960s, American heart surgeon Denton Cooley successfully performed numerous bloodless open-heart surgeries on Jehovah's Witness patients. Fifteen years later, he and his associate published a report of more than 500 cardiac surgeries in this population, documenting that cardiac surgery could be safely performed without blood transfusion. Ron Lapin (1941–1995) was an American surgeon, who became interested in bloodless surgery in the mid-1970s. He was known as a "bloodless surgeon" due to his willingness to perform surgeries on severely anemic Jehovah's Witness patients without the use of blood transfusions. Patricia A. Ford (born 1955) was the first surgeon to perform a bloodless bone marrow transplant. In 1988, Professor James Isbister, a haematologist from Australia, first proposed a paradigm shift back to a patient focus. In 2005, he penned an article in the journal, 'Updates in Blood Conservation and Transfusion alternatives'. In this article Prof. Isbister coined the term 'patient blood management', noting that the focus should be changed from the product to the patient. == Principles == Several principles of bloodless surgery have been published. Preoperative techniques such as erythropoietin (EPO) or iron administration are designed to stimulate the patient's own erythropoiesis. In surgery, control of bleeding is achieved with the use of laser or sonic scalpels, minimally invasive surgical techniques, electrosurgery and electrocautery, low central venous pressure anesthesia (for select cases), or suture ligation of vessels. Other methods include the use of blood substitutes, which at present do not carry oxygen but expand the volume of the blood to prevent shock. Blood substitutes which do carry oxygen, such as PolyHeme, are also under development. Many doctors view acute normovolemic hemodilution, a form of storage of a patient's own blood, as a pillar of "bloodless surgery" but the technique is not an option for patients who refuse autologous blood transfusions. Intraoperative blood salvage is a technique which recycles and cleans blood from a patient during an operation and redirects it into the patient's body. Postoperatively, surgeons seek to minimize further blood loss by continuing administration of medications to augment blood cell mass and minimizing the number of blood draws and the quantity of blood drawn for testing, for example, by using pediatric blood tubes for adult patients. 
Hemoglobin-based oxygen carriers (HBOCs) such as PolyHeme and Hemopure have been discontinued due to severe adverse reactions, including death. South Africa was the only country where they were legally authorized as standard treatment, but they are no longer available. == Benefits == Bloodless medicine appeals to many doctors because it carries a low risk of post-operative infection when compared with procedures requiring blood transfusion. Additionally, it may be economically beneficial in some countries. For example, the cost of blood in the US hovers around $500 a unit, including testing. These costs are further increased because, according to Jan Hoffman (an administrator for the blood conservation program at Geisinger Medical Center in Danville, Pennsylvania), hospitals must cover the cost of the first three units of blood infused per patient per calendar year. By contrast, hospitals may be reimbursed for drugs that boost a patient's red blood cell count, a treatment approach often used before and after surgery to reduce the need for a blood transfusion. However, such payments are highly contingent upon negotiations with insurance companies. Geisinger Medical Center began a blood conservation program in 2005 and reported savings of $273,000 in its first six months of operation. The Cleveland Clinic lowered its direct costs from US$35.5 million in 2009 to $26.4 million in 2012, a savings of nearly $10 million over three years. Health risks appear to be another contributing factor in the appeal of bloodless techniques, especially in light of recent studies suggesting that blood transfusions can increase the risk of complications and reduce survival rates. Patients who do not receive blood products during hospitalization thus often recover more quickly, experience fewer complications, and can be discharged home sooner. == See also == Knocking, a documentary on Jehovah's Witnesses that features a bloodless liver transplant == References == == Sources == No man's blood by Gene Church (1983) ISBN 0-86666-155-7 First Do No Harm Documentary http://www.asiageographic.com/productions_pnn.html Cleveland Clinic - Wall Street Journal https://www.wsj.com/articles/SB10001424127887323494504578340962879110432
Wikipedia/Bloodless_surgery
Visual perception is the ability to detect light and use it to form an image of the surrounding environment. Photodetection without image formation is classified as light sensing. Visual perception in vertebrates can be enabled by photopic vision (daytime vision) or scotopic vision (night vision), and most vertebrates have both. Visual perception detects light (photons) in the visible spectrum reflected by objects in the environment or emitted by light sources. The visible range of light is defined by what is readily perceptible to humans, though the visual perception of non-humans often extends beyond the visible spectrum. The resulting perception is also known as vision, sight, or eyesight (adjectives visual, optical, and ocular, respectively). The various physiological components involved in vision are referred to collectively as the visual system, and are the focus of much research in linguistics, psychology, cognitive science, neuroscience, and molecular biology, collectively referred to as vision science. == Visual system == Most vertebrates achieve vision through similar visual systems. Generally, light enters the eye through the cornea and is focused by the lens onto the retina, a light-sensitive membrane at the back of the eye. Specialized photoreceptive cells in the retina act as transducers, converting the light into neural impulses. The photoreceptors are broadly classified into cone cells and rod cells, which enable photopic and scotopic vision, respectively. These photoreceptors' signals are transmitted by the optic nerve from the retina upstream to central ganglia in the brain, including the lateral geniculate nucleus. Signals from the retina also travel directly to the superior colliculus. The lateral geniculate nucleus sends signals to the primary visual cortex, also called the striate cortex. The extrastriate cortex, also called the visual association cortex, is a set of cortical structures that receive information from the striate cortex, as well as from each other. Recent descriptions of the visual association cortex describe a division into two functional pathways, a ventral and a dorsal pathway. This conjecture is known as the two-streams hypothesis. == Study == The major problem in visual perception is that what people see is not simply a translation of retinal stimuli (i.e., the image on the retina); the brain alters the basic information taken in. Thus people interested in perception have long struggled to explain what visual processing does to create what is actually seen. === Early studies === There were two major ancient Greek schools, providing a primitive explanation of how vision works. The first was the "emission theory" of vision, which maintained that vision occurs when rays emanate from the eyes and are intercepted by visual objects. If an object was seen directly, it was by 'means of rays' coming out of the eyes and again falling on the object. A refracted image was, however, seen by 'means of rays' as well, which came out of the eyes, traversed through the air, and after refraction, fell on the visible object which was sighted as the result of the movement of the rays from the eye. This theory was championed by scholars who were followers of Euclid's Optics and Ptolemy's Optics. The second school advocated the so-called 'intromission' approach which sees vision as coming from something entering the eyes representative of the object.
With its main propagator Aristotle (De Sensu), and his followers, this theory seems to have some contact with modern theories of what vision really is, but it remained only a speculation lacking any experimental foundation. (In eighteenth-century England, Isaac Newton, John Locke, and others, carried the intromission theory of vision forward by insisting that vision involved a process in which rays—composed of actual corporeal matter—emanated from seen objects and entered the seer's mind/sensorium through the eye's aperture.) Both schools of thought relied upon the principle that "like is only known by like", and thus upon the notion that the eye was composed of some "internal fire" that interacted with the "external fire" of visible light and made vision possible. Plato makes this assertion in his dialogue Timaeus (45b and 46b), as does Empedocles (as reported by Aristotle in his De Sensu, DK frag. B17). Alhazen (965 – c. 1040) carried out many investigations and experiments on visual perception, extended the work of Ptolemy on binocular vision, and commented on the anatomical works of Galen. He was the first person to explain that vision occurs when light bounces on an object and then is directed to one's eyes. Leonardo da Vinci (1452–1519) is believed to be the first to recognize the special optical qualities of the eye. He wrote "The function of the human eye ... was described by a large number of authors in a certain way. But I found it to be completely different." His main experimental finding was that there is only a distinct and clear vision at the line of sight—the optical line that ends at the fovea. Although he did not use these words literally he actually is the father of the modern distinction between foveal and peripheral vision. Isaac Newton (1642–1726/27) was the first to discover through experimentation, by isolating individual colors of the spectrum of light passing through a prism, that the visually perceived color of objects appeared due to the character of light the objects reflected, and that these divided colors could not be changed into any other color, which was contrary to scientific expectation of the day. === Unconscious inference === Hermann von Helmholtz is often credited with the first modern study of visual perception. Helmholtz examined the human eye and concluded that it was incapable of producing a high-quality image. Insufficient information seemed to make vision impossible. He, therefore, concluded that vision could only be the result of some form of "unconscious inference", coining that term in 1867. He proposed the brain was making assumptions and conclusions from incomplete data, based on previous experiences. Inference requires prior experience of the world. Examples of well-known assumptions, based on visual experience, are: light comes from above; objects are normally not viewed from below; faces are seen (and recognized) upright; closer objects can block the view of more distant objects, but not vice versa; and figures (i.e., foreground objects) tend to have convex borders. The study of visual illusions (cases when the inference process goes wrong) has yielded much insight into what sort of assumptions the visual system makes. Another type of unconscious inference hypothesis (based on probabilities) has recently been revived in so-called Bayesian studies of visual perception. Proponents of this approach consider that the visual system performs some form of Bayesian inference to derive a perception from sensory data. 
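A standard textbook form of this idea, offered here only as an assumed illustration and not as anything from the sources above, models both the prior and the sensory likelihood as Gaussians; the perceptual estimate is then a reliability-weighted average of the two.

```python
# Minimal Gaussian "Bayesian perception" sketch: combine a prior belief with a
# noisy sensory measurement by weighting each with its reliability (1/variance).
# All means and variances are assumed illustrative values.

def combine(prior_mean, prior_var, cue_mean, cue_var):
    """Posterior mean/variance for a Gaussian prior times a Gaussian likelihood."""
    w_prior = 1.0 / prior_var
    w_cue = 1.0 / cue_var
    post_var = 1.0 / (w_prior + w_cue)
    post_mean = post_var * (w_prior * prior_mean + w_cue * cue_mean)
    return post_mean, post_var

# Prior: objects like this tend to be about 2.0 m away (broad belief).
# Cue: binocular disparity suggests 1.2 m, with moderate noise.
mean, var = combine(prior_mean=2.0, prior_var=0.5, cue_mean=1.2, cue_var=0.1)
print(f"Perceived depth ~ {mean:.2f} m (posterior variance {var:.3f})")
```

The estimate lands close to the more reliable cue; the obvious question, taken up next in the text, is where such priors and noise variances are supposed to come from.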
However, it is not clear how proponents of this view derive, in principle, the relevant probabilities required by the Bayesian equation. Models based on this idea have been used to describe various visual perceptual functions, such as the perception of motion, the perception of depth, and figure-ground perception. The "wholly empirical theory of perception" is a related and newer approach that rationalizes visual perception without explicitly invoking Bayesian formalisms. === Gestalt theory === Gestalt psychologists working primarily in the 1930s and 1940s raised many of the research questions that are studied by vision scientists today. The Gestalt Laws of Organization have guided the study of how people perceive visual components as organized patterns or wholes, instead of many different parts. "Gestalt" is a German word that partially translates to "configuration or pattern" along with "whole or emergent structure". According to this theory, there are eight main factors that determine how the visual system automatically groups elements into patterns: Proximity, Similarity, Closure, Symmetry, Common Fate (i.e. common motion), Continuity as well as Good Gestalt (pattern that is regular, simple, and orderly) and Past Experience. === Analysis of eye movement === During the 1960s, technical development permitted the continuous registration of eye movement during reading, in picture viewing, and later, in visual problem solving, and when headset-cameras became available, also during driving. The picture to the right shows what may happen during the first two seconds of visual inspection. While the background is out of focus, representing the peripheral vision, the first eye movement goes to the boots of the man (just because they are very near the starting fixation and have a reasonable contrast). Eye movements serve the function of attentional selection, i.e., to select a fraction of all visual inputs for deeper processing by the brain. The following fixations jump from face to face. They might even permit comparisons between faces. It may be concluded that the icon face is a very attractive search icon within the peripheral field of vision. The foveal vision adds detailed information to the peripheral first impression. It can also be noted that there are different types of eye movements: fixational eye movements (microsaccades, ocular drift, and tremor), vergence movements, saccadic movements and pursuit movements. Fixations are comparably static points where the eye rests. However, the eye is never completely still, and gaze position will drift. These drifts are in turn corrected by microsaccades, very small fixational eye movements. Vergence movements involve the cooperation of both eyes to allow for an image to fall on the same area of both retinas. This results in a single focused image. Saccadic movements is the type of eye movement that makes jumps from one position to another position and is used to rapidly scan a particular scene/image. Lastly, pursuit movement is smooth eye movement and is used to follow objects in motion. === Face and object recognition === There is considerable evidence that face and object recognition are accomplished by distinct systems. For example, prosopagnosic patients show deficits in face, but not object processing, while object agnosic patients (most notably, patient C.K.) show deficits in object processing with spared face processing. 
Behaviorally, it has been shown that faces, but not objects, are subject to inversion effects, leading to the claim that faces are "special". Further, face and object processing recruit distinct neural systems. Notably, some have argued that the apparent specialization of the human brain for face processing does not reflect true domain specificity, but rather a more general process of expert-level discrimination within a given class of stimulus, though this latter claim is the subject of substantial debate. Using fMRI and electrophysiology, Doris Tsao and colleagues described brain regions and a mechanism for face recognition in macaque monkeys. The inferotemporal cortex has a key role in the recognition and differentiation of objects. A study from MIT showed that subregions of the IT cortex are responsible for different objects. When neural activity is selectively shut off in many small areas of this cortex, the animal becomes unable to distinguish between particular pairs of objects, depending on which area is silenced. This shows that the IT cortex is divided into regions that respond to different and particular visual features. In a similar way, certain patches and regions of the cortex are more involved in face recognition than in the recognition of other objects. Some studies suggest that, rather than the uniform global image, particular features and regions of interest of an object are the key elements when the brain needs to recognise an object in an image. As a result, human vision is vulnerable to small, specific changes to the image, such as disrupting the edges of the object, modifying its texture, or altering a crucial region of the image. Studies of people whose sight has been restored after a long blindness reveal that they cannot necessarily recognize objects and faces (as opposed to color, motion, and simple geometric shapes). Some hypothesize that being blind during childhood prevents some part of the visual system necessary for these higher-level tasks from developing properly. The general belief that a critical period lasts until age 5 or 6 was challenged by a 2007 study that found that older patients could improve these abilities with years of exposure. == Cognitive and computational approaches == In the 1970s, David Marr developed a multi-level theory of vision, which analyzed the process of vision at different levels of abstraction. In order to focus on the understanding of specific problems in vision, he identified three levels of analysis: the computational, algorithmic and implementational levels. Many vision scientists, including Tomaso Poggio, have embraced these levels of analysis and employed them to further characterize vision from a computational perspective. The computational level addresses, at a high level of abstraction, the problems that the visual system must overcome. The algorithmic level attempts to identify the strategy that may be used to solve these problems. Finally, the implementational level attempts to explain how solutions to these problems are realized in neural circuitry. Marr suggested that it is possible to investigate vision at any of these levels independently. Marr described vision as proceeding from a two-dimensional visual array (on the retina) to a three-dimensional description of the world as output. His stages of vision include: A 2D or primal sketch of the scene, based on feature extraction of fundamental components of the scene, including edges, regions, etc.
Note the similarity in concept to a pencil sketch drawn quickly by an artist as an impression. A 2½D sketch of the scene, where textures are acknowledged, etc. Note the similarity in concept to the stage in drawing where an artist highlights or shades areas of a scene, to provide depth. A 3D model, where the scene is visualized in a continuous, 3-dimensional map. Marr's 2½D sketch assumes that a depth map is constructed, and that this map is the basis of 3D shape perception. However, both stereoscopic and pictorial perception, as well as monocular viewing, make clear that the perception of 3D shape precedes, and does not rely on, the perception of the depth of points. It is not clear how a preliminary depth map could, in principle, be constructed, nor how this would address the question of figure-ground organization, or grouping. The role of perceptual organizing constraints, overlooked by Marr, in the production of 3D shape percepts from binocularly-viewed 3D objects has been demonstrated empirically for the case of 3D wire objects. For a more detailed discussion, see Pizlo (2008). A more recent, alternative framework proposes that vision is instead composed of the following three stages: encoding, selection, and decoding. Encoding is to sample and represent visual inputs (e.g., to represent visual inputs as neural activities in the retina). Selection, or attentional selection, is to select a tiny fraction of input information for further processing, e.g., by shifting gaze to an object or visual location to better process the visual signals at that location. Decoding is to infer or recognize the selected input signals, e.g., to recognize the object at the center of gaze as somebody's face. In this framework, attentional selection starts at the primary visual cortex along the visual pathway, and the attentional constraints impose a dichotomy between the central and peripheral visual fields for visual recognition or decoding. == Transduction == Transduction is the process through which energy from environmental stimuli is converted to neural activity. The retina contains three different cell layers: the photoreceptor layer, the bipolar cell layer, and the ganglion cell layer. The photoreceptor layer, where transduction occurs, is farthest from the lens. It contains photoreceptors with different sensitivities called rods and cones. The cones are responsible for color perception and are of three distinct types labeled red, green, and blue. Rods are responsible for the perception of objects in low light. Photoreceptors contain within them a special chemical called a photopigment, which is embedded in the membrane of the lamellae; a single human rod contains approximately 10 million of them. The photopigment molecules consist of two parts: an opsin (a protein) and retinal (a lipid). There are 3 specific photopigments (each with their own wavelength sensitivity) that respond across the spectrum of visible light. When the appropriate wavelengths (those that the specific photopigment is sensitive to) hit the photoreceptor, the photopigment splits into two, which sends a signal to the bipolar cell layer, which in turn sends a signal to the ganglion cells, the axons of which form the optic nerve and transmit the information to the brain. If a particular cone type is missing or abnormal, due to a genetic anomaly, a color vision deficiency, sometimes called color blindness, will occur. 
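To make the earlier description of Marr's primal sketch more concrete, the following minimal Python sketch estimates local edge strength in a toy image with a Sobel operator. This is purely illustrative: the Sobel kernel, the synthetic image, and the threshold are assumptions chosen for the example, not anything specified by Marr's theory.

```python
import numpy as np

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Return a map of gradient magnitude (edge strength) for a 2D grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                                         # vertical gradient kernel
    h, w = image.shape
    mag = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            mag[i, j] = np.hypot(gx, gy)  # edge strength at this pixel
    return mag

# A synthetic image: a bright square on a dark background.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
edges = sobel_edges(img)
print((edges > 1.0).astype(int))  # non-zero only along the square's boundary
```

The resulting edge map is the kind of low-level token from which a primal sketch would be assembled; a full implementation would also extract blobs, bars, and terminations at multiple scales.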
== Opponent process == Transduction involves chemical messages sent from the photoreceptors to the bipolar cells to the ganglion cells. Several photoreceptors may send their information to one ganglion cell. There are two types of ganglion cells: red/green and yellow/blue. These neurons constantly fire—even when not stimulated. The brain interprets different colors (and with a lot of information, an image) when the rate of firing of these neurons alters. Red light stimulates the red cone, which in turn stimulates the red/green ganglion cell. Likewise, green light stimulates the green cone, which stimulates the green/red ganglion cell and blue light stimulates the blue cone which stimulates the blue/yellow ganglion cell. The rate of firing of the ganglion cells is increased when it is signaled by one cone and decreased (inhibited) when it is signaled by the other cone. The first color in the name of the ganglion cell is the color that excites it and the second is the color that inhibits it. i.e.: A red cone would excite the red/green ganglion cell and the green cone would inhibit the red/green ganglion cell. This is an opponent process. If the rate of firing of a red/green ganglion cell is increased, the brain would know that the light was red, if the rate was decreased, the brain would know that the color of the light was green. == Artificial visual perception == Theories and observations of visual perception have been the main source of inspiration for computer vision (also called machine vision, or computational vision). Special hardware structures and software algorithms provide machines with the capability to interpret the images coming from a camera or a sensor. == See also == === Vision deficiencies or disorders === === Related disciplines === == References == == Further reading == Von Helmholtz, Hermann (1867). Handbuch der physiologischen Optik. Vol. 3. Leipzig: Voss. Quotations are from the English translation produced by the Optical Society of America (1924–25): Treatise on Physiological Optics Archived September 27, 2018, at the Wayback Machine. == External links == The Organization of the Retina and Visual System Effect of Detail on Visual Perception by Jon McLoone, the Wolfram Demonstrations Project The Joy of Visual Perception, resource on the eye's perception abilities. VisionScience. Resource for Research in Human and Animal Vision A collection of resources in vision science and perception Vision and Psychophysics Vision, Scholarpedia Expert articles about Vision What are the limits of human vision?
Wikipedia/Intromission_theory
A photonic integrated circuit (PIC) or integrated optical circuit is a microchip containing two or more photonic components that form a functioning circuit. This technology detects, generates, transports, and processes light. Photonic integrated circuits use photons (or particles of light) as opposed to electrons that are used by electronic integrated circuits. The major difference between the two is that a photonic integrated circuit provides functions for information signals imposed on optical wavelengths typically in the visible spectrum or near-infrared (850–1650 nm). One of the most commercially utilized material platforms for photonic integrated circuits is indium phosphide (InP), which allows for the integration of various optically active and passive functions on the same chip. Initial examples of photonic integrated circuits were simple 2-section distributed Bragg reflector (DBR) lasers, consisting of two independently controlled device sections—a gain section and a DBR mirror section. Consequently, all modern monolithic tunable lasers, widely tunable lasers, externally modulated lasers and transmitters, integrated receivers, etc. are examples of photonic integrated circuits. As of 2012, devices integrate hundreds of functions onto a single chip. Pioneering work in this arena was performed at Bell Laboratories. The most notable academic centers of excellence of photonic integrated circuits in InP are the University of California at Santa Barbara, USA, the Eindhoven University of Technology, and the University of Twente in the Netherlands. A 2005 development showed that silicon can, even though it is an indirect bandgap material, still be used to generate laser light via the Raman nonlinearity. Such lasers are not electrically driven but optically driven and therefore still necessitate a further optical pump laser source. == History == Photonics is the science behind the detection, generation, and manipulation of photons. According to quantum mechanics and the concept of wave–particle duality first proposed by Albert Einstein in 1905, light acts as both an electromagnetic wave and a particle. For example, total internal reflection in an optical fibre allows it to act as a waveguide. Integrated circuits using electrical components were first developed in the late 1940s and early 1950s, but it took until 1958 for them to become commercially available. When the laser and laser diode were invented in the 1960s, the term "photonics" fell into more common usage to describe the application of light to replace applications previously achieved through the use of electronics. By the 1980s, photonics gained traction through its role in fibre optic communication. At the start of the decade, an assistant in a new research group at Delft University Of Technology, Meint Smit, started pioneering in the field of integrated photonics. He is credited with inventing the Arrayed Waveguide Grating (AWG), a core component of modern digital connections for the Internet and phones. Smit has received several awards, including an ERC Advanced Grant, a Rank Prize for Optoelectronics and a LEOS Technical Achievement Award. In October 2022, during an experiment held at the Technical University of Denmark in Copenhagen, a photonic chip transmitted 1.84 petabits per second of data over a fibre-optic cable more than 7.9 kilometres long. First, the data stream was split into 37 sections, each of which was sent down a separate core of the fibre-optic cable. 
Next, each of these channels was split into 223 parts corresponding to equidistant spikes of light across the spectrum. == Comparison to electronic integration == Unlike electronic integration where silicon is the dominant material, system photonic integrated circuits have been fabricated from a variety of material systems, including electro-optic crystals such as lithium niobate, silica on silicon, silicon on insulator, various polymers, and semiconductor materials which are used to make semiconductor lasers such as GaAs and InP. The different material systems are used because they each provide different advantages and limitations depending on the function to be integrated. For instance, silica (silicon dioxide) based PICs have very desirable properties for passive photonic circuits such as AWGs (see below) due to their comparatively low losses and low thermal sensitivity, GaAs or InP based PICs allow the direct integration of light sources and Silicon PICs enable co-integration of the photonics with transistor based electronics. The fabrication techniques are similar to those used in electronic integrated circuits in which photolithography is used to pattern wafers for etching and material deposition. Unlike electronics where the primary device is the transistor, there is no single dominant device. The range of devices required on a chip includes low loss interconnect waveguides, power splitters, optical amplifiers, optical modulators, filters, lasers and detectors. These devices require a variety of different materials and fabrication techniques making it difficult to realize all of them on a single chip. Newer techniques using resonant photonic interferometry is making way for UV LEDs to be used for optical computing requirements with much cheaper costs leading the way to petahertz consumer electronics. == Examples of photonic integrated circuits == The primary application for photonic integrated circuits is in the area of fiber-optic communication though applications in other fields such as biomedical and photonic computing are also possible. The arrayed waveguide gratings (AWGs) which are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) fiber-optic communication systems are an example of a photonic integrated circuit which has replaced previous multiplexing schemes which utilized multiple discrete filter elements. Since separating optical modes is a need for quantum computing, this technology may be helpful to miniaturize quantum computers (see linear optical quantum computing). Another example of a photonic integrated chip in wide use today in fiber-optic communication systems is the externally modulated laser (EML) which combines a distributed feed back laser diode with an electro-absorption modulator on a single InP based chip. == Applications == As global data consumption rises and demand for faster networks continues to grow, the world needs to find more sustainable solutions to the energy crisis and climate change. At the same time, ever more innovative applications for sensor technology, such as Lidar in autonomous driving vehicles, appear on the market. There is a need to keep pace with technological challenges. The expansion of 5G data networks and data centres, safer autonomous driving vehicles, and more efficient food production cannot be sustainably met by electronic microchip technology alone. 
However, combining electrical devices with integrated photonics provides a more energy efficient way to increase the speed and capacity of data networks, reduce costs and meet an increasingly diverse range of needs across various industries. === Data and telecommunications === The primary application for PICs is in the area of fibre-optic communication. The arrayed waveguide grating (AWG) which are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) fibre-optic communication systems are an example of a photonic integrated circuit. Another example in fibre-optic communication systems is the externally modulated laser (EML) which combines a distributed feedback laser diode with an electro-absorption modulator. The PICs can also increase bandwidth and data transfer speeds by deploying few-modes optical planar waveguides. Especially, if modes can be easily converted from conventional single-mode planar waveguides into few-mode waveguides, and selectively excite the desired modes. For example, a bidirectional spatial mode slicer and combiner can be used to achieve the desired higher or lower-order modes. Its principle of operation depends on cascading stages of V-shape and/ or M-shape graded-index planar waveguides. Not only can PICs increase bandwidth and data transfer speeds, they can reduce energy consumption in data centres, which spend a large proportion of energy on cooling servers. === Healthcare and medicine === Using advanced biosensors and creating more affordable diagnostic biomedical instruments, integrated photonics opens the door to lab-on-a-chip (LOC) technology, cutting waiting times, and taking diagnosis out of laboratories and into the hands of doctors and patients. Based on an ultrasensitive photonic biosensor, SurfiX Diagnostics' diagnostics platform provides a variety of point-of-care tests. Similarly, Amazec Photonics has developed a fibre optic sensing technology with photonic chips which enables high-resolution temperature sensing (fractions of 0.1 milliKelvin) without having to inject the temperature sensor within the body. This way, medical specialists are able to measure both cardiac output and circulating blood volume from outside the body. Another example of optical sensor technology is EFI's "OptiGrip" device, which offers greater control over tissue feeling for minimal invasive surgery. === Automotive and engineering applications === PICs can be applied in sensor systems, like Lidar (which stands for light detection and ranging), to monitor the surroundings of vehicles. It can also be deployed in-car connectivity through Li-Fi, which is similar to WiFi but uses light. This technology facilitates communication between vehicles and urban infrastructure to improve driver safety. For example, some modern vehicles pick up traffic signs and remind the driver of the speed limit. In terms of engineering, fibre optic sensors can be used to detect different quantities, such as pressure, temperature, vibrations, accelerations, and mechanical strain. Sensing technology from PhotonFirst uses integrated photonics to measure things like shape changes in aeroplanes, electric vehicle battery temperature, and infrastructure strain. === Agriculture and food === Sensors play a role in innovations in agriculture and the food industry in order to reduce wastage and detect diseases. 
Light sensing technology powered by PICs can measure variables beyond the range of the human eye, allowing the food supply chain to detect disease, ripeness and nutrients in fruit and plants. It can also help food producers to determine soil quality and plant growth, as well as measuring CO2 emissions. A new, miniaturised, near-infrared sensor, developed by MantiSpectra, is small enough to fit into a smartphone, and can be used to analyse chemical compounds of products like milk and plastics. === AI applications === In 2025, researchers at Columbia Engineering developed a 3D photonic-electronic chip that could significantly improve AI hardware. By combining light-based data movement with CMOS electronics, this chip addressed AI's energy and data transfer bottlenecks, improving both efficiency and bandwidth. The breakthrough allowed for high-speed, energy-efficient data communication, enabling AI systems to process vast amounts of data with minimal power. With a bandwidth of 800 Gb/s and a density of 5.3 Tb/s/mm², this technology offered major advances for AI, autonomous vehicles, and high-performance computing. == Types of fabrication and materials == The fabrication techniques are similar to those used in electronic integrated circuits, in which photolithography is used to pattern wafers for etching and material deposition. The platforms considered most versatile are indium phosphide (InP) and silicon photonics (SiPh): Indium phosphide (InP) PICs have active laser generation, amplification, control, and detection. This makes them an ideal component for communication and sensing applications. Silicon nitride (SiN) PICs have a vast spectral range and ultra low-loss waveguide. This makes them highly suited to detectors, spectrometers, biosensors, and quantum computers. The lowest propagation losses reported in SiN (0.1 dB/cm down to 0.1 dB/m) have been achieved by LioniX International's TriPleX waveguides. Silicon photonics (SiPh) PICs provide low losses for passive components like waveguides and can be used in minuscule photonic circuits. They are compatible with existing electronic fabrication. The term "silicon photonics" actually refers to the technology rather than the material. It combines high density photonic integrated circuits (PICs) with complementary metal oxide semiconductor (CMOS) electronics fabrication. The most technologically mature and commercially used platform is silicon on insulator (SOI). Other platforms include: Lithium niobate (LiNbO3) is an ideal modulator for low loss mode. It is highly effective at matching fibre input–output due to its low index and broad transparency window. For more complex PICs, lithium niobate can be formed into large crystals. As part of project ELENA, there is a European initiative to stimulate production of LiNbO3-PICs. Attempts are also being made to develop lithium niobate on insulator (LNOI). Silica has a low weight and small form factor. It is a common component of optical communication networks, such as planar light wave circuits (PLCs). Gallium arsenide (GaAS) has high electron mobility. This means GaAS transistors operate at high speeds, making them ideal analogue integrated circuit drivers for high speed lasers and modulators. By combining and configuring different chip types (including existing electronic chips) in a hybrid or heterogeneous integration, it is possible to leverage the strengths of each. Taking this complementary approach to integration addresses the demand for increasingly sophisticated energy-efficient solutions. 
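As a side note on the loss figures quoted above, the short Python sketch below converts a propagation loss given in dB per unit length into the fraction of optical power that survives a waveguide of a given length. The 5 cm path length is an assumed value chosen only for illustration; the two loss values are the SiN figures mentioned above (0.1 dB/cm and 0.1 dB/m).

```python
def remaining_fraction(loss_db_per_cm: float, length_cm: float) -> float:
    """Fraction of optical power left after propagating length_cm through a lossy waveguide."""
    total_loss_db = loss_db_per_cm * length_cm
    return 10 ** (-total_loss_db / 10)  # convert dB attenuation to a linear power ratio

for loss in (0.1, 0.001):  # 0.1 dB/cm and 0.1 dB/m (= 0.001 dB/cm)
    frac = remaining_fraction(loss, length_cm=5.0)
    print(f"{loss} dB/cm over 5 cm -> {frac:.3%} of input power remains")
```

For these assumed numbers, roughly 89% of the power survives at 0.1 dB/cm versus about 99.9% at 0.1 dB/m, which is why ultra-low-loss platforms matter for long on-chip paths.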
== Current status == As of 2010, photonic integration was an active topic in U.S. Defense contracts. It was included by the Optical Internetworking Forum for inclusion in 100 gigahertz optical networking standards. A recent study presents a novel two-dimensional photonic crystal design for electro-reflective modulators, offering reduced size and enhanced efficiency compared to traditional bulky structures. This design achieves high optical transmission ratios with precise angle control, addressing critical challenges in miniaturizing optoelectronic devices for improved performance in PICs. In this structure, both lateral and vertical fabrication technologies are combined, introducing a novel approach that merges two-dimensional designs with three-dimensional structures. This hybrid technique offers new possibilities for enhancing the functionality and integration of photonic components within photonic integrated circuits. == See also == Integrated quantum photonics Optical computing Optical transistor Silicon photonics == Notes == == References == Larry Coldren; Scott Corzine; Milan Mashanovitch (2012). Diode Lasers and Photonic Integrated Circuits (Second ed.). John Wiley and Sons. ISBN 9781118148181. McAulay, Alastair D. (1999). Optical Computer Architectures: The Application of Optical Concepts to Next Generation Computers. Guha, A.; Ramnarayan, R.; Derstine, M. (1987). "Architectural issues in designing symbolic processors in optics". Proceedings of the 14th annual international symposium on Computer architecture - ISCA '87. p. 145. doi:10.1145/30350.30367. ISBN 0818607769. S2CID 14228669. Altera Corporation (2011). "Overcome Copper Limits with Optical Interfaces" (PDF). Brenner, K.-H.; Huang, Alan (1986). "Logic and architectures for digital optical computers (A)". J. Opt. Soc. Am. A3: 62. Bibcode:1986JOSAA...3...62B. Brenner, K.-H. (1988). "A programmable optical processor based on symbolic substitution". Appl. Opt. 27 (9): 1687–1691. Bibcode:1988ApOpt..27.1687B. doi:10.1364/AO.27.001687. PMID 20531637. S2CID 43648075.
Wikipedia/Integrated_optics
In atomic physics, the Bohr model or Rutherford–Bohr model was a model of the atom that incorporated some early quantum concepts. Developed from 1911 to 1918 by Niels Bohr and building on Ernest Rutherford's nuclear model, it supplanted the plum pudding model of J. J. Thomson only to be replaced by the quantum atomic model in the 1920s. It consists of a small, dense nucleus surrounded by orbiting electrons. It is analogous to the structure of the Solar System, but with attraction provided by electrostatic force rather than gravity, and with the electron energies quantized (assuming only discrete values). In the history of atomic physics, it followed, and ultimately replaced, several earlier models, including Joseph Larmor's Solar System model (1897), Jean Perrin's model (1901), the cubical model (1902), Hantaro Nagaoka's Saturnian model (1904), the plum pudding model (1904), Arthur Haas's quantum model (1910), the Rutherford model (1911), and John William Nicholson's nuclear quantum model (1912). The improvement over the 1911 Rutherford model mainly concerned the new quantum mechanical interpretation introduced by Haas and Nicholson, but forsaking any attempt to explain radiation according to classical physics. The model's key success lies in explaining the Rydberg formula for hydrogen's spectral emission lines. While the Rydberg formula had been known experimentally, it did not gain a theoretical basis until the Bohr model was introduced. Not only did the Bohr model explain the reasons for the structure of the Rydberg formula, it also provided a justification for the fundamental physical constants that make up the formula's empirical results. The Bohr model is a relatively primitive model of the hydrogen atom, compared to the valence shell model. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics and thus may be considered to be an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics or energy level diagrams before moving on to the more accurate, but more complex, valence shell atom. A related quantum model was proposed by Arthur Erich Haas in 1910 but was rejected until the 1911 Solvay Congress where it was thoroughly discussed. The quantum theory of the period between Planck's discovery of the quantum (1900) and the advent of a mature quantum mechanics (1925) is often referred to as the old quantum theory. == Background == Until the second decade of the 20th century, atomic models were generally speculative. Even the concept of atoms, let alone atoms with internal structure, faced opposition from some scientists.: 2  === Planetary models === In the late 1800s speculations on the possible structure of the atom included planetary models with orbiting charged electrons.: 35  These models faced a significant constraint. In 1897, Joseph Larmor showed that an accelerating charge would radiate power according to classical electrodynamics, a result known as the Larmor formula. Since electrons forced to remain in orbit are continuously accelerating, they would be mechanically unstable. Larmor noted that electromagnetic effect of multiple electrons, suitable arranged, would cancel each other. 
Thus subsequent atomic models based on classical electrodynamics needed to adopt such special multi-electron arrangements.: 113  === Thomson's atom model === When Bohr began his work on a new atomic theory in the summer of 1912: 237  the atomic model proposed by J. J. Thomson, now known as the plum pudding model, was the best available.: 37  Thomson proposed a model with electrons rotating in coplanar rings within an atomic-sized, positively-charged, spherical volume. Thomson showed that this model was mechanically stable by lengthy calculations and was electrodynamically stable under his original assumption of thousands of electrons per atom. Moreover, he suggested that the particularly stable configurations of electrons in rings was connected to chemical properties of the atoms. He developed a formula for the scattering of beta particles that seemed to match experimental results.: 38  However Thomson himself later showed that the atom had a factor of a thousand fewer electrons, challenging the stability argument and forcing the poorly understood positive sphere to have most of the atom's mass. Thomson was also unable to explain the many lines in atomic spectra.: 18  === Rutherford nuclear model === In 1908, Hans Geiger and Ernest Marsden demonstrated that alpha particle occasionally scatter at large angles, a result inconsistent with Thomson's model. In 1911 Ernest Rutherford developed a new scattering model, showing that the observed large angle scattering could be explained by a compact, highly charged mass at the center of the atom. Rutherford scattering did not involve the electrons and thus his model of the atom was incomplete. Bohr begins his first paper on his atomic model by describing Rutherford's atom as consisting of a small, dense, positively charged nucleus attracting negatively charged electrons. === Atomic spectra === By the early twentieth century, it was expected that the atom would account for the many atomic spectral lines. These lines were summarized in empirical formula by Johann Balmer and Johannes Rydberg. In 1897, Lord Rayleigh showed that vibrations of electrical systems predicted spectral lines that depend on the square of the vibrational frequency, contradicting the empirical formula which depended directly on the frequency.: 18  In 1907 Arthur W. Conway showed that, rather than the entire atom vibrating, vibrations of only one of the electrons in the system described by Thomson might be sufficient to account for spectral series.: II:106  Although Bohr's model would also rely on just the electron to explain the spectrum, he did not assume an electrodynamical model for the atom. The other important advance in the understanding of atomic spectra was the Rydberg–Ritz combination principle which related atomic spectral line frequencies to differences between 'terms', special frequencies characteristic of each element.: 173  Bohr would recognize the terms as energy levels of the atom divided by the Planck constant, leading to the modern view that the spectral lines result from energy differences.: 847  === Haas atomic model === In 1910, Arthur Erich Haas proposed a model of the hydrogen atom with an electron circulating on the surface of a sphere of positive charge. 
The model resembled Thomson's plum pudding model, but Haas added a radical new twist: he constrained the electron's potential energy, $E_{\text{pot}}$, on a sphere of radius a to equal the frequency, f, of the electron's orbit on the sphere times the Planck constant:: 197  $E_{\text{pot}} = -\frac{e^2}{a} = hf$, where e represents the charge on the electron and the sphere. Haas combined this constraint with the balance-of-forces equation. The attractive force between the electron and the sphere balances the centrifugal force: $\frac{e^2}{a^2} = ma(2\pi f)^2$, where m is the mass of the electron. This combination relates the radius of the sphere to the Planck constant: $a = \frac{h^2}{4\pi^2 e^2 m}$. Haas solved for the Planck constant using the then-current value for the radius of the hydrogen atom. Three years later, Bohr would use similar equations with a different interpretation. Bohr took the Planck constant as a given value and used the equations to predict a, the radius of the electron orbiting in the ground state of the hydrogen atom. This value is now called the Bohr radius.: 197  === Influence of the Solvay Conference === The first Solvay Conference, in 1911, was one of the first international physics conferences. Nine Nobel or future Nobel laureates attended, including Ernest Rutherford, Bohr's mentor.: 271  Bohr did not attend but he read the Solvay reports and discussed them with Rutherford.: 233  The subject of the conference was the theory of radiation and the energy quanta of Max Planck's oscillators. Planck's lecture at the conference ended with comments about atoms, and the discussion that followed it concerned atomic models. Hendrik Lorentz raised the question of the composition of the atom based on Haas's model, a form of Thomson's plum pudding model with a quantum modification. Lorentz explained that the size of atoms could be taken to determine the Planck constant as Haas had done, or the Planck constant could be taken as determining the size of atoms.: 273  Bohr would adopt the second path. The discussions outlined the need for the quantum theory to be included in the atom. Planck explicitly mentions the failings of classical mechanics.: 273  While Bohr had already expressed a similar opinion in his PhD thesis, at Solvay the leading scientists of the day discussed a break with classical theories.: 244  Bohr's first paper on his atomic model cites the Solvay proceedings saying: "Whatever the alteration in the laws of motion of the electrons may be, it seems necessary to introduce in the laws in question a quantity foreign to the classical electrodynamics, i.e. Planck's constant, or as it often is called the elementary quantum of action." Encouraged by the Solvay discussions, Bohr would assume the atom was stable and abandon efforts to stabilize classical models of the atom.: 199  === Nicholson atom theory === In 1911 John William Nicholson published a model of the atom which would influence Bohr's model. Nicholson developed his model based on the analysis of astrophysical spectroscopy. He connected the observed spectral line frequencies with the orbits of electrons in his atoms. The connection he adopted associated the atomic electron orbital angular momentum with the Planck constant. Whereas Planck focused on a quantum of energy, Nicholson's angular momentum quantum relates to orbital frequency. 
This new concept gave Planck constant an atomic meaning for the first time.: 169  In his 1913 paper Bohr cites Nicholson as finding quantized angular momentum important for the atom. The other critical influence of Nicholson work was his detailed analysis of spectra. Before Nicholson's work Bohr thought the spectral data was not useful for understanding atoms. In comparing his work to Nicholson's, Bohr came to understand the spectral data and their value. When he then learned from a friend about Balmer's compact formula for the spectral line data, Bohr quickly realized his model would match it in detail.: 178  Nicholson's model was based on classical electrodynamics along the lines of J.J. Thomson's plum pudding model but his negative electrons orbiting a positive nucleus rather than circulating in a sphere. To avoid immediate collapse of this system he required that electrons come in pairs so the rotational acceleration of each electron was matched across the orbit.: 163  By 1913 Bohr had already shown, from the analysis of alpha particle energy loss, that hydrogen had only a single electron not a matched pair.: 195  Bohr's atomic model would abandon classical electrodynamics. Nicholson's model of radiation was quantum but was attached to the orbits of the electrons. Bohr quantization would associate it with differences in energy levels of his model of hydrogen rather than the orbital frequency. === Bohr's previous work === Bohr completed his PhD in 1911 with a thesis 'Studies on the Electron Theory of Metals', an application of the classical electron theory of Hendrik Lorentz. Bohr noted two deficits of the classical model. The first concerned the specific heat of metals which James Clerk Maxwell noted in 1875: every additional degree of freedom in a theory of metals, like subatomic electrons, cause more disagreement with experiment. The second, the classical theory could not explain magnetism.: 194  After his PhD, Bohr worked briefly in the lab of JJ Thomson before moving to Rutherford's lab in Manchester to study radioactivity. He arrived just after Rutherford completed his proposal of a compact nuclear core for atoms. Charles Galton Darwin, also at Manchester, had just completed an analysis of alpha particle energy loss in metals, concluding the electron collisions where the dominant cause of loss. Bohr showed in a subsequent paper that Darwin's results would improve by accounting for electron binding energy. Importantly this allowed Bohr to conclude that hydrogen atoms have a single electron.: 195  == Development == Next, Bohr was told by his friend, Hans Hansen, that the Balmer series is calculated using the Balmer formula, an empirical equation discovered by Johann Balmer in 1885 that described wavelengths of some spectral lines of hydrogen. This was further generalized by Johannes Rydberg in 1888, resulting in what is now known as the Rydberg formula. After this, Bohr declared, "everything became clear". In 1913 Niels Bohr put forth three postulates to provide an electron model consistent with Rutherford's nuclear model: The electron is able to revolve in certain stable orbits around the nucleus without radiating any energy, contrary to what classical electromagnetism suggests. These stable orbits are called stationary orbits and are attained at certain discrete distances from the nucleus. The electron cannot have any other orbit in between the discrete ones. 
The stationary orbits are attained at distances for which the angular momentum of the revolving electron is an integer multiple of the reduced Planck constant: $m_{\mathrm{e}} v r = n\hbar$, where $n = 1, 2, 3, \ldots$ is called the principal quantum number, and $\hbar = h/2\pi$. The lowest value of $n$ is 1; this gives the smallest possible orbital radius, known as the Bohr radius, of 0.0529 nm for hydrogen. Once an electron is in this lowest orbit, it can get no closer to the nucleus. Starting from the angular momentum quantum rule, which, as Bohr acknowledges, was previously given by Nicholson in his 1912 paper, Bohr was able to calculate the energies of the allowed orbits of the hydrogen atom and other hydrogen-like atoms and ions. These orbits are associated with definite energies and are also called energy shells or energy levels. In these orbits, the electron's acceleration does not result in radiation and energy loss. The Bohr model of an atom was based upon Planck's quantum theory of radiation. Electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation with a frequency $\nu$ determined by the energy difference of the levels according to the Planck relation: $\Delta E = E_2 - E_1 = h\nu$, where $h$ is the Planck constant. Other points are: Like Einstein's theory of the photoelectric effect, Bohr's formula assumes that during a quantum jump a discrete amount of energy is radiated. However, unlike Einstein, Bohr stuck to the classical Maxwell theory of the electromagnetic field. Quantization of the electromagnetic field was explained by the discreteness of the atomic energy levels; Bohr did not believe in the existence of photons. According to the Maxwell theory the frequency $\nu$ of classical radiation is equal to the rotation frequency $\nu_{\text{rot}}$ of the electron in its orbit, with harmonics at integer multiples of this frequency. This result is obtained from the Bohr model for jumps between energy levels $E_n$ and $E_{n-k}$ when $k$ is much smaller than $n$. These jumps reproduce the frequency of the $k$-th harmonic of orbit $n$. For sufficiently large values of $n$ (so-called Rydberg states), the two orbits involved in the emission process have nearly the same rotation frequency, so that the classical orbital frequency is not ambiguous. But for small $n$ (or large $k$), the radiation frequency has no unambiguous classical interpretation. This marks the birth of the correspondence principle, requiring quantum theory to agree with the classical theory only in the limit of large quantum numbers. The Bohr–Kramers–Slater theory (BKS theory) is a failed attempt to extend the Bohr model that violates the conservation of energy and momentum in quantum jumps, with the conservation laws only holding on average. Bohr's condition, that the angular momentum be an integer multiple of $\hbar$, was later reinterpreted in 1924 by de Broglie as a standing wave condition: the electron is described by a wave and a whole number of wavelengths must fit along the circumference of the electron's orbit: $n\lambda = 2\pi r$. 
According to de Broglie's hypothesis, matter particles such as the electron behave as waves. The de Broglie wavelength of an electron is $\lambda = \frac{h}{mv}$, which implies that $\frac{nh}{mv} = 2\pi r$, or $\frac{nh}{2\pi} = mvr$, where $mvr$ is the angular momentum of the orbiting electron. Writing $\ell$ for this angular momentum, the previous equation becomes $\ell = \frac{nh}{2\pi}$, which is Bohr's second postulate. Bohr described the angular momentum of the electron orbit in units of $h/2\pi$, while de Broglie's wavelength $\lambda = h/p$ is $h$ divided by the electron momentum. In 1913, however, Bohr justified his rule by appealing to the correspondence principle, without providing any sort of wave interpretation. In 1913, the wave behavior of matter particles such as the electron was not suspected. In 1925, a new kind of mechanics was proposed, quantum mechanics, in which Bohr's model of electrons traveling in quantized orbits was extended into a more accurate model of electron motion. The new theory was proposed by Werner Heisenberg. Another form of the same theory, wave mechanics, was discovered by the Austrian physicist Erwin Schrödinger independently, and by different reasoning. Schrödinger employed de Broglie's matter waves, but sought wave solutions of a three-dimensional wave equation describing electrons that were constrained to move about the nucleus of a hydrogen-like atom, by being trapped by the potential of the positive nuclear charge. == Electron energy levels == The Bohr model gives almost exact results only for a system where two charged points orbit each other at speeds much less than that of light. This not only involves one-electron systems such as the hydrogen atom, singly ionized helium, and doubly ionized lithium, but it includes positronium and Rydberg states of any atom where one electron is far away from everything else. It can be used for K-line X-ray transition calculations if other assumptions are added (see Moseley's law below). In high energy physics, it can be used to calculate the masses of heavy quark mesons. Calculation of the orbits requires two assumptions. Classical mechanics: The electron is held in a circular orbit by electrostatic attraction. The centripetal force is equal to the Coulomb force: $\frac{m_{\mathrm{e}} v^2}{r} = \frac{Z k_{\mathrm{e}} e^2}{r^2}$, where $m_{\mathrm{e}}$ is the electron's mass, $e$ is the elementary charge, $k_{\mathrm{e}}$ is the Coulomb constant and $Z$ is the atom's atomic number. It is assumed here that the mass of the nucleus is much larger than the electron mass (which is a good assumption). This equation determines the electron's speed at any radius: $v = \sqrt{\frac{Z k_{\mathrm{e}} e^2}{m_{\mathrm{e}} r}}$. It also determines the electron's total energy at any radius: $E = -\frac{1}{2} m_{\mathrm{e}} v^2$. The total energy is negative and inversely proportional to $r$. This means that it takes energy to pull the orbiting electron away from the proton. For infinite values of $r$, the energy is zero, corresponding to a motionless electron infinitely far from the proton. 
The total energy is half the potential energy, the difference being the kinetic energy of the electron. This is also true for noncircular orbits by the virial theorem. A quantum rule: The angular momentum $L = m_{\mathrm{e}}vr$ is an integer multiple of $\hbar$: $m_{\mathrm{e}} v r = n\hbar$. === Derivation === In classical mechanics, if an electron is orbiting around an atom with period T, and if its coupling to the electromagnetic field is weak, so that the orbit doesn't decay very much in one cycle, it will emit electromagnetic radiation in a pattern repeating at every period, so that the Fourier transform of the pattern will only have frequencies which are multiples of 1/T. However, in quantum mechanics, the quantization of angular momentum leads to discrete energy levels of the orbits, and the emitted frequencies are quantized according to the energy differences between these levels. This discrete nature of energy levels introduces a fundamental departure from the classical radiation law, giving rise to distinct spectral lines in the emitted radiation. Bohr assumes that the electron is circling the nucleus in an elliptical orbit obeying the rules of classical mechanics, but with no loss of radiation due to the Larmor formula. Denoting the total energy as E, the electron charge as −e, the nucleus charge as K = Ze, the electron mass as $m_{\mathrm{e}}$, and half the major axis of the ellipse as a, he starts with these equations:: 3  E is assumed to be negative, because a positive energy is required to unbind the electron from the nucleus and put it at rest at an infinite distance. Eq. (1a) is obtained from equating the centripetal force to the Coulomb force acting between the nucleus and the electron, considering that $E = T + U$ (where T is the average kinetic energy and U the average electrostatic potential), and that, by Kepler's second law, the average separation between the electron and the nucleus is a. Eq. (1b) is obtained from the same premises as eq. (1a) plus the virial theorem, stating that, for an elliptical orbit, Then Bohr assumes that $|E|$ is an integer multiple of the energy of a quantum of light with half the frequency of the electron's revolution frequency,: 4  i.e.: From eq. (1a, 1b, 2), it follows: He further assumes that the orbit is circular, i.e. $a = r$, and, denoting the angular momentum of the electron as L, introduces the equation: Eq. (4) stems from the virial theorem, and from the classical mechanics relationships between the angular momentum, the kinetic energy and the frequency of revolution. From eq. (1c, 2, 4), it follows: where: that is: This result states that the angular momentum of the electron is an integer multiple of the reduced Planck constant.: 15  Substituting the expression for the velocity gives an equation for r in terms of n: $m_{\mathrm{e}}\sqrt{\frac{k_{\mathrm{e}}Ze^{2}}{m_{\mathrm{e}}r}}\,r = n\hbar$, so that the allowed orbit radius at any n is $r_{n} = \frac{n^{2}\hbar^{2}}{Zk_{\mathrm{e}}e^{2}m_{\mathrm{e}}}$. The smallest possible value of r in the hydrogen atom (Z = 1) is called the Bohr radius and is equal to: $r_{1} = \frac{\hbar^{2}}{k_{\mathrm{e}}e^{2}m_{\mathrm{e}}} \approx 5.29\times 10^{-11}~\mathrm{m} = 52.9~\mathrm{pm}$. 
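As a quick numerical check of the Bohr radius expression just derived (an illustrative calculation, not part of Bohr's original paper), the following Python snippet evaluates $r_1 = \hbar^2/(k_{\mathrm{e}} e^2 m_{\mathrm{e}})$ with rounded modern values of the constants:

```python
# Rounded CODATA-style constants; this is a sanity check, not a precision calculation.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
k_e  = 8.9875517923e9    # Coulomb constant, N*m^2/C^2
e    = 1.602176634e-19   # elementary charge, C
m_e  = 9.1093837015e-31  # electron mass, kg

r1 = hbar**2 / (k_e * e**2 * m_e)
print(f"Bohr radius r1 ~ {r1:.3e} m")  # ~ 5.29e-11 m, matching the value quoted above
```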
The energy of the n-th level for any atom is determined by the radius and quantum number: $E = -\frac{Zk_{\mathrm{e}}e^{2}}{2r_{n}} = -\frac{Z^{2}(k_{\mathrm{e}}e^{2})^{2}m_{\mathrm{e}}}{2\hbar^{2}n^{2}} \approx \frac{-13.6\,Z^{2}}{n^{2}}~\mathrm{eV}$. An electron in the lowest energy level of hydrogen (n = 1) therefore has about 13.6 eV less energy than a motionless electron infinitely far from the nucleus. The next energy level (n = 2) is −3.4 eV. The third (n = 3) is −1.51 eV, and so on. For larger values of n, these are also the binding energies of a highly excited atom with one electron in a large circular orbit around the rest of the atom. The hydrogen formula also coincides with the Wallis product. The combination of natural constants in the energy formula is called the Rydberg energy ($R_{\mathrm{E}}$): $R_{\mathrm{E}} = \frac{(k_{\mathrm{e}}e^{2})^{2}m_{\mathrm{e}}}{2\hbar^{2}}$. This expression is clarified by interpreting it in combinations that form more natural units: $m_{\mathrm{e}}c^{2}$ is the rest mass energy of the electron (511 keV), $\frac{k_{\mathrm{e}}e^{2}}{\hbar c} = \alpha \approx \frac{1}{137}$ is the fine-structure constant, and $R_{\mathrm{E}} = \frac{1}{2}(m_{\mathrm{e}}c^{2})\alpha^{2}$. Since this derivation is with the assumption that the nucleus is orbited by one electron, we can generalize this result by letting the nucleus have a charge q = Ze, where Z is the atomic number. This will now give us energy levels for hydrogenic (hydrogen-like) atoms, which can serve as a rough order-of-magnitude approximation of the actual energy levels. So for nuclei with Z protons, the energy levels are (to a rough approximation): $E_{n} = -\frac{Z^{2}R_{\mathrm{E}}}{n^{2}}$. The actual energy levels cannot be solved analytically for more than one electron (see n-body problem) because the electrons are not only affected by the nucleus but also interact with each other via the Coulomb force. When $Z = 1/\alpha$ ($Z \approx 137$), the motion becomes highly relativistic, and $Z^{2}$ cancels the $\alpha^{2}$ in $R_{\mathrm{E}}$; the orbit energy begins to be comparable to rest energy. Sufficiently large nuclei, if they were stable, would reduce their charge by creating a bound electron from the vacuum, ejecting the positron to infinity. This is the theoretical phenomenon of electromagnetic charge screening which predicts a maximum nuclear charge. Emission of such positrons has been observed in the collisions of heavy ions to create temporary super-heavy nuclei. The Bohr formula properly uses the reduced mass of electron and proton in all situations, instead of the mass of the electron: $m_{\text{red}} = \frac{m_{\mathrm{e}}m_{\mathrm{p}}}{m_{\mathrm{e}}+m_{\mathrm{p}}} = m_{\mathrm{e}}\frac{1}{1+m_{\mathrm{e}}/m_{\mathrm{p}}}$. However, these numbers are very nearly the same, due to the much larger mass of the proton, about 1836.1 times the mass of the electron, so that the reduced mass in the system is the mass of the electron multiplied by the constant 1836.1/(1 + 1836.1) = 0.99946. 
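To illustrate the reduced-mass factor just quoted, the snippet below evaluates it from the modern proton-to-electron mass ratio (a constant assumed here, not given in the text) and lists the first few hydrogen energy levels from the $-13.6\,Z^{2}/n^{2}$ eV formula with and without the correction:

```python
m_ratio = 1836.15267343                  # proton-to-electron mass ratio (assumed modern value)
reduced_factor = m_ratio / (1 + m_ratio)
print(f"reduced-mass factor ~ {reduced_factor:.5f}")  # ~ 0.99946, as stated above

R_E = 13.6057                            # Rydberg energy in eV (infinite nuclear mass)
for n in (1, 2, 3):
    uncorrected = -R_E / n**2
    corrected = uncorrected * reduced_factor
    print(f"n={n}: {uncorrected:7.2f} eV  (with reduced mass: {corrected:7.2f} eV)")
```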
This fact was historically important in convincing Rutherford of the importance of Bohr's model, for it explained the fact that the frequencies of lines in the spectra for singly ionized helium do not differ from those of hydrogen by a factor of exactly 4, but rather by 4 times the ratio of the reduced mass for the hydrogen vs. the helium systems, which was much closer to the experimental ratio than exactly 4. For positronium, the formula uses the reduced mass also, but in this case, it is exactly the electron mass divided by 2. For any value of the radius, the electron and the positron are each moving at half the speed around their common center of mass, and each has only one fourth the kinetic energy. The total kinetic energy is half what it would be for a single electron moving around a heavy nucleus: $E_{n} = \frac{R_{\mathrm{E}}}{2n^{2}}$ (positronium). == Rydberg formula == Beginning in the late 1860s, Johann Balmer and later Johannes Rydberg and Walther Ritz developed increasingly accurate empirical formulas matching measured atomic spectral lines. Critical for Bohr's later work, Rydberg expressed his formula in terms of wave-number, equivalent to frequency. These formulas contained a constant, $R$, now known as the Rydberg constant, and a pair of integers indexing the lines:: 247  $\nu = R\left(\frac{1}{m^{2}} - \frac{1}{n^{2}}\right)$. Despite many attempts, no theory of the atom could reproduce these relatively simple formulas.: 169  Bohr's theory, by describing the energies of transitions or quantum jumps between orbital energy levels, is able to explain these formulas. For the hydrogen atom Bohr starts with his derived formula for the energy released as a free electron moves into a stable circular orbit indexed by $\tau$: $W_{\tau} = \frac{2\pi^{2}me^{4}}{h^{2}\tau^{2}}$. The energy difference between two such levels is then: $h\nu = W_{\tau_{2}} - W_{\tau_{1}} = \frac{2\pi^{2}me^{4}}{h^{2}}\left(\frac{1}{\tau_{2}^{2}} - \frac{1}{\tau_{1}^{2}}\right)$. Therefore, Bohr's theory gives the Rydberg formula and, moreover, the numerical value of the Rydberg constant for hydrogen in terms of more fundamental constants of nature, including the electron's charge, the electron's mass, and the Planck constant:: 31  $cR_{\text{H}} = \frac{2\pi^{2}me^{4}}{h^{3}}$. Since the energy of a photon is $E = \frac{hc}{\lambda}$, these results can be expressed in terms of the wavelength of the photon given off: $\frac{1}{\lambda} = R\left(\frac{1}{n_{\text{f}}^{2}} - \frac{1}{n_{\text{i}}^{2}}\right)$. Bohr's derivation of the Rydberg constant, as well as the concomitant agreement of Bohr's formula with experimentally observed spectral lines of the Lyman ($n_{\text{f}} = 1$), Balmer ($n_{\text{f}} = 2$), and Paschen ($n_{\text{f}} = 3$) series, and successful theoretical prediction of other lines not yet observed, was one reason that his model was immediately accepted.: 34  To apply to atoms with more than one electron, the Rydberg formula can be modified by replacing Z with Z − b or n with n − b, where b is a constant representing a screening effect due to the inner-shell and other electrons (see Electron shell and the later discussion of the "Shell Model of the Atom" below). 
This was established empirically before Bohr presented his model. == Shell model (heavier atoms) == Bohr's original three papers in 1913 described mainly the electron configuration in lighter elements. Bohr called his electron shells "rings" in 1913. Atomic orbitals within shells did not exist at the time of his planetary model. Bohr explains in Part 3 of his famous 1913 paper that the maximum number of electrons in a shell is eight, writing: "We see, further, that a ring of n electrons cannot rotate in a single ring round a nucleus of charge ne unless n < 8." For smaller atoms, the electron shells would be filled as follows: "rings of electrons will only join together if they contain equal numbers of electrons; and that accordingly the numbers of electrons on inner rings will only be 2, 4, 8". However, in larger atoms the innermost shell would contain eight electrons, "on the other hand, the periodic system of the elements strongly suggests that already in neon N = 10 an inner ring of eight electrons will occur". Bohr wrote "From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:" In Bohr's third 1913 paper, Part III, called "Systems Containing Several Nuclei", he says that two atoms form molecules on a symmetrical plane and he reverts to describing hydrogen. The 1913 Bohr model did not discuss higher elements in detail, and John William Nicholson was one of the first to prove in 1914 that it couldn't work for lithium, but was an attractive theory for hydrogen and ionized helium. In 1921, following the work of chemists and others involved in work on the periodic table, Bohr extended the model of hydrogen to give an approximate model for heavier atoms. This gave a physical picture that reproduced many known atomic properties for the first time, although these properties were proposed contemporarily with the identical work of chemist Charles Rugeley Bury. Bohr's partner in research during 1914 to 1916 was Walther Kossel, who corrected Bohr's work to show that electrons interacted through the outer rings, and Kossel called the rings "shells". Irving Langmuir is credited with the first viable arrangement of electrons in shells with only two in the first shell and going up to eight in the next according to the octet rule of 1904, although Kossel had already predicted a maximum of eight per shell in 1916. Heavier atoms have more protons in the nucleus, and more electrons to cancel the charge. Bohr took from these chemists the idea that each discrete orbit could only hold a certain number of electrons. Per Kossel, once an orbit is full, the next level would have to be used. This gives the atom a shell structure designed by Kossel, Langmuir, and Bury, in which each shell corresponds to a Bohr orbit. This model is even more approximate than the model of hydrogen, because it treats the electrons in each shell as non-interacting. But the repulsions of electrons are taken into account somewhat by the phenomenon of screening. The electrons in outer orbits do not only orbit the nucleus, but they also move around the inner electrons, so the effective charge Z that they feel is reduced by the number of the electrons in the inner orbit. For example, the lithium atom has two electrons in the lowest 1s orbit, and these orbit at Z = 2. Each one sees the nuclear charge of Z = 3 minus the screening effect of the other, which crudely reduces the nuclear charge by 1 unit. This means that the innermost electrons orbit at approximately 1/2 the Bohr radius. 
The outermost electron in lithium orbits at roughly the Bohr radius, since the two inner electrons reduce the nuclear charge by 2. This outer electron should be at nearly one Bohr radius from the nucleus. Because the electrons strongly repel each other, the effective charge description is very approximate; the effective charge Z doesn't usually come out to be an integer. The shell model was able to qualitatively explain many of the mysterious properties of atoms which became codified in the late 19th century in the periodic table of the elements. One property was the size of atoms, which could be determined approximately by measuring the viscosity of gases and the density of pure crystalline solids. Atoms tend to get smaller toward the right in the periodic table, and become much larger at the next line of the table. Atoms to the right of the table tend to gain electrons, while atoms to the left tend to lose them. Every element on the last column of the table is chemically inert (noble gas). In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, which explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbital contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra "d" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n = 3 d-orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models and which are difficult to calculate even in the modern treatment. == Moseley's law and calculation (K-alpha X-ray emission lines) == Niels Bohr said in 1962: "You see actually the Rutherford work was not taken seriously. We cannot understand today, but it was not taken seriously at all. There was no mention of it any place. The great change came from Moseley." In 1913, Henry Moseley found an empirical relationship between the strongest X-ray line emitted by atoms under electron bombardment (then known as the K-alpha line), and their atomic number Z. Moseley's empirical formula was found to be derivable from Rydberg's formula and later Bohr's formula (Moseley actually mentions only Ernest Rutherford and Antonius Van den Broek in terms of models, as these had been published before Moseley's work, and Moseley's 1913 paper was published the same month as the first Bohr model paper). The two additional assumptions were that [1] this X-ray line came from a transition between energy levels with quantum numbers 1 and 2, and [2] that the atomic number Z, when used in the formula for atoms heavier than hydrogen, should be diminished by 1, to (Z − 1)². Moseley wrote to Bohr, puzzled about his results, but Bohr was not able to help. At that time, he thought that the postulated innermost "K" shell of electrons should have at least four electrons, not the two which would have neatly explained the result. So Moseley published his results without a theoretical explanation. 
It was Walther Kossel in 1914 and in 1916 who explained that in the periodic table new elements would be created as electrons were added to the outer shell. In Kossel's paper, he writes: "This leads to the conclusion that the electrons, which are added further, should be put into concentric rings or shells, on each of which ... only a certain number of electrons—namely, eight in our case—should be arranged. As soon as one ring or shell is completed, a new one has to be started for the next element; the number of electrons, which are most easily accessible, and lie at the outermost periphery, increases again from element to element and, therefore, in the formation of each new shell the chemical periodicity is repeated." Later, chemist Langmuir realized that the effect was caused by charge screening, with an inner shell containing only 2 electrons. In his 1919 paper, Irving Langmuir postulated the existence of "cells" which could each contain only two electrons, and these were arranged in "equidistant layers". In the Moseley experiment, one of the innermost electrons in the atom is knocked out, leaving a vacancy in the lowest Bohr orbit, which contains a single remaining electron. This vacancy is then filled by an electron from the next orbit, which has n = 2. But the n = 2 electrons see an effective charge of Z − 1, which is the value appropriate for the charge of the nucleus when a single electron remains in the lowest Bohr orbit to screen the nuclear charge +Z and lower it by −1 (due to the electron's negative charge screening the nuclear positive charge). The energy gained by an electron dropping from the second shell to the first gives Moseley's law for K-alpha lines, E = hν = E_i − E_f = R_E (Z − 1)² (1/1² − 1/2²), or f = ν = R_v (3/4) (Z − 1)² = (2.46 × 10¹⁵ Hz) (Z − 1)². Here, R_v = R_E/h is the Rydberg constant, in terms of frequency equal to 3.28 × 10¹⁵ Hz. For values of Z between 11 and 31 this latter relationship had been empirically derived by Moseley, in a simple (linear) plot of the square root of X-ray frequency against atomic number (however, for silver, Z = 47, the experimentally obtained screening term should be replaced by 0.4). Notwithstanding its restricted validity, Moseley's law not only established the objective meaning of atomic number, but, as Bohr noted, it also did more than the Rydberg derivation to establish the validity of the Rutherford/Van den Broek/Bohr nuclear model of the atom, with atomic number (place on the periodic table) standing for whole units of nuclear charge. Van den Broek had published his model in January 1913, showing the periodic table was arranged according to charge, while Bohr's atomic model was not published until July 1913. The K-alpha line of Moseley's time is now known to be a pair of close lines, written as (Kα₁ and Kα₂) in Siegbahn notation. == Shortcomings == The Bohr model gives an incorrect value L = ħ for the ground state orbital angular momentum: the angular momentum in the true ground state is known to be zero from experiment.
Although mental pictures fail somewhat at these levels of scale, an electron in the lowest modern "orbital" with no orbital momentum, may be thought of as not to revolve "around" the nucleus at all, but merely to go tightly around it in an ellipse with zero area (this may be pictured as "back and forth", without striking or interacting with the nucleus). This is only reproduced in a more sophisticated semiclassical treatment like Sommerfeld's. Still, even the most sophisticated semiclassical model fails to explain the fact that the lowest energy state is spherically symmetric – it doesn't point in any particular direction. In modern quantum mechanics, the electron in hydrogen is a spherical cloud of probability that grows denser near the nucleus. The rate-constant of probability-decay in hydrogen is equal to the inverse of the Bohr radius, but since Bohr worked with circular orbits, not zero area ellipses, the fact that these two numbers exactly agree is considered a "coincidence". (However, many such coincidental agreements are found between the semiclassical vs. full quantum mechanical treatment of the atom; these include identical energy levels in the hydrogen atom and the derivation of a fine-structure constant, which arises from the relativistic Bohr–Sommerfeld model (see below) and which happens to be equal to an entirely different concept, in full modern quantum mechanics). The Bohr model also failed to explain: Much of the spectra of larger atoms. At best, it can make predictions about the K-alpha and some L-alpha X-ray emission spectra for larger atoms, if two additional ad hoc assumptions are made. Emission spectra for atoms with a single outer-shell electron (atoms in the lithium group) can also be approximately predicted. Also, if the empiric electron–nuclear screening factors for many atoms are known, many other spectral lines can be deduced from the information, in similar atoms of differing elements, via the Ritz–Rydberg combination principles (see Rydberg formula). All these techniques essentially make use of Bohr's Newtonian energy-potential picture of the atom. The relative intensities of spectral lines; although in some simple cases, Bohr's formula or modifications of it, was able to provide reasonable estimates (for example, calculations by Kramers for the Stark effect). The existence of fine structure and hyperfine structure in spectral lines, which are known to be due to a variety of relativistic and subtle effects, as well as complications from electron spin. The Zeeman effect – changes in spectral lines due to external magnetic fields; these are also due to more complicated quantum principles interacting with electron spin and orbital magnetic fields. Doublets and triplets appear in the spectra of some atoms as very close pairs of lines. Bohr's model cannot say why some energy levels should be very close together. Multi-electron atoms do not have energy levels predicted by the model. It does not work for (neutral) helium. == Refinements == Several enhancements to the Bohr model were proposed, most notably the Sommerfeld or Bohr–Sommerfeld models, which suggested that electrons travel in elliptical orbits around a nucleus instead of the Bohr model's circular orbits. 
This model supplemented the quantized angular momentum condition of the Bohr model with an additional radial quantization condition, the Wilson–Sommerfeld quantization condition ∫₀ᵀ p_r dq_r = nh, where p_r is the radial momentum canonically conjugate to the coordinate q_r, which is the radial position, and T is one full orbital period. The integral is the action of action-angle coordinates. This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants. The Bohr–Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the xy plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could have any orientation relative to the coordinates, without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and sometimes gives different answers. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum-mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926. However, this is not to say that the Bohr–Sommerfeld model was without its successes. Calculations based on the Bohr–Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron. The Bohr–Sommerfeld quantization conditions lead to questions in modern mathematics. Consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization. Bohr also updated his model in 1922, assuming that certain numbers of electrons (for example, 2, 8, and 18) correspond to stable "closed shells". == Model of the chemical bond == Niels Bohr proposed a model of the atom and a model of the chemical bond. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei.
The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other. == Symbolism of planetary atomic models == Although Bohr's atomic model was superseded by quantum models in the 1920s, the visual image of electrons orbiting a nucleus has remained the popular concept of atoms. The concept of an atom as a tiny planetary system has been widely used as a symbol for atoms and even for "atomic" energy (even though this is more properly considered nuclear energy).: 58  Examples of its use over the past century include but are not limited to: The logo of the United States Atomic Energy Commission, which was in part responsible for its later usage in relation to nuclear fission technology in particular. The flag of the International Atomic Energy Agency is a "crest-and-spinning-atom emblem", enclosed in olive branches. The US minor league baseball Albuquerque Isotopes' logo shows baseballs as electrons orbiting a large letter "A". A similar symbol, the atomic whirl, was chosen as the symbol for the American Atheists, and has come to be used as a symbol of atheism in general. The Unicode Miscellaneous Symbols code point U+269B (⚛) for an atom looks like a planetary atom model. The television show The Big Bang Theory uses a planetary-like image in its print logo. The JavaScript library React uses planetary-like image as its logo. On maps, it is generally used to indicate a nuclear power installation. == See also == == References == === Footnotes === === Primary sources === Bohr, N. (July 1913). "I. On the constitution of atoms and molecules". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 26 (151): 1–25. Bibcode:1913PMag...26....1B. doi:10.1080/14786441308634955. Bohr, N. (September 1913). "XXXVII. On the constitution of atoms and molecules". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 26 (153): 476–502. Bibcode:1913PMag...26..476B. doi:10.1080/14786441308634993. Bohr, N. (1 November 1913). "LXXIII. On the constitution of atoms and molecules". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 26 (155): 857–875. Bibcode:1913PMag...26..857B. doi:10.1080/14786441308635031. Bohr, N. (October 1913). "The Spectra of Helium and Hydrogen". Nature. 92 (2295): 231–232. Bibcode:1913Natur..92..231B. doi:10.1038/092231d0. S2CID 11988018. Bohr, N. (March 1921). "Atomic Structure". Nature. 107 (2682): 104–107. Bibcode:1921Natur.107..104B. doi:10.1038/107104a0. S2CID 4035652. A. Einstein (1917). "Zum Quantensatz von Sommerfeld und Epstein". Verhandlungen der Deutschen Physikalischen Gesellschaft. 19: 82–92. Reprinted in The Collected Papers of Albert Einstein, A. Engel translator, (1997) Princeton University Press, Princeton. 6 p. 434. (provides an elegant reformulation of the Bohr–Sommerfeld quantization conditions, as well as an important insight into the quantization of non-integrable (chaotic) dynamical systems.) de Broglie, Maurice; Langevin, Paul; Solvay, Ernest; Einstein, Albert (1912). La théorie du rayonnement et les quanta : rapports et discussions de la réunion tenue à Bruxelles, du 30 octobre au 3 novembre 1911, sous les auspices de M.E. Solvay (in French). Gauthier-Villars. OCLC 1048217622. 
== Further reading == Linus Carl Pauling (1970). "Chapter 5-1". General Chemistry (3rd ed.). San Francisco: W.H. Freeman & Co. Reprint: Linus Pauling (1988). General Chemistry. New York: Dover Publications. ISBN 0-486-65622-5. George Gamow (1985). "Chapter 2". Thirty Years That Shook Physics. Dover Publications. Walter J. Lehmann (1972). "Chapter 18". Atomic and Molecular Structure: the development of our concepts. John Wiley and Sons. ISBN 0-471-52440-9. Paul Tipler and Ralph Llewellyn (2002). Modern Physics (4th ed.). W. H. Freeman. ISBN 0-7167-4345-0. Klaus Hentschel: Elektronenbahnen, Quantensprünge und Spektren, in: Charlotte Bigg & Jochen Hennig (eds.) Atombilder. Ikonografien des Atoms in Wissenschaft und Öffentlichkeit des 20. Jahrhunderts, Göttingen: Wallstein-Verlag 2009, pp. 51–61 Steven and Susan Zumdahl (2010). "Chapter 7.4". Chemistry (8th ed.). Brooks/Cole. ISBN 978-0-495-82992-8. Kragh, Helge (November 2011). "Conceptual objections to the Bohr atomic theory — do electrons have a 'free will'?". The European Physical Journal H. 36 (3): 327–352. Bibcode:2011EPJH...36..327K. doi:10.1140/epjh/e2011-20031-x. S2CID 120859582. == External links == Standing waves in Bohr's atomic model—An interactive simulation to intuitively explain the quantization condition of standing waves in Bohr's atomic model
Wikipedia/Bohr_model_of_the_atom
Fiber-reinforced concrete or fibre-reinforced concrete (FRC) is concrete containing fibrous material which increases its structural integrity. It contains short discrete fibers that are uniformly distributed and randomly oriented. Fibers include steel fibers, glass fibers, synthetic fibers and natural fibers – each of which lends varying properties to the concrete. In addition, the character of fiber-reinforced concrete changes with varying concretes, fiber materials, geometries, distribution, orientation, and densities. == Historical perspective == The concept of using fibers as reinforcement is not new. Fibers have been used as reinforcement since ancient times. Historically, horsehair was used in mortar and straw in mudbricks. In the 1900s, asbestos fibers were used in concrete. In the 1950s, the concept of composite materials came into being and fiber-reinforced concrete was one of the topics of interest. Once the health risks associated with asbestos were discovered, there was a need to find a replacement for the substance in concrete and other building materials. By the 1960s, steel, glass (GFRC), and synthetic (such as polypropylene) fibers were used in concrete. Research into new fiber-reinforced concretes continues today. Fibers are usually used in concrete to control cracking due to plastic shrinkage and to drying shrinkage. They also reduce the permeability of concrete and thus reduce bleeding of water. Some types of fibers produce greater impact, abrasion, and shatter resistance in concrete. Larger steel or synthetic fibers can replace rebar or steel completely in certain situations. Fiber-reinforced concrete has all but completely replaced rebar in the underground construction industry, for example in tunnel segments, where almost all tunnel linings are fiber-reinforced in lieu of using rebar. This may, in part, be due to issues relating to oxidation or corrosion of steel reinforcements, which can occur in climates subjected to water or intense and repeated moisture (see Surfside Building Collapse). Indeed, some fibers actually reduce the compressive strength of concrete. Lignocellulosic fibers in a cement matrix can degrade due to the hydrolysis of lignin and hemicelluloses. The amount of fibers added to a concrete mix is expressed as a percentage of the total volume of the composite (concrete and fibers), termed "volume fraction" (Vf). Vf typically ranges from 0.1 to 3%. The aspect ratio (l/d) is calculated by dividing fiber length (l) by its diameter (d); fibers with a non-circular cross section use an equivalent diameter for the calculation of aspect ratio. If the fibers' modulus of elasticity is higher than that of the matrix (concrete or mortar binder), they help to carry the load by increasing the tensile strength of the material. Increasing the aspect ratio of the fiber usually increases the flexural strength and toughness of the matrix. A longer fiber length gives better anchorage in the matrix, and a finer diameter increases the number of fibers for a given dosage. To ensure that each fiber strand is effective, it is recommended to use fibers longer than the maximum aggregate size. Normal concrete contains aggregate of up to about 19 mm (0.75 in) equivalent diameter, making up 35–45% of the concrete, so fibers longer than 20 mm (0.79 in) are more effective. However, fibers that are too long and not properly treated at the time of processing tend to "ball" in the mix and create workability problems. Fibers are added for the long-term durability of concrete.
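The volume fraction and aspect ratio defined above are simple ratios; the short sketch below illustrates them with invented numbers (a hypothetical 50 mm × 0.75 mm steel fiber dosed at Vf = 0.5%). It is purely illustrative and not drawn from the standards or sources cited in this article.

```python
import math

def aspect_ratio(length_mm: float, diameter_mm: float) -> float:
    """Aspect ratio l/d of a fiber with a circular cross section."""
    return length_mm / diameter_mm

def equivalent_diameter(cross_section_area_mm2: float) -> float:
    """Equivalent diameter of a non-circular fiber: the diameter of a circle
    with the same cross-sectional area."""
    return math.sqrt(4.0 * cross_section_area_mm2 / math.pi)

# Hypothetical example: a 50 mm long, 0.75 mm diameter steel fiber dosed at
# a volume fraction Vf of 0.5% of the composite.
length_mm, diameter_mm, vf = 50.0, 0.75, 0.005
print(f"aspect ratio l/d: {aspect_ratio(length_mm, diameter_mm):.0f}")   # ~67
print(f"longer than the 20 mm guideline: {length_mm > 20.0}")            # True
fiber_volume_mm3 = math.pi * (diameter_mm / 2.0) ** 2 * length_mm        # volume of one fiber
fibers_per_m3 = vf * 1e9 / fiber_volume_mm3                              # 1 m^3 = 1e9 mm^3
print(f"approximate fibers per cubic metre: {fibers_per_m3:,.0f}")       # ~226,000
```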
Glass and polyester fibers can decompose in the alkaline conditions of concrete, which is addressed with various additives and surface treatments. The High Speed 1 tunnel linings incorporated concrete containing 1 kg/m³ or more of polypropylene fibers, with diameters of 18 and 32 μm, giving the benefits noted below. Adding fine-diameter polypropylene fibers not only provides reinforcement in the tunnel lining, but also prevents "spalling" and damage to the lining in the case of an accidental fire. == Benefits == Glass fibers can: Improve concrete strength at low cost. Add tensile reinforcement in all directions, unlike rebar. Add a decorative look as they are visible in the finished concrete surface. Polypropylene and nylon fibers can: Improve mix cohesion, improving pumpability over long distances Improve freeze-thaw resistance Improve resistance to explosive spalling in case of a severe fire Improve impact- and abrasion-resistance Increase resistance to plastic shrinkage during curing Improve structural strength Reduce steel reinforcement requirements Improve ductility Reduce crack widths and control the crack widths tightly, thus improving durability Steel fibers can: Improve structural strength Reduce steel reinforcement requirements Reduce crack widths and control the crack widths tightly, thus improving durability Improve impact- and abrasion-resistance Improve freeze-thaw resistance Natural (lignocellulosic, LC) fibers and/or particles can: Improve ductility Contribute to crack control via bridging Reduce the negative environmental impact of the materials (GWP - global warming potential) Reduce weight LC (plant-based) fibers and particles can, however, degrade in a cement matrix Blends of both steel and polymeric fibers are often used in construction projects in order to combine the benefits of both products: the structural improvements provided by steel fibers, and the resistance to explosive spalling and plastic shrinkage improvements provided by polymeric fibers. In certain specific circumstances, steel fiber or macro synthetic fibers can entirely replace traditional steel reinforcement bar ("rebar") in reinforced concrete. This is most common in industrial flooring but also in some other precasting applications. Typically, these designs are corroborated with laboratory testing to confirm that performance requirements are met. Care should be taken to ensure that local design code requirements are also met, which may impose minimum quantities of steel reinforcement within the concrete. There are increasing numbers of tunnelling projects using precast lining segments reinforced only with steel fibers. Micro-rebar has also been recently tested and approved to replace traditional reinforcement in vertical walls designed in accordance with ACI 318 Chapter 14. == Some developments == At least half of the concrete in a typical building component serves to protect the steel reinforcement from corrosion. Using only fiber as reinforcement can therefore result in savings of concrete, and of the greenhouse gas emissions associated with it. FRC can be molded into many shapes, giving designers and engineers greater flexibility. High-performance FRC (HPFRC) is claimed to sustain strain-hardening up to several percent strain, resulting in a material ductility at least two orders of magnitude higher than that of normal concrete or standard fiber-reinforced concrete. HPFRC is also claimed to show a unique cracking behavior: when loaded beyond the elastic range, it maintains crack widths below 100 μm, even when deformed to several percent tensile strain.
Field trials of HPFRC by the Michigan Department of Transportation resulted in early-age cracking. Recent studies on high-performance fiber-reinforced concrete in a bridge deck found that adding fibers provided residual strength and controlled cracking. There were fewer and narrower cracks in the FRC even though the FRC had more shrinkage than the control. Residual strength is directly proportional to the fiber content. The use of natural fibers has become a topic of research mainly due to the expected positive environmental impact, recyclability, and economy. The degradation of natural fibers and particles in a cement matrix is a concern. Some studies were performed using waste carpet fibers in concrete as an environmentally friendly use of recycled carpet waste. A carpet typically consists of two layers of backing (usually fabric from polypropylene tape yarns), joined by CaCO₃-filled styrene-butadiene latex rubber (SBR), and face fibers (the majority being nylon 6 and nylon 66 textured yarns). Such nylon and polypropylene fibers can be used for concrete reinforcement. Other ideas are emerging for using recycled materials as fibers: recycled polyethylene terephthalate (PET) fiber, for example. == Standards == === International === The following are several international standards for fiber-reinforced concrete: BS EN 14889-1:2006 – Fibres for Concrete. Steel Fibres. Definitions, specifications & conformity BS EN 14845-1:2007 – Test methods for fibres in concrete ASTM A820-16 – Standard Specification for Fiber-Reinforced Concrete (superseded) ASTM C1116/C1116M – Standard Specification for Fiber-Reinforced Concrete ASTM C1018-97 – Standard Test Method for Flexural Toughness and First-Crack Strength of Fiber-Reinforced Concrete (Using Beam With Third-Point Loading) (Withdrawn 2006) === Canada === CSA A23.1-19 Annex U – Ultra High Performance Concrete (with and without Fiber Reinforcement) CSA S6-19, 8.1 – Design Guideline for Ultra High Performance Concrete == See also == Fiber-reinforced plastic Glass-reinforced plastic Reinforced concrete Steel fibre-reinforced shotcrete Textile-reinforced concrete == References == === Citations === === Books === Wietek, Bernhard (2021). Kumm, Frieder (ed.). Fiber Concrete: In Construction. Wiesbaden: Springer Nature. doi:10.1007/978-3-658-34481-8. ISBN 978-3-658-34480-1. OCLC 1267299045.
Wikipedia/Fiber-reinforced_concrete
In photography and cinematography, headroom or head room is a concept of aesthetic composition that addresses the relative vertical position of the subject within the frame of the image. Headroom refers specifically to the distance between the top of the subject's head and the top of the frame, but the term is sometimes used instead of lead room, nose room or 'looking room' to include the sense of space on both sides of the image. The amount of headroom that is considered aesthetically pleasing is a dynamic quantity; it changes relative to how much of the frame is filled by the subject. Rather than pointing and shooting, one must compose the image to be pleasing. Too much room between a subject's head and the top of frame results in dead space. == Origins of headroom == The concept of headroom was born with portrait painting techniques. Classical painters used a technique linked to headroom called the "rule of thirds". The "rule of thirds" was first coined by the painter John Thomas Smith in his book "Remarks on Rural Scenery." The rule of thirds suggests that the subject's eyes, as a centre of interest, are ideally positioned one-third of the way down from the top of the frame. With a subject placed one-third of the way down from the top of the frame, the subject aligns with the proper headroom to make an image pleasing to the eye. This technique has carried on with other visual forms of art such as photography and cinematography. == Psychology of headroom == Perceptual psychological studies have been carried out with experimenters using a white dot placed in various positions within a frame to demonstrate that observers attribute potential motion to a static object within a frame, relative to its position. The unmoving object is described as 'pulling' toward the center or toward an edge or corner. Proper headroom is achieved when the object is no longer seen to be slipping out of the frame—when its potential for motion is seen to be neutral in all directions. Head room can also aid the creator in imposing a certain feeling upon the viewer. Headroom is a way of balancing out a frame. According to Dr. John Suhler in his e-book Photographic Psychology: Image and Psyche, “the eye appreciates the appearance of balance in [an image]. It makes us feel centered, steady, and stable. It suggests poise and gracefulness.” Headroom helps create this balance. If there is too much headroom, this can make a viewer feel unsettled. On the other hand, a head that is partially cut off can make a viewer feel claustrophobic. A perfectly framed headroom can make the viewer feel at ease and focus on the eyes of a subject. This allows a filmmaker to advance the story visually by altering a viewer's subconscious. It may also aid a photographer in creating the tone they want in a photograph. For a head shot, they would want the viewer to be completely at ease and be drawn to the person in the shot. They would frame it perfectly. For a piece of art about mental illness, they may give the viewer either too much headroom or entirely too little. It depends on the overall effect a photographer wants. == Headroom in cinematography == Cinematography has the added factors of the movement of the subject, the movement of the camera, and the possibility of zooming in or out. Headroom changes as the camera zooms in or out, and the camera must simultaneously tilt up or down to keep the center of interest approximately one-third of the way down from the top of the frame. The closer the subject, the less headroom needed. 
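Purely as an illustration of the one-third guideline just described (not something taken from the cited sources), the target eye line can be computed for a given frame height; the frame size below is an arbitrary example.

```python
def eye_line_from_top(frame_height_px: int) -> int:
    """Approximate row for the subject's eyes under the rule of thirds:
    one third of the way down from the top of the frame."""
    return round(frame_height_px / 3)

# Example: a 1920 x 1080 frame puts the eye line near row 360 from the top.
print(eye_line_from_top(1080))  # 360
```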
In extreme close-ups, the top of the head is out of the frame, but the concept of headroom still applies via the rule of thirds. This also changes when one is shooting an extreme wide shot. The subject can still be placed correctly for the rule of thirds, but have a significant amount of space between his/her head and the top of the frame. It can be more visually pleasing to give equal weight to the sky and ground rather than ensuring proper headroom is being achieved. == Headroom in television == In television broadcast camera work, the amount of headroom seen by the production crew is slightly greater than the amount seen by home viewers, whose frames are reduced in area by about 5%. To adjust for this, broadcast camera headroom is slightly expanded so that home viewers will see the correct amount of headroom. Professional video camera viewfinders and professional video monitors often include an overscan setting to compare between full screen resolution and "domestic cut-off" as an aid to achieving good headroom and lead room. == Examples == == See also == Highlight headroom Max Headroom == References == == Further reading == Millerson, Gerald. Video camera techniques, Focal Press, 1994, p. 80. ISBN 0-240-51376-2
Wikipedia/Headroom_(photographic_framing)
Photographic plates preceded film as the primary medium for capturing images in photography. These plates, made of metal or glass and coated with a light-sensitive emulsion, were integral to early photographic processes such as heliography, daguerreotypes, and photogravure. Glass plates, thinner than standard window glass, became widely used in the late 19th century for their clarity and reliability. Although largely replaced by film during the 20th century, plates continued to be used for specialised scientific and medical purposes until the late 20th century. == History == Glass plates were far superior to film for research-quality imaging because they were stable and less likely to bend or distort, especially in large-format frames for wide-field imaging. Early plates used the wet collodion process. The wet plate process was replaced late in the 19th century by gelatin dry plates. A view camera nicknamed "The Mammoth" weighing 1,400 pounds (640 kg) was built by George R. Lawrence in 1899, specifically to photograph "The Alton Limited" train owned by the Chicago & Alton Railway. It took photographs on glass plates measuring 8 feet (2.4 m) × 4.5 feet (1.4 m). Glass plate photographic material largely faded from the consumer market in the early years of the 20th century, as more convenient and less fragile films were increasingly adopted. However, photographic plates were reportedly still being used by one photography business in London until the 1970s, and by one in Bradford called the Belle Vue Studio that closed in 1975. They were in wide use for professional astrophotography as late as the 1990s. Workshops on the use of glass plate photography as an alternative medium or for artistic use are still being conducted in the early 21st century. == Scientific uses == === Astronomy === Many famous astronomical surveys were taken using photographic plates, including the first Palomar Observatory Sky Survey (POSS) of the 1950s, the follow-up POSS-II survey of the 1990s, and the UK Schmidt Telescope survey of southern declinations. A number of observatories, including Harvard College and Sonneberg, maintain large archives of photographic plates, which are used primarily for historical research on variable stars. Many solar system objects were discovered by using photographic plates, superseding earlier visual methods. Discovery of minor planets using photographic plates was pioneered by Max Wolf beginning with his discovery of 323 Brucia in 1891. The first natural satellite discovered using photographic plates was Phoebe in 1898. Pluto was discovered using photographic plates in a blink comparator; its moon Charon was discovered 48 years later in 1978 by U.S. Naval Observatory astronomer James W. Christy by carefully examining a bulge in Pluto's image on a photographic plate. Glass-backed plates, rather than film, were generally used in astronomy because they do not shrink or deform noticeably in the development process or under environmental changes. Several important applications of astrophotography, including astronomical spectroscopy and astrometry, continued using plates until digital imaging improved to the point where it could outmatch photographic results. Kodak and other manufacturers discontinued production of most kinds of plates as the market for them dwindled between 1980 and 2000, terminating most remaining astronomical use, including for sky surveys. 
=== Physics === Photographic plates were also an important tool in early high-energy physics, as they are blackened by ionizing radiation. Ernest Rutherford was one of the first to study the absorption, in various materials, of the rays produced in radioactive decay, by using photographic plates to measure the intensity of the rays. Development of particle detection optimised nuclear emulsions in the 1930s and 1940s, first in physics laboratories, then by commercial manufacturers, enabled the discovery and measurement of both the pi-meson and K-meson, in 1947 and 1949, initiating a flood of new particle discoveries in the second half of the 20th century. === Electron microscopy === Photographic emulsions were originally coated on thin glass plates for imaging with electron microscopes, which provided a more rigid, stable and flatter plane compared to plastic films. Beginning in the 1970s, high-contrast, fine grain emulsions coated on thicker plastic films manufactured by Kodak, Ilford and DuPont replaced glass plates. These films have largely been replaced by digital imaging technologies. == Medical imaging == The sensitivity of certain types of photographic plates to ionizing radiation (usually X-rays) is also useful in medical imaging and material science applications, although they have been largely replaced with reusable and computer readable image plate detectors and other types of X-ray detectors. == Decline == The earliest flexible films of the late 1880s were sold for amateur use in medium-format cameras. The plastic was not of very high optical quality and tended to curl and otherwise not provide as desirably flat a support surface as a sheet of glass. Initially, a transparent plastic base was more expensive to produce than glass. Quality was eventually improved, manufacturing costs came down, and most amateurs gladly abandoned plates for films. After large-format high quality cut films for professional photographers were introduced in the late 1910s, the use of plates for ordinary photography of any kind became increasingly rare. The persistent use of plates in astronomical and other scientific applications started to decline in the early 1980s as they were gradually replaced by charge-coupled devices (CCDs), which also provide outstanding dimensional stability. CCD cameras have several advantages over glass plates, including high efficiency, linear light response, and simplified image acquisition and processing. However, even the largest CCD formats (e.g., 8192 × 8192 pixels) still do not have the detecting area and resolution of most photographic plates, which has forced modern survey cameras to use large CCD arrays to obtain the same coverage. The manufacture of photographic plates has been discontinued by Kodak, Agfa and other widely known traditional makers. Eastern European sources have subsequently catered to the minimal remaining demand, practically all of it for use in holography, which requires a recording medium with a large surface area and a submicroscopic level of resolution that currently (2014) available electronic image sensors cannot provide. In the realm of traditional photography, a small number of historical process enthusiasts make their own wet or dry plates from raw materials and use them in vintage large-format cameras. == Preservation == Several institutions have established archives to preserve photographic plates and prevent their valuable historical information from being lost. The emulsion on the plate can deteriorate. 
In addition, the glass plate medium is fragile and prone to cracking if not stored correctly. === Historical archives === The United States Library of Congress has a large collection of both wet and dry plate photographic negatives, dating from 1855 through 1900, over 7,500 of which have been digitized from the period 1861 to 1865. The George Eastman Museum holds an extensive collection of photographic plates. In 1955, wet plate negatives measuring 4 feet 6 inches (1.37 m) × 3 feet 2 inches (0.97 m) were reported to have been discovered in 1951 as part of the Holtermann Collection. These purportedly were the largest glass negatives discovered at that time. These images were taken in 1875 by Charles Bayliss and formed the "Shore Tower" panorama of Sydney Harbour. Albumen contact prints made from these negatives are in the holdings of the Holtermann Collection, the negatives are listed among the current holdings of the Collection. === Scientific archives === Preservation of photographic plates is a particular need in astronomy, where changes often occur slowly and the plates represent irreplaceable records of the sky and astronomical objects that extend back over 100 years. The method of digitization of astronomical plates enables free and easy access to those unique astronomical data and it is one of the most popular approaches to preserve them. This approach was applied at the Baldone Astrophysical Observatory where about 22,000 glass and film plates of the Schmidt Telescope were scanned and cataloged. Another astronomical plate archive is the Astronomical Photographic Data Archive (APDA) at the Pisgah Astronomical Research Institute (PARI). APDA was created in response to recommendations of a group of international scientists who gathered in 2007 to discuss how to best preserve astronomical plates (see the Osborn and Robbins reference listed under Further reading). The discussions revealed that some observatories no longer could maintain their plate collections and needed a place to archive them. APDA is dedicated to housing and cataloging unwanted plates, with the goal to eventually catalog the plates and create a database of images that can be accessed via the Internet by the global community of scientists, researchers, and students. APDA now has a collection of more than 404,000 photographic images from over 40 observatories that are housed in a secure building with environmental control. The facility possesses several plate scanners, including two high-precision ones, GAMMA I and GAMMA II, built for NASA and the Space Telescope Science Institute (STScI) and used by a team under the leadership of the late Barry Lasker to develop the Guide Star Catalog and Digitized Sky Survey that are used to guide and direct the Hubble Space Telescope. APDA's networked storage system can store and analyze more than 100 terabytes of data. A historical collection of photographic plates from Mt. Wilson Observatory is available at the Carnegie Observatories. Metadata is available via a searchable database, while a portion of the plates has been digitized. == See also == Camera Film base Photographic film Mammoth plate == References == == Further reading == Peter Kroll, Constanze La Dous, Hans-Jürgen Bräuer: "Treasure Hunting in Astronomical Plate Archives." (Proceedings of the international Workshop held at Sonneberg Observatory, March 4 to 6, 1999.) 
Verlag Harri Deutsch, Frankfurt am Main (1999), ISBN 3-8171-1599-7 Wayne Osborn, Lee Robbins: "Preserving Astronomy's Photographic Legacy: Current State and the Future of North American Astronomical Plates." Astronomical Society of the Pacific Conference Series, Vol. 410 (2009), ISBN 978-1-58381-700-1 Pisgah Astronomical Research Institute (PARI) Astronomical Photographic Data Archive (APDA) https://www.pari.edu/research/adpa/ == External links == Carnegie Observatories The Sonneberg Plates Archiv (Sonneberg Observatory) The Harvard College Observatory Plate Stacks APDA @ PARI Pisgah Astronomical Research Institute Astronomical Photographic Data Archive (PARI APDA) (Archive from Aug 2012) Capturing Oregon's Frontier Documentary produced by Oregon Public Broadcasting Video demonstration of collodion wet plate preparation and photographic image creation Course on field wet-plate photography Information on creation of wet-plate photographs
Wikipedia/Photographic_plate
The Brenizer method, sometimes referred to as bokeh panorama or bokehrama, is a photographic technique characterized by the creation of a digital image exhibiting a shallow depth of field in tandem with a wide angle of view. Created by use of panoramic stitching techniques applied to portraiture, it was popularized by photographer Ryan Brenizer. The combination of these characteristics enables a photographer to mimic the look of large format film photography with a digital camera. Large format cameras use a negative that is at least 4×5 inches (102×127 mm) and are known for their very shallow depth of field when using a wide aperture and their unique high level of clarity, contrast and control. Image sensor formats of common digital cameras, in comparison, are much smaller, ranging down to the tiny sensors in camera phones. The Brenizer method increases the effective sensor size of the camera, simulating the characteristics of large format photography. While the aesthetics of this form of imaging most closely resemble large format analog photography, its look has also led it to being compared to tilt shift photography. Both techniques create images that exhibit an unusually shallow depth of field. == History, method and usage == Ryan Brenizer initially referred to the technique as a bokeh panorama. It uses panoramic stitching, for the purpose of applying the shallow depth-of-field associated with wide-aperture telephoto lenses to a wider-field-of-view composition. Shallow depth of field panoramic stitching photographs are sometimes referred to as the Brenizer method, as he popularized it in recent years through his work. An image produced by this method is sometimes referred to as a bokeh panorama (or the portmanteau bokehrama) in reference to the deliberate blurring style of bokeh photography. The process requires taking multiple shots of a scene in a manner that allows for later image stitching using a fast lens, generally of a focal length of 50 mm or longer. It is also beneficial to use manual focus, manual white balance and manual shutter and aperture controls to maintain a uniform exposure across the entire set of images. This method is of interest because: It allows for the cheap and relatively easy creation of aesthetics usually only available through the use of expensive, complicated and bulky equipment. It provides a way of imitating a traditional film-based process with digital equipment. It creates very high-resolution images. The method is used for portrait photography and, increasingly, automobile photography. == Examples == == References == == External links == Method instructions from Ryan Brenizer Brenizer Method on YouTube Guide by Edward Noble The best lenses for brenizer / bokehpanoramas Brenizer Method Calculation by Brett Maxwell Examples at Flickr group The Brenizer Method and Bokeh Panoramas
Wikipedia/Brenizer_method
Photographic fixer is a mixture of chemicals used in the final step in the photographic processing of film or paper. The fixer stabilises the image, removing the unexposed silver halide remaining on the photographic film or photographic paper, leaving behind the reduced metallic silver that forms the image. By fixation, the film or paper is made insensitive to further action by light. Without fixing, the remaining silver halide would darken and cause fogging of the image. == Chemistry == Fixation is commonly achieved by treating the film or paper with a solution of a thiosulfate salt. Popular salts are sodium thiosulfate—commonly called hypo—and ammonium thiosulfate—commonly used in modern rapid fixer formulae. Fixation by thiosulfate involves these chemical reactions (X = halide, typically Br⁻): AgX + 2 S₂O₃²⁻ → [Ag(S₂O₃)₂]³⁻ + X⁻ AgX + 3 S₂O₃²⁻ → [Ag(S₂O₃)₃]⁵⁻ + X⁻ In addition to thiosulfate, the fixer typically contains mildly acidic compounds to adjust the pH and suppress trace amounts of the developer. This is often an alkali hydrogen sulfite (bisulfite), which also serves to preserve the thiosulfate. Less commonly, it may also contain other additives, e.g. for the hardening of gelatin. There are also non-thiosulfate fixers, at least for special purposes. Fixer is used for processing all commonly used films, including black-and-white films, Kodachrome, and chromogenic films. == Chromogenic films == In chromogenic films, the remaining silver must be removed by a chemical mixture called a bleach fix, sometimes shortened to blix. This mixture contains ammonium thiosulfate and ferric EDTA, a powerful chelating agent. The iron(III) EDTA complex oxidizes the metallic silver, which is then dissolved as a thiosulfate complex by the fixing agent. == Washing and stabilisation == After fixation, washing is important to remove the exhausted chemicals from the emulsion. Otherwise they cause image deterioration. Other treatments of the remaining silver-based image are sometimes used to prevent "burning". == References ==
Wikipedia/Photographic_fixer
Photographic processing or photographic development is the chemical means by which photographic film or paper is treated after photographic exposure to produce a negative or positive image. Photographic processing transforms the latent image into a visible image, makes this permanent and renders it insensitive to light. All processes based upon the gelatin silver process are similar, regardless of the film or paper's manufacturer. Exceptional variations include instant films such as those made by Polaroid and thermally developed films. Kodachrome required Kodak's proprietary K-14 process. Kodachrome film production ceased in 2009, and K-14 processing is no longer available as of December 30, 2010. Ilfochrome materials use the dye destruction process. Deliberately using the wrong process for a film is known as cross processing. == Common processes == All photographic processing uses a series of chemical baths. Processing, especially the development stages, requires very close control of temperature, agitation and time. === Black and white negative processing === The film may be soaked in water to swell the gelatin layer, facilitating the action of the subsequent chemical treatments. The developer converts the latent image to macroscopic particles of metallic silver. A stop bath, typically a dilute solution of acetic acid or citric acid, halts the action of the developer. A rinse with clean water may be substituted. The fixer makes the image permanent and light-resistant by dissolving remaining silver halide. A common fixer is a thiosulfate salt, such as sodium thiosulfate (hypo) or, in rapid fixers, ammonium thiosulfate. Washing in clean water removes any remaining fixer. Residual fixer can corrode the silver image, leading to discolouration, staining and fading. The washing time can be reduced and the fixer more completely removed if a hypo clearing agent is used after the fixer. Film may be rinsed in a dilute solution of a non-ionic wetting agent to assist uniform drying, which eliminates drying marks caused by hard water. (In very hard water areas, a pre-rinse in distilled water may be required – otherwise the final rinse wetting agent can cause residual ionic calcium on the film to drop out of solution, causing spotting on the negative.) Film is then dried in a dust-free environment, cut and placed into protective sleeves. Once the film is processed, it is then referred to as a negative. The negative may now be printed; the negative is placed in an enlarger and projected onto a sheet of photographic paper. Many different techniques can be used during the enlargement process. Two examples of enlargement techniques are dodging and burning. Alternatively (or as well), the negative may be scanned for digital printing or web viewing after adjustment, retouching, and/or manipulation. From a chemical standpoint, conventional black and white negative film is processed by a developer that reduces silver halide to silver metal; exposed silver halide is reduced faster than unexposed silver halide, which leaves a silver metal image. It is then fixed by converting all remaining silver halide into a soluble silver complex, which is then washed away with water. An example of a black and white developer is Kodak D-76, which contains bis(4-hydroxy-N-methylanilinium) sulfate (metol) with hydroquinone and sodium sulfite. In graphic art film (also called lithographic film), a special type of black and white film used for converting images into halftone images for offset printing, a developer containing metol-hydroquinone and sulfite stabilizers may be used.
Exposed silver halide oxidizes the hydroquinone, which in turn oxidizes a nucleating agent in the film; this agent is attacked by hydroxide ions and converted via hydrolysis into a nucleating agent for silver metal, which then forms on unexposed silver halide, creating a silver image. The film is then fixed by converting all remaining silver halide into soluble silver complexes. === Black and white reversal processing === This process has three additional stages: Following the first developer and rinse, the film is bleached to remove the developed negative image. This negative image is composed of metallic silver formed in the first developer step. The bleach used here only affects the negative, metallic silver grains; it does not affect the unexposed and therefore undeveloped silver halide. The film then contains a latent positive image formed from unexposed and undeveloped silver halide salts. The film is fogged, either chemically or by exposure to light. The remaining silver halide salts are developed in the second developer, converting them into a positive image composed of metallic silver. Finally, the film is fixed, washed, dried and cut. === Colour processing === Chromogenic materials use dye couplers to form colour images. Modern colour negative film is developed with the C-41 process and colour negative print materials with the RA-4 process. These processes are very similar, with differences in the first chemical developer. The C-41 and RA-4 processes consist of the following steps: The colour developer develops the silver negative image by reducing the silver halide crystals that have been exposed to light to metallic silver; this consists of the developer donating electrons to the silver halide, turning it into metallic silver. The donation oxidizes the developer, which then activates the dye couplers to form the colour dyes in each emulsion layer, but only does so in the dye couplers that are around exposed silver halide. A rehalogenising bleach converts the developed metallic silver into silver halide. A fixer removes all silver halide by converting it into soluble silver complexes that are then washed away, leaving only the dyes. The film is washed, stabilised, dried and cut. In the RA-4 process, the bleach and fix are combined. This is optional, and reduces the number of processing steps. Transparency films, except Kodachrome, are developed using the E-6 process, which has the following stages: A black and white developer develops the silver in each image layer. Development is stopped with a rinse or a stop bath. The film is fogged in the reversal step. The fogged silver halides are developed, and oxidized developing agents couple with the dye couplers in each layer. The film is bleached, fixed, washed/rinsed, stabilised and dried as described above. The Kodachrome process is called K-14. It is very involved, requiring four separate developers (one for black and white and three for color), re-exposure of the film between development stages, and eight or more tanks of processing chemicals, each with precise concentration, temperature and agitation, resulting in very complex processing equipment with precise chemical control. In some old processes, the film emulsion was hardened during the process, typically before the bleach. Such a hardening bath often used aldehydes, such as formaldehyde and glutaraldehyde. In modern processing, these hardening steps are unnecessary because the film emulsion is sufficiently hardened to withstand the processing chemicals.
A typical chromogenic color film development process can be described from a chemical standpoint as follows: Exposed silver halide oxidizes the developer. The oxidized developer then reacts with color couplers, which are molecules near the exposed silver halide crystals, to create color dyes which ultimately create a negative image, after this the film is bleached, fixed, washed, stabilized and dried. The dye is only created where the couplers are. Thus the development chemical must travel a short distance from the exposed silver halide to the coupler and create a dye there. The amount of dye created is small and the reaction only occurs near the exposed silver halide and thus doesn't spread throughout the entire layer. The developer diffuses into the film emulsion to react with its layers. This process happens simultaneously for all three colors of couplers in the film: cyan (in the red-sensitive layer in the film), magenta(for the green-sensitive layer), and yellow (for the blue-sensitive layer). Color film has these three layers, to be able to perform subtractive color mixing and be able to replicate colors in images. == Further processing == Black and white emulsions both negative and positive, may be further processed. The image silver may be reacted with elements such as selenium or sulphur to increase image permanence and for aesthetic reasons. This process is known as toning. In selenium toning, the image silver is changed to silver selenide; in sepia toning, the image is converted to silver sulphide. These chemicals are more resistant to atmospheric oxidising agents than silver. If colour negative film is processed in conventional black and white developer, and fixed and then bleached with a bath containing hydrochloric acid and potassium dichromate solution, the resultant film, once exposed to light, can be redeveloped in colour developer to produce an unusual pastel colour effect. == Processing apparatus == Before processing, the film must be removed from the camera and from its cassette, spool or holder in a light-proof room or container. === Small scale processing === In amateur processing, the film is removed from the camera and wound onto a reel in complete darkness (usually inside a darkroom with the safelight turned off or a lightproof bag with arm holes). The reel holds the film in a spiral shape, with space between each successive loop so the chemicals may flow freely across the film's surfaces. The reel is placed in a specially designed light-proof tank (called a daylight processing tank or a light-trap tank) where it is retained until final washing is complete. Sheet films can be processed in trays, in hangers (which are used in deep tanks), or rotary processing drums. Each sheet can be developed individually for special requirements. Stand development, long development in dilute developer without agitation, is occasionally used. === Commercial processing === In commercial, central processing, the film is removed automatically or by an operator handling the film in a light proof bag from which it is fed into the processing machine. The processing machinery is generally run on a continuous basis with films spliced together in a continuous line. All the processing steps are carried out within a single processing machine with automatically controlled time, temperature and solution replenishment rate. The film or prints emerge washed and dry and ready to be cut by hand. 
Some modern machines also cut films and prints automatically, sometimes resulting in negatives cut across the middle of the frame where the space between frames is very thin or the frame edge is indistinct, as in an image taken in low light. Alternatively stores may use minilabs to develop films and make prints on the spot automatically without needing to send film to a remote, central facility for processing and printing. Some processing chemistries used in minilabs require a minimum amount of processing per given amount of time to remain stable and usable. Once rendered unstable due to low use, the chemistry needs to be completely replaced, or replenishers can be added to restore the chemistry to a usable state. Some chemistries have been designed with this in mind given the declining demand for film processing in minilabs, often requiring specific handling. Often chemistries become damaged by oxidation. Also, development chemicals need to be thoroughly agitated constantly to ensure consistent results. The effectiveness (activity) of the chemistry is determined through pre-exposed film control strips. == Environmental and safety issues == Many photographic solutions have high chemical and biological oxygen demand (COD and BOD). These chemical wastes are often treated with ozone, peroxide or aeration to reduce the COD in commercial laboratories. Exhausted fixer and to some extent rinse water contain silver thiosulfate complex ions. They are far less toxic than free silver ion, and they become silver sulfide sludge in the sewer pipes or treatment plant. However, the maximum silver concentration in discharge is very often tightly regulated. Silver is also a somewhat precious resource. Therefore, in most large scale processing establishments, exhausted fixer is collected for silver recovery and disposal. Many photographic chemicals use non-biodegradable compounds, such as EDTA, DTPA, NTA and borate. EDTA, DTPA, and NTA are very often used as chelating agents in all processing solutions, particularly in developers and washing aid solutions. EDTA and other polyamine polycarboxylic acids are used as iron ligands in colour bleach solutions. These are relatively nontoxic, and in particular EDTA is approved as a food additive. However, due to poor biodegradability, these chelating agents are found in alarmingly high concentrations in some water sources from which municipal tap water is taken. Water containing these chelating agents can leach metal from water treatment equipment as well as pipes. This is becoming an issue in Europe and some parts of the world. Another non-biodegradable compound in common use is surfactant. A common wetting agent for even drying of processed film uses Union Carbide/Dow Triton X-100 or octylphenol ethoxylate. This surfactant is also found to have estrogenic effect and possibly other harms to organisms including mammals. Development of more biodegradable alternatives to the EDTA and other bleaching agent constituents were sought by major manufacturers, until the industry became less profitable when the digital era began. In most amateur darkrooms, a popular bleach is potassium ferricyanide. This compound decomposes in the waste water stream to liberate cyanide gas. Other popular bleach solutions use potassium dichromate (a hexavalent chromium) or permanganate. Both ferricyanide and dichromate are tightly regulated for sewer disposal from commercial premises in some areas. 
Borates, such as borax (sodium tetraborate), boric acid and sodium metaborate, are toxic to plants, even at a concentration of 100 ppm. Many film developers and fixers contain 1 to 20 g/L of these compounds at working strength. Most non-hardening fixers from major manufacturers are now borate-free, but many film developers still use borate as the buffering agent. Also, some, but not all, alkaline fixer formulae and products contain a large amount of borate. New products should phase out borates, because for most photographic purposes, except in acid hardening fixers, borates can be substituted with a suitable biodegradable compound. Developing agents are commonly hydroxylated benzene compounds or aminated benzene compounds, and they are harmful to humans and experimental animals. Some are mutagens. They also have a large chemical oxygen demand (COD). Ascorbic acid and its isomers, and other similar sugar derived reductone reducing agents are a viable substitute for many developing agents. Developers using these compounds were actively patented in the US, Europe and Japan, until the 1990s but the number of such patents is very low since the late-1990s, when the digital era began. Development chemicals may be recycled by up to 70% using an absorber resin, only requiring periodic chemical analysis on pH, density and bromide levels. Other developers need ion-exchange columns and chemical analysis, allowing for up to 80% of the developer to be reused. Some bleaches are claimed to be fully bio-degradable while others can be regenerated by adding bleach concentrate to overflow (waste). Used fixers can have 60 to 90% of their silver content removed through electrolysis, in a closed loop where the fixer is continually recycled (regenerated). Stabilizers may or may not contain formaldehyde. == See also == List of photographic processes Fogging (photography) Darkroom Cross processing Caffenol == Notes == == References == == Further reading == Rogers, David (October 2007), The Chemistry of Photography: From Classical to Digital Technologies, Royal Society of Chemistry, ISBN 9780854042739, OCLC 1184188382 == External links == Kodak Processing manuals The Massive Dev Chart - film development times The Comprehensive Development Times Chart - Manufacturer's film development times database Ilford guide to processing black & white film
Wikipedia/Photographic_processing
Lomography, or simply lomo, is a photographic style which involves taking spontaneous photographs with minimal attention to technical details. Lomographic images often exploit the unpredictable, non-standard optical traits of toy cameras (such as light leaks and irregular lens alignment), and non-standard film processing techniques for aesthetic effect. Similar-looking techniques with digital photography, often involving "lomo" image filters in post-processing, may also be considered lomographic. "Lomography" is claimed as a commercial trademark by Lomographische GmbH. However, it has become a genericised trademark; most camera phone photo editor apps include a "lomo" filter. == History == While cheap plastic toy cameras using film often used in lomography were and are produced by multiple manufacturers, Lomography is named after the Soviet-era cameras produced by Leningradskoye Optiko-Mekhanicheskoye Obyedinenie. Formerly a state-run optics manufacturer, LOMO privatised following the dissolution of the Soviet Union and became LOMO PLC. The company created and produced the 35 mm LOMO LC-A Compact Automat camera, now central to the lomography movement. This camera was loosely based upon the Cosina CX-1 introduced in the early 1980s. The LOMO LC-A produces "unique, colorful, and sometimes blurry" images. Lomography has been a highly social pursuit since 1992, with local and international events organised by Lomographische GmbH. Lomographische, doing business as Lomography, is also a commercial company selling analogue cameras, films and accessories. The company continues to promote the Lomographic style; however, it is not necessary to use the company's products to take lomographic photos. == Company == Lomographische GmbH, doing business as Lomography, is a commercial company headquartered in Vienna, Austria, which sells cameras, accessories, and film. It hosts local and international events through its non-profit division, the Lomographic Society International. The company is the namesake of the lomography genre of experimental photography. The Lomographic Society International was founded in 1992 by a group of Viennese students interested in the LC-A. Lomography started as an art movement through which the students put on exhibitions of photos; the art movement then developed into the Lomographische AG, a commercial enterprise. Lomography signed an exclusive distribution agreement with LOMO PLC in 1995—becoming the sole distributor of all LOMO LC-A cameras outside of the former Soviet Union. The new company reached an agreement with the deputy mayor of St Petersburg, the future Russian Prime Minister and President, Vladimir Putin, to receive a tax break in order to keep the LOMO factory in the city open. Since the introduction of the original LOMO LC-A, Lomography has produced a line of their own film cameras. In 2005, production of the original LOMO LC-A was discontinued. Its replacement, the LOMO LC-A+, was introduced in 2006. The new camera, made in China rather than Russia, featured the original Russian lens manufactured by LOMO PLC. This changed as of mid-2007 with the lens now made in China as well. In 2012 the LC-A+ camera was re-released as a special edition. It costs ten times the original secondhand value of the old LOMO LC-A. The Lomographic Society International (Lomography) has moved on to produce their own range of analogue cameras, films and accessories. 
Lomography has also released products catered to digital devices, such as the Smartphone Film Scanner; and several lenses such as the Daguerreotype Achromat lens collection for analogue and digital SLR cameras with Canon EF, Nikon F or Pentax K mounts, inspired by 19th century Daguerreotype photography. In 2013, together with Zenit, Lomography produced a new version of the Petzval Lens designed to work with Canon EF and Nikon F mount SLR cameras. Some have questioned the pricing of Lomography's plastic "toy" cameras, which run from US$100 to $400. ==== Models ==== Cameras that have been marketed by Lomography: LOMO LC-A+ LC-A 120 Lomo LC-Wide Diana F+ Diana Mini – a 35 mm version of the Diana F+ Diana Baby – a 110 format version of the Diana F+ Diana Multi Pinhole Operator Diana Instant Square Spinner 360° – a 360° panoramic camera Sprocket Rocket ActionSampler – a four-lensed miniature 35 mm camera Pop-9 Oktomat Fisheye One Fisheye No.2 Fisheye Baby – a 110 format version of the Fisheye No.2 Colorsplash Colorsplash Flash SuperSampler La Sardina Fritz the Blitz Flash LomoKino – a 35 mm analog movie camera Konstruktor – a build-it-yourself 35 mm SLR camera HydroChrome Sutton's Panoramic Belair Camera LomoMod No.1 – a flat-packed DIY cardboard medium format camera LomoApparat Lomo'Instant Lomo'Instant Automat Lomo'Instant Automat Glass Lomo'Instant Wide Lomomatic 110 – a new 110 format camera with a glass lens ==== Film ==== The company produces 35 mm, 120 and 110 film in color negative, black and white as well as redscale. Lomography also produces its own range of experimental color-shifting film called LomoChrome. === Photo gallery === == See also == Lo-fi photography == References == == Further reading == Lomographische GmbH's website Lily Rothman (10 May 2012). "Lomography and the 'Analogue Future'". Time. Tyler Lee (18 April 2017). "Lomography Reinvigorates The Disposable Camera With 'Simple Use'". Ubergizmo. "What is Lomography?". 1stWebDesigner. 5 May 2017. "What is Lomography or lomo camera?". The Darkroom Photo Lab. 11 February 2016.
Wikipedia/Lomography
In the processing of photographic films, plates or papers, the photographic developer (or just developer) is one or more chemicals that convert the latent image to a visible image. Developing agents achieve this conversion by reducing the silver halides, which are pale-colored, into silver metal, which is black when in the form of fine particles. The conversion occurs within the gelatine matrix. The special feature of photography is that the developer acts more quickly on those particles of silver halide that have been exposed to light. When left in developer, all the silver halides will eventually be reduced and turn black. Generally, the longer a developer is allowed to work, the darker the image. == Chemical composition of developers == The developer typically consists of a mixture of chemical compounds prepared as an aqueous solution. For black-and-white photography, three main components of this mixture are::115 Developing agents. Popular developing agents are metol (monomethyl-p-aminophenol hemisulfate), phenidone (1-phenyl-3-pyrazolidinone), dimezone (4,4-dimethyl-1-phenylpyrazolidin-3-one) and hydroquinone (benzene-1,4-diol). Dimezone is thought to resist oxidation in solution better than Phenidone A, but it is not as available as Phenidone A. Dimezone is also known as Phenidone B. Alkaline agent such as sodium carbonate, borax, or sodium hydroxide to create the appropriately high pH. Sodium sulfite to delay oxidation of the developing agents by atmospheric oxygen. Notable standard formulas include Eastman Kodak D-76 film developer, D-72 print developer, and D-96 motion picture negative developer. Hydroquinone is superadditive with metol, meaning that it acts to "recharge" the metol after it has been oxidised in the process of reducing silver in the emulsion. Sulfite in a developer not only acts to prevent aerial oxidation of the developing agents in solution, it also facilitates the regeneration of metol by hydroquinone (reducing compensation and adjacency effects) and in high enough concentrations acts as a silver halide solvent. The original lithographic developer contained formaldehyde (often added as paraformaldehyde powder) in a low sulfite/bisulfite solution. Most developers also contain small amounts of potassium bromide to modify and restrain the action of the developer:218-219 to suppress chemical fogging. Developers for high contrast work have higher concentrations of hydroquinone and lower concentrations of metol and tend to use strong alkalis such as sodium hydroxide to push the pH up to around pH 11 to 12. Metol is difficult to dissolve in solutions of high salt content and instructions for mixing developer formulae therefore almost always list metol first. It is important to dissolve chemicals in the order in which they are listed. Some photographers add a pinch of sodium sulfite before dissolving the metol to prevent oxidation, but large amounts of sulfite in solution will make the metol very slow to dissolve. Because metol is relatively toxic and can cause skin sensitisation, modern commercial developers often use phenidone or dimezone S (4-hydroxymethyl-4-methyl-1-phenyl-3-pyrazolidone) instead. Dimezone, Dimezone S, is a white crystalline powder that is soluble in water and polar solvents. DD-X, HC-110, TMax developer, and PQ Universal developer are a few common film developers that use Dimezone as a developing agent. Dimezone is acutely toxic and an irritant. 
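To make the three-part structure described above concrete, the sketch below lays out a metol/hydroquinone developer in the D-76 style as a simple data structure. The quantities are the commonly published figures for that formula and are included only for illustration; they are not taken from this article and should be checked against a current data sheet before mixing anything.

```python
# Illustrative only: a classic metol/hydroquinone film developer in the D-76 style.
# Quantities are commonly published figures (grams per litre of working solution),
# not data from this article.
developer_formula = [
    ("metol",                      2.0,   "developing agent"),
    ("sodium sulfite (anhydrous)", 100.0, "preservative; delays aerial oxidation"),
    ("hydroquinone",               5.0,   "developing agent, superadditive with metol"),
    ("borax",                      2.0,   "alkaline agent; sets the working pH"),
]

# As the text notes, metol dissolves poorly in solutions with a high salt content,
# so it is conventionally listed (and dissolved) first; the rest follow in order.
for name, grams_per_litre, role in developer_formula:
    print(f"{name:<28} {grams_per_litre:>6.1f} g/L  ({role})")
```

Writing the formula as ordered data also captures the mixing-order convention described above.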
Hydroquinone can also be toxic to the human operator as well as environment; some modern developers replace it with ascorbic acid, or vitamin C. This, however, suffers from poor stability. Ascorbate developers may have the advantage of being compensating and sharpness-enhancing, as oxidation by-products formed during development are acidic, meaning they retard development in and adjacent to areas of high activity. This also explains why ascorbate developers have poor keeping properties, as oxidised ascorbate is both ineffective as a developing agent and lowers the pH of the solution, making the remaining developing agents less active. Recently, claims for practical methods to improve the stability of ascorbate developers have been made by several experimenters. Other developing agents in use are p-aminophenol, glycin (N-(4-hydroxyphenyl)glycine), pyrogallol, and catechol. When used in low sulfite developer composition, the latter two compounds cause gelatin to harden and stain in the vicinity of developing grains. Generally, the optical density of the stain increases in the heavily exposed (and heavily developed) area. This is a property that is highly sought after by some photographers because it increases negative contrast in relation to density, meaning that highlight detail can be captured without "blocking" (reaching high enough density that detail and tonality are severely compromised). Hydroquinone shares this property. However, the staining effect only appears in solutions with very little sulfite, and most hydroquinone developers contain substantial quantities of sulfite. In the early days of photography, a wide range of developing agents were used, including chlorohydroquinone, ferrous oxalate,:131 hydroxylamine, ferrous lactate, ferrous citrate, Eikonogen, atchecin, antipyrin, acetanilid and Amidol (which unusually required mildly acidic conditions). Developers also contain a water softening agent to prevent calcium scum formation (e.g., EDTA salts, sodium tripolyphosphate, NTA salts, etc.). The original lithographic developer was based upon a low sulfite/bisulfite developer with formaldehyde (added as the powder paraformaldehyde). The very low sulfite, high hydroquinone and high alkalinity encouraged "infectious development" (exposed developing silver halide crystals collided with unexposed silver halide crystals, causing them to also reduce) which enhanced the edge effect in line images. These high energy developers had a short tray life, but when used within their tray life provided consistent usable results. Modern lithographic developers contain hydrazine compounds, tetrazolium compounds and other amine contrast boosters to increase contrast without relying on the classic hydroquinone-only lithographic developer formulation. The modern formulae are very similar to rapid access developers (except for those additives) and therefore they enjoy long tray life. However, classic lithographic developers using hydroquinone alone suffers very poor tray life and inconsistent results. == Development == The developer selectively reduces silver halide crystals in the emulsion to metallic silver, but only those having latent image centres created by action of light. The light sensitive layer or emulsion consists of silver halide crystals in a gelatin base. Two photons of light must be absorbed by one silver halide crystal to form a stable two atom silver metal crystal. The developer used generally will only reduce silver halide crystals that have an existing silver crystal. 
Faster exposure or lower light level films usually have larger grains because those images capture less light. Fine grain films, like Kodachrome, require more light to increase the chance that the halide crystal will absorb at least two quanta of light as they have a smaller cross sectional size. Therefore, silver halide crystal size is proportional to film speed. The metallic silver image has dark (black) appearance. Once the desired level of reduction is achieved the development process is halted by washing in a dilute acid and then the undeveloped silver halide is removed by dissolving it in a thiosulfate solution, a process called fixing. Most commercial film developers use a dual solution or "push" (pushes the films speed) development (compensating developer, like Diafine) procedure where the reducing agent e.g. hydroquinone solution soaks into and swells the gelatin then the film is introduced into the alkaline solution which activates (lowers reduction potential) of the developer. The areas with the most light exposure use up the tiny amount of developer in the gelatin and stop making silver crystal before the film at that point is totally opaque. The areas that received the least light continue to develop because they haven't used up their developer. There is less contrast, but time is not critical and films from several customers and different exposures will develop satisfactorily. The time over which development takes place, and the type of developer, affect the relationship between the density of silver in the developed image and the quantity of light. This study is called sensitometry and was pioneered by F Hurter & V C Driffield in the late 19th century. == Colour development == In colour and chromogenic black-and-white photography, a similar development process is used except that the reduction of silver simultaneously oxidizes the paraphenylene colour developing agent which then takes part in the production of dye-stuffs in the emulsion by reacting with the appropriate couplers. There are three distinct processes used here. The C-41 process is used for almost all colour negative films and in this process dye couplers in the emulsion react with the oxidized colour developing agent in the developer solution to generate the visible dyes. An almost identical process is then used to produce colour prints from films. The developing agents used are derivates of paraphenylene diamine. In colour negative films, there are 3 types of dye couplers. There are the normal cyan, magenta and yellow dye forming couplers, but also there is a magenta coloured cyan masking coupler and a yellow coloured magenta masking coupler. These form respectively normal cyan dye, and magenta dye, but form an orange positive mask to correct colour. In addition, there is a third type of coupler called a DIR (Developer Inhibitor Release) coupler. This coupler releases a powerful inhibitor during dye formation, which affects edge effects and causes effects between layers to enhance overall image quality. == Reversal film development == In Ektachrome-type (E-6 process) transparencies, the film is first processed in an unusual developer containing phenidone and Hydroquinone-monosulfonate. This black and white developer is used for 6:00 at 100.4°F (38°C), with more time yielding "push" processing to increase the apparent film speed by reducing the Dmax, or maximum density. The first developer is the most critical step in Process E-6. 
The solution is essentially a black-and-white film developer, because it forms only a negative silver image in each layer of the film; no dye images are yet formed. Then, the film goes directly into the first wash for 2:00 at 100 °F, which acts as a controlled stop bath. Next, the film goes into the reversal bath. This step prepares the film for the colour developer step. In this reversal bath, a chemical reversal agent is absorbed into the emulsion, with no chemical reaction taking place until the film enters the colour developer. The reversal process can also be carried out using 800 footcandle-seconds of light, which is used by process engineers to troubleshoot reversal bath chemistry problems. Next, the film is developed to completion in the colour developer bath, which contains CD-3 as the colour developing agent. When film enters the colour developer, the reversal agent absorbed by the emulsion in the reversal bath chemically fogs (or "exposes") the unexposed silver halide (if it has not already been fogged by light in the previous step). The colour developer acts on the chemically exposed silver halide to form a positive silver image. However, the metallic silver image formed in the first developer, which is a negative image, is not a part of the reaction that takes place in this step. What is being reacted in this stage is the "leftover" of the negative image, that is, a positive image. As the colour development progresses, metallic silver image is formed, but more importantly, the colour developing agent is oxidised. Oxidised colour developer molecules react with the couplers to form colour dyes in situ. Thus colour dye is formed at the site of development in each of the three layers of the film. Each layer of the film contains different couplers, which react with the same oxidised developer molecules but form different colour dyes. Next, the film goes into the pre-bleach (formerly conditioner) bath, which has a precursor of formaldehyde (as a dye preservative) and EDTA to "kick off" the bleach. Next, the film goes into a bleach solution. The bleach converts metallic silver into silver bromide, which is converted to soluble silver compounds in the fixer. The C-41 color negative process introduced in 1972 uses ferric EDTA. Reversal processes have used ferric EDTA at least since the introduction of the E-6 process in 1976. For Kodachrome ferric EDTA is used at least in the current K-14 process. During bleaching, ferric EDTA is changed to ferrous EDTA before fixing, and final wash. Fe3+EDTA + Ag + Br− → Fe2+EDTA + AgBr Previously potassium ferricyanide was often used as bleach. The most common processing chemistry for such films is E6, derived from a long line of developers produced for the Ektachrome range of films. Ektachrome papers are also available. Standard black and white stock can also be reversal processed to give black and white slides. After 'first development,' the initial silver image is then removed (e.g. using a potassium bichromate/sulfuric acid bleach, which requires a subsequent "clearing bath" to remove the chromate stain from the film). The unfixed film is then fogged (physically or chemically) and 'second-developed'. However, the process works best with slow films such as Ilford Pan-F processed to give a high gamma. 
Kodak's chemistry kit for reversing Panatomic-X ("Direct Positive Film Developing Outfit") used sodium bisulfate in place of sulfuric acid in the bleach, and used a fogging developer that was inherently unstable, and had to be mixed and used within a two-hour period. (If two rolls, the maximum capacity of a single pint of redeveloper, were to be processed in succession, the redeveloper had to be mixed while the first roll was in the first developer.) == Proprietary methods == The K-14 process for Kodachrome films involved adding all the dyes to the emulsion during development. Special equipment was needed to process Kodachrome. Since 2010, there has been no commercial entity that processes Kodachrome anywhere in the world. In colour print development the Ilfochrome, or Cibachrome, process uses a print material with the dye-stuffs present and which are bleached out in appropriate places during developing. The chemistry involved here is wholly different from C41 chemistry; (it uses azo-dyes which are much more resistant to fading in sunlight). == References == The British Journal (1956). Photographic Almanac. London, England: Henry Greenwood and Company Limited. Langford, M. J. (1980). Advanced Photography. London, England: Focal Press.
Wikipedia/Photographic_developer
The conservation and restoration of photographs is the study of the physical care and treatment of photographic materials. It covers efforts undertaken by photograph conservators, librarians, archivists, and museum curators who manage photograph collections at various cultural heritage institutions, as well as steps taken to preserve collections of personal and family photographs. It is an umbrella term that includes both preventative preservation activities such as environmental control and conservation techniques that involve treating individual items. Both preservation and conservation require an in-depth understanding of how photographs are made, and of the causes and prevention of deterioration. Conservator-restorers use this knowledge to treat photographic materials, stabilizing them against further deterioration, and sometimes restoring them for aesthetic purposes. While conservation can improve the appearance of a photograph, image quality is not the primary purpose of conservation. Conservators will try to improve the visual appearance of a photograph as much as possible, while also ensuring its long-term survival and adhering to the profession's ethical standards. Photograph conservators also play a role in the field of connoisseurship. Their understanding of the physical object and its structure makes them uniquely suited to a technical examination of the photograph, which can reveal clues about how, when, and where it was made. Photograph preservation is distinguished from digital or optical restoration, which is concerned with creating and editing a digital copy of the original image rather than treating the original photographic material. Photograph preservation does not normally include moving image materials, which by their nature require a very different approach; film preservation concerns itself with these materials. == Photographic processes == Physical photographs usually consist of three components: the final image material (e.g. silver, platinum, dyes, or pigments), the transparent binder layer (e.g. albumen, collodion, or gelatin) in which the final image material is suspended, and the primary support (e.g. paper, glass, metal, or plastic). These components affect the susceptibility of photos to damage and the preservation and conservation methods required. Photograph preservation and conservation are also concerned with the negatives from which most old photographic prints are made. Most negatives are either glass plate or film-based. === Timeline === 1816: Heliography. The first person who succeeded in producing a paper negative of the camera image was Joseph Nicephore Niepce. He coated pewter plates with bitumen (an asphaltic varnish that hardens with exposure to light) and put them in a camera obscura. After exposure to sunlight for a long time, the parts that were exposed to light became hard and the parts that were not could be washed off with lavender oil. 1837: Daguerreotype. The daguerreotype process (named after Louis Jacques Mande Daguerre) produces a unique image, as there is no negative created. After coating a copper plate with light-sensitive silver iodide, the plate is exposed to an image for over 20 minutes and then treated with fumes from heated mercury. The longer the exposure to light, the more mercury fumes are adsorbed by the silver iodide. After the plate is washed with salt water, the image appears, reversed. This was the earliest photographic process to gain popularity in America. It was used until around 1860. 
1839: Salt print. This was the dominant form of paper print until albumen prints were introduced in 1850. Salt prints were made using both paper and glass negatives. 1841: Calotype. William Henry Fox Talbot invented the negative-positive system of photography commonly used today. He first developed the Talbotype, which used silver chloride to sensitize paper. After improving the process by using silver iodide, he renamed it Calotype. The process could produce many positive images, but they were not as sharp because they were printed on fibrous paper rather than glass. 1842: Cyanotype (Ferro-prussiate, Blue process). This process forms blue-colored images through a reaction involving iron salts. John Herschel studied it in order to reproduce his complicated mathematical formulas and memos. Other processes that fall into this category include Kallitype, Vandyketype, and Platinum printing. 1850: Albumen print. This process, introduced by Louis Désiré Blanquart-Evrard, was the most common kind of print in the latter half of the nineteenth century. Beautiful sepia gradation images were created by using albumen and silver chloride. The surfaces of prints made with this process were glossy because of the egg whites, which were layered heavily to prevent the originally thin prints from curling, cracking, or tearing easily. This type of print was especially common for studio portraits and landscape or stereoviews. 1851: Wet collodion process and Ambrotype. Frederick Scott Archer developed the wet collodion process, which used a thick glass plate unevenly hand-coated with a collodion-based, light-sensitive emulsion. Collodion, which means ‘glue’ in Greek, is nitrocellulose dissolved in ether and ethanol. The Ambrotype, an adaptation of the wet collodion process, was developed by Archer and Peter W. Fry. It involved placing a dark background behind the glass so that the negative image would look positive, and was popular in America until around 1870. 1855: Gum printing. Orange-colored dichromate becomes photosensitive when it is mixed with colloids such as gum arabic, albumen, or gelatin. Using that feature, Alphonse Poitevin invented the gum printing process. It gained in popularity after 1898, and again in the 1960s and 1970s because of its unique look. 1858: Tintype (also called Ferrotype and Melainotype). In this photographic process the emulsion was painted directly onto a japanned (varnish finish) iron plate. It was much cheaper and sturdier than the Ambrotype and Daguerreotype. 1861: RGB additive color model. Physicist James Clerk Maxwell made the first color photo by mixing red, green, and blue light. 1871: Gelatin dry plate. Richard L. Maddox discovered that gelatin could be a carrier for silver salts. By 1879, the gelatin dry plate had replaced the collodion wet plate. It was a revolutionary innovation in photography since it needed less light exposure, was usable when dry (meaning photographers no longer needed to pack and carry dangerous liquids), and could be standardized because it could be factory produced. 1873: Platinum printing (Platinotype). William Willis patented platinum printing in Britain. The process rapidly spread and became a dominant method in Europe and America by 1894, since it had a visibly different color tone compared to albumen and gelatin silver prints. Late 1880s: Gelatin silver print. This has been the major photograph printing process from the late 1880s to the present. Prints consist of paper coated with an emulsion of silver halide in gelatin. 
The surface is generally smooth; under magnification, the print appears to sparkle. 1889: Kallitype. Dr. W. W. J. Nicol invented and refined the Kallitype. The Vandyketype, or Single Kallitype, is the simplest type of Kallitype and creates beautiful brown images. 1889: Film negatives. Cellulose nitrate film was developed by Eastman Kodak in 1889 and refined in 1903. It is made of silver gelatin on a cellulose nitrate base. The negatives are flammable and therefore can be dangerous. Nitrate sheet film was used widely through the 1930s, while nitrate roll film was used through the 1950s. The nitrate base was replaced with cellulose acetate in 1923. By 1937, cellulose diacetate was used as the base, and beginning in 1947 cellulose triacetate was used. Polyester film was introduced around 1960. 1935: Color photographs. Kodak introduced color film and transparencies in 1935. The first process was called Kodachrome. Ektachrome, introduced in the late 1940s, became equally popular. There are now a variety of color processes that use different materials; most consist of dyes (cyan, magenta, and yellow, each of which has different absorption peaks) suspended in a gelatin layer. === Photograph stability === Photograph stability refers to the ability of prints and film to remain visibly unchanged over periods of time. Different photographic processes yield varying degrees of stability. In addition, different materials may have dark-storage stability which differs from their stability in light. Kodachrome: An extreme case with slides was stability under the intense light of projection. When stored in darkness, Kodachrome's long-term stability under suitable conditions is superior to other types of color film. Images on Kodachrome slides over fifty years old retain accurate color and density. Kodachrome film stored in darkness is largely responsible for excellent color footage of World War II, for example. It has been calculated that the yellow dye in Kodachrome, the least stable, would suffer a 20% loss of dye in 185 years. This is because developed Kodachrome does not retain unused color couplers. However, Kodachrome's color stability under bright light, especially during projection, is inferior to substantive slide films. Kodachrome's fade time under projection is about one hour, compared to Fujichrome's two and a half hours. Thus, old Kodachrome slides should be exposed to light only when copying to another medium. Silver halide: Black-and-white negatives and prints made by the silver halide process are stable so long as the photographic substrate is stable. Some papers may yellow with age, or the gelatin matrix may yellow and crack with age. If not developed properly, small amounts of silver halide remaining in the gelatin will darken when exposed to light. In some prints, the black silver oxide is reduced to metallic silver with time, and the image takes on a metallic sheen as the dark areas reflect light instead of absorbing it. Silver can also react with sulfur in the air and form silver sulfide. A correctly processed and stored silver print or negative probably has the greatest stability of any photographic medium, as attested by the wealth of surviving historical black-and-white photographs. Chromogenic: Chromogenic dye color processes include Type "R" and process RA-4 (also known as "type C prints"), process C-41 color negatives, and process E-6 color reversal (Ektachrome) film. 
Chromogenic processes yield organic dyes that are less stable than silver, and can also leave unreacted dye couplers behind during developing. Both factors may lead to color changes over time. The three dyes (cyan, magenta, and yellow) that make up the print may fade at different rates, causing a color shift in the print. Modern chromogenic papers such as Kodak Endura have achieved excellent stability, however, and are rated for 100 years in home display. Dye destruction: Dye destruction prints are the most archival color prints, at least among the wet chemical processes, and arguably among all processes. The most well-known kind of dye destruction print is the Cibachrome, now known as Ilfochrome. Ink jet: Some ink jet prints are now considered to have excellent stability, while others are not. Ink jet prints using dye-pigment mixtures are now common in photography, and often claim stability on par with chromogenic prints. However, these claims are based on accelerated aging studies rather than historical experience, because the technology is still relatively young. == Types and causes of deterioration == There are two main types of deterioration found in photographic materials. Chemical deterioration occurs when the chemicals in the photograph or negative undergo reactions (either through contact with outside catalysts, or because the chemicals are inherently unstable) that damage the material. Physical or structural deterioration occurs when chemical reactions are not involved, and includes abrasion and tearing. Both types of deterioration are caused by three main factors: environmental storage conditions, inappropriate storage enclosures and repair attempts, and human use and handling. Chemical damage can also be caused by improper chemical processing. Different types of photographic materials are particularly susceptible to different types and causes of deterioration. === Environmental factors === Temperature and humidity interact with one another and cause chemical and physical deterioration. High temperature and relative humidity, along with pollution, can cause fading and discoloration of silver images and color dyes. Higher temperatures cause faster deterioration: the rate of deterioration approximately doubles with every temperature increase of 10 °C. Fluctuations in temperature and relative humidity are particularly damaging, as they also speed up chemical deterioration and can cause structural damage such as cracked emulsions and warped support layers. Too-high relative humidity can cause fading, discoloration and silver mirroring, and can cause binders to soften and become sticky, making photographs susceptible to physical damage. It can also cause photographs to adhere to frames and other enclosures. Too-low relative humidity can cause physical damage including desiccation, embrittlement, and curling. Pollution can include oxidant and acidic or sulfiding gases that cause chemical deterioration, as well as dust and particulates that can cause abrasion. Sources of indoor pollution that affect photographs include paint fumes, plywood, cardboard, and cleaning supplies. Exposure to light causes embrittlement, fading, and yellowing. The damage is cumulative and usually irreversible. UV light (including from sunlight and fluorescent light) and visible light in the blue part of the spectrum are especially harmful to photographs, but all forms of light, including incandescent and tungsten, are damaging. 
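As a rough quantitative illustration of the temperature rule of thumb stated above (the deterioration rate approximately doubling with every 10 °C increase), the relative rate at temperature \(T\) compared with a reference temperature \(T_0\) can be written as:

\[
\frac{k(T)}{k(T_0)} \approx 2^{(T - T_0)/10\,^\circ\mathrm{C}}
\]

On this simple model, moving a collection from a 21 °C room to 4 °C storage slows deterioration by a factor of roughly \(2^{1.7} \approx 3\); the rule is only an approximation, and the actual behaviour varies by material.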
Mold growth is fostered by high temperatures and humidity as well as dust particles. Mold causes damage to the surface of photographs and helps break down binder layers. The presence of insects and rodents is also fostered by high temperatures and humidity. They eat paper fiber, albumen, and gelatin binders, leaving chew marks and droppings. Species likely to cause problems include cockroaches and silverfish. === Other factors === Inappropriate storage containers and repair attempts: Cabinets made of inferior materials can give off harmful gases, while other reactive materials such as acidic paper sleeves, rubber bands, paper clips, pressure-sensitive tape, and glues and adhesives commonly used for storage and repairs in the past can also cause chemical deterioration. Storing items too loosely, too tightly, or in enclosures that do not provide adequate physical protection can all cause physical damage such as curling and breakage. Handling and use: Human handling, including by researchers and staff, can also cause both chemical and physical deterioration. Oils, dirt, lotions, and perspiration transmitted through fingerprints can destroy emulsion and cause bleaching, staining, and silver mirroring. Physical damage caused by human handling includes abrasion, scratches, tears, breakage, and cracks. Improper chemical processing: Chemical processing, including use of exhausted fixer, insufficient length of fixing, and residual fixer left behind by inadequate washing, can cause fading and discoloration. Heat, humidity, and light can accelerate such damage. Adherence to ISO standards at the time of processing can help avoid this type of deterioration. === Examples of threats to specific photographic materials === Glass plate negatives and ambrotypes are prone to breakage. Deterioration of film negatives, regardless of type, is humidity and temperature dependent. Nitrate film will first fade, then become brittle and sticky. It will then soften, adhere to paper enclosures, and produce an odor. Finally, it will disintegrate into a brown, acrid powder. Because of its flammability, it must be handled with particular care. Cellulose acetate, diacetate, and triacetate film produce acetic acid, which smells like vinegar. The deterioration process is therefore known as "vinegar syndrome". The negatives become very brittle and, in diacetate and triacetate film, the base shrinks, causing grooves ("channeling"). In addition to fading, silver-based images are prone to silver mirroring, which presents as a bluish metallic sheen on the surface of the photograph or negative and is caused by oxidation, which causes the silver to migrate to the surface of the emulsion. Color photographs are an inherently unstable medium, and are more susceptible to light and fading than black and white photographic processes. They are composed of various dyes, all of which eventually fade, albeit at different rates (causing discoloration along with fading). Many color photographic processes are also susceptible to fading even in the dark (known as "dark-fading"). There is little that can be done to restore faded images, and even under ideal conditions, most color photographs will not survive undamaged for more than 50 years. == Preservation strategies == === Temperature and relative humidity control === Maintenance of a proper environment such as control of temperature and relative humidity (RH; a measure of how saturated the air is with moisture) is extremely important to the preservation of photographic materials. 
Temperature should be maintained at or below 70 °F (21 °C) (the lower the better); an "often-recommended" compromise between preservation needs and human comfort is 65–70 °F (18–21 °C) (storage-only areas should be kept cooler). Temperature is the controlling factor in the stability of contemporary color photographs. For color photographs, storage at low temperatures (40 °F (4 °C) or below) is recommended. Relative humidity should be maintained at 30–50% without cycling more than ±5% a day. The lower part of that range is best for "long term stability of several photographic processes". Not only do relative humidity levels above 60% cause deterioration, but low and fluctuating humidity can also damage photographs. Climate control equipment can be used to control temperatures and humidity. Air conditioners, dehumidifiers, and humidifiers can be helpful, but it is important to make sure they help instead of hurt (for example, air conditioning raises humidity). === Cold storage === Cold storage is recommended for especially vulnerable materials. Original prints, negatives, and transparencies (not glass plates, daguerreotypes, ambrotypes, tintypes, or other images on glass or metal) should be placed in packaging (archival folders in board boxes in double freezer-weight Ziplock bags) in cold storage, and temperatures should be maintained at 1.7–4.4 °C (35.1–39.9 °F). According to National Archives guidelines, clear plastic bags such as Zip-locks or flush-cut bags with twist-ties (polyethylene or polypropylene plastic bags) and cotton gloves are needed. Removing items from cold storage requires letting them acclimate to room conditions. Photographs must be allowed to warm up slowly in a cool, dry place, such as an office or processing area. Original items should be retrieved from cold storage only in an emergency and no more than once a year. Without cold storage, temperature-sensitive materials will deteriorate in a matter of a few decades; with cold storage they can remain unchanged for many centuries. === Light control === Photographs should not be hung near light. Hanging photographs on a wall can cause damage from the exposure to direct sunlight, or to fluorescent lights. Displays of photographs should be changed periodically because most photographs will deteriorate in light over time. UV-absorbing sleeves can be used to filter out damaging rays from fluorescent tubes, and UV-absorbing sheets can be placed over windows or in frames. Low UV-emitting bulbs are available. Light levels should be kept at 50–100 lux (5–10 footcandles) for most photographs when in use for research as well as exhibition. Exposure of color slides to the light in the projector should be kept to a minimum, and photographs should be stored in dark storage. The best way to preserve a photograph is to display a facsimile. === Pollution control === Controlling air quality is difficult. Ideally, air entering a storage or exhibition area should be filtered and purified. Gaseous pollution should be removed with chemical filters or wet scrubbers. Exterior windows should be kept closed when possible. Interior sources of harmful gases should be minimized. Metal cabinets are preferable to wooden cabinets, which can produce harmful peroxides. Air can be filtered to keep out gaseous pollutants and particulates such as nitrogen dioxide, sulphur dioxide, hydrogen sulfide, and ozone. Air filters must be changed regularly to be effective. Air circulation should also be checked periodically. 
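The numerical targets quoted earlier in this section (room storage at or below about 21 °C, cold storage at roughly 1.7–4.4 °C, and relative humidity of 30–50% with daily cycling of no more than about 5%) lend themselves to a simple automated check of environmental-monitor readings. The sketch below is a minimal illustration of that idea using the figures quoted above; the function name and parameters are hypothetical and are not part of any standard.

```python
def check_storage_conditions(temp_c, rh_percent, rh_daily_swing, cold_storage=False):
    """Flag readings that fall outside the guidance quoted in this section.

    temp_c         -- temperature in degrees Celsius
    rh_percent     -- relative humidity in percent
    rh_daily_swing -- largest deviation from the day's mean RH, in percent
    cold_storage   -- True for a cold-storage vault, False for general storage
    """
    warnings = []
    if cold_storage:
        if not 1.7 <= temp_c <= 4.4:
            warnings.append(f"cold storage temperature {temp_c} degC outside 1.7-4.4 degC")
    elif temp_c > 21:
        warnings.append(f"temperature {temp_c} degC above the 21 degC (70 degF) ceiling")
    if not 30 <= rh_percent <= 50:
        warnings.append(f"relative humidity {rh_percent}% outside the 30-50% range")
    if rh_daily_swing > 5:
        warnings.append(f"RH cycling of {rh_daily_swing}% exceeds the 5% daily limit")
    return warnings

# Example: a warm, humid reading in a general storage room.
print(check_storage_conditions(temp_c=24.0, rh_percent=58.0, rh_daily_swing=7.0))
```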
=== Handling control === Handling and use policies should be established, and staff should be trained in the policies, in their enforcement, and in explaining them to users when they arrive. Policies for processing, handling of loaned or exhibited items, and disaster prevention and recovery should also be created and followed. Work spaces should be clean and uncluttered. Clean gloves or clean, dry hands should be used whenever photographs are handled. Food, drink, dirt, cleaning chemicals, and photocopy machines should be kept away from photo storage, exhibit, or work spaces. For precious materials, users should be provided with duplicates, not originals. === Storage systems control === Proper storage materials are essential for the long-term stability of photographs and negatives. Enclosures keep away dirt and pollutants. All enclosures used to house photographs should meet the specifications of the International Organization for Standardization (ISO). Most photographs can be safely kept in paper enclosures; some can also be safely stored in some types of plastic enclosures. Paper enclosures protect objects from light, but may result in increased handling for viewing. Paper enclosures must be acid-free and lignin-free; they are available in both buffered (alkaline, pH 8.5) and unbuffered (neutral, pH 7) stock. Storage materials must pass the ANSI Photographic Activity Test (PAT), which is noted in suppliers’ catalogs. Paper enclosures also protect the photographs from the accumulation of moisture and detrimental gases, and are relatively inexpensive. Plastic enclosures include uncoated polyester film, uncoated cellulose triacetate, polyethylene, and polypropylene. Plastic enclosures are transparent. Photographs can be viewed without being removed from the enclosure, which reduces handling. However, plastic enclosures can trap moisture and cause ferrotyping (sticking, with a resultant glossy area). Plastic is not suitable for prints with surface damage, glass or metal-based photographs, nor for film-based negatives and transparencies from the 1950s, unless the latter are in cold storage. It should not be used to store older safety film negatives, as this may hasten their deterioration. Horizontal storage is preferable for many photographic prints and oversize photographs. It provides overall support to the images and prevents mechanical damage such as bending. Vertical storage is often more efficient and may make access to a collection easier. Materials of similar size should be stored together. Boxes and files should not be overcrowded. === Reproduction and digitization === Unlike the born-digital photographs that are widely produced and consumed today, historical photographs such as old slides, films, and printed photos are not easy to preserve. An important component of long-term photograph preservation is making reproductions (by photocopying, photographing, or scanning and digitizing) of photographs for use in exhibitions and by researchers, which reduces the damage caused by non-controlled environments and handling. Digitizing photographs also allows access by a much wider public, especially where the images have intrinsic historic value. Digital scans, however, are not replacements for the original, as digital file formats may become obsolete. Originals should always be preserved, even if they have been digitized. Born-digital photographs also require preservation, using digital preservation techniques. 
Safeguarding European Photographic Images for Access (SEPIA) lists ten principles for the digitization of historical photographs. Summarized, they are: Photographs are an essential part of our cultural heritage, which embody our past, carry documentary and artistic value, and record the history of photographic processes; Digitizing photographs that deteriorate quickly is an urgent matter, to facilitate access for a large audience; Since digitization is not an end in itself but a tool, selection of photographs to digitize should be based on an understanding of the nature and potential use of the collection; It is essential to define the aims, priorities, technical requirements, procedures and future use for investments; The creation of a digital image is a sophisticated activity that requires photographic expertise and ethical judgment; Digital images need regular maintenance in order to keep pace with changing technologies; A good digitization project requires teamwork, combining expertise on imaging, collection management, IT, conservation, descriptive methods and preservation strategies; The input of specialists in every project is essential to integrate preservation measures in the work-flow, handle fragile materials and avoid damage to the originals; Preservation specialists need to manage digital assets in line with the overall preservation policy of the organization; and Museums, archives and libraries should be actively involved in developing international standards for the long-term preservation of digital collections. ==== Examples ==== An example of digitization as part of a photograph preservation strategy is the photographic collection of the Tay Bridge disaster of 1879. These photographs have been digitized and disseminated more widely. Only the positive prints survive, owing to the widespread practice of recycling the original glass negatives to reclaim the silver content. Even when carefully preserved and kept in the dark, damage can occur through intermittent exposure to light, as shown by damage to one image of the intact bridge. An example of a larger digitization project is the Cased Photographs Project, which provides access to digital images and detailed descriptions of daguerreotypes, ambrotypes, tintypes, and related photographs in the collections of the Bancroft Library and the California State Library. == Conservation treatments == Photograph conservation involves the physical treatment of individual photographs. As defined by the American Institute for Conservation, treatment is "the deliberate alteration of the chemical and/or physical aspects of cultural property, aimed primarily at prolonging its existence. Treatment may consist of stabilization and/or restoration." Stabilization treatments aim to maintain photographs in their current condition, minimizing further deterioration, while restoration treatments aim to return photographs to their original state. Conservation treatments range from very simple tear repairs or flattening to more complex treatments such as stain removal. Treatments vary widely depending on the type of photograph and its intended use. Therefore, conservators must be knowledgeable regarding both of these issues. 
Guides for the preservation of personal and family photograph collections, such as Cornell University's Preserving Your Family Photographs and the AIC's Caring for Your Treasures, recommend that people contact a trained conservator if they have rapidly deteriorating negatives or photographs with active mold growth, staining from pressure-sensitive tape, severe tears, adhesion to enclosures, and other types of damage requiring conservation treatment. == Professional organizations == There are a number of international organizations concerned with conservation of photographs along with other subjects, including the International Council on Archives (ICA), the International Institute for Conservation of Historic and Artistic Works (IIC), and the International Council of Museums - Committee for Conservation (ICOM-CC). The Photographic Records Working Group is a specialty group within the ICOM-CC. In the United States, the national membership organization of conservation professionals is the American Institute for Conservation of Historic and Artistic Works (AIC), to which the Photographic Materials Group (PMG) belongs. The Northeast Document Conservation Center (NEDCC) and the Conservation Center for Art and Historic Artifacts (CCAHA) also play an important role in the field of conservation. The Image Permanence Institute (IPI) at Rochester Institute of Technology is one of the leaders in preservation research of images in particular. === Codes and standards === Photograph conservators and preservation managers are guided in their work by codes of ethics and technical standards. The International Council on Archives publishes a Code of Ethics and Guidelines for Practice. Additionally, members of other professions (such as archivists and librarians) who deal with preservation of photographs do so in accordance with their professional organization's codes of ethics. For example, the Society of American Archivists Code of Ethics states that "Archivists protect all documentary materials for which they are responsible and guard them against defacement, physical damage, deterioration, and theft." The International Organization for Standardization (ISO) and American National Standards Institute (ANSI) both publish technical standards that govern the materials and procedures used in photograph preservation and conservation. The International Federation of Library Associations and Institutions has published a list of ANSI standards pertaining to the care and handling of photographs. == Education and training == Photograph conservators can be found in museums, archives, and libraries, as well as in private practice. Conservators often have earned their master's degrees in art conservation, though many have also been trained through apprenticeship. They often have backgrounds in art history, chemistry, or photography. 
Among numerous programs concerned with conservation of photographs around the world are: University of Amsterdam University of Melbourne in Australia The Royal Danish Academy of Fine Arts Canadian Conservation Institute The National School of Conservation, Restoration and Museology in Mexico (ENCRyM) Royal Institute for the Study and Conservation of Belgium's Artistic Heritage Institut national du patrimoine Université Paris 1 Hochschule für Technik und Wirtschaft Berlin (HTW) Fachhochschule Köln Staatlichen Akademie der Bildenden Künste Stuttgart Swiss Conservation-Restoration Campus Hochschule der Künste Bern Fratelli Alinari Studio Art Centers International, Florence (SACI) Royal College of Art/Victoria and Albert Museum The International master Program in Conservation of Antique Photographs and Paper Heritage held at the EICAP Faculty of Applied arts-Helwan University In addition, Getty Conservation Institute (GCI) works internationally to advance conservation practice in the visual arts. The United States, in particular, has many training or degree programs for photograph conservators offered by graduate schools and organizations such as: Conservation Center for Art and Historic Artifacts (CCAHA) Northeast Document Conservation Center (NEDCC) George Eastman House Buffalo State College Institute of Fine Arts, New York University University of Delaware Campbell Center for Historic Preservation Studies Northern States Conservation Center Smithsonian Center for Materials Research and Education. There are also photographic conservation teaching courses available online from various providers, for instance: Conserve Photography online teaching courses Citaliarestauro.com online teaching course NEDCC online teaching course == Storing photos at home == A photobook or photo album is a series of photographic prints collected by an individual or family in the form of a book. Some book-shaped photo albums have compartments into which photos can be inserted, usually made of plastic; other albums have heavy abrasive paper covered with clear plastic sheets on which photos can be placed. Earlier albums were often simply books of heavy paper, on which photographs could be glued or attached using adhesive corners or pages. From the point of view of professionals, a photo album is not the best way to store photos, especially valuable ones, but family archives are usually preserved in this way. It is recommended that physical images be stored in the following manner: Archive boxes: store hundreds of images securely, but still be easy to find. Acid-free albums: protect photos and allow easy viewing at home. Ideal photo storage involves placing each photo in a separate folder made of buffered or acid-free paper. Buffer paper folders are especially recommended for photographs that have been previously adhered to poor quality material or with an adhesive that will cause further acid formation. Store photographs 8x10 inches or smaller vertically along the long edge of the photograph in a buff paper folder in a larger archival box, and label each folder with appropriate information to identify it. The rigid folder protects the photo from sagging or crumpling as long as the box is not too tightly stuffed or insufficiently filled. Stack large photographs or fragile photos flat in archival boxes with other materials of comparable size. 
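The labelling advice above (each folder identified with enough information to locate a photograph again) can be mirrored in a very small catalogue, whether kept on paper or in a file. The record layout below is purely illustrative; the field names and example values are hypothetical and are not drawn from any standard.

```python
# A purely illustrative record for one folder in a home photo archive.
folder_record = {
    "box": "Archive box 3",
    "folder": "1998 family holiday",
    "item_count": 12,
    "format": "colour prints, 4x6 inch",
    "notes": "stored vertically in a buffered, acid-free folder",
}

def folder_label(record):
    """Compose a short label string to write on the folder itself."""
    return (f'{record["box"]} / {record["folder"]} '
            f'({record["item_count"]} items, {record["format"]})')

print(folder_label(folder_record))
```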
== See also ==
Preservation (library and archival science)
Conservation and restoration of books, manuscripts, documents and ephemera
Collections care
Media preservation
Conservation (cultural heritage)
Conservation and restoration of photographic plates
== Notes == == Further reading ==
Clark, Susie, and Franziska Frey. Care of Photographs. Amsterdam: European Commission on Preservation and Access, 2003.
Eaton, George T. Conservation of Photographs. Kodak Publication No. F-40. Rochester, NY: Eastman Kodak Co., 1985.
Hayes, Sandra. "Preserving History: Digital Imaging Methods of Selected Mississippi Archivists." Mississippi Archivists 65, no. 4 (Winter 2001): 101–102.
Norris, Debra Hess, and Jennifer Jae Gutierrez, eds. Issues in the Conservation of Photographs. Los Angeles: Getty Conservation Institute, 2010.
Reilly, James. Care and Identification of 19th Century Photographic Prints. Kodak Publication No. G2S. Rochester, NY: Eastman Kodak Co., 1986.
Ritzenthaler, Mary Lynn, and Diane Vogt-O'Connor, et al. Photographs: Archival Care and Management. Chicago: Society of American Archivists, 2006.
== External links == === Organizations ===
The Photographic Materials Group of the American Institute for Conservation
The Advanced Residency Program in Photograph Conservation at the George Eastman House
Notes on Photographs, a Wiki from George Eastman House
The Image Permanence Institute
ICOM-CC Photographic Materials Working Group
=== Guides ===
Basics of Photograph Preservation
Preservation and Archives Professionals
Care, Handling, and Storage of Photographs, Library of Congress
The Care and Preservation of Photographic Prints, The Henry Ford Museum
Preservation of Photographs: Select Bibliography, Northeast Document Conservation Center
Caring for Your Photographs, The American Institute for Conservation of Historic and Artistic Works (AIC)
A Consumer Guide to Digital and Print Stability, Image Permanence Institute
Wikipedia/Conservation_and_restoration_of_photographs
Photographic printing is the process of producing a final image on paper for viewing, using chemically sensitized paper. The paper is exposed to a photographic negative, a positive transparency (or slide), or a digital image file projected using an enlarger or digital exposure unit such as a LightJet or Minilab printer. Alternatively, the negative or transparency may be placed atop the paper and directly exposed, creating a contact print. Digital photographs are commonly printed on plain paper, for example by a color printer, but this is not considered "photographic printing". Following exposure, the paper is processed to reveal and make permanent the latent image. == Printing on black-and-white paper == The process consists of four major steps, performed in a photographic darkroom or within an automated photo printing machine. These steps are:
Exposure of the image onto the sensitized paper using a contact printer or enlarger;
Processing of the latent image using the following chemical process:
Development of the exposed image reduces the silver halide in the latent image to metallic silver;
Stopping development by neutralising, diluting or removing the developing chemicals;
Fixing the image by dissolving undeveloped silver halide from the light-sensitive emulsion;
Washing thoroughly to remove processing chemicals protects the finished print from fading and deterioration.
Optionally, after fixing, the print is treated with a hypo clearing agent to ensure complete removal of the fixer, which would otherwise compromise the long-term stability of the image. Prints can be chemically toned or hand coloured after processing. === Panalure paper === Kodak Panalure is a panchromatic black-and-white photographic printing paper. Panalure was developed to facilitate the printing of full-tone black-and-white images from colour negatives – a difficult task with conventional orthochromatic papers due to the orange tint of the film base. Panalure also finds application as paper negatives in large format cameras. It is generally not suitable for conventional black-and-white printing, since it must be handled and developed in near-complete darkness. Kodak has announced that it will no longer produce or sell this product. However, as of 2006, it is still available from various online retailers. === Silver mirroring === Silver mirroring, or "silvering", is a degradation process of old black-and-white photographic prints caused by conversion of the black silver oxide to silver metal. This results in a slightly bluish, reflective patch in the darkest part of a print or negative when examined in raking light. It often indicates improper storage of the prints. == Printing on coloured paper == For more information, see also: Chromogenic print. Colour papers require specific chemical processing in proprietary chemicals. Today's processes are called RA-4, which is for printing colour negatives, and Ilfochrome, for colour transparencies. === Printing from colour negatives === Colour negatives are printed on RA-4 papers and produce a Type C print. These are essentially the same as colour negative films in that they consist of three emulsion layers, each sensitive to red, green and blue light. Upon processing, colour couplers produce cyan, magenta and yellow dyes, representing the true colours of the subject. The processing sequence is very similar to the C-41 process.
Rollei makes a film called 'Digibase 200 Pro' that is similar to a conventional C-41 film but has no orange mask, allowing easy prints on black-and-white paper with a grade 2 or 3 variable-contrast filter. === Printing from colour transparencies === Ilfochrome paper uses the dye destruction process to produce prints from positive transparencies. The colour dyes are incorporated into the paper and bleached during processing. Ilfochrome, EP2 and Type R print papers and chemicals are no longer in production. == References == == See also ==
Contact print
Film developing
Gelatin-silver process
List of photographic processes
Photographic paper
Photographic print toning
Standard photographic print sizes
== External links == Evolution of photo printing technologies: from chemical processes to digital innovations, from Daguerreotypes to Nanoprinting
Wikipedia/Photographic_printing
The CMYK color model (also known as process color, or four color) is a subtractive color model, based on the CMY color model, used in color printing, and is also used to describe the printing process itself. The abbreviation CMYK refers to the four ink plates used: cyan, magenta, yellow, and key (most often black). The CMYK model works by partially or entirely masking colors on a lighter, usually white, background. The ink reduces the light that would otherwise be reflected. Such a model is called subtractive, as inks subtract some colors from white light; in the CMY model, white light minus red leaves cyan, white light minus green leaves magenta, and white light minus blue leaves yellow. In additive color models, such as RGB, white is the additive combination of all primary colored lights, and black is the absence of light. In the CMYK model, it is the opposite: white is the natural color of the paper or other background, and black results from a full combination of colored inks. To save cost on ink, and to produce deeper black tones, unsaturated and dark colors are produced by using black ink instead of or in addition to combinations of cyan, magenta, and yellow. The CMYK printing process was invented in the 1890s, when newspapers began to publish color comic strips. == Halftoning == With CMYK printing, halftoning (also called screening) allows for less than full saturation of the primary colors; tiny dots of each primary color are printed in a pattern small enough that humans perceive a solid color. Magenta printed with a 20% halftone, for example, produces a pink color, because the eye perceives the tiny magenta dots on the large white paper as lighter and less saturated than the color of pure magenta ink. Halftoning allows for a continuous variability of each color, which enables continuous color mixing of the primaries. Without halftoning, each primary would be binary, i.e. on/off, which only allows for the reproduction of eight colors: white, the three primaries (cyan, magenta, yellow), the three secondaries (red, green, blue), and black. == Comparison to CMY == The CMYK color model is based on the CMY color model, which omits the black ink. Four-color printing uses black ink in addition to subtractive primaries for several reasons: In traditional preparation of color separations, a red keyline on the black line art marked the outline of solid or tint color areas. In some cases a black keyline was used when it served as both a color indicator and an outline to be printed in black because usually the black plate contained the keyline. The K in CMYK represents the keyline, or black, plate, also sometimes called the key plate. Text is typically printed in black and includes fine detail (such as serifs). To avoid even slight blurring when reproducing text (or other finely detailed outlines) using three inks would require impractically accurate registration. A combination of 100% cyan, magenta, and yellow inks soaks the paper with ink, making it slower to dry, causing bleeding, or (especially on low-quality paper such as newsprint) weakening the paper so much that it tears. Although a combination of 100% cyan, magenta, and yellow inks would, in theory, completely absorb the entire visible spectrum of light and produce a perfect black, practical inks fall short of their ideal characteristics, and the result is a dark, muddy color that is not quite black. Black ink absorbs more light and yields much better blacks. 
Black ink is less expensive than the combination of colored inks that makes black. A black made with just CMY inks is sometimes called a composite black. When a very dark area is wanted, a colored or gray CMY "bedding" is applied first, then a full black layer is applied on top, making a rich, deep black; this is called rich black. The amount of black to use to replace amounts of the other inks is variable, and the choice depends on the technology, paper and ink in use. Processes called under color removal, under color addition, and gray component replacement are used to decide on the final mix; different CMYK recipes will be used depending on the printing task. == Other printer color models == CMYK, as well as all other process color printing, is contrasted with spot color printing, in which specific colored inks are used to generate the colors seen. Some printing presses are capable of printing with both four-color process inks and additional spot color inks at the same time. High-quality printed materials, such as marketing brochures and books, often include photographs requiring process-color printing, other graphic effects requiring spot colors (such as metallic inks), and finishes such as varnish, which enhances the glossy appearance of the printed piece. CMYK process printing often has a relatively small color gamut. Processes such as Pantone's proprietary six-color (CMYKOG) Hexachrome considerably expand the gamut. Light, saturated colors often cannot be created with CMYK, and light colors in general may make visible the halftone pattern. Using a CcMmYK process, with the addition of light cyan and magenta inks to CMYK, can solve these problems, and such a process is used by many inkjet printers, including desktop models. == Comparison with RGB displays == Comparisons between RGB displays and CMYK prints can be difficult, since the color reproduction technologies and properties are very different. A computer monitor mixes shades of red, green, and blue light to create color images. A CMYK printer instead uses light-absorbing cyan, magenta, and yellow inks, whose colors are mixed using dithering, halftoning, or some other optical technique. Similar to electronic displays, the inks used in printing produce color gamuts that are only subsets of the visible gamut, and the two color modes have their own specific ranges, each capable of producing colors the other cannot. As a result, an image rendered on an electronic display and the same image rendered in print can vary in appearance. When designing images to be printed, designers work in RGB color spaces (electronic displays) capable of rendering colors a CMYK process cannot, and it is often difficult to accurately visualize a printed result that must fit into a different color space that both lacks some colors an electronic display can produce and includes colors it cannot. === Spectrum of printed paper === To reproduce color, the CMYK color model codes for absorbing light rather than emitting it (as is assumed by RGB). The K component ideally absorbs all wavelengths and is therefore achromatic. The cyan, magenta, and yellow components are used for color reproduction and they may be viewed as the inverse of RGB: cyan absorbs red, magenta absorbs green, and yellow absorbs blue (−R,−G,−B).  == Conversion == Since RGB and CMYK spaces are both device-dependent spaces, there is no simple or general conversion formula that converts between them.
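For illustration, the sketch below shows the naive device-independent approximation that is sometimes used when no color profile is available; the function names are illustrative, and the formula is not colorimetrically accurate, which is why practical workflows rely on the profile-based conversions described next.

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK approximation, all channels in the range 0..1.

    Illustrative only: it ignores device gamuts, dot gain and ICC profiles,
    so it is not a colorimetrically accurate conversion.
    """
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b   # subtractive complements of the primaries
    k = min(c, m, y)                      # move the shared gray component into black
    if k >= 1.0:                          # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k


def cmyk_to_rgb(c, m, y, k):
    """Inverse of the naive approximation above."""
    return (1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k)


if __name__ == "__main__":
    # A dark, desaturated blue: most of its density ends up in the K channel.
    print(rgb_to_cmyk(0.2, 0.3, 0.4))   # approximately (0.5, 0.25, 0.0, 0.6)
```

The black extraction in this sketch is the simplest possible rule (the full gray component is moved into K); as the text above notes, real separations vary how much black replaces the chromatic inks depending on press, paper, and ink.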
Conversions are generally done through color management systems, using color profiles that describe the spaces being converted. An ICC profile defines the bidirectional conversion between a neutral "profile connection" color space (CIE XYZ or Lab) and a selected colorspace, in this case both RGB and CMYK. The precision of the conversion depends on the profile itself, the exact methodology, and, because the gamuts do not generally match, the rendering intent and constraints such as ink limit. ICC profiles, internally built out of lookup tables and other transformation functions, are capable of handling many effects of ink blending. One example is dot gain, which shows up as non-linear components in the color-to-density mapping. More complex interactions such as Neugebauer blending can be modelled in higher-dimension lookup tables. The problem of computing a colorimetric estimate of the color that results from printing various combinations of ink has been addressed by many scientists. A general method that has emerged for the case of halftone printing is to treat each tiny overlap of color dots as one of 8 (combinations of CMY) or of 16 (combinations of CMYK) colors, which in this context are known as Neugebauer primaries. The resultant color would be an area-weighted colorimetric combination of these primary colors, except that the Yule–Nielsen effect of scattered light between and within the areas complicates the physics and the analysis; empirical formulas for such analysis have been developed, in terms of detailed dye combination absorption spectra and empirical parameters. Standardization of printing practices allows for some profiles to be predefined. One of them is the US Specifications for Web Offset Publications, which has its ICC color profile built into some software including Microsoft Office (as Agfa RSWOP.icm). The device-dependency of RGB and CMYK also means that a set of RGB or CMYK values cannot uniquely represent a color unless a device or paper–ink–process combination is specified. With RGB the sRGB standard is widespread enough to be the implied default, but there is not a single form of CMYK which is widespread enough to be the default. == See also ==
CMY color model
CcMmYK color model
Cycolor
RGB color model
Gray component replacement
Jacob Christoph Le Blon
SWOP CMYK standard
Color management
Technicolor, the three-strip version of which is based on the CMYK model
== References == == External links ==
XCmyk – Windows software with source code for converting CMYK to RGB.
RGB to CMYK converter – an online tool for converting RGB colors to CMYK.
Color Space Fundamentals – animated illustration of RGB vs. CMYK
ICC profile registry, which lists some standard CMYK profiles, their paper types, and color separation limits
Wikipedia/CMYK_color_model
Scanography (also spelled scannography), more commonly referred to as scanner photography, is the process of capturing digitized images of objects for the purpose of creating printable art using a flatbed "photo" scanner with a CCD (charge-coupled device) array as the capture device. Fine art scanography differs from traditional document scanning in its use of atypical, often three-dimensional objects, and from photography because of the nature of the scanner's operation. == History == The process of creating art with a scanner can be as simple as arranging objects on the scanner and capturing the resulting image; in fact, some early artists in the field worked with photocopiers to capture and print in a single step, resulting in the field of Xerox art. Sonia Landy Sheridan, artist in residence at 3M and founder of the Generative Systems program at the Art Institute of Chicago, was one of the first to exploit this ability in 1968, altering the variables of the photocopying process to produce artwork rather than mere copies. Though the physical process of arranging objects on a glass platen to capture a photogram is shared by both "Xerox" artists and "scanographers", in terms of image quality scanner photography has more in common with large format photography. The process records extremely fine detail with a rather shallow depth of field and produces a digital file (or "digital negative") for printmaking. Using a computer and a photo editor between the scanning and the printing process provides the artist with a greater level of control, allowing, at a minimum, the ability to "clean" the image by removing specks and other imperfections in the capture. With the increased availability and affordability of flatbed color scanners in the 1990s, photoartists could now purchase a scanner rather than rent the equipment and the technician necessary to operate it, as Darryl Curran did in the early 1990s. Renting studio time at Nash Editions, Curran captured "scannograms" of objects from 1993 to 1997. Harold Feinstein's One Hundred Shell and One Hundred Flower series contained scanned images side by side with traditional large format photography. Joseph Scheer scanned moths in Night Visions: The Secret Designs of Moths. From 2003, artist Brian Miller, never manipulating the scan, pioneered the use of movement, lighting, and backgrounds in scanner photo capture while maintaining classical subjects such as figures and fruit; his work is available at Pierogi Gallery, NY, was exhibited from 2005 to 2009 in Madrid, New York, and East Hampton, and was published in 2005 in La Sexualidad Es Tan Fragil Como el Amor (ISBN 84-609-6225-3) and in 2007 in Color Elefante (ISSN 1698-9295). A 2008 exhibition titled "Scanner as Camera" at Washington and Lee University in Lexington, Virginia, drew eight artists from across the United States whose subjects ranged from scanned and digitally manipulated historic ambrotype and tintype photographs and drawings to birds and insects found by the artist. == Capture process == Scanners differ significantly from digital cameras in many areas. First, the optical resolution of a flatbed scanner can exceed 5000 pixels per inch (200 pixels per mm). Even at a relatively low resolution of 1200 pixels per inch (47 p/mm), a letter-sized image would be 134 megapixels in size. The depth of field of most scanners is very limited, usually no more than half an inch (12 mm), but the built-in light source provides excellent sharpness, color saturation, and unique shadow effects.
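As a rough worked example of the resolution figures above, the following sketch computes the pixel dimensions and megapixel count of a flatbed capture; it assumes a full-bed scan of a US letter sheet (8.5 × 11 inches), the exact scannable area varies by model, and the function name is illustrative.

```python
def scan_size(width_in, height_in, ppi):
    """Pixel dimensions and megapixel count of a flatbed capture.

    Assumes the whole stated area is scanned at a uniform resolution;
    actual scannable areas vary by scanner model.
    """
    w_px = round(width_in * ppi)
    h_px = round(height_in * ppi)
    return w_px, h_px, w_px * h_px / 1e6   # width px, height px, megapixels


if __name__ == "__main__":
    # A letter-sized area at the 1200 ppi figure quoted above:
    print(scan_size(8.5, 11, 1200))   # (10200, 13200, 134.64) -> about 134 megapixels
    # The same area at a 5000 ppi optical resolution:
    print(scan_size(8.5, 11, 5000))   # (42500, 55000, 2337.5) -> over 2 gigapixels
```

At such sizes the limiting factors quickly become scan time, file size, and the shallow depth of field noted above rather than pixel count.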
The time it takes the scanning head to traverse the bed means that scanners can only be used to capture still objects, and common items used are flowers, leaves, and other suitable "still life" subjects. === Equipment === Using a flatbed scanner to scan items other than paper documents exceeds the original purpose of the scanner, so special care must be taken with the process. The bed of the scanner is typically made of glass, and care needs to be taken that the glass not be scratched or cracked when placing or removing items on the bed. Since the items to be captured are often placed directly on the bed, dust and other particles will often land on the glass, and care must be taken to keep the glass clean. Scanners will also hold only a limited amount of weight, and items that may damage the scanner, such as liquids or items that might scratch the glass, should be placed on a plastic barrier to protect the bed. Alternatively, picture frame glass cut a few inches larger than the scanner housing will protect the platen and the device from weeping botanical specimens, paints, melting ice, burning leaves, or whatever the challenge may be. There are only two standard flatbed scanner sizes: "document" (slightly larger than a sheet of letterhead-size paper) and "large format" (approximately the size of two sheets of paper side by side). Many scanners advertise two resolutions, an optical resolution and a higher resolution that is achieved by interpolation. A higher optical resolution is desirable, since that captures more data, while interpolation can actually result in reduced quality. The higher the resolution (meaning the number of pixels per inch, "ppi"), the larger the print size. Flatbed scanners typically have a hinged cover that covers the bed and reflects light back into the scan head. This cover is usually removed or propped open when scanning 3-D objects, to prevent damage or compression of the subject. Removal of the cover also allows the artist to use additional light sources positioned above the bed, which can be used to enhance the depth captured by the scanner. Scanners can also be modified to provide additional capture abilities. For example, the scanner, with the illumination removed or disabled, can be used as a giant CCD replacement, producing a large format digital camera back at a fraction of the cost of professional large format systems. === Techniques === The simplest use of the scanner, which also most closely matches its use for document capture, is as a specialized tool for macro photography. As long as the subject can be placed on the scanner bed, the scanner is excellent for capturing very high resolution images, within its limitations. This also has a very practical application, as it can be used to make images of items being sold on auction sites such as eBay that are too small to be easily photographed with consumer-level digital cameras. A common artistic use of the scanner is to capture collages of objects. The objects are arranged by the artist on the scanner bed, and then captured. Since the artist is working from the back of the image, it can be difficult to get the desired arrangement. Scanning software with the ability to generate a low resolution preview scan can help in obtaining the desired arrangement before the final, high resolution scan is made.
Since the subjects are often placed in contact with the scanner, there is a high potential for damage to the scanner from objects scratching or cracking the surface of the bed, or from liquids that might seep from the subject into the interior of the scanner. These risks can be mitigated by placing a layer of transparent protective material, such as clear plastic film, onto the scanner bed. Another approach is to invert the scanner, so the bed is above the subject and not quite in contact with it. Capturing a moving subject with the scanner can be viewed as a problem, or as an opportunity for artistic effect. As the subject moves during the scan, distortions are caused along the axis of the scan head's movement, as it captures different periods of the subject's movement line by line in a manner similar to slit-scan photography; these are forms of strip photography. The artist can use this by aligning the direction of the scan head's movement to deliberately cause the desired distortion. ==== Stereoscopic scanning ==== A variation of macrophotography involves using the scanner to produce stereoscopic or "3D" images of small objects. This is made possible because of the optical system of a typical scanner, which uses prisms to put the sensor at an optical distance from the glass of 3 to 4 feet, allowing a small sensor to cover the entire width of the bed, while keeping the bed physically shallow. This also gives better than expected depth of field, and introduces a certain amount of parallax when the same object appears at different positions on the bed. This allows the generation of stereo pairs, much like the "shift" technique where a single camera is shifted to produce right and left views of a still life scene. This technique probably goes back to the earliest days of flatbed scanners and was mentioned on the photo-3D mail list by Bob Wier on December 14, 1995, though he makes vague reference to earlier experiments by others. Though it could be described as a trivial application of a centuries-old technique to a new device, the concept is not widely known, even among stereo photography enthusiasts. This may be due to the common misconception that the typical flatbed scanner uses an imager that spans the width of the bed, thus leading to the assumption that shifting objects would not produce parallax. The most basic version of this technique involves simply placing the object upside down on the scanner and moving it by hand, but this leads to irregularities between the two images. Better results can be obtained by placing the object in a glass front display box and sliding the box against a straight edge. Smaller objects such as seeds can be placed on a microscope slide and secured using small adhesive labels. Another, more involved technique is to remove the lid and turn the scanner upside down, then move the scanner rather than the subject. This allows the imaging of extremely flexible objects as well as objects such as small plants which cannot be turned upside down. A variation of this method was used in a patented system which involved mechanically moving an inverted scanner to generate multiple views to produce 3D lenticular artwork. This was marketed briefly as a "lenticular starter kit." The product has since been discontinued but the inventor continues to use it to produce his own artwork. 
Images generated this way can be edited with stereo imaging software and viewed as traditional stereo pairs or can be converted to any of a number of formats, including anaglyphs, which are viewed using common bicolor 3D glasses, such as those often used with 3D TV and printed materials. Anaglyphs can be printed with normal printers and used as 3D posters. The high resolution of consumer-level flatbed scanners allows taking stereoscopic images of objects that would otherwise be possible only through a stereo microscope, with similar limitations involving depth of field. The scanner, of course, does not feature adjustable focus, so the sharpest focus will always be closest to the glass. A wide variety of objects have been stereographed in this fashion, including figurines, fulgurites, fossils, mineral specimens, seeds, and coins. == Further manipulation == While the result of a scanner capture provides a work of digital art or media art, just as a digital photograph does, further manipulation of the captured image is possible as well. This may range from something as simple as flattening the background to enhance the "floating" effect provided by the scanner to a complete reworking of the image/photograph. == See also ==
Digital art
Digital image
Digital photography
== References == == External links ==
Media related to Scanography at Wikimedia Commons
http://www.scannography.org contains examples and information about techniques
Wikipedia/Scanography
Pornography (colloquially called porn or porno) is sexually suggestive material, such as a picture, video, text, or audio, intended for sexual arousal. Made for consumption by adults, pornographic depictions have evolved from cave paintings, some forty millennia ago, to modern-day virtual reality presentations. A general distinction is made for adults-only sexual content, classifying it as either pornography or erotica. The oldest artifacts considered pornographic were discovered in Germany in 2008 and are dated to be at least 35,000 years old. Human fascination with representations of sexual imagery has been a constant throughout history. However, the reception of such imagery has varied according to historical, cultural, and national contexts. The Indian Sanskrit text Kama Sutra (3rd century CE) contained prose, poetry, and illustrations regarding sexual behavior, and the book was celebrated, while the British English text Fanny Hill (1748), considered "the first original English prose pornography," has been one of the most prosecuted and banned books. In the late 19th century, a film by Thomas Edison that depicted a kiss was denounced as obscene in the United States, whereas Eugène Pirou's 1896 film Bedtime for the Bride was received very favorably in France. From the mid-twentieth century onward, societal attitudes towards sexuality became more lenient in the Western world, where legal definitions of obscenity were narrowed. In 1969, Blue Movie by Andy Warhol became the first film depicting unsimulated sex to receive a wide theatrical release in the United States. This was followed by the "Golden Age of Porn" (1969–1984). The introduction of home video and the World Wide Web in the late 20th century led to global growth in the pornography business. Beginning in the 21st century, greater access to the Internet and affordable smartphones made pornography more mainstream. Pornography has been credited with providing a safe outlet for sexual desires that may not be satisfied within relationships and with facilitating sexual fulfillment in people who do not have a partner. Pornography consumption has been found to induce psychological moods and emotions similar to those evoked during sexual intercourse and casual sex. Pornography usage is considered a widespread recreational activity in line with other digitally mediated activities such as use of social media or video games. People who regard porn as sex education material have been identified as less likely to use condoms in their own sex lives, thereby assuming a higher risk of contracting sexually transmitted infections (STIs); performers working for pornographic studios undergo regular testing for STIs, unlike much of the general public. Comparative studies indicate that higher tolerance and consumption of pornography among adults tend to be associated with greater support for gender equality. Among feminist groups, some seek to abolish pornography, believing it to be harmful, while others oppose censorship efforts, insisting it is benign. A longitudinal study found that pornography use is not a predictive factor in intimate partner violence. Porn Studies, started in 2014, is the first international peer-reviewed academic journal dedicated to the critical study of pornographic "products and services". Pornography is a major influence on people's perception of sex in the digital age; numerous pornographic websites rank among the top 50 most visited websites worldwide.
Called an "erotic engine", pornography has been noted for its key role in the development of various communication and media processing technologies. For being an early adopter of innovations and a provider of financial capital, the pornography industry has been cited to be a contributing factor in the adoption and popularization of media related technologies. The exact economic size of the porn industry in the early twenty-first century is unknown. In 2023, estimates of the total market value stood at over US$172 billion. The legality of pornography varies across countries. People hold diverse views on the availability of pornography. From the mid-2010s, unscrupulous pornography such as deepfake pornography and revenge porn have become issues of concern. == Etymology and definition == The word pornography is a conglomerate of two ancient Greek words: πόρνος (pórnos) "fornicators", and γράφειν (gráphein) "writing, recording, or description". In Greek language, the term pornography connotes depiction of sexual activity; no date is known for the first use of the term pornography, the earliest attested, most related word found is πορνογράφος (pornographos) i.e. "someone writing about harlots" in the 3rd century CE work Deipnosophists by Athenaeus. The oldest published reference to the word pornography as in 'new pornographie,' is dated back to 1638 and is credited to Nathaniel Butter in a history of the Fleet newspaper industry. The modern word pornography entered the English language as the more familiar word in 1842 via French "pornographie," from Greek "pornographos". The term porn is an abbreviation of pornography. The related term πόρνη (pórnē) "prostitute" in Greek, originally meant "bought, purchased" similar to pernanai "to sell", from the proto-Indo-European root per-, "to hand over" — alluding to act of selling. The word pornography was originally used by classical scholars as "a bookish, and therefore inoffensive term for writing about prostitutes", but its meaning was quickly expanded to include all forms of "objectionable or obscene material in art and literature". In 1864, Webster's Dictionary published "a licentious painting" as the meaning for pornography, and the Oxford English Dictionary: "obscene painting" (1842), "description of obscene matters, obscene publication" (1977 or earlier). Definitions for the term "pornography" are varied, with people from both pro- and anti-pornography groups defining it either favorably or unfavorably, thus making any definition very stipulative. Nevertheless, academic researchers have defined pornography as sexual subject material such as a picture, video, text, or audio that is primarily intended to assist sexual arousal in the consumer, and is created and commercialized with "the consent of all persons involved". Arousal is considered the primary objective, the raison d'etre a material must fulfill for it to be treated as pornographic. As some people can feel aroused by an image that is not meant for sexual arousal and conversely cannot feel aroused by material that is clearly intended for arousal, the material that can be considered as pornography becomes subjective. == Pornography throughout history == === Pornography from ancient times === Pornography is viewed by historians as a complex cultural formation. Depictions of a sexual nature existed since prehistoric times as seen in Venus figurines and rock art. 
People across various civilizations created works that depicted explicit sex; these include artifacts, music, poetry, and murals, among other things, often intertwined with religious and supernatural themes. The oldest artifacts, including the Venus of Hohle Fels, which is considered to be borderline pornographic, were discovered in 2008 in a cave near Stuttgart in Germany; radiocarbon dating suggests they are at least 35,000 years old, from the Aurignacian period. A vast number of artifacts discovered in the ancient Mesopotamian region bear explicit depictions of heterosexual sex. Glyptic art from the Sumerian Early Dynastic Period frequently showed scenes of frontal sex in the missionary position. In Mesopotamian votive plaques from the early second millennium (c. 2000 – c. 1500 BCE), a man is usually shown penetrating a woman from behind while she bends over drinking beer through a straw. Middle Assyrian lead votive figurines often portrayed a man standing and penetrating a woman as she rests on an altar. Scholars have traditionally interpreted all these depictions as scenes of hieros gamos (an ancient sacred marriage between a god and a goddess), but they are more likely to be associated with Inanna, the Mesopotamian goddess of sex and sacred prostitution. Many sexually explicit images, including models of male and female sexual organs, were found in the temple of Inanna at Assur. Depictions of sexual intercourse were not part of the general repertory of ancient Egyptian formal art, but rudimentary sketches of heterosexual intercourse have been found on pottery fragments and in graffiti. The final two thirds of the Turin Erotic Papyrus (Papyrus 55001), an Egyptian papyrus scroll discovered at Deir el-Medina, consists of a series of twelve vignettes showing men and women in various sexual positions. The scroll was probably painted in the Ramesside period (1292–1075 BCE) and its high artistic quality indicates that it was produced for a wealthy audience. No other similar scrolls have yet been discovered. Archaeologist Nikolaos Stampolidis has noted that the society of ancient Greece held lenient attitudes towards sexual representation in the fields of art and literature. The Greek poet Sappho's Ode to Aphrodite (600 BCE) is considered one of the earliest examples of lesbian poetry. Red-figure pottery, invented in Greece around 530 BCE, often portrayed erotic imagery. The fifth-century BCE comic playwright Aristophanes described the male genitalia in 106 ways and the female genitalia in 91. Lysistrata (411 BCE) is a sex-war comedy play performed in ancient Greece. In India, Hinduism embraced an inquisitive attitude towards sex as an art and a spiritual ideal. Some ancient Hindu temples incorporated various aspects of sexuality into their artwork. The temples at Khajuraho and Konark are particularly renowned for their sculptures, which had detailed representations of human sexual activity. These depictions were viewed with a spiritual outlook, as sexual arousal was believed to indicate an embodiment of the divine. "pornography is sometimes characterised as the symptom of a degenerate society, but anyone even noddingly familiar with Greek vases or statues on ancient Hindu temples will know that so-called unnatural sex acts, orgies and all manner of complex liaisons have for millennia past been represented in art for the pleasure and inspiration of the viewer everywhere. The desire to ponder images of love-making is clearly innate in the human – perhaps particularly the male – psyche."
— Tom Hodgkinson Kama, the word used to connote sexual desire, was explored in Indian literary works such as the Kama Sutra, which dealt with the practical as well as the psychological aspects of human courtship and sexual intercourse. The Sanskrit text Kama sutra was compiled by the sage Vatsyayana into its final form sometime during the second half of the third century CE. This text, which included prose, poetry, as well as illustrations regarding erotic love and sexual behavior, is one of the most celebrated Indian erotic works. Koka shastra is another medieval Indian work that explored kama. ==== Pornography from the Roman era ==== When large-scale archaeological excavations were undertaken in the ancient Roman city of Pompeii during the 18th century, much of the erotic art in Pompeii and Herculaneum came to light, shocking the authorities who endeavored to hide them away from the general public. In 1821, the moveable objects were locked away in the Secret Museum in Naples, and what could not be removed was either covered or cordoned off from public view. Other examples of early art and literature of sexual nature include: Ars Amatoria (Art of Love), a second-century CE treatise on the art of seduction and sensuality by the Roman poet Ovid; the artifacts of the Moche people in Peru (100 CE to 800 CE); The Decameron, a collection of short stories, some of which are sexual in nature by the 14th-century Italian author Giovanni Boccaccio; and the fifteenth-century Arabic sex manual The Perfumed Garden. === Pornography from early modern era === A highly developed culture of visual erotica flourished in Japan during the early modern era. From at least the 17th century, erotic artworks became part of the mainstream social culture. Depictions of sexual intercourse were often presented on pictures that were meant to provide sex education for medical professionals, courtesans, and married couples. Makura-e (pillow pictures) were made for entertainment as well as for the guidance of married couples. The ninth-century Japanese art form "Shunga", which depicted sexual acts on woodblock prints and paintings became so popular by the 18th century that the Japanese government began to issue official edicts against them. Even so, Japanese erotica flourished with the works of artists such as Suzuki Harunobu achieving worldwide fame. Japanese censorship laws enacted in 1870 made the production of erotic works difficult. The laws remained in effect until the end of the Pacific War in 1945; nevertheless, pornography flourished through the sale of "erotic, grotesque, nonsense" (ero-guro-nansensu) periodicals, particularly in the Taishō era (1912–1926). From the 1960s, pink films, which portrayed sexual themes became popular in Japan. In 1981 the first Japanese Adult video (AV) was released. The Japanese pornography industry peaked in the early 2000s when about 30,000 AVs were made a year. From the mid-2010s, increased availability of free porn on the Internet led to a decline in the production of AVs. Other forms of adult entertainment such as hentai, which refers to pornographic manga and anime, and erotic video games have become popular in recent decades. In Europe, the Italian Renaissance work from the 16th century - I Modi (The Ways) also known as The Sixteen Pleasures became famous for its engravings that explicitly depicted sex positions. The publication of this book was considered the beginning of print pornography in Rome. 
The second edition of the book, titled Aretino Postures, was published in 1527 and combined erotic images with text, a first in Western culture. The Vatican called for the complete destruction of all copies of the book and the imprisonment of its author Marcantonio Raimondi. With the development of the printing press in Europe, the publication of written and visual material that was essentially pornographic began. Heptaméron, written in French by Marguerite de Navarre and published posthumously in 1558, is one of the earliest examples of salacious texts from this era. Beginning with the Age of Enlightenment and advances in printing technology, the production of erotic material became popular enough that an underground marketplace for such works developed in England, with a separate publishing and bookselling business. Historians have identified the 18th century as an age of pornographic opulence. Written by anonymous authors, the titles The Progress of Nature (1744); The History of the Human Heart: or, the Adventures of a Young Gentleman (1749), which had descriptions of female ejaculation; and The Child of Nature (1774) have been noted as prominent pornographic fictional works from this period. The book Fanny Hill (1748) is considered "the first original English prose pornography, and the first pornography to use the form of the novel." An erotic literary work by John Cleland, Fanny Hill was first published in England as Memoirs of a Woman of Pleasure. The novel has been one of the most prosecuted and banned books in history. The author John Cleland was charged with "corrupting the King's subjects." At around the same time, erotic graphic art, which began to be produced extensively in Paris, came to be known in the Anglosphere as "French postcards". Enlightenment-era France has been noted by historians as the center of origin for modern-era pornography. The works of French pornography, which often concentrated on the education of an ingénue into a libertine, dominated the sale of sexually explicit content. The French sought to interlace narratives of sexual pleasure with a philosophical and anti-establishment basis. Political pornography began with the French Revolution (1789–99). Apart from the sexual component, pornography became a popular medium for protest against the social and political norms of the time. Pornography during this period was used to explore ideas of sexual freedom for women and men and the various methods of contraception, and to expose the offenses of powerful royals and elites. The working and lower classes in France produced pornographic material en masse with themes of impotence, incest, and orgies that ridiculed the authority of the Church-State, aristocrats, priests, monks, and other royalty. One of the most important authors of socially radical pornography was the French aristocrat Marquis de Sade (1740–1814), from whose name the words "sadism" and "sadist" were derived. He advocated libertine sexuality and published writings that were critical of authorities, many of which contained pornographic content. His work Justine (1791) interlaced orgiastic scenes with extensive debates on the ills of property and traditional hierarchy in society. ==== Pornography in the Victorian era ==== During the Victorian era (1837–1901), the invention of the rotary printing press made the publication of books easier; many works of a lascivious nature were published during this period, often under pen names or anonymously.
In 1837, the Holywell Street (known as "Booksellers' Row") in London had more than 50 shops that sold pornographic material. Many of the works published in the Victorian era are considered bold and graphic even by today's lenient standards. The English novel The Adventures, Intrigues, and Amours, of a Lady's Maid! written by anonymous "Herself" (c. 1838) professed the notion that homosexual acts are more pleasurable for women than heterosexuality which is linked to painful and uncomfortable experiences. Some of the popular publications from this era include: The Pearl (magazine of erotic tales and poems published from 1879 to 1881); Gamiani, or Two Nights of Excess (1870) by Alfred de Musset; and Venus in Furs (1870) by Leopold von Sacher-Masoch, from whose name the term "masochism" was derived. The Sins of the Cities of the Plain (1881) is one of the first sole male homosexual literary work published in English, this work is said to have inspired another gay literary work Teleny, or The Reverse of the Medal (1893), whose authorship has often been attributed to Oscar Wilde. The Romance of Lust, written anonymously and published in four volumes during 1873–1876, contained graphical descriptions of themes detailing incest, homosexuality, and orgies. Other publications from the Victorian era that included fetish and taboo themes such as sadomasochism and 'cross-generational sex' are: My Secret Life (1888–1894) and Forbidden Fruit (1898). On accusations of obscenity many of these works had been outlawed until the 1960s. === Criminalization === ==== The UK Obscene Publications Act ==== The world's first law that criminalized pornography was the UK Obscene Publications Act 1857, enacted at the urging of the Society for the Suppression of Vice. The act passed by the British Parliament in 1857 applied to the United Kingdom and Ireland. The act made the sale of obscene material a statutory offense, and gave the authorities the power to seize and destroy any material which they considered as obscene. For centuries before, sexually explicit material was considered a domain that is exclusive to aristocratic classes. When pornographic material flourished in the Victorian-era England, the affluent classes believed they are sensible enough to deal with it, unlike the lower working classes whom they thought would get distracted by such material and cease to be productive. Beliefs that masturbation would make people ill, insane, or become blind also flourished. The obscenity act gave government officials the power to interfere in the private lives of people unlike any other law before. Some of the people suspected for masturbation were forced to wear chastity devices. "Cures" and "treatment" for masturbation involved such measures like giving electric shock and applying carbolic acid to the clitoris. The law was criticized for being established on still yet unproven claims that sexual material is noxious for people or public health. ==== The US Comstock Act ==== In 1865, the US postal service was seen as a "vehicle" for the transmission of materials that were deemed obscene by the American lawmakers. An act relating to the postal services was passed, which made people pay a fine of $500 for knowingly mailing any "obscene book, pamphlet, picture print, or other publication". 
From 1865 until the first three months of 1872, a total of nine people were held on various charges of obscenity, with one person sentenced to prison for a year, while in the next ten months fifteen people were arrested under this law. This was partly due to the efforts of Anthony Comstock, who became a major figure in 1872 and held great power to control people's sex-related activities, including the choice of abortion. The Comstock Act of 1873 is the American equivalent of the British Obscene Publications Act. The anti-obscenity bill, drafted by Anthony Comstock, was debated for less than an hour in the US Congress before being passed into law. Apart from the power to seize and destroy any material alleged to be obscene, the law made it possible for the authorities to make arrests over any perceived act of obscenity, which included possession of contraceptives by married couples. Reportedly, 15 tons of books and 4 million pictures were destroyed, about 4,000 arrests were made, and about 15 people were driven to suicide. At least 55 people whom Comstock identified as abortionists were indicted under the Comstock Act. ==== Steps towards liberalization ==== The laws regarding pornography have differed in various historical, cultural, and national contexts. The English Act did not apply to Scotland, where the common law continued to apply. Before the English Act, publication of obscene material was treated as a common law misdemeanor, which made effectively prosecuting authors and publishers difficult even in cases where the material was clearly intended as pornography. However, neither the English nor the United States act defined what constituted "obscene", leaving this for the courts to determine. To implement the Comstock Act, the US courts used the British Hicklin test to define obscenity, a definition first proposed in 1868, eleven years after the passing of the English obscenity act. The definition became cemented in 1896 and continued until the mid-twentieth century. From 1957 to 1997, the US Supreme Court made numerous judgments that redefined obscenity. The nineteenth-century legislation eventually outlawed the publication, retail and trafficking of certain writings and images that were deemed pornographic. Although laws ordered the destruction of shop and warehouse stock meant for sale, the private possession and viewing of (some forms of) pornography was not made an offense until the twentieth century. Historians have explored the role of pornography in determining social norms. The Victorian attitude that pornography was only for a select few is seen in the wording of the Hicklin test, stemming from a court case in 1868, where it asked: "whether the tendency of the matter charged as obscenity is to deprave and corrupt those whose minds are open to such immoral influences". Although officially prohibited, the sale of sexual material nevertheless continued through "under the counter" means. Magazines specialising in a genre called "saucy and spicy" became popular during this period (1896 to 1955); popular titles included Wink: A Whirl of Girls, Flirt: A FRESH Magazine, and Snappy. Cover stories in these magazines featured segments such as "perky pin-ups" and "high-heel cuties".
Some of the popular erotic literary works from the twentieth century include the novels: Story of the Eye (1928), Tropic of Cancer (1934), Tropic of Capricorn (1938), the French Histoire d'O (Story of O) (1954); and the short stories: Delta of Venus (1977), and Little Birds (1979). ==== Invention of photography and filmography ==== After the invention of photography, the birth of erotic photography followed. The oldest surviving image of a pornographic photo is dated back to about 1846, described as to depict "a rather solemn man gingerly inserting his penis into the vagina of an equally solemn and middle-aged woman". At one point of time, it was more expensive to purchase an erotic photograph than to hire a prostitute. The Parisian demimonde included Napoleon III's minister, Charles de Morny, an early patron who delighted in acquiring and displaying erotic photos at large gatherings. Pornographic film production commenced almost immediately after the invention of the motion picture in 1895. A pioneer of the motion picture camera, Thomas Edison, released various films, including The Kiss that were denounced as obscene in late 19th century America. Two of the earliest pioneers of pornographic films were Eugène Pirou and Albert Kirchner. Kirchner directed the earliest surviving pornographic film for Pirou under the trade name "Léar". The 1896 film, Le Coucher de la Mariée, showed Louise Willy performing a striptease. Pirou's film inspired a genre of risqué French films that showed women disrobing, and other filmmakers realized profits could be made from such films. === Legalization === Sexually explicit films opened producers and distributors to be liable for prosecution. Such films were produced illicitly by amateurs, starting in the 1920s, primarily in France and the United States. Processing the film was risky as was their distribution, which was strictly private. In the Western world, during the 1960s, social attitudes towards sex and pornography slowly changed. In 1967, Denmark repealed the obscenity laws on literature; this led to a decline in the sale of pornographic and erotic literature. Hoping for a similar effect, in the summer of 1969, legislators in Denmark abolished censorship on picture pornography, thereby effectively becoming, from July 1, 1969, the first country that legalized pornography, including child pornography, which was later prohibited in 1980. The 1969 legislation, instead of resulting in a decline in pornography production, led to an explosion of investment in, and commercial production of pornography in Denmark, which made the country's name synonymous with sex and pornography. The total retail turnover of pornography in Denmark for the year 1969 was estimated at $50 million. Much of the pornographic material produced in Denmark was smuggled into other countries around the world. In the United States, pornography is protected by the First Amendment to the United States Constitution unless it constitutes obscenity or child pornography that is produced with real children. Nevertheless, in Stanley v. Georgia (1969), the U.S. Supreme Court upheld the right of an adult to possess obscene material in private. Subsequently, however, the Supreme Court rejected the claim that under Stanley there is a constitutional right to provide obscene material for private use or to acquire it for private use. The right to possess obscene material does not imply the right to provide or acquire it, because the right to possess it "reflects no more than ... 
the law's 'solicitude to protect the privacies of the life within [the home]'". In 1969, Blue Movie by Andy Warhol became the first feature film to depict explicit sexual intercourse that received a wide public theatrical release in the United States. Film scholar Linda Williams remarked that prurience "is a key term in any discussion of moving-image sex since the sixties. Often it is the "interest" to which no one wants to own up". In 1968, the Motion Picture Association of America created a new film ratings system in which any film that was not approved by the association was released with an "X" rating. When pornographers began to release their productions with the rating X, the association adopted NC-17 rating for adults only films, leaving the X rating to pornography. Later the invented gimmick rating "XXX" became a standard for pornographic material. ==== Commissions and their findings ==== In 1970, the United States President's Commission on Obscenity and Pornography, set up to study the effects of pornography, reported that there was "no evidence to date that exposure to explicit sexual materials plays a significant role in the causation of delinquent or criminal behavior among youths or adults". The report further recommended against placing any restriction on the access of pornography by adults and suggested that legislation "should not seek to interfere with the right of adults who wish to do so to read, obtain, or view explicit sexual materials". Regarding the notion that sexually explicit content is improper, the Commission found it "inappropriate to adjust the level of adult communication to that considered suitable for children". The Supreme Court supported this view. In 1971, Sweden removed its obscenity clause. Further relaxation of legislations during the early 1970s in the US, West Germany and other countries led to rise in pornography production. The 1970s had been described by Linda Williams as 'the "Classical" Era of Theatrically Exhibited Porn', a time period now called the Golden Age of Porn. In 1979, the British Committee on Obscenity and Film Censorship better known as the Williams Committee, formed to review the laws concerning obscenity reported that pornography could not be harmful and to think anything else is to see pornography "out of proportion". The committee declared that existing variety of laws in the field should be scrapped and so long as it is prohibited from children, adults should be free to consume pornography as they see fit. The Meese Report in 1986 argued against loosening restrictions on pornography in the US. The report was criticized as biased, inaccurate, and not credible. In 1988, the Supreme Court of California ruled in the People v. Freeman case that "filming sexual activity for sale" does not amount to procuring or prostitution and shall be given protection under the first amendment. This ruling effectively legalized the production of X-rated adult content in the Los Angeles county, which by 2005 had emerged as the largest center in the world for the production of pornographic films. Pornographic films appeared throughout the twentieth century. First as stag films (1900–1940s), then as porn loops or short films for peep shows (1960s), followed by as feature films for theatrical release in adult movie theaters (1970s), and as home videos (1980s). 
==== Role of magazines in legalization ====

Pornographic magazines published during the mid-twentieth century have been noted for playing an important role in the sexual revolution and the liberalization of laws and attitudes towards sexual representation in the Western world. In 1953, Hugh Hefner published the first US issue of Playboy, a magazine which, as Hefner described it, was a "handbook for the urban male". The magazine contained images of nude women along with articles and interviews covering politics and culture. Twelve years later, in 1965, Bob Guccione started his publication Penthouse in the UK, and published its first American issue in 1969 as a direct competitor to Playboy. In its early days, the images of naked women published in Playboy did not show any pubic hair or genitals. Penthouse became the first magazine to show pubic hair in 1970. Playboy followed suit, and a competition ensued between the two magazines over the publication of ever racier pictures, a contest eventually labeled the "Pubic Wars".

"We were the first to show full frontal nudity. The first to expose the clitoris completely. I think we made a very serious contribution to the liberalization of laws and attitudes. HBO would not have gone as far as it does if it was not for us breaking the barriers. Much that has happened now in the Western world with respect to sexual advances is directly due to steps that we took." — Bob Guccione, Penthouse founder, in 2004.

The tussle between Playboy and Penthouse paled into obscurity when Larry Flynt started Hustler, which became the first magazine to publish labial "pink shots" in 1974. Hustler projected itself as the magazine for the working classes, as opposed to the urban-centered Playboy and Penthouse. Around the same time, in 1972, Helen Gurley Brown, editor of Cosmopolitan magazine, published a centerfold that featured actor Burt Reynolds in the nude. His pose has since been emulated by many other famous people. The success of Cosmo led to the launch of Playgirl in 1973. At their peak, Playboy sold close to six million copies a month in the US, while Penthouse sold nearly five million. In the 2010s, as the market for printed pornographic magazines declined, with Playboy selling about a million copies and Penthouse about a hundred thousand, many magazines became online publications. As of 2005, the best-selling US adult magazines maintained greater reach than most other non-pornographic magazines and often ranked among top sellers.

=== Modern-day pornography ===

Modern-day pornography began to take shape from the mid-1980s, when the first desktop computers and public computer networks were released. Since the 1990s, the Internet has made pornography more accessible and culturally visible. Beginning in the late 1980s and early 1990s, Usenet newsgroups served as the base for what has been called the "amateur revolution", in which non-professionals, with the help of digital cameras and the Internet, created and distributed their own pornographic content independently of mainstream networks. Use of the World Wide Web became popular with the introduction of Netscape Navigator in 1994. This development led to newer methods of pornography distribution and consumption. The Internet turned out to be a popular source for pornography and was called the "Triple-A Engine" for offering consumers "anonymity, affordability, and accessibility", while driving the business of pornography.
The notion of the Internet being a medium abounding with porn became popular enough that in 1995 Time published a cover story titled "CYBERPORN" with the face of a shocked child as the cover photo. In the Reno v. ACLU (1997) ruling, the US Supreme Court upheld the legality of pornography distribution and consumption by adults over the Internet. The Court noted that the government may not reduce communication between adults to "only what is fit for children". With the introduction of broadband connections, much of the distribution network for pornography moved online, giving consumers anonymous access to a wide range of pornographic material. To have better control over their content on the Internet, some professional pornographers maintain their own websites. Danni's Hard Drive, started in 1995 by Danni Ashe, a former stripper and nude model who coded the site herself, is considered one of the earliest online pornographic websites; CNN reported that it had generated revenues of $6.5 million by 2000. According to some leading pornography providers on the Internet, customer subscription rates for a website would be about one in a thousand visitors, for monthly fees averaging around $20. Ashe said in an interview that her website employed 45 people and that she expected to earn $8 million in 2001 alone. The total number of pornographic websites in 2000 was estimated to be more than 60,000. The development of streaming sites, peer-to-peer (P2P) file sharing networks, and tube sites led to a subsequent decline in the sale of DVDs and adult magazines. Starting in the 21st century, greater access to the Internet and affordable smartphones made pornography more accessible and culturally mainstream. The total number of pornographic websites in 2012 was estimated to be around 25 million, comprising 12% of all websites. About 75 percent of households in the US had gained Internet access by 2012. Data from 2015 suggests an increase in pornography consumption over the past few decades, which is attributed to the growth of Internet pornography. Technological advancements such as digital cameras, laptops, smartphones, and Wi-Fi have democratized the production and consumption of pornography. Subscription-based service providers such as OnlyFans, founded in 2016, are becoming popular platforms for the pornography trade in the digital era. Apart from professional pornographers, content creators on such platforms include others such as a physics teacher, a race car driver, and a woman undergoing cancer treatment. In 2022, the total pornographic content accessible online was estimated to be over 10,000 terabytes. AVN and XBIZ are industry-specific organizations based in the US that provide information about the adult entertainment business. The XBIZ Awards and AVN Awards, analogous to the Golden Globes and Oscars, are the two prominent award shows of the adult entertainment industry. The Free Speech Coalition (FSC) is a trade association and the Adult Performer Advocacy Committee (APAC) is a labor union for the adult entertainment industry, both based in the US. The scholarly study of pornography, notably in cultural studies, is limited. Porn Studies, which began in 2014, is the first international peer-reviewed academic journal exclusively dedicated to the critical study of the "products and services" identified as constituting pornography.

== Classifications ==

=== Adult content classifications ===

Adult content is generally classified as either pornography or erotica.
The distinction between pornography and erotica is mostly subjective. Pornographic content is categorized as softcore or hardcore. Softcore pornography contains depictions of nudity but without explicit depiction of sexual activity. Hardcore pornography includes explicit depiction of sexual activity. Hardcore porn is more heavily regulated than softcore porn. Softcore porn was popular between the 1970s and 1990s.

==== Mainstream pornography ====

Pornography productions cater to consumers of various sexual orientations. Nonetheless, pornography featuring heterosexual acts made for heterosexual consumers comprises the bulk of what is called "mainstream porn", marking the industry as more or less "heteronormative". Mainstream pornography involves professional performers who work for various corporate film studios in their respective productions. Mainstream pornography productions are usually classified as feature or gonzo. Features involve storylines, characterizations, scripted dialog, elegant costumes, detailed sets, and soundtracks, which make the productions look similar to mainstream Hollywood productions but with depictions of explicit sexual activity included. Features include both original narratives and parodies of mainstream feature films, TV shows, celebrities, video games, or literary works. Gonzo is a form of content creation that attempts to put the viewer into the scene, commonly achieved through close-up camera work or performers talking to the audience; also called "wall-to-wall", gonzo involves some aspects of "breaking the fourth wall" between the audience and performers. The term "gonzo" is often misused as a genre label for demeaning depictions; however, gonzo is a filmmaking style and not a genre. The gonzo style is variably incorporated in the creation of all types or genres of adult content. Gonzos do not involve the expensive sets or the costly production values of features, which makes their production relatively inexpensive. Since the mid-2010s, about 95 percent of porn productions have been gonzo.

==== Indie pornography ====

Pornography productions that are independent of mainstream pornographic studios are classified as indie (or independent) pornography. These productions cater to a more specific audience, and often feature different scenarios and sexual activity compared to mainstream porn. The performers in indie porn include real-life couples and regular people, who sometimes work in partnership with other performers. Apart from content creation, the performers themselves do background work such as videography, editing, and web development, and distribute under their own brand. Paysites like Clips4Sale.com, MakeLoveNotPorn.tv, and PinkLabel.tv provide a platform for the web-based content of independent pornographers.

=== Genres ===

Pornography encompasses a wide variety of genres providing for an enormous range of consumer tastes. Most of the genres or types are named according to the depiction of sexual activity; these include: anal, creampie, cum shot, double penetration, fisting, and threesome. Categorizations based on the age of the performers include: teens, milf, and mature. Other categorizations based on gender and sexual identity include: lesbian, transsexual, queer, and shemale; while those based on race include: ethnic and interracial. Others include: Mormon and zombie. Pornography also features numerous fetishes like: "'fat' porn, amateur porn, disabled porn, porn produced by women, queer porn, BDSM and body modification."
== Commercialism ==

Pornography is commercialized mainly through the sale of pornographic films. Many adult films had theatrical releases during the 1970s, corresponding with the Golden Age of Porn. A 1970 federal study estimated that the total retail value of hardcore pornography in the United States was no more than $5 million to $10 million. The release of the VCR by Sony Corporation for the mass market in 1975 marked a shift from watching porn in adult movie theaters to watching it in the privacy of one's home. The introduction of VHS brought down production quality through the 1980s. Starting in the 1990s, the Internet eased access to pornography. The pay-per-view model enabled people to buy adult content directly from cable and satellite TV service providers. According to a Showtime television network report, in 1999 adult pay-per-view services made $367 million, more than six times the $54 million earned in 1993. Although this development resulted in a decline in rentals, the revenues generated over the Internet provided substantial financial gains for pornography producers and credit card companies, among others. By the mid-1990s, the adult film industry had agents for performers, production teams, distributors, advertisers, industry magazines, and trade associations. The introduction of home video and the World Wide Web in the late twentieth century led to global growth in the pornography business. Performers got multi-film contracts. In 1998, Forrester Research reported that the online "adult content" industry's estimated annual revenue was $750 million to $1 billion. Retail stores, or sex shops, selling adult entertainment material ranging from videos and magazines to sex toys and other products significantly contributed to the overall commercialization of pornography. Sex shops sell their products both on online shopping platforms such as Amazon and on specialized websites. In 2000, the total annual revenue from sales and rentals of pornographic material in the US was estimated to be over $4 billion. The hotel industry, through the sale of adult movies to customers as part of room service over pay-per-view channels, generated an annual income of about $180–190 million. Some of the major companies and hotel chains involved in the sale of adult films over pay-per-view platforms included AT&T, Time Warner, DirecTV (then owned by General Motors), EchoStar, Liberty Media, Marriott International, Westin, and Hilton Worldwide. The companies said their services were a response to a growing American market that wanted pornography delivered at home. Studies in 2001 put the total US annual revenue (including video, pay-per-view, Internet, and magazines) between $2.6 billion and $3.9 billion. From the mid-2000s, the emergence of tube sites led to an increase in free streaming and a decrease in traditional studio sales. Many performers turned to subscription-based platforms like OnlyFans, which continue to provide financial independence for some, but also increase market saturation, making it harder for new creators to establish themselves. Additionally, dependence on third-party platforms leaves creators vulnerable to policy changes and financial restrictions. In 2020, Visa and Mastercard implemented restrictions on processing payments for adult content due to concerns over illegal material. Additionally, the arrival of AI-generated adult content, including deepfake pornography, poses ethical and legal dilemmas.
=== Economics ===

The production and distribution of pornography are economic activities of some importance. In Europe, Budapest is regarded as the industry center. Other pornography production centers in the world are located in Florida (US), Brazil, the Czech Republic, and Japan. In the United States, the pornography industry employs about 20,000 people, including 2,000 to 3,000 performers, and is centered in the San Fernando Valley of Los Angeles, which by 2005 had become the largest pornography production center in the world. Apart from regular media coverage, the industry in the US receives considerable attention from private organizations, government agencies, and political organizations. As of 2011, pornography was becoming one of the biggest businesses in the United States. In 2014, the porn industry was believed to bring in at least $13 billion a year in the United States. Through the 2010s, many pornography production companies and top pornographic websites such as Pornhub, RedTube, and YouPorn were acquired by MindGeek, a company that has been described as "a monopoly" in the pornography business. This development was identified as a problem. According to Marina Adshade, a professor at the Vancouver School of Economics and the author of Dollars and Sex: How Economics Influences Sex and Love, the monopoly in the pornography business has forced producers to reduce their charges and radically changed the work of performers, "who are now under greater pressure to perform acts that they would have been able to refuse in the past", all at a lower price and without profits for themselves. Online pornography is available both for a fee and free of charge. The availability of free porn on the Internet has led to a decline in the business of mainstream pornography. Piracy is estimated to result in losses of some $2 billion a year for the porn industry. The budgets of many studios shrank considerably, and contracts for performers became less common. Reportedly, applications by established pornography companies for porn-shoot permits in Los Angeles County fell by 95 percent between 2012 and 2015. According to Mark Spiegler, an adult talent agent, in the early 2000s female performers made about $100,000 a year; by 2017, the figure was about $50,000. The technological era led to the decline of the studio and "the rise of the pornography worker herself". Newer ways of monetization have opened up for pornography workers who are taking the path of entrepreneurship. In 1995, Jenna Jameson signed her first contract with the porn studio Wicked Pictures. After building a brand image for herself she started her own company, ClubJenna, which by 2005 was reportedly earning annual revenue of $30–35 million. "Performers are hustlers now," said Chanel Preston (a performer who was also chairperson of the Adult Performer Advocacy Committee), noting that performers have to be creative to sustain their income and reach an audience, both of which, she said, are mainly achieved through "feature dancing, selling merchandise, webcamming", among other activities. "Custom" pornography, made according to the requests of individual clients, has emerged as a new business niche. The average career of today's performer lasts about four to six months. Before moving on to the business side, adult performers use studio work to advertise and build a brand image for themselves. They acquire an audience who will later pay for personal websites or webcam performances.
Commercial webcamming, which emerged in the 1990s as a niche sector in the adult entertainment industry, grew to become a multibillion-dollar business by the mid-2020s. The exact economic size of the porn industry in the early twenty-first century is unknown. Kassia Wosick, a sociologist from New Mexico State University, estimated the global porn market value at $97 billion in 2015, with US revenue estimated at between $10 billion and $12 billion. IBISWorld, a leading researcher of various markets and industries, projected total US revenue to reach $3.3 billion by 2020. On the basis of a research report by a market analysis firm, USA Today reported that the estimated worth of the adult entertainment industry market in 2023 was over $172 billion.

=== Technology ===

Pornographers have taken advantage of each major technological advancement for the production and distribution of their services. Pornography has been called an "erotic engine" and a driving force in the development of various media-related technologies, from the printing press through photography (still and motion) to satellite TV, home video, and streaming media. One of the world's leading anti-pornography campaigners, Gail Dines, has stated that "the demand for porn has driven the development of core cross-platform technologies for data compression, search, transmission and micro-payments." Many of the technological developments led by pornography have benefited other fields of human activity too. In the early 2000s, Wicked Pictures pushed for the adoption of the MPEG-4 file format ahead of others; it later became the most commonly used format across high-speed Internet connections. In 2009, Pink Visual became one of the first companies to license and produce content with software introduced by a small Toronto-based company called Spatial View, which later made it possible to view 3D content on iPhones. As an early adopter of innovations, the pornography industry has been cited as a crucial factor in the development and popularization of various media processing and communication technologies. From innovative smaller film cameras to VCRs and the Internet, the porn industry has employed newer technologies much earlier than other commercial industries; this early adoption provided developers with early financial capital, which aided the further development of these technologies. The success of innovative technologies has been predicted by their greater use in the porn industry. Pornographic content accounted for most videotape sales during the late 1970s. The pornography industry has been considered an influential factor in deciding format wars in media, including being a factor in the VHS vs. Betamax format war (the videotape format war) and the Blu-ray vs. HD DVD format war (the high-definition format war). Piracy, the illegal copying and distribution of material, is of great concern to the porn industry, which has been the subject of many litigations and formalized anti-piracy efforts. Many of the innovative data rendering procedures, enhanced payment systems, customer service models, and security methods developed by pornography companies have been co-opted by other mainstream businesses. Pornography companies served as the basis for a large number of innovations in web development.
Much of the IT work in porn companies is done by people referred to as "porn webmasters". Often well paid in what are small businesses, they have more freedom to test innovations than IT employees in larger organizations, which tend to be risk-averse.

==== Virtual reality pornography ====

Some pornography is produced without human actors at all. The idea of computer-generated pornography was conceived very early as one of the obvious areas of application for computer graphics. Until the late 1990s, digitally manipulated pornography could not be produced cost-effectively. In the early 2000s, it became a growing segment as modeling and animation software matured and the rendering capabilities of computers improved. Further advances in technology allowed increasingly photorealistic 3D figures to be used in interactive pornography. The first pornographic film to be shot in 3D was 3D Sex and Zen: Extreme Ecstasy, released on 14 April 2011 in Hong Kong. The various media for pornographic depiction have evolved over the course of history, from prehistoric cave paintings some forty millennia ago to futuristic virtual reality renditions. Experts in the pornography business predict that more people in the future will consume porn through virtual reality headsets, which are expected to give consumers better personal experiences than they can have in the real world. Speculation is rife about an increased presence of sex robots in future pornography productions.

== Consumption ==

Pornography is a product made by adults for consumption by adults, and its consumption has become more common due to the expansive use of the Internet. About 90% of pornography is consumed on the Internet, with consumers preferring content that is in tune with their sexuality. Pornography has been found to be a significant influence on people's ideas about sex in the digital age. Pornographic websites rank among the top 50 most visited websites worldwide. XVideos and Pornhub are the two most visited pornographic websites worldwide. Pornography consumption has been found to induce "psychological moods and emotions" similar to those evoked during actual sexual intercourse and casual sex. Researchers have identified four broad motivating factors for pornography consumption: an innate sexual drive or desire, a wish to learn about sex and improve one's own sexual performance, peer pressure or social groups, and a lack of a sexual relationship or absence of a partner. The majority of pornography consumers tend to be male, unmarried, and more highly educated. Younger people are more frequent consumers of porn than older people. There has been a gradual increase in consumption rates across different age groups with the increased availability of free porn over the Internet. Researchers at McGill University ascertained that on viewing pornographic content, men reached their maximum arousal in about 11 minutes and women in about 12 minutes. An average visit to a pornographic website lasts 11.6 minutes. Both marriage and divorce are found to be associated with lower subscription rates for adult entertainment websites. Subscriptions are more widespread in regions that have higher measures of social capital. Pornographic websites are most often visited during office hours; according to a CNBC report, seventy percent of online porn access in the US happens during nine-to-five working hours.
Sexual arousal and sexual enhancement tend to be the primary motivations among the self-reported reasons users give for their pornography consumption. Studies have found that greater levels of psychological distress lead to higher rates of pornography consumption. Pornography may provide temporary relief from stress or anxiety. A need for coping and relief from boredom is also found to result in higher consumption of pornography.

=== By gender ===

A study of Austrian adults found that men consume pornography more frequently than women. The intent behind consumption may vary: men are more likely to use pornography as a stimulant for sexual arousal during solitary sexual activity, while women are more likely to use pornography as a source of information or entertainment, and rather prefer using it together with a partner to enhance sexual stimulation during partnered sexual activity. Studies have found that sexual functioning, defined as "a person's ability to respond sexually or to experience sexual pleasure", is greater in women who consume pornography frequently than in women who do not. No such association was noticed in men. Women who consume pornography are more likely to know about their own sexual interests and desires, and in turn be willing and able to communicate them during partnered sexual activity; it has been reported that in women the ability to communicate their sexual preferences is associated with greater sexual satisfaction. Pornographic material is found to expand the sexual repertoire in women by helping them learn new rewarding sexual behaviors such as clitoral stimulation and by enhancing their overall "sexual flexibility". Women who consume pornography frequently are more easily aroused during partnered sex and are more likely to engage in oral sex than women who do not view pornography. Almost 50% of women who use pornography reported having engaged in cunnilingus, which research suggests is related to female orgasm, and such women reported experiencing orgasms more frequently than women who do not use pornography (87% vs. 64%). Most people probably do not consider pornography use by a partner to be infidelity.

=== By education level ===

A two-year survey (2018–2020), conducted to assess the role of pornography in the lives of highly educated medical university students in Germany with a median age of 24, found that pornography served as an inspiration for many students in their sex lives. Pornography use among students was higher in males than in females; among the male students, those who had not cheated on their partner or contracted an STI were found to be more frequent consumers of pornography. Although pornography use was more common among men, associations between pornography use and sexuality were more apparent in women. Among the female students, those who reported being satisfied with their physical appearance consumed three times as much pornography as those who reported being dissatisfied with their bodies. A feeling of physical inadequacy was found to be a restraining factor in the consumption of pornography. Female students who consumed pornography more often reported having had multiple sexual partners. Both female and male students who had enjoyed the experience of anal intercourse were reported to be frequent consumers of pornography. Sexual content depicting bondage, domination, or violence was consumed by only a minority of 10%.
More sexual openness and less sexual anxiety were observed in students who regularly consumed pornography. No association was noticed between regular pornography use and sexual dissatisfaction in either female or male students. This finding was in concurrence with a finding from a longitudinal study, which demonstrated that most pornography consumers differentiate pornographic sex from real partnered sex and do not experience diminishing satisfaction with their sex lives.

=== By region ===

A vast majority of men and a considerable number of women in the US use porn. A 2008 study of university students aged 18 to 26 at six college sites across the United States found that 67% of young men and 49% of young women approved of pornography viewing, with nearly 9 out of 10 men (87%) and 31% of women reportedly using pornography. The Huffington Post reported in 2013 that porn websites registered more visitors than Netflix, Amazon, and Twitter combined. A 2014 poll, which asked Americans when they had "last intentionally looked at pornography", found that 46% of men and 16% of women in the 18–39 age group had done so in the past week. A 2016 study reported that about 70% of men and 34% of women in romantic relationships use pornography annually. Gallup polls conducted from 2011 to 2018 noted a gradual increase in the acceptance of pornography among the general American public. Since the late 1960s, attitudes towards pornography have become more positive in Nordic countries; in Sweden and Finland the consumption of pornography has increased over the years. A 2006 study of Norwegian adults found that over 80% of respondents had used pornography at some point in their lives, with a difference of 20% between men and women in their respective use. A 2015 study in Finland noted that 75% of women and over 90% of men aged 30–40 found porn "very exciting". Regarding porn use during the past year, the figures were 71% of women aged 18–24, almost 60% of women aged 18–49, and a tenth of women over 65; among men, the figures were above 90% for men under 50, three quarters of those aged 18–64, and most of those over 65. The numbers were increasing quickly, particularly for women, partially due to increased masturbation. In 2012 and 2013, interviews with a large number of Australians revealed that in the past year 63% of men and 20% of women had viewed pornography. A 2020 Egyptian study surveying 15,027 individuals in Arab countries noted a prevalence of pornography use "nearly similar to Danish, German, and American ones". In 2021, it was estimated that in modern countries, 46–74% of men and 16–41% of women are regular users of pornography. In 2022, a national survey in Japan of men and women aged 20 to 69 revealed that 76% of men and 29% of women had used pornography as part of their sexual activity. A 2023 study reported that in the Netherlands, the share of young men who had watched porn in the previous six months ranged from 65% (ages 13–15) to 96% (ages 22–24), and among young women from 22% (ages 13–15) to 75% (ages 22–24).

== Legality and regulations ==

The legal status of pornography varies widely from country to country. Regulating hardcore pornography is more common than regulating softcore pornography. Child pornography is illegal in almost all countries, and some countries have restrictions on rape pornography and zoophilic pornography.
Pornography in the United States is legal provided it does not depict minors and is not obscene. Community standards, as indicated in the Supreme Court's decision in the 1973 case Miller v. California, determine what constitutes "obscene". The US courts do not have jurisdiction over content produced in other countries, but anyone distributing it in the US is liable to prosecution under the same community standards. As the courts consider community standards foremost in deciding any obscenity charge, the changing nature of community standards over time and place keeps instances of prosecution limited. In the United States, a person receiving unwanted commercial mail that he or she deems pornographic (or otherwise offensive) may obtain a Prohibitory Order. Many online sites merely require users to state that they are of a certain age, with no other age verification required. A total of 16 states and the Republican Party have passed resolutions declaring pornography a "public health" threat. These resolutions are symbolic and do not impose any restrictions but are intended to sway public opinion on pornography. The notion of pornography as a threat to public health is not supported by any international health organization. Adult film industry regulations in California require that all performers in pornographic films use condoms. However, the use of condoms in pornography is rare; as porn does better financially when actors do not use condoms, many companies film in other states. Twitter is a popular social media platform among porn industry performers because, unlike Instagram and Facebook, it does not censor such content. Canada, like the US, criminalizes the "production, distribution, or possession" of materials that are deemed obscene. Obscenity, in the Canadian context, is defined as "the undue exploitation of sex" when it is connected to images of "crime, horror, cruelty, or violence". What is considered "undue" is decided by the courts, which assess community standards in deciding whether exposure to the given material may result in any harm, with harm defined as "predisposing people to act in an anti-social manner". Pornography law in the United Kingdom does not rely on a concept of community standards. Following the highly publicized murder of Jane Longhurst, the UK government in 2009 criminalized the possession of what it terms "extreme pornography". The courts decide whether any material is legally extreme or not; penalties on conviction include fines or incarceration for up to three years. Banned content includes representations that are considered "grossly offensive, disgusting, or otherwise of an obscene character". While there are no restrictions on the depiction of male ejaculation, any depiction of female ejaculation in pornography is completely banned in the UK, as well as in Australia. In most of Southeast Asia, the Middle East, and China, the production, distribution, or possession of pornography is illegal and outlawed. In Russia and Ukraine, webcam modeling is allowed provided it contains no explicit performances; in other parts of the world commercial webcamming is banned as a form of pornography. Disseminating pornography to a minor is generally illegal. There are various measures to restrict minors' access to pornography, including protocols for pornographic stores. Pornography can infringe on the basic human rights of those involved, especially when sexual consent was not obtained.
Revenge porn is a phenomenon in which disgruntled sexual partners release images or video footage of the intimate sexual activity of their partners, usually on the Internet, without the authorization or consent of the individuals involved. In many countries there has been a demand to make such activities specifically illegal, carrying higher punishments than mere breach of privacy, image rights, or circulation of prurient material. As a result, some jurisdictions have enacted specific laws against "revenge porn".

=== What is not pornography ===

In the US, a July 2014 Massachusetts criminal case decision, Commonwealth v. Rex, 469 Mass. 36 (2014), made a legal determination as to what was not to be considered "pornography", and in this particular case "child pornography". It was determined that photographs of naked children from sources such as National Geographic magazine, a sociology textbook, and a nudist catalog were not considered pornography in Massachusetts, even while in the possession of a convicted and (at the time) incarcerated sex offender. Drawing the line depends on time, place, and context. Mainstream Western culture has been increasingly "pornified" (i.e. influenced by pornographic themes, with mainstream films often including unsimulated sexual acts). Since the very definition of pornography is subjective, material that is considered erotic or even religious in one society may be denounced as pornography in another. When European travellers visited India in the 19th century, they were dismayed by the religious representation of sexuality on Hindu temples and deemed it pornographic. Similarly, many films and television programs that are unobjectionable in contemporary Western societies are labeled as "pornography" in Muslim societies. Thus, assessing material as pornography is very much personalized; to rehash a cliché, "pornography is very much in the eye of the beholder".

=== Copyright status ===

In the United States, some courts have applied US copyright protection to pornographic materials. Some courts have held that copyright protection applies to works whether they are obscene or not, but not all courts have ruled the same way. The copyright status of pornography in the United States was challenged again as recently as February 2012.

== STI prevention and safer sex practices ==

Performers working for pornographic film studios undergo testing for sexually transmitted infections (STIs) every two weeks. They have to test negative for HIV, trichomoniasis, chlamydia, gonorrhea, syphilis, and hepatitis B and C before showing up on a set, and are then inspected for sores on their mouths, hands, and genitals before commencing work. The industry believes this method of testing to be a viable practice for safer sex, as its medical consultants claim that since 2004, about 350,000 pornographic scenes have been filmed without condoms and HIV has not been transmitted even once through on-set performance. However, some studies suggest that adult film performers have high rates of chlamydia or gonorrhea infection, and many of these cases may be missed by industry screening because these bacteria can colonize many sites on the body. In the initial years, studios assessed performers' suitability based on the results of their blood and urine tests. According to a 2019 study by the American College of Emergency Physicians, swab tests offer better insight than urine samples for detecting bacterial STIs like chlamydia and gonorrhea.
Performers such as Cherie DeVille have emphasized swab tests for safer sex. According to performer Angela White, studios will not allow performers to work unless they are completely clean and insist on regular testing. She said, "So for me, because I work so much, I'm testing every 12 days – and that is a full sweep of STIs such as chlamydia, gonorrhoea, syphilis, HIV and trichomoniasis. We're doing throat swabs, vaginal swabs and anal swabs." Allan Ronald, a Canadian doctor and HIV/AIDS specialist who did groundbreaking studies on the transmission of STIs among prostitutes in Africa, said there is no doubt about the efficiency of the testing method, but he felt a little uncomfortable: "because it's giving the wrong message — that you can have multiple sex partners without condoms — but I can't say it doesn't work." Relatedly, it has been found that individuals who received little sex education or who perceive pornography as a source of information about sex are less apt to use condoms in their own sex lives, making themselves more susceptible to contracting STIs. In 2020, the US National Sex Education Standards released recommendations to incorporate "porn literacy" for students from grades 6 to 12 as part of sex education in the US. Veteran performer Nina Hartley, a former nurse with a degree in nursing, stated that the amount of time involved in shooting a scene can be very long, and with condoms in place it becomes a painful proposition, as their usage is uncomfortable despite the use of lube, causes friction burn, and opens up lesions in the genital mucosa. Advocating the testing method for performers, Hartley said, "Testing works for us, and condoms work for outsiders."

"We're tested every fourteen days. That is literally twenty-three more times than the average American. If that person makes it to their yearly physical. I have met tons of people that haven't been to the doctor in years. That scares me because they have no idea what their status is.... I don't hook up with people outside of the porn industry because I'm terrified. And I'm not the only one. There's many performers that know: if you go out into the wild, you will come back with something." — Ash Hollywood (porn actress).

Emphasizing that performers in the industry take necessary precautions like PrEP and are at lower risk of contracting HIV than most sexually active persons outside the industry, many prominent female performers have vehemently opposed regulatory measures like Measure B that sought to make the use of condoms mandatory in pornographic films. Professional female performers have called the daily use of condoms at work an occupational hazard, as they cause micro-tears, friction burn, swelling, and yeast infections, which altogether, they say, makes them more susceptible to contracting STIs.

== Views on pornography ==

Pornography has been said to provide a safe outlet for sexual desires that may not be satisfied within relationships and to facilitate sexual fulfillment in people who cannot or do not want to have real-life partners. People view pornography for various reasons: to enrich their sexual arousal, facilitate orgasm, aid masturbation, learn about sexual techniques, reduce stress, alleviate boredom, enjoy themselves, see representations of people like themselves, understand their sexual orientation, improve their romantic relationships, or simply because their partner wants them to.
Pornography is noted for engrossing people "on more than masturbatory levels". Aesthetic philosophers debate whether pornographic representations can be considered expressions of art. Pornography has been equated with journalism, as both offer a view into the unknown or the hidden. French philosopher Michel Foucault remarked that "it is in pornography that we find information about the hidden, the forbidden and the taboo". Scholars such as Linda Williams, Jennifer Nash, and Tim Dean believe pornography "is a form of thinking", comprising ideas that are far more reflective about sexuality and gender than its creators or consumers intend. Pornography has been described by people as a means to explore their sexuality, and people have reported porn being helpful in learning about human sexuality in general. Studies recommend that clinical practitioners use pornography as an instructional aid to show clients new and alternative sexual behaviors as part of psychosexual therapy. British psychologist Oliver James, known for his work on happiness, stated that "a high proportion of men use porn as a distraction or to reduce stress ... It serves an anti-depressant purpose for the unhappy." British-American novelist Salman Rushdie opined that pornography's presence in society is "a kind of standard-bearer for freedom, even civilisation". According to evaluations by medical professionals, pornography is neither good nor bad in itself, as it does not endorse or advocate a single set of values regarding sex; as such, individuals may examine their own values regarding sex while evaluating pornography. The relationship between pornography and its audience is found to be complex. While many users reported their use to have had positive effects, others, especially women, were found to be troubled by body image issues, attributed to the unrealistic image of "beauty" that pornography portrays. The increasing prevalence of purported beauty-enhancing procedures such as breast augmentation and labiaplasty among the general population has been attributed to the popularity of pornography. Data from pornographic websites on people's viewing habits is studied by academics to analyze sexual preferences and mating choices. Men more often look for women with a larger chest and hips and a smaller waist-to-hip ratio. Women are found to prefer men who are taller, stronger, appear highly masculine, and are in roles that can provide resources while being protective (CEO, doctor, athlete, law enforcement). Studies on the harmful effects of pornography include attempts to find any potential influence of pornography on rape, domestic violence, sexual dysfunction, difficulties with sexual relationships, and child sexual abuse. A longitudinal study ascertained that pornography use is not a perpetrating factor in intimate partner violence. A 2020 study that analyzed depictions in video pornography found that normative sexual behaviors (e.g., vaginal intercourse, fellatio) were the most commonly depicted, while depictions of extreme acts of violence and rape were very rare. There is no clear evidence that pornography is a cause of rape. Several studies conclude that the liberalization of porn in society may be associated with decreased rates of rape and sexual violence, while others suggest no effect, or are inconclusive. No correlation has been found between pornography use and the practice of sexual consent or lack thereof.
Mental health experts are divided over whether pornography use is a problem for people. While some literature reviews suggest pornography use can be addictive, insufficient evidence exists to draw conclusions. According to clinical psychologist and certified sex therapist David Ley, calling pornography an "addiction" has been "an area of substantial, protracted controversy and debate". Ley explained that pornography does not affect an adult brain or body the way alcohol or drugs do; he said, "An alcoholic going cold turkey can have seizures and die because their brain has become physiologically dependent on the alcohol, but no one has ever had seizures or died from not getting to watch porn when they want to." Scholars note that pornography use has no implications for public health, as it does not meet the definition of a public health crisis. Neuroscientists have noted that young minds are still developing and that exposure to emotionally charged material such as pornography is likely to affect them differently than adults, and have suggested caution in enabling potential access to such material. Opposition to pornography use is associated with sexual satisfaction, gender violence, and marital quality (wives watching pornography more frequently scored much better than the rest). Some issues of doxing and revenge porn have been linked to a few pornography websites. Since the mid-2010s, deepfake pornography has become an issue of concern.

=== Feminist outlook ===

Feminist movements in the late 1970s and 1980s dealt with the issues of pornography and sexuality in debates that are referred to as the "sex wars". While some feminist groups seek to abolish pornography, believing it to be harmful, other feminist groups oppose censorship efforts, insisting it is benign. A large-scale study of data from the General Social Survey (2010–2018) refuted the argument that pornography is inherently anti-woman or anti-feminist and that it drives sexism. The study did not find a relationship between "pornography viewing" or "pornography tolerance" and higher sexism, a position held by some feminists; it instead found higher pornography consumption and pornography tolerance among men to be associated with their greater support for gender equality. The study concluded that "pornography is more likely to be about the sex rather than the sexism". People who supported regulated pornography expressed less sexist attitudes than people who sought to abolish pornography. Notably, non-feminists were found to be more likely to support a ban on pornography than feminists. Many feminists, both male and female, have reflected that the effects of pornography on society are neutral. Adult users of pornography were found to be more egalitarian than nonusers; they are more likely to hold favorable attitudes towards women in positions of power and in workplaces outside the home.

==== Criticism ====

A 2016 study authored by Black feminists criticized the American adult entertainment industry for the alleged omission and exclusion of Black women in pornographic representations, particularly in the interracial genres. As pornography becomes a kind of manual for how bodies in pleasure can look, and is "one of the few places where we see our bodies, and other people's bodies," it becomes imperative for pornography to represent a "variety of forms", stated the feminist scholars. Anti-pornography feminists argue that the aesthetics of pornography demote Black women with undertones of racism.
Gender studies scholars Mireille Miller-Young and Jennifer Christine Nash, in their writings on the intersectionality of race and pornography, noted that Black people have been depicted as hypersexual, and Black women as more objectified. The scholars also noted major discrepancies in the pay rates of performers: White women have historically made 75 percent more per scene, and sometimes still make 50 percent more, than Black women. Feminist resentment about pornography tends to focus on two concerns: that pornography depicts violence and aggression, and that pornography objectifies women. Multiple analyses of pornographic videos found that women have been overwhelmingly on the receiving end of aggression from male performers, with the women's reaction to aggression being either positive or neutral, which is at odds with a report that found only 14.2% of US adult women find pain during sex appealing. Two studies in the 1990s found that Black women were the targets of aggression and faced more violence from both Black and White men than did White women. However, more recent research from 2018 found that Black women were the least likely group of women to suffer nonconsensual aggression and were more likely to receive affection from their male partners. Black men engaged in fewer intimate behaviors than White men, and White women were found more likely to experience violence during sexual activity with White men than with Black men. Concerning Asian women, a 2016 study based on a sample of 3,053 videos from Xvideos.com found that in the 170 videos of the Asian women category there was much less aggression and less objectification, but the women also had less agency. However, another study found that in a sample of 172 videos from Pornhub, the 25+ videos of the Asian/Japanese category had considerably more aggression than those of other categories. A 2002 study of "internet rape sites" found that among the 56 clear pictures identified, 34 featured Asian women, and nearly half the sites had either an image of or a text reference to an Asian woman. Findings on depictions of Asian women in pornography are not consistent in the scientific literature. The prevalence of aggression in pornography appears to be changing. A 2018 study of popular videos on Pornhub found that segments depicting aggression towards women are now fewer and have declined gradually over the past decade, with viewers preferring content in which women genuinely experience pleasure.

==== Anti-pornography ====

Prominent anti-pornography feminists such as Andrea Dworkin and Catharine MacKinnon argue that all pornography is demeaning to women, or that it contributes to violence against women, both in its production and in its consumption. The production of pornography, they argue, entails the physical, psychological, or economic coercion of the women who perform in it. They charge that pornography eroticizes the domination, humiliation, and coercion of women, while reinforcing sexual and cultural attitudes that are complicit in rape and sexual harassment. Other sex work exclusionary feminists have insisted that pornography presents a severely distorted image of sexual consent and reinforces sexual myths, such as that women are readily available, desire to engage in sex at any time, with any man, on men's terms, and always respond positively to men's advances.
==== Pro-pornography ====

In contrast to these objections, other feminist scholars "ranging from Betty Friedan and Kate Millett to Karen DeCrow, Wendy Kaminer and Jamaica Kincaid" have supported the right to consume pornography. The anti-porn feminist stranglehold began to loosen when sex-positive feminists like Susie Bright and performers Nina Hartley and Candida Royalle affirmed the rights of women to consume and produce porn. The works of Camille Paglia argued that Westerners have long been "pagan celebrants" and that pornography has been an inseparable part of Western culture. Wendy McElroy has noted that feminism and pornography are mutually related, with both thriving in environments of tolerance and both repressed whenever regulations are placed on sexual expression. Societies where pornography and sexual expression are prohibited are more likely to be places where women are often subjected to violence and sexual abuse.

==== Rise of feminist pornography ====

The lesbian feminist movement of the 1980s is considered a seminal moment for women in the porn industry, as more women entered the production side, allowing porn to be geared more towards women, since these women knew what women wanted, both from the perspective of actresses and of the female audience. This involved making lesbian pornography that is not merely geared towards heterosexual males, a change considered positive, as for a long time the porn industry had been directed by men for men. Furthermore, the advent of the VCR, home video, and affordable video cameras allowed for the possibility of feminist pornography. Feminist porn directors are interested in challenging representations of men and women, as well as in providing sexually empowering imagery that features many kinds of bodies. Angela White started her own production company, AWG Entertainment, in which she has complete creative control over the content, from her partners to the location, costumes, and the "vibe" of the video. "I am a feminist, so what I create is feminist, and I produce ethical porn, which is when everything is consensual", she said. Women are more likely to consume porn that is "female-centered" and features acts such as cunnilingus. A study of pornographic videos found that when men spend more time performing cunnilingus they have higher volumes of ejaculate; an increase in sexual arousal resulting from exposure during cunnilingus to the vaginal secretions known as copulins is reasoned to be the cause. Female-centric porn is mostly made by women; in these works, sexual activity is initiated by the female. Porn for women is identified by factors like greater attention to "sensual surroundings" and "soft focus camerawork", rather than explicit depiction of sexual activity, making the productions warmer and more humane than traditional porn made for heterosexual men.

"If feminists define pornography, per se, as the enemy, the result will be to make a lot of women ashamed of their sexual feelings and afraid to be honest about them. And the last thing women need is more sexual shame, guilt, and hypocrisy—this time served up by feminism" — Ellen Willis.

==== Pay rates ====

The porn industry has been noted for being one of the few industries where women enjoy a power advantage in the workplace. "Actresses have the power," Alec Metro, one of the men in line, ruefully noticed of the X-rated industry.
A former firefighter who claimed to have lost a bid for a job to affirmative action, Metro was already divining that porn might not be the ideal career choice for escaping the forces of what he called "reverse discrimination". Female performers can often dictate which male actors they will and will not work with. Porn, at least porn produced for a heterosexual audience, is one of the few contemporary occupations where the pay gap operates in favor of women. The average actress makes fifty to a hundred percent more money than her male counterpart.

=== Psychological perspective ===

Psychologists consider pornography to be of particular relevance in the study of intimate relationships and the development of adolescent sexuality. Mainstream psychology is mostly concerned with studying the effects of pornography, while critical psychology and applied psychology engage in a more nuanced and academic study of pornography. Problematic pornography use is assessed in clinical psychology. A 2013 study refuted the notion that porn actresses have higher rates of psychological problems than other women. The study compared 177 porn actresses with women of similar age, ethnicity, and marital status, and found that the porn actresses had "higher levels of self-esteem, positive feelings, social support, sexual satisfaction, and spirituality" than the comparison group.

==== Psychoanalyst views ====

In analytical psychology, human sexual instincts and religious-spiritual instincts are considered tightly associated with each other, with both sharing a common instinctual objective, which, as Carl Jung acknowledged, is the striving of the psyche for "wholeness". The psyche of a person is understood to be differentiated, being made up of psychological traits that are feminine and masculine in nature. According to Jung, this differentiation allowed the formation of opposite polarities, which made "consciousness possible". According to psychologist and author Giorgio Tricarico, as an individual moves through various life experiences, their psyche approaches wholeness, or the "non-differentiated" state, a realm of unified, impersonal (it) consciousness considered to belong to the sacred or divine. In the Hindu cosmological view, the universe is woven from two coexisting generative principles: feminine and masculine. Men and women, in whatever form they appear, are considered microcosmic representatives, or mirrors, of the macrocosmic energies of Shiva and Shakti. The Goddess, or Shakti (spirit, the feminine principle), is "pure consciousness", the animating energy or power underlying existence. Shakti, embraced by Shiva (matter, the masculine principle), together in "perpetual union" form the nondual "Absolute". Shiva and Shakti in a state of "copulation" represent the flow of "erotic energy" in an individual. Sigmund Freud called the feminine Shakti "libido that cannot be simply repressed." Self-realization, or becoming aware of one's "'deep' femininity", entails dealing with this powerful sexual energy. Schools of religious thought such as Kundalini yoga and Tantra, which involve using sexual energy for the purpose of self-realization, were developed in India.
The concept of gender has long been studied in Hinduism, where the conclusion was reached that the human self, an emanation from the nondual Absolute, is androgynous and is encased in human bodies, which are themselves androgynous in the sense that every body is a compound of the feminine (spirit) and the masculine (matter) principles. Sexuality is considered a creative function of nature to align the two principles with the nondual Absolute. The masculine and the feminine principles of the self have been identified with the deities Shiva and Shakti, who make up the two sexual polarities; by establishing a connection between the two for the flow of erotic energy (as in the case of an electric circuit between positive and negative terminals for the flow of electric current) in one's own being—by means of sexual stimulation, through "erotic visualization" or "ritual copulation"—the self would "divest" itself of its body identity and realign into the "bipolar being", which then represents a unit microcosm mirroring the nondual macrocosm; thus an individual, in being one with the Absolute, experiences bliss, considered to be the power of the goddess (Shakti) in a tangible form. The Hindu tantric idea of feminine and masculine principles arriving at unity in the "divine feminine" or the "unified divine consciousness" is analogous to the analytical psychology idea of coincidentia oppositorum, which expounds the unity of opposites. In the Hindu tantric view, the women who participate in union rituals, thereby enabling men to attain self-realization, are regarded as shakti or the goddess, as they are believed to embody her. Recognition of the deity in an objective woman is centered upon a man's acceptance of the subjective feminine and the primacy of her desires. Tricarico professed that modern-day pornography in its essence is a "desacralised, technological, and consumerist" equivalent of ancient sacred prostitution – a custom that involved honoring the sacred feminine and worshipping the prostitutes as the goddess. The feminine is believed to embody particular qualities of the sacred or divine more broadly and deeply than the masculine; consequently, in women the ability to incorporate nondualistic awareness is assumed to be higher. Tricarico argued that women in porn, through their performances of many sexual acts, would inadvertently approach the non-differentiated state, an effect which he called the "intimation of hierophany". "Porn actresses may embody the medium to enter what used to be the realm of the sacred", he said. The actresses have been likened to the "descendants of the lost goddesses" who are now offering the gift of the "numinous" to all through their performances, but are unacknowledged or devalued for their contributions. The use of epithets such as "bitch", "whore", and "slut" for sexually active women has been attributed to the denial of the subjective feminine by men. The subdued acceptance of female sexuality as a value in its own right is manifested when a man's admiration for the "bitch" is withdrawn if she happens to be his wife or girlfriend. Along with showing "admiration, lust, gratitude, and desire", men show brazen hate and disgust towards women; this behavioral dichotomy has been ascribed to the "patriarchal hypocrisy" embedded in men. The unconscious perception by men of women's greater ability to reach the undifferentiated state of the psyche is reasoned to be a cause of their intentional humiliation, wilful devaluation, and deliberate belittlement of women. 
According to the psychoanalytic scholar Julia Kristeva, the psychological rejection and fear of the mother figure in males are the root cause of behaviors that seek to subjugate women. Men in their infancy live in a state of "undifferentiated physical and psychic fusion" with the mother, experiencing "emotional exhilaration and jouissance". However, as they mature, sensing their separateness from the mother, they seek to become independent subjects and take recourse to paternal images and patriarchal behaviors in the hope of eliminating any possible further "undifferentiated/psychotic fusion" with the mother, as they feel threatened by it. According to psychoanalyst Melanie Klein, the rejection and fear of the maternal image in females leads them to reject their own femininity. Tricarico hoped that porn would become a place where men discard patriarchal antics, women embrace the sacred aspects, and audiences incorporate porn as a joyful experience for the body – a genuine form of play that helps them approach the non-differentiated state. In being with an other, we differentiate ourselves and experience jouissance whilst becoming a unit being. === Religious attitudes === Many religions have long and vehemently opposed a wide range of sexual behaviors; as a result, religious people are found to be highly susceptible to experiencing great distress in their use of pornography. Religious people who use pornography tend to feel sexually ashamed. Sexual shame—which arises from a person's perception of their self in other people's minds, and from a negative assessment of their own sexuality—is considered a powerful factor that over time governs an individual's behavior. As sexuality is interwoven into one's personal identity, sexual shame or sexual embarrassment is found to attack the person's very sense of self. When a sexual shaming event occurs, the person attributes causation to themselves, resulting in self-condemnation, and experiences feelings of sadness, loneliness, anger, unworthiness, and rejection, along with a perceived judgment of their self by others. In this mental landscape, a fear arises that one's sexual self needs to be hidden. This psychological process initiates and fuels further shame and lowers one's self-esteem. Sexual shame constricts the "psychic space for free play with one's sexuality". Sexual shame in people begets more shame, and leads to a cycle of powerlessness culminating in deepening negative emotions. Those who tend to feel shame easily are found to be at greater risk for depression and anxiety disorders. According to clinical psychologist Gershen Kaufman, all sexual disorders are primarily "disorders of shame". The cause of attributing shame to sexuality is traced back to the biblical interpretation of nakedness being shameful. Much of Christian mythology presented sexuality as an obstacle to be surmounted on the way to salvation. The major Abrahamic religions condemn all forms of nonmarital and nonreproductive sexual pleasure and consider them unacceptable. In Hinduism, bhoga (sexual pleasure) is celebrated as a value in itself and is considered one of the two ways to nirvana, the other being the more demanding yoga. In the Hindu tantric view, watching coitus as an act of Shiva and Shakti is believed to unfurl the Kundalini, and is considered equivalent to one's engaging in maithuna or the fifth M of the panchamakara. 
A central concept in Hinduism, purushartha, advocates pursuit of the four main goals for happiness: dharma (virtue), artha (riches), kama (pleasure), and moksha (freedom). The pursuit of Kama was elaborated by the sage Vatsyayana in his treatise Kama Sutra, which states that sexual pleasure and food are essential for the well-being of the body, and on both of them depend virtue and prosperity. Food, despite sometimes causing indigestion, is still consumed regularly, and so it must be with pleasure, which must be pursued with caution while eliminating unwanted or harmful effects. Just as no one abstains from cooking food for fear of beggars who may ask for it, or refrains from sowing wheat for fear of animals that may destroy the crop, Vatsyayana instructs that men and women should acquire knowledge of Kama by the time they reach youth and pursue it even though dangers exist; and those who become accomplished in Dharma, Artha, and Kama will attain the highest happiness in this world and hereafter. According to the Buddha, happiness is of two types: one derived from "domestic life" and the other from "monastic life", and between the two, the monastic kind is "superior". As a result of the Buddha's effective advocacy for monasticism, in Buddhist communities marriage and divorce remained civil matters and never acquired sacramental significance. Counsel over the sex life of householders was minimal, while for the monks it was extensive, as in the Vinaya, since all sexual behaviors were meant to be suppressed for the sake of enlightenment. The early Buddhist texts castigated women as detrimental beings. The Buddha himself often said that a woman's body is "a vessel of impurity, full of stinking filth. It is like a rotten pit ... like a toilet, with nine holes pouring all sorts of filth." Once, when it came to his notice that a monk, Suddina, had transgressed celibacy with his wife for the sake of progeny, the Buddha chided him, saying, "It were better for you, foolish man, that your male organ should enter the mouth of a terrible and poisonous snake, than that it should enter a woman." Per the Buddha, all sexual desires are incompatible with enlightenment. In Buddhism, people who even derive pleasure from watching others engage in sexual activity were classed as pandaka (pusillanimous). The Buddha said sexuality is a fetter that must be evaded completely and that men who engage with it are "impure" and will not be freed from "old age". After the Buddha died in old age, subsequent generations of Buddhists resolved their problematic attitudes towards sex by accommodating different views. According to Indonesia's foremost Islamic preacher, Abdullah Gymnastiar, shame is a noble emotion commanded in the Quran and was held in high regard by Muhammad, who has been quoted as saying, "Faith is compiled of seventy branches... and shame is one of them." To cultivate shame in Muslims, their sexual gaze needs to be checked, as an unchecked gaze is believed to be the door through which Satan enters and soils the heart. In 2006, when anti-pornography protests erupted in Indonesia, the world's most populous Muslim-majority country, over the publication of the inaugural Indonesian edition of Playboy, Abdullah called for legislation to ban pornography and embarked on a mission to shroud the state with a sense of shame, coining the slogan "the more shameful, the more faithful". 
During these protests, Indonesia's foremost Islamic newspaper, Republika, published daily front-page editorials featuring a logo of the word pornografi crossed out with a red X. The Jakarta office of Playboy Indonesia was ransacked by members of the Islamic Defenders Front (Front Pembela Islam or FPI), and bookstore owners were threatened not to sell any issue of the magazine. Consequently, in December 2008, Indonesian lawmakers passed an anti-pornography bill into law with overwhelming political support. Highly religious people are more likely to support policies against pornography such as censorship. Ironically, regions with highly religious and conservative people were found to search for more pornography online. Religious people are prone to obsessive thoughts regarding sin and punishment by God over their pornography use, causing them to feel ashamed, to perceive themselves as having a pornography addiction, and to suffer from OCD-related symptoms. A study of sexually active religious people found that those who are spiritually mature have less shame, while those who are not have more. == See also == == Notes == == References == === Citations === === Works cited === == External links == The dictionary definition of pornography at Wiktionary Quotations related to Pornography at Wikiquote
Wikipedia/Pornography
The conservation and restoration of photographic plates is the practice of caring for and maintaining photographic plates to preserve their materials and content. The practice includes the measures that can be taken by conservators, curators, collection managers, and other professionals to conserve the material unique to photographic plate processes. This practice includes understanding the composition and agents of deterioration of photographic plates, as well as the preventive and interventive conservation measures that can be taken to increase a photographic image's longevity. == History == === Composition === In general, black and white photographic negatives are made up of fine silver particles (or color dyes for color negatives), which are embedded in a thin layer called a binder; the two together comprise the emulsion. This emulsion layer sits upon what is called the support, which can be paper, metal, film, or, as in the case of photographic plates, glass. Before exposure, a photographic plate consists of a photosensitive substance layered on a support medium. Glass plates emerged as a common support medium for photographic negatives from the mid-nineteenth century to the 1920s. Depending upon the period, there can be variants to the binder and, thus, the chemistry of the image. In the case of the wet plate collodion process, the image is run under a wash bath to stop development after exposure. An important part of the photographic process, "fixing", is then used to wash away the silver halide that is not part of the image, producing a stable negative image. The fix bath ensures that the remaining silver halide crystals are no longer sensitive to additional light exposure, removing all excess. This negative image can then be used over many years to produce paper positives. It is important for the conservator to understand the chemistry, in order to prevent further chemical reactions. === Processes === Collodion glass plate negative: This process was invented by the Englishman Frederick Scott Archer in 1851. While the first process to take advantage of glass plates was the albumen print method, it was quite laborious and was quickly surpassed by the collodion glass plate negative in common use. The collodion photographic process was a wet plate process, which meant that the glass plate itself had to be wet while it was exposed and throughout processing. This required a portable darkroom to be taken wherever a photographer went, in order to produce a negative image successfully. During the process, the collodion emulsion was poured onto a glass plate before being exposed. The glass plate was then developed, fixed, washed, and protected with a varnish. Collodion is cellulose nitrate (cotton treated with nitric acid) dissolved in ether and alcohol. Because collodion was both complex and dangerous to produce, it was often purchased by the photographer. Once dissolved, iodide was added. Over time, bromide was added to help the image be more sensitive to light. Sometimes albumen (made of egg whites) was used to help the collodion stick to the glass plate. Pyrogallic acid or ferrous sulfate was often used to develop the latent image, then sodium thiosulfate (also known as hypo) or potassium cyanide was used to fix the image. Gelatin dry plate negative: This process was invented by Richard Leach Maddox in 1871, but it was not commonly used until 1879, when the process became commercially successful. Because of this process's advances in photography, it soon replaced the wet plate process in the 1880s. 
The collodion binder formerly used was replaced by gelatin, which already contained light-sensitive silver salts. This meant the emulsion was already present and did not have to be painted on the glass plate right before exposure – which now took less than one second. Because of this advancement, photographers did not have to carry a portable darkroom, as the plate could be developed later. To make the gelatin dry plate, the glass was cleaned, polished, and treated to ensure the gelatin would adhere to the glass plate. Treatments included applying thin layers of gelatin, albumen, or chemical etching. After 1879, when further improvements were made to the gelatin emulsion, gelatin glass plates began being mass-produced by companies such as Wratten & Wainright, Keystone Dry Plate Works, and notably the Eastman Dry Plate Company. This led to the advanced use and production of photographic glass plates until around 1925 and marked the start of the development of modern photography as an industry. Screen Plate: The Screen Plate process is also known as Autochrome Lumière and was invented in France in 1907 by Louis and Auguste Lumière. Screen plates were an additive color screen process considered the first successful color process for commercial photography. The Lumière brothers drew upon the color theories of James Clerk Maxwell and Louis Ducos de Hauron from the second half of the 19th century. Autochrome plates were "covered in microscopic red, green, and blue colored potato starch grains". These grains, before being placed on the glass plate, were sorted through sieves to break them down to "thousandths of a millimeter" in diameter. Once broken down in size, they were separated into groups, then dyed either red, violet, or green. The grains were mixed and then spread over a glass plate covered with a tacky varnish. A second varnish was then applied over the layer of starch grains. The second coating of varnish was a hydrophobic layer composed of castor oil, cellulose nitrate, and dammar resin. Ambrotype: The Ambrotype process resembled the Wet Plate Collodion considerably in composition and creation, and was considered to be a "Collodion Positive". In 1850, Louis Désiré Blanquart-Evrard realized that the negative appeared as a positive by underexposing the image and placing the image against a dark background. Dark backgrounds—created with paints, fabrics, and papers—were used to achieve this effect. In some instances, bleach was used, after the image was developed, to yield a softer appearance. In this process, the chemical composition and fix bath are critical elements to the lifespan of the image, but the material that backs the glass plate may also cause deterioration. == Agents of deterioration == There are ten accepted agents of deterioration: dissociation, fire, incorrect relative humidity, incorrect temperature, light, pests, pollutants, physical forces, thieves, and water. Photographic plates face risks of damage from both external forces and from their own chemical composition. For a conservator to create an appropriate plan to protect against agents of deterioration, they must understand what might impact a photographic plate. The following list addresses how each agent of deterioration harms photographic glass plates. === Relative humidity and temperature === Relative humidity (RH) and temperature are two of the most common threats to photographic plates. As with all material collections, high temperature in combination with high humidity can cause mold growth and attract pests. 
Photographic plates face significant structural and chemical challenges unique to their makeup. There are two types of photographic glass plates: collodion wet plates and gelatin dry plates. Structurally, collodion wet plates are held together with a specific emulsion type, made using a silver halide mixture in gelatin. Fluctuations in RH can strain the adhesive emulsion, causing the gelatin to expand and contract. The strain from incorrect RH can also cause the emulsion to crack or separate along the plates' edges. With gelatin dry plates, high humidity can cause mold to grow on the emulsion. High levels of humidity can cause glass plates that have been stored incorrectly to stick together, compromising the image on the plate. Increasing RH can cause deterioration of other elements; these include the silver halide, varnish, and glass support. Decreasing the RH will cause deterioration by eventually leading to the flaking of the binder and dehydration of the glass. Much like RH, temperatures must be precise and closely monitored for the correct storage of photographic glass plates. A safe temperature to keep glass plates is 65 °F (18 °C); however, a fluctuation of +/- 2 °F (about 1 °C) would not cause a significant impact, making the safest range 63 to 67 °F (17 to 19 °C). Low temperatures aid in slowing a plate's inherent decay by delaying the chemical reactions that break down the plate's structure. Increasing temperatures or frequent large fluctuations will speed up the decay process. === Theft and dissociation === Although theft and dissociation can occur separately, it is not uncommon for the two to go hand-in-hand. Dissociation typically results over time from an ordered system falling apart due to a lack of routine maintenance updates, or from a catastrophic event leading to data loss. If an inventory is not regularly updated, it could become easy for a single glass plate, or several, to go missing. Regular inventory maintenance can also serve as a deterrent against theft. Ensuring glass plates are locked and stored where only designated museum staff can access them is the best preventative measure against theft. === Water and pests === Deterioration in glass is often directly related to moisture, from humidity or direct contact. Enough moisture over time will cause the chemical composition of the image to change. In the 1990s, the United States National Archives began to notice that some glass plates in their collection exhibited, on the non-photo-bearing side, a crystalline deposit known as sick glass. If a glass plate has been subjected to large amounts of moisture, mold could grow on the plate's emulsion. Mold will eat away at the emulsion and attract other living pests. Insects will be more likely to appear in areas already compromised by inappropriate storage conditions. Insects will produce waste materials that, like dust, build up over time, causing further damage. Pests eat glass plate storage materials such as paper envelopes or cardboard boxes. === Light === Photographic plates and all photographic materials are susceptible to light. Extensive and ongoing exposure to light can cause significant and irreversible deterioration. Sunlight is the type of light most damaging to photographic plates. However, indoor lighting and other forms of UV lighting all pose a threat to photographic plates, causing fading and yellowing. Light is especially threatening to color photographic materials as it causes accelerated fading of the color dyes. 
Exposure to light could lead to deterioration and discoloration of the pigments present on the plate. === Pollutants and fire === Air pollution can threaten photographic plates through poor air quality and dirt that can damage the materials. This can include dust and gaseous pollution in an urban environment. Air pollution can cause fading of photographic materials. If a plate is subject to poor air quality, debris removal must be done with care using a cotton cloth; if done incorrectly, the glass might be subject to abrasions. Other sources of air pollution include "photocopying machines, construction materials, paint fumes, cardboard, carpets, and janitorial supplies". Fire can cause severe damage to photographic glass plates. The heat produced by a fire can increase the chemical decomposition rate of the plate's emulsion. Pollutants in the air produced by the fire—such as smoke and debris—can also attach to or rest upon plates. The same care should be taken in removing debris after a fire as would be used to remove dust and other air pollutants. === Material and chemical === The glass composition of photographic plates can be a factor of deterioration. Due to poor quality or an inherent fragility, "sick glass" can occur. Environmental conditions are usually linked to the increase or presence of this glass corrosion. The effect of "sick glass" can be weeping and crizzling caused by excessive alkali and a lack of stabilizers. Weeping involves droplets forming on the glass that appear as tiny crystals. This deterioration is especially threatening to cased photographs because the cover glass could be corroded and damage the image underneath. Corrosion of the glass plate support can also damage the image layer by causing the lifting of the binder and varnish layers. The other chemical components of glass-plate negatives can also be threatening agents of deterioration. For instance, the silver image layer could undergo oxidative deterioration, leading to fading and discoloration. Additionally, the collodion binder itself is made up of cellulose nitrate, which is known to be a highly flammable compound. Most of these agents of deterioration are the result of poor chemical processing or inherent fragility, but poor environmental and storage conditions usually accelerate them. === Physical === Glass plates are relatively stable dimensionally but also very fragile and brittle. Glass is highly susceptible to breakage, cracks, and fractures. This can be caused by human error, including dropping or bumping the glass plate, or it can be caused by failure of storage equipment, housing, shelves, etc., which may lead to an impact against the glass. Different breakage and stress states affect the image layer and binder differently. Types of breakage: Impact Break: Point of impact and surrounding radiating arcs. Cracks: Running perpendicular to applied stress. Blind Cracks: Breaks do not carry through the whole thickness of the glass. == Preventive conservation == === Environment === Environmental controls are a crucial part of the preservation of photographic glass plates. Relative humidity (RH), temperature, and light play a significant role in keeping the multiple materials in photographic glass plates maintained. The following environmental regulatory measures are taken for their preservation: For photographic glass plates, the temperature is kept cool at approximately 65 °F (18 °C). RH levels are generally kept at 30–40%. 
If RH drops below 30%, the image binder of the glass plate will dehydrate. If RH rises above 40%, the glass will begin hydrating. For cased glass plate photographs, such as ambrotypes, RH levels are kept at 40–50% and the temperature between 65 and 68 °F (18 and 20 °C). These levels differ because the case is at risk of embrittlement and the brass mat and cover glass are at risk of deterioration. Though cold storage is safe for photographic plates, with proper acclimation periods to room temperature, frozen storage, unlike for film photography, is not recommended. Fluctuations, called "cycling", in RH and temperature should be avoided. Environmental fluctuations can contribute to mold growth, chemical deterioration including discoloration and yellowing, degradation of the silver halide crystals resulting in silver mirroring, and deterioration of the emulsion. Acceptable fluctuations include +/- 2 degrees of temperature and +/- 3% of relative humidity (a simple automated check of these set-points is sketched after the storage guidance below). Photographic glass plates, especially negatives, are preserved in dark enclosures due to their risk of deterioration when exposed to light, particularly UV and sunlight. If displayed, spot-lighting and uneven heating of the photographic plate are avoided. Light levels are kept below 50 lux. === Handling === Photographic glass plates are handled carefully to avoid physical or chemical deterioration and damage – the following measures aid in their preservation through proper handling: To prevent fingerprints, non-vinyl plastic gloves are worn when handling – either latex or nitrile. Cotton gloves are not recommended by conservators due to the possibility of glass easily slipping from the cotton material. Cotton gloves are also susceptible to snagging on the emulsion, if it is flaking, or on the edges of the glass support. When handling, a glass plate is not held by one edge or corner but by two opposite edges and always with two hands. The glass plate is always placed on a flat surface with the emulsion side up. Glass plates are never stacked nor have any pressure placed upon them. The sleeve or enclosure is labeled before placing the glass plate inside. Since glass plates are fragile and brittle, duplicates are created if a glass plate is to be used often for printing. This helps to minimize the threat of breakage of the original. === Storage === Storage of photographic glass plates is important to their preservation. Museums and other cultural institutions take the following measures to ensure their glass plates are properly housed: Photographic glass plates are housed in four-flap enclosures, emulsion side up. These four-flap buffered enclosures prevent a glass plate from being pulled in and out, which would cause further deterioration of the image from flaking and abrasions. The four-flap enclosure allows the glass plate to be accessed by unfolding the flaps without pulling the plate across any surface or material. The photographic glass plates are stored vertically on the long side of the plate in storage boxes whose material is free of acid, lignin, polyvinyl chloride (PVC), dye, sulfur, and alum. The acidity of any paper storage used should show a pH between 7 and 8.5. Glass plates should not be packed tightly and should not rub against each other. Each plate should be separated with stiffeners made of acid-free folder stock or cardboard to support the plate. Photographic glass plates stored in a partially filled box will have spacers, most likely acid-free corrugated paperboard, inserted to prevent significant bumping or moving. 
Glass plates larger than 10" x 12" are stored in legal-size boxes that are only partially filled, to prevent the box from becoming too heavy. The extra space in the box is filled with board or spacers to avoid shifting when jostled. Storage boxes of photographic glass plates are stored on a lower shelf, specifically below four feet. This helps prevent someone from lifting them down from above their head and dropping them from a great height. Each storage box of photographic glass plates should be labeled with words such as "Heavy", "Handle with Care", and "Caution: Contains Glass Negatives", so all with access to the collection know to be extra careful when lifting the box off a shelf. When there are concerns about the reactivity of housing materials, the Photographic Activity Test (PAT) by the Image Permanence Institute should be consulted. The use of the PAT is a standard in the preservation of photographic plates. The PAT "explores the possibility of chemical interactions between photographs and a given material after prolonged contact". It is considered best practice to use steel shelving to store photographic plates. It is not recommended to use wood cabinets or crates. Wood shelves are susceptible to termites and are more prone to trigger chemical reactions with the plates. Wood shelves tend to possess finishes, paints, and glues that cause off-gassing. Acetic acid and formaldehyde build-up are also more likely to occur. Lastly, given the weight of the photographic plates, it is doubtful that relatively weak wood shelving can hold the weight of the collection. ==== Storage of broken photographic plates ==== Broken or cracked glass plates are stored specially, separate from other photographic plates, and in the following ways: Broken glass plates are stored flat, unlike intact plates stored vertically. Stacking broken plates only five plates high is recommended due to the plates' weight. This will prevent further breakage and damage. Photographic glass plates with cracked or damaged binder are stored on sink-mats. Those with minor flaking are still housed in the four-flap enclosure that is labeled appropriately, describing the damage. Glass plates with extensive flaking are stored on sink-mats horizontally and placed in a storage box with a label that reads "Caution: Broken glass. Carry Horizontally." Broken glass plate shards are "sandwiched" in between two pieces of buffered board and placed inside the four-flap enclosure. AIC advises that form-fit support should be created for broken glass shards by cutting out two pieces of 4-ply mat board that fit each shard. These pieces are then glued to each side of the buffered board with either wheat starch paste or 3M #415 double-stick tape, placing each shard in between the form-fit support to help prevent further damage. These broken shards are then placed in individual four-flap enclosures and stored flat, with appropriate labeling that warns of their broken condition. Another method of storing broken shards is to place them on sink mats. If this method is used, each piece is separated with paperboard spacers to prevent the pieces from touching. These paperboard spacers are sometimes attached with adhesives to the mat so that physical damage does not occur to the shards. They are stored horizontally and placed in a storage box with a label that reads "Caution: Broken glass. Carry Horizontally." 
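The temperature and relative-humidity set-points quoted in the Environment section above lend themselves to simple automated monitoring. The following is a minimal sketch, assuming hypothetical hourly sensor readings; the threshold values are the ranges given in this article, and the function and variable names are illustrative rather than part of any standard collections-management tool:

```python
# Minimal sketch: flag environmental readings that fall outside the ranges
# recommended above for photographic glass plates (values assumed from this
# article; names are illustrative only).

SAFE_TEMP_F = (63.0, 67.0)      # ~65 °F with an allowed drift of about +/- 2 °F
SAFE_RH_PERCENT = (30.0, 40.0)  # binder dehydrates below 30%, glass hydrates above 40%

def check_reading(temp_f: float, rh_percent: float) -> list[str]:
    """Return a list of warnings for a single temperature/RH reading."""
    warnings = []
    if not (SAFE_TEMP_F[0] <= temp_f <= SAFE_TEMP_F[1]):
        warnings.append(f"temperature {temp_f:.1f} °F outside {SAFE_TEMP_F}")
    if rh_percent < SAFE_RH_PERCENT[0]:
        warnings.append(f"RH {rh_percent:.0f}% too low: risk of binder flaking")
    elif rh_percent > SAFE_RH_PERCENT[1]:
        warnings.append(f"RH {rh_percent:.0f}% too high: risk of mold and glass hydration")
    return warnings

# Example: a few hypothetical hourly readings (temp °F, RH %)
readings = [(64.5, 35.0), (66.8, 42.5), (61.0, 38.0)]
for temp, rh in readings:
    for warning in check_reading(temp, rh):
        print(warning)
```

In practice such a check would be run against a data logger's export; the point of the sketch is only that the published set-points translate directly into machine-checkable limits.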
=== Maintenance/housekeeping === Maintenance/housekeeping of photographic plates requires minimal intervention: Light cleaning of plates is carried out occasionally by removing dust with a soft brush for their preservation. To dust the emulsion side, it is best to use an unused paint brush and, very gently, brush from the center to the outside of the plate. To clean the underside of the plate (the non-emulsion side), a cotton ball or cotton round is dipped into a cup of distilled water and worked from the middle of the plate to the outside. Water on the emulsion side will wash the emulsion away, causing the image to be lost forever, so care is taken to ensure this cleaning treatment is used only on the glass support underside and not on the emulsion side of the plate. Conservators should also keep the surrounding collections area clean of dust, pests, and any other debris that may attract pests. Food and drink should not be permitted in the storage area as they attract pests. To prevent deterioration from air pollutants, it is helpful to have the air entering the storage area filtered and purified, windows closed, obsolete/outdated media minimized, and enclosures and cabinets in use to protect collection objects. == Conservation treatment == Many broken or cracked glass plates may benefit from conservation treatment. There are various actions taken in reassembling and restoring these plates using the following materials and methods: === Handling === Conservators tend to wear Neoprene gloves to help protect the emulsion from fingerprints that will cause deterioration over time. They avoid handling glass fragments to help prevent further breaking of the glass. A padded (foamed polyethylene) box lined with tight-weave tissue or sintered Teflon is preferred by conservators for storing fragments, as it helps prevent further breaking or cracking. === Adhesives === Conservators use various adhesives; each adhesive type has benefits and disadvantages for different situations. Paraloid B-72 – A solution of 50–70% B-72 in a solvent with added silica is used to reassemble glass plate fragments. It takes 1–2 hours to dry. One issue with this adhesive is that it creates "snowflakes" in between pieces, making an invisible reassembly impossible. Epoxy resin – This adhesive is strong and has minimal shrinkage. An issue with this method is that it yellows over time, and it is not advisable to use it on glass plates with a collodion binder, because the method required to reverse it could damage the collodion binder. Cyanoacrylates – This adhesive bonds firmly with alkaline surfaces but is very brittle and is only used for temporary repairs. Pressure-sensitive tape – This adhesive type is ubiquitous, easy to use, and completely removable but only provides minimal support. Sticky wax – As the pieces are assembled, sticky resin, such as that used for lost wax casting in jewelry making, is handy for holding the shards in place. === Backing material === Silpat sheet – This is made of silicone and fibreglass; it is textured and provides air pockets to prevent damage from the capillary application, and it does not traumatize the emulsion side of the glass. Secondary support – This method is used for glass plates broken into many pieces or over 5 by 7 inches (130 mm × 180 mm) in size. A second piece of glass is inserted with silicone as a barrier layer. === Application === Wicking – This is used by conservators to apply the adhesive to the glass with a wooden or glass applicator. 
A capillary tube or bottle puts the appropriate amount of glue on the glass shard without excess. Direct application – When repairing a broken plate on an inclined plane, conservators apply the adhesive to the fracture interface. The shard is placed directly next to its corresponding bit on the inclined plane. === Repair methods and techniques === Photoshop Software assembly – Virtually assembling broken glass shards through Photoshop, after scanning or photographing all pieces, is used by conservators. Once all the details are within Photoshop, conservators will construct a copy of the glass plate by moving and rotating the parts until the glass plate is fully assembled. This allows conservators to understand how the glass plate should be reconstructed while avoiding further damage to the glass plate. This method allows for further research and study of the plate without the risk of further injury through continual handling. Inclined assembly – This method involves applying an adhesive to the glass shard interfaces and assembling them on an inclined surface covered with Mylar or Silpat. The glass shards are reassembled by direct application, which involves applying the adhesive directly to the shard interface and attaching it to its corresponding piece or assembling through wicking. Vertical assembly – This method is used because the glass shards fit back together most accurately vertically. This also helps to protect the side of the binder layer. The adhesive is not applied until all the pieces are assembled, enabling conservators to recognize any misalignment before an adhesive is applied. As the last step, the adhesive is applied through wicking. Light-line – This is often used to ensure all pieces are aligned, allowing a conservator to see any misalignment, which would show as a crooked line. Once the details are aligned, the light bar will be straight again. === Projects === The vertical assembly method along with a light line is used in The Glass Plate Negative Project at the Heritage Conservation Centre, as outlined in the case study. This study shows how conservators also deal with other conservation issues, including accretions and residue. For instance, while the plates were considered structurally stable, they may have needed surface cleaning. This was completed by using swabs dampened with water/ethanol solutions to reduce stains or do away with any tape residue. Pressure-sensitive labels were removed mechanically. Conservators used Whatman lens tissues to wipe off any other residue marks. == References == == External links == Conservation of Glass Plate Negatives at the Smithsonian Case Study: Repair of a Broken Glass Plate Negative The Conservation of Glass Plates: Student Placement The Preservation of Glass Plate Negatives A Method of Rehousing Glass Plate Negatives Gaylord Archival – rehousing resources Guidelines for Exhibition Light Levels for Photographic Materials Digitizing glass plate negatives
Wikipedia/Conservation_and_restoration_of_photographic_plates
The diagonal method (DM) is a rule of thumb in photography, painting and drawing. Dutch photographer and lecturer Edwin Westhoff discovered the method when, after having long taught the rule of thirds in photography courses, he conducted visual experiments to investigate why this rule of thirds only loosely prescribes that points of interest should be placed more or less near the intersection of lines, rather than being rigid and demanding placement to be precisely on these intersections. Having studied many photographs, paintings and etchings, he discovered that details of interest were often placed precisely on the diagonals of a square, instead of any "strong points" that the rule of thirds or the photographic adaptation of the golden ratio suggests. A photograph is usually a rectangle with an aspect ratio of 4:3 or 3:2; the diagonals of the method are drawn from each corner at 45 degrees, bisecting the corner angles. Manually placing certain elements of interest on these lines results in a more pleasing photograph. == Theory == Diagonals, the middle perpendiculars, the center and the corners of a square are said to comprise the force lines in a square, and are regarded by some as more powerful than other parts in a square. According to the DM, details that are of interest (to the artist and the viewer) are placed on one or more diagonals of 45 degrees from the four corners of the image. Contrary to other rules of thumb involving composition, such as the rule of thirds and the golden ratio, the DM does not ascribe value to the intersections of its lines. Rather, a detail of interest can be located on any point of the four bisections, to which the viewer's attention will be drawn. However, the DM is very strict about placing details exactly on the bisection, allowing for a maximum deviation of one millimeter on an A4-sized picture. Another difference from other rules of thumb is that the DM is not used to improve composition. == Application == The diagonal method was derived from an analysis of how artists intuitively locate details within a composition, and can be used for such analyses. Westhoff discovered that by drawing lines with an angle of 45 degrees from the corners of an image, one can find out which details the artist (deliberately or unconsciously) intended to emphasize. Artists and photographers intuitively place areas of interest within a composition. The DM can assist in determining which details the artist wishes to emphasize. Research by Westhoff has found that important details in paintings and etchings by Rembrandt, such as eyes, hands or utensils, were placed exactly on the diagonals. It is very difficult to consciously place points of attention precisely on the diagonals during the making of photos or artworks, yet it is possible to do this in post-production using guidelines. For instance, the DM can be applied to move the subject of a picture further into a corner. The DM can only be applied to images where certain details are supposed to be emphasized or exaggerated, such as a portrait in which a specific body part deserves extra attention by the viewer, or a photograph for advertising a product. Photographs of landscapes and architecture usually rely on the composition as a whole, or have lines other than the bisections to determine the composition, such as the horizon. Only if the picture includes details such as persons, (standalone) trees, or buildings is the DM applicable. 
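As a worked illustration of the rule described above, the four corner bisections can be written as simple line equations and a point of interest tested against them. The sketch below is a minimal illustration, not a published tool; it assumes image coordinates in millimetres with the origin at the top-left corner and y increasing downwards, and uses the one-millimetre tolerance quoted for an A4-sized picture. The function names are illustrative only.

```python
from math import sqrt

def diagonal_distances(px, py, width, height):
    """Perpendicular distances from point (px, py) to the four 45-degree
    corner bisections of a width x height frame (origin at top-left,
    y increasing downwards). All values share the frame's units."""
    return {
        "from top-left":     abs(px - py) / sqrt(2),                      # line y = x
        "from top-right":    abs(px + py - width) / sqrt(2),              # line x + y = width
        "from bottom-left":  abs(px + py - height) / sqrt(2),             # line x + y = height
        "from bottom-right": abs((px - py) - (width - height)) / sqrt(2), # line x - y = width - height
    }

def on_a_diagonal(px, py, width, height, tolerance=1.0):
    """True if the point lies within `tolerance` of any bisection
    (1 mm is the deviation quoted for an A4-sized picture)."""
    return min(diagonal_distances(px, py, width, height).values()) <= tolerance

# Example: a 297 x 210 mm (A4 landscape) picture with a detail at (100, 99.4) mm
print(diagonal_distances(100, 99.4, 297, 210))
print(on_a_diagonal(100, 99.4, 297, 210))  # True: about 0.4 mm from the top-left diagonal
```

The same distance test can be run over the detected features of a scanned painting or photograph to reproduce, in rough form, the kind of analysis Westhoff describes.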
== See also == Golden triangle (composition) Another way to use diagonal lines to place elements in a composition == References == == External links == www.diagonalmethod.info
Wikipedia/Diagonal_method
In photography, toning is a method of altering the color of black-and-white photographs. In analog photography, it is a chemical process carried out on metal salt-based prints, such as silver prints, iron-based prints (cyanotype or Van Dyke brown), or platinum or palladium prints. This darkroom process cannot be performed with a color photograph. The effects of this process can be emulated with software in digital photography. Sepia is considered a form of black-and-white or monochrome photography. == Chemical toning == Most toners work by replacing the metallic silver in the emulsion with a silver compound, such as silver sulfide (Ag2S) in the case of sepia toning. The compound may be more stable than metallic silver and may also have a different color or tone. Different toning processes give different colors to the final print. In some cases, the printer may choose to tone some parts of a print more than others. Toner also can increase the range of shades visible in a print without reducing the contrast. Selenium toning is especially effective in this regard. Some toning processes can improve the chemical stability of the print, increasing its potential longevity. Other toning processes, such as those including iron and copper, can make the print less stable. Many chemical toners are highly toxic, some even containing chemicals that are carcinogenic. It is therefore extremely important that the chemicals be used in a well ventilated area, and rubber gloves and face protection should be worn when handling them. === Selenium toning === Selenium toning is a popular archival toning process, converting metallic silver to silver selenide. In a diluted toning solution, selenium toning gives a red-brown tone, while a strong solution gives a purple-brown tone. The change in color depends upon the chemical make-up of the photographic emulsion being toned. Chloro-bromide papers change dramatically, whilst pure bromide papers change little. Fibre-based papers are more responsive to selenium toning. Selenium toning may not produce prints quite as stable as sepia or gold toning. Recently, doubts have surfaced as to the effectiveness of selenium toner in ensuring print longevity. === Sepia toning === Sepia toning is a specialized treatment to give a black-and-white photographic print a warmer tone and to enhance its archival qualities. The metallic silver in the print is converted to a sulfide compound, which is much more resistant to the effects of environmental pollutants such as atmospheric sulfur compounds. Silver sulfide is at least 50% more stable than silver. There are three types of sepia toner in modern use: Sodium sulfide toners – the traditional "rotten egg" toners (sodium sulfide smells of rotten eggs when exposed to moisture); Thiourea (or "thiocarbamide") toners – these are odorless, and the tone can be varied according to the chemical mixture; Polysulfide or "direct" toners – these do not require a bleaching stage. Except for polysulfide toners, sepia toning is done in three stages. The print is first soaked in a potassium ferricyanide bleach to reconvert the metallic silver to silver halide. The print is washed to remove excess potassium ferricyanide and then immersed into a bath of toner, which converts the silver halides to silver sulfide. Incomplete bleaching creates a multi-toned image with sepia highlights and gray mid-tones and shadows. This is called split toning. The untoned silver in the print can be treated with a different toner, such as gold or selenium. 
Fred Judge FRPS made extensive use of sepia toning for postcards produced by the British picture postcards manufacturer Judges Postcards. === Metal replacement toning === Metal replacement toners replace the metallic silver, through a series of chemical reactions, with a ferrocyanide salt of a transition metal. Some metals, such as platinum or gold, can protect the image. Others, such as iron (blue toner) or copper (red toner), may reduce the life of the image. Metal-replacement toning with gold alone results in a blue-black tone. It is often combined with a sepia toner to produce a more attractive orange-red tone. The archival Gold Protective Solution (GP-1) formula uses a 1% gold chloride stock solution with sodium or potassium thiocyanate. It is sometimes used to split tone photographs previously toned in selenium for artistic purposes. === Dye toning === Dye toners replace the metallic silver with a dye. The image will have a reduced lifetime compared with an ordinary silver print. == Digital toning == Toning can be simulated digitally, either in-camera or in post-processing. The in-camera effect, as well as beginner tutorials given for software like Photoshop or GIMP, use a simple tint. More sophisticated software tends to implement sepia tones using the duotone feature. Simpler photo-editing software usually has an option to sepia-tone an image in one step. == Examples == The examples below show a digital color photograph, a black-and-white version and a sepia-toned version. The following are examples of the three types using film: == See also == Color grading Cyanotype Film tinting Grisaille (painting) Image editing § Selective color change List of software palettes § Color gradient palettes Monochrome Platinum print == References == == External links == Chemical toning (formulas and technique): (Book) Photographic facts and formulas (1924) Many various toners (copper, iron, vanadium, selenium, sulphide, etc.)(p. 216) (Book) All about formulae for your darkroom (1942) Selenium, indirect sulphide toning, red chalk, blue and green tones (pp. 44–47) (Book) Kodak Chemicals and Formulae (1949) Selenium, sulphide-selenium and other toners (pp. 39–41) Ilford: Toning prints Sepia toning in a developing tray. Digital "toning": Sepia Toner Application to convert digital images Sepia EffectVintage sepia effect in Easy Photo Effects Archived 2012-04-26 at the Wayback Machine Sepia toning in Adobe Photoshop Sepia toning in the GIMP Sepia toning in Java
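The duotone-style mapping described in the Digital toning section above can be sketched in a few lines of code. The following is a minimal illustration, assuming an 8-bit grayscale image opened with the Pillow library; the channel weights are a commonly quoted sepia transform applied to a single-channel input, an illustrative choice rather than the method of any particular editor.

```python
# Minimal sketch of digital sepia toning: map grayscale values onto a
# warm-brown ramp via a per-value lookup table. Requires Pillow; the
# channel weights below are an assumed, commonly quoted sepia transform.
from PIL import Image

def sepia_tone(gray_img: Image.Image) -> Image.Image:
    """Convert an 8-bit grayscale image to a sepia-toned RGB image."""
    gray = gray_img.convert("L")
    # Build a 256-entry lookup table: highlights stay near cream, shadows go brown.
    lut = [(min(255, int(v * 1.351)),
            min(255, int(v * 1.203)),
            min(255, int(v * 0.937))) for v in range(256)]
    rgb = Image.new("RGB", gray.size)
    rgb.putdata([lut[p] for p in gray.getdata()])
    return rgb

# Example usage (hypothetical file names):
# sepia_tone(Image.open("print_scan.png")).save("print_scan_sepia.png")
```

More sophisticated split-toning effects would map shadows and highlights through separate curves, which is essentially what the duotone feature mentioned above does.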
Wikipedia/Photographic_print_toning
In optics, aberration is a property of optical systems, such as lenses and mirrors, that causes the image created by the optical system to not be a faithful reproduction of the object being observed. Aberrations cause the image formed by a lens to be blurred, distorted in shape or have color fringing or other effects not seen in the object, with the nature of the distortion depending on the type of aberration. Aberration can be defined as a departure of the performance of an optical system from the predictions of paraxial optics. In an imaging system, it occurs when light from one point of an object does not converge into (or does not diverge from) a single point after transmission through the system. Aberrations occur because the simple paraxial theory is not a completely accurate model of the effect of an optical system on light, rather than due to flaws in the optical elements. An image-forming optical system with aberration will produce an image which is not sharp. Makers of optical instruments need to correct optical systems to compensate for aberration. Aberration can be analyzed with the techniques of geometrical optics. The articles on reflection, refraction and caustics discuss the general features of reflected and refracted rays. == Overview == With an ideal lens, light from any given point on an object would pass through the lens and come together at a single point in the image plane (or, more generally, the image surface). Real lenses, even when they are perfectly made, do not however focus light exactly to a single point. These deviations from the idealized lens performance are called aberrations of the lens. Aberrations fall into two classes: monochromatic and chromatic. Monochromatic aberrations are caused by the geometry of the lens or mirror and occur both when light is reflected and when it is refracted. They appear even when using monochromatic light, hence the name. Chromatic aberrations are caused by dispersion, the variation of a lens's refractive index with wavelength. Because of dispersion, different wavelengths of light come to focus at different points. Chromatic aberration does not appear when monochromatic light is used. === Monochromatic aberrations === The most common monochromatic aberrations are: Defocus Spherical aberration Coma Astigmatism Field curvature Image distortion Although defocus is technically the lowest-order of the optical aberrations, it is usually not considered as a lens aberration, since it can be corrected by moving the lens (or the image plane) to bring the image plane to the optical focus of the lens. In addition to these aberrations, piston and tilt are effects which shift the position of the focal point. Piston and tilt are not true optical aberrations, since when an otherwise perfect wavefront is altered by piston and tilt, it will still form a perfect, aberration-free image, only shifted to a different position. === Chromatic aberrations === Chromatic aberration occurs when different wavelengths are not focussed to the same point. Types of chromatic aberration are: Axial (or "longitudinal") chromatic aberration Lateral (or "transverse") chromatic aberration == Theory of monochromatic aberration == In a perfect optical system in the classical theory of optics, rays of light proceeding from any object point unite in an image point; and therefore the object space is reproduced in an image space. 
The introduction of simple auxiliary terms, due to Gauss, named the focal lengths and focal planes, permits the determination of the image of any object for any system. The Gaussian theory, however, is only true so long as the angles made by all rays with the optical axis (the symmetrical axis of the system) are infinitely small, i.e., with infinitesimal objects, images and lenses; in practice these conditions may not be realized, and the images projected by uncorrected systems are, in general, ill-defined and often blurred if the aperture or field of view exceeds certain limits. The investigations of James Clerk Maxwell and Ernst Abbe showed that the properties of these reproductions, i.e., the relative position and magnitude of the images, are not special properties of optical systems, but necessary consequences of the supposition (per Abbe) of the reproduction of all points of a space in image points, and are independent of the manner in which the reproduction is effected. These authors showed, however, that no optical system can justify these suppositions, since they are contradictory to the fundamental laws of reflection and refraction. Consequently, the Gaussian theory only supplies a convenient method of approximating reality; realistic optical systems fall short of this unattainable ideal. Currently, all that can be accomplished is the projection of a single plane onto another plane; but even in this, aberrations always occurs and it may be unlikely that these will ever be entirely corrected. === Aberration of axial points (spherical aberration in the restricted sense) === Let S (Figure 1) be any optical system, rays proceeding from an axis point O under an angle u1 will unite in the axis point O′1; and those under an angle u2 in the axis point O′2. If there is refraction at a collective spherical surface, or through a thin positive lens, O′2 will lie in front of O′1 so long as the angle u2 is greater than u1 (under correction); and conversely with a dispersive surface or lenses (over correction). The caustic, in the first case, resembles the sign '>' (greater than); in the second '<' (less than). If the angle u1 is very small, O′1 is the Gaussian image; and O′1 O′2 is termed the longitudinal aberration, and O′1R the lateral aberration of the pencils with aperture u2. If the pencil with the angle u2 is that of the maximum aberration of all the pencils transmitted, then in a plane perpendicular to the axis at O′1 there is a circular disk of confusion of radius O′1R, and in a parallel plane at O′2 another one of radius O′2R2; between these two is situated the disk of least confusion. The largest opening of the pencils, which take part in the reproduction of O, i.e., the angle u, is generally determined by the margin of one of the lenses or by a hole in a thin plate placed between, before, or behind the lenses of the system. This hole is termed the stop or diaphragm; Abbe used the term aperture stop for both the hole and the limiting margin of the lens. The component S1 of the system, situated between the aperture stop and the object O, projects an image of the diaphragm, termed by Abbe the entrance pupil; the exit pupil is the image formed by the component S2, which is placed behind the aperture stop. All rays which issue from O and pass through the aperture stop also pass through the entrance and exit pupils, since these are images of the aperture stop. 
Since the maximum aperture of the pencils issuing from O is the angle u subtended by the entrance pupil at this point, the magnitude of the aberration will be determined by the position and diameter of the entrance pupil. If the system be entirely behind the aperture stop, then this is itself the entrance pupil (front stop); if entirely in front, it is the exit pupil (back stop). If the object point be infinitely distant, all rays received by the first member of the system are parallel, and their intersections, after traversing the system, vary according to their perpendicular height of incidence, i.e. their distance from the axis. This distance replaces the angle u in the preceding considerations; and the aperture, i.e., the radius of the entrance pupil, is its maximum value. ==== Aberration of elements, i.e. smallest objects at right angles to the axis ==== If rays issuing from O (Figure 1) are concurrent, it does not follow that points in a portion of a plane perpendicular at O to the axis will be also concurrent, even if the part of the plane be very small. As the diameter of the lens increases (i.e., with increasing aperture), the neighboring point N will be reproduced, but attended by aberrations comparable in magnitude to ON. These aberrations are avoided if, according to Abbe, the sine condition, sin u′1/sin u1 = sin u′2/sin u2, holds for all rays reproducing the point O. If the object point O is infinitely distant, u1 and u2 are to be replaced by h1 and h2, the perpendicular heights of incidence; the sine condition then becomes sin u′1/h1 = sin u′2/h2. A system fulfilling this condition and free from spherical aberration is called aplanatic (Greek a-, privative; plann, a wandering). This word was first used by Robert Blair to characterize a superior achromatism, and, subsequently, by many writers to denote freedom from spherical aberration as well. Since the aberration increases with the distance of the ray from the center of the lens, the aberration increases as the lens diameter increases (or, correspondingly, with the diameter of the aperture), and hence can be minimized by reducing the aperture, at the cost of also reducing the amount of light reaching the image plane. === Aberration of lateral object points (points beyond the axis) with narrow pencils — astigmatism === A point O (Figure 2) at a finite distance from the axis (or with an infinitely distant object, a point which subtends a finite angle at the system) is, in general, even then not sharply reproduced if the pencil of rays issuing from it and traversing the system is made infinitely narrow by reducing the aperture stop; such a pencil consists of the rays which can pass from the object point through the now infinitely small entrance pupil. It is seen (ignoring exceptional cases) that the pencil does not meet the refracting or reflecting surface at right angles; therefore it is astigmatic (Greek a-, privative; stigmia, a point). Naming the central ray passing through the entrance pupil the axis of the pencil or principal ray, it can be said: the rays of the pencil intersect, not in one point, but in two focal lines, which can be assumed to be at right angles to the principal ray; of these, one lies in the plane containing the principal ray and the axis of the system, i.e. in the first principal section or meridional section, and the other at right angles to it, i.e. in the second principal section or sagittal section. 
=== Aberration of lateral object points (points beyond the axis) with narrow pencils — astigmatism === A point O (Figure 2) at a finite distance from the axis (or with an infinitely distant object, a point which subtends a finite angle at the system) is, in general, even then not sharply reproduced if the pencil of rays issuing from it and traversing the system is made infinitely narrow by reducing the aperture stop; such a pencil consists of the rays which can pass from the object point through the now infinitely small entrance pupil. It is seen (ignoring exceptional cases) that the pencil does not meet the refracting or reflecting surface at right angles; therefore it is astigmatic (Greek a-, privative; stigma, a point). Naming the central ray passing through the entrance pupil the axis of the pencil or principal ray, it can be said: the rays of the pencil intersect, not in one point, but in two focal lines, which can be assumed to be at right angles to the principal ray; of these, one lies in the plane containing the principal ray and the axis of the system, i.e. in the first principal section or meridional section, and the other at right angles to it, i.e. in the second principal section or sagittal section. We receive, therefore, in no single intercepting plane behind the system, as, for example, a focusing screen, an image of the object point; on the other hand, in each of two planes lines O′ and O″ are separately formed (in neighboring planes ellipses are formed), and in a plane between O′ and O″ a circle of least confusion. The interval O′O″, termed the astigmatic difference, increases, in general, with the angle W made by the principal ray OP with the axis of the system, i.e. with the field of view. Two astigmatic image surfaces correspond to one object plane; and these are in contact at the axis point; on the one lie the focal lines of the first kind, on the other those of the second. Systems in which the two astigmatic surfaces coincide are termed anastigmatic or stigmatic. Sir Isaac Newton was probably the discoverer of astigmatism; the position of the astigmatic image lines was determined by Thomas Young; and the theory was developed by Allvar Gullstrand. A bibliography by P. Culmann is given in Moritz von Rohr's Die Bilderzeugung in optischen Instrumenten. === Aberration of lateral object points with broad pencils — coma === By opening the stop wider, similar deviations arise for lateral points as have been already discussed for axial points; but in this case they are much more complicated. The course of the rays in the meridional section is no longer symmetrical to the principal ray of the pencil; and on an intercepting plane there appears, instead of a luminous point, a patch of light, not symmetrical about a point, and often exhibiting a resemblance to a comet having its tail directed towards or away from the axis. From this appearance it takes its name. The unsymmetrical form of the meridional pencil, formerly the only one considered, is coma in the narrower sense only; other errors of coma have been treated by Arthur König and Moritz von Rohr, and later by Allvar Gullstrand. === Curvature of the field of the image === If the above errors be eliminated, the two astigmatic surfaces united, and a sharp image obtained with a wide aperture, there remains the necessity to correct the curvature of the image surface, especially when the image is to be received upon a plane surface, e.g. in photography. In most cases the surface is concave towards the system. === Distortion of the image === Even if the image is sharp, it may be distorted compared to ideal pinhole projection. In pinhole projection, the magnification of an object is inversely proportional to its distance to the camera along the optical axis, so that a camera pointing directly at a flat surface reproduces that flat surface. Distortion can be thought of as stretching the image non-uniformly, or, equivalently, as a variation in magnification across the field. While "distortion" can include arbitrary deformation of an image, the most pronounced mode of distortion produced by conventional imaging optics is "barrel distortion", in which the center of the image is magnified more than the perimeter (Figure 3a). The reverse, in which the perimeter is magnified more than the center, is known as "pincushion distortion" (Figure 3b). This effect is called lens distortion or image distortion, and there are algorithms to correct it. Systems free of distortion are called orthoscopic (orthos, right; skopein, to look) or rectilinear (straight lines).
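The correction algorithms just mentioned commonly model barrel and pincushion distortion as a polynomial in the radial distance from the image center. The sketch below illustrates that general approach; it is not a method taken from the source, and the coefficient values are arbitrary. It applies the radial model to normalized image coordinates and inverts it by fixed-point iteration.

import numpy as np

def distort(xy, k1, k2=0.0):
    # Radial model r_d = r_u (1 + k1 r^2 + k2 r^4) on (N, 2) normalized
    # coordinates; with this sign convention k1 < 0 gives barrel and
    # k1 > 0 pincushion distortion.
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d, k1, k2=0.0, iters=30):
    # Invert the model by fixed-point iteration, dividing out the radial
    # factor evaluated at the current estimate of the undistorted point.
    xy_u = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy_u ** 2, axis=1, keepdims=True)
        xy_u = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy_u

pts = np.array([[0.5, 0.0], [0.5, 0.5], [0.0, 0.9]])
warped = distort(pts, k1=-0.2)                                  # barrel-distort the points
print(np.allclose(undistort(warped, k1=-0.2), pts, atol=1e-6))  # True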
This aberration is quite distinct from that of the sharpness of reproduction; in unsharp reproduction, the question of distortion arises if only parts of the object can be recognized in the figure. If, in an unsharp image, a patch of light corresponds to an object point, the center of gravity of the patch may be regarded as the image point, this being the point where the plane receiving the image, e.g., a focusing screen, intersects the ray passing through the middle of the stop. This assumption is justified if a poor image on the focusing screen remains stationary when the aperture is diminished; in practice, this generally occurs. This ray, named by Abbe a principal ray (not to be confused with the principal rays of the Gaussian theory), passes through the center of the entrance pupil before the first refraction, and the center of the exit pupil after the last refraction. From this it follows that correctness of drawing depends solely upon the principal rays, and is independent of the sharpness or curvature of the image field. Referring to Figure 4, we have O′Q′/OQ = (a′ tan w′)/(a tan w) = 1/N, where N is the scale or magnification of the image. For N to be constant for all values of w, (a′ tan w′)/(a tan w) must also be constant. If the ratio a′/a be sufficiently constant, as is often the case, the above relation reduces to the condition of Airy, i.e. tan w′/tan w is a constant. This simple relation is fulfilled in all systems which are symmetrical with respect to their diaphragm (briefly named symmetrical or holosymmetrical objectives), or which consist of two like, but different-sized, components, placed from the diaphragm in the ratio of their size, and presenting the same curvature to it (hemisymmetrical objectives); in these systems tan w′/tan w = 1. The constancy of a′/a necessary for this relation to hold was pointed out by R. H. Bow (Brit. Journ. Photog., 1861) and Thomas Sutton (Photographic Notes, 1862); it has been treated by O. Lummer and by M. von Rohr (Zeit. f. Instrumentenk., 1897, 17, and 1898, 18, p. 4). It requires the middle of the aperture stop to be reproduced in the centers of the entrance and exit pupils without spherical aberration. M. von Rohr showed that for systems fulfilling neither the Airy nor the Bow-Sutton condition, the ratio (a′ cos w′)/(a tan w) will be constant for one distance of the object. This combined condition is exactly fulfilled by holosymmetrical objectives reproducing with the scale 1, and by hemisymmetrical objectives, if the scale of reproduction be equal to the ratio of the sizes of the two components. === Zernike model of aberrations === Circular wavefront profiles associated with aberrations may be mathematically modeled using Zernike polynomials. Developed by Frits Zernike in the 1930s, Zernike's polynomials are orthogonal over a circle of unit radius. A complex, aberrated wavefront profile may be curve-fitted with Zernike polynomials to yield a set of fitting coefficients that individually represent different types of aberrations. These Zernike coefficients are linearly independent, thus individual aberration contributions to an overall wavefront may be isolated and quantified separately. There are even and odd Zernike polynomials.
The even Zernike polynomials are defined as $Z_n^m(\rho,\phi)=R_n^m(\rho)\cos(m\phi)$ and the odd Zernike polynomials as $Z_n^{-m}(\rho,\phi)=R_n^m(\rho)\sin(m\phi)$, where m and n are nonnegative integers with n ≥ m, φ is the azimuthal angle in radians, and ρ is the normalized radial distance. The radial polynomials $R_n^m$ have no azimuthal dependence, and are defined by $R_n^m(\rho)=\sum_{k=0}^{(n-m)/2}\frac{(-1)^k\,(n-k)!}{k!\left(\frac{n+m}{2}-k\right)!\left(\frac{n-m}{2}-k\right)!}\,\rho^{n-2k}$ if n − m is even, and $R_n^m(\rho)=0$ if n − m is odd. The first few Zernike polynomials, multiplied by their respective fitting coefficients a0, ..., a8, represent the lowest-order wavefront errors; here ρ is the normalized pupil radius with 0 ≤ ρ ≤ 1, φ is the azimuthal angle around the pupil with 0 ≤ φ ≤ 2π, and the fitting coefficients are the wavefront errors expressed in wavelengths. As in Fourier synthesis using sines and cosines, a wavefront may be perfectly represented by a sufficiently large number of higher-order Zernike polynomials. However, wavefronts with very steep gradients or very high spatial frequency structure, such as produced by propagation through atmospheric turbulence or aerodynamic flowfields, are not well modeled by Zernike polynomials, which tend to low-pass filter fine spatial definition in the wavefront. In this case, other fitting methods such as fractals or singular value decomposition may yield improved fitting results. The circle polynomials were introduced by Frits Zernike to evaluate the point image of an aberrated optical system taking into account the effects of diffraction. The perfect point image in the presence of diffraction had already been described by Airy as early as 1835. It took almost a hundred years to arrive at a comprehensive theory and modeling of the point image of aberrated systems (Zernike and Nijboer). The analysis by Nijboer and Zernike describes the intensity distribution close to the optimum focal plane. An extended theory that allows the calculation of the point image amplitude and intensity over a much larger volume in the focal region was recently developed (extended Nijboer-Zernike theory). This extended Nijboer-Zernike theory of point image or 'point-spread function' formation has found applications in general research on image formation, especially for systems with a high numerical aperture, and in characterizing optical systems with respect to their aberrations.
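The definitions above translate directly into code. The following sketch is a plain transcription of the factorial sum for the radial polynomials, together with the even and odd angular factors; it is for illustration rather than an optimized implementation.

import math

def zernike_radial(n, m, rho):
    # R_n^m(rho) from the explicit sum; identically zero when n - m is odd.
    if (n - m) % 2:
        return 0.0
    return sum(
        (-1) ** k * math.factorial(n - k)
        / (math.factorial(k)
           * math.factorial((n + m) // 2 - k)
           * math.factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

def zernike(n, m, rho, phi):
    # Even polynomials use cos(m phi) for m >= 0; odd ones use sin(|m| phi).
    if m >= 0:
        return zernike_radial(n, m, rho) * math.cos(m * phi)
    return zernike_radial(n, -m, rho) * math.sin(-m * phi)

print(zernike_radial(2, 0, 0.5))   # R_2^0 = 2 rho^2 - 1, i.e. -0.5 here (defocus)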
== Analytic treatment of aberrations == The preceding review of the several errors of reproduction belongs to the Abbe theory of aberrations, in which definite aberrations are discussed separately; it is well suited to practical needs, for in the construction of an optical instrument certain errors are sought to be eliminated, the selection of which is justified by experience. In the mathematical sense, however, this selection is arbitrary; the reproduction of a finite object with a finite aperture entails, in all probability, an infinite number of aberrations. This number is only finite if the object and aperture are assumed to be infinitely small of a certain order; and with each order of infinite smallness, i.e. with each degree of approximation to reality (to finite objects and apertures), a certain number of aberrations is associated. This connection is only supplied by theories which treat aberrations generally and analytically by means of indefinite series. A ray proceeding from an object point O (Figure 5) can be defined by the coordinates (ξ, η) of this point O in an object plane I, at right angles to the axis, and by two other coordinates (x, y), the point in which the ray intersects the entrance pupil, i.e. the plane II. Similarly the corresponding image ray may be defined by the points (ξ′, η′) and (x′, y′) in the planes I′ and II′. The origins of these four plane coordinate systems may be collinear with the axis of the optical system, and the corresponding axes may be parallel. Each of the four coordinates ξ′, η′, x′, y′ is a function of ξ, η, x, y; and if it be assumed that the field of view and the aperture be infinitely small, then ξ, η, x, y are of the same order of infinitesimals; consequently by expanding ξ′, η′, x′, y′ in ascending powers of ξ, η, x, y, series are obtained in which it is only necessary to consider the lowest powers. It is readily seen that if the optical system be symmetrical, the origins of the coordinate systems collinear with the optical axis and the corresponding axes parallel, then by changing the signs of ξ, η, x, y, the values ξ′, η′, x′, y′ must likewise change their sign, but retain their arithmetical values; this means that the series are restricted to odd powers of the unmarked variables. The nature of the reproduction consists in the rays proceeding from a point O being united in another point O′; in general, this will not be the case, for ξ′, η′ vary if ξ, η be constant, but x, y variable. It may be assumed that the planes I′ and II′ are drawn where the images of the planes I and II are formed by rays near the axis by the ordinary Gaussian rules; and by an extension of these rules, not, however, corresponding to reality, the Gauss image point O′0, with coordinates ξ′0, η′0, of the point O at some distance from the axis could be constructed. Writing Δξ′ = ξ′ − ξ′0 and Δη′ = η′ − η′0, then Δξ′ and Δη′ are the aberrations belonging to ξ, η and x, y, and are functions of these magnitudes which, when expanded in series, contain only odd powers, for the same reasons as given above. On account of the aberrations of all rays which pass through O, a patch of light, depending in size on the lowest powers of ξ, η, x, y which the aberrations contain, will be formed in the plane I′. These degrees, named by J. Petzval the numerical orders of the image, are consequently only odd powers; the condition for the formation of an image of the mth order is that in the series for Δξ′ and Δη′ the coefficients of the powers of the 3rd, 5th, ... (m−2)th degrees must vanish. The images of the Gauss theory being of the third order, the next problem is to obtain an image of the 5th order, or to make the coefficients of the powers of the 3rd degree zero. This necessitates the satisfying of five equations; in other words, there are five aberrations of the 3rd order, the vanishing of which produces an image of the 5th order. The expression for these coefficients in terms of the constants of the optical system, i.e. the radii, thicknesses, refractive indices and distances between the lenses, was solved by L. von Seidel; in 1840, J. Petzval constructed his portrait objective from similar calculations, which have never been published.
The theory was elaborated by S. Finsterwalder, who also published a posthumous paper of Seidel containing a short view of his work; a simpler form was given by A. Kerber. A. König and M. von Rohr have represented Kerber's method, deduced the Seidel formulae from geometrical considerations based on the Abbe method, and interpreted the analytical results geometrically. The aberrations can also be expressed by means of the characteristic function of the system and its differential coefficients, instead of by the radii, etc., of the lenses; these formulae are not immediately applicable, but give, however, the relation between the number of aberrations and the order. Sir William Rowan Hamilton thus derived the aberrations of the third order; and in later times the method was pursued by Clerk Maxwell (Proc. London Math. Soc., 1874–1875; see also the treatises of R. S. Heath and L. A. Herman), M. Thiesen (Berlin. Akad. Sitzber., 1890, 35, p. 804), H. Bruns (Leipzig. Math. Phys. Ber., 1895, 21, p. 410), and particularly successfully by K. Schwarzschild (Göttingen. Akad. Abhandl., 1905, 4, No. 1), who thus discovered the aberrations of the 5th order (of which there are nine), and possibly the shortest proof of the practical (Seidel) formulae. A. Gullstrand (vide supra, and Ann. d. Phys., 1905, 18, p. 941) founded his theory of aberrations on the differential geometry of surfaces. The aberrations of the third order are: (1) aberration of the axis point; (2) aberration of points whose distance from the axis is very small, less than of the third order (the deviation from the sine condition and coma here fall together in one class); (3) astigmatism; (4) curvature of the field; (5) distortion. Aberration of the third order of axis points is dealt with in all text-books on optics. It is very important in telescope design. In telescopes the aperture is usually taken as the linear diameter of the objective; it is not the same as the microscope aperture, which is based on the entrance pupil or the field of view as seen from the object and is expressed as an angular measurement. Higher-order aberrations can mostly be neglected in telescope design, but not in microscope design. For a single lens of very small thickness and given power, the aberration depends upon the ratio of the radii r:r′, and is a minimum (but never zero) for a certain value of this ratio; it varies inversely with the refractive index (the power of the lens remaining constant). The total aberration of two or more very thin lenses in contact, being the sum of the individual aberrations, can be zero. This is also possible if the lenses have the same algebraic sign. Of thin positive lenses with n = 1.5, four are necessary to correct spherical aberration of the third order. These systems, however, are not of great practical importance. In most cases, two thin lenses are combined, one of which has just so strong a positive aberration (under-correction, vide supra) as the other a negative; the first must be a positive lens and the second a negative lens; the powers, however, may differ, so that the desired effect of the lens is maintained. It is generally an advantage to secure a great refractive effect by several weaker lenses rather than by one high-power lens. By one, and likewise by several, and even by an infinite number of thin lenses in contact, no more than two axis points can be reproduced without aberration of the third order. Freedom from aberration for two axis points, one of which is infinitely distant, is known as Herschel's condition.
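The statement that a thin lens of fixed power has a bending which makes its spherical aberration a minimum, but never zero, can be checked numerically. The sketch below is illustrative rather than authoritative: the focal length, index, ray height and small center thickness (added so the two surfaces do not intersect at the margin) are all assumed values. It parametrizes the radii by the Coddington shape factor q, traces a marginal and a near-paraxial ray exactly through both surfaces, and prints the longitudinal aberration over a range of bendings; for n = 1.5 and a distant object the minimum should appear near q ≈ +0.7 and never reach zero.

import numpy as np

def refract(d, nv, n1, n2):
    # Snell's law in vector form; d and nv are unit vectors with d . nv < 0.
    cos_i = -d @ nv
    eta = n1 / n2
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * nv

def hit_sphere(p, d, vx, r):
    # Intersection of the ray p + t d with the spherical cap of radius r
    # whose vertex lies on the axis at x = vx (centre at vx + r).
    c = np.array([vx + r, 0.0])
    oc = p - c
    b = oc @ d
    t = -b - np.sign(r) * np.sqrt(b * b - (oc @ oc - r * r))
    p_hit = p + t * d
    nv = (p_hit - c) / abs(r)
    return p_hit, (nv if d @ nv < 0 else -nv)

def axis_crossing(q, h, f=100.0, n=1.5, thickness=2.0):
    # Radii of a lens of focal length f and shape factor q (thin-lens formula).
    K = 1.0 / (f * (n - 1.0))
    r1 = 2.0 / (K * (q + 1.0))
    r2 = 2.0 / (K * (q - 1.0))
    p, d = np.array([-1.0, h]), np.array([1.0, 0.0])
    for vx, r, n1, n2 in ((0.0, r1, 1.0, n), (thickness, r2, n, 1.0)):
        p, nv = hit_sphere(p, d, vx, r)
        d = refract(d, nv, n1, n2)
    return p[0] - p[1] * d[0] / d[1]   # where the exit ray meets the axis

for q in np.linspace(-1.5, 1.5, 11):   # steps of 0.3, avoiding the plano cases q = +/-1
    lsa = axis_crossing(q, 1e-3) - axis_crossing(q, 10.0)
    print(round(q, 2), round(lsa, 4))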
All these rules are valid only inasmuch as the thicknesses and distances of the lenses are not taken into account. The condition for freedom from coma in the third order is also of importance for telescope objectives; it is known as Fraunhofer's condition. (4) After eliminating the aberration on the axis, coma and astigmatism, the relation for the flatness of the field in the third order is expressed by the Petzval equation, Σ 1/r(n′ − n) = 0, where r is the radius of a refracting surface, n and n′ the refractive indices of the neighboring media, and Σ the sign of summation for all refracting surfaces.
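In its modern surface-by-surface form the Petzval condition is usually written Σ (n′ − n)/(n n′ r) = 0, and it is easy to evaluate for a lens prescription. The snippet below is a minimal sketch; the surface data are invented for illustration.

def petzval_sum(surfaces):
    # surfaces: (r, n, n_prime) triples, with r the radius of curvature and
    # n, n_prime the refractive indices before and after the surface.
    # A flat field to third order (with astigmatism corrected) needs sum = 0.
    return sum((n2 - n1) / (n1 * n2 * r) for r, n1, n2 in surfaces)

# hypothetical cemented doublet in air: crown element followed by flint
doublet = [(60.0, 1.0, 1.52), (-45.0, 1.52, 1.62), (-600.0, 1.62, 1.0)]
print(petzval_sum(doublet))   # positive: field curves inward, as for most simple doublets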
== Practical elimination of aberrations == The classical imaging problem is to reproduce perfectly a finite plane (the object) onto another plane (the image) through a finite aperture. It is impossible to do so perfectly for more than one such pair of planes (this was proven with increasing generality by Maxwell in 1858, by Bruns in 1895, and by Carathéodory in 1926; see the summary in Walther, A., J. Opt. Soc. Am. A 6, 415–422 (1989)). For a single pair of planes (e.g. for a single focus setting of an objective), however, the problem can in principle be solved perfectly. Examples of such a theoretically perfect system include the Luneburg lens and the Maxwell fish-eye. Practical methods solve this problem with an accuracy which mostly suffices for the special purpose of each species of instrument. The problem of finding a system which reproduces a given object upon a given plane with given magnification (insofar as aberrations must be taken into account) could be dealt with by means of the approximation theory; in most cases, however, the analytical difficulties were too great for older calculation methods, but they may be ameliorated by application of modern computer systems. Solutions, however, have been obtained in special cases. At the present time constructors almost always employ the inverse method: they compose a system from certain, often quite personal experiences, and test, by the trigonometrical calculation of the paths of several rays, whether the system gives the desired reproduction (examples are given in A. Gleichen, Lehrbuch der geometrischen Optik, Leipzig and Berlin, 1902). The radii, thicknesses and distances are continually altered until the errors of the image become sufficiently small. By this method only certain errors of reproduction are investigated, especially individual members, or all, of those named above. The analytical approximation theory is often employed provisionally, since its accuracy does not generally suffice. In order to render spherical aberration and the deviation from the sine condition small throughout the whole aperture, there is given to a ray with a finite angle of aperture u* (with infinitely distant objects: with a finite height of incidence h*) the same distance of intersection, and the same sine ratio, as to one neighboring the axis (u* or h* may not be much smaller than the largest aperture U or H to be used in the system). The rays with an angle of aperture smaller than u* would not have the same distance of intersection and the same sine ratio; these deviations are called zones, and the constructor endeavors to reduce these to a minimum. The same holds for the errors depending upon the angle of the field of view, w: astigmatism, curvature of field and distortion are eliminated for a definite value w*; zones of astigmatism, curvature of field and distortion attend smaller values of w. The practical optician names such systems: corrected for the angle of aperture u* (the height of incidence h*) or the angle of field of view w*. Spherical aberration and changes of the sine ratios are often represented graphically as functions of the aperture, in the same way as the deviations of the two astigmatic image surfaces from the image plane of the axis point are represented as functions of the angles of the field of view. The final form of a practical system consequently rests on compromise; enlargement of the aperture results in a diminution of the available field of view, and vice versa. But the larger aperture will give the larger resolution. The following may be regarded as typical: Largest aperture; necessary corrections are for the axis point and the sine condition; errors of the field of view are almost disregarded; example: high-power microscope objectives. Wide angle lens; necessary corrections are for astigmatism, curvature of field and distortion; errors of the aperture only slightly regarded; examples: photographic widest-angle objectives and oculars. Between these extreme examples stands the normal lens: this is corrected more with regard to aperture; objectives for groups, more with regard to the field of view. Long-focus lenses have small fields of view, and aberrations on axis are very important. Therefore zones will be kept as small as possible, and the design should emphasize simplicity. Because of this, these lenses are the best suited to analytical computation. == Chromatic or color aberration == In optical systems composed of lenses, the position, magnitude and errors of the image depend upon the refractive indices of the glass employed (see Lens (optics) and Monochromatic aberration, above). Since the index of refraction varies with the color or wavelength of the light (see dispersion), it follows that a system of lenses (uncorrected) projects images of different colors in somewhat different places and sizes and with different aberrations; i.e. there are chromatic differences of the distances of intersection, of magnifications, and of monochromatic aberrations. If mixed light be employed (e.g. white light), all these images are formed, and they cause a confusion named chromatic aberration; for instance, instead of a white margin on a dark background, there is perceived a colored margin, or narrow spectrum. The absence of this error is termed achromatism, and an optical system so corrected is termed achromatic. A system is said to be chromatically under-corrected when it shows the same kind of chromatic error as a thin positive lens, otherwise it is said to be over-corrected. If, in the first place, monochromatic aberrations be neglected, in other words, the Gaussian theory be accepted, then every reproduction is determined by the positions of the focal planes and the magnitude of the focal lengths, or, if the focal lengths, as ordinarily happens, be equal, by three constants of reproduction. These constants are determined by the data of the system (radii, thicknesses, distances, indices, etc., of the lenses); therefore their dependence on the refractive index, and consequently on the color, are calculable. The refractive indices for different wavelengths must be known for each kind of glass made use of. In this manner the condition can be maintained that any one constant of reproduction is equal for two different colors, i.e. this constant is achromatized.
For example, it is possible, with one thick lens in air, to achromatize the position of a focal plane or the magnitude of the focal length. If all three constants of reproduction be achromatized, then the Gaussian image for all distances of objects is the same for the two colors, and the system is said to be in stable achromatism. In practice it is more advantageous (after Abbe) to determine the chromatic aberration (for instance, that of the distance of intersection) for a fixed position of the object, and to express it by a sum in which each component contains the amount due to each refracting surface. In a plane containing the image point of one color, another color produces a disk of confusion; this is similar to the confusion caused by two zones in spherical aberration. For infinitely distant objects the radius of the chromatic disk of confusion is proportional to the linear aperture, and independent of the focal length (vide supra, Monochromatic Aberration of the Axis Point); and since this disk becomes the less harmful with an increasing image of a given object, or with increasing focal length, it follows that the deterioration of the image is proportional to the ratio of the aperture to the focal length, i.e. the relative aperture. (This explains the gigantic focal lengths in vogue before the discovery of achromatism.) Examples: Newton failed to perceive the existence of media of different dispersive powers required by achromatism; consequently he constructed large reflectors instead of refractors. James Gregory and Leonhard Euler arrived at the correct view from a false conception of the achromatism of the eye; this was determined by Chester More Hall in 1728, by Klingenstierna in 1754 and by Dollond in 1757, who constructed the celebrated achromatic telescopes. (See telescope.) Glass with weaker dispersive power (greater v) is named crown glass; that with greater dispersive power, flint glass. For the construction of an achromatic collective lens (f positive) it follows, by means of equation (4), that a collective lens I. of crown glass and a dispersive lens II. of flint glass must be chosen; the latter, although the weaker, corrects the other chromatically by its greater dispersive power. For an achromatic dispersive lens the converse must be adopted. This is, at the present day, the ordinary type, e.g., of telescope objective; the values of the four radii must satisfy the equations (2) and (4). Two other conditions may also be postulated: one is always the elimination of the aberration on the axis; the second is either the Herschel or the Fraunhofer condition, the latter being the best (vide supra, Monochromatic Aberration). In practice, however, it is often more useful to avoid the second condition by making the lenses have contact, i.e. equal radii. According to P. Rudolph (Eder's Jahrb. f. Photog., 1891, 5, p. 225; 1893, 7, p. 221), cemented objectives of thin lenses permit the elimination of spherical aberration on the axis if, as above, the collective lens has a smaller refractive index; on the other hand, they permit the elimination of astigmatism and curvature of the field if the collective lens has a greater refractive index (this follows from the Petzval equation; see L. Seidel, Astr. Nachr., 1856, p. 289).
Should the cemented system be positive, then the more powerful lens must be positive; and, according to (4), to the greater power belongs the weaker dispersive power (greater v), that is to say, crown glass; consequently the crown glass must have the greater refractive index for astigmatic and plane images. In all earlier kinds of glass, however, the dispersive power increased with the refractive index; that is, v decreased as n increased; but some of the Jena glasses by E. Abbe and O. Schott were crown glasses of high refractive index, and achromatic systems from such crown glasses, with flint glasses of lower refractive index, are called the new achromats, and were employed by P. Rudolph in the first anastigmats (photographic objectives). Instead of making df vanish, a certain value can be assigned to it which will produce, by the addition of the two lenses, any desired chromatic deviation, e.g. sufficient to eliminate one present in other parts of the system. If the lenses I. and II. be cemented and have the same refractive index for one color, then the effect for that one color is that of a lens of one piece; by such decomposition of a lens it can be made chromatic or achromatic at will, without altering its spherical effect. If its chromatic effect (df/f) be greater than that of the same lens made of the more dispersive of the two glasses employed, it is termed hyper-chromatic. For two thin lenses separated by a distance D, the condition for achromatism is D = (v1f1 + v2f2)/(v1 + v2); if v1 = v2 (e.g. if the lenses be made of the same glass), this reduces to D = (f1 + f2)/2, known as the condition for oculars.
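For a cemented doublet the division of power between crown and flint follows from the two thin-lens conditions φ1 + φ2 = 1/f and φ1/v1 + φ2/v2 = 0. The sketch below works these out; the Abbe numbers are illustrative stand-ins for a crown/flint pair, not values from the source.

def doublet_powers(f, v_crown, v_flint):
    # Thin cemented doublet: split the total power phi = 1/f so that
    # phi_crown / v_crown + phi_flint / v_flint = 0 (achromatism).
    phi = 1.0 / f
    phi_crown = phi * v_crown / (v_crown - v_flint)
    phi_flint = -phi * v_flint / (v_crown - v_flint)
    return 1.0 / phi_crown, 1.0 / phi_flint

f_crown, f_flint = doublet_powers(500.0, 64.2, 36.4)
print(round(f_crown, 1), round(f_flint, 1))   # ~216.5 and ~-381.9

As the text says, the flint member comes out the weaker of the two, yet corrects the crown member chromatically by its greater dispersive power.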
If a constant of reproduction, for instance the focal length, be made equal for two colors, then it is not the same for other colors if two different glasses are employed. For example, the condition for achromatism (4) for two thin lenses in contact is fulfilled in only one part of the spectrum, since dn2/dn1 varies within the spectrum. This fact was first ascertained by J. Fraunhofer, who defined the colors by means of the dark lines in the solar spectrum and showed that the ratio of the dispersion of two glasses varied about 20% from the red to the violet (the variation for glass and water is about 50%). If, therefore, for two colors a and b, fa = fb = f, then for a third color c the focal length is different; that is, if c lies between a and b, then fc < f, and vice versa; these algebraic results follow from the fact that towards the red the dispersion of the positive crown glass preponderates, towards the violet that of the negative flint. These chromatic errors of systems which are achromatic for two colors are called the secondary spectrum, and depend upon the aperture and focal length in the same manner as the primary chromatic errors do. In Figure 6, taken from M. von Rohr's Theorie und Geschichte des photographischen Objectivs, the abscissae are focal lengths and the ordinates wavelengths. The Fraunhofer lines used are shown in the adjacent table. The focal lengths are made equal for the lines C and F. In the neighborhood of 550 nm the tangent to the curve is parallel to the axis of wavelengths, and the focal length varies least over a fairly large range of color; in this neighborhood, therefore, the color union is at its best. Moreover, this region of the spectrum is that which appears brightest to the human eye, and consequently this curve of the secondary spectrum, obtained by making fC = fF, is, according to the experiments of Sir G. G. Stokes (Proc. Roy. Soc., 1878), the most suitable for visual instruments (optical achromatism). In a similar manner, for systems used in photography, the vertex of the color curve must be placed in the position of the maximum sensibility of the plates; this is generally supposed to be at G'; and to accomplish this the F and violet mercury lines are united. This artifice is specially adopted in objectives for astronomical photography (pure actinic achromatism). For ordinary photography, however, there is this disadvantage: the image on the focusing screen and the correct adjustment of the photographic sensitive plate are not in register; in astronomical photography this difference is constant, but in other kinds it depends on the distance of the objects. On this account the lines D and G' are united for ordinary photographic objectives; the optical as well as the actinic image is chromatically inferior, but both lie in the same place; and consequently the best correction lies in F (this is known as the actinic correction or freedom from chemical focus). Should there be in two lenses in contact the same focal lengths for three colors a, b and c, i.e. fa = fb = fc = f, then the relative partial dispersion (nc − nb)/(na − nb) must be equal for the two kinds of glass employed. This follows by considering equation (4) for the two pairs of colors ac and bc. Until recently no glasses were known with a proportional degree of dispersion; but R. Blair (Trans. Edin. Soc., 1791, 3, p. 3), P. Barlow and F. S. Archer overcame the difficulty by constructing fluid lenses between glass walls. Fraunhofer prepared glasses which reduced the secondary spectrum, but permanent success was only assured on the introduction of the Jena glasses by E. Abbe and O. Schott. In using glasses not having proportional dispersion, the deviation of a third color can be eliminated by two lenses, if an interval be allowed between them, or by three lenses in contact, which may not all consist of the old glasses. In uniting three colors an achromatism of a higher order is derived; there is yet a residual tertiary spectrum, but it can always be neglected.
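The magnitude of the secondary spectrum of a thin cemented doublet that has been achromatized for two colors can be estimated from the relative partial dispersions P of the two glasses; a standard thin-lens result is δf ≈ f (P1 − P2)/(v1 − v2) for the focal shift at a third color. The sketch below applies it; the glass numbers are illustrative, and the sign depends on how P is defined.

def secondary_spectrum(f, v1, p1, v2, p2):
    # Residual focal shift at a third wavelength for a doublet already
    # achromatized for two wavelengths; P is the relative partial
    # dispersion (n_c - n_b)/(n_a - n_b) of each glass.
    return f * (p1 - p2) / (v1 - v2)

print(secondary_spectrum(500.0, 64.2, 0.538, 36.4, 0.583))   # ~ -0.8, in the units of f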
The Gaussian theory is only an approximation; monochromatic or spherical aberrations still occur, which will be different for different colors; and should they be compensated for one color, the image of another color would prove disturbing. The most important is the chromatic difference of aberration of the axis point, which is still present to disturb the image after paraxial rays of different colors are united by an appropriate combination of glasses. If a collective system be corrected for the axis point for a definite wavelength, then, on account of the greater dispersion in the negative components (the flint glasses), over-correction will arise for the shorter wavelengths (this being the error of the negative components), and under-correction for the longer wavelengths (the error of the crown-glass lenses preponderating in the red). This error was treated by Jean le Rond d'Alembert and, in special detail, by C. F. Gauss. It increases rapidly with the aperture, and is more important with medium apertures than the secondary spectrum of paraxial rays; consequently, spherical aberration must be eliminated for two colors, and if this be impossible, then it must be eliminated for those particular wavelengths which are most effectual for the instrument in question (a graphical representation of this error is given in M. von Rohr, Theorie und Geschichte des photographischen Objectivs). The condition for the reproduction of a surface element in the place of a sharply reproduced point, the constancy of the sine relationship, must also be fulfilled with large apertures for several colors. E. Abbe succeeded in computing microscope objectives free from error of the axis point and satisfying the sine condition for several colors, which therefore, according to his definition, were aplanatic for several colors; such systems he termed apochromatic. While, however, the magnification of the individual zones is the same, it is not the same for red as for blue, and there is a chromatic difference of magnification. This is produced in the same amount, but in the opposite sense, by the oculars which Abbe used with these objectives (compensating oculars), so that it is eliminated in the image of the whole microscope. The best telescope objectives, and photographic objectives intended for three-color work, are also apochromatic, even if they do not possess quite the same quality of correction as microscope objectives do. The chromatic differences of other errors of reproduction seldom have practical importance. == See also == Aberrations of the eye Optical telescope § The five Seidel aberrations Wavefront coding == Notes == == References == == External links == Microscope Objectives: Optical Aberrations section of Molecular Expressions website, Michael W. Davidson, Mortimer Abramowitz, Olympus America Inc., and The Florida State University
Wikipedia/Aberration_in_optical_systems
Chiral resolution, or enantiomeric resolution, is a process in stereochemistry for the separation of a racemic mixture into its enantiomers. It is an important tool in the production of optically active compounds, including drugs. Another term with the same meaning is optical resolution. The use of chiral resolution to obtain enantiomerically pure compounds has the disadvantage of necessarily discarding at least half of the starting racemic mixture. Asymmetric synthesis of one of the enantiomers is one means of avoiding this waste. == Crystallization of diastereomeric salts == The most common method for chiral resolution involves conversion of the racemic mixture to a pair of diastereomeric derivatives by reacting it with chiral derivatizing agents, also known as chiral resolving agents. The derivatives are then separated by conventional crystallization and converted back to the enantiomers by removal of the resolving agent. The process can be laborious and depends on the divergent solubilities of the diastereomers, which are difficult to predict. Often the less soluble diastereomer is targeted, and the other is discarded or racemized for reuse. It is common to test several resolving agents. Typical derivatization involves salt formation between an amine and a carboxylic acid; simple deprotonation then yields back the pure enantiomer. Examples of chiral derivatizing agents are tartaric acid and the amine brucine. The method was introduced (again) by Louis Pasteur in 1853 by resolving racemic tartaric acid with optically active (+)-cinchotoxine. === Case study === One modern-day method of chiral resolution is used in the organic synthesis of the drug duloxetine: in one of its steps the racemic alcohol 1 is dissolved in a mixture of toluene and methanol, to which solution optically active (S)-mandelic acid 3 is added. The (S)-enantiomer of the alcohol forms an insoluble diastereomeric salt with the mandelic acid and can be filtered from the solution. Simple deprotonation with sodium hydroxide liberates the free (S)-alcohol. Meanwhile, the (R)-alcohol remains in solution unaffected and is recycled back to the racemic mixture by epimerization with hydrochloric acid in toluene. This process is known as RRR synthesis, in which the R's stand for Resolution-Racemization-Recycle. === Common resolving agents === Antimony potassium tartrate, an anion that forms diastereomeric salts with chiral cations. Camphorsulfonic acid, an acid that forms diastereomeric salts with chiral amines. 1-Phenylethylamine, a base that forms diastereomeric salts with chiral acids; many related chiral amines have been demonstrated. The chiral pool consists of many widely available resolving agents. == Spontaneous resolution and related specialized techniques == Via the process known as spontaneous resolution, 5-10% of all racemates crystallize as mixtures of enantiopure crystals. This phenomenon allowed Louis Pasteur to separate left-handed and right-handed sodium ammonium tartrate crystals. These experiments underpinned his discovery of optical activity. In 1882 he went on to demonstrate that by seeding a supersaturated solution of sodium ammonium tartrate with a d-crystal on one side of the reactor and an l-crystal on the opposite side, crystals of opposite handedness will form on the opposite sides of the reactor. Spontaneous resolution has also been demonstrated with racemic methadone. In a typical setup 50 grams of dl-methadone is dissolved in petroleum ether and concentrated.
Two millimeter-sized d- and l-crystals are added, and after stirring for 125 hours at 40 °C, two large d- and l-crystals are recovered in 50% yield. Another form of direct crystallization is preferential crystallization (also called resolution by entrainment) of one of the enantiomers. For example, seed crystals of (−)-hydrobenzoin induce crystallization of this enantiomer from an ethanol solution of (±)-hydrobenzoin. == Chiral column chromatography == In chiral column chromatography the stationary phase is made chiral with resolving agents similar to those described above. == Further reading == Sheldon, Roger Arthur (1993). Chirotechnology: industrial synthesis of optically active compounds. New York, NY: Dekker. ISBN 978-0-8247-9143-8. == References ==
Wikipedia/Chiral_resolution
The point spread function (PSF) describes the response of a focused optical imaging system to a point source or point object. A more general term for the PSF is the system's impulse response: the PSF is the impulse response function (IRF) of a focused optical imaging system. In many contexts the PSF can be thought of as the shapeless blob in an image that should represent a single point object; it can be considered a spatial impulse response function. In functional terms, it is the spatial domain version (i.e., the inverse Fourier transform) of the optical transfer function (OTF) of an imaging system. It is a useful concept in Fourier optics, astronomical imaging, medical imaging, electron microscopy and other imaging techniques such as 3D microscopy (as in confocal laser scanning microscopy) and fluorescence microscopy. The degree of spreading (blurring) in the image of a point object is a measure of the quality of an imaging system. In non-coherent imaging systems, such as fluorescent microscopes, telescopes or optical microscopes, the image formation process is linear in the image intensity and described by linear system theory. This means that when two objects A and B are imaged simultaneously by a non-coherent imaging system, the resulting image is equal to the sum of the independently imaged objects. In other words: the imaging of A is unaffected by the imaging of B, and vice versa, owing to the non-interacting property of photons. In space-invariant systems, i.e. those in which the PSF is the same everywhere in the imaging space, the image of a complex object is then the convolution of that object and the PSF. The PSF can be derived from diffraction integrals. == Introduction == By virtue of the linearity property of optical non-coherent imaging systems, i.e., Image(Object1 + Object2) = Image(Object1) + Image(Object2), the image of an object in a microscope or telescope as a non-coherent imaging system can be computed by expressing the object-plane field as a weighted sum of 2D impulse functions, and then expressing the image-plane field as a weighted sum of the images of these impulse functions. This is known as the superposition principle, valid for linear systems. The images of the individual object-plane impulse functions are called point spread functions (PSFs), reflecting the fact that a mathematical point of light in the object plane is spread out to form a finite area in the image plane. (In some branches of mathematics and physics, these might be referred to as Green's functions or impulse response functions; PSFs are the impulse response functions of imaging systems.) When the object is divided into discrete point objects of varying intensity, the image is computed as a sum of the PSFs of each point. As the PSF is typically determined entirely by the imaging system (that is, the microscope or telescope), the entire image can be described by knowing the optical properties of the system. This imaging process is usually formulated by a convolution equation. In microscope image processing and astronomy, knowing the PSF of the measuring device is very important for restoring the (original) object with deconvolution. For the case of laser beams, the PSF can be mathematically modeled using the concepts of Gaussian beams. For instance, deconvolution of the mathematically modeled PSF and the image improves visibility of features and reduces imaging noise.
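For a space-invariant, non-coherent system the image of an extended object is, as stated above, the convolution of the object with the PSF. The sketch below is illustrative only: a normalized Gaussian blob stands in for a real measured or diffraction-derived PSF, and the object is a toy field with two point sources.

import numpy as np
from scipy.signal import fftconvolve

# Toy object: two point sources of different brightness on a dark field.
obj = np.zeros((64, 64))
obj[20, 20], obj[40, 44] = 1.0, 0.5

# Stand-in PSF: a small, normalized Gaussian blob.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * 2.0 ** 2))
psf /= psf.sum()

image = fftconvolve(obj, psf, mode="same")   # linear, non-coherent imaging
print(round(image.sum(), 6))                 # flux is preserved: ~1.5

Because the sources do not interact, the result equals the sum of the separately imaged sources, which is exactly the linearity property stated above.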
== Theory == The point spread function may be independent of position in the object plane, in which case it is called shift invariant. In addition, if there is no distortion in the system, the image plane coordinates are linearly related to the object plane coordinates via the magnification M as $(x_i, y_i) = (M x_o, M y_o)$. If the imaging system produces an inverted image, we may simply regard the image plane coordinate axes as being reversed from the object plane axes. With these two assumptions, i.e., that the PSF is shift-invariant and that there is no distortion, calculating the image plane convolution integral is a straightforward process. Mathematically, we may represent the object plane field as $O(x_o, y_o) = \iint O(u, v)\,\delta(x_o - u, y_o - v)\,du\,dv$, i.e., as a sum over weighted impulse functions, although this is also really just stating the sifting property of 2D delta functions (discussed further below). Rewriting the object transmittance function in the form above allows us to calculate the image plane field as the superposition of the images of each of the individual impulse functions, i.e., as a superposition over weighted point spread functions in the image plane using the same weighting function as in the object plane, i.e., $O(x_o, y_o)$. Mathematically, the image is expressed as $I(x_i, y_i) = \iint O(u, v)\,\mathrm{PSF}(x_i/M - u,\, y_i/M - v)\,du\,dv$, in which $\mathrm{PSF}(x_i/M - u, y_i/M - v)$ is the image of the impulse function $\delta(x_o - u, y_o - v)$. The 2D impulse function may be regarded as the limit (as the side dimension w tends to zero) of the "square post" function, shown in the figure below. We imagine the object plane as being decomposed into square areas such as this, with each having its own associated square post function. If the height, h, of the post is maintained at 1/w², then as the side dimension w tends to zero, the height, h, tends to infinity in such a way that the volume (integral) remains constant at 1. This gives the 2D impulse the sifting property (which is implied in the equation above), which says that when the 2D impulse function, δ(x − u, y − v), is integrated against any other continuous function, f(u, v), it "sifts out" the value of f at the location of the impulse, i.e., at the point (x, y). The concept of a perfect point source object is central to the idea of PSF. However, there is no such thing in nature as a perfect mathematical point source radiator; the concept is completely non-physical and is rather a mathematical construct used to model and understand optical imaging systems. The utility of the point source concept comes from the fact that a point source in the 2D object plane can only radiate a perfect uniform-amplitude spherical wave, a wave having perfectly spherical, outward travelling phase fronts with uniform intensity everywhere on the spheres (see Huygens–Fresnel principle). Such a source of uniform spherical waves is shown in the figure below.
We also note that a perfect point source radiator will not only radiate a uniform spectrum of propagating plane waves, but a uniform spectrum of exponentially decaying (evanescent) waves as well, and it is these which are responsible for resolution finer than one wavelength (see Fourier optics). This follows from the following Fourier transform expression for a 2D impulse function: $\delta(x, y) \propto \iint e^{j(k_x x + k_y y)}\,dk_x\,dk_y$. The quadratic lens intercepts a portion of this spherical wave and refocuses it onto a blurred point in the image plane. For a single lens, an on-axis point source in the object plane produces an Airy disc PSF in the image plane. It can be shown (see Fourier optics, Huygens–Fresnel principle, Fraunhofer diffraction) that the field radiated by a planar object (or, by reciprocity, the field converging onto a planar image) is related to its corresponding source (or image) plane distribution via a Fourier transform (FT) relation. In addition, a uniform function over a circular area (in one FT domain) corresponds to J1(x)/x in the other FT domain, where J1(x) is the first-order Bessel function of the first kind. That is, a uniformly illuminated circular aperture that passes a converging uniform spherical wave yields an Airy disk image at the focal plane. A graph of a sample Airy disk is shown in the adjoining figure. Therefore, the converging (partial) spherical wave shown in the figure above produces an Airy disc in the image plane. The argument of the function J1(x)/x is important, because this determines the scaling of the Airy disc (in other words, how big the disc is in the image plane). If Θmax is the maximum angle that the converging waves make with the lens axis, r is radial distance in the image plane, and the wavenumber k = 2π/λ where λ is the wavelength, then the argument of the function is kr tan(Θmax). If Θmax is small (only a small portion of the converging spherical wave is available to form the image), then the radial distance, r, has to be very large before the total argument of the function moves away from the central spot. In other words, if Θmax is small, the Airy disc is large (which is just another statement of Heisenberg's uncertainty principle for Fourier transform pairs, namely that small extent in one domain corresponds to wide extent in the other domain, and the two are related via the space-bandwidth product). By virtue of this, high magnification systems, which typically have small values of Θmax (by the Abbe sine condition), can have more blur in the image, owing to the broader PSF. The size of the PSF is proportional to the magnification, so that the blur is no worse in a relative sense, but it is definitely worse in an absolute sense.
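The Airy pattern itself is easy to evaluate numerically; the sketch below (a minimal illustration using SciPy's first-order Bessel function) computes the normalized intensity (2 J1(x)/x)² and locates the first dark ring at x ≈ 3.83, the first zero of J1.

import numpy as np
from scipy.special import j1

def airy_intensity(x):
    # Normalized Airy pattern (2 J1(x)/x)^2, with the x -> 0 limit set to 1.
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = (2.0 * j1(x[nz]) / x[nz]) ** 2
    return out

x = np.linspace(0.0, 5.0, 2001)
print(x[airy_intensity(x).argmin()])   # ~3.83, the first zero of J1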
The figure above illustrates the truncation of the incident spherical wave by the lens. In order to measure the point spread function (or impulse response function) of the lens, a perfect point source that radiates a perfect spherical wave in all directions of space is not needed. This is because the lens has only a finite (angular) bandwidth, or finite intercept angle. Therefore, any angular bandwidth contained in the source which extends past the edge angle of the lens (i.e., lies outside the bandwidth of the system) is essentially wasted source bandwidth, because the lens can't intercept it in order to process it. As a result, a perfect point source is not required in order to measure a perfect point spread function. All we need is a light source which has at least as much angular bandwidth as the lens being tested (and, of course, is uniform over that angular sector). In other words, we only require a point source which is produced by a convergent (uniform) spherical wave whose half angle is greater than the edge angle of the lens. Due to the intrinsic limited resolution of imaging systems, measured PSFs are not free of uncertainty. In imaging, it is desired to suppress the side-lobes of the imaging beam by apodization techniques. In the case of transmission imaging systems with a Gaussian beam distribution, the PSF is modeled by the equation $\mathrm{PSF}(f,z) = I_r(0,z,f)\,\exp\!\left[-z\,\alpha(f) - \dfrac{2\rho^2}{0.36\,\frac{cka}{\mathrm{NA}\,f}\,\sqrt{1 + \left(\frac{2\ln 2}{c\pi}\left(\frac{\mathrm{NA}}{0.56\,k}\right)^{2} f z\right)^{2}}}\right]$, where the k-factor depends on the truncation ratio and level of the irradiance, NA is the numerical aperture, c is the speed of light, f is the photon frequency of the imaging beam, Ir is the intensity of the reference beam, a is an adjustment factor and ρ is the radial position from the center of the beam on the corresponding z-plane. == History and methods == The diffraction theory of point spread functions was first studied by Airy in the nineteenth century. He developed an expression for the point spread function amplitude and intensity of a perfect instrument, free of aberrations (the so-called Airy disc). The theory of aberrated point spread functions close to the optimum focal plane was studied by Zernike and Nijboer in the 1930s and 1940s. A central role in their analysis is played by Zernike's circle polynomials, which allow an efficient representation of the aberrations of any optical system with rotational symmetry. Recent analytic results have made it possible to extend Nijboer and Zernike's approach for point spread function evaluation to a large volume around the optimum focal point. This extended Nijboer-Zernike (ENZ) theory allows studying the imperfect imaging of three-dimensional objects in confocal microscopy or astronomy under non-ideal imaging conditions. The ENZ theory has also been applied to the characterization of optical instruments with respect to their aberration, by measuring the through-focus intensity distribution and solving an appropriate inverse problem. == Applications == === Microscopy === In microscopy, experimental determination of the PSF requires sub-resolution (point-like) radiating sources. Quantum dots and fluorescent beads are usually considered for this purpose. Theoretical models as described above, on the other hand, allow the detailed calculation of the PSF for various imaging conditions. The most compact, diffraction-limited shape of the PSF is usually preferred. However, by using appropriate optical elements (e.g., a spatial light modulator) the shape of the PSF can be engineered towards different applications. === Astronomy === In observational astronomy, the experimental determination of a PSF is often very straightforward due to the ample supply of point sources (stars or quasars). The form and source of the PSF may vary widely depending on the instrument and the context in which it is used.
For radio telescopes and diffraction-limited space telescopes, the dominant terms in the PSF may be inferred from the configuration of the aperture in the Fourier domain. In practice, there may be multiple terms contributed by the various components in a complex optical system. A complete description of the PSF will also include diffusion of light (or photo-electrons) in the detector, as well as tracking errors in the spacecraft or telescope. For ground-based optical telescopes, atmospheric turbulence (known as astronomical seeing) dominates the contribution to the PSF. In high-resolution ground-based imaging, the PSF is often found to vary with position in the image (an effect called anisoplanatism). In ground-based adaptive optics systems, the PSF is a combination of the aperture of the system with residual uncorrected atmospheric terms. === Lithography === The PSF is also a fundamental limit to the conventional focused imaging of a hole, with the minimum printed size being in the range of 0.6-0.7 wavelength/NA, with NA being the numerical aperture of the imaging system. For example, in the case of an EUV system with a wavelength of 13.5 nm and NA = 0.33, the minimum individual hole size that can be imaged is in the range of 25-29 nm. A phase-shift mask has 180-degree phase edges which allow finer resolution. === Ophthalmology === Point spread functions have recently become a useful diagnostic tool in clinical ophthalmology. Patients are measured with a Shack-Hartmann wavefront sensor, and special software calculates the PSF for that patient's eye. This method allows a physician to simulate potential treatments on a patient and estimate how those treatments would alter the patient's PSF. Additionally, once measured, the PSF can be minimized using an adaptive optics system. This, in conjunction with a CCD camera and an adaptive optics system, can be used to visualize anatomical structures not otherwise visible in vivo, such as cone photoreceptors. == See also == Airy disc Circle of confusion, for the closely related topic in general photography. Deconvolution Encircled energy Impulse response function Microscope Microsphere PSF Lab == References ==
Wikipedia/Point_spread_function
The EIA 1956 Resolution Chart (until 1975 called RETMA Resolution Chart 1956) is a test card originally designed in 1956 to be used with black and white analogue TV systems, based on the previous (and very similar) RMA 1946 Resolution Chart. It consisted of a printed chart filmed by a TV camera or monoscope to be displayed on a TV screen, and was also available as individual rolls of test film to test broadcasting equipment. Inspecting the chart allowed technicians to check for defects like ringing, geometric distortions, raster scan linearity, cathode-ray tube uniformity and lack of image resolution. If needed, a technician could use it to perform the necessary hardware adjustments. Today, this chart continues to be used to measure the image resolution of modern cameras and lenses, and also in scientific research. == Features and operation == The chart is composed of several features, each designed for a specific test: Large white circle: Allows for image geometry adjustments (the image should be centered, with the circles being perfectly round). Vertical stripe boxes: A grating with a resolution of 200 Television Lines (TVL), a measurement of image resolution on analogue TV systems, allowing adjustment of horizontal linearity and geometry. Horizontal stripe boxes: A grating, allowing adjustment of vertical linearity. Grayscale steps: Evaluating gamma and transfer characteristics, they allow for contrast and brightness adjustments (at least 6 to 8 steps should be visible). Concentric circles: Allow testing of cathode-ray beam sharpness and focus. Resolution wedges: The gradually expanding lines near the center, labeled with periodic indications of the corresponding spatial frequency, allow checking of image resolution. Border arrows: Allow for overscan adjustments. Numbers: Going from 200 to 800, they correspond to TV Lines (TVL). Used with early monochrome TV systems, this chart was useful in measuring image resolution, determined by inspection of the image as displayed on a CRT. On such systems an important measure is the limiting horizontal resolution, affected by hardware and transmission quality (vertical resolution is fixed and determined by the video standard used, usually 525 lines or 625 lines). == Usage == === RMA 1946 Resolution Chart === The RMA 1946 Resolution Chart was transmitted by NTS and NOS in the Netherlands, SRG SSR in Switzerland, VRT and RTBF in Belgium, RTP in Portugal, TVP in Poland, TVB in Hong Kong, Venevisión in Venezuela (525-lines variant; in conjunction with the Indian-head test pattern), WISN-TV in Milwaukee, Wisconsin (525-lines variant) and on low-powered experimental transmissions by Philips Natuurkundig Laboratorium in Eindhoven (NL) and Istanbul University in Turkey.
=== EIA 1956 resolution chart === The EIA 1956 resolution chart was transmitted by NRK in Norway (in conjunction with the monochrome Pye Test Card G), CKCK-TV in Saskatchewan, Canada (525-lines variant), CERTV in the Dominican Republic (525-lines variant), KRMA-TV, KVVV-TV, WVIZ-TV, WHYY-TV and WUAB-TV in the United States (525-lines variant; WUAB-TV's version later partially overlaid on SMPTE color bars), RTBF and VRT in Belgium, NTS in the Netherlands, Magyar Televízió in Hungary, TVP in Poland, American Forces Network in West Germany (525-lines variant, sometimes also with the centre portion overlaid on top of Multiburst test pattern), Yugoslav Radio Television in the former SFR Yugoslavia, Rediffusion Television in British Hong Kong (where it replaced a modified version of the 1950s Marconi-designed Associated-Rediffusion "diamond" test card), ERTU in Egypt and ORTAS in Syria. It was also used by the pirate TV Noordzee station broadcasting to the Netherlands in the 1960s. This chart, in conjunction with the RMA 1946 Resolution Chart and later widescreen patterns, is commonly used to test consumer and professional standalone, smartphone and tablet cameras for photo and videography and other imaging equipment like microscopes or CCTV cameras. == Variations == Some variations of the EIA resolution test chart exist. Two Japanese variants of the EIA 1956 resolution chart are called "ITE Resolution Chart /EIAJ Test Chart A" and "JEITA Test Chart II". A widescreen update of the EIA 1956 resolution chart was developed around the 1980s for the HD-MAC broadcasting standard, which was later modified by the Institute of Image Information and Television Engineers of Japan as its ITE Resolution Chart for High-definition Televisions. === Telefunken T 05 === In continental Europe, another variation known as Telefunken Test Card T05 was used. It had five diagonal bars on the top left of the centre white circle and different resolution wedges reminiscent of the RMA 1946 Resolution Chart. It was also available as individual rolls of test film, particularly in the DACH countries. As a test card, it was used on ARD (from the 1950s up to the 1970s), Hessischer Rundfunk, Bayerischer Rundfunk, WDR, NWRV in northern Germany, Yugoslav Radio Television, Österreichischer Rundfunk in Austria, BRT in Belgium, Doordarshan in India, some commercial TV stations in Australia, TVE in Spain, Israel Broadcasting Authority and Israeli Educational Television in Israel, TRT in Turkey, and in early-1950s trial television tests by the KTH Royal Institute of Technology in Stockholm, Sweden. The centre portion of the Telefunken T05 test card was depicted on the obverse side of the 50 Years of Television commemorative coin minted on 9 March 2005 in Austria. == In popular culture == The centre portion of the RMA 1946 Resolution Chart was featured on the cover of Die Kreuzen's 7" single of Pink Flag/Land of Treason, released in 1990. == See also == Television lines Philips PM5540 Indian-head test pattern Test card == References ==
Wikipedia/EIA_1956_resolution_chart
A 1951 USAF resolution test chart is a microscopic optical resolution test device originally defined by the U.S. Air Force MIL-STD-150A standard of 1951. The design provides numerous small target shapes exhibiting a stepped assortment of precise spatial frequency specimens. It is widely used in optical engineering laboratory work to analyze and validate imaging systems such as microscopes, cameras and image scanners. The full standard pattern consists of 9 groups, with each group consisting of 6 elements; thus there are 54 target elements provided in the full series. Each element consists of three bars which form a minimal Ronchi ruling. These 54 elements are provided in a standardized series of logarithmic steps in the spatial frequency range from 0.250 to 912.3 line pairs per millimeter (lp/mm). The series of elements spans the range of resolution of the unaided eye, down to the diffraction limits of conventional light microscopy. Commercially produced devices typically consist of a transparent square glass slide, 2 inches or 50 mm in dimension. The slide is printed in metallic chromium by photolithography with the standard pattern, photographically reduced from a large master plot. Slides are available as photographic positive or negative prints to best fit the illumination technique used in various testing methods. A less expensive, abbreviated version omits the two tiniest groups at the center of the pattern (groups number 8 and 9), since the lithography at that scale is costly, and the group elements represent resolution beyond the design of many imaging applications. In practice, the spatial resolution of an imaging system is measured by simply inspecting the system's image of the slide. The largest element observed without distinct image contrast indicates the approximate resolution limit. This element's label is noted by the observer (each group, and each element within a group, is labeled with a single digit). This pair of digits indicates a given element's row and column location in the series table, which in turn defines the spatial frequency of each element, and thus the available resolution of the system. An analytical characterization of resolution as the modulation transfer function is available by plotting the observed image contrast as a function of the various element spatial frequencies. Optical aberrations in the imaging system are readily detected and characterized by translating and rotating the elements within the imaging system's field of view. == Pattern format == The common MIL-STD-150A format consists of six groups in a compact spiral arrangement of three layers. The largest two groups, forming the first layer, are located on the outer sides. The smaller layers consist of repeating progressively smaller pairs toward the center. Each group consists of six elements, numbered from 1 to 6. Within the same layer, the odd-numbered groups appear contiguously from 1 through 6 from the upper right corner. The first element of the even-numbered groups is at the lower right of the layer, with the remaining 2 through 6, at the left. The scales and dimensions of the bars are given by the expression resolution (lp/mm) = 2 group + ( element − 1 ) / 6 , {\displaystyle {\text{resolution (lp/mm)}}=2^{{\text{group}}+({\text{element}}-1)/6},} although usually the following lookup table will be more convenient to use. The line pair (lp) means a black and a white line. 
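The spatial-frequency formula above is straightforward to evaluate; the following short Python sketch (the function name and the printed checks are illustrative) reproduces the quoted extremes of the series.

```python
def usaf_resolution_lp_per_mm(group, element):
    """Spatial frequency of a 1951 USAF chart element, in line pairs per mm."""
    return 2 ** (group + (element - 1) / 6)

# The coarsest element of group -2 and the finest element of group 9
# reproduce the 0.250 and 912.3 lp/mm limits quoted above.
print(usaf_resolution_lp_per_mm(-2, 1))            # 0.25
print(round(usaf_resolution_lp_per_mm(9, 6), 1))   # 912.3
```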
== Images == == See also == USAF 1951 target section, in Optical resolution == References == == External links == efg's Tech Note: USAF 1951 and Microcopy Resolution Test Charts. A USAF 1951 resolution chart in PDF format is provided by Yoshihiko Takinami. This chart should be printed such that the side of the square of the 1st element of group -2 is 10 mm long. USAF 1951 Resolution Target Further explanations and examples Koren 2003: Norman Koren's updated resolution chart better suited for computer analysis
Wikipedia/1951_USAF_resolution_test_chart
Super-resolution imaging (SR) is a class of techniques that improve the resolution of an imaging system. In optical SR the diffraction limit of systems is transcended, while in geometrical SR the resolution of digital imaging sensors is enhanced. In some radar and sonar imaging applications (e.g. magnetic resonance imaging (MRI), high-resolution computed tomography), subspace decomposition-based methods (e.g. MUSIC) and compressed sensing-based algorithms (e.g., SAMV) are employed to achieve SR over standard periodogram algorithm. Super-resolution imaging techniques are used in general image processing and in super-resolution microscopy. == Basic concepts == Because some of the ideas surrounding super-resolution raise fundamental issues, there is need at the outset to examine the relevant physical and information-theoretical principles: Diffraction limit: The detail of a physical object that an optical instrument can reproduce in an image has limits that are mandated by laws of physics, whether formulated by the diffraction equations in the wave theory of light or equivalently the uncertainty principle for photons in quantum mechanics. Information transfer can never be increased beyond this boundary, but packets outside the limits can be cleverly swapped for (or multiplexed with) some inside it. One does not so much “break” as “run around” the diffraction limit. New procedures probing electro-magnetic disturbances at the molecular level (in the so-called near field) remain fully consistent with Maxwell's equations. Spatial-frequency domain: A succinct expression of the diffraction limit is given in the spatial-frequency domain. In Fourier optics light distributions are expressed as superpositions of a series of grating light patterns in a range of fringe widths, technically spatial frequencies. It is generally taught that diffraction theory stipulates an upper limit, the cut-off spatial-frequency, beyond which pattern elements fail to be transferred into the optical image, i.e., are not resolved. But in fact what is set by diffraction theory is the width of the passband, not a fixed upper limit. No laws of physics are broken when a spatial frequency band beyond the cut-off spatial frequency is swapped for one inside it: this has long been implemented in dark-field microscopy. Nor are information-theoretical rules broken when superimposing several bands, disentangling them in the received image needs assumptions of object invariance during multiple exposures, i.e., the substitution of one kind of uncertainty for another. Information: When the term super-resolution is used in techniques of inferring object details from statistical treatment of the image within standard resolution limits, for example, averaging multiple exposures, it involves an exchange of one kind of information (extracting signal from noise) for another (the assumption that the target has remained invariant). Resolution and localization: True resolution involves the distinction of whether a target, e.g. a star or a spectral line, is single or double, ordinarily requiring separable peaks in the image. When a target is known to be single, its location can be determined with higher precision than the image width by finding the centroid (center of gravity) of its image light distribution. The word ultra-resolution had been proposed for this process but it did not catch on, and the high-precision localization procedure is typically referred to as super-resolution. 
The technical achievements of enhancing the performance of image-forming and image-sensing devices now classified as super-resolution exploit these principles to the fullest while always staying within the bounds imposed by the laws of physics and information theory. == Techniques == === Optical or diffractive super-resolution === Substituting spatial-frequency bands: Though the bandwidth allowable by diffraction is fixed, it can be positioned anywhere in the spatial-frequency spectrum. Dark-field illumination in microscopy is an example. See also aperture synthesis. ==== Multiplexing spatial-frequency bands ==== An image is formed using the normal passband of the optical device. Then some known light structure, for example a set of light fringes that need not even be within the passband, is superimposed on the target. The image now contains components resulting from the combination of the target and the superimposed light structure, e.g. moiré fringes, and carries information about target detail which simple unstructured illumination does not. The “superresolved” components, however, need disentangling to be revealed. For an example, see structured illumination (figure to left). ==== Multiple parameter use within traditional diffraction limit ==== If a target has no special polarization or wavelength properties, two polarization states or non-overlapping wavelength regions can be used to encode target details, one in a spatial-frequency band inside the cut-off limit, the other beyond it. Both would use normal passband transmission but are then separately decoded to reconstitute target structure with extended resolution. ==== Probing near-field electromagnetic disturbance ==== The usual discussion of super-resolution involves conventional imaging of an object by an optical system. But modern technology allows probing the electromagnetic disturbance within molecular distances of the source, which has superior resolution properties; see also evanescent waves and the development of the new superlens. === Geometrical or image-processing super-resolution === ==== Multi-exposure image noise reduction ==== When an image is degraded by noise, there can be more detail in the average of many exposures, even within the diffraction limit. See example on the right. ==== Single-frame deblurring ==== Known defects in a given imaging situation, such as defocus or aberrations, can sometimes be mitigated in whole or in part by suitable spatial-frequency filtering of even a single image. Such procedures all stay within the diffraction-mandated passband, and do not extend it. ==== Sub-pixel image localization ==== The location of a single source can be determined by computing the "center of gravity" (centroid) of the light distribution extending over several adjacent pixels (see figure on the left). Provided that there is enough light, this can be achieved with arbitrary precision, very much better than the pixel width of the detecting apparatus and the resolution limit for deciding whether the source is single or double. This technique, which requires the presupposition that all the light comes from a single source, is at the basis of what has become known as super-resolution microscopy, e.g. stochastic optical reconstruction microscopy (STORM), where fluorescent probes attached to molecules give nanoscale distance information. It is also the mechanism underlying visual hyperacuity.
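A minimal Python sketch of the centroid-based localization just described, assuming a single Gaussian-shaped spot on a noisy pixel grid; all numerical values are illustrative, and the simple median background subtraction is an assumption, not part of the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the image of a single point source: a Gaussian spot of width
# sigma (in pixels) centred at a sub-pixel position, plus Poisson noise.
true_x, true_y, sigma, flux = 12.3, 8.7, 1.5, 5000.0
y, x = np.mgrid[0:24, 0:24]
spot = flux * np.exp(-((x - true_x) ** 2 + (y - true_y) ** 2) / (2 * sigma ** 2))
image = rng.poisson(spot + 10.0).astype(float)  # 10 counts/pixel background
image -= np.median(image)                       # subtract the background level

# Centroid ("center of gravity") of the light distribution.
cx = (image * x).sum() / image.sum()
cy = (image * y).sum() / image.sum()
print(cx - true_x, cy - true_y)  # residuals are a small fraction of a pixel
```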
==== Bayesian induction beyond traditional diffraction limit ==== Some object features, though beyond the diffraction limit, may be known to be associated with other object features that are within the limits and hence contained in the image. Then conclusions can be drawn, using statistical methods, from the available image data about the presence of the full object. The classical example is Toraldo di Francia's proposition of judging whether an image is that of a single or double star by determining whether its width exceeds the spread from a single star. This can be achieved at separations well below the classical resolution bounds, and requires the prior limitation to the choice "single or double?" The approach can take the form of extrapolating the image in the frequency domain, by assuming that the object is an analytic function, and that we can exactly know the function values in some interval. This method is severely limited by the ever-present noise in digital imaging systems, but it can work for radar, astronomy, microscopy or magnetic resonance imaging. More recently, a fast single image super-resolution algorithm based on a closed-form solution to ℓ 2 − ℓ 2 {\displaystyle \ell _{2}-\ell _{2}} problems has been proposed and demonstrated to accelerate most of the existing Bayesian super-resolution methods significantly. == Aliasing == Geometrical SR reconstruction algorithms are possible if and only if the input low resolution images have been under-sampled and therefore contain aliasing. Because of this aliasing, the high-frequency content of the desired reconstruction image is embedded in the low-frequency content of each of the observed images. Given a sufficient number of observation images, and if the set of observations vary in their phase (i.e. if the images of the scene are shifted by a sub-pixel amount), then the phase information can be used to separate the aliased high-frequency content from the true low-frequency content, and the full-resolution image can be accurately reconstructed. In practice, this frequency-based approach is not used for reconstruction, but even in the case of spatial approaches (e.g. shift-add fusion), the presence of aliasing is still a necessary condition for SR reconstruction. == Technical implementations == There are many both single-frame and multiple-frame variants of SR. Multiple-frame SR uses the sub-pixel shifts between multiple low resolution images of the same scene. It creates an improved resolution image fusing information from all low resolution images, and the created higher resolution images are better descriptions of the scene. Single-frame SR methods attempt to magnify the image without producing blur. These methods use other parts of the low resolution images, or other unrelated images, to guess what the high-resolution image should look like. Algorithms can also be divided by their domain: frequency or space domain. Originally, super-resolution methods worked well only on grayscale images, but researchers have found methods to adapt them to color camera images. Recently, the use of super-resolution for 3D data has also been shown. == Research == There is promising research on using deep convolutional networks to perform super-resolution. In particular work has been demonstrated showing the transformation of a 20x microscope image of pollen grains into a 1500x scanning electron microscope image using it. 
While this technique can increase the information content of an image, there is no guarantee that the upscaled features exist in the original image and deep convolutional upscalers should not be used in analytical applications with ambiguous inputs. These methods can hallucinate image features, which can make them unsafe for medical use. == See also == Optical resolution Oversampling Video super-resolution Single-particle trajectory Superoscillation == References == === Other related work === Curtis, Craig H.; Milster, Tom D. (October 1992). "Analysis of Superresolution in Magneto-Optic Data Storage Devices". Applied Optics. 31 (29): 6272–6279. Bibcode:1992ApOpt..31.6272M. doi:10.1364/AO.31.006272. PMID 20733840. Zalevsky, Z.; Mendlovic, D. (2003). Optical Superresolution. Springer. ISBN 978-0-387-00591-1. Caron, J.N. (September 2004). "Rapid supersampling of multiframe sequences by use of blind deconvolution". Optics Letters. 29 (17): 1986–1988. Bibcode:2004OptL...29.1986C. doi:10.1364/OL.29.001986. PMID 15455755. Clement, G.T.; Huttunen, J.; Hynynen, K. (2005). "Superresolution ultrasound imaging using back-projected reconstruction". Journal of the Acoustical Society of America. 118 (6): 3953–3960. Bibcode:2005ASAJ..118.3953C. doi:10.1121/1.2109167. PMID 16419839. Geisler, W.S.; Perry, J.S. (2011). "Statistics for optimal point prediction in natural images". Journal of Vision. 11 (12): 14. doi:10.1167/11.12.14. PMC 5144165. PMID 22011382. Cheung, V.; Frey, B. J.; Jojic, N. (20–25 June 2005). Video epitomes (PDF). Conference on Computer Vision and Pattern Recognition (CVPR). Vol. 1. pp. 42–49. doi:10.1109/CVPR.2005.366. Bertero, M.; Boccacci, P. (October 2003). "Super-resolution in computational imaging". Micron. 34 (6–7): 265–273. doi:10.1016/s0968-4328(03)00051-9. PMID 12932769. Borman, S.; Stevenson, R. (1998). "Spatial Resolution Enhancement of Low-Resolution Image Sequences – A Comprehensive Review with Directions for Future Research" (Technical report). University of Notre Dame. Borman, S.; Stevenson, R. (1998). Super-resolution from image sequences — a review (PDF). Midwest Symposium on Circuits and Systems. Park, S. C.; Park, M. K.; Kang, M. G. (May 2003). "Super-resolution image reconstruction: a technical overview". IEEE Signal Processing Magazine. 20 (3): 21–36. Bibcode:2003ISPM...20...21P. doi:10.1109/MSP.2003.1203207. S2CID 12320918. Farsiu, S.; Robinson, D.; Elad, M.; Milanfar, P. (August 2004). "Advances and Challenges in Super-Resolution". International Journal of Imaging Systems and Technology. 14 (2): 47–57. doi:10.1002/ima.20007. S2CID 12351561. Elad, M.; Hel-Or, Y. (August 2001). "Fast Super-Resolution Reconstruction Algorithm for Pure Translational Motion and Common Space-Invariant Blur". IEEE Transactions on Image Processing. 10 (8): 1187–1193. Bibcode:2001ITIP...10.1187E. CiteSeerX 10.1.1.11.2502. doi:10.1109/83.935034. PMID 18255535. Irani, M.; Peleg, S. (June 1990). Super Resolution From Image Sequences (PDF). International Conference on Pattern Recognition. Vol. 2. pp. 115–120. Sroubek, F.; Cristobal, G.; Flusser, J. (2007). "A Unified Approach to Superresolution and Multichannel Blind Deconvolution". IEEE Transactions on Image Processing. 16 (9): 2322–2332. Bibcode:2007ITIP...16.2322S. doi:10.1109/TIP.2007.903256. PMID 17784605. S2CID 6367149. Calabuig, Alejandro; Micó, Vicente; Garcia, Javier; Zalevsky, Zeev; Ferreira, Carlos (March 2011). "Single-exposure super-resolved interferometric microscopy by red–green–blue multiplexing". Optics Letters. 
36 (6): 885–887. Bibcode:2011OptL...36..885C. doi:10.1364/OL.36.000885. PMID 21403717. Chan, Wai-San; Lam, Edmund; Ng, Michael K.; Mak, Giuseppe Y. (September 2007). "Super-resolution reconstruction in a computational compound-eye imaging system". Multidimensional Systems and Signal Processing. 18 (2–3): 83–101. Bibcode:2007MSySP..18...83C. doi:10.1007/s11045-007-0022-3. S2CID 16452552. Ng, Michael K.; Shen, Huanfeng; Lam, Edmund Y.; Zhang, Liangpei (2007). "A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video". EURASIP Journal on Advances in Signal Processing. 2007: 074585. Bibcode:2007EJASP2007..104N. doi:10.1155/2007/74585. hdl:10722/73871. Glasner, D.; Bagon, S.; Irani, M. (October 2009). Super-Resolution from a Single Image (PDF). International Conference on Computer Vision (ICCV).; "example and results". Ben-Ezra, M.; Lin, Zhouchen; Wilburn, B.; Zhang, Wei (July 2011). "Penrose Pixels for Super-Resolution" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 33 (7): 1370–1383. CiteSeerX 10.1.1.174.8804. doi:10.1109/TPAMI.2010.213. PMID 21135446. S2CID 184868. Berliner, L.; Buffa, A. (2011). "Super-resolution variable-dose imaging in digital radiography: quality and dose reduction with a fluoroscopic flat-panel detector". Int J Comput Assist Radiol Surg. 6 (5): 663–673. doi:10.1007/s11548-011-0545-9. PMID 21298404. Timofte, R.; De Smet, V.; Van Gool, L. (November 2014). A+: Adjusted Anchored Neighborhood Regression for Fast Super-Resolution (PDF). 12th Asian Conference on Computer Vision (ACCV).; "codes and data". Huang, J.-B; Singh, A.; Ahuja, N. (June 2015). Single Image Super-Resolution from Transformed Self-Exemplars. IEEE Conference on Computer Vision and Pattern Recognition.; "project page". CHRISTENSEN-JEFFRIES, T.; COUTURE, O.; DAYTON, P.A.; ELDAR, Y.C.; HYNYNEN, K.; KIESSLING, F.; O’REILLY, M.; PINTON, G.F.; SCHMITZ, G.; TANG, M.-X.; TANTER, M.; VAN SLOUN, R.J.G. (2020). "Super-resolution Ultrasound Imaging". Ultrasound Med. Biol. 46 (4): 865–891. doi:10.1016/j.ultrasmedbio.2019.11.013. PMC 8388823. PMID 31973952.
Wikipedia/Superresolution
The optical transfer function (OTF) of an optical system such as a camera, microscope, human eye, or projector is a scale-dependent description of its imaging contrast. Its magnitude is the image contrast of the harmonic intensity pattern, 1 + cos ⁡ ( 2 π ν ⋅ x ) {\displaystyle 1+\cos(2\pi \nu \cdot x)} , as a function of the spatial frequency, ν {\displaystyle \nu } , while its complex argument indicates a phase shift in the periodic pattern. The optical transfer function is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next item in the optical transmission chain. Formally, the optical transfer function is defined as the Fourier transform of the point spread function (PSF, that is, the impulse response of the optics, the image of a point source). As a Fourier transform, the OTF is generally complex-valued; however, it is real-valued in the common case of a PSF that is symmetric about its center. In practice, the imaging contrast, as given by the magnitude or modulus of the optical-transfer function, is of primary importance. This derived function is commonly referred to as the modulation transfer function (MTF). The image on the right shows the optical transfer functions for two different optical systems in panels (a) and (d). The former corresponds to the ideal, diffraction-limited, imaging system with a circular pupil. Its transfer function decreases gradually with spatial frequency until it reaches the diffraction limit, in this case at 500 cycles per millimeter or a period of 2 μm. Since periodic features as small as this period are captured by this imaging system, it could be said that its resolution is 2 μm. Panel (d) shows an optical system that is out of focus. This leads to a sharp reduction in contrast compared to the diffraction-limited imaging system. It can be seen that the contrast is zero around 250 cycles/mm, or periods of 4 μm. This explains why the images for the out-of-focus system (e,f) are more blurry than those of the diffraction-limited system (b,c). Note that although the out-of-focus system has very low contrast at spatial frequencies around 250 cycles/mm, the contrast at spatial frequencies near the diffraction limit of 500 cycles/mm is diffraction-limited. Close observation of the image in panel (f) shows that the image of the large spoke densities near the center of the spoke target is relatively sharp. == Definition and related concepts == Since the optical transfer function (OTF) is defined as the Fourier transform of the point-spread function (PSF), it is generally speaking a complex-valued function of spatial frequency. The projection of a specific periodic pattern is represented by a complex number with absolute value and complex argument proportional to the relative contrast and translation of the projected pattern, respectively. Often the contrast reduction is of most interest and the translation of the pattern can be ignored. The relative contrast is given by the absolute value of the optical transfer function, a function commonly referred to as the modulation transfer function (MTF). Its values indicate how much of the object's contrast is captured in the image as a function of spatial frequency. The MTF tends to decrease with increasing spatial frequency from 1 to 0 (at the diffraction limit); however, the function is often not monotonic.
On the other hand, when also the pattern translation is important, the complex argument of the optical transfer function can be depicted as a second real-valued function, commonly referred to as the phase transfer function (PhTF). The complex-valued optical transfer function can be seen as a combination of these two real-valued functions: O T F ( ν ) = M T F ( ν ) e i P h T F ( ν ) {\displaystyle \mathrm {OTF} (\nu )=\mathrm {MTF} (\nu )e^{i\,\mathrm {PhTF} (\nu )}} where M T F ( ν ) = | O T F ( ν ) | , {\displaystyle \mathrm {MTF} (\nu )=\left\vert \mathrm {OTF} (\nu )\right\vert ,} P h T F ( ν ) = a r g ( O T F ( ν ) ) , {\displaystyle \mathrm {PhTF} (\nu )=\mathrm {arg} (\mathrm {OTF} (\nu )),} and a r g ( ⋅ ) {\displaystyle \mathrm {arg} (\cdot )} represents the complex argument function, while ν {\displaystyle \nu } is the spatial frequency of the periodic pattern. In general ν {\displaystyle \nu } is a vector with a spatial frequency for each dimension, i.e. it indicates also the direction of the periodic pattern. The impulse response of a well-focused optical system is a three-dimensional intensity distribution with a maximum at the focal plane, and could thus be measured by recording a stack of images while displacing the detector axially. By consequence, the three-dimensional optical transfer function can be defined as the three-dimensional Fourier transform of the impulse response. Although typically only a one-dimensional, or sometimes a two-dimensional section is used, the three-dimensional optical transfer function can improve the understanding of microscopes such as the structured illumination microscope. True to the definition of transfer function, O T F ( 0 ) = M T F ( 0 ) {\displaystyle \mathrm {OTF} (0)=\mathrm {MTF} (0)} should indicate the fraction of light that was detected from the point source object. However, typically the contrast relative to the total amount of detected light is most important. It is thus common practice to normalize the optical transfer function to the detected intensity, hence M T F ( 0 ) ≡ 1 {\displaystyle \mathrm {MTF} (0)\equiv 1} . Generally, the optical transfer function depends on factors such as the spectrum and polarization of the emitted light and the position of the point source. E.g. the image contrast and resolution are typically optimal at the center of the image, and deteriorate toward the edges of the field-of-view. When significant variation occurs, the optical transfer function may be calculated for a set of representative positions or colors. Sometimes it is more practical to define the transfer functions based on a binary black-white stripe pattern. The transfer function for an equal-width black-white periodic pattern is referred to as the contrast transfer function (CTF). == Examples == === Ideal lens system === A perfect lens system will provide a high contrast projection without shifting the periodic pattern, hence the optical transfer function is identical to the modulation transfer function. Typically the contrast will reduce gradually towards zero at a point defined by the resolution of the optics. For example, a perfect, non-aberrated, f/4 optical imaging system used, at the visible wavelength of 500 nm, would have the optical transfer function depicted in the right hand figure. It can be read from the plot that the contrast gradually reduces and reaches zero at the spatial frequency of 500 cycles per millimeter, in other words the optical resolution of the image projection is 1/500th of a millimeter, or 2 micrometer. 
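As a rough numerical check of this example, the diffraction-limited MTF of a circular pupil (the closed-form expression is derived in the Calculation section below) can be evaluated in Python for the same f/4, 500 nm case; the parameter names are illustrative.

```python
import numpy as np

wavelength_mm = 500e-6   # 500 nm expressed in millimetres
f_number = 4.0
cutoff = 1.0 / (wavelength_mm * f_number)   # incoherent cutoff, cycles per mm
print(cutoff)   # 500.0

def mtf_ideal(freq_cycles_per_mm):
    """Diffraction-limited MTF of a circular pupil (derived later in the article)."""
    nu = np.clip(np.asarray(freq_cycles_per_mm, dtype=float) / cutoff, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1 - nu ** 2))

print(mtf_ideal([0.0, 250.0, 500.0]))   # approximately [1.0, 0.39, 0.0]
```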
Correspondingly, for this particular imaging device, the spokes become more and more blurred towards the center until they merge into a gray, unresolved, disc. Note that sometimes the optical transfer function is given in units of the object or sample space, observation angle, film width, or normalized to the theoretical maximum. Conversion between units is typically a matter of a multiplication or division. E.g. a microscope typically magnifies everything 10 to 100-fold, and a reflex camera will generally demagnify objects at a distance of 5 meters by a factor of 100 to 200. The resolution of a digital imaging device is not only limited by the optics, but also by the number of pixels, in particular by their separation distance. As explained by the Nyquist–Shannon sampling theorem, to match the optical resolution of the given example, the pixels of each color channel should be separated by 1 micrometer, half the period of 500 cycles per millimeter. A higher number of pixels on the same sensor size will not allow the resolution of finer detail. On the other hand, when the pixel spacing is larger than 1 micrometer, the resolution will be limited by the separation between pixels; moreover, aliasing may lead to a further reduction of the image fidelity. === Imperfect lens system === An imperfect, aberrated imaging system could possess the optical transfer function depicted in the following figure. As in the ideal lens system, the contrast reaches zero at the spatial frequency of 500 cycles per millimeter. However, at lower spatial frequencies the contrast is considerably lower than that of the perfect system in the previous example. In fact, the contrast becomes zero on several occasions even for spatial frequencies lower than 500 cycles per millimeter. This explains the gray circular bands in the spoke image shown in the above figure. In between the gray bands, the spokes appear to invert from black to white and vice versa; this is referred to as contrast inversion, which is directly related to the sign reversal in the real part of the optical transfer function and manifests itself as a shift by half a period for some periodic patterns. While it could be argued that the resolution of both the ideal and the imperfect system is 2 μm, or 500 LP/mm, it is clear that the images of the latter example are less sharp. A definition of resolution that is more in line with the perceived quality would instead use the spatial frequency at which the first zero occurs, 10 μm, or 100 LP/mm. Definitions of resolution, even for perfect imaging systems, vary widely. A more complete, unambiguous picture is provided by the optical transfer function. === Optical system with a non-rotational symmetric aberration === Optical systems, and in particular optical aberrations, are not always rotationally symmetric. Periodic patterns that have a different orientation can thus be imaged with different contrast even if their periodicity is the same. Optical transfer functions or modulation transfer functions are thus generally two-dimensional functions. The following figures show the two-dimensional equivalent of the ideal and the imperfect system discussed earlier, for an optical system with trefoil, a non-rotational-symmetric aberration. Optical transfer functions are not always real-valued. Periodic patterns can be shifted by any amount, depending on the aberration in the system. This is generally the case with non-rotational-symmetric aberrations. The hue of the colors of the surface plots in the above figure indicates phase.
It can be seen that, while for the rotational symmetric aberrations the phase is either 0 or π and thus the transfer function is real valued, for the non-rotational symmetric aberration the transfer function has an imaginary component and the phase varies continuously. === Practical example – high-definition video system === While optical resolution, as commonly used with reference to camera systems, describes only the number of pixels in an image, and hence the potential to show fine detail, the transfer function describes the ability of adjacent pixels to change from black to white in response to patterns of varying spatial frequency, and hence the actual capability to show fine detail, whether with full or reduced contrast. An image reproduced with an optical transfer function that 'rolls off' at high spatial frequencies will appear 'blurred' in everyday language. Taking the example of a current high definition (HD) video system, with 1920 by 1080 pixels, the Nyquist theorem states that it should be possible, in a perfect system, to resolve fully (with true black to white transitions) a total of 1920 black and white alternating lines combined, otherwise referred to as a spatial frequency of 1920/2=960 line pairs per picture width, or 960 cycles per picture width, (definitions in terms of cycles per unit angle or per mm are also possible but generally less clear when dealing with cameras and more appropriate to telescopes etc.). In practice, this is far from the case, and spatial frequencies that approach the Nyquist rate will generally be reproduced with decreasing amplitude, so that fine detail, though it can be seen, is greatly reduced in contrast. This gives rise to the interesting observation that, for example, a standard definition television picture derived from a film scanner that uses oversampling, as described later, may appear sharper than a high definition picture shot on a camera with a poor modulation transfer function. The two pictures show an interesting difference that is often missed, the former having full contrast on detail up to a certain point but then no really fine detail, while the latter does contain finer detail, but with such reduced contrast as to appear inferior overall. == The three-dimensional optical transfer function == Although one typically thinks of an image as planar, or two-dimensional, the imaging system will produce a three-dimensional intensity distribution in image space that in principle can be measured. e.g. a two-dimensional sensor could be translated to capture a three-dimensional intensity distribution. The image of a point source is also a three dimensional (3D) intensity distribution which can be represented by a 3D point-spread function. As an example, the figure on the right shows the 3D point-spread function in object space of a wide-field microscope (a) alongside that of a confocal microscope (c). Although the same microscope objective with a numerical aperture of 1.49 is used, it is clear that the confocal point spread function is more compact both in the lateral dimensions (x,y) and the axial dimension (z). One could rightly conclude that the resolution of a confocal microscope is superior to that of a wide-field microscope in all three dimensions. A three-dimensional optical transfer function can be calculated as the three-dimensional Fourier transform of the 3D point-spread function. Its color-coded magnitude is plotted in panels (b) and (d), corresponding to the point-spread functions shown in panels (a) and (c), respectively. 
The transfer function of the wide-field microscope has a support that is half of that of the confocal microscope in all three dimensions, confirming the previously noted lower resolution of the wide-field microscope. Note that along the z-axis, for x = y = 0, the transfer function is zero everywhere except at the origin. This missing cone is a well-known problem that prevents optical sectioning using a wide-field microscope. The two-dimensional optical transfer function at the focal plane can be calculated by integration of the 3D optical transfer function along the z-axis. Although the 3D transfer function of the wide-field microscope (b) is zero on the z-axis for z ≠ 0, its integral, the 2D optical transfer function, reaches a maximum at x = y = 0. This is only possible because the 3D optical transfer function diverges at the origin x = y = z = 0. The function values along the z-axis of the 3D optical transfer function correspond to the Dirac delta function. == Calculation == Most optical design software has functionality to compute the optical or modulation transfer function of a lens design. Ideal systems such as in the examples here are readily calculated numerically using software such as Julia, GNU Octave or Matlab, and in some specific cases even analytically. The optical transfer function can be calculated following two approaches: as the Fourier transform of the incoherent point spread function, or as the auto-correlation of the pupil function of the optical system. Mathematically, both approaches are equivalent. Numeric calculations are typically most efficiently done via the Fourier transform; however, analytic calculation may be more tractable using the auto-correlation approach. === Example === ==== Ideal lens system with circular aperture ==== ===== Auto-correlation of the pupil function ===== Since the optical transfer function is the Fourier transform of the point spread function, and the point spread function is the squared absolute value of the inverse Fourier transformed pupil function, the optical transfer function can also be calculated directly from the pupil function. From the convolution theorem it can be seen that the optical transfer function is in fact the autocorrelation of the pupil function. The pupil function of an ideal optical system with a circular aperture is a disk of unit radius. The optical transfer function of such a system can thus be calculated geometrically from the intersecting area between two identical disks at a distance of 2 ν {\displaystyle 2\nu } , where ν {\displaystyle \nu } is the spatial frequency normalized to the highest transmitted frequency. In general the optical transfer function is normalized to a maximum value of one for ν = 0 {\displaystyle \nu =0} , so the resulting area should be divided by π {\displaystyle \pi } . The intersecting area can be calculated as the sum of the areas of two identical circular segments: θ / 2 − sin ⁡ ( θ ) / 2 {\displaystyle \theta /2-\sin(\theta )/2} , where θ {\displaystyle \theta } is the circle segment angle. By substituting | ν | = cos ⁡ ( θ / 2 ) {\displaystyle |\nu |=\cos(\theta /2)} , and using the equalities sin ⁡ ( θ ) / 2 = sin ⁡ ( θ / 2 ) cos ⁡ ( θ / 2 ) {\displaystyle \sin(\theta )/2=\sin(\theta /2)\cos(\theta /2)} and 1 = ν 2 + sin ⁡ ( arccos ⁡ ( | ν | ) ) 2 {\displaystyle 1=\nu ^{2}+\sin(\arccos(|\nu |))^{2}} , the equation for the area can be rewritten as arccos ⁡ ( | ν | ) − | ν | 1 − ν 2 {\displaystyle \arccos(|\nu |)-|\nu |{\sqrt {1-\nu ^{2}}}} .
Hence the normalized optical transfer function is given by: OTF ⁡ ( ν ) = 2 π ( arccos ⁡ ( | ν | ) − | ν | 1 − ν 2 ) . {\displaystyle \operatorname {OTF} (\nu )={\frac {2}{\pi }}\left(\arccos(|\nu |)-|\nu |{\sqrt {1-\nu ^{2}}}\right).} A more detailed discussion can be found in and.: 152–153  === Numerical evaluation === The one-dimensional optical transfer function can be calculated as the discrete Fourier transform of the line spread function. This data is graphed against the spatial frequency data. In this case, a sixth order polynomial is fitted to the MTF vs. spatial frequency curve to show the trend. The 50% cutoff frequency is determined to yield the corresponding spatial frequency. Thus, the approximate position of best focus of the unit under test is determined from this data. The Fourier transform of the line spread function (LSF) can not be determined analytically by the following equations: MTF = F [ LSF ] MTF = ∫ f ( x ) e − i 2 π x s d x {\displaystyle \operatorname {MTF} ={\mathcal {F}}\left[\operatorname {LSF} \right]\qquad \qquad \operatorname {MTF} =\int f(x)e^{-i2\pi \,xs}\,dx} Therefore, the Fourier Transform is numerically approximated using the discrete Fourier transform D F T {\displaystyle {\mathcal {DFT}}} . MTF = D F T [ LSF ] = Y k = ∑ n = 0 N − 1 y n e − i k 2 π N n k ∈ [ 0 , N − 1 ] {\displaystyle \operatorname {MTF} ={\mathcal {DFT}}[\operatorname {LSF} ]=Y_{k}=\sum _{n=0}^{N-1}y_{n}e^{-ik{\frac {2\pi }{N}}n}\qquad k\in [0,N-1]} where Y k {\displaystyle Y_{k}\,} = the k th {\displaystyle k^{\text{th}}} value of the MTF N {\displaystyle N\,} = number of data points n {\displaystyle n\,} = index k {\displaystyle k\,} = k th {\displaystyle k^{\text{th}}} term of the LSF data y n {\displaystyle y_{n}\,} = n th {\displaystyle n^{\text{th}}\,} pixel position i = − 1 {\displaystyle i={\sqrt {-1}}} e ± i a = cos ⁡ ( a ) ± i sin ⁡ ( a ) {\displaystyle e^{\pm ia}=\cos(a)\,\pm \,i\sin(a)} MTF = D F T [ LSF ] = Y k = ∑ n = 0 N − 1 y n [ cos ⁡ ( k 2 π N n ) − i sin ⁡ ( k 2 π N n ) ] k ∈ [ 0 , N − 1 ] {\displaystyle \operatorname {MTF} ={\mathcal {DFT}}[\operatorname {LSF} ]=Y_{k}=\sum _{n=0}^{N-1}y_{n}\left[\cos \left(k{\frac {2\pi }{N}}n\right)-i\sin \left(k{\frac {2\pi }{N}}n\right)\right]\qquad k\in [0,N-1]} The MTF is then plotted against spatial frequency and all relevant data concerning this test can be determined from that graph. === The vectorial transfer function === At high numerical apertures such as those found in microscopy, it is important to consider the vectorial nature of the fields that carry light. By decomposing the waves in three independent components corresponding to the Cartesian axes, a point spread function can be calculated for each component and combined into a vectorial point spread function. Similarly, a vectorial optical transfer function can be determined as shown in () and (). == Measurement == The optical transfer function is not only useful for the design of optical system, it is also valuable to characterize manufactured systems. === Starting from the point spread function === The optical transfer function is defined as the Fourier transform of the impulse response of the optical system, also called the point spread function. The optical transfer function is thus readily obtained by first acquiring the image of a point source, and applying the two-dimensional discrete Fourier transform to the sampled image. 
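A minimal sketch of that procedure in Python, assuming a background-subtracted PSF image sampled on a regular pixel grid; the function name and the normalization to unit detected intensity are illustrative choices.

```python
import numpy as np

def mtf_from_psf(psf_image, pixel_pitch_mm):
    """Estimate the 2-D MTF from a sampled, background-subtracted PSF image.

    Returns the MTF (normalized to 1 at zero frequency) together with the
    corresponding spatial-frequency axes in cycles per millimetre.
    """
    psf = np.asarray(psf_image, dtype=float)
    psf /= psf.sum()                      # normalize to the detected intensity
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf = np.abs(otf)                     # modulus of the OTF
    freqs_x = np.fft.fftshift(np.fft.fftfreq(psf.shape[1], d=pixel_pitch_mm))
    freqs_y = np.fft.fftshift(np.fft.fftfreq(psf.shape[0], d=pixel_pitch_mm))
    return mtf, freqs_y, freqs_x
```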
Such a point-source can, for example, be a bright light behind a screen with a pinhole, a fluorescent or metallic microsphere, or simply a dot painted on a screen. Calculation of the optical transfer function via the point spread function is versatile as it can fully characterize optics with spatially varying and chromatic aberrations by repeating the procedure for various positions and wavelength spectra of the point source. === Using extended test objects for spatially invariant optics === When the aberrations can be assumed to be spatially invariant, alternative patterns, such as lines and edges, can be used to determine the optical transfer function. The corresponding transfer functions are referred to as the line-spread function and the edge-spread function, respectively. Such extended objects illuminate more pixels in the image, and can improve the measurement accuracy due to the larger signal-to-noise ratio. The optical transfer function is in this case calculated as the two-dimensional discrete Fourier transform of the image and divided by that of the extended object. Typically either a line or a black-white edge is used. ==== The line-spread function ==== The two-dimensional Fourier transform of a line through the origin is a line orthogonal to it and through the origin. The divisor is thus zero for all but a single dimension; consequently, the optical transfer function can only be determined for a single dimension using a single line-spread function (LSF). If necessary, the two-dimensional optical transfer function can be determined by repeating the measurement with lines at various angles. The line spread function can be found using two different methods. It can be found directly from an ideal line approximation provided by a slit test target, or it can be derived from the edge spread function, discussed in the next subsection. ==== Edge-spread function ==== The two-dimensional Fourier transform of an edge is also only non-zero on a single line, orthogonal to the edge. This function is sometimes referred to as the edge spread function (ESF). However, the values on this line are inversely proportional to the distance from the origin. Although the measurement images obtained with this technique illuminate a large area of the camera, this mainly benefits the accuracy at low spatial frequencies. As with the line spread function, each measurement only determines a single axis of the optical transfer function; repeated measurements are thus necessary if the optical system cannot be assumed to be rotationally symmetric. As shown in the right hand figure, an operator defines a box area encompassing the edge of a knife-edge test target image back-illuminated by a black body. The box area is defined to be approximately 10% of the total frame area. The image pixel data is translated into a two-dimensional array (pixel intensity and pixel position). The amplitude (pixel intensity) of each line within the array is normalized and averaged. This yields the edge spread function.
ESF = X − μ σ σ = ∑ i = 0 n − 1 ( x i − μ ) 2 n μ = ∑ i = 0 n − 1 x i n {\displaystyle \operatorname {ESF} ={\frac {X-\mu }{\sigma }}\qquad \qquad \sigma \,={\sqrt {\frac {\sum _{i=0}^{n-1}(x_{i}-\mu \,)^{2}}{n}}}\qquad \qquad \mu \,={\frac {\sum _{i=0}^{n-1}x_{i}}{n}}} where ESF = the output array of normalized pixel intensity data X {\displaystyle X\,} = the input array of pixel intensity data x i {\displaystyle x_{i}\,} = the ith element of X {\displaystyle X\,} μ {\displaystyle \mu \,} = the average value of the pixel intensity data σ {\displaystyle \sigma \,} = the standard deviation of the pixel intensity data n {\displaystyle n\,} = number of pixels used in average The line spread function is identical to the first derivative of the edge spread function, which is differentiated using numerical methods. In case it is more practical to measure the edge spread function, one can determine the line spread function as follows: LSF = d d x ESF ⁡ ( x ) {\displaystyle \operatorname {LSF} ={\frac {d}{dx}}\operatorname {ESF} (x)} Typically the ESF is only known at discrete points, so the LSF is numerically approximated using the finite difference: LSF = d d x ESF ⁡ ( x ) ≈ Δ ESF Δ x {\displaystyle \operatorname {LSF} ={\frac {d}{dx}}\operatorname {ESF} (x)\approx {\frac {\Delta \operatorname {ESF} }{\Delta x}}} LSF ≈ ESF i + 1 − ESF i − 1 2 ( x i + 1 − x i ) {\displaystyle \operatorname {LSF} \approx {\frac {\operatorname {ESF} _{i+1}-\operatorname {ESF} _{i-1}}{2(x_{i+1}-x_{i})}}} where: i {\displaystyle i\,} = the index i = 1 , 2 , … , n − 1 {\displaystyle i=1,2,\dots ,n-1} x i {\displaystyle x_{i}\,} = i th {\displaystyle i^{\text{th}}\,} position of the i th {\displaystyle i^{\text{th}}\,} pixel ESF i {\displaystyle \operatorname {ESF} _{i}\,} = ESF of the i th {\displaystyle i^{\text{th}}\,} pixel ==== Using a grid of black and white lines ==== Although 'sharpness' is often judged on grid patterns of alternate black and white lines, it should strictly be measured using a sine-wave variation from black to white (a blurred version of the usual pattern). Where a square wave pattern is used (simple black and white lines) not only is there more risk of aliasing, but account must be taken of the fact that the fundamental component of a square wave is higher than the amplitude of the square wave itself (the harmonic components reduce the peak amplitude). A square wave test chart will therefore show optimistic results (better resolution of high spatial frequencies than is actually achieved). The square wave result is sometimes referred to as the 'contrast transfer function' (CTF). == Factors affecting MTF in typical camera systems == In practice, many factors result in considerable blurring of a reproduced image, such that patterns with spatial frequency just below the Nyquist rate may not even be visible, and the finest patterns that can appear 'washed out' as shades of grey, not black and white. A major factor is usually the impossibility of making the perfect 'brick wall' optical filter (often realized as a 'phase plate' or a lens with specific blurring properties in digital cameras and video camcorders). Such a filter is necessary to reduce aliasing by eliminating spatial frequencies above the Nyquist rate of the display. 
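A one-dimensional toy illustration in Python of why content above the Nyquist rate must be removed before sampling; the pattern, the sample count, and the frequencies are arbitrary illustrative values.

```python
import numpy as np

# Sample a sinusoidal test pattern with 100 samples per picture width
# (Nyquist rate = 50 cycles per picture width).  A 70-cycle pattern is above
# the Nyquist rate and therefore folds back to 100 - 70 = 30 cycles.
n_samples = 100
x = np.arange(n_samples) / n_samples
pattern = np.cos(2 * np.pi * 70 * x)   # 70 cycles per picture width

spectrum = np.abs(np.fft.rfft(pattern))
print(np.argmax(spectrum))             # 30 -- the aliased frequency
```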
=== Oversampling and downconversion to maintain the optical transfer function === The only way in practice to approach the theoretical sharpness possible in a digital imaging system such as a camera is to use more pixels in the camera sensor than samples in the final image, and 'downconvert' or 'interpolate' using special digital processing which cuts off high frequencies above the Nyquist rate to avoid aliasing whilst maintaining a reasonably flat MTF up to that frequency. This approach was first taken in the 1970s when flying spot scanners, and later CCD line scanners were developed, which sampled more pixels than were needed and then downconverted, which is why movies have always looked sharper on television than other material shot with a video camera. The only theoretically correct way to interpolate or downconvert is by use of a steep low-pass spatial filter, realized by convolution with a two-dimensional sin(x)/x weighting function which requires powerful processing. In practice, various mathematical approximations to this are used to reduce the processing requirement. These approximations are now implemented widely in video editing systems and in image processing programs such as Photoshop. Just as standard definition video with a high contrast MTF is only possible with oversampling, so HD television with full theoretical sharpness is only possible by starting with a camera that has a significantly higher resolution, followed by digitally filtering. With movies now being shot in 4k and even 8k video for the cinema, we can expect to see the best pictures on HDTV only from movies or material shot at the higher standard. However much we raise the number of pixels used in cameras, this will always remain true in absence of a perfect optical spatial filter. Similarly, a 5-megapixel image obtained from a 5-megapixel still camera can never be sharper than a 5-megapixel image obtained after down-conversion from an equal quality 10-megapixel still camera. Because of the problem of maintaining a high contrast MTF, broadcasters like the BBC did for a long time consider maintaining standard definition television, but improving its quality by shooting and viewing with many more pixels (though as previously mentioned, such a system, though impressive, does ultimately lack the very fine detail which, though attenuated, enhances the effect of true HD viewing). Another factor in digital cameras and camcorders is lens resolution. A lens may be said to 'resolve' 1920 horizontal lines, but this does not mean that it does so with full modulation from black to white. The 'modulation transfer function' (just a term for the magnitude of the optical transfer function with phase ignored) gives the true measure of lens performance, and is represented by a graph of amplitude against spatial frequency. Lens aperture diffraction also limits MTF. Whilst reducing the aperture of a lens usually reduces aberrations and hence improves the flatness of the MTF, there is an optimum aperture for any lens and image sensor size beyond which smaller apertures reduce resolution because of diffraction, which spreads light across the image sensor. This was hardly a problem in the days of plate cameras and even 35 mm film, but has become an insurmountable limitation with the very small format sensors used in some digital cameras and especially video cameras. First generation HD consumer camcorders used 1/4-inch sensors, for which apertures smaller than about f4 begin to limit resolution. 
Even professional video cameras mostly use 2/3 inch sensors, prohibiting the use of apertures around f16 that would have been considered normal for film formats. Certain cameras (such as the Pentax K10D) feature an "MTF autoexposure" mode, where the choice of aperture is optimized for maximum sharpness. Typically this means somewhere in the middle of the aperture range. === Trend to large-format DSLRs and improved MTF potential === There has recently been a shift towards the use of large image format digital single-lens reflex cameras driven by the need for low-light sensitivity and narrow depth of field effects. This has led to such cameras becoming preferred by some film and television program makers over even professional HD video cameras, because of their 'filmic' potential. In theory, the use of cameras with 16- and 21-megapixel sensors offers the possibility of almost perfect sharpness by downconversion within the camera, with digital filtering to eliminate aliasing. Such cameras produce very impressive results, and appear to be leading the way in video production towards large-format downconversion with digital filtering becoming the standard approach to the realization of a flat MTF with true freedom from aliasing. == Digital inversion of the OTF == Due to optical effects the contrast may be sub-optimal and approaches zero before the Nyquist frequency of the display is reached. The optical contrast reduction can be partially reversed by digitally amplifying spatial frequencies selectively before display or further processing. Although more advanced digital image restoration procedures exist, the Wiener deconvolution algorithm is often used for its simplicity and efficiency. Since this technique multiplies the spatial spectral components of the image, it also amplifies noise and errors due to e.g. aliasing. It is therefore only effective on good quality recordings with a sufficiently high signal-to-noise ratio. == Limitations == In general, the point spread function, the image of a point source also depends on factors such as the wavelength (color), and field angle (lateral point source position). When such variation is sufficiently gradual, the optical system could be characterized by a set of optical transfer functions. However, when the image of the point source changes abruptly, the optical transfer function does not describe the optical system accurately. Inaccuracies can often be mitigated by a collection of optical transfer functions at well-chosen wavelengths or field-positions. However, a more complex characterization may be necessary for some imaging systems such as the Light field camera. == See also == Bokeh Gamma correction Minimum resolvable contrast Minimum resolvable temperature difference Optical resolution Signal-to-noise ratio Signal transfer function Strehl ratio Transfer function Wavefront coding == References == == External links == "Modulation transfer function", by Glenn D. Boreman on SPIE Optipedia. "How to Measure MTF and other Properties of Lenses", by Optikos Corporation.
Wikipedia/Optical_Transfer_Function
The contrast transfer function (CTF) mathematically describes how aberrations in a transmission electron microscope (TEM) modify the image of a sample. The CTF sets the resolution of high-resolution transmission electron microscopy (HRTEM), also known as phase contrast TEM. By considering the recorded image as a CTF-degraded true object, describing the CTF allows the true object to be reverse-engineered. This is typically denoted CTF correction, and is vital to obtain high resolution structures in three-dimensional electron microscopy, especially electron cryo-microscopy. Its equivalent in light-based optics is the optical transfer function. == Phase contrast in HRTEM == The contrast in HRTEM comes from interference in the image plane between the phases of scattered electron waves with the phase of the transmitted electron wave. Complex interactions occur when an electron wave passes through a sample in the TEM. Above the sample, the electron wave can be approximated as a plane wave. As the electron wave, or wavefunction, passes through the sample, both the phase and the amplitude of the electron beam are altered. The resultant scattered and transmitted electron beam is then focused by an objective lens, and imaged by a detector in the image plane. Detectors are only able to measure the amplitude directly, not the phase. However, with the correct microscope parameters, the phase interference can be indirectly measured via the intensity in the image plane. Electrons interact very strongly with crystalline solids. As a result, the phase changes due to very small features, down to the atomic scale, can be recorded via HRTEM. == Contrast transfer theory == Contrast transfer theory provides a quantitative method to translate the exit wavefunction to a final image. Part of the analysis is based on Fourier transforms of the electron beam wavefunction. When an electron wavefunction passes through a lens, the wavefunction goes through a Fourier transform. This is a concept from Fourier optics. Contrast transfer theory consists of four main operations: Take the Fourier transform of the exit wave to obtain the wave amplitude in the back focal plane of the objective lens Modify the wavefunction in reciprocal space by a phase factor, also known as the phase contrast transfer function, to account for aberrations Inverse Fourier transform the modified wavefunction to obtain the wavefunction in the image plane Take the square modulus of the wavefunction in the image plane to obtain the image intensity (this is the signal that is recorded on a detector, and creates an image; a code sketch of these four operations follows below) == Mathematical form == If we incorporate some assumptions about our sample, then an analytical expression can be found for both the phase contrast and the phase contrast transfer function. As discussed earlier, when the electron wave passes through a sample, the electron beam interacts with the sample via scattering, and experiences a phase shift. This is represented by the electron wavefunction exiting from the bottom of the sample. This expression assumes that the scattering causes a phase shift (and no amplitude shift). This is called the Phase Object Approximation.
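Picking up the forward reference above, the four operations of contrast transfer theory can be sketched in a few lines of NumPy; the CTF array is assumed to be precomputed on the FFT grid, and the function name is our own.

```python
import numpy as np

def hrtem_image(exit_wave, ctf):
    """Minimal sketch of the four steps of contrast transfer theory.

    exit_wave : 2-D complex array, wavefunction at the bottom of the sample
    ctf       : 2-D array, phase contrast transfer function sampled on the
                same reciprocal-space grid as np.fft.fft2(exit_wave)
    """
    psi_k = np.fft.fft2(exit_wave)             # 1. wave in the back focal plane
    psi_k_aberrated = psi_k * ctf              # 2. apply the CTF (aberrations)
    psi_image = np.fft.ifft2(psi_k_aberrated)  # 3. wave in the image plane
    return np.abs(psi_image) ** 2              # 4. recorded intensity
```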
=== The exit wavefunction === Following Wade's notation, the exit wavefunction expression is represented by: τ ( r , z ) = τ o exp [ − i π λ ∫ d z ′ U ( r , z ′ ) ] {\displaystyle \tau (r,z)=\tau _{o}\exp[-i\pi \lambda \int dz'U(r,z')]} τ o = τ ( r , 0 ) {\displaystyle \tau _{o}=\tau (r,0)} U ( r , z ) = 2 m V ( r , z ) / h 2 {\displaystyle U(r,z)=2mV(r,z)/h^{2}} where the exit wavefunction τ is a function of both r {\displaystyle r} in the plane of the sample, and z {\displaystyle z} perpendicular to the plane of the sample. τ o {\displaystyle \tau _{o}} represents the wavefunction incident on the top of the sample. λ {\displaystyle \lambda } is the wavelength of the electron beam, which is set by the accelerating voltage. U {\displaystyle U} is the effective potential of the sample, which depends on the atomic potentials within the crystal, represented by V {\displaystyle V} . Within the exit wavefunction, the phase shift is represented by: ϕ ( r ) = π λ ∫ d z ′ U ( r , z ′ ) {\displaystyle \phi (r)=\pi \lambda \int dz'U(r,z')} This expression can be further simplified by taking into account some additional assumptions about the sample. If the sample is considered very thin, and a weak scatterer, so that the phase shift is << 1, then the wavefunction can be approximated by a linear Taylor polynomial expansion. This approximation is called the Weak Phase Object Approximation. The exit wavefunction can then be expressed as: τ ( r , z ) = τ o [ 1 + i ϕ ( r ) ] {\displaystyle \tau (r,z)=\tau _{o}[1+i\phi (r)]} === The phase contrast transfer function === Passing through the objective lens incurs a Fourier transform and phase shift. As such, the wavefunction on the back focal plane of the objective lens can be represented by: I ( θ ) = δ ( θ ) + Φ K ( θ ) {\displaystyle I(\theta )=\delta (\theta )+\Phi K(\theta )} θ {\displaystyle \theta } = the scattering angle between the transmitted electron wave and the scattered electron wave δ {\displaystyle \delta } = a delta function representing the non-scattered, transmitted, electron wave Φ {\displaystyle \Phi } = the Fourier transform of the wavefunction's phase K ( θ ) {\displaystyle K(\theta )} = the phase shift incurred by the microscope's aberrations, also known as the contrast transfer function: K ( θ ) = sin [ ( 2 π / λ ) W ( θ ) ] {\displaystyle K(\theta )=\sin[(2\pi /\lambda )W(\theta )]} W ( θ ) = − z θ 2 / 2 + C s θ 4 / 4 {\displaystyle W(\theta )=-z\theta ^{2}/2+C_{s}\theta ^{4}/4} λ {\displaystyle \lambda } = the relativistic wavelength of the electron wave, C s {\displaystyle C_{s}} = the spherical aberration of the objective lens The contrast transfer function can also be given in terms of spatial frequencies, or reciprocal space.
With the relationship θ = λ k {\textstyle \theta =\lambda k} , the phase contrast transfer function becomes: K ( k ) = sin [ ( 2 π ) W ( k ) ] {\displaystyle K(k)=\sin[(2\pi )W(k)]} W ( k ) = − z λ k 2 / 2 + C s λ 3 k 4 / 4 {\displaystyle W(k)=-z\lambda k^{2}/2+C_{s}\lambda ^{3}k^{4}/4} z {\displaystyle z} = the defocus of the objective lens (using the convention that underfocus is positive and overfocus is negative), λ {\displaystyle \lambda } = the relativistic wavelength of the electron wave, C s {\displaystyle C_{s}} = the spherical aberration of the objective lens, k {\displaystyle k} = the spatial frequency (units of m−1) === Spherical aberration === Spherical aberration is a blurring effect arising when a lens is not able to converge incoming rays at higher angles of incidence to the focus point, but rather focuses them to a point closer to the lens. This has the effect of spreading an imaged point (which is ideally imaged as a single point in the gaussian image plane) out over a finite-size disc in the image plane. The measure of aberration in a plane normal to the optical axis is called the transversal aberration. The size (radius) of the aberration disc in this plane can be shown to be proportional to the cube of the incident angle (θ) under the small-angle approximation, and the explicit form in this case is r s = C s ⋅ θ 3 ⋅ M {\displaystyle r_{s}=C_{s}\cdot \theta ^{3}\cdot M} where C s {\displaystyle C_{s}} is the spherical aberration and M {\displaystyle M} is the magnification, both effectively being constants of the lens settings. One can then go on to note that the difference in refracted angle between an ideal ray and one which suffers from spherical aberration is α s = arctan ( b R ) − arctan ( b R + r s ) {\displaystyle \alpha _{s}=\arctan \left({\frac {b}{R}}\right)-\arctan \left({\frac {b}{R+r_{s}}}\right)} where b {\displaystyle b} is the distance from the lens to the gaussian image plane and R {\displaystyle R} is the radial distance from the optical axis to the point on the lens which the ray passed through. Simplifying this further (without applying any approximations) shows that α s = arctan ( b r s R 2 + R r s + b 2 ) {\displaystyle \alpha _{s}=\arctan \left({\frac {br_{s}}{R^{2}+Rr_{s}+b^{2}}}\right)} Two approximations can now be applied to proceed further in a straightforward manner. They rely on the assumption that both r s {\displaystyle r_{s}} and R {\displaystyle R} are much smaller than b {\displaystyle b} , which is equivalent to stating that we are considering relatively small angles of incidence and consequently also very small spherical aberrations. Under such an assumption, the first two terms in the denominator are insignificant, and can be approximated as not contributing.
By way of these assumptions we have also implicitly stated that the fraction itself can be considered small, and this results in the elimination of the arctan ⁡ ( ) {\displaystyle \arctan()} function by way of the small-angle approximation; α s ≈ arctan ⁡ ( b r s b 2 ) ≈ b r s b 2 = r s b = C s ⋅ θ 3 ⋅ M b {\displaystyle \alpha _{s}\approx \arctan \left({\frac {br_{s}}{b^{2}}}\right)\approx {\frac {br_{s}}{b^{2}}}={\frac {r_{s}}{b}}={\frac {C_{s}\cdot \theta ^{3}\cdot M}{b}}} If the image is considered to be approximately in focus, and the angle of incidence θ {\displaystyle \theta } is again considered small, then R f ≈ tan ⁡ ( θ ) ≈ θ and M ≈ b f {\displaystyle {\frac {R}{f}}\approx \tan \left(\theta \right)\approx \theta ~~{\text{and}}~~M\approx {\frac {b}{f}}} meaning that an approximate expression for the difference in refracted angle between an ideal ray and one which suffers from spherical aberration, is given by α s ≈ C s ⋅ R 3 f 4 {\displaystyle \alpha _{s}\approx {\frac {C_{s}\cdot R^{3}}{f^{4}}}} === Defocus === As opposed to the spherical aberration, we will proceed by estimating the deviation of a defocused ray from the ideal by stating the longitudinal aberration; a measure of how much a ray deviates from the focal point along the optical axis. Denoting this distance Δ b {\displaystyle \Delta b} , it is possible to show that the difference α f {\displaystyle \alpha _{f}} in refracted angle between rays originating from a focused and defocused object, can be related to the refracted angle as R 2 + b 2 ⋅ sin ⁡ ( α f ) = Δ b ⋅ sin ⁡ ( θ ′ − α f ) {\displaystyle {\sqrt {R^{2}+b^{2}}}\cdot \sin(\alpha _{f})=\Delta b\cdot \sin(\theta '-\alpha _{f})} where R {\displaystyle R} and b {\displaystyle b} are defined in the same way as they were for spherical aberration. Assuming that α f << θ ′ {\displaystyle \alpha _{f}<<\theta '} (or equivalently that | b ⋅ sin ⁡ ( α f ) | << | R | {\displaystyle |b\cdot \sin(\alpha _{f})|<<|R|} ), we can show that sin ⁡ ( α f ) ≈ Δ b sin ⁡ ( θ ′ ) R 2 + b 2 = Δ b ⋅ R R 2 + b 2 {\displaystyle \sin(\alpha _{f})\approx {\frac {\Delta b\sin(\theta ')}{\sqrt {R^{2}+b^{2}}}}={\frac {\Delta b\cdot R}{R^{2}+b^{2}}}} Since we required α f {\displaystyle \alpha _{f}} to be small, and since θ {\displaystyle \theta } being small implies R << b {\displaystyle R<<b} , we are given an approximation of α f {\displaystyle \alpha _{f}} as α f ≈ Δ b ⋅ R b 2 {\displaystyle \alpha _{f}\approx {\frac {\Delta b\cdot R}{b^{2}}}} From the thin-lens formula it can be shown that Δ b / b 2 ≈ Δ f / f 2 {\displaystyle \Delta b/b^{2}\approx \Delta f/f^{2}} , yielding a final estimation of the difference in refracted angle between in-focus and off-focus rays as α f ≈ Δ f ⋅ R f 2 {\displaystyle \alpha _{f}\approx {\frac {\Delta f\cdot R}{f^{2}}}} == Examples == The contrast transfer function determines how much phase signal gets transmitted to the real space wavefunction in the image plane. As the modulus squared of the real space wavefunction gives the image signal, the contrast transfer function limits how much information can ultimately be translated into an image. The form of the contrast transfer function determines the quality of real space image formation in the TEM. This is an example contrast transfer function. 
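As a stand-in for such an example, the following sketch evaluates the k-space form K(k) = sin(2πW(k)) given earlier and locates the first zero crossing; the microscope parameters (the roughly 1.97 pm relativistic wavelength of a 300 kV beam, the Cs and defocus values) are illustrative assumptions, not values from the article.

```python
import numpy as np

# Illustrative parameters (assumed): a 300 kV beam has a relativistic
# wavelength of about 1.97 pm; Cs = 0.6 mm; underfocus z = 40 nm
# (positive by the convention above).
lam = 1.97e-12     # relativistic electron wavelength (m)
Cs = 0.6e-3        # spherical aberration (m)
z = 40e-9          # defocus (m)

k = np.linspace(1e6, 8e9, 4000)                     # spatial frequency (1/m)
W = -z * lam * k**2 / 2 + Cs * lam**3 * k**4 / 4    # aberration function
K = np.sin(2 * np.pi * W)                           # contrast transfer function

# The first zero crossing of K marks the point resolution.
first_zero = k[np.nonzero(np.diff(np.sign(K)))[0][0]]
print(f"point resolution ~ {1e10 / first_zero:.2f} angstrom")
```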
There are a number of things to note: The function exists in the spatial frequency domain, or k-space Whenever the function is equal to zero, that means there is no transmittance, or no phase signal is incorporated into the real space image The first time the function crosses the x-axis is called the point resolution To maximize phase signal, it is generally better to use imaging conditions that push the point resolution to higher spatial frequencies When the function is negative, that represents positive phase contrast, leading to a bright background, with dark atomic features Every time the CTF crosses the x-axis, there is an inversion in contrast Accordingly, past the point resolution of the microscope the phase information is not directly interpretable, and must be modeled via computer simulation === Scherzer defocus === The defocus value ( z {\textstyle z} ) can be used to counteract the spherical aberration to allow for greater phase contrast. This analysis was developed by Scherzer, and is called the Scherzer defocus. z s = ( C s λ ) 1 / 2 {\displaystyle z_{s}=(C_{s}\lambda )^{1/2}} The variables are the same as from the mathematical treatment section, with z s {\displaystyle z_{s}} setting the specific Scherzer defocus, C s {\displaystyle C_{s}} as the spherical aberration, and λ as the relativistic wavelength for the electron wave. For a CM300 microscope at the Scherzer defocus, the CTF has a larger window, also known as a passband, of spatial frequencies with high transmittance than the CTF shown above. This allows more phase signal to pass through to the image plane. === Envelope function === The envelope function represents the effect of additional aberrations that damp the contrast transfer function, and in turn the phase. The envelope terms comprising the envelope function tend to suppress high spatial frequencies. The exact form of the envelope functions can differ from source to source. Generally, they are applied by multiplying the contrast transfer function by an envelope term Et representing temporal aberrations, and an envelope term Es representing spatial aberrations. This yields a modified, or effective, contrast transfer function: K e f f ( k ) = E t E s sin [ ( 2 π / λ ) W ( k ) ] {\displaystyle K_{eff}(k)=E_{t}E_{s}\sin[(2\pi /\lambda )W(k)]} Examples of temporal aberrations include chromatic aberrations, energy spread, focal spread, instabilities in the high voltage source, and instabilities in the objective lens current. An example of a spatial aberration includes the finite incident beam convergence. The most restrictive envelope term dominates in damping the contrast transfer function; in a typical case, the temporal envelope term is the most restrictive. Because the envelope terms damp more strongly at higher spatial frequencies, there comes a point where no more phase signal can pass through. This is called the information limit of the microscope, and is one measure of the resolution. Modeling the envelope function can give insight into both TEM instrument design and imaging parameters. By modeling the different aberrations via envelope terms, it is possible to see which aberrations are most limiting the phase signal. Various software packages have been developed to model both the contrast transfer function and envelope function for particular microscopes, and particular imaging parameters. == Linear imaging theory vs.
non-linear imaging theory == The previous description of the contrast transfer function depends on linear imaging theory. Linear imaging theory assumes that the transmitted beam is dominant and that there is only a weak phase shift by the sample. In many cases, this precondition is not fulfilled. In order to account for these effects, non-linear imaging theory is required. With strongly scattering samples, diffracted electrons will not only interfere with the transmitted beam, but will also interfere with each other. This will produce second-order diffraction intensities. Non-linear imaging theory is required to model these additional interference effects. Contrary to a widespread assumption, the linear/non-linear imaging theory has nothing to do with kinematical diffraction or dynamical diffraction, respectively. Linear imaging theory is still used, however, because it has some computational advantages. In linear imaging theory, the Fourier coefficients for the image plane wavefunction are separable. This greatly reduces computational complexity, allowing for faster computer simulations of HRTEM images. == See also == Airy disk, a different but similar phenomenon in light optics Optical transfer function Point spread function Transmission electron microscopy == References == == External links == Contrast transfer function (CTF) correction Talk on the CTF by Henning Stahlberg CTF reading list Interactive CTF Modeling
Wikipedia/Contrast_transfer_function
Eye surgery, also known as ophthalmic surgery or ocular surgery, is surgery performed on the eye or its adnexa. Eye surgery is part of ophthalmology and is performed by an ophthalmologist or eye surgeon. The eye is a fragile organ, and requires due care before, during, and after a surgical procedure to minimize or prevent further damage. An eye surgeon is responsible for selecting the appropriate surgical procedure for the patient, and for taking the necessary safety precautions. Mentions of eye surgery can be found in several ancient texts dating back as early as 1800 BC, with cataract treatment starting in the fifth century BC. It continues to be a widely practiced class of surgery, with various techniques having been developed for treating eye problems. == Preparation and precautions == Since the eye is heavily supplied by nerves, anesthesia is essential. Local anesthesia is most commonly used. Topical anesthesia using lidocaine topical gel is often used for quick procedures. Since topical anesthesia requires cooperation from the patient, general anesthesia is often used for children, for traumatic eye injuries, for major orbitotomies, and for apprehensive patients. The physician administering anesthesia, or a nurse anesthetist or anesthetist assistant with expertise in anesthesia of the eye, monitors the patient's cardiovascular status. Sterile precautions are taken to prepare the area for surgery and lower the risk of infection. These precautions include the use of antiseptics, such as povidone-iodine, and sterile drapes, gowns, and gloves. == Laser eye surgery == Although the terms laser eye surgery and refractive surgery are commonly used as if they were interchangeable, this is not the case. Lasers may be used to treat nonrefractive conditions (e.g. to seal a retinal tear). Laser eye surgery or laser corneal surgery is a medical procedure that uses a laser to reshape the surface of the eye to correct myopia (short-sightedness), hypermetropia (long-sightedness), and astigmatism (uneven curvature of the eye's surface). Importantly, refractive surgery is not suitable for everyone, and people may find on occasion that eyewear is still needed after surgery. Recent developments also include procedures that can change eye color from brown to blue. Before proceeding with laser surgery, the eye specialist needs to confirm that the patient is a suitable candidate for the surgery; several factors must be considered beforehand. == Cataract surgery == A cataract is an opacification or cloudiness of the eye's crystalline lens due to aging, disease, or trauma that typically prevents light from forming a clear image on the retina. If visual loss is significant, surgical removal of the lens may be warranted, with lost optical power usually replaced with a plastic intraocular lens. Owing to the high prevalence of cataracts, cataract extraction is the most common eye surgery. Rest after surgery is recommended. == Glaucoma surgery == Glaucoma is a group of diseases affecting the optic nerve that results in vision loss and is frequently characterized by raised intraocular pressure. Many types of glaucoma surgery exist; most, including variations or combinations of the basic types, facilitate the escape of excess aqueous humor from the eye to lower intraocular pressure, while a few lower it by decreasing the production of aqueous humor.
=== Canaloplasty === Canaloplasty is an advanced, nonpenetrating procedure designed to enhance drainage through the eye's natural drainage system to provide sustained reduction of intraocular pressure. Canaloplasty uses microcatheter technology in a simple and minimally invasive procedure. To perform a canaloplasty, an ophthalmologist creates a tiny incision to gain access to a canal in the eye. A microcatheter circumnavigates the canal around the iris, enlarging the main drainage channel and its smaller collector channels through the injection of a sterile, gel-like material called viscoelastic. The catheter is then removed and a suture is placed within the canal and tightened. By opening up the canal, the pressure inside the eye can be reduced. == Refractive surgery == Refractive surgery aims to correct errors of refraction in the eye, reducing or eliminating the need for corrective lenses. Keratomileusis is a method of reshaping the corneal surface to change its optical power. A disc of the cornea is shaved off, quickly frozen, lathe-ground, then returned to its original power. Automated lamellar keratoplasty Laser-assisted in situ keratomileusis (LASIK) Laser assisted subepithelial keratomileusis (LASEK), a.k.a. Epi-LASIK Photorefractive keratectomy Laser thermal keratoplasty Conductive keratoplasty uses radio-frequency waves to shrink corneal collagen. It is used to treat mild to moderate hyperopia. Limbal relaxing incisions can correct minor astigmatism Astigmatic keratotomy, arcuate keratotomy, or transverse keratotomy Radial keratotomy Hexagonal keratotomy Epikeratophakia is the removal of the corneal epithelium and replacement with a lathe-cut corneal button. Intracorneal rings or corneal ring segments Implantable contact lenses Presbyopia reversal Anterior ciliary sclerotomy Scleral reinforcement surgery for the mitigation of degenerative myopia Small incision lenticule extraction == Corneal surgery == Corneal surgery includes most refractive surgery, as well as: Corneal transplant surgery is used to remove a cloudy/diseased cornea and replace it with a clear donor cornea. Penetrating keratoplasty Keratoprosthesis Phototherapeutic keratectomy Pterygium excision Corneal tattooing Osteo-odonto-keratoprosthesis is surgery in which support for an artificial cornea is created from a tooth and its surrounding jawbone. This is a still-experimental procedure used for patients with severely damaged eyes, generally from burns. Eye color-change surgery through an iris implant, known as Brightocular, or the stripping away the top layer of eye pigment, known as the stroma procedure == Vitreoretinal surgery == Vitreoretinal surgery includes: Vitrectomy Anterior vitrectomy is the removal of the front portion of vitreous tissue. It is used for preventing or treating vitreous loss during cataract or corneal surgery, or to remove misplaced vitreous in conditions such as aphakia pupillary block glaucoma. Pars plana vitrectomy or trans pars plana vitrectomy is a procedure to remove vitreous opacities and membranes through a pars plana incision. It is frequently combined with other intraocular procedures for the treatment of giant retinal tears, tractional retinal detachments, and posterior vitreous detachments. Pan retinal photocoagulation is a type of photocoagulation therapy used in the treatment of diabetic retinopathy. Retinal detachment repair Ignipuncture is an obsolete procedure that involves cauterization of the retina with a very hot, pointed instrument. 
A scleral buckle is used in the repair of a retinal detachment to indent or "buckle" the sclera inward, usually by sewing a piece of preserved sclera or silicone rubber to its surface. Laser photocoagulation, or photocoagulation therapy, is the use of a laser to seal a retinal tear. Pneumatic retinopexy Retinal cryopexy, or retinal cryotherapy, is a procedure that uses intense cold to induce a chorioretinal scar and to destroy retinal or choroidal tissue. Macular hole repair Partial lamellar sclerouvectomy Partial lamellar sclerocyclochoroidectomy Partial lamellar sclerochoroidectomy Posterior sclerotomy is an opening made into the vitreous through the sclera, as for detached retina or the removal of a foreign body. Radial optic neurotomy Macular translocation surgery through 360° retinotomy through scleral imbrication technique == Eye muscle surgery == With about 1.2 million procedures each year, extraocular muscle surgery is the third-most common eye surgery in the United States. Eye muscle surgery typically corrects strabismus and includes: Loosening or weakening procedures Recession involves moving the insertion of a muscle posteriorly towards its origin. Myectomy Myotomy Tenectomy Tenotomy Tightening or strengthening procedures Resection Tucking Advancement is the movement of an eye muscle from its original place of attachment on the eyeball to a more forward position. Transposition or repositioning procedures Adjustable suture surgery is a method of reattaching an extraocular muscle by means of a stitch that can be shortened or lengthened within the first postoperative day, to obtain better ocular alignment. == Oculoplastic surgery == Oculoplastic surgery, or oculoplastics, is the subspecialty of ophthalmology that deals with the reconstruction of the eye and associated structures. Oculoplastic surgeons perform procedures such as the repair of droopy eyelids (blepharoplasty), repair of tear duct obstructions, orbital fracture repairs, removal of tumors in and around the eyes, and facial rejuvenation procedures including laser skin resurfacing, eye lifts, brow lifts, and even facelifts. Common procedures are: === Eyelid surgery === Blepharoplasty (eyelift) is plastic surgery of the eyelids to remove excessive skin or subcutaneous fat. East Asian blepharoplasty, also known as double eyelid surgery, is used to create a double eyelid crease for patients who have a single crease (monolid). Ptosis repair for droopy eyelid Ectropion repair Entropion repair Canthal resection A canthectomy is the surgical removal of tissue at the junction of the upper and lower eyelids. Cantholysis is the surgical division of the canthus. Canthopexy A canthoplasty is plastic surgery at the canthus. A canthorrhaphy is suturing of the outer canthus to shorten the palpebral fissure. A canthotomy is the surgical division of the canthus, usually the outer canthus. A lateral canthotomy is the surgical division of the outer canthus. Epicanthoplasty Tarsorrhaphy is a procedure in which the eyelids are partially sewn together to narrow the opening (i.e. palpebral fissure). === Orbital surgery === Orbital reconstruction or ocular prosthetics (false eyes) Orbital decompression is used for Graves' disease, a condition (often associated with overactive thyroid problems) in which the eye muscles swell. Because the eye socket is bone, the swelling cannot be accommodated and as a result, the eye is pushed forward into a protruded position.
In some patients, this is very pronounced. Orbital decompression involves removing some bone from the eye socket to open up one or more sinuses and so make space for the swollen tissue, allowing the eye to move back into a normal position. === Other oculoplastic surgery === Botox injections Ultrapeel microdermabrasion Endoscopic forehead and browlift Face lift (rhytidectomy) Liposuction of the face and neck Browplasty == Surgery involving the lacrimal apparatus == A dacryocystorhinostomy or dacryocystorhinotomy is a procedure to restore the flow of tears into the nose from the lacrimal sac when the nasolacrimal duct does not function. Canaliculodacryocystostomy is a surgical correction for a congenitally blocked tear duct in which the closed segment is excised and the open end is joined to the lacrimal sac. Canaliculotomy involves slitting of the lacrimal punctum and canaliculus for the relief of epiphora. A dacryoadenectomy is the surgical removal of a lacrimal gland. A dacryocystectomy is the surgical removal of a part of the lacrimal sac. A dacryocystostomy is an incision into the lacrimal sac, usually to promote drainage. A dacryocystotomy is an incision into the lacrimal sac. == Eye removal == An enucleation is the removal of the eye leaving the eye muscles and remaining orbital contents intact. An evisceration is the removal of the eye's contents, leaving the scleral shell intact; it is usually performed to reduce pain in a blind eye. An exenteration is the removal of the entire orbital contents, including the eye, extraocular muscles, fat, and connective tissues; usually for malignant orbital tumors. == Other surgery == Many of these described procedures are historical and are not recommended due to a risk of complications. In particular, these include operations on the ciliary body in an attempt to control glaucoma, since much safer surgeries for glaucoma, including lasers, nonpenetrating surgery, guarded filtration surgery, and seton valve implants, have been developed. A ciliarotomy is a surgical division of the ciliary zone in the treatment of glaucoma. A ciliectomy is the surgical removal of part of the ciliary body or the surgical removal of part of a margin of an eyelid containing the roots of the eyelashes. A ciliotomy is a surgical section of the ciliary nerves. A conjunctivoantrostomy is an opening made from the inferior conjunctival cul-de-sac into the maxillary sinus for the treatment of epiphora. Conjunctivoplasty is plastic surgery of the conjunctiva. A conjunctivorhinostomy is a surgical correction of the total obstruction of a lacrimal canaliculus by which the conjunctiva is anastomosed with the nasal cavity to improve tear flow. A corectomedialysis, or coretomedialysis, is an excision of a small portion of the iris at its junction with the ciliary body to form an artificial pupil. A corectomy, or coretomy, is any surgical cutting operation on the iris at the pupil. A corelysis is a surgical detachment of adhesions of the iris to the capsule of the crystalline lens or cornea. A coremorphosis is the surgical formation of an artificial pupil. A coreplasty, or coreoplasty, is plastic surgery of the iris, usually for the formation of an artificial pupil. A coreoplasy, or laser pupillomydriasis, is any procedure that changes the size or shape of the pupil. A cyclectomy is an excision of a portion of the ciliary body. A cyclotomy, or cyclicotomy, is a surgical incision of the ciliary body, usually for the relief of glaucoma.
A cycloanemization is a surgical obliteration of the long ciliary arteries in the treatment of glaucoma. An iridectomesodialysis is the formation of an artificial pupil by detaching and excising a portion of the iris at its periphery. An iridodialysis, sometimes known as a coredialysis, is a localized separation or tearing away of the iris from its attachment to the ciliary body. An iridencleisis, or corenclisis, is a surgical procedure for glaucoma in which a portion of the iris is incised and incarcerated in a limbal incision. (Subdivided into basal iridencleisis and total iridencleisis.) An iridesis is a surgical procedure in which a portion of the iris is brought through and incarcerated in a corneal incision in order to reposition the pupil. An iridocorneosclerectomy is the surgical removal of a portion of the iris, the cornea, and the sclera. An iridocyclectomy is the surgical removal of the iris and the ciliary body. An iridocystectomy is the surgical removal of a portion of the iris to form an artificial pupil. An iridosclerectomy is the surgical removal of a portion of the sclera and a portion of the iris in the region of the limbus for the treatment of glaucoma. An iridosclerotomy is the surgical puncture of the sclera and the margin of the iris for the treatment of glaucoma. A rhinommectomy is the surgical removal of a portion of the internal canthus. A trepanotrabeculectomy is used in the treatment of chronic open- and chronic closed-angle glaucoma. == References ==
Wikipedia/Eye_surgery
The transfer-matrix method is a method used in optics and acoustics to analyze the propagation of electromagnetic or acoustic waves through a stratified medium, such as a stack of thin films. This is, for example, relevant for the design of anti-reflective coatings and dielectric mirrors. The reflection of light from a single interface between two media is described by the Fresnel equations. However, when there are multiple interfaces, such as in the figure, the reflections themselves are also partially transmitted and then partially reflected. Depending on the exact path length, these reflections can interfere destructively or constructively. The overall reflection of a layer structure is the sum of an infinite number of reflections. The transfer-matrix method is based on the fact that, according to Maxwell's equations, there are simple continuity conditions for the electric field across boundaries from one medium to the next. If the field is known at the beginning of a layer, the field at the end of the layer can be derived from a simple matrix operation. A stack of layers can then be represented as a system matrix, which is the product of the individual layer matrices. The final step of the method involves converting the system matrix back into reflection and transmission coefficients. == Formalism for electromagnetic waves == Below it is described how the transfer matrix is applied to electromagnetic waves (for example light) of a given frequency propagating through a stack of layers at normal incidence. It can be generalized to deal with incidence at an angle, absorbing media, and media with magnetic properties. We assume that the stack layers are normal to the z {\displaystyle z\,} axis and that the field within one layer can be represented as the superposition of a left- and right-traveling wave with wave number k {\displaystyle k\,} , E ( z ) = E r e i k z + E l e − i k z {\displaystyle E(z)=E_{r}e^{ikz}+E_{l}e^{-ikz}\,} . Because it follows from Maxwell's equations that the electric field E {\displaystyle E\,} and the magnetic field (its normalized derivative) H = 1 i k Z c d E d z {\textstyle H={\frac {1}{ikZ_{c}}}{\frac {dE}{dz}}\,} must be continuous across a boundary, it is convenient to represent the field as the vector ( E ( z ) , H ( z ) ) {\textstyle (E(z),H(z))\,} , where H ( z ) = 1 Z c E r e i k z − 1 Z c E l e − i k z {\displaystyle H(z)={\frac {1}{Z_{c}}}E_{r}e^{ikz}-{\frac {1}{Z_{c}}}E_{l}e^{-ikz}\,} . Since there are two equations relating E {\displaystyle E\,} and H {\displaystyle H\,} to E r {\displaystyle E_{r}\,} and E l {\displaystyle E_{l}\,} , these two representations are equivalent.
In the new representation, propagation over a distance L {\displaystyle L\,} into the positive direction of z {\displaystyle z\,} is described by the matrix belonging to the special linear group SL(2, C) M = ( cos ⁡ k L i Z c sin ⁡ k L i Z c sin ⁡ k L cos ⁡ k L ) , {\displaystyle M=\left({\begin{array}{cc}\cos kL&iZ_{c}\sin kL\\{\frac {i}{Z_{c}}}\sin kL&\cos kL\end{array}}\right),} and ( E ( z + L ) H ( z + L ) ) = M ⋅ ( E ( z ) H ( z ) ) {\displaystyle \left({\begin{array}{c}E(z+L)\\H(z+L)\end{array}}\right)=M\cdot \left({\begin{array}{c}E(z)\\H(z)\end{array}}\right)} Such a matrix can represent propagation through a layer if k {\displaystyle k\,} is the wave number in the medium and L {\displaystyle L\,} the thickness of the layer: For a system with N {\displaystyle N\,} layers, each layer j {\displaystyle j\,} has a transfer matrix M j {\displaystyle M_{j}\,} , where j {\displaystyle j\,} increases towards higher z {\displaystyle z\,} values. The system transfer matrix is then M s = M N ⋅ … ⋅ M 2 ⋅ M 1 . {\displaystyle M_{s}=M_{N}\cdot \ldots \cdot M_{2}\cdot M_{1}.} Typically, one would like to know the reflectance and transmittance of the layer structure. If the layer stack starts at z = 0 {\displaystyle z=0\,} , then for negative z {\displaystyle z\,} , the field is described as E L ( z ) = E 0 e i k L z + r E 0 e − i k L z , z < 0 , {\displaystyle E_{L}(z)=E_{0}e^{ik_{L}z}+rE_{0}e^{-ik_{L}z},\qquad z<0,} where E 0 {\displaystyle E_{0}\,} is the amplitude of the incoming wave, k L {\displaystyle k_{L}\,} the wave number in the left medium, and r {\displaystyle r\,} is the amplitude (not intensity!) reflectance coefficient of the layer structure. On the other side of the layer structure, the field consists of a right-propagating transmitted field E R ( z ) = t E 0 e i k R z , z > L ′ , {\displaystyle E_{R}(z)=tE_{0}e^{ik_{R}z},\qquad z>L',} where t {\displaystyle t\,} is the amplitude transmittance, k R {\displaystyle k_{R}\,} is the wave number in the rightmost medium, and L ′ {\displaystyle L'} is the total thickness. If H L = 1 i k Z c d E L d z {\textstyle H_{L}={\frac {1}{ik}}Z_{c}{\frac {dE_{L}}{dz}}\,} and H R = 1 i k Z c d E R d z {\textstyle H_{R}={\frac {1}{ik}}Z_{c}{\frac {dE_{R}}{dz}}\,} , then one can solve ( E ( z R ) H ( z R ) ) = M ⋅ ( E ( 0 ) H ( 0 ) ) {\displaystyle \left({\begin{array}{c}E(z_{R})\\H(z_{R})\end{array}}\right)=M\cdot \left({\begin{array}{c}E(0)\\H(0)\end{array}}\right)} in terms of the matrix elements M m n {\displaystyle M_{mn}\,} of the system matrix M s {\displaystyle M_{s}\,} and obtain t = 2 i k L e − i k R L [ 1 − M 21 + k L k R M 12 + i ( k R M 11 + k L M 22 ) ] {\displaystyle t=2ik_{L}e^{-ik_{R}L}\left[{\frac {1}{-M_{21}+k_{L}k_{R}M_{12}+i(k_{R}M_{11}+k_{L}M_{22})}}\right]} and r = [ ( M 21 + k L k R M 12 ) + i ( k L M 22 − k R M 11 ) ( − M 21 + k L k R M 12 ) + i ( k L M 22 + k R M 11 ) ] {\displaystyle r=\left[{\frac {(M_{21}+k_{L}k_{R}M_{12})+i(k_{L}M_{22}-k_{R}M_{11})}{(-M_{21}+k_{L}k_{R}M_{12})+i(k_{L}M_{22}+k_{R}M_{11})}}\right]} . The transmittance and reflectance (i.e., the fractions of the incident intensity | E 0 | 2 {\textstyle \left|E_{0}\right|^{2}} transmitted and reflected by the layer) are often of more practical use and are given by T = k R k L | t | 2 {\textstyle T={\frac {k_{R}}{k_{L}}}|t|^{2}\,} and R = | r | 2 {\displaystyle R=|r|^{2}\,} , respectively (at normal incidence). 
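For concreteness, here is a minimal NumPy sketch of the formalism above, using the (E, dE/dz) state vector that also underlies the single-layer example below; the function names are our own, and the layer ordering follows the system matrix M_s = M_N ··· M_2 · M_1.

```python
import numpy as np

def layer_matrix(k, d):
    """Transfer matrix of one layer for the state vector (E, dE/dz)."""
    return np.array([[np.cos(k * d), np.sin(k * d) / k],
                     [-k * np.sin(k * d), np.cos(k * d)]])

def stack_rt(k0, n_layers, d_layers, n_left=1.0, n_right=1.0):
    """Amplitude reflectance r and transmittance t of a layer stack.

    k0 is the vacuum wave number; n_layers and d_layers give the refractive
    index and thickness of each layer, ordered from left to right.
    """
    M = np.eye(2)
    for n, d in zip(n_layers, d_layers):
        M = layer_matrix(n * k0, d) @ M          # M_s = M_N ... M_2 M_1
    kL, kR = n_left * k0, n_right * k0
    L = sum(d_layers)                            # total thickness L'
    denom = -M[1, 0] + kL * kR * M[0, 1] + 1j * (kR * M[0, 0] + kL * M[1, 1])
    t = 2j * kL * np.exp(-1j * kR * L) / denom
    r = ((M[1, 0] + kL * kR * M[0, 1])
         + 1j * (kL * M[1, 1] - kR * M[0, 0])) / denom
    return r, t

# Check against the etalon example: a single glass layer (n = 1.5) with
# k'd = pi (a half-wave layer) should give vanishing reflection.
k0 = 2 * np.pi / 550e-9
r, t = stack_rt(k0, [1.5], [np.pi / (1.5 * k0)])
print(abs(r) ** 2)   # ~0, as expected
```

With an identity system matrix (no layers) the same expressions reduce to the single-interface Fresnel coefficients, which is a convenient sanity check of the sign conventions.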
=== Example === As an illustration, consider a single layer of glass with a refractive index n and thickness d suspended in air at a wave number k (in air). In glass, the wave number is k ′ = n k {\displaystyle k'=nk\,} . The transfer matrix is M = ( cos ⁡ k ′ d sin ⁡ ( k ′ d ) / k ′ − k ′ sin ⁡ k ′ d cos ⁡ k ′ d ) {\displaystyle M=\left({\begin{array}{cc}\cos k'd&\sin(k'd)/k'\\-k'\sin k'd&\cos k'd\end{array}}\right)} . The amplitude reflection coefficient can be simplified to r = ( 1 / n − n ) sin ⁡ ( k ′ d ) ( n + 1 / n ) sin ⁡ ( k ′ d ) + 2 i cos ⁡ ( k ′ d ) {\displaystyle r={\frac {(1/n-n)\sin(k'd)}{(n+1/n)\sin(k'd)+2i\cos(k'd)}}} . This configuration effectively describes a Fabry–Pérot interferometer or etalon: for k ′ d = 0 , π , 2 π , ⋯ {\textstyle k'd=0,\pi ,2\pi ,\cdots \,} , the reflection vanishes. == Acoustic waves == It is possible to apply the transfer-matrix method to sound waves. Instead of the electric field E and its derivative H, the displacement u and the stress σ = C d u / d z {\displaystyle \sigma =Cdu/dz} , where C {\displaystyle C} is the p-wave modulus, should be used. == Abeles matrix formalism == The Abeles matrix method is a computationally fast and easy way to calculate the specular reflectivity from a stratified interface, as a function of the perpendicular momentum transfer, Qz: Q z = 4 π λ sin ⁡ θ = 2 k z {\displaystyle Q_{z}={\frac {4\pi }{\lambda }}\sin \theta =2k_{z}} where θ is the angle of incidence/reflection of the incident radiation and λ is the wavelength of the radiation. The measured reflectivity depends on the variation in the scattering length density (SLD) profile, ρ(z), perpendicular to the interface. Although the scattering length density profile is normally a continuously varying function, the interfacial structure can often be well approximated by a slab model in which layers of thickness (dn), scattering length density (ρn) and roughness (σn,n+1) are sandwiched between the super- and sub-phases. One then uses a refinement procedure to minimise the differences between the theoretical and measured reflectivity curves, by changing the parameters that describe each layer. In this description the interface is split into n layers. Since the incident neutron beam is refracted by each of the layers the wavevector k, in layer n, is given by: k n = k z 2 − 4 π ( ρ n − ρ 0 ) {\displaystyle k_{n}={\sqrt {{k_{z}}^{2}-4\pi ({\rho }_{n}-{\rho }_{0})}}} The Fresnel reflection coefficient between layer n and n+1 is then given by: r n , n + 1 = k n − k n + 1 k n + k n + 1 {\displaystyle r_{n,n+1}={\frac {k_{n}-k_{n+1}}{k_{n}+k_{n+1}}}} Because the interface between each layer is unlikely to be perfectly smooth the roughness/diffuseness of each interface modifies the Fresnel coefficient and is accounted for by an error function, r n , n + 1 = k n − k n + 1 k n + k n + 1 exp ⁡ ( − 2 k n k n + 1 σ n , n + 1 2 ) . {\displaystyle r_{n,n+1}={\frac {k_{n}-k_{n+1}}{k_{n}+k_{n+1}}}\exp(-2k_{n}k_{n+1}{\sigma _{n,n+1}}^{2}).} A phase factor, β, is introduced, which accounts for the thickness of each layer. β 0 = 0 {\displaystyle \beta _{0}=0} β n = i k n d n {\displaystyle \beta _{n}=ik_{n}d_{n}} where i2 = −1. A characteristic matrix, cn is then calculated for each layer. 
c n = [ exp ( β n ) r n , n + 1 exp ( β n ) r n , n + 1 exp ( − β n ) exp ( − β n ) ] {\displaystyle c_{n}=\left[{\begin{array}{cc}\exp \left(\beta _{n}\right)&r_{n,n+1}\exp \left(\beta _{n}\right)\\r_{n,n+1}\exp \left(-\beta _{n}\right)&\exp \left(-\beta _{n}\right)\end{array}}\right]} The resultant matrix is defined as the ordered product of these characteristic matrices M = ∏ n c n {\displaystyle M=\prod _{n}c_{n}} from which the reflectivity is calculated as: R = | M 10 M 00 | 2 {\displaystyle R=\left|{\frac {M_{10}}{M_{00}}}\right|^{2}} == See also == Neutron reflectometry Ellipsometry Jones calculus X-ray reflectivity Scattering-matrix method == References == == Further reading == Multilayer Reflectivity: first-principles derivation of the transmission and reflection probabilities from a multilayer with complex indices of refraction. Layered Materials and Photonic Band Diagrams (Lecture 23) in MIT Open Course Electronic, Optical and Magnetic Properties of Materials. EM Wave Propagation Through Thin Films & Multilayers (Lecture 13) in MIT Open Course Nano-to-Macro Transport Processes. Includes a short discussion of acoustic waves. == External links == There are a number of computer programs that implement this calculation: FreeSnell is a stand-alone computer program that implements the transfer-matrix method, including more advanced aspects such as granular films. Thinfilm is a web interface that implements the transfer-matrix method, outputting reflection and transmission coefficients, and also ellipsometric parameters Psi and Delta. Luxpop.com is another web interface that implements the transfer-matrix method. Transfer-matrix calculating programs in Python and in Mathematica. EMPy ("Electromagnetic Python") software. motofit is a program for analysing neutron and X-ray reflectometry data. OpenFilters is a program for designing optical filters. Py_matrix is an open source Python code that implements the transfer-matrix method for multilayers with arbitrary dielectric tensors. It was especially created for plasmonic and magnetoplasmonic calculations. In-browser calculator and fitter Javascript interactive reflectivity calculator using matrix method and Nevot-Croce roughness approximation (calculation kernel converted from C via Emscripten)
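As a complement to the programs listed above, here is a minimal Python sketch of the Abeles recipe just described; the function name is our own, and the units of qz, d and rho must simply be mutually consistent (e.g. Å⁻¹, Å and Å⁻²).

```python
import numpy as np

def abeles_reflectivity(qz, d, rho, sigma):
    """Specular reflectivity R(Qz) via the Abeles matrix method (a sketch).

    d[n], rho[n] : thickness and SLD of layer n (layer 0 is the superphase,
                   whose thickness is ignored; the last layer is the subphase)
    sigma[n]     : roughness of the interface between layers n and n+1
    """
    rho = np.asarray(rho, dtype=float)
    kz = qz / 2.0                                   # Qz = 2 kz
    k = np.sqrt((kz ** 2 - 4.0 * np.pi * (rho - rho[0])).astype(complex))
    M = np.eye(2, dtype=complex)
    for n in range(len(rho) - 1):
        r = (k[n] - k[n + 1]) / (k[n] + k[n + 1]) \
            * np.exp(-2.0 * k[n] * k[n + 1] * sigma[n] ** 2)  # rough Fresnel
        beta = 0.0 if n == 0 else 1j * k[n] * d[n]            # phase factor
        c = np.array([[np.exp(beta), r * np.exp(beta)],
                      [r * np.exp(-beta), np.exp(-beta)]], dtype=complex)
        M = M @ c                                   # ordered product of c_n
    return abs(M[1, 0] / M[0, 0]) ** 2
```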
Wikipedia/Transfer-matrix_method_(optics)
In 3D computer graphics, Schlick’s approximation, named after Christophe Schlick, is a formula for approximating the contribution of the Fresnel factor in the specular reflection of light from a non-conducting interface (surface) between two media. According to Schlick’s model, the specular reflection coefficient R can be approximated by: R ( θ ) = R 0 + ( 1 − R 0 ) ( 1 − cos θ ) 5 {\displaystyle R(\theta )=R_{0}+(1-R_{0})(1-\cos \theta )^{5}} where R 0 = ( n 1 − n 2 n 1 + n 2 ) 2 {\displaystyle R_{0}=\left({\frac {n_{1}-n_{2}}{n_{1}+n_{2}}}\right)^{2}} and where θ {\displaystyle \theta } is half the angle between the incoming and outgoing light directions, n 1 , n 2 {\displaystyle n_{1},\,n_{2}} are the indices of refraction of the two media at the interface, and R 0 {\displaystyle R_{0}} is the reflection coefficient for light incoming parallel to the normal (i.e., the value of the Fresnel term when θ = 0 {\displaystyle \theta =0} , or minimal reflection). In computer graphics, one of the interfaces is usually air, meaning that n 1 {\displaystyle n_{1}} can be approximated well as 1. In microfacet models it is assumed that there is always a perfect reflection, but the normal changes according to a certain distribution, resulting in a non-perfect overall reflection. When using Schlick’s approximation, the normal in the above computation is replaced by the halfway vector. Either the viewing or light direction can be used as the second vector. == See also == Phong reflection model Blinn-Phong shading model Fresnel equations == References ==
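A minimal sketch of the formula above in Python; the function name and the default indices of refraction (air against an assumed glass of n = 1.5) are our own choices for illustration.

```python
def fresnel_schlick(cos_theta: float, n1: float = 1.0, n2: float = 1.5) -> float:
    """Schlick's approximation of the specular reflection coefficient.

    cos_theta is the cosine of the angle described above (in microfacet
    models, between the halfway vector and the light or view direction);
    n1 = 1 assumes the first medium is air, n2 = 1.5 an assumed glass.
    """
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# At grazing incidence (cos_theta -> 0) the reflectance tends to 1,
# while at normal incidence it reduces to R0 (~0.04 for glass against air).
```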
Wikipedia/Schlick's_approximation
In linear algebra, a pseudoscalar is a quantity that behaves like a scalar, except that it changes sign under a parity inversion while a true scalar does not. A pseudoscalar, when multiplied by an ordinary vector, becomes a pseudovector (or axial vector); a similar construction creates the pseudotensor. A pseudoscalar also results from any scalar product between a pseudovector and an ordinary vector. The prototypical example of a pseudoscalar is the scalar triple product, which can be written as the scalar product between one of the vectors in the triple product and the cross product between the two other vectors, where the latter is a pseudovector. == In physics == In physics, a pseudoscalar denotes a physical quantity analogous to a scalar. Both are physical quantities which assume a single value which is invariant under proper rotations. However, under the parity transformation, pseudoscalars flip their signs while scalars do not. As reflections through a plane are the combination of a rotation with the parity transformation, pseudoscalars also change signs under reflections. === Motivation === One of the most powerful ideas in physics is that physical laws do not change when one changes the coordinate system used to describe these laws. That a pseudoscalar reverses its sign when the coordinate axes are inverted suggests that it is not the best object to describe a physical quantity. In 3D-space, quantities described by a pseudovector are antisymmetric tensors of order 2, which are invariant under inversion. The pseudovector may be a simpler representation of that quantity, but suffers from the change of sign under inversion. Similarly, in 3D-space, the Hodge dual of a scalar is equal to a constant times the 3-dimensional Levi-Civita pseudotensor (or "permutation" pseudotensor); whereas the Hodge dual of a pseudoscalar is an antisymmetric (pure) tensor of order three. The Levi-Civita pseudotensor is a completely antisymmetric pseudotensor of order 3. Since the dual of the pseudoscalar is the product of two "pseudo-quantities", the resulting tensor is a true tensor, and does not change sign upon an inversion of axes. The situation is similar to the situation for pseudovectors and antisymmetric tensors of order 2. The dual of a pseudovector is an antisymmetric tensor of order 2 (and vice versa). The tensor is an invariant physical quantity under a coordinate inversion, while the pseudovector is not invariant. The situation can be extended to any dimension. Generally in an n-dimensional space the Hodge dual of an order r tensor will be an antisymmetric pseudotensor of order (n − r) and vice versa. In particular, in the four-dimensional spacetime of special relativity, a pseudoscalar is the dual of a fourth-order tensor and is proportional to the four-dimensional Levi-Civita pseudotensor. === Examples === The stream function ψ ( x , y ) {\displaystyle \psi (x,y)} for a two-dimensional, incompressible fluid flow v ( x , y ) = ⟨ ∂ y ψ , − ∂ x ψ ⟩ {\displaystyle \mathbf {v} (x,y)=\langle \partial _{y}\psi ,-\partial _{x}\psi \rangle } . Magnetic charge is a pseudoscalar as it is mathematically defined, regardless of whether it exists physically. Magnetic flux is the result of a dot product between a vector (the surface normal) and pseudovector (the magnetic field). Helicity is the projection (dot product) of a spin pseudovector onto the direction of momentum (a true vector). Pseudoscalar particles, i.e. 
particles with spin 0 and odd parity, that is, particles with no intrinsic spin whose wave function changes sign under parity inversion. Examples are pseudoscalar mesons. == In geometric algebra == A pseudoscalar in a geometric algebra is a highest-grade element of the algebra. For example, in two dimensions there are two orthogonal basis vectors, e 1 {\displaystyle e_{1}} , e 2 {\displaystyle e_{2}} and the associated highest-grade basis element is e 1 e 2 = e 12 . {\displaystyle e_{1}e_{2}=e_{12}.} So a pseudoscalar is a multiple of e 12 {\displaystyle e_{12}} . The element e 12 {\displaystyle e_{12}} squares to −1 and commutes with all even elements – behaving therefore like the imaginary scalar i {\displaystyle i} in the complex numbers. It is these scalar-like properties which give rise to its name. In this setting, a pseudoscalar changes sign under a parity inversion, since if ( e 1 , e 2 ) ↦ ( u 1 , u 2 ) {\displaystyle (e_{1},e_{2})\mapsto (u_{1},u_{2})} is a change of basis representing an orthogonal transformation, then e 1 e 2 ↦ u 1 u 2 = ± e 1 e 2 , {\displaystyle e_{1}e_{2}\mapsto u_{1}u_{2}=\pm e_{1}e_{2},} where the sign depends on the determinant of the transformation. Pseudoscalars in geometric algebra thus correspond to the pseudoscalars in physics. == References ==
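The sign behavior described in the geometric algebra section can be checked with a tiny numerical sketch; the 2×2 real-matrix representation of e1 and e2 used here is an assumption for illustration (any pair of anticommuting matrices squaring to +1 would do).

```python
import numpy as np

# Represent the 2-D geometric algebra with real 2x2 matrices:
# e1 and e2 anticommute and square to +1, so e12 = e1 e2 squares to -1.
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])
e12 = e1 @ e2

print(e12 @ e12)    # minus the identity: e12 behaves like the imaginary unit i

# A reflection (determinant -1) maps (e1, e2) -> (e1, -e2) and flips the
# sign of the pseudoscalar, while a rotation (determinant +1) would not.
u1, u2 = e1, -e2
print(u1 @ u2)      # equals -e12
```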
Wikipedia/Pseudoscalar_(physics)
In numerical analysis, the uniform theory of diffraction (UTD) is a high-frequency method for solving electromagnetic scattering problems from electrically small discontinuities or discontinuities in more than one dimension at the same point. UTD is an extension of Joseph Keller's geometrical theory of diffraction (GTD) and was introduced by Robert Kouyoumjian and Prabhakar Pathak in 1974. The uniform theory of diffraction approximates near-field electromagnetic fields as quasi-optical and uses knife-edge diffraction to determine diffraction coefficients for each diffracting object-source combination. These coefficients are then used to calculate the field strength and phase for each direction away from the diffracting point. These fields are then added to the incident fields and reflected fields to obtain a total solution. == See also == Electromagnetic modeling Biot–Tolstoy–Medwin diffraction model == References == == External links == Overview of Asymptotic Expansion Methods in Electromagnetics
Wikipedia/Geometric_theory_of_diffraction
An eikonal equation (from Greek εἰκών, image) is a non-linear first-order partial differential equation that is encountered in problems of wave propagation. The classical eikonal equation in geometric optics is a differential equation of the form | ∇ u ( x ) | = n ( x ) {\displaystyle |\nabla u(x)|=n(x)} (1) where x {\displaystyle x} lies in an open subset of R n {\displaystyle \mathbb {R} ^{n}} , n ( x ) {\displaystyle n(x)} is a positive function, ∇ {\displaystyle \nabla } denotes the gradient, and | ⋅ | {\displaystyle |\cdot |} is the Euclidean norm. The function n {\displaystyle n} is given and one seeks solutions u {\displaystyle u} . In the context of geometric optics, the function n {\displaystyle n} is the refractive index of the medium. More generally, an eikonal equation is an equation of the form H ( x , ∇ u ( x ) ) = 0 {\displaystyle H(x,\nabla u(x))=0} (2) where H {\displaystyle H} is a function of 2 n {\displaystyle 2n} variables. Here the function H {\displaystyle H} is given, and u {\displaystyle u} is the solution. If H ( x , y ) = | y | − n ( x ) {\displaystyle H(x,y)=|y|-n(x)} , then equation (2) becomes (1). Eikonal equations naturally arise in the WKB method and the study of Maxwell's equations. Eikonal equations provide a link between physical (wave) optics and geometric (ray) optics. One fast computational algorithm to approximate the solution to the eikonal equation is the fast marching method. == History == The term "eikonal" was first used in the context of geometric optics by Heinrich Bruns. However, the actual equation appears earlier in the seminal work of William Rowan Hamilton on geometric optics. == Physical interpretation == === Continuous shortest-path problems === Suppose that Ω {\displaystyle \Omega } is an open set with suitably smooth boundary ∂ Ω {\displaystyle \partial \Omega } . The solution to the eikonal equation | ∇ u ( x ) | = 1 f ( x ) for x ∈ Ω ⊂ R n , {\displaystyle \left|\nabla u(x)\right|={\frac {1}{f(x)}}{\text{ for }}x\in \Omega \subset \mathbb {R} ^{n},} u ( x ) = q ( x ) for x ∈ ∂ Ω {\displaystyle u(x)=q(x){\text{ for }}x\in \partial \Omega } can be interpreted as the minimal amount of time required to travel from x {\displaystyle x} to ∂ Ω {\displaystyle \partial \Omega } , where f : Ω ¯ → ( 0 , + ∞ ) {\displaystyle f:{\bar {\Omega }}\to (0,+\infty )} is the speed of travel, and q : ∂ Ω → [ 0 , + ∞ ) {\displaystyle q:\partial \Omega \to [0,+\infty )} is an exit-time penalty. (Alternatively this can be posed as a minimal cost-to-exit by making the right-side C ( x ) / f ( x ) {\displaystyle C(x)/f(x)} and q {\displaystyle q} an exit-cost penalty.) In the special case when f = 1 {\displaystyle f=1} , the solution gives the signed distance from ∂ Ω {\displaystyle \partial \Omega } . By assuming that ∇ u ( x ) {\displaystyle \nabla u(x)} exists at all points, it is easy to prove that u ( x ) {\displaystyle u(x)} corresponds to a time-optimal control problem using Bellman's optimality principle and a Taylor expansion. Unfortunately, it is not guaranteed that ∇ u ( x ) {\displaystyle \nabla u(x)} exists at all points, and more advanced techniques are necessary to prove this. This led to the development of viscosity solutions in the 1980s by Pierre-Louis Lions and Michael G. Crandall, and Lions won a Fields Medal for his contributions. === Electromagnetic potential === The physical meaning of the eikonal equation is related to the formula E = − ∇ V , {\displaystyle \mathbf {E} =-\nabla V,} where E {\displaystyle \mathbf {E} } is the electric field strength, and V {\displaystyle V} is the electric potential.
There is a similar equation for velocity potential in fluid flow and temperature in heat transfer. The physical meaning of this equation in the electromagnetic example is that any charge in the region is pushed to move at right angles to the lines of constant potential, and along lines of force determined by the field of the E vector and the sign of the charge. Ray optics and electromagnetism are related by the fact that the eikonal equation gives a second electromagnetic formula of the same form as the potential equation above, where the line of constant potential has been replaced by a line of constant phase, and the force lines have been replaced by normal vectors coming out of the constant phase line at right angles. The magnitude of these normal vectors is given by the square root of the relative permittivity. The line of constant phase can be considered the edge of one of the advancing light waves (wavefront). The normal vectors are the rays the light is traveling down in ray optics. == Computational algorithms == Several fast and efficient algorithms to solve the eikonal equation have been developed since the 1990s. Many of these algorithms take advantage of algorithms developed much earlier for shortest path problems on graphs with nonnegative edge lengths. These algorithms take advantage of the causality provided by the physical interpretation and typically discretize the domain using a mesh or regular grid and calculate the solution at each discretized point. Eikonal solvers on triangulated surfaces were introduced by Kimmel and Sethian in 1998. Sethian's fast marching method (FMM) was the first "fast and efficient" algorithm created to solve the eikonal equation. The original description discretizes the domain Ω ⊂ R n {\displaystyle \Omega \subset \mathbb {R} ^{n}} into a regular grid and "marches" the solution from "known" values to the undiscovered regions, precisely mirroring the logic of Dijkstra's algorithm. If Ω {\displaystyle \Omega } is discretized and has M {\displaystyle M} meshpoints, then the computational complexity is O ( M log M ) {\displaystyle O(M\log M)} where the log {\displaystyle \log } term comes from the use of a heap (typically binary). A number of modifications can be prescribed to FMM since it is classified as a label-setting method. In addition, FMM has been generalized to operate on general meshes that discretize the domain. Label-correcting methods such as the Bellman–Ford algorithm can also be used to solve the discretized eikonal equation, with numerous modifications allowed (e.g. "Small Labels First" or "Large Labels Last"). Two-queue methods have also been developed that are essentially a version of the Bellman–Ford algorithm, except that two queues are used, with a threshold determining which queue a gridpoint should be assigned to based on local information. Sweeping algorithms such as the fast sweeping method (FSM) are highly efficient for solving eikonal equations when the corresponding characteristic curves do not change direction very often. These algorithms are label-correcting but do not make use of a queue or heap, and instead prescribe different orderings for the gridpoints to be updated and iterate through these orderings until convergence. Some improvements were introduced, such as "locking" gridpoints during a sweep if they do not receive an update, but on highly refined grids and higher-dimensional spaces there is still a large overhead due to having to pass through every gridpoint.
Parallel methods have been introduced that attempt to decompose the domain and perform sweeping on each decomposed subset. Zhao's parallel implementation decomposes the domain into n {\displaystyle n} -dimensional subsets and then runs an individual FSM on each subset. Detrixhe's parallel implementation also decomposes the domain, but parallelizes each individual sweep so that processors are responsible for updating gridpoints in an ( n − 1 ) {\displaystyle (n-1)} -dimensional hyperplane until the entire domain is fully swept. Hybrid methods have also been introduced that take advantage of FMM's efficiency with FSM's simplicity. For example, the Heap Cell Method (HCM) decomposes the domain into cells and performs FMM on the cell-domain, and each time a "cell" is updated FSM is performed on the local gridpoint-domain that lies within that cell. A parallelized version of HCM has also been developed. == Numerical approximation == For simplicity assume that Ω {\displaystyle \Omega } is discretized into a uniform grid with spacings h x {\displaystyle h_{x}} and h y {\displaystyle h_{y}} in the x and y directions, respectively. === 2D approximation on a Cartesian grid === Assume that a gridpoint x i j {\displaystyle x_{ij}} has value U i j = U ( x i j ) ≈ u ( x i j ) {\displaystyle U_{ij}=U(x_{ij})\approx u(x_{ij})} . A first-order scheme to approximate the partial derivatives is max ( D i j − x U , − D i j + x U , 0 ) 2 + max ( D i j − y U , − D i j + y U , 0 ) 2 = 1 f i j 2 {\displaystyle \max \left(D_{ij}^{-x}U,-D_{ij}^{+x}U,0\right)^{2}+\max \left(D_{ij}^{-y}U,-D_{ij}^{+y}U,0\right)^{2}\ =\ {\frac {1}{f_{ij}^{2}}}} where u x ( x i j ) ≈ D i j ± x U = U i ± 1 , j − U i j ± h x and u y ( x i j ) ≈ D i j ± y U = U i , j ± 1 − U i j ± h y . {\displaystyle u_{x}(x_{ij})\approx D_{ij}^{\pm x}U={\frac {U_{i\pm 1,j}-U_{ij}}{\pm h_{x}}}\quad {\text{ and }}\quad u_{y}(x_{ij})\approx D_{ij}^{\pm y}U={\frac {U_{i,j\pm 1}-U_{ij}}{\pm h_{y}}}.} Due to the consistent, monotone, and causal properties of this discretization it is easy to show that if U X = min ( U i − 1 , j , U i + 1 , j ) {\displaystyle U_{X}=\min(U_{i-1,j},U_{i+1,j})} and U Y = min ( U i , j − 1 , U i , j + 1 ) {\displaystyle U_{Y}=\min(U_{i,j-1},U_{i,j+1})} and | U X / h x − U Y / h y | ≤ 1 / f i j {\displaystyle |U_{X}/h_{x}-U_{Y}/h_{y}|\leq 1/f_{ij}} then ( U i j − U X h x ) 2 + ( U i j − U Y h y ) 2 = 1 f i j 2 {\displaystyle \left({\frac {U_{ij}-U_{X}}{h_{x}}}\right)^{2}+\left({\frac {U_{ij}-U_{Y}}{h_{y}}}\right)^{2}={\frac {1}{f_{ij}^{2}}}} which can be solved as a quadratic. In the limiting case of h x = h y = h {\displaystyle h_{x}=h_{y}=h} , this reduces to U i j = U X + U Y 2 + 1 2 ( U X + U Y ) 2 − 2 ( U X 2 + U Y 2 − h 2 f i j 2 ) . {\displaystyle U_{ij}={\frac {U_{X}+U_{Y}}{2}}+{\frac {1}{2}}{\sqrt {(U_{X}+U_{Y})^{2}-2\left(U_{X}^{2}+U_{Y}^{2}-{\frac {h^{2}}{f_{ij}^{2}}}\right)}}.} This solution will always exist as long as | U X − U Y | ≤ 2 h / f i j {\displaystyle |U_{X}-U_{Y}|\leq {\sqrt {2}}h/f_{ij}} is satisfied and is larger than both, U X {\displaystyle U_{X}} and U Y {\displaystyle U_{Y}} , as long as | U X − U Y | ≤ h / f i j {\displaystyle |U_{X}-U_{Y}|\leq h/f_{ij}} . If | U X / h x − U Y / h y | ≥ 1 / f i j {\displaystyle |U_{X}/h_{x}-U_{Y}/h_{y}|\geq 1/f_{ij}} , a lower-dimensional update must be performed by assuming one of the partial derivatives is 0 {\displaystyle 0} : U i j = min ( U X + h x f i j , U Y + h y f i j ) . 
{\displaystyle U_{ij}=\min \left(U_{X}+{\frac {h_{x}}{f_{ij}}},U_{Y}+{\frac {h_{y}}{f_{ij}}}\right).} === n-D approximation on a Cartesian grid === Assume that a grid point x {\displaystyle x} has value U = U ( x ) ≈ u ( x ) {\displaystyle U=U(x)\approx u(x)} . Repeating the same steps as in the n = 2 {\displaystyle n=2} case we can use a first-order scheme to approximate the partial derivatives. Let U i {\displaystyle U_{i}} be the minimum of the values of the neighbors in the ± e i {\displaystyle \pm \mathbf {e} _{i}} directions, where e i {\displaystyle \mathbf {e} _{i}} is a standard unit basis vector. The approximation is then ∑ j = 1 n ( U − U j h ) 2 = 1 f i 2 . {\displaystyle \sum _{j=1}^{n}\left({\frac {U-U_{j}}{h}}\right)^{2}\ =\ {\frac {1}{f_{i}^{2}}}.} Solving this quadratic equation for U {\displaystyle U} yields: U = 1 n ∑ j = 1 n U j + 1 n ( ∑ j = 1 n U j ) 2 − n ( ∑ j = 1 n U j 2 − h 2 f i 2 ) . {\displaystyle U={\frac {1}{n}}\sum _{j=1}^{n}U_{j}+{\frac {1}{n}}{\sqrt {\left(\sum _{j=1}^{n}U_{j}\right)^{2}-n\left(\sum _{j=1}^{n}U_{j}^{2}-{\frac {h^{2}}{f_{i}^{2}}}\right)}}.} If the discriminant in the square root is negative, then a lower-dimensional update must be performed (i.e. one of the partial derivatives is 0 {\displaystyle 0} ). If n = 2 {\displaystyle n=2} then perform the one-dimensional update U = h f i + min j = 1 , … , n U j . {\displaystyle U={\frac {h}{f_{i}}}+\min _{j=1,\ldots ,n}U_{j}.} If n ≥ 3 {\displaystyle n\geq 3} then perform an n − 1 {\displaystyle n-1} dimensional update using the values { U 1 , … , U n } ∖ { U i } {\displaystyle \{U_{1},\ldots ,U_{n}\}\setminus \{U_{i}\}} for every i = 1 , … , n {\displaystyle i=1,\ldots ,n} and choose the smallest. == Mathematical description == An eikonal equation is one of the form H ( x , ∇ u ( x ) ) = 0 {\displaystyle H(x,\nabla u(x))=0} u ( 0 , x ′ ) = u 0 ( x ′ ) , for x = ( x 1 , x ′ ) {\displaystyle u(0,x')=u_{0}(x'),{\text{ for }}x=(x_{1},x')} The plane x = ( 0 , x ′ ) {\displaystyle x=(0,x')} can be thought of as the initial condition, by thinking of x 1 {\displaystyle x_{1}} as t . {\displaystyle t.} We could also solve the equation on a subset of this plane, or on a curved surface, with obvious modifications. The eikonal equation shows up in geometrical optics, which is a way of studying solutions of the wave equation c 2 | ∇ x u | 2 = | ∂ t u | 2 {\displaystyle c^{2}|\nabla _{x}u|^{2}=|\partial _{t}u|^{2}} , where c ( x ) {\displaystyle c(x)} and u ( x , t ) {\displaystyle u(x,t)} . In geometric optics, the eikonal equation describes the phase fronts of waves. Under reasonable hypothesis on the "initial" data, the eikonal equation admits a local solution, but a global smooth solution (e.g. a solution for all time in the geometrical optics case) is not possible. The reason is that caustics may develop. In the geometrical optics case, this means that wavefronts cross. We can solve the eikonal equation using the method of characteristics. One must impose the "non-characteristic" hypothesis ∂ p 1 H ( x , p ) ≠ 0 {\displaystyle \partial _{p_{1}}H(x,p)\neq 0} along the initial hypersurface x = ( 0 , x ′ ) {\displaystyle x=(0,x')} , where H = H(x,p) and p = (p1,...,pn) is the variable that gets replaced by ∇u. Here x = (x1,...,xn) = (t,x′). First, solve the problem H ( x , ξ ( x ) ) = 0 {\displaystyle H(x,\xi (x))=0} , ξ ( x ) = ∇ u ( x ) , x ∈ H {\displaystyle \xi (x)=\nabla u(x),x\in H} . 
This is done by defining curves (and values of ξ {\displaystyle \xi } on those curves) as x ˙ ( s ) = ∇ ξ H ( x ( s ) , ξ ( s ) ) , ξ ˙ ( s ) = − ∇ x H ( x ( s ) , ξ ( s ) ) . {\displaystyle {\dot {x}}(s)=\nabla _{\xi }H(x(s),\xi (s)),\;\;\;\;{\dot {\xi }}(s)=-\nabla _{x}H(x(s),\xi (s)).} x ( 0 ) = x 0 , ξ ( x ( 0 ) ) = ∇ u ( x ( 0 ) ) . {\displaystyle x(0)=x_{0},\;\;\;\;\xi (x(0))=\nabla u(x(0)).} Note that even before we have a solution u {\displaystyle u} , we know ∇ u ( x ) {\displaystyle \nabla u(x)} for x = ( 0 , x ′ ) {\displaystyle x=(0,x')} due to our equation for H {\displaystyle H} . That these equations have a solution for some interval 0 ≤ s < s 1 {\displaystyle 0\leq s<s_{1}} follows from standard ODE theorems (using the non-characteristic hypothesis). These curves fill out an open set around the plane x = ( 0 , x ′ ) {\displaystyle x=(0,x')} . Thus the curves define the value of ξ {\displaystyle \xi } in an open set about our initial plane. Once defined as such it is easy to see using the chain rule that ∂ s H ( x ( s ) , ξ ( s ) ) = 0 {\displaystyle \partial _{s}H(x(s),\xi (s))=0} , and therefore H = 0 {\displaystyle H=0} along these curves. We want our solution u {\displaystyle u} to satisfy ∇ u = ξ {\displaystyle \nabla u=\xi } , or more specifically, for every s {\displaystyle s} , ( ∇ u ) ( x ( s ) ) = ξ ( x ( s ) ) . {\displaystyle (\nabla u)(x(s))=\xi (x(s)).} Assuming for a minute that this is possible, for any solution u ( x ) {\displaystyle u(x)} we must have d d s u ( x ( s ) ) = ∇ u ( x ( s ) ) ⋅ x ˙ ( s ) = ξ ⋅ ∂ H ∂ ξ , {\displaystyle {\frac {d}{ds}}u(x(s))=\nabla u(x(s))\cdot {\dot {x}}(s)=\xi \cdot {\frac {\partial H}{\partial \xi }},} and therefore u ( x ( t ) ) = u ( x ( 0 ) ) + ∫ 0 t ξ ( x ( s ) ) ⋅ x ˙ ( s ) d s . {\displaystyle u(x(t))=u(x(0))+\int _{0}^{t}\xi (x(s))\cdot {\dot {x}}(s)\,ds.} In other words, the solution u {\displaystyle u} will be given in a neighborhood of the initial plane by an explicit equation. However, since the different paths x ( t ) {\displaystyle x(t)} , starting from different initial points may cross, the solution may become multi-valued, at which point we have developed caustics. We also have (even before showing that u {\displaystyle u} is a solution) ξ ( x ( t ) ) = ξ ( x ( 0 ) ) − ∫ 0 t ∇ x H ( x ( s ) , ξ ( x ( s ) ) ) d s . {\displaystyle \xi (x(t))=\xi (x(0))-\int _{0}^{t}\nabla _{x}H(x(s),\xi (x(s)))\,ds.} It remains to show that ξ {\displaystyle \xi } , which we have defined in a neighborhood of our initial plane, is the gradient of some function u {\displaystyle u} . This will follow if we show that the vector field ξ {\displaystyle \xi } is curl free. Consider the first term in the definition of ξ {\displaystyle \xi } . This term, ξ ( x ( 0 ) ) = ∇ u ( x ( 0 ) ) {\displaystyle \xi (x(0))=\nabla u(x(0))} is curl free as it is the gradient of a function. As for the other term, we note ∂ 2 ∂ x k ∂ x j H = ∂ 2 ∂ x j ∂ x k H . {\displaystyle {\frac {\partial ^{2}}{\partial x_{k}\,\partial x_{j}}}H={\frac {\partial ^{2}}{\partial x_{j}\,\partial x_{k}}}H.} The result follows. == Applications == A concrete application is the computation of radiowave attenuation in the atmosphere. Finding the shape from shading in computer vision. Geometric optics Continuous shortest path problems Image segmentation Study of the shape for a solid propellant rocket grain == See also == Hamilton–Jacobi–Bellman equation Hamilton–Jacobi equation Fermat's principle == References == == Further reading == Paris, D. T.; Hurd, F. K. (1969). 
Basic Electromagnetic Theory. McGraw-Hill. pp. 383–385. ISBN 0-07-048470-8. Arnold, V. I. (2004). Lectures on Partial Differential Equations (2nd ed.). Springer. pp. 2–3. ISBN 3-540-40448-1. == External links == The linearized eikonal equation English translation of "Das Eikonal" by Heinrich Bruns
Wikipedia/Eikonal_equation
The optical transfer function (OTF) of an optical system such as a camera, microscope, human eye, or projector is a scale-dependent description of its imaging contrast. Its magnitude is the image contrast of the harmonic intensity pattern, 1 + cos ⁡ ( 2 π ν ⋅ x ) {\displaystyle 1+\cos(2\pi \nu \cdot x)} , as a function of the spatial frequency, ν {\displaystyle \nu } , while its complex argument indicates a phase shift in the periodic pattern. The optical transfer function is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next item in the optical transmission chain. Formally, the optical transfer function is defined as the Fourier transform of the point spread function (PSF, that is, the impulse response of the optics, the image of a point source). As a Fourier transform, the OTF is generally complex-valued; however, it is real-valued in the common case of a PSF that is symmetric about its center. In practice, the imaging contrast, as given by the magnitude or modulus of the optical-transfer function, is of primary importance. This derived function is commonly referred to as the modulation transfer function (MTF). The image on the right shows the optical transfer functions for two different optical systems in panels (a) and (d). The former corresponds to the ideal, diffraction-limited, imaging system with a circular pupil. Its transfer function decreases gradually with spatial frequency until it reaches the diffraction limit, in this case at 500 cycles per millimeter or a period of 2 μm. Since periodic features as small as this period are captured by this imaging system, it could be said that its resolution is 2 μm. Panel (d) shows an optical system that is out of focus. This leads to a sharp reduction in contrast compared to the diffraction-limited imaging system. It can be seen that the contrast is zero around 250 cycles/mm, or periods of 4 μm. This explains why the images for the out-of-focus system (e,f) are more blurry than those of the diffraction-limited system (b,c). Note that although the out-of-focus system has very low contrast at spatial frequencies around 250 cycles/mm, the contrast at spatial frequencies near the diffraction limit of 500 cycles/mm is diffraction-limited. Close observation of the image in panel (f) shows that the image of the large spoke densities near the center of the spoke target is relatively sharp. == Definition and related concepts == Since the optical transfer function (OTF) is defined as the Fourier transform of the point-spread function (PSF), it is generally speaking a complex-valued function of spatial frequency. The projection of a specific periodic pattern is represented by a complex number with absolute value and complex argument proportional to the relative contrast and translation of the projected pattern, respectively. Often the contrast reduction is of most interest and the translation of the pattern can be ignored. The relative contrast is given by the absolute value of the optical transfer function, a function commonly referred to as the modulation transfer function (MTF). Its values indicate how much of the object's contrast is captured in the image as a function of spatial frequency. The MTF tends to decrease with increasing spatial frequency from 1 to 0 (at the diffraction limit); however, the function is often not monotonic.
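Because the OTF is simply the Fourier transform of the point spread function, it can be computed numerically from a sampled PSF. The following is a minimal Python/NumPy sketch under the assumptions that the PSF is sampled on a uniform grid and centred in its array; the normalization to MTF(0) = 1 anticipates the convention discussed below, and the function name is illustrative.

```python
import numpy as np

def otf_from_psf(psf, dx=1.0):
    """Compute the OTF, MTF and PhTF from a sampled 2-D point spread function.

    psf : 2-D array of non-negative intensities (the image of a point source),
          centred in the array
    dx  : sample spacing, used only to express the spatial-frequency axes
    Returns the complex OTF normalized so that MTF(0) = 1, its magnitude
    (MTF) and phase (PhTF), plus frequency axes in cycles per unit length.
    """
    otf = np.fft.fft2(np.fft.ifftshift(psf))   # Fourier transform of the impulse response
    otf /= otf[0, 0]                           # normalize to the total detected intensity
    otf = np.fft.fftshift(otf)                 # move zero frequency to the array centre
    mtf = np.abs(otf)
    phtf = np.angle(otf)
    freqs = [np.fft.fftshift(np.fft.fftfreq(n, d=dx)) for n in psf.shape]
    return otf, mtf, phtf, freqs

# Example: a Gaussian blur PSF.
y, x = np.mgrid[-64:64, -64:64]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
otf, mtf, phtf, (fy, fx) = otf_from_psf(psf, dx=1.0)
print(mtf[64, 64])   # 1.0 at zero spatial frequency
```

For this symmetric Gaussian example the OTF is real and non-negative, so the computed phase is essentially zero, consistent with the real-valued OTF of a centre-symmetric PSF mentioned above.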
On the other hand, when also the pattern translation is important, the complex argument of the optical transfer function can be depicted as a second real-valued function, commonly referred to as the phase transfer function (PhTF). The complex-valued optical transfer function can be seen as a combination of these two real-valued functions: O T F ( ν ) = M T F ( ν ) e i P h T F ( ν ) {\displaystyle \mathrm {OTF} (\nu )=\mathrm {MTF} (\nu )e^{i\,\mathrm {PhTF} (\nu )}} where M T F ( ν ) = | O T F ( ν ) | , {\displaystyle \mathrm {MTF} (\nu )=\left\vert \mathrm {OTF} (\nu )\right\vert ,} P h T F ( ν ) = a r g ( O T F ( ν ) ) , {\displaystyle \mathrm {PhTF} (\nu )=\mathrm {arg} (\mathrm {OTF} (\nu )),} and a r g ( ⋅ ) {\displaystyle \mathrm {arg} (\cdot )} represents the complex argument function, while ν {\displaystyle \nu } is the spatial frequency of the periodic pattern. In general ν {\displaystyle \nu } is a vector with a spatial frequency for each dimension, i.e. it indicates also the direction of the periodic pattern. The impulse response of a well-focused optical system is a three-dimensional intensity distribution with a maximum at the focal plane, and could thus be measured by recording a stack of images while displacing the detector axially. By consequence, the three-dimensional optical transfer function can be defined as the three-dimensional Fourier transform of the impulse response. Although typically only a one-dimensional, or sometimes a two-dimensional section is used, the three-dimensional optical transfer function can improve the understanding of microscopes such as the structured illumination microscope. True to the definition of transfer function, O T F ( 0 ) = M T F ( 0 ) {\displaystyle \mathrm {OTF} (0)=\mathrm {MTF} (0)} should indicate the fraction of light that was detected from the point source object. However, typically the contrast relative to the total amount of detected light is most important. It is thus common practice to normalize the optical transfer function to the detected intensity, hence M T F ( 0 ) ≡ 1 {\displaystyle \mathrm {MTF} (0)\equiv 1} . Generally, the optical transfer function depends on factors such as the spectrum and polarization of the emitted light and the position of the point source. E.g. the image contrast and resolution are typically optimal at the center of the image, and deteriorate toward the edges of the field-of-view. When significant variation occurs, the optical transfer function may be calculated for a set of representative positions or colors. Sometimes it is more practical to define the transfer functions based on a binary black-white stripe pattern. The transfer function for an equal-width black-white periodic pattern is referred to as the contrast transfer function (CTF). == Examples == === Ideal lens system === A perfect lens system will provide a high contrast projection without shifting the periodic pattern, hence the optical transfer function is identical to the modulation transfer function. Typically the contrast will reduce gradually towards zero at a point defined by the resolution of the optics. For example, a perfect, non-aberrated, f/4 optical imaging system used, at the visible wavelength of 500 nm, would have the optical transfer function depicted in the right hand figure. It can be read from the plot that the contrast gradually reduces and reaches zero at the spatial frequency of 500 cycles per millimeter, in other words the optical resolution of the image projection is 1/500th of a millimeter, or 2 micrometer. 
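For the ideal f/4 example above, the diffraction-limited contrast curve can be evaluated directly. The sketch below uses the closed-form circular-aperture expression derived later in the Calculation section, together with the incoherent cutoff frequency 1/(λ·N) for f-number N (a standard relation assumed here, consistent with the 500 cycles per millimetre quoted in the text); the sampled frequencies and variable names are illustrative.

```python
import numpy as np

# Diffraction-limited MTF of a circular pupil (see the Calculation section):
# MTF(nu) = (2/pi) * (arccos(nu) - nu * sqrt(1 - nu^2)), nu normalized to the cutoff.
def mtf_circular(nu):
    nu = np.clip(np.abs(nu), 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu**2))

wavelength_mm = 500e-6                      # 500 nm expressed in millimetres
f_number = 4.0                              # the ideal f/4 example above
cutoff = 1.0 / (wavelength_mm * f_number)   # cycles per millimetre
print(cutoff)                               # 500.0, matching the text

for lp_per_mm in (50, 125, 250, 400, 500):
    print(lp_per_mm, round(mtf_circular(lp_per_mm / cutoff), 3))
```

The printed cutoff of 500 cycles per millimetre reproduces the resolution figure quoted above, and the tabulated MTF values show how contrast falls off well before the cutoff is reached.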
Correspondingly, for this particular imaging device, the spokes become more and more blurred towards the center until they merge into a gray, unresolved, disc. Note that sometimes the optical transfer function is given in units of the object or sample space, observation angle, film width, or normalized to the theoretical maximum. Conversion between units is typically a matter of a multiplication or division. E.g. a microscope typically magnifies everything 10 to 100-fold, and a reflex camera will generally demagnify objects at a distance of 5 meters by a factor of 100 to 200. The resolution of a digital imaging device is not only limited by the optics, but also by the number of pixels, more in particular by their separation distance. As explained by the Nyquist–Shannon sampling theorem, to match the optical resolution of the given example, the pixels of each color channel should be separated by 1 micrometer, half the period of 500 cycles per millimeter. A higher number of pixels on the same sensor size will not allow the resolution of finer detail. On the other hand, when the pixel spacing is larger than 1 micrometer, the resolution will be limited by the separation between pixels; moreover, aliasing may lead to a further reduction of the image fidelity. === Imperfect lens system === An imperfect, aberrated imaging system could possess the optical transfer function depicted in the following figure. As with the ideal lens system, the contrast reaches zero at the spatial frequency of 500 cycles per millimeter. However, at lower spatial frequencies the contrast is considerably lower than that of the perfect system in the previous example. In fact, the contrast becomes zero on several occasions even for spatial frequencies lower than 500 cycles per millimeter. This explains the gray circular bands in the spoke image shown in the above figure. In between the gray bands, the spokes appear to invert from black to white and vice versa; this is referred to as contrast inversion. It is directly related to the sign reversal in the real part of the optical transfer function, and manifests itself as a shift by half a period for some periodic patterns. While it could be argued that the resolution of both the ideal and the imperfect system is 2 μm, or 500 LP/mm, it is clear that the images of the latter example are less sharp. A definition of resolution that is more in line with the perceived quality would instead use the spatial frequency at which the first zero occurs, 10 μm, or 100 LP/mm. Definitions of resolution, even for perfect imaging systems, vary widely. A more complete, unambiguous picture is provided by the optical transfer function. === Optical system with a non-rotational symmetric aberration === Optical systems, and in particular optical aberrations are not always rotationally symmetric. Periodic patterns that have a different orientation can thus be imaged with different contrast even if their periodicity is the same. Optical transfer function or modulation transfer functions are thus generally two-dimensional functions. The following figures show the two-dimensional equivalent of the ideal and the imperfect system discussed earlier, for an optical system with trefoil, a non-rotational-symmetric aberration. Optical transfer functions are not always real-valued. Periodic patterns can be shifted by any amount, depending on the aberration in the system. This is generally the case with non-rotational-symmetric aberrations. The hue of the colors of the surface plots in the above figure indicates the phase.
It can be seen that, while for the rotational symmetric aberrations the phase is either 0 or π and thus the transfer function is real valued, for the non-rotational symmetric aberration the transfer function has an imaginary component and the phase varies continuously. === Practical example – high-definition video system === While optical resolution, as commonly used with reference to camera systems, describes only the number of pixels in an image, and hence the potential to show fine detail, the transfer function describes the ability of adjacent pixels to change from black to white in response to patterns of varying spatial frequency, and hence the actual capability to show fine detail, whether with full or reduced contrast. An image reproduced with an optical transfer function that 'rolls off' at high spatial frequencies will appear 'blurred' in everyday language. Taking the example of a current high definition (HD) video system, with 1920 by 1080 pixels, the Nyquist theorem states that it should be possible, in a perfect system, to resolve fully (with true black to white transitions) a total of 1920 black and white alternating lines combined, otherwise referred to as a spatial frequency of 1920/2=960 line pairs per picture width, or 960 cycles per picture width, (definitions in terms of cycles per unit angle or per mm are also possible but generally less clear when dealing with cameras and more appropriate to telescopes etc.). In practice, this is far from the case, and spatial frequencies that approach the Nyquist rate will generally be reproduced with decreasing amplitude, so that fine detail, though it can be seen, is greatly reduced in contrast. This gives rise to the interesting observation that, for example, a standard definition television picture derived from a film scanner that uses oversampling, as described later, may appear sharper than a high definition picture shot on a camera with a poor modulation transfer function. The two pictures show an interesting difference that is often missed, the former having full contrast on detail up to a certain point but then no really fine detail, while the latter does contain finer detail, but with such reduced contrast as to appear inferior overall. == The three-dimensional optical transfer function == Although one typically thinks of an image as planar, or two-dimensional, the imaging system will produce a three-dimensional intensity distribution in image space that in principle can be measured. e.g. a two-dimensional sensor could be translated to capture a three-dimensional intensity distribution. The image of a point source is also a three dimensional (3D) intensity distribution which can be represented by a 3D point-spread function. As an example, the figure on the right shows the 3D point-spread function in object space of a wide-field microscope (a) alongside that of a confocal microscope (c). Although the same microscope objective with a numerical aperture of 1.49 is used, it is clear that the confocal point spread function is more compact both in the lateral dimensions (x,y) and the axial dimension (z). One could rightly conclude that the resolution of a confocal microscope is superior to that of a wide-field microscope in all three dimensions. A three-dimensional optical transfer function can be calculated as the three-dimensional Fourier transform of the 3D point-spread function. Its color-coded magnitude is plotted in panels (b) and (d), corresponding to the point-spread functions shown in panels (a) and (c), respectively. 
The transfer function of the wide-field microscope has a support that is half of that of the confocal microscope in all three dimensions, confirming the previously noted lower resolution of the wide-field microscope. Note that along the z-axis, for x = y = 0, the transfer function is zero everywhere except at the origin. This missing cone is a well-known problem that prevents optical sectioning using a wide-field microscope. The two-dimensional optical transfer function at the focal plane can be calculated by integration of the 3D optical transfer function along the z-axis. Although the 3D transfer function of the wide-field microscope (b) is zero on the z-axis for z ≠ 0, its integral, the 2D optical transfer function, reaches a maximum at x = y = 0. This is only possible because the 3D optical transfer function diverges at the origin x = y = z = 0. The function values along the z-axis of the 3D optical transfer function correspond to the Dirac delta function. == Calculation == Most optical design software has functionality to compute the optical or modulation transfer function of a lens design. Ideal systems such as in the examples here are readily calculated numerically using software such as Julia, GNU Octave or Matlab, and in some specific cases even analytically. The optical transfer function can be calculated following two approaches: as the Fourier transform of the incoherent point spread function, or as the auto-correlation of the pupil function of the optical system. Mathematically, both approaches are equivalent. Numeric calculations are typically most efficiently done via the Fourier transform; however, analytic calculation may be more tractable using the auto-correlation approach. === Example === ==== Ideal lens system with circular aperture ==== ===== Auto-correlation of the pupil function ===== Since the optical transfer function is the Fourier transform of the point spread function, and the point spread function is the squared absolute value of the inverse Fourier transformed pupil function, the optical transfer function can also be calculated directly from the pupil function. From the convolution theorem it can be seen that the optical transfer function is in fact the autocorrelation of the pupil function. The pupil function of an ideal optical system with a circular aperture is a disk of unit radius. The optical transfer function of such a system can thus be calculated geometrically from the intersecting area between two identical disks at a distance of 2 ν {\displaystyle 2\nu } , where ν {\displaystyle \nu } is the spatial frequency normalized to the highest transmitted frequency. In general the optical transfer function is normalized to a maximum value of one for ν = 0 {\displaystyle \nu =0} , so the resulting area should be divided by π {\displaystyle \pi } . The intersecting area can be calculated as the sum of the areas of two identical circular segments: θ / 2 − sin ⁡ ( θ ) / 2 {\displaystyle \theta /2-\sin(\theta )/2} , where θ {\displaystyle \theta } is the circle segment angle. By substituting | ν | = cos ⁡ ( θ / 2 ) {\displaystyle |\nu |=\cos(\theta /2)} , and using the equalities sin ⁡ ( θ ) / 2 = sin ⁡ ( θ / 2 ) cos ⁡ ( θ / 2 ) {\displaystyle \sin(\theta )/2=\sin(\theta /2)\cos(\theta /2)} and 1 = ν 2 + sin ⁡ ( arccos ⁡ ( | ν | ) ) 2 {\displaystyle 1=\nu ^{2}+\sin(\arccos(|\nu |))^{2}} , the equation for the area can be rewritten as arccos ⁡ ( | ν | ) − | ν | 1 − ν 2 {\displaystyle \arccos(|\nu |)-|\nu |{\sqrt {1-\nu ^{2}}}} .
Hence the normalized optical transfer function is given by: OTF ⁡ ( ν ) = 2 π ( arccos ⁡ ( | ν | ) − | ν | 1 − ν 2 ) . {\displaystyle \operatorname {OTF} (\nu )={\frac {2}{\pi }}\left(\arccos(|\nu |)-|\nu |{\sqrt {1-\nu ^{2}}}\right).} A more detailed discussion can be found in the references. === Numerical evaluation === The one-dimensional optical transfer function can be calculated as the discrete Fourier transform of the line spread function. This data is graphed against the spatial frequency data. In this case, a sixth order polynomial is fitted to the MTF vs. spatial frequency curve to show the trend. The 50% cutoff frequency is determined to yield the corresponding spatial frequency. Thus, the approximate position of best focus of the unit under test is determined from this data. The Fourier transform of the line spread function (LSF), defined by the following equations, cannot in general be determined analytically: MTF = F [ LSF ] MTF = ∫ f ( x ) e − i 2 π x s d x {\displaystyle \operatorname {MTF} ={\mathcal {F}}\left[\operatorname {LSF} \right]\qquad \qquad \operatorname {MTF} =\int f(x)e^{-i2\pi \,xs}\,dx} Therefore, the Fourier transform is numerically approximated using the discrete Fourier transform D F T {\displaystyle {\mathcal {DFT}}} . MTF = D F T [ LSF ] = Y k = ∑ n = 0 N − 1 y n e − i k 2 π N n k ∈ [ 0 , N − 1 ] {\displaystyle \operatorname {MTF} ={\mathcal {DFT}}[\operatorname {LSF} ]=Y_{k}=\sum _{n=0}^{N-1}y_{n}e^{-ik{\frac {2\pi }{N}}n}\qquad k\in [0,N-1]} where Y k {\displaystyle Y_{k}\,} = the k th {\displaystyle k^{\text{th}}} value of the MTF N {\displaystyle N\,} = number of data points n {\displaystyle n\,} = index of the LSF sample k {\displaystyle k\,} = index of the k th {\displaystyle k^{\text{th}}} frequency term y n {\displaystyle y_{n}\,} = value of the LSF at the n th {\displaystyle n^{\text{th}}\,} pixel position i = − 1 {\displaystyle i={\sqrt {-1}}} e ± i a = cos ⁡ ( a ) ± i sin ⁡ ( a ) {\displaystyle e^{\pm ia}=\cos(a)\,\pm \,i\sin(a)} MTF = D F T [ LSF ] = Y k = ∑ n = 0 N − 1 y n [ cos ⁡ ( k 2 π N n ) − i sin ⁡ ( k 2 π N n ) ] k ∈ [ 0 , N − 1 ] {\displaystyle \operatorname {MTF} ={\mathcal {DFT}}[\operatorname {LSF} ]=Y_{k}=\sum _{n=0}^{N-1}y_{n}\left[\cos \left(k{\frac {2\pi }{N}}n\right)-i\sin \left(k{\frac {2\pi }{N}}n\right)\right]\qquad k\in [0,N-1]} The MTF is then plotted against spatial frequency and all relevant data concerning this test can be determined from that graph. === The vectorial transfer function === At high numerical apertures such as those found in microscopy, it is important to consider the vectorial nature of the fields that carry light. By decomposing the waves in three independent components corresponding to the Cartesian axes, a point spread function can be calculated for each component and combined into a vectorial point spread function. Similarly, a vectorial optical transfer function can be determined. == Measurement == The optical transfer function is not only useful for the design of optical systems, it is also valuable for characterizing manufactured systems. === Starting from the point spread function === The optical transfer function is defined as the Fourier transform of the impulse response of the optical system, also called the point spread function. The optical transfer function is thus readily obtained by first acquiring the image of a point source, and applying the two-dimensional discrete Fourier transform to the sampled image.
Such a point-source can, for example, be a bright light behind a screen with a pin hole, a fluorescent or metallic microsphere, or simply a dot painted on a screen. Calculation of the optical transfer function via the point spread function is versatile as it can fully characterize optics with spatially varying and chromatic aberrations by repeating the procedure for various positions and wavelength spectra of the point source. === Using extended test objects for spatially invariant optics === When the aberrations can be assumed to be spatially invariant, alternative patterns can be used to determine the optical transfer function such as lines and edges. The corresponding transfer functions are referred to as the line-spread function and the edge-spread function, respectively. Such extended objects illuminate more pixels in the image, and can improve the measurement accuracy due to the larger signal-to-noise ratio. The optical transfer function is in this case calculated as the two-dimensional discrete Fourier transform of the image and divided by that of the extended object. Typically either a line or a black-white edge is used. ==== The line-spread function ==== The two-dimensional Fourier transform of a line through the origin is a line orthogonal to it and through the origin. The divisor is thus zero for all but a single dimension; by consequence, the optical transfer function can only be determined for a single dimension using a single line-spread function (LSF). If necessary, the two-dimensional optical transfer function can be determined by repeating the measurement with lines at various angles. The line spread function can be found using two different methods. It can be found directly from an ideal line approximation provided by a slit test target or it can be derived from the edge spread function, discussed in the next sub section. ==== Edge-spread function ==== The two-dimensional Fourier transform of an edge is also only non-zero on a single line, orthogonal to the edge. This function is sometimes referred to as the edge spread function (ESF). However, the values on this line are inversely proportional to the distance from the origin. Although the measurement images obtained with this technique illuminate a large area of the camera, this mainly benefits the accuracy at low spatial frequencies. As with the line spread function, each measurement only determines a single axis of the optical transfer function; repeated measurements are thus necessary if the optical system cannot be assumed to be rotationally symmetric. As shown in the right hand figure, an operator defines a box area encompassing the edge of a knife-edge test target image back-illuminated by a black body. The box area is defined to be approximately 10% of the total frame area. The image pixel data is translated into a two-dimensional array (pixel intensity and pixel position). The amplitude (pixel intensity) of each line within the array is normalized and averaged. This yields the edge spread function.
ESF = X − μ σ σ = ∑ i = 0 n − 1 ( x i − μ ) 2 n μ = ∑ i = 0 n − 1 x i n {\displaystyle \operatorname {ESF} ={\frac {X-\mu }{\sigma }}\qquad \qquad \sigma \,={\sqrt {\frac {\sum _{i=0}^{n-1}(x_{i}-\mu \,)^{2}}{n}}}\qquad \qquad \mu \,={\frac {\sum _{i=0}^{n-1}x_{i}}{n}}} where ESF = the output array of normalized pixel intensity data X {\displaystyle X\,} = the input array of pixel intensity data x i {\displaystyle x_{i}\,} = the ith element of X {\displaystyle X\,} μ {\displaystyle \mu \,} = the average value of the pixel intensity data σ {\displaystyle \sigma \,} = the standard deviation of the pixel intensity data n {\displaystyle n\,} = number of pixels used in average The line spread function is identical to the first derivative of the edge spread function, which is differentiated using numerical methods. In case it is more practical to measure the edge spread function, one can determine the line spread function as follows: LSF = d d x ESF ⁡ ( x ) {\displaystyle \operatorname {LSF} ={\frac {d}{dx}}\operatorname {ESF} (x)} Typically the ESF is only known at discrete points, so the LSF is numerically approximated using the finite difference: LSF = d d x ESF ⁡ ( x ) ≈ Δ ESF Δ x {\displaystyle \operatorname {LSF} ={\frac {d}{dx}}\operatorname {ESF} (x)\approx {\frac {\Delta \operatorname {ESF} }{\Delta x}}} LSF ≈ ESF i + 1 − ESF i − 1 2 ( x i + 1 − x i ) {\displaystyle \operatorname {LSF} \approx {\frac {\operatorname {ESF} _{i+1}-\operatorname {ESF} _{i-1}}{2(x_{i+1}-x_{i})}}} where: i {\displaystyle i\,} = the index i = 1 , 2 , … , n − 1 {\displaystyle i=1,2,\dots ,n-1} x i {\displaystyle x_{i}\,} = i th {\displaystyle i^{\text{th}}\,} position of the i th {\displaystyle i^{\text{th}}\,} pixel ESF i {\displaystyle \operatorname {ESF} _{i}\,} = ESF of the i th {\displaystyle i^{\text{th}}\,} pixel ==== Using a grid of black and white lines ==== Although 'sharpness' is often judged on grid patterns of alternate black and white lines, it should strictly be measured using a sine-wave variation from black to white (a blurred version of the usual pattern). Where a square wave pattern is used (simple black and white lines) not only is there more risk of aliasing, but account must be taken of the fact that the fundamental component of a square wave is higher than the amplitude of the square wave itself (the harmonic components reduce the peak amplitude). A square wave test chart will therefore show optimistic results (better resolution of high spatial frequencies than is actually achieved). The square wave result is sometimes referred to as the 'contrast transfer function' (CTF). == Factors affecting MTF in typical camera systems == In practice, many factors result in considerable blurring of a reproduced image, such that patterns with spatial frequency just below the Nyquist rate may not even be visible, and the finest patterns that can appear 'washed out' as shades of grey, not black and white. A major factor is usually the impossibility of making the perfect 'brick wall' optical filter (often realized as a 'phase plate' or a lens with specific blurring properties in digital cameras and video camcorders). Such a filter is necessary to reduce aliasing by eliminating spatial frequencies above the Nyquist rate of the display. 
=== Oversampling and downconversion to maintain the optical transfer function === The only way in practice to approach the theoretical sharpness possible in a digital imaging system such as a camera is to use more pixels in the camera sensor than samples in the final image, and 'downconvert' or 'interpolate' using special digital processing which cuts off high frequencies above the Nyquist rate to avoid aliasing whilst maintaining a reasonably flat MTF up to that frequency. This approach was first taken in the 1970s when flying spot scanners, and later CCD line scanners were developed, which sampled more pixels than were needed and then downconverted, which is why movies have always looked sharper on television than other material shot with a video camera. The only theoretically correct way to interpolate or downconvert is by use of a steep low-pass spatial filter, realized by convolution with a two-dimensional sin(x)/x weighting function which requires powerful processing. In practice, various mathematical approximations to this are used to reduce the processing requirement. These approximations are now implemented widely in video editing systems and in image processing programs such as Photoshop. Just as standard definition video with a high contrast MTF is only possible with oversampling, so HD television with full theoretical sharpness is only possible by starting with a camera that has a significantly higher resolution, followed by digitally filtering. With movies now being shot in 4k and even 8k video for the cinema, we can expect to see the best pictures on HDTV only from movies or material shot at the higher standard. However much we raise the number of pixels used in cameras, this will always remain true in absence of a perfect optical spatial filter. Similarly, a 5-megapixel image obtained from a 5-megapixel still camera can never be sharper than a 5-megapixel image obtained after down-conversion from an equal quality 10-megapixel still camera. Because of the problem of maintaining a high contrast MTF, broadcasters like the BBC did for a long time consider maintaining standard definition television, but improving its quality by shooting and viewing with many more pixels (though as previously mentioned, such a system, though impressive, does ultimately lack the very fine detail which, though attenuated, enhances the effect of true HD viewing). Another factor in digital cameras and camcorders is lens resolution. A lens may be said to 'resolve' 1920 horizontal lines, but this does not mean that it does so with full modulation from black to white. The 'modulation transfer function' (just a term for the magnitude of the optical transfer function with phase ignored) gives the true measure of lens performance, and is represented by a graph of amplitude against spatial frequency. Lens aperture diffraction also limits MTF. Whilst reducing the aperture of a lens usually reduces aberrations and hence improves the flatness of the MTF, there is an optimum aperture for any lens and image sensor size beyond which smaller apertures reduce resolution because of diffraction, which spreads light across the image sensor. This was hardly a problem in the days of plate cameras and even 35 mm film, but has become an insurmountable limitation with the very small format sensors used in some digital cameras and especially video cameras. First generation HD consumer camcorders used 1/4-inch sensors, for which apertures smaller than about f4 begin to limit resolution. 
Even professional video cameras mostly use 2/3 inch sensors, prohibiting the use of apertures around f16 that would have been considered normal for film formats. Certain cameras (such as the Pentax K10D) feature an "MTF autoexposure" mode, where the choice of aperture is optimized for maximum sharpness. Typically this means somewhere in the middle of the aperture range. === Trend to large-format DSLRs and improved MTF potential === There has recently been a shift towards the use of large image format digital single-lens reflex cameras driven by the need for low-light sensitivity and narrow depth of field effects. This has led to such cameras becoming preferred by some film and television program makers over even professional HD video cameras, because of their 'filmic' potential. In theory, the use of cameras with 16- and 21-megapixel sensors offers the possibility of almost perfect sharpness by downconversion within the camera, with digital filtering to eliminate aliasing. Such cameras produce very impressive results, and appear to be leading the way in video production towards large-format downconversion with digital filtering becoming the standard approach to the realization of a flat MTF with true freedom from aliasing. == Digital inversion of the OTF == Due to optical effects the contrast may be sub-optimal and approaches zero before the Nyquist frequency of the display is reached. The optical contrast reduction can be partially reversed by digitally amplifying spatial frequencies selectively before display or further processing. Although more advanced digital image restoration procedures exist, the Wiener deconvolution algorithm is often used for its simplicity and efficiency. Since this technique multiplies the spatial spectral components of the image, it also amplifies noise and errors due to e.g. aliasing. It is therefore only effective on good quality recordings with a sufficiently high signal-to-noise ratio. == Limitations == In general, the point spread function, the image of a point source also depends on factors such as the wavelength (color), and field angle (lateral point source position). When such variation is sufficiently gradual, the optical system could be characterized by a set of optical transfer functions. However, when the image of the point source changes abruptly, the optical transfer function does not describe the optical system accurately. Inaccuracies can often be mitigated by a collection of optical transfer functions at well-chosen wavelengths or field-positions. However, a more complex characterization may be necessary for some imaging systems such as the Light field camera. == See also == Bokeh Gamma correction Minimum resolvable contrast Minimum resolvable temperature difference Optical resolution Signal-to-noise ratio Signal transfer function Strehl ratio Transfer function Wavefront coding == References == == External links == "Modulation transfer function", by Glenn D. Boreman on SPIE Optipedia. "How to Measure MTF and other Properties of Lenses", by Optikos Corporation.
Wikipedia/Optical_transfer_function
Photolithography (also known as optical lithography) is a process used in the manufacturing of integrated circuits. It involves using light to transfer a pattern onto a substrate, typically a silicon wafer. The process begins with a photosensitive material, called a photoresist, being applied to the substrate. A photomask that contains the desired pattern is then placed over the photoresist. Light is shone through the photomask, exposing the photoresist in certain areas. The exposed areas undergo a chemical change, making them either soluble or insoluble in a developer solution. After development, the pattern is transferred onto the substrate through etching, chemical vapor deposition, or ion implantation processes. Ultraviolet (UV) light is typically used. Photolithography processes can be classified according to the type of light used, including ultraviolet lithography, deep ultraviolet lithography, extreme ultraviolet lithography (EUVL), and X-ray lithography. The wavelength of light used determines the minimum feature size that can be formed in the photoresist. Photolithography is the most common method for the semiconductor fabrication of integrated circuits ("ICs" or "chips"), such as solid-state memories and microprocessors. It can create extremely small patterns, down to a few nanometers in size. It provides precise control of the shape and size of the objects it creates. It can create patterns over an entire wafer in a single step, quickly and with relatively low cost. In complex integrated circuits, a wafer may go through the photolithographic cycle as many as 50 times. It is also an important technique for microfabrication in general, such as the fabrication of microelectromechanical systems. However, photolithography cannot be used to produce masks on surfaces that are not perfectly flat. And, like all chip manufacturing processes, it requires extremely clean operating conditions. Photolithography is a subclass of microlithography, the general term for processes that generate patterned thin films. Other technologies in this broader class include the use of steerable electron beams, or more rarely, nanoimprinting, interference, magnetic fields, or scanning probes. On a broader level, it may compete with directed self-assembly of micro- and nanostructures. Photolithography shares some fundamental principles with photography in that the pattern in the photoresist is created by exposing it to light — either directly by projection through a lens, or by illuminating a mask placed directly over the substrate, as in contact printing. The technique can also be seen as a high precision version of the method used to make printed circuit boards. The name originated from a loose analogy with the traditional photographic method of producing plates for lithographic printing on paper; however, subsequent stages in the process have more in common with etching than with traditional lithography. Conventional photoresists typically consist of three components: resin, sensitizer, and solvent. == Etymology == The root words photo, litho, and graphy all have Greek origins, with the meanings 'light', 'stone' and 'writing' respectively. As suggested by the name compounded from them, photolithography is a printing method (originally based on the use of limestone printing plates) in which light plays an essential role. == History == In the 1820s, Nicephore Niepce invented a photographic process that used Bitumen of Judea, a natural asphalt, as the first photoresist. 
A thin coating of the bitumen on a sheet of metal, glass or stone became less soluble where it was exposed to light; the unexposed parts could then be rinsed away with a suitable solvent, baring the material beneath, which was then chemically etched in an acid bath to produce a printing plate. The light-sensitivity of bitumen was very poor and very long exposures were required, but despite the later introduction of more sensitive alternatives, its low cost and superb resistance to strong acids prolonged its commercial life into the early 20th century. In 1940, Oskar Süß created a positive photoresist by using diazonaphthoquinone, which worked in the opposite manner: the coating was initially insoluble and was rendered soluble where it was exposed to light. In 1954, Louis Plambeck Jr. developed the Dycryl polymeric letterpress plate, which made the platemaking process faster. Development of photoresists used to be carried out in batches of wafers (batch processing) dipped into a bath of developer, but modern process offerings do development one wafer at a time (single wafer processing) to improve process control. In 1957 Jules Andrus patented a photolithographic process for semiconductor fabrication, while working at Bell Labs. At the same time Moe Abramson and Stanislaus Danko of the US Army Signal Corps developed a technique for printing circuits. In 1952, the U.S. military assigned Jay W. Lathrop and James R. Nall at the National Bureau of Standards (later the U.S. Army Diamond Ordnance Fuze Laboratory, which eventually merged to form the now-present Army Research Laboratory) the task of finding a way to reduce the size of electronic circuits in order to better fit the necessary circuitry in the limited space available inside a proximity fuze. Inspired by the application of photoresist, a photosensitive liquid used to mark the boundaries of rivet holes in metal aircraft wings, Nall determined that a similar process could be used to protect the germanium in the transistors and even pattern the surface with light. During development, Lathrop and Nall were successful in creating a 2D miniaturized hybrid integrated circuit with transistors using this technique. In 1958, during the IRE Professional Group on Electron Devices (PGED) conference in Washington, D.C., they presented the first paper to describe the fabrication of transistors using photographic techniques and adopted the term "photolithography" to describe the process, marking the first published use of the term to describe semiconductor device patterning. Despite the fact that photolithography of electronic components concerns etching metal duplicates, rather than etching stone to produce a "master" as in conventional lithographic printing, Lathrop and Nall chose the term "photolithography" over "photoetching" because the former sounded "high tech." A year after the conference, Lathrop and Nall's patent on photolithography was formally approved on June 9, 1959. Photolithography would later contribute to the development of the first semiconductor ICs as well as the first microchips. == Process == A single iteration of photolithography combines several steps in sequence. Modern cleanrooms use automated, robotic wafer track systems to coordinate the process. The procedure described here omits some advanced treatments, such as thinning agents. The photolithography process is carried out by the wafer track and stepper/scanner, and the wafer track system and the stepper/scanner are installed side by side.
Wafer track systems are also known as wafer coater/developer systems, which perform the same functions. Wafer tracks are named after the "tracks" used to carry wafers inside the machine, but modern machines do not use tracks. === Cleaning === If organic or inorganic contaminations are present on the wafer surface, they are usually removed by wet chemical treatment, e.g. the RCA clean procedure based on solutions containing hydrogen peroxide. Other solutions made with trichloroethylene, acetone or methanol can also be used for cleaning. === Preparation === The wafer is initially heated to a temperature sufficient to drive off any moisture that may be present on the wafer surface; 150 °C for ten minutes is sufficient. Wafers that have been in storage must be chemically cleaned to remove contamination. A liquid or gaseous "adhesion promoter", such as Bis(trimethylsilyl)amine ("hexamethyldisilazane", HMDS), is applied to promote adhesion of the photoresist to the wafer. The surface layer of silicon dioxide on the wafer reacts with HMDS to form tri-methylated silicon-dioxide, a highly water repellent layer not unlike the layer of wax on a car's paint. This water repellent layer prevents the aqueous developer from penetrating between the photoresist layer and the wafer's surface, thus preventing so-called lifting of small photoresist structures in the (developing) pattern. To ensure proper development of the image, the wafer is best covered and placed on a hot plate and allowed to dry while the temperature is stabilized at 120 °C. === Photoresist application === The wafer is covered with photoresist liquid by spin coating: the wafer is spun rapidly, so the top layer of resist is quickly ejected from the wafer's edge while the bottom layer still creeps slowly radially along the wafer. In this way, any 'bump' or 'ridge' of resist is removed, leaving a very flat layer. However, viscous films may result in large edge beads, which are areas at the edges of the wafer or photomask with increased resist thickness whose planarization has physical limits. Often, edge bead removal (EBR) is carried out, usually with a nozzle, to remove this extra resist as it could otherwise cause particulate contamination. Final thickness is also determined by the evaporation of liquid solvents from the resist. For very small, dense features (< 125 or so nm), lower resist thicknesses (< 0.5 microns) are needed to overcome collapse effects at high aspect ratios; typical aspect ratios are < 4:1. The photoresist-coated wafer is then prebaked to drive off excess photoresist solvent, typically at 90 to 100 °C for 30 to 60 seconds on a hotplate. A BARC coating (Bottom Anti-Reflectant Coating) may be applied before the photoresist is applied, to avoid reflections from occurring under the photoresist and to improve the photoresist's performance at smaller semiconductor nodes such as 45 nm and below. Top Anti-Reflectant Coatings (TARCs) also exist. EUV lithography is unique in the sense that it allows for the use of photoresists with metal oxides. === Exposure and developing === After prebaking, the photoresist is exposed to a pattern of intense light. The exposure to light causes a chemical change that allows some of the photoresist to be removed by a special solution, called "developer" by analogy with photographic developer. Positive photoresist, the most common type, becomes soluble in the developer when exposed; with negative photoresist, unexposed regions are soluble in the developer.
A post-exposure bake (PEB) is performed before developing, typically to help reduce standing wave phenomena caused by the destructive and constructive interference patterns of the incident light. In deep ultraviolet lithography, chemically amplified resist (CAR) chemistry is used. This resist is much more sensitive to PEB time, temperature, and delay, as the resist works by creating acid when it is hit by photons, and then undergoes an "exposure" reaction (creating acid, making the polymer soluble in the basic developer, and performing a chemical reaction catalyzed by acid) which mostly occurs in the PEB. The develop chemistry is delivered on a spinner, much like photoresist. Developers originally often contained sodium hydroxide (NaOH). However, sodium is considered an extremely undesirable contaminant in MOSFET fabrication because it degrades the insulating properties of gate oxides (specifically, sodium ions can migrate in and out of the gate, changing the threshold voltage of the transistor and making it harder or easier to turn the transistor on over time). Metal-ion-free developers such as tetramethylammonium hydroxide (TMAH) are now used. The temperature of the developer might be tightly controlled using jacketed (dual walled) hoses to within 0.2 °C. The nozzle that coats the wafer with developer may influence the amount of developer that is necessary. The resulting wafer is then "hard-baked" if a non-chemically amplified resist was used, typically at 120 to 180 °C for 20 to 30 minutes. The hard bake solidifies the remaining photoresist, to make a more durable protecting layer in future ion implantation, wet chemical etching, or plasma etching. From preparation until this step, the photolithography procedure has been carried out by two machines: the photolithography stepper or scanner, and the coater/developer. The two machines are usually installed side by side, and are "linked" together. === Etching, implantation === In etching, a liquid ("wet") or plasma ("dry") chemical agent removes the uppermost layer of the substrate in the areas that are not protected by photoresist. In semiconductor fabrication, dry etching techniques are generally used, as they can be made anisotropic, in order to avoid significant undercutting of the photoresist pattern. This is essential when the width of the features to be defined is similar to or less than the thickness of the material being etched (i.e. when the aspect ratio approaches unity). Wet etch processes are generally isotropic in nature, which is often indispensable for microelectromechanical systems, where suspended structures must be "released" from the underlying layer. The development of low-defectivity anisotropic dry-etch process has enabled the ever-smaller features defined photolithographically in the resist to be transferred to the substrate material. === Photoresist removal === After a photoresist is no longer needed, it must be removed from the substrate. This usually requires a liquid "resist stripper", which chemically alters the resist so that it no longer adheres to the substrate. Alternatively, the photoresist may be removed by a plasma containing oxygen, which oxidizes it. This process is called plasma ashing and resembles dry etching. The use of 1-Methyl-2-pyrrolidone (NMP) solvent for photoresist is another method used to remove an image. When the resist has been dissolved, the solvent can be removed by heating to 80 °C without leaving any residue. 
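The positive/negative resist distinction and the way the remaining resist protects the substrate during etching can be summarized in a tiny illustrative model. The following Python sketch is purely conceptual: the boolean arrays, function names, and the notion of a uniform film "thickness" are assumptions made for illustration, not a description of any real process tool.

```python
import numpy as np

def develop(exposed, positive=True):
    """Return a boolean map of where photoresist remains after development.

    exposed : boolean array, True where light reached the resist through the mask
    positive: True for positive resist (exposed areas become soluble and are
              removed), False for negative resist (unexposed areas are removed)
    """
    return ~exposed if positive else exposed

def etch(substrate, resist_remaining, depth=1):
    """Toy etch step: material is removed only where no resist protects it."""
    etched = substrate.copy()
    etched[~resist_remaining] -= depth
    return etched

# Example: a mask with a square opening patterns an 8x8 'wafer'.
mask_open = np.zeros((8, 8), dtype=bool)
mask_open[2:6, 2:6] = True              # light passes through this window
substrate = np.full((8, 8), 10)         # uniform film thickness of 10 units

resist = develop(exposed=mask_open, positive=True)
print(etch(substrate, resist))          # a 4x4 recess appears where the resist was removed
```

Setting positive=False reverses which regions keep their resist, so the same mask produces the complementary etched pattern, which is why the choice of resist polarity is part of the pattern design.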
== Exposure ("printing") systems == Exposure systems typically produce an image on the wafer using a photomask. The photomask blocks light in some areas and lets it pass in others. (Maskless lithography projects a precise beam directly onto the wafer without using a mask, but it is not widely used in commercial processes.) Exposure systems may be classified by the optics that transfer the image from the mask to the wafer. Photolithography produces better thin film transistor structures than printed electronics, due to smoother printed layers, less wavy patterns, and more accurate drain-source electrode registration. === Contact and proximity === A contact aligner, the simplest exposure system, puts a photomask in direct contact with the wafer and exposes it to a uniform light. A proximity aligner puts a small gap of around 5 microns between the photomask and wafer. In both cases, the mask covers the entire wafer, and simultaneously patterns every die. Contact printing/lithography is liable to damage both the mask and the wafer, and this was the primary reason it was abandoned for high volume production. Both contact and proximity lithography require the light intensity to be uniform across an entire wafer, and the mask to align precisely to features already on the wafer. As modern processes use increasingly large wafers, these conditions become increasingly difficult. Research and prototyping processes often use contact or proximity lithography, because it uses inexpensive hardware and can achieve high optical resolution. The resolution in proximity lithography is approximately the square root of the product of the wavelength and the gap distance. Hence, except for projection lithography (see below), contact printing offers the best resolution, because its gap distance is approximately zero (neglecting the thickness of the photoresist itself). In addition, nanoimprint lithography may revive interest in this familiar technique, especially since the cost of ownership is expected to be low; however, the shortcomings of contact printing discussed above remain as challenges. === Projection === Very-large-scale integration (VLSI) lithography uses projection systems. Unlike contact or proximity masks, which cover an entire wafer, projection masks (known as "reticles") show only one die or an array of dies (known as a "field") in a portion of the wafer at a time. Projection exposure systems (steppers or scanners) project the mask onto the wafer many times, changing the position of the wafer with every projection, to create the complete pattern, fully patterning the wafer. The difference between steppers and scanners is that, during exposure, a scanner moves the photomask and the wafer simultaneously, while a stepper only moves the wafer. Contact, proximity and projection Mask aligners preceded steppers and do not move the photomask nor the wafer during exposure and use masks that cover the entire wafer. Immersion lithography scanners use a layer of Ultrapure water between the lens and the wafer to increase resolution. An alternative to photolithography is nanoimprint lithography. The maximum size of the image that can be projected onto a wafer is known as the reticle limit. == Photomasks == The image for the mask originates from a computerized data file. This data file is converted to a series of polygons and written onto a square of fused quartz substrate covered with a layer of chromium using a photolithographic process. 
A laser beam (laser writer) or a beam of electrons (e-beam writer) is used to expose the pattern defined by the data file and travels over the surface of the substrate in either a vector or raster scan manner. Where the photoresist on the mask is exposed, the chrome can be etched away, leaving a clear path for the illumination light in the stepper/scanner system to travel through. == Resolution in projection systems == The ability to project a clear image of a small feature onto the wafer is limited by the wavelength of the light that is used, and the ability of the reduction lens system to capture enough diffraction orders from the illuminated mask. Current state-of-the-art photolithography tools use deep ultraviolet (DUV) light from excimer lasers with wavelengths of 248 nm (KrF) and 193 nm (ArF) (the dominant lithography technology today is thus also called "excimer laser lithography"), which allow minimum feature sizes down to 50 nm. Excimer laser lithography has thus played a critical role in the continued advance of Moore's law for the last 20 years (see below). The minimum feature size that a projection system can print is given approximately by $CD = k_1 \cdot \frac{\lambda}{NA}$, where $CD$ is the minimum feature size (also called the critical dimension, target design rule, or "half-pitch"), $\lambda$ is the wavelength of light used, and $NA$ is the numerical aperture of the lens as seen from the wafer. The coefficient $k_1$ (commonly called the k1 factor) encapsulates process-related factors and typically equals 0.4 for production. ($k_1$ is actually a function of process factors such as the angle of incident light on a reticle and the incident light intensity distribution; it is fixed per process.) The minimum feature size can be reduced by decreasing this coefficient through computational lithography. According to this equation, minimum feature sizes can be decreased by decreasing the wavelength and increasing the numerical aperture (to achieve a tighter focused beam and a smaller spot size). However, this design method runs into a competing constraint. In modern systems, the depth of focus is also a concern: $D_F = k_2 \cdot \frac{\lambda}{NA^2}$. Here, $k_2$ is another process-related coefficient. The depth of focus restricts the thickness of the photoresist and the depth of the topography on the wafer. Chemical mechanical polishing is often used to flatten topography before high-resolution lithographic steps. From classical optics, $k_1 = 0.61$ by the Rayleigh criterion. The image of two points separated by less than $1.22\,\lambda/NA$ will not maintain that separation but will be larger due to the interference between the Airy discs of the two points. It must also be remembered, though, that the distance between two features can also change with defocus. Resolution is also nontrivial in a two-dimensional context. For example, a tighter line pitch results in wider gaps (in the perpendicular direction) between the ends of the lines. More fundamentally, straight edges become rounded for shortened rectangular features, where both x and y pitches are near the resolution limit. For advanced nodes, blur, rather than wavelength, becomes the key resolution-limiting factor; minimum pitch is given by the blur sigma divided by 0.14. Blur is affected by dose as well as quantum yield, leading to a tradeoff with stochastic defects in the case of EUV.
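To make the resolution and depth-of-focus scaling given above concrete, here is a minimal numerical sketch. The wavelength, numerical aperture, and k-factor values are illustrative assumptions, not the parameters of any particular tool.

```python
# Illustrative calculation of minimum feature size (CD) and depth of focus (DOF)
# using the scaling relations CD = k1 * lambda / NA and DOF = k2 * lambda / NA^2
# from the section above. Parameter values are assumptions chosen for illustration.

def critical_dimension(wavelength_nm: float, na: float, k1: float = 0.4) -> float:
    """Approximate minimum printable feature size in nanometres."""
    return k1 * wavelength_nm / na

def depth_of_focus(wavelength_nm: float, na: float, k2: float = 0.5) -> float:
    """Approximate usable depth of focus in nanometres."""
    return k2 * wavelength_nm / na ** 2

if __name__ == "__main__":
    # "Dry" ArF scanner: 193 nm light, assumed NA ~ 0.93
    print(f"Dry ArF:       CD ~ {critical_dimension(193, 0.93):.0f} nm, "
          f"DOF ~ {depth_of_focus(193, 0.93):.0f} nm")
    # Water-immersion ArF scanner: 193 nm light, assumed NA ~ 1.35
    print(f"Immersion ArF: CD ~ {critical_dimension(193, 1.35):.0f} nm, "
          f"DOF ~ {depth_of_focus(193, 1.35):.0f} nm")
```

Note how raising the numerical aperture shrinks the printable feature size but shrinks the depth of focus even faster, which is the competing constraint described above.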
== Stochastic effects == As light consists of photons, at low doses the image quality ultimately depends on the photon number. This affects the use of extreme ultraviolet lithography (EUVL), which is limited to the use of low doses on the order of 20 photons/nm². This is because, for the same energy dose, a shorter wavelength means fewer photons (higher energy per photon). With fewer photons making up the image, there is noise in the edge placement. The stochastic effects become more complicated with larger-pitch patterns, which have more diffraction orders and use more illumination source points. Secondary electrons in EUV lithography aggravate the stochastic characteristics. == Light sources == Historically, photolithography has used ultraviolet light from gas-discharge lamps using mercury, sometimes in combination with noble gases such as xenon. These lamps produce light across a broad spectrum with several strong peaks in the ultraviolet range. This spectrum is filtered to select a single spectral line. From the early 1960s through the mid-1980s, Hg lamps were used in lithography for their spectral lines at 436 nm ("g-line"), 405 nm ("h-line") and 365 nm ("i-line"). However, with the semiconductor industry's need for both higher resolution (to produce denser and faster chips) and higher throughput (for lower costs), lamp-based lithography tools were no longer able to meet the industry's high-end requirements. This challenge was overcome in 1982 when excimer laser lithography was proposed and demonstrated at IBM by Kanti Jain. Excimer laser lithography machines (steppers and scanners) became the primary tools in microelectronics production, and have enabled minimum feature sizes in chip manufacturing to shrink from 800 nanometers in 1990 to 7 nanometers in 2018. From an even broader scientific and technological perspective, in the 50-year history of the laser since its first demonstration in 1960, the invention and development of excimer laser lithography has been recognized as a major milestone. The commonly used deep ultraviolet excimer lasers in lithography systems are the krypton fluoride (KrF) laser at 248 nm wavelength and the argon fluoride (ArF) laser at 193 nm wavelength. The primary manufacturers of excimer laser light sources in the 1980s were Lambda Physik (now part of Coherent, Inc.) and Lumonics. Since the mid-1990s, Cymer Inc. has become the dominant supplier of excimer laser sources to the lithography equipment manufacturers, with Gigaphoton Inc. as their closest rival. Generally, an excimer laser is designed to operate with a specific gas mixture; therefore, changing wavelength is not a trivial matter, as the method of generating the new wavelength is completely different and the absorption characteristics of materials change. For example, air begins to absorb significantly around the 193 nm wavelength; moving to sub-193 nm wavelengths would require installing vacuum pump and purge equipment on the lithography tools (a significant challenge). An inert gas atmosphere can sometimes be used as a substitute for a vacuum, to avoid the need for hard plumbing. Furthermore, insulating materials such as silicon dioxide, when exposed to photons with energy greater than the band gap, release free electrons and holes which subsequently cause adverse charging.
Optical lithography has been extended to feature sizes below 50 nm using the 193 nm ArF excimer laser and liquid immersion techniques. Also termed immersion lithography, this enables the use of optics with numerical apertures exceeding 1.0. The liquid used is typically ultra-pure, deionised water, which provides a refractive index higher than that of the usual air gap between the lens and the wafer surface. The water is continually circulated to eliminate thermally induced distortions. Water only allows NAs of up to ~1.4, but fluids with higher refractive indices would allow the effective NA to be increased further. Experimental tools using the 157 nm wavelength from the F2 excimer laser in a manner similar to current exposure systems have been built. These were once targeted to succeed 193 nm lithography at the 65 nm feature size node but have now all but been eliminated by the introduction of immersion lithography. This was due to persistent technical problems with the 157 nm technology and economic considerations that provided strong incentives for the continued use of 193 nm excimer laser lithography technology. High-index immersion lithography is the newest extension of 193 nm lithography to be considered. In 2006, features less than 30 nm were demonstrated by IBM using this technique. These systems used calcium fluoride (CaF2) lenses. Immersion lithography at 157 nm was also explored. UV excimer lasers have been demonstrated to about 126 nm (for Ar2*). Mercury arc lamps are driven by a steady DC supply of 50 to 150 volts; excimer lasers, however, offer higher resolution. Excimer lasers are gas-based light sources that are usually filled with inert and halide gases (Kr, Ar, Xe, F and Cl) that are energized by an electric field. The higher the frequency, the greater the resolution of the image. KrF lasers are able to function at a frequency of 4 kHz. In addition to running at a higher frequency, excimer lasers are compatible with more advanced machines than mercury arc lamps are. They are also able to operate from greater distances (up to 25 meters) and maintain their accuracy with a series of mirrors and antireflective-coated lenses. By setting up multiple lasers and mirrors, the amount of energy loss is minimized, and because the lenses are coated with antireflective material, the light intensity remains relatively constant from the time it leaves the laser to the time it hits the wafer. Lasers have been used to indirectly generate non-coherent extreme UV (EUV) light at 13.5 nm for extreme ultraviolet lithography. The EUV light is not emitted by the laser, but rather by a tin or xenon plasma which is excited by an excimer or CO2 laser. This technique does not require a synchrotron, and EUV sources, as noted, do not produce coherent light. However, vacuum systems and a number of novel technologies (including much higher EUV energies than are now produced) are needed to work with UV at the edge of the X-ray spectrum (which begins at 10 nm). As of 2020, EUV is in mass-production use by leading-edge foundries such as TSMC and Samsung. Theoretically, an alternative light source for photolithography, especially if and when wavelengths continue to decrease to extreme UV or X-ray, is the free-electron laser (or one might say xaser for an X-ray device). Free-electron lasers can produce high-quality beams at arbitrary wavelengths. Visible and infrared femtosecond lasers have also been applied to lithography. In that case, photochemical reactions are initiated by multiphoton absorption.
The use of these light sources has many benefits, including the possibility of manufacturing true 3D objects and of processing non-photosensitized (pure) glass-like materials with superb optical resilience. == Experimental methods == Photolithography has been defeating predictions of its demise for many years. For instance, by the early 1980s, many in the semiconductor industry had come to believe that features smaller than 1 micron could not be printed optically. Modern techniques using excimer laser lithography already print features with dimensions a fraction of the wavelength of light used – an amazing optical feat. New techniques such as immersion lithography, dual-tone resist and multiple patterning continue to improve the resolution of 193 nm lithography. Meanwhile, current research is exploring alternatives to conventional UV, such as electron beam lithography, X-ray lithography, extreme ultraviolet lithography and ion projection lithography. Extreme ultraviolet lithography entered mass-production use at Samsung in 2018, and other manufacturers have followed suit. Massively parallel electron beam lithography has been explored as an alternative to photolithography and was tested by TSMC, but it did not succeed; the technology of its main developer, MAPPER, was purchased by ASML. Electron beam lithography was at one point used in chip production by IBM, but it is now used only in niche applications such as photomask production. == Economy == A 2001 NIST publication reported that the photolithography process accounted for about 35% of total wafer processing costs. In 2021, the photolithography industry was valued at over 8 billion USD. == See also == Dip-pen nanolithography Soft lithography Magnetolithography Nanochannel glass materials Stereolithography, a macroscale process used to produce three-dimensional shapes Wafer foundry Chemistry of photolithography Computational lithography ASML Holding Alvéole Lab Semiconductor device fabrication == References == == External links == BYU Photolithography Resources Semiconductor Lithography – an overview of lithography Optical Lithography Introduction – IBM site with lithography-related articles
Wikipedia/Photolithographic
DNA separation by silica adsorption is a method of DNA separation that is based on DNA molecules binding to silica surfaces in the presence of certain salts and under certain pH conditions. == Operations == In order to separate DNA through silica adsorption, a sample is first lysed, releasing proteins, DNA, phospholipids, etc. from the cells. The remaining tissue is discarded. The supernatant containing the DNA is then exposed to silica in a solution with high ionic strength. The highest DNA adsorption efficiencies occur in the presence of a buffer solution with a pH at or below the pKa of the surface silanol groups. The mechanism behind DNA adsorption onto silica is not fully understood; one possible explanation involves reduction of the silica surface's negative charge due to the high ionic strength of the buffer. This decrease in surface charge leads to a decrease in the electrostatic repulsion between the negatively charged DNA and the negatively charged silica. Meanwhile, the buffer also reduces the activity of water by forming hydrated ions. This leads to the silica surface and DNA becoming dehydrated. These conditions lead to an energetically favorable situation for DNA to adsorb to the silica surface. A further explanation of how DNA binds to silica is based on the action of guanidinium chloride (GuHCl), which acts as a chaotrope. A chaotrope denatures biomolecules by disrupting the shell of hydration around them. This allows positively charged ions to form a salt bridge between the negatively charged silica and the negatively charged DNA backbone in high salt concentrations. The DNA can then be washed with high salt and ethanol, and ultimately eluted with low salt. After the DNA is bound to the silica, it is washed to remove contaminants and finally eluted using an elution buffer or distilled water. == See also == Spin column-based nucleic acid purification == References ==
Wikipedia/DNA_separation_by_silica_adsorption
Weighted correlation network analysis, also known as weighted gene co-expression network analysis (WGCNA), is a widely used data mining method, especially for studying biological networks based on pairwise correlations between variables. While it can be applied to most high-dimensional data sets, it has been most widely used in genomic applications. It allows one to define modules (clusters), intramodular hubs, and network nodes with regard to module membership, to study the relationships between co-expression modules, and to compare the network topology of different networks (differential network analysis). WGCNA can be used as a data reduction technique (related to oblique factor analysis), as a clustering method (fuzzy clustering), as a feature selection method (e.g. as a gene screening method), as a framework for integrating complementary (genomic) data (based on weighted correlations between quantitative variables), and as a data exploratory technique. Although WGCNA incorporates traditional data exploratory techniques, its intuitive network language and analysis framework transcend any standard analysis technique. Since it uses network methodology and is well suited for integrating complementary genomic data sets, it can be interpreted as a systems biology or systems genetics data analysis method. By selecting intramodular hubs in consensus modules, WGCNA also gives rise to network-based meta-analysis techniques. == History == The WGCNA method was developed by Steve Horvath, a professor of human genetics at the David Geffen School of Medicine at UCLA and of biostatistics at the UCLA Fielding School of Public Health, together with his colleagues at UCLA and (former) lab members (in particular Peter Langfelder, Bin Zhang, and Jun Dong). Much of the work arose from collaborations with applied researchers. In particular, weighted correlation networks were developed in joint discussions with the cancer researchers Paul Mischel and Stanley F. Nelson and the neuroscientists Daniel H. Geschwind and Michael C. Oldham, according to the acknowledgement section of the original publication. == Comparison between weighted and unweighted correlation networks == A weighted correlation network can be interpreted as a special case of a weighted network, dependency network or correlation network. Weighted correlation network analysis can be attractive for the following reasons: The network construction (based on soft thresholding the correlation coefficient) preserves the continuous nature of the underlying correlation information. For example, weighted correlation networks that are constructed on the basis of correlations between numeric variables do not require the choice of a hard threshold. Dichotomizing information and (hard) thresholding may lead to information loss. The network construction gives highly robust results with respect to different choices of the soft threshold. In contrast, results based on unweighted networks, constructed by thresholding a pairwise association measure, often strongly depend on the threshold. Weighted correlation networks facilitate a geometric interpretation based on the angular interpretation of the correlation. Resulting network statistics can be used to enhance standard data-mining methods such as cluster analysis, since (dis)similarity measures can often be transformed into weighted networks. WGCNA provides powerful module preservation statistics which can be used to quantify whether modules are preserved in another condition.
Module preservation statistics also allow one to study differences between the modular structure of networks. Weighted networks and correlation networks can often be approximated by "factorizable" networks. Such approximations are often difficult to achieve for sparse, unweighted networks. Therefore, weighted (correlation) networks allow for a parsimonious parametrization in terms of modules and module membership. == Method == First, one defines a gene co-expression similarity measure which is used to define the network. We denote the gene co-expression similarity measure of a pair of genes i and j by $s_{ij}$. Many co-expression studies use the absolute value of the correlation as an unsigned co-expression similarity measure, $s_{ij}^{unsigned} = |cor(x_i, x_j)|$, where the gene expression profiles $x_i$ and $x_j$ consist of the expression of genes i and j across multiple samples. However, using the absolute value of the correlation may obfuscate biologically relevant information, since no distinction is made between gene repression and activation. In contrast, in signed networks the similarity between genes reflects the sign of the correlation of their expression profiles. Varied transformation (or scaling) approaches can be considered if a signed co-expression measure between gene expression profiles $x_i$ and $x_j$ is needed. For example, one can (linearly) scale the correlations to be within the $[0, 1]$ range by performing a simple transformation of the correlations as follows: $s_{ij}^{signed} = 0.5 + 0.5\,cor(x_i, x_j)$. As with the unsigned measure $s_{ij}^{unsigned}$, the signed similarity $s_{ij}^{signed}$ takes on a value between 0 and 1. Note that the unsigned similarity between two oppositely expressed genes ($cor(x_i, x_j) = -1$) equals 1, while the signed similarity equals 0. Similarly, while the unsigned co-expression measure of two genes with zero correlation remains zero, the signed similarity equals 0.5. Next, an adjacency matrix (network), $A = [a_{ij}]$, is used to quantify how strongly genes are connected to one another. $A$ is defined by thresholding the co-expression similarity matrix $S = [s_{ij}]$. 'Hard' thresholding (dichotomizing) the similarity measure $S$ results in an unweighted gene co-expression network. Specifically, an unweighted network adjacency is defined to be 1 if $s_{ij} > \tau$ and 0 otherwise. Because hard thresholding encodes gene connections in a binary fashion, it can be sensitive to the choice of the threshold and result in the loss of co-expression information. The continuous nature of the co-expression information can be preserved by employing soft thresholding, which results in a weighted network. Specifically, WGCNA uses the following power function to assess connection strength: $a_{ij} = (s_{ij})^{\beta}$, where the power $\beta$ is the soft thresholding parameter. The default values $\beta = 6$ and $\beta = 12$ are used for unsigned and signed networks, respectively.
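The network-construction step just described can be sketched in a few lines of NumPy. This is a minimal illustration using random toy data, not the reference implementation; the official implementation is the WGCNA R package described later in the article, and the matrix sizes and variable names here are assumptions made for the example.

```python
import numpy as np

# Sketch of WGCNA network construction: pairwise correlations -> unsigned or
# signed similarities -> soft thresholding (raise to the power beta).
rng = np.random.default_rng(0)
expr = rng.normal(size=(20, 50))        # toy matrix: 20 samples x 50 genes

cor = np.corrcoef(expr, rowvar=False)   # 50 x 50 gene-gene Pearson correlations

s_unsigned = np.abs(cor)                # s_ij = |cor(x_i, x_j)|
s_signed = 0.5 + 0.5 * cor              # s_ij = 0.5 + 0.5 * cor(x_i, x_j)

beta_unsigned, beta_signed = 6, 12      # default soft-thresholding powers
a_unsigned = s_unsigned ** beta_unsigned
a_signed = s_signed ** beta_signed      # weighted adjacency matrices

# Hard thresholding, for comparison: a binary (unweighted) adjacency matrix.
tau = 0.8
a_hard = (s_unsigned > tau).astype(int)
```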
Alternatively, $\beta$ can be chosen using the scale-free topology criterion, which amounts to choosing the smallest value of $\beta$ such that approximate scale-free topology is reached. Since $log(a_{ij}) = \beta\, log(s_{ij})$, the weighted network adjacency is linearly related to the co-expression similarity on a logarithmic scale. Note that a high power $\beta$ transforms high similarities into high adjacencies, while pushing low similarities towards 0. Since this soft-thresholding procedure applied to a pairwise correlation matrix leads to a weighted adjacency matrix, the ensuing analysis is referred to as weighted gene co-expression network analysis. A major step in the module-centric analysis is to cluster genes into network modules using a network proximity measure. Roughly speaking, a pair of genes has a high proximity if they are closely interconnected. By convention, the maximal proximity between two genes is 1 and the minimum proximity is 0. Typically, WGCNA uses the topological overlap measure (TOM) as the proximity measure, which can also be defined for weighted networks. The TOM combines the adjacency of two genes and the connection strengths these two genes share with other "third party" genes. The TOM is a highly robust measure of network interconnectedness (proximity). This proximity is used as the input of average linkage hierarchical clustering. Modules are defined as branches of the resulting cluster tree using the dynamic branch cutting approach. Next, the genes inside a given module are summarized with the module eigengene, which can be considered the best summary of the standardized module expression data. The module eigengene of a given module is defined as the first principal component of the standardized expression profiles. Eigengenes define robust biomarkers and can be used as features in complex machine learning models such as Bayesian networks. To find modules that relate to a clinical trait of interest, module eigengenes are correlated with the clinical trait of interest, which gives rise to an eigengene significance measure. Eigengenes can also be used as features in more complex predictive models, including decision trees and Bayesian networks. One can also construct co-expression networks between module eigengenes (eigengene networks), i.e. networks whose nodes are modules. To identify intramodular hub genes inside a given module, one can use two types of connectivity measures. The first, referred to as $kME_i = cor(x_i, ME)$, is defined by correlating each gene with the respective module eigengene. The second, referred to as kIN, is defined as the sum of adjacencies with respect to the module genes. In practice, these two measures are equivalent. To test whether a module is preserved in another data set, one can use various network statistics, e.g. $Z_{summary}$. == Applications == WGCNA has been widely used for analyzing gene expression data (i.e. transcriptional data), e.g. to find intramodular hub genes. For example, a WGCNA study revealed that novel transcription factors are associated with the Bisphenol A (BPA) dose-response. WGCNA is often used as a data reduction step in systems genetics applications where modules are represented by "module eigengenes", which can, for example, be used to correlate modules with clinical traits. Eigengene networks are co-expression networks between module eigengenes (i.e. networks whose nodes are modules).
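The module eigengene, module membership (kME), and eigengene significance quantities described above can be sketched as follows. The toy expression matrix, sample size, clinical trait, and variable names are illustrative assumptions, not data from any real study.

```python
import numpy as np

# Sketch of the module eigengene, kME, and eigengene significance computations.
rng = np.random.default_rng(1)
expr_mod = rng.normal(size=(20, 30))                         # 20 samples x 30 genes in one module
expr_std = (expr_mod - expr_mod.mean(0)) / expr_mod.std(0)   # standardize each gene

# Module eigengene: first principal component of the standardized profiles,
# here obtained from the first left-singular vector of the sample-by-gene matrix.
u, s, vt = np.linalg.svd(expr_std, full_matrices=False)
module_eigengene = u[:, 0] * s[0]                            # one value per sample

# kME (module membership): correlation of each gene with the module eigengene.
kME = np.array([np.corrcoef(expr_std[:, j], module_eigengene)[0, 1]
                for j in range(expr_std.shape[1])])

# Eigengene significance: correlation of the eigengene with a clinical trait.
trait = rng.normal(size=20)                                  # toy clinical trait
eigengene_significance = np.corrcoef(module_eigengene, trait)[0, 1]
```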
WGCNA is widely used in neuroscientific applications and for analyzing genomic data, including microarray data, single-cell RNA-Seq data, DNA methylation data, miRNA data, peptide counts, and microbiota data (16S rRNA gene sequencing). Other applications include brain imaging data, e.g. functional MRI data. == R software package == The WGCNA R software package provides functions for carrying out all aspects of weighted network analysis (module construction, hub gene selection, module preservation statistics, differential network analysis, network statistics). The WGCNA package is available from the Comprehensive R Archive Network (CRAN), the standard repository for R add-on packages. == References ==
Wikipedia/Weighted_correlation_network_analysis
In molecular biology, hybridization (or hybridisation) is a phenomenon in which single-stranded deoxyribonucleic acid (DNA) or ribonucleic acid (RNA) molecules anneal to complementary DNA or RNA. Though a double-stranded DNA sequence is generally stable under physiological conditions, changing these conditions in the laboratory (generally by raising the surrounding temperature) will cause the molecules to separate into single strands. These strands are complementary to each other but may also be complementary to other sequences present in their surroundings. Lowering the surrounding temperature allows the single-stranded molecules to anneal or “hybridize” to each other. DNA replication and transcription of DNA into RNA both rely upon nucleotide hybridization, as do molecular biology techniques including Southern blots and Northern blots, the polymerase chain reaction (PCR), and most approaches to DNA sequencing. == Applications == Hybridization is a basic property of nucleotide sequences and is taken advantage of in numerous molecular biology techniques. Overall, genetic relatedness of two species can be determined by hybridizing segments of their DNA (DNA-DNA hybridization). Due to sequence similarity between closely related organisms, higher temperatures are required to melt such DNA hybrids when compared to more distantly related organisms. A variety of different methods use hybridization to pinpoint the origin of a DNA sample, including the polymerase chain reaction (PCR). In another technique, short DNA sequences are hybridized to cellular mRNAs to identify expressed genes. Pharmaceutical drug companies are exploring the use of antisense RNA to bind to undesired mRNA, preventing the ribosome from translating the mRNA into protein. === DNA-DNA hybridization === === Fluorescence In Situ Hybridization === Fluorescence in situ hybridization (FISH) is a laboratory method used to detect and locate a DNA sequence, often on a particular chromosome. In the 1960s, researchers Joseph Gall and Mary Lou Pardue found that molecular hybridization could be used to identify the position of DNA sequences in situ (i.e., in their natural positions within a chromosome). In 1969, the two scientists published a paper demonstrating that radioactive copies of a ribosomal DNA sequence could be used to detect complementary DNA sequences in the nucleus of a frog egg. Since those original observations, many refinements have increased the versatility and sensitivity of the procedure to the extent that in situ hybridization is now considered an essential tool in cytogenetics. == References == == External links == "James Watson, Francis Crick, Maurice Wilkins, and Rosalind Franklin". Science History Institute. Archived from the original on 21 March 2018. Retrieved 20 March 2018. In 1962 James Watson (b. 1928), Francis Crick (1916–2004), and Maurice Wilkins (1916–2004) jointly received the Nobel Prize in physiology or medicine for their 1953 determination of the structure of deoxyribonucleic acid (DNA). Southern hybridization & Northern hybridization
Wikipedia/DNA_hybridization
A DNA microarray (also commonly known as DNA chip or biochip) is a collection of microscopic DNA spots attached to a solid surface. Scientists use DNA microarrays to measure the expression levels of large numbers of genes simultaneously or to genotype multiple regions of a genome. Each DNA spot contains picomoles (10−12 moles) of a specific DNA sequence, known as probes (or reporters or oligos). These can be a short section of a gene or other DNA element that are used to hybridize a cDNA or cRNA (also called anti-sense RNA) sample (called target) under high-stringency conditions. Probe-target hybridization is usually detected and quantified by detection of fluorophore-, silver-, or chemiluminescence-labeled targets to determine relative abundance of nucleic acid sequences in the target. The original nucleic acid arrays were macro arrays approximately 9 cm × 12 cm and the first computerized image based analysis was published in 1981. It was invented by Patrick O. Brown. An example of its application is in SNPs arrays for polymorphisms in cardiovascular diseases, cancer, pathogens and GWAS analysis. It is also used for the identification of structural variations and the measurement of gene expression. == Principle == The core principle behind microarrays is hybridization between two DNA strands, the property of complementary nucleic acid sequences to specifically pair with each other by forming hydrogen bonds between complementary nucleotide base pairs. A high number of complementary base pairs in a nucleotide sequence means tighter non-covalent bonding between the two strands. After washing off non-specific bonding sequences, only strongly paired strands will remain hybridized. Fluorescently labeled target sequences that bind to a probe sequence generate a signal that depends on the hybridization conditions (such as temperature), and washing after hybridization. Total strength of the signal, from a spot (feature), depends upon the amount of target sample binding to the probes present on that spot. Microarrays use relative quantitation in which the intensity of a feature is compared to the intensity of the same feature under a different condition, and the identity of the feature is known by its position. == Uses and types == Many types of arrays exist and the broadest distinction is whether they are spatially arranged on a surface or on coded beads: The traditional solid-phase array is a collection of orderly microscopic "spots", called features, each with thousands of identical and specific probes attached to a solid surface, such as glass, plastic or silicon biochip (commonly known as a genome chip, DNA chip or gene array). Thousands of these features can be placed in known locations on a single DNA microarray. The alternative bead array is a collection of microscopic polystyrene beads, each with a specific probe and a ratio of two or more dyes, which do not interfere with the fluorescent dyes used on the target sequence. DNA microarrays can be used to detect DNA (as in comparative genomic hybridization), or detect RNA (most commonly as cDNA after reverse transcription) that may or may not be translated into proteins. The process of measuring gene expression via cDNA is called expression analysis or expression profiling. Applications include: Specialised arrays tailored to particular crops are becoming increasingly popular in molecular breeding applications. 
In the future they could be used to screen seedlings at early stages to lower the number of unneeded seedlings tried out in breeding operations. === Fabrication === Microarrays can be manufactured in different ways, depending on the number of probes under examination, costs, customization requirements, and the type of scientific question being asked. Arrays from commercial vendors may have as few as 10 probes or as many as 5 million or more micrometre-scale probes. === Spotted vs. in situ synthesised arrays === Microarrays can be fabricated using a variety of technologies, including printing with fine-pointed pins onto glass slides, photolithography using pre-made masks, photolithography using dynamic micromirror devices, ink-jet printing, or electrochemistry on microelectrode arrays. In spotted microarrays, the probes are oligonucleotides, cDNA or small fragments of PCR products that correspond to mRNAs. The probes are synthesized prior to deposition on the array surface and are then "spotted" onto glass. A common approach utilizes an array of fine pins or needles controlled by a robotic arm that is dipped into wells containing DNA probes and then depositing each probe at designated locations on the array surface. The resulting "grid" of probes represents the nucleic acid profiles of the prepared probes and is ready to receive complementary cDNA or cRNA "targets" derived from experimental or clinical samples. This technique is used by research scientists around the world to produce "in-house" printed microarrays in their own labs. These arrays may be easily customized for each experiment, because researchers can choose the probes and printing locations on the arrays, synthesize the probes in their own lab (or collaborating facility), and spot the arrays. They can then generate their own labeled samples for hybridization, hybridize the samples to the array, and finally scan the arrays with their own equipment. This provides a relatively low-cost microarray that may be customized for each study, and avoids the costs of purchasing often more expensive commercial arrays that may represent vast numbers of genes that are not of interest to the investigator. Publications exist which indicate in-house spotted microarrays may not provide the same level of sensitivity compared to commercial oligonucleotide arrays, possibly owing to the small batch sizes and reduced printing efficiencies when compared to industrial manufactures of oligo arrays. In oligonucleotide microarrays, the probes are short sequences designed to match parts of the sequence of known or predicted open reading frames. Although oligonucleotide probes are often used in "spotted" microarrays, the term "oligonucleotide array" most often refers to a specific technique of manufacturing. Oligonucleotide arrays are produced by printing short oligonucleotide sequences designed to represent a single gene or family of gene splice-variants by synthesizing this sequence directly onto the array surface instead of depositing intact sequences. Sequences may be longer (60-mer probes such as the Agilent design) or shorter (25-mer probes produced by Affymetrix) depending on the desired purpose; longer probes are more specific to individual target genes, shorter probes may be spotted in higher density across the array and are cheaper to manufacture. 
One technique used to produce oligonucleotide arrays is photolithographic synthesis (Affymetrix) on a silica substrate, where light and light-sensitive masking agents are used to "build" a sequence one nucleotide at a time across the entire array. Each applicable probe is selectively "unmasked" prior to bathing the array in a solution of a single nucleotide, then a masking reaction takes place and the next set of probes is unmasked in preparation for a different nucleotide exposure. After many repetitions, the sequences of every probe become fully constructed. More recently, Maskless Array Synthesis from NimbleGen Systems has combined flexibility with large numbers of probes. === Two-channel vs. one-channel detection === Two-color microarrays or two-channel microarrays are typically hybridized with cDNA prepared from two samples to be compared (e.g. diseased tissue versus healthy tissue), labeled with two different fluorophores. Fluorescent dyes commonly used for cDNA labeling include Cy3, which has a fluorescence emission wavelength of 570 nm (corresponding to the green part of the light spectrum), and Cy5, with a fluorescence emission wavelength of 670 nm (corresponding to the red part of the light spectrum). The two Cy-labeled cDNA samples are mixed and hybridized to a single microarray that is then scanned in a microarray scanner to visualize fluorescence of the two fluorophores after excitation with a laser beam of a defined wavelength. Relative intensities of each fluorophore may then be used in ratio-based analysis to identify up-regulated and down-regulated genes. Oligonucleotide microarrays often carry control probes designed to hybridize with RNA spike-ins. The degree of hybridization between the spike-ins and the control probes is used to normalize the hybridization measurements for the target probes. Although absolute levels of gene expression may be determined in the two-color array in rare instances, the relative differences in expression among different spots within a sample and between samples is the preferred method of data analysis for the two-color system. Examples of providers for such microarrays include Agilent with their Dual-Mode platform, Eppendorf with their DualChip platform for colorimetric Silverquant labeling, and TeleChem International with Arrayit. In single-channel microarrays or one-color microarrays, the arrays provide intensity data for each probe or probe set, indicating a relative level of hybridization with the labeled target. However, they do not truly indicate abundance levels of a gene but rather relative abundance when compared to other samples or conditions processed in the same experiment. Each RNA molecule encounters protocol- and batch-specific bias during the amplification, labeling, and hybridization phases of the experiment, making comparisons between genes for the same microarray uninformative. The comparison of two conditions for the same gene requires two separate single-dye hybridizations. Several popular single-channel systems are the Affymetrix "Gene Chip", Illumina "Bead Chip", Agilent single-channel arrays, the Applied Microarrays "CodeLink" arrays, and the Eppendorf "DualChip & Silverquant".
One strength of the single-dye system lies in the fact that an aberrant sample cannot affect the raw data derived from other samples, because each array chip is exposed to only one sample (as opposed to a two-color system in which a single low-quality sample may drastically impinge on overall data precision even if the other sample was of high quality). Another benefit is that data are more easily compared to arrays from different experiments as long as batch effects have been accounted for. A one-channel microarray may be the only choice in some situations. Suppose $i$ samples need to be compared: then the number of experiments required using two-channel arrays quickly becomes unfeasible, unless a sample is used as a reference (for example, comparing 20 samples in all pairwise combinations would require 190 two-channel hybridizations, whereas a common-reference design needs only 20). === A typical protocol === This is an example of a DNA microarray experiment which includes details for a particular case to better explain DNA microarray experiments, while listing modifications for RNA or other alternative experiments. The two samples to be compared (pairwise comparison) are grown/acquired; in this example, a treated sample (case) and an untreated sample (control). The nucleic acid of interest is purified: this can be RNA for expression profiling, DNA for comparative hybridization, or DNA/RNA bound to a particular protein which is immunoprecipitated (ChIP-on-chip) for epigenetic or regulation studies. In this example, total RNA is isolated (both nuclear and cytoplasmic) by guanidinium thiocyanate-phenol-chloroform extraction (e.g. Trizol), which isolates most RNA (whereas column methods have a cut-off of 200 nucleotides) and, if done correctly, has a better purity. The purified RNA is analysed for quality (by capillary electrophoresis) and quantity (for example, by using a NanoDrop or NanoPhotometer spectrometer). If the material is of acceptable quality and sufficient quantity is present (e.g., >1 μg, although the required amount varies by microarray platform), the experiment can proceed. The labeled product is generated via reverse transcription, followed by an optional PCR amplification. The RNA is reverse transcribed with either polyT primers (which amplify only mRNA) or random primers (which amplify all RNA, most of which is rRNA). miRNA microarrays ligate an oligonucleotide to the purified small RNA (isolated with a fractionator), which is then reverse transcribed and amplified. The label is added either during the reverse transcription step, or following amplification if it is performed. The sense labeling is dependent on the microarray; e.g. if the label is added with the RT mix, the cDNA is antisense and the microarray probe is sense, except in the case of negative controls. The label is typically fluorescent; only one machine uses radiolabels. The labeling can be direct (not used) or indirect (requires a coupling stage). For two-channel arrays, the coupling stage occurs before hybridization, using aminoallyl uridine triphosphate (aminoallyl-UTP, or aaUTP) and NHS amino-reactive dyes (such as cyanine dyes); for single-channel arrays, the coupling stage occurs after hybridization, using biotin and labeled streptavidin. The modified nucleotides (usually in a ratio of 1 aaUTP : 4 TTP (thymidine triphosphate)) are added enzymatically in a low ratio to normal nucleotides, typically resulting in 1 every 60 bases. The aaDNA is then purified with a column (using a phosphate buffer solution, as Tris contains amine groups).
The aminoallyl group is an amine group on a long linker attached to the nucleobase, which reacts with a reactive dye. A form of replicate known as a dye flip can be performed to control for dye artifacts in two-channel experiments; for a dye flip, a second slide is used, with the labels swapped (the sample that was labeled with Cy3 in the first slide is labeled with Cy5, and vice versa). In this example, aminoallyl-UTP is present in the reverse-transcribed mixture. The labeled samples are then mixed with a proprietary hybridization solution which can consist of SDS, SSC, dextran sulfate, a blocking agent (such as Cot-1 DNA, salmon sperm DNA, calf thymus DNA, PolyA, or PolyT), Denhardt's solution, or formamine. The mixture is denatured and added to the pinholes of the microarray. The holes are sealed and the microarray hybridized, either in a hyb oven, where the microarray is mixed by rotation, or in a mixer, where the microarray is mixed by alternating pressure at the pinholes. After an overnight hybridization, all nonspecific binding is washed off (SDS and SSC). The microarray is dried and scanned by a machine that uses a laser to excite the dye and measures the emission levels with a detector. The image is gridded with a template and the intensities of each feature (composed of several pixels) is quantified. The raw data is normalized; the simplest normalization method is to subtract background intensity and scale so that the total intensities of the features of the two channels are equal, or to use the intensity of a reference gene to calculate the t-value for all of the intensities. More sophisticated methods include z-ratio, loess and lowess regression and RMA (robust multichip analysis) for Affymetrix chips (single-channel, silicon chip, in situ synthesized short oligonucleotides). == Microarrays and bioinformatics == The advent of inexpensive microarray experiments created several specific bioinformatics challenges: the multiple levels of replication in experimental design (Experimental design); the number of platforms and independent groups and data format (Standardization); the statistical treatment of the data (Data analysis); mapping each probe to the mRNA transcript that it measures (Annotation); the sheer volume of data and the ability to share it (Data warehousing). === Experimental design === Due to the biological complexity of gene expression, the considerations of experimental design that are discussed in the expression profiling article are of critical importance if statistically and biologically valid conclusions are to be drawn from the data. There are three main elements to consider when designing a microarray experiment. First, replication of the biological samples is essential for drawing conclusions from the experiment. Second, technical replicates (e.g. two RNA samples obtained from each experimental unit) may help to quantitate precision. The biological replicates include independent RNA extractions. Technical replicates may be two aliquots of the same extraction. Third, spots of each cDNA clone or oligonucleotide are present as replicates (at least duplicates) on the microarray slide, to provide a measure of technical precision in each hybridization. It is critical that information about the sample preparation and handling is discussed, in order to help identify the independent units in the experiment and to avoid inflated estimates of statistical significance. 
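The simple normalization described at the end of the protocol above (background subtraction, scaling the two channels to equal totals, then working with log ratios) can be sketched as follows. The spot intensities are made-up toy values, and the variable names are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of simple two-channel normalization: subtract background,
# scale one channel so the total intensities match, then compute log ratios
# (M values) and mean log intensities (A values, as used in an MA plot).
red_fg, red_bg = np.array([1200., 800., 300.]), np.array([100., 90., 80.])
green_fg, green_bg = np.array([900., 850., 500.]), np.array([110., 95., 85.])

red = np.clip(red_fg - red_bg, 1, None)       # background subtraction
green = np.clip(green_fg - green_bg, 1, None)

green *= red.sum() / green.sum()              # scale so channel totals are equal

M = np.log2(red / green)                      # log ratio per spot (up/down regulation)
A = 0.5 * np.log2(red * green)                # mean log intensity per spot
```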
=== Standardization === Microarray data is difficult to exchange due to the lack of standardization in platform fabrication, assay protocols, and analysis methods. This presents an interoperability problem in bioinformatics. Various grass-roots open-source projects are trying to ease the exchange and analysis of data produced with non-proprietary chips: For example, the "Minimum Information About a Microarray Experiment" (MIAME) checklist helps define the level of detail that should exist and is being adopted by many journals as a requirement for the submission of papers incorporating microarray results. But MIAME does not describe the format for the information, so while many formats can support the MIAME requirements, as of 2007 no format permits verification of complete semantic compliance. The "MicroArray Quality Control (MAQC) Project" is being conducted by the US Food and Drug Administration (FDA) to develop standards and quality control metrics which will eventually allow the use of MicroArray data in drug discovery, clinical practice and regulatory decision-making. The MGED Society has developed standards for the representation of gene expression experiment results and relevant annotations. === Data analysis === Microarray data sets are commonly very large, and analytical precision is influenced by a number of variables. Statistical challenges include taking into account effects of background noise and appropriate normalization of the data. Normalization methods may be suited to specific platforms and, in the case of commercial platforms, the analysis may be proprietary. Algorithms that affect statistical analysis include: Image analysis: gridding, spot recognition of the scanned image (segmentation algorithm), removal or marking of poor-quality and low-intensity features (called flagging). Data processing: background subtraction (based on global or local background), determination of spot intensities and intensity ratios, visualisation of data (e.g. see MA plot), and log-transformation of ratios, global or local normalization of intensity ratios, and segmentation into different copy number regions using step detection algorithms. Class discovery analysis: This analytic approach, sometimes called unsupervised classification or knowledge discovery, tries to identify whether microarrays (objects, patients, mice, etc.) or genes cluster together in groups. Identifying naturally existing groups of objects (microarrays or genes) which cluster together can enable the discovery of new groups that otherwise were not previously known to exist. During knowledge discovery analysis, various unsupervised classification techniques can be employed with DNA microarray data to identify novel clusters (classes) of arrays. This type of approach is not hypothesis-driven, but rather is based on iterative pattern recognition or statistical learning methods to find an "optimal" number of clusters in the data. Examples of unsupervised analyses methods include self-organizing maps, neural gas, k-means cluster analyses, hierarchical cluster analysis, Genomic Signal Processing based clustering and model-based cluster analysis. For some of these methods the user also has to define a distance measure between pairs of objects. Although the Pearson correlation coefficient is usually employed, several other measures have been proposed and evaluated in the literature. 
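As a minimal illustration of the class discovery step just described, the sketch below clusters genes hierarchically using 1 − Pearson correlation as the dissimilarity measure. The random toy matrix and the choice of four clusters are illustrative assumptions, not a recommended analysis pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Unsupervised class discovery sketch: average-linkage hierarchical clustering
# of genes, with 1 - Pearson correlation as the distance between gene profiles.
rng = np.random.default_rng(0)
expr = rng.normal(size=(40, 12))           # 40 genes x 12 arrays (samples)

cor = np.corrcoef(expr)                    # gene-gene Pearson correlations
dist = 1.0 - cor                           # dissimilarity between gene profiles

# Condense the symmetric distance matrix and build the cluster tree.
iu = np.triu_indices_from(dist, k=1)
tree = linkage(dist[iu], method="average")
labels = fcluster(tree, t=4, criterion="maxclust")   # cut the tree into 4 clusters
```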
The input data used in class discovery analyses are commonly based on lists of genes having high informativeness (low noise), selected for example by low values of the coefficient of variation or high values of Shannon entropy. The determination of the most likely or optimal number of clusters obtained from an unsupervised analysis is called cluster validity. Some commonly used metrics for cluster validity are the silhouette index, the Davies–Bouldin index, Dunn's index, and Hubert's $\Gamma$ statistic. Class prediction analysis: This approach, called supervised classification, establishes the basis for developing a predictive model into which future unknown test objects can be input in order to predict the most likely class membership of the test objects. Supervised analysis for class prediction involves the use of techniques such as linear regression, k-nearest neighbors, learning vector quantization, decision tree analysis, random forests, naive Bayes, logistic regression, kernel regression, artificial neural networks, support vector machines, mixture of experts, and supervised neural gas. In addition, various metaheuristic methods are employed, such as genetic algorithms, covariance matrix self-adaptation, particle swarm optimization, and ant colony optimization. Input data for class prediction are usually based on filtered lists of genes which are predictive of class, determined using classical hypothesis tests (next section), the Gini diversity index, or information gain (entropy). Hypothesis-driven statistical analysis: Identification of statistically significant changes in gene expression is commonly carried out using the t-test, ANOVA, Bayesian methods, or Mann–Whitney test methods tailored to microarray data sets, which take into account multiple comparisons or cluster analysis. These methods assess statistical power based on the variation present in the data and the number of experimental replicates, and can help minimize type I and type II errors in the analyses. Dimensional reduction: Analysts often reduce the number of dimensions (genes) prior to data analysis. This may involve linear approaches such as principal components analysis (PCA), or non-linear manifold learning (distance metric learning) using kernel PCA, diffusion maps, Laplacian eigenmaps, local linear embedding, locally preserving projections, and Sammon's mapping. Network-based methods: Statistical methods that take the underlying structure of gene networks into account, representing either associative or causative interactions or dependencies among gene products. Weighted gene co-expression network analysis is widely used for identifying co-expression modules and intramodular hub genes. Modules may correspond to cell types or pathways. Highly connected intramodular hubs best represent their respective modules. Microarray data may require further processing aimed at reducing the dimensionality of the data to aid comprehension and more focused analysis. Other methods permit analysis of data consisting of a low number of biological or technical replicates; for example, the Local Pooled Error (LPE) test pools standard deviations of genes with similar expression levels in an effort to compensate for insufficient replication. === Annotation === The relation between a probe and the mRNA that it is expected to detect is not trivial. Some mRNAs may cross-hybridize with probes in the array that are supposed to detect another mRNA. In addition, mRNAs may experience amplification bias that is sequence- or molecule-specific.
Thirdly, probes that are designed to detect the mRNA of a particular gene may be relying on genomic EST information that is incorrectly associated with that gene. === Data warehousing === Microarray data was found to be more useful when compared to other similar datasets. The sheer volume of data, specialized formats (such as MIAME), and curation efforts associated with the datasets require specialized databases to store the data. A number of open-source data warehousing solutions, such as InterMine and BioMart, have been created for the specific purpose of integrating diverse biological datasets, and also support analysis. == Alternative technologies == Advances in massively parallel sequencing has led to the development of RNA-Seq technology, that enables a whole transcriptome shotgun approach to characterize and quantify gene expression. Unlike microarrays, which need a reference genome and transcriptome to be available before the microarray itself can be designed, RNA-Seq can also be used for new model organisms whose genome has not been sequenced yet. == Glossary == An array or slide is a collection of features spatially arranged in a two dimensional grid, arranged in columns and rows. Block or subarray: a group of spots, typically made in one print round; several subarrays/ blocks form an array. Case/control: an experimental design paradigm especially suited to the two-colour array system, in which a condition chosen as control (such as healthy tissue or state) is compared to an altered condition (such as a diseased tissue or state). Channel: the fluorescence output recorded in the scanner for an individual fluorophore and can even be ultraviolet. Dye flip or dye swap or fluor reversal: reciprocal labelling of DNA targets with the two dyes to account for dye bias in experiments. Scanner: an instrument used to detect and quantify the intensity of fluorescence of spots on a microarray slide, by selectively exciting fluorophores with a laser and measuring the fluorescence with a filter (optics) photomultiplier system. Spot or feature: a small area on an array slide that contains picomoles of specific DNA samples. For other relevant terms see: Glossary of gene expression terms Protocol (natural sciences) == See also == == References == == External links == Microarray Animation 1Lec.com PLoS Biology Primer: Microarray Analysis Rundown of microarray technology ArrayMining.net – a free web-server for online microarray analysis Microarray – How does it work? PNAS Commentary: Discovery of Principles of Nature from Mathematical Modeling of DNA Microarray Data DNA microarray virtual experiment
Wikipedia/DNA_microarray_experiment
In genetics, complementary DNA (cDNA) is DNA that was reverse transcribed (via reverse transcriptase) from an RNA (e.g., messenger RNA or microRNA). cDNA exists in both single-stranded and double-stranded forms and in both natural and engineered forms. In engineered forms, it is often a copy (replicate) of the naturally occurring DNA from any particular organism's natural genome; the organism's own mRNA was naturally transcribed from its DNA, and the cDNA is reverse transcribed from the mRNA, yielding a duplicate of the original DNA. Engineered cDNA is often used to express a specific protein in a cell that does not normally express that protein (i.e., heterologous expression), or to sequence or quantify mRNA molecules using DNA based methods (qPCR, RNA-seq). cDNA that codes for a specific protein can be transferred to a recipient cell for expression as part of recombinant DNA, often in bacterial or yeast expression systems. cDNA is also generated to analyze transcriptomic profiles in bulk tissue, single cells, or single nuclei in assays such as microarrays, qPCR, and RNA-seq. In natural forms, cDNA is produced by retroviruses (such as HIV-1, HIV-2, simian immunodeficiency virus, etc.) and then integrated into the host's genome, where it creates a provirus. The term cDNA is also used, typically in a bioinformatics context, to refer to an mRNA transcript's sequence, expressed as DNA bases (deoxy-GCAT) rather than RNA bases (GCAU). Patentability of cDNA was a subject of a 2013 US Supreme Court decision in Association for Molecular Pathology v. Myriad Genetics, Inc. As a compromise, the Court declared that exon-only cDNA is patent-eligible, whereas isolated sequences of naturally occurring DNA comprising introns are not. == Synthesis == RNA serves as a template for cDNA synthesis. In cellular life, cDNA is generated by viruses and retrotransposons for integration of RNA into target genomic DNA. In molecular biology, RNA is purified from source material after genomic DNA, proteins and other cellular components are removed. cDNA is then synthesized through in vitro reverse transcription. === RNA purification === RNA is transcribed from genomic DNA in host cells and is extracted by first lysing cells and then purifying RNA using widely used methods such as phenol-chloroform, silica column, and bead-based RNA extraction methods. Extraction methods vary depending on the source material. For example, extracting RNA from plant tissue requires additional reagents, such as polyvinylpyrrolidone (PVP), to remove phenolic compounds, carbohydrates, and other compounds that will otherwise render RNA unusable. To remove DNA and proteins, enzymes such as DNase and Proteinase K are used for degradation. Importantly, RNA integrity is maintained by inactivating RNases with chaotropic agents such as guanidinium isothiocyanate, sodium dodecyl sulphate (SDS), phenol or chloroform. Total RNA is then separated from other cellular components and precipitated with alcohol. Various commercial kits exist for simple and rapid RNA extractions for specific applications. Additional bead-based methods can be used to isolate specific sub-types of RNA (e.g. mRNA and microRNA) based on size or unique RNA regions. === Reverse transcription === ==== First-strand synthesis ==== Using a reverse transcriptase enzyme and purified RNA templates, one strand of cDNA is produced (first-strand cDNA synthesis).
The M-MLV reverse transcriptase from the Moloney murine leukemia virus is commonly used due to its reduced RNase H activity, which makes it suited for transcription of longer RNAs. The AMV reverse transcriptase from the avian myeloblastosis virus may also be used for RNA templates with strong secondary structures (i.e. high melting temperature). cDNA is commonly generated from mRNA for gene expression analyses such as RT-qPCR and RNA-seq. mRNA is selectively reverse transcribed using oligo-dT primers that are the reverse complement of the poly-adenylated tail on the 3' end of all mRNA. The oligo-dT primer anneals to the poly-adenylated tail of the mRNA to serve as a binding site for the reverse transcriptase to begin reverse transcription. An optimized mixture of oligo-dT and random hexamer primers increases the chance of obtaining full-length cDNA while reducing 5' or 3' bias. Ribosomal RNA may also be depleted to enrich both mRNA and non-poly-adenylated transcripts such as some non-coding RNA. ==== Second-strand synthesis ==== The result of first-strand syntheses, RNA-DNA hybrids, can be processed through multiple second-strand synthesis methods or processed directly in downstream assays. An early method known as hairpin-primed synthesis relied on hairpin formation on the 3' end of the first-strand cDNA to prime second-strand synthesis. However, priming is random and hairpin hydrolysis leads to loss of information. The Gubler and Hoffman procedure uses E. coli RNase H to nick the mRNA, which is then replaced with DNA by E. coli DNA polymerase I and sealed with E. coli DNA ligase. An optimization of this procedure relies on the low RNase H activity of M-MLV to nick the mRNA, with the remaining RNA later removed by adding RNase H after the DNA polymerase has completed the second-strand cDNA. This prevents lost sequence information at the 5' end of the mRNA. == Applications == Complementary DNA is often used in gene cloning or as gene probes or in the creation of a cDNA library. When scientists transfer a gene from one cell into another cell in order to express the new genetic material as a protein in the recipient cell, the cDNA will be added to the recipient (rather than the entire gene), because the DNA for an entire gene may include DNA that does not code for the protein or that interrupts the coding sequence of the protein (e.g., introns). Partial sequences of cDNAs are often obtained as expressed sequence tags. With amplification of DNA sequences via polymerase chain reaction (PCR) now commonplace, one will typically conduct reverse transcription as an initial step, followed by PCR to obtain an exact sequence of cDNA for intra-cellular expression. This is achieved by designing sequence-specific DNA primers that hybridize to the 5' and 3' ends of a cDNA region coding for a protein. Once amplified, the sequence can be cut at each end with nucleases and inserted into one of many small circular DNA sequences known as expression vectors. Such vectors allow for self-replication inside the cells and, potentially, integration into the host DNA. They typically also contain a strong promoter to drive transcription of the target cDNA into mRNA, which is then translated into protein. cDNA is also used to study gene expression via methods such as RNA-seq or RT-qPCR. For sequencing, RNA must be fragmented due to sequencing platform size limitations. Additionally, second-strand synthesized cDNA must be ligated with adapters that allow cDNA fragments to be PCR amplified and bind to sequencing flow cells.
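To make the base-pairing logic of first-strand synthesis concrete, the following is a hedged, toy Python sketch of in silico reverse transcription: an oligo-dT primer is assumed to anneal to the poly(A) tail, and the mRNA is copied into its reverse-complement DNA strand. The example sequence and the minimum tail length are invented for illustration; the sketch mimics only the template relationship, not the enzymology.

```python
# Toy illustration of first-strand cDNA synthesis: complement each RNA base into its
# DNA partner and reverse the order, so the cDNA reads 5'->3'.
RT_COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def first_strand_cdna(mrna, min_poly_a=5):
    """Return the first-strand cDNA (5'->3') for a poly(A)+ mRNA, or None if no tail."""
    if not mrna.endswith("A" * min_poly_a):
        return None                       # oligo-dT priming needs a poly(A) tail
    return "".join(RT_COMPLEMENT[base] for base in reversed(mrna))

mrna = "AUGGCCAUUGUAAUGGGCCGCUGAAAAAAA"    # invented transcript with a poly(A) tail
print(first_strand_cdna(mrna))             # TTTTTTTCAGCGGCCCATTACAATGGCCAT
```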
Gene-specific analysis methods commonly use microarrays and RT-qPCR to quantify cDNA levels via fluorometric and other methods. On 13 June 2013, the United States Supreme Court ruled in the case of Association for Molecular Pathology v. Myriad Genetics that while naturally occurring genes cannot be patented, cDNA is patent-eligible because it does not occur naturally. == Viruses and retrotransposons == Some viruses also use cDNA to turn their viral RNA into mRNA (viral RNA → cDNA → mRNA). The mRNA is used to make viral proteins to take over the host cell. An example of this first step from viral RNA to cDNA can be seen in the HIV cycle of infection. Here, the host cell membrane becomes attached to the virus' lipid envelope, which allows the viral capsid with two copies of viral genome RNA to enter the host. The cDNA copy is then made through reverse transcription of the viral RNA, a process facilitated by the chaperone CypA and a viral capsid-associated reverse transcriptase. cDNA is also generated by retrotransposons in eukaryotic genomes. Retrotransposons are mobile genetic elements that move themselves within, and sometimes between, genomes via RNA intermediates. This mechanism is shared with viruses, with the exception that no infectious particles are generated. == See also == cDNA library – Type of DNA library cDNA microarray – Collection of microscopic DNA spots attached to a solid surface RNA-Seq – Lab technique in cellular biology Real-time polymerase chain reaction – Laboratory technique of molecular biology (RT-qPCR) == References == Mark D. Adams et al. "Complementary DNA Sequencing: Expressed Sequence Tags and Human Genome Project." Science (American Association for the Advancement of Science) 252.5013 (1991): 1651–1656. Philip M. Murphy and H. Lee Tiffany. "Cloning of Complementary DNA Encoding a Functional Human Interleukin-8 Receptor." Science (American Association for the Advancement of Science) 253.5025 (1991): 1280–1283. == External links == H-Invitational Database Functional Annotation of the Mouse database Complementary DNA tool
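As a small, hedged illustration of how RT-qPCR measurements on cDNA are commonly converted into relative expression values, the sketch below applies the widely used 2^-ΔΔCt (Livak) calculation. The Ct values and the reference-gene choice are invented for the example; real analyses also require technical replicates and amplification-efficiency checks.

```python
# Relative quantification of cDNA by the 2^-ddCt method (illustrative numbers only).
def fold_change(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
    """Relative expression of a gene of interest, normalized to a reference gene."""
    d_ct_treated = ct_gene_treated - ct_ref_treated
    d_ct_control = ct_gene_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Gene of interest vs. a housekeeping reference, treated vs. control condition:
print(fold_change(ct_gene_treated=22.1, ct_ref_treated=18.0,
                  ct_gene_control=24.3, ct_ref_control=18.2))   # ~4-fold up-regulation
```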
Wikipedia/CDNA
Radioactivity is generally used in life sciences for highly sensitive and direct measurements of biological phenomena, and for visualizing the location of biomolecules radiolabelled with a radioisotope. All atoms exist as stable or unstable isotopes and the latter decay at a given half-life ranging from attoseconds to billions of years; radioisotopes useful to biological and experimental systems have half-lives ranging from minutes to months. The hydrogen isotope tritium (half-life = 12.3 years) and carbon-14 (half-life = 5,730 years) derive their importance from the fact that all organic life contains hydrogen and carbon; they can therefore be used to study countless living processes, reactions, and phenomena. Most short-lived isotopes are produced in cyclotrons, linear particle accelerators, or nuclear reactors and their relatively short half-lives give them high maximum theoretical specific activities, which is useful for detection in biological systems. Radiolabeling is a technique used to track the passage of a molecule that incorporates a radioisotope through a reaction, metabolic pathway, cell, tissue, organism, or biological system. The reactant is 'labeled' by replacing specific atoms with their radioisotope. Replacing an atom with its own radioisotope is an intrinsic label that does not alter the structure of the molecule. Alternatively, molecules can be radiolabeled by chemical reactions that introduce an atom, moiety, or functional group that contains a radionuclide. For example, radio-iodination of peptides and proteins with biologically useful iodine isotopes is easily done by an oxidation reaction that replaces the hydroxyl group with iodine on tyrosine and histidine residues. Another example is to use chelators such as DOTA that can be chemically coupled to a protein; the chelator in turn traps radiometals, thus radiolabeling the protein. This has been used for introducing yttrium-90 onto a monoclonal antibody for therapeutic purposes and for introducing gallium-68 onto the peptide octreotide for diagnostic imaging by PET imaging. (See DOTA uses.) Radiolabeling is not necessary for some applications. For some purposes, soluble ionic salts can be used directly without further modification (e.g., gallium-67, gallium-68, and radioiodine isotopes). These uses rely on the chemical and biological properties of the radioisotope itself, to localize it within the organism or biological system. Molecular imaging is the biomedical field that employs radiotracers to visualize and quantify biological processes using positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging. Again, a key feature of using radioactivity in life science applications is that it is a quantitative technique, so PET/SPECT not only reveals where a radiolabelled molecule is but also how much is there. Radiobiology (also known as radiation biology) is a field of clinical and basic medical sciences that involves the study of the action of radioactivity on biological systems. The controlled action of deleterious radioactivity on living systems is the basis of radiation therapy. == Examples of biologically useful radionuclides == === Hydrogen === Tritium (hydrogen-3) is a very low beta energy emitter that can be used to label proteins, nucleic acids, drugs and almost any organic biomolecule. The maximum theoretical specific activity of tritium is 28.8 kCi/mol (1,070 TBq/mol).
However, there is often more than one tritium atom per molecule: for example, tritiated UTP is sold by most suppliers with carbons 5 and 6 each bonded to a tritium atom. For tritium detection, liquid scintillation counters have been classically employed, in which the energy of a tritium decay is transferred to a scintillant molecule in solution which in turn gives off photons whose intensity and spectrum can be measured by a photomultiplier array. The efficiency of this process is 4–50%, depending on the scintillation cocktail used. The measurements are typically expressed in counts per minute (CPM) or disintegrations per minute (DPM). Alternatively, a solid-state, tritium-specific phosphor screen can be used together with a phosphorimager to measure and simultaneously image the radiotracer. Measurements/images are digital in nature and can be expressed in intensity or densitometry units within a region of interest (ROI). === Carbon === Carbon-14 has a long half-life of 5730±40 years. Its maximum specific activity is 0.0624 kCi/mol (2.31 TBq/mol). It is used in applications such as radiometric dating or drug tests. Carbon-14 labeling is common in drug development to do ADME (absorption, distribution, metabolism and excretion) studies in animal models and in human toxicology and clinical trials. Whereas tritium exchange may occur in some radiolabeled compounds, this does not happen with carbon-14, which may therefore be preferred. === Sodium === Sodium-22 and chlorine-36 are commonly used to study ion transporters. However, sodium-22 is hard to screen off and chlorine-36, with a half-life of 300,000 years, has low activity. === Sulfur === Sulfur-35 is used to label proteins and nucleic acids. Cysteine is an amino acid containing a thiol group which can be labeled by sulfur-35. For nucleotides that do not contain a sulfur group, the oxygen on one of the phosphate groups can be substituted with a sulfur. This thiophosphate acts the same as a normal phosphate group, although there is a slight bias against it by most polymerases. The maximum theoretical specific activity is 1,494 kCi/mol (55.3 PBq/mol). === Phosphorus === Phosphorus-32 is widely used for labeling nucleic acids and phosphoproteins. It has the highest emission energy (1.7 MeV) of all common research radioisotopes. This is a major advantage in experiments for which sensitivity is a primary consideration, such as titrations of very strong interactions (i.e., very low dissociation constant), footprinting experiments, and detection of low-abundance phosphorylated species. Phosphorus-32 is also relatively inexpensive. Because of its high energy, however, its safe use requires a number of engineering controls (e.g., acrylic glass) and administrative controls. The half-life of phosphorus-32 is 14.2 days, and its maximum specific activity is 9,131 kCi/mol (337.8 PBq/mol). Phosphorus-33 is used to label nucleotides. It is less energetic than phosphorus-32 and does not require protection with plexiglass. A disadvantage is its higher cost compared to phosphorus-32, as most of the bombarded phosphorus-31 will have acquired only one neutron, while only some will have acquired two or more. Its maximum specific activity is 5,118 kCi/mol (189.4 PBq/mol). === Iodine === Iodine-125 is commonly used for labeling proteins, usually at tyrosine residues. Unbound iodine is volatile and must be handled in a fume hood. Its maximum specific activity is 2,176 kCi/mol (80.5 PBq/mol).
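The maximum theoretical specific activities quoted above follow directly from the half-lives: for a carrier-free radionuclide with one radioactive atom per molecule, the molar activity is A = λ·N_A with λ = ln 2 / t½. The hedged Python sketch below approximately reproduces the listed figures (small differences reflect the exact half-life values used); the half-lives for S-35, P-33 and I-125 are nominal values assumed for the example rather than taken from the text.

```python
# Sketch: maximum theoretical molar specific activity from half-life,
# assuming exactly one radioactive atom per molecule (A = lambda * N_A).
import math

N_A = 6.02214e23        # Avogadro constant, atoms per mole
BQ_PER_CI = 3.7e10      # becquerels per curie
YEAR, DAY = 365.25 * 86400, 86400

def specific_activity(half_life_s):
    """Return (Bq/mol, kCi/mol) for a carrier-free radionuclide."""
    lam = math.log(2) / half_life_s
    bq_per_mol = lam * N_A
    return bq_per_mol, bq_per_mol / BQ_PER_CI / 1e3

isotopes = {
    "H-3":   12.3 * YEAR,   # H-3, C-14 and P-32 half-lives as given in the text;
    "C-14":  5730 * YEAR,   # S-35 (87.4 d), P-33 (25.3 d) and I-125 (59.4 d) are
    "S-35":  87.4 * DAY,    # assumed nominal values
    "P-32":  14.3 * DAY,
    "P-33":  25.3 * DAY,
    "I-125": 59.4 * DAY,
}
for name, t_half in isotopes.items():
    bq, kci = specific_activity(t_half)
    print(f"{name:>5}: {bq:9.3e} Bq/mol  ~{kci:10,.1f} kCi/mol")
```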
A good example of the difference in energy of the various radionuclides is the detection window ranges used to detect them, which are generally proportional to the energy of the emission, but vary from machine to machine: in a PerkinElmer TriLux beta scintillation counter, the hydrogen-3 energy range window is between channel 5–360; carbon-14, sulfur-35 and phosphorus-33 are in the window of 361–660; and phosphorus-32 is in the window of 661–1024. == Detection == === Quantitative === In liquid scintillation counting, a small aliquot, filter or swab is added to scintillation fluid and the plate or vial is placed in a scintillation counter to measure the radioactive emissions. Manufacturers have incorporated solid scintillants into multi-well plates to eliminate the need for scintillation fluid and make this into a high-throughput technique. A gamma counter is similar in format to scintillation counting but it detects gamma emissions directly and does not require a scintillant. A Geiger counter gives a quick and rough approximation of activity. Lower-energy emitters such as tritium cannot be detected. === Qualitative and quantitative === Autoradiography: A tissue section affixed to a microscope slide or a membrane such as a Northern blot or a hybridized slot blot can be placed against x-ray film or phosphor screens to acquire a photographic or digital image. The density of exposure, if calibrated, can supply exacting quantitative information. Phosphor storage screen: The slide or membrane is placed against a phosphor screen which is then scanned in a phosphorimager. This is many times faster than film/emulsion techniques and outputs data in a digital form, thus it has largely replaced film/emulsion techniques. === Microscopy === Electron microscopy: The sample is not exposed to a beam of electrons; rather, detectors pick up the electrons expelled from the radionuclides. Micro-autoradiography: A tissue section, typically cryosectioned, is placed against a phosphor screen as above. Quantitative Whole Body Autoradiography (QWBA): Larger than micro-autoradiography, whole animals, typically rodents, can be analyzed for biodistribution studies. == Scientific methods == Schild regression is a radioligand binding assay. It is used for DNA labelling (5' and 3'), leaving the nucleic acids intact. == Radioactivity concentration == A vial of radiolabel has a "total activity". Taking as an example γ-32P ATP, from the catalogues of the two major suppliers, PerkinElmer NEG502H500UC or GE AA0068-500UCI, in this case, the total activity is 500 μCi (other typical numbers are 250 μCi or 1 mCi). This is contained in a certain volume, depending on the radioactive concentration, such as 5 to 10 mCi/mL (185 to 370 TBq/m3); typical volumes include 50 or 25 μL. Not all molecules in the solution have a P-32 on the last (i.e., gamma) phosphate: the "specific activity" gives the radioactivity concentration and depends on the radionuclide's half-life. If every molecule were labelled, the maximum theoretical specific activity would be obtained, which for P-32 is 9,131 Ci/mmol. Due to pre-calibration and efficiency issues this number is never seen on a label; the values often found are 800, 3,000 and 6,000 Ci/mmol. With this number it is possible to calculate the total chemical concentration and the hot-to-cold ratio. "Calibration date" is the date on which the vial's activity is the same as on the label. "Pre-calibration" is when the activity is calibrated to a future date to compensate for the decay that occurs during shipping.
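A hedged sketch of the bookkeeping this section describes: decay-correcting a vial's activity relative to its calibration date, and using the specific activity to estimate the chemical concentration and the hot-to-cold ratio. The catalogue-style numbers (500 μCi, 3,000 Ci/mmol, 50 μL, a 14.3-day half-life) are illustrative assumptions, not values for any particular product.

```python
# Sketch: decay correction and chemical-concentration estimates for a labelled nucleotide.
import math

HALF_LIFE_DAYS = 14.3                      # assumed P-32 half-life for the example
MAX_SPECIFIC_ACTIVITY = 9131.0             # Ci/mmol for fully labelled P-32 (see text)

def activity_uCi(activity_at_calibration_uCi, days_after_calibration):
    """Activity on a given day; negative days mean before the calibration date."""
    return activity_at_calibration_uCi * 2.0 ** (-days_after_calibration / HALF_LIFE_DAYS)

def labelled_concentration_uM(total_activity_uCi, specific_activity_Ci_per_mmol, volume_uL):
    """Micromolar concentration of the radiolabelled species only."""
    moles = (total_activity_uCi * 1e-6) / (specific_activity_Ci_per_mmol * 1e3)   # Ci / (Ci/mol)
    return moles / (volume_uL * 1e-6) * 1e6

total, spec_act, vol = 500.0, 3000.0, 50.0        # uCi, Ci/mmol, uL (invented example)
hot_fraction = spec_act / MAX_SPECIFIC_ACTIVITY    # fraction of molecules actually labelled

print(f"activity one week after calibration: {activity_uCi(total, 7):.0f} uCi")
print(f"labelled ATP: {labelled_concentration_uM(total, spec_act, vol):.2f} uM")
print(f"total ATP (hot + cold): {labelled_concentration_uM(total, spec_act, vol) / hot_fraction:.1f} uM")
```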
== Comparison with fluorescence == Prior to the widespread use of fluorescence in the past three decades, radioactivity was the most common label. The primary advantage of fluorescence over radiotracers is that it does not require radiological controls and their associated expenses and safety measures. The decay of radioisotopes may limit the shelf life of a reagent, requiring its replacement and thus increasing expenses. Several fluorescent molecules can be used simultaneously (given that they do not overlap, cf. FRET), whereas with radioactivity two isotopes can be used (tritium and a low-energy isotope, e.g. 33P, thanks to their different intensities), but this requires special equipment (a tritium screen and a regular phosphor-imaging screen, or a specific dual-channel detector). Fluorescence is not necessarily easier or more convenient to use, because fluorescence requires specialized equipment of its own and because quenching makes absolute and/or reproducible quantification difficult. The primary disadvantage of fluorescence versus radiotracers is a significant biological problem: chemically tagging a molecule with a fluorescent dye radically changes the structure of the molecule, which in turn can radically change the way that molecule interacts with other molecules. In contrast, intrinsic radiolabeling of a molecule can be done without altering its structure in any way. For example, substituting H-3 for a hydrogen atom or C-14 for a carbon atom does not change the conformation, structure, or any other property of the molecule; it simply switches one form of the same atom for another. Thus an intrinsically radiolabeled molecule is identical to its unlabeled counterpart. Measurement of biological phenomena by radiotracers is always direct. In contrast, many life science fluorescence applications are indirect, consisting of a fluorescent dye increasing, decreasing, or shifting in wavelength emission upon binding to the molecule of interest. == Safety == If good health physics controls are maintained in a laboratory where radionuclides are used, it is unlikely that the overall radiation dose received by workers will be of much significance. Nevertheless, the effects of low doses are mostly unknown so many regulations exist to avoid unnecessary risks, such as skin or internal exposure. Due to the low penetration power and the many variables involved, it is hard to convert a radioactive concentration to a dose. 1 μCi of P-32 on a square centimetre of skin (through a dead layer of a thickness of 70 μm) gives 7961 rads (79.61 grays) per hour. Similarly a mammogram gives an exposure of 300 mrem (3 mSv) on a larger volume (in the US, the average annual dose is 620 mrem or 6.2 mSv). == See also == Radiopharmacology Radiation biology Radiation poisoning Background radiation Radiography Prehydrated electrons == References ==
Wikipedia/Radioactivity_in_the_life_sciences
Polycomb-group proteins (PcG proteins) are a family of protein complexes first discovered in fruit flies that can remodel chromatin such that epigenetic silencing of genes takes place. Polycomb-group proteins are well known for silencing Hox genes through modulation of chromatin structure during embryonic development in fruit flies (Drosophila melanogaster). They derive their name from the fact that the first sign of a decrease in PcG function is often a homeotic transformation of posterior legs towards anterior legs, which have a characteristic comb-like set of bristles. == In insects == In Drosophila, the Trithorax-group (trxG) and Polycomb-group (PcG) proteins act antagonistically and interact with chromosomal elements, termed Cellular Memory Modules (CMMs). Trithorax-group (trxG) proteins maintain the active state of gene expression while the Polycomb-group (PcG) proteins counteract this activation with a repressive function that is stable over many cell generations and can only be overcome by germline differentiation processes. Polycomb gene silencing involves at least three kinds of multiprotein complex: Polycomb Repressive Complex 1 (PRC1), PRC2 and PhoRC. These complexes work together to carry out their repressive effect. PcG proteins are evolutionarily conserved and exist in at least two separate protein complexes; the PcG repressive complex 1 (PRC1) and the PcG repressive complex 2–4 (PRC2/3/4). PRC2 catalyzes trimethylation of lysine 27 on histone H3 (H3K27me2/3), while PRC1 mono-ubiquitinates histone H2A on lysine 119 (H2AK119Ub1). == In mammals == In mammals, Polycomb-group gene expression is important in many aspects of development, such as homeotic gene regulation, X-chromosome inactivation (PcG complexes are recruited to the inactive X by Xist RNA, the master regulator of XCI), and embryonic stem cell self-renewal. The Bmi1 polycomb ring finger protein promotes neural stem cell self-renewal. Murine null mutants in PRC2 genes are embryonic lethal, while most PRC1 mutants are live-born homeotic mutants that die perinatally. In contrast, overexpression of PcG proteins correlates with the severity and invasiveness of several cancer types. The mammalian PRC1 core complexes are very similar to Drosophila. The Polycomb protein Bmi1 is known to regulate the Ink4a/Arf locus (p16Ink4a, p19Arf). Regulation of Polycomb-group proteins at bivalent chromatin sites is performed by SWI/SNF complexes, which oppose the accumulation of Polycomb complexes through ATP-dependent eviction. == In plants == In Physcomitrella patens the PcG protein FIE is specifically expressed in stem cells such as the unfertilized egg cell. Soon after fertilisation, the FIE gene is inactivated in the young embryo. The Polycomb gene FIE is expressed in unfertilised egg cells of the moss Physcomitrella patens and expression ceases after fertilisation in the developing diploid sporophyte. It has been shown that, unlike in mammals, the PcG proteins are necessary to keep cells in a differentiated state. Consequently, loss of PcG causes de-differentiation and promotes embryonic development. Polycomb-group proteins also intervene in the control of flowering by silencing the Flowering Locus C gene. This gene is a central part of the pathway that inhibits flowering in plants and its silencing during winter is suspected to be one of the main factors involved in plant vernalization.
== See also == PRC1 PRC2 PHC1 PHC2 Heterochromatin protein 1 (Cbx) BMI1 PCGF1, KDM2B PCGF2 (Polycomb group RING finger protein 2), ortholog Bmi1 RYBP RING1 SUV39H1 (histone-lysine N-methyltransferase) L3mbtl2 EZH2 (Enhancer of Zeste Homolog 2) EED SUZ12 (Suppressor of Zeste 12) Jarid2 (jumonji, AT rich interactive domain 2) RE1-silencing transcription factor (REST) RNF2 CBFβ YY1 Bivalent chromatin == References == == Further reading == == External links == "polycomb group proteins". Humpath.com. The Polycomb and Trithorax page of the Cavalli lab. This page contains useful information on Polycomb and trithorax proteins, in the form of an introduction, links to published reviews, a list of Polycomb and trithorax proteins, illustrative PowerPoint slides and a link to a genome browser showing the genome-wide distribution of these proteins in Drosophila melanogaster. Drosophila Genes in Development: Polycomb-group in the Homeobox Genes DataBase Chromatin organization and the Polycomb and Trithorax groups in The Interactive Fly polycomb+group+proteins at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/Polycomb-group_protein
DNA adenine methyltransferase identification, often abbreviated DamID, is a molecular biology protocol used to map the binding sites of DNA- and chromatin-binding proteins in eukaryotes. DamID identifies binding sites by expressing the proposed DNA-binding protein as a fusion protein with a DNA methyltransferase. Binding of the protein of interest to DNA localizes the methyltransferase in the region of the binding site. Adenine methylation does not occur naturally in most eukaryotes, and therefore adenine methylation in any region can be concluded to have been caused by the fusion protein, implying the region is located near a binding site. DamID is an alternative method to ChIP-on-chip or ChIP-seq. == Description == === Principle === N6-methyladenine (m6A) is the product of the addition of a methyl group (CH3) at position 6 of the adenine. This modified nucleotide is absent from the vast majority of eukaryotes, with the exception of C. elegans, but is widespread in bacterial genomes, as part of the restriction modification or DNA repair systems. In Escherichia coli, adenine methylation is catalyzed by the adenine methyltransferase Dam (DNA adenine methyltransferase), which catalyses adenine methylation exclusively in the palindromic sequence GATC. Ectopic expression of Dam in eukaryotic cells leads to methylation of adenine in GATC sequences without any other noticeable side effect. Based on this, DamID consists of fusing Dam to a protein of interest (usually a protein that interacts with DNA, such as transcription factors) or a chromatin component. The protein of interest thus targets Dam to its cognate in vivo binding site, resulting in the methylation of neighboring GATCs. The presence of m6A, coinciding with the binding sites of the proteins of interest, is revealed by methyl PCR. === Methyl PCR === In this assay the genome is digested by DpnI, which cuts only methylated GATCs. Double-stranded adapters with a known sequence are then ligated to the ends generated by DpnI. Ligation products are then digested by DpnII. This enzyme cuts non-methylated GATCs, ensuring that only fragments flanked by consecutive methylated GATCs are amplified in the subsequent PCR. A PCR with primers matching the adaptors is then carried out, leading to the specific amplification of genomic fragments flanked by methylated GATCs. === Specificities of DamID versus Chromatin Immuno-Precipitation === Chromatin immunoprecipitation (ChIP) is an alternative method to assay protein binding at specific loci of the genome. Unlike ChIP, DamID does not require a specific antibody against the protein of interest. On the one hand, this makes it possible to map proteins for which no such antibody is available. On the other hand, this makes it impossible to specifically map posttranslationally modified proteins. Another fundamental difference is that ChIP assays where the protein of interest is at a given time, whereas DamID assays where it has been. The reason is that m6A stays in the DNA after the Dam fusion protein has dissociated. For proteins that are either stably bound or unbound at their target sites this makes little difference to the overall picture. However, this can lead to strong differences in the case of proteins that slide along the DNA (e.g. RNA polymerase). == Known biases and technical issues == === Plasmid methylation bias === Depending on how the experiment is carried out, DamID can be subject to plasmid methylation biases. Because plasmids are usually amplified in E. coli where Dam is naturally expressed, they are methylated on every GATC.
In transient transfection experiments, the DNA of those plasmids is recovered along with the DNA of the transfected cells, meaning that fragments of the plasmid are amplified in the methyl PCR. Every sequence of the genome that shares homology or identity with the plasmid may thus appear to be bound by the protein of interest. In particular, this is true of the open reading frame of the protein of interest, which is present in both the plasmid and the genome. In microarray experiments, this bias can be used to ensure that the proper material was hybridized. In stable cell lines or fully transgenic animals, this bias is not observed as no plasmid DNA is recovered. === Apoptosis === Apoptotic cells degrade their DNA in a characteristic nucleosome ladder pattern. This generates DNA fragments that can be ligated and amplified during the DamID procedure (van Steensel laboratory, unpublished observations). The influence of these nucleosomal fragments on the binding profile of a protein is not known. === Resolution === The resolution of DamID is a function of the availability of GATC sequences in the genome. A protein can only be mapped within two consecutive GATC sites. The median spacing between GATC fragments is 205 bp in Drosophila (FlyBase release 5), 260 in mouse (Mm9), and 460 in human (HG19). A modified protocol (DamIP), which combines immunoprecipitation of m6A with a Dam variant with less specific target site recognition, may be used to obtain higher resolution data. == Cell-type specific methods == A major advantage of DamID over ChIP seq is that profiling of protein binding sites can be assayed in a particular cell type in vivo without requiring the physical separation of a subpopulation of cells. This allows for investigation into developmental or physiological processes in animal models. === Targeted DamID === The targeted DamID (TaDa) approach uses the phenomenon of ribosome reinitiation to express Dam-fusion proteins at appropriately low levels for DamID (i.e. Dam is non-saturating, thus avoiding toxicity). This construct can be combined with cell-type specific promoters resulting in tissue-specific methylation. This approach can be used to assay transcription factor binding in a cell type of interest or alternatively, dam can be fused to Pol II subunits to determine binding of RNA polymerase and thus infer cell-specific gene expression. Targeted DamID has been demonstrated in Drosophila and mouse cells. === FRT/FLP-out DamID === Cell-specific DamID can also be achieved using recombination mediated excision of a transcriptional terminator cassette upstream of the Dam-fusion protein. The terminator cassette is flanked by FRT recombination sites which can be removed when combined with tissue specific expression of FLP recombinase. Upon removal of the cassette, the Dam-fusion is expressed at low levels under the control of a basal promoter. == Variants == As well as detection of standard protein-DNA interactions, DamID can be used to investigate other aspects of chromatin biology. === Split DamID === This method can be used to detect co-binding of two factors to the same genomic locus. The Dam methylase may be expressed in two halves which are fused to different proteins of interest. When both proteins bind to the same region of DNA, the Dam enzyme is reconstituted and is able to methylate the surrounding GATC sites. === Chromatin accessibility === Due to the high activity of the enzyme, expression of untethered Dam results in methylation of all regions of accessible chromatin. 
This approach can be used as an alternative to ATAC-seq or DNase-seq. When combined with cell-type specific DamID methods, expression of untethered Dam can be used to identify cell-type specific promoter or enhancer regions. === RNA-DNA interactions === A DamID variant known as RNA-DamID can be used to detect interactions between RNA molecules and DNA. This method relies on the expression of a Dam-MCP fusion protein which is able to bind to an RNA that has been modified with MS2 stem-loops. Binding of the Dam-fusion protein to the RNA results in detectable methylation at sites of RNA binding to the genome. === Long-range regulatory interactions === DNA sequences distal to a protein binding site may be brought into physical proximity through looping of chromosomes. For example, such interactions mediate enhancer and promoter function. These interactions can be detected through the action of Dam methylation. If Dam is targeted to a specific known DNA locus, distal sites brought into proximity due to the 3D configuration of the DNA will also be methylated and can be detected as in conventional DamID. === Single cell DamID === DamID is usually performed on around 10,000 cells (although it has been demonstrated with fewer). This means that the data obtained represents the average binding, or probability of a binding event across that cell population. A DamID protocol for single cells has also been developed and applied to human cells. Single-cell approaches can highlight the heterogeneity of chromatin associations between cells. == References == == Further reading == == External links == Frequently asked questions about DamID
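As a small, hedged illustration of the resolution constraint described in the article (a Dam target can only be localized to the interval between two consecutive GATC sites), the Python sketch below finds GATC motifs in a sequence and summarizes the fragment lengths they delimit. The random sequence is only a stand-in; real genomes deviate from the roughly 256 bp spacing expected by chance, which is why the observed medians quoted above (about 205 bp in Drosophila, 260 bp in mouse, 460 bp in human) differ between species.

```python
# Sketch: GATC fragment lengths set the mapping resolution of DamID.
import random
import statistics

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(200_000))   # placeholder sequence

sites = [i for i in range(len(genome) - 3) if genome[i:i + 4] == "GATC"]
fragments = [b - a for a, b in zip(sites, sites[1:])]

print(len(sites), "GATC sites found")
print("mean spacing:", round(statistics.fmean(fragments), 1), "bp  (~4**4 = 256 bp expected by chance)")
print("median spacing:", statistics.median(fragments), "bp")
```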
Wikipedia/DNA_adenine_methyltransferase_identification
Protein methods are the techniques used to study proteins. There are experimental methods for studying proteins (e.g., for detecting proteins, for isolating and purifying proteins, and for characterizing the structure and function of proteins, often requiring that the protein first be purified). Computational methods typically use computer programs to analyze proteins. However, many experimental methods (e.g., mass spectrometry) require computational analysis of the raw data. == Genetic methods == Experimental analysis of proteins typically requires expression and purification of proteins. Expression is achieved by manipulating DNA that encodes the protein(s) of interest. Hence, protein analysis usually requires DNA methods, especially cloning. Some examples of genetic methods include conceptual translation, site-directed mutagenesis, using a fusion protein, and matching alleles with disease states. Some proteins have never been directly sequenced; however, their amino acid sequences can be inferred by translating codons from known mRNA sequences, a method known as conceptual translation (see genetic code). Site-directed mutagenesis selectively introduces mutations that change the structure of a protein. The function of parts of proteins can be better understood by studying the change in phenotype as a result of this change. Fusion proteins are made by inserting protein tags, such as the His-tag, to produce a modified protein that is easier to track. An example of this would be GFP-Snf2H, which consists of a protein fused to a green fluorescent protein to form a hybrid protein. By analyzing DNA, alleles can be identified as being associated with disease states, as in the calculation of LOD scores. == Protein extraction from tissues == Protein extraction from tissues with tough extracellular matrices (e.g., biopsy samples, venous tissues, cartilage, skin) is often achieved in a laboratory setting by impact pulverization in liquid nitrogen. Samples are frozen in liquid nitrogen and subsequently subjected to impact or mechanical grinding. As water in the samples becomes very brittle at these temperatures, the samples are often reduced to a collection of fine fragments, which can then be dissolved for protein extraction. Stainless steel devices known as tissue pulverizers are sometimes used for this purpose. Advantages of these devices include high levels of protein extraction from small, valuable samples; disadvantages include low-level cross-over contamination. == Protein purification == Protein purification is a critical process in molecular biology and biochemistry, aimed at isolating a specific protein from a complex mixture, such as cell lysates or tissue extracts. The goal is to obtain the protein in a pure form that retains its biological activity for further study, including functional assays, structural analysis, or therapeutic applications. The purification process typically involves several steps, including cell lysis, protein extraction, and a combination of chromatographic and electrophoretic techniques. === Protein isolation === Protein isolation refers to the extraction of proteins from biological samples, which can include tissues, cells, or other materials. The process often begins with cell lysis, where the cellular membranes are disrupted to release proteins into a solution. This can be achieved through physical methods (e.g., sonication, homogenization) or chemical methods (e.g., detergents, enzymes).
Following lysis, the mixture is usually clarified by centrifugation to remove cell debris and insoluble material, allowing soluble proteins to be collected for further purification. == Chromatography methods == Chromatography is a widely used technique for protein purification, allowing for the separation of proteins based on various properties, including charge, size, and binding affinity. Here are the main types of chromatography used in protein purification: === Ion exchange chromatography === Ion exchange chromatography separates proteins based on their net charge at a given pH. The stationary phase consists of charged resin beads that interact with oppositely charged proteins. As the sample passes through the column, proteins bind to the resin while unbound proteins are washed away. By gradually changing the ionic strength or pH of the elution buffer, bound proteins can be released in a controlled manner, allowing for effective separation. === Size-exclusion chromatography (gel filtration) === Size-exclusion chromatography separates proteins based on their size. The stationary phase is composed of porous beads that allow smaller molecules to enter the pores while larger molecules pass around them. As a result, larger proteins elute first, followed by smaller ones. This method is particularly useful for desalting or removing small contaminants from protein samples. === Affinity chromatography === Affinity chromatography exploits the specific interactions between proteins and their ligands. A target protein is captured on a column containing a ligand that specifically binds to it, such as an antibody, enzyme substrate, or metal ion. After washing away non-specifically bound proteins, the target protein is eluted using a solution that disrupts the protein-ligand interaction. This method provides high specificity and is often used for purifying recombinant proteins that have affinity tags. === Protein extraction and solubilization === Protein extraction involves isolating proteins from complex biological samples while maintaining their functionality. It often requires a careful choice of extraction buffers that contain salts, detergents, or stabilizers to preserve protein structure and activity. The solubilization step is crucial for proteins that are membrane-bound or insoluble in aqueous solutions. Detergents such as Triton X-100 or SDS can be used to solubilize proteins from membranes by disrupting lipid bilayers, allowing for effective extraction. === Concentrating protein solutions === After initial purification, protein solutions may need to be concentrated for downstream applications. This can be achieved through various methods, including ultrafiltration, which uses semi-permeable membranes to separate proteins from smaller molecules and salts, and lyophilization (freeze-drying), which removes water and allows proteins to be stored in a stable form. Precipitation methods, such as ammonium sulfate precipitation, can also be employed to concentrate proteins by altering the solubility conditions. === Gel electrophoresis === Gel electrophoresis is a powerful analytical technique used to separate proteins based on their size and charge. Proteins are loaded onto a gel matrix, typically made of polyacrylamide or agarose, and an electric current is applied. The negatively charged proteins migrate towards the positive electrode, with smaller proteins moving faster through the gel matrix than larger ones.
This method is crucial for assessing the purity and size of protein samples. ==== Gel electrophoresis under denaturing conditions ==== Denaturing gel electrophoresis, commonly performed using SDS-PAGE (sodium dodecyl sulfate-polyacrylamide gel electrophoresis), involves treating proteins with SDS, a detergent that denatures proteins and imparts a uniform negative charge. This allows proteins to be separated solely based on their molecular weight, providing a clear picture of the protein composition of a sample. ==== Gel electrophoresis under non-denaturing conditions ==== Non-denaturing gel electrophoresis allows proteins to maintain their native structure while being separated. This method is useful for studying protein-protein interactions and enzyme activities. Proteins migrate through the gel based on their size and charge, but their functional properties remain intact, making it ideal for analyzing native protein complexes. ==== 2D gel electrophoresis ==== 2D gel electrophoresis combines isoelectric focusing (IEF) and SDS-PAGE to achieve a high-resolution separation of proteins. In the first dimension, proteins are separated based on their isoelectric points (pI), while in the second dimension, they are separated by molecular weight. This technique allows for the analysis of complex protein mixtures, facilitating the identification of differentially expressed proteins in various conditions. ==== Electrofocusing ==== Electrofocusing is a specialized technique that separates proteins based on their isoelectric points in a pH gradient. As an electric field is applied, proteins migrate until they reach the point where their net charge is zero, effectively focusing them into narrow bands. This method provides high resolution and is often used in combination with other techniques for comprehensive protein analysis. == Detecting proteins == The very small size of protein macromolecules makes identification and quantification of unknown protein samples particularly difficult. Several reliable methods for quantifying protein have been developed to simplify the process. These methods include the Warburg–Christian method, the Lowry assay, and the Bradford assay (all of which rely on absorbance properties of macromolecules). The Bradford assay uses a dye that binds to protein. Most commonly, Coomassie brilliant blue G-250 dye is used. When free of protein, the dye is red, but once bound to protein it turns blue. The dye-protein complex absorbs light maximally at a wavelength of 595 nanometers, and the assay is sensitive for samples containing anywhere from 1 μg to 60 μg of protein. Unlike the Lowry and Warburg–Christian methods, the Bradford assay does not rely on the tryptophan and tyrosine content of proteins, which in principle makes the method more accurate. The Lowry assay is similar to the biuret assay, but it uses the Folin reagent, which is more accurate for quantification. The Folin reagent is stable only under acidic conditions, and the method is susceptible to skewed results depending on how much tryptophan and tyrosine is present in the examined protein. The Folin reagent binds to tryptophan and tyrosine, which means the concentration of the two amino acids affects the sensitivity of the method. The method is sensitive at concentration ranges similar to the Bradford method, but requires slightly more protein. The Warburg–Christian method screens proteins at their naturally occurring absorbance ranges.
Most proteins absorb light very well at 280 nanometers due to the presence of tryptophan and tyrosine, but the method is susceptible to varying amounts of the amino acids it relies on. More methods are listed below which link to more detailed accounts for their respective methods. === Non-specific methods that detect total protein only === Absorbance: Read at 280 or 215 nm. Can be very inaccurate. Detection in the range of 100 μg/mL to 1 mg/mL. Ratio of absorbance readings taken at 260/280 can indicate purity/contamination of the sample (pure samples have a ratio <0.8) Bradford protein assay: Detection in the range of ~1 mg/mL Biuret test derived assays: Bicinchoninic acid assay (BCA assay): Detection down to 0.5 μg/mL Lowry protein assay: Detection in the range of 0.01–1.0 mg/mL Fluorescamine: Quantifies proteins and peptides in solution if primary amines are present in the amino acids Amido black: Detection in the range of 1–12 μg/mL Colloidal gold: Detection in the range of 20–640 ng/mL Nitrogen detection: Kjeldahl method: used primarily for food and requires oxidation of material Dumas method: used primarily for food and requires combustion of material === Specific methods which can detect amount of a single protein === Spectrometry methods: High-performance liquid chromatography (HPLC): Chromatography method to detect proteins or peptides Liquid chromatography–mass spectrometry (LC/MS): Can detect proteins at low concentrations (ng/mL to pg/mL) in blood and body fluids, such as for pharmacokinetics. Antibody dependent methods: Enzyme-linked immunosorbent assay (ELISA): Specifically can detect protein down to pg/mL. Protein immunoprecipitation: technique of precipitating a protein antigen out of solution using an antibody that specifically binds to that particular protein. Immunoelectrophoresis: separation and characterization of proteins based on electrophoresis and reaction with antibodies. Western blot: couples gel electrophoresis and incubation with antibodies to detect specific proteins in a sample of tissue homogenate or extract (a type of immunoelectrophoresis technique). Protein immunostaining == Protein structures == X-ray crystallography Protein NMR Cryo-electron microscopy Small-angle X-ray scattering Circular dichroism == Interactions involving proteins == Protein footprinting === Protein–protein interactions === (Yeast) two-hybrid system Protein-fragment complementation assay Co-immunoprecipitation Affinity purification and mass spectrometry Proximity ligation assay Proximity labeling === Protein–DNA interactions === ChIP-on-chip ChIP-sequencing DamID Microscale thermophoresis === Protein–RNA interactions === Toeprinting assay TCP-seq == Computational methods == Molecular dynamics Protein structure prediction Protein sequence alignment (sequence comparison, including BLAST) Protein structural alignment Protein ontology (see gene ontology) == Other methods == Hydrogen–deuterium exchange Mass spectrometry Protein sequencing Protein synthesis Proteomics Peptide mass fingerprinting Ligand binding assay Eastern blotting Metabolic labeling Heavy isotope labeling Radioactive isotope labeling == See also == CSH Protocols Current Protocols == Bibliography == Daniel M. Bollag, Michael D. Rozycki and Stuart J. Edelstein (1996). Protein Methods, 2nd ed., Wiley Publishers. ISBN 0-471-11837-0. == References ==
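As a hedged illustration of how a dye-binding assay such as the Bradford method is turned into numbers in practice, the Python sketch below fits a simple standard curve to invented A595 readings for protein standards and interpolates an unknown sample. Real Bradford curves flatten at higher protein amounts, so point-to-point or polynomial fits, replicate wells, and readings kept within the linear range are usual refinements.

```python
# Sketch: Bradford-style quantification from a standard curve (illustrative numbers only).
import numpy as np

standards_ug = np.array([0, 5, 10, 20, 40, 60], dtype=float)     # BSA amounts per assay
a595 = np.array([0.00, 0.06, 0.12, 0.23, 0.46, 0.66])            # blank-corrected A595 readings

slope, intercept = np.polyfit(standards_ug, a595, 1)              # simple linear fit

def protein_ug(absorbance):
    """Estimate protein amount from a blank-corrected A595 reading (within curve range)."""
    return (absorbance - intercept) / slope

print(round(protein_ug(0.30), 1), "ug in the unknown sample")
```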
Wikipedia/Protein_methods
Trithorax-group proteins (TrxG) are a heterogeneous collection of proteins whose main action is to maintain gene expression. They can be categorized into three general classes based on molecular function: histone-modifying TrxG proteins, chromatin-remodeling TrxG proteins, and DNA-binding TrxG proteins, plus other TrxG proteins not categorized in the first three classes. == Discovery == The founding member of TrxG proteins, trithorax (trx), was discovered around 1978 by Philip Ingham as part of his doctoral thesis while a graduate student in the laboratory of J.R.S. Whittle at the University of Sussex. Histone-lysine N-methyltransferase 2A is the human homolog of trx. TrxG members were originally named in Drosophila; homologs in other species may have different names. == Function == Trithorax-group proteins typically function in large complexes formed with other proteins. The complexes formed by TrxG proteins are divided into two groups: histone-modifying complexes and ATP-dependent chromatin-remodeling complexes. The main function of TrxG proteins, along with polycomb group (PcG) proteins, is regulating gene expression. Whereas PcG proteins are typically associated with gene silencing, TrxG proteins are most commonly linked to gene activation. The trithorax complex activates gene transcription by inducing trimethylation of lysine 4 of histone H3 (H3K4me3) at specific sites in chromatin recognized by the complex. Ash1 is involved in H3K36 methylation. The trithorax complex also interacts with CBP (CREB-binding protein), an acetyltransferase that acetylates H3K27. This gene activation is reinforced by acetylation of histone H4. The actions of TrxG proteins are often described as 'antagonistic' to PcG protein function. Aside from gene regulation, evidence suggests TrxG proteins are also involved in other processes including apoptosis, cancer, and stress responses. == Role in development == During development, TrxG proteins maintain activation of required genes, particularly the Hox genes, after maternal factors are depleted. This is accomplished by preserving the epigenetic marks, specifically H3K4me3, established by maternally supplied factors. TrxG proteins are also implicated in X-chromosome inactivation, which occurs during early embryogenesis. As of 2011 it is unclear whether TrxG activity is required in every cell during the entire development of an organism or only during certain stages in certain cell types. == See also == HIstome Histone acetyltransferase Histone deacetylases Histone methyltransferase Histone-modifying enzymes Nucleosome PRMT4 pathway == References == == External links == The Polycomb and Trithorax page of the Cavalli lab at IGH (Institut de Génétique Humaine). This page contains useful information on Polycomb and trithorax proteins, in the form of an introduction, links to published reviews, a list of Polycomb and trithorax proteins, illustrative PowerPoint slides and a link to a genome browser showing the genome-wide distribution of these proteins in Drosophila melanogaster. The Interactive Fly – Society for Developmental Biology
Wikipedia/Trithorax-group_protein
Open source is source code that is made freely available for possible modification and redistribution. Products include permission to use and view the source code, design documents, or content of the product. The open source model is a decentralized software development model that encourages open collaboration. A main principle of open source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public. The open source movement in software began as a response to the limitations of proprietary code. The model is used for projects such as in open source appropriate technology, and open source drug discovery. Open source promotes universal access via an open-source or free license to a product's design or blueprint, and universal redistribution of that design or blueprint. Before the phrase open source became widely adopted, developers and producers used a variety of other terms, such as free software, shareware, and public domain software. Open source took hold with the rise of the Internet. The open-source software movement arose to clarify copyright, licensing, domain, and consumer issues. Generally, open source refers to a computer program in which the source code is available to the general public for usage, modification from its original design, and publication of their version (fork) back to the community. Many large formal institutions have sprung up to support the development of the open-source movement, including the Apache Software Foundation, which supports community projects such as the open-source HTTP server Apache HTTP Server. == History == The sharing of technical information predates the Internet and the personal computer considerably. For instance, in the early years of automobile development, a group of capital monopolists owned the rights to a 2-cycle gasoline-engine patent originally filed by George B. Selden. By controlling this patent, they were able to monopolize the industry and force car manufacturers to adhere to their demands, or risk a lawsuit. In 1911, independent automaker Henry Ford won a challenge to the Selden patent. The result was that the Selden patent became virtually worthless and a new association (which would eventually become the Motor Vehicle Manufacturers Association) was formed. The new association instituted a cross-licensing agreement among all US automotive manufacturers: although each company would develop technology and file patents, these patents were shared openly and without the exchange of money among all the manufacturers. By the time the US entered World War II, 92 Ford patents and 515 patents from other companies were being shared among these manufacturers, without any exchange of money (or lawsuits). Early instances of the free sharing of source code include IBM's source releases of its operating systems and other programs in the 1950s and 1960s, and the SHARE user group that formed to facilitate the exchange of software. Beginning in the 1960s, ARPANET researchers used an open "Request for Comments" (RFC) process to encourage feedback in early telecommunication network protocols. This led to the birth of the early Internet in 1969. The sharing of source code on the Internet began when the Internet was relatively primitive, with software distributed via UUCP, Usenet, IRC, and Gopher. BSD, for example, was first widely distributed by posts to comp.os.linux on the Usenet, which is also where its development was discussed.
Linux followed in this model. === Open source as a term === The term open source emerged in the late 1990s, coined by a group of people in the free software movement who were critical of the political agenda and moral philosophy implied in the term "free software" and who sought to reframe the discourse to reflect a more commercially minded position. In addition, the ambiguity of the term "free software" was seen as discouraging business adoption. However, the ambiguity of the word "free" exists primarily in English, as it can refer to cost. The group included Christine Peterson, Todd Anderson, Larry Augustin, Jon Hall, Sam Ockman, Michael Tiemann and Eric S. Raymond. Peterson suggested "open source" at a meeting held in Palo Alto, California, in reaction to Netscape's announcement in January 1998 of a source code release for Navigator. Linus Torvalds gave his support the following day, and Phil Hughes backed the term in Linux Journal. Richard Stallman, who had founded the Free Software Foundation (FSF) in 1985, quickly decided against endorsing the term. The FSF's goal was to promote the development and use of free software, which they defined as software that grants users the freedom to run, study, share, and modify the code. This concept is similar to open source but places a greater emphasis on the ethical and political aspects of software freedom. Netscape released its source code under the Netscape Public License and later under the Mozilla Public License. Raymond was especially active in the effort to popularize the new term. He made the first public call to the free software community to adopt it in February 1998. Shortly after, he founded The Open Source Initiative in collaboration with Bruce Perens. The term gained further visibility through an event organized in April 1998 by technology publisher O'Reilly Media. Originally titled the "Freeware Summit" and later known as the "Open Source Summit", the event was attended by the leaders of many of the most important free and open-source projects, including Linus Torvalds, Larry Wall, Brian Behlendorf, Eric Allman, Guido van Rossum, Michael Tiemann, Paul Vixie, Jamie Zawinski, and Eric Raymond. At that meeting, alternatives to the term "free software" were discussed. Tiemann argued for "sourceware" as a new term, while Raymond argued for "open source." The assembled developers took a vote, and the winner was announced at a press conference the same evening. == Economics == Some economists agree that open-source is an information good or "knowledge good" with original work involving a significant amount of time, money, and effort. The cost of reproducing the work is low enough that additional users may be added at zero or near zero cost – this is referred to as the marginal cost of a product. Copyright creates a monopoly so that the price charged to consumers can be significantly higher than the marginal cost of production. This allows the author to recoup the cost of making the original work. Copyright thus creates access costs for consumers who value the work more than the marginal cost but less than the initial production cost. Access costs also pose problems for authors who wish to create a derivative work—such as a copy of a software program modified to fix a bug or add a feature, or a remix of a song—but are unable or unwilling to pay the copyright holder for the right to do so.
Being organized as effectively a "consumers' cooperative", open source eliminates some of the access costs of consumers and creators of derivative works by reducing the restrictions of copyright. Basic economic theory predicts that lower costs would lead to higher consumption and also more frequent creation of derivative works. Organizations such as Creative Commons host websites where individuals can file for alternative "licenses", or levels of restriction, for their works. These self-made protections free the general society of the costs of policing copyright infringement. Others argue that since consumers do not pay for their copies, creators are unable to recoup the initial cost of production and thus have little economic incentive to create in the first place. By this argument, consumers would lose out because some of the goods they would otherwise purchase would not be available. In practice, content producers can choose whether to adopt a proprietary license and charge for copies, or an open license. Some goods which require large amounts of professional research and development, such as the pharmaceutical industry (which depends largely on patents, not copyright for intellectual property protection) are almost exclusively proprietary, although increasingly sophisticated technologies are being developed on open-source principles. There is evidence that open-source development creates enormous value. For example, in the context of open-source hardware design, digital designs are shared for free and anyone with access to digital manufacturing technologies (e.g. RepRap 3D printers) can replicate the product for the cost of materials. The original sharer may receive feedback and potentially improvements on the original design from the peer production community. Many open-source projects have a high economic value. According to the Battery Open Source Software Index (BOSS), the ten economically most important open-source projects are: The rank given is based on the activity regarding projects in online discussions, on GitHub, on search activity in search engines and on the influence on the labour market. === Licensing alternatives === Alternative arrangements have also been shown to result in good creation outside of the proprietary license model. Examples include: Creation for its own sake – For example, Wikipedia editors add content for recreation. Artists have a drive to create. Both communities benefit from free starting material. Voluntary after-the-fact donations – used by shareware, street performers, and public broadcasting in the United States. Patron – For example, open-access publishing relies on institutional and government funding of research faculty, who also have a professional incentive to publish for reputation and career advancement. Works of the US government are automatically released into the public domain. Freemium – Give away a limited version for free and charge for a premium version (potentially using a dual license). Give away the product and charge something related – charge for support of open-source enterprise software, give away music but charge for concert admission. Give away work to gain market share – used by artists, in corporate software to spoil a dominant competitor (for example in the browser wars and the Android operating system). For own use – Businesses or individual software developers often create software to solve a problem, bearing the full cost of initial creation. 
They will then open source the solution, and benefit from the improvements others make for their own needs. Communalizing the maintenance burden distributes the cost across more users; free riders can also benefit without undermining the creation process. Drupal's founder Dries Buytaert has summarized this as the Maker/Taker problem. Blockchain-based licensing – Developers register their contributions on a blockchain, and when usage licenses are generated, the revenue is shared through the blockchain. == Open collaboration == The open-source model is a decentralized software development model that encourages open collaboration, meaning "any system of innovation or production that relies on goal-oriented yet loosely coordinated participants who interact to create a product (or service) of economic value, which they make available to contributors and noncontributors alike." A main principle of open-source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public. The open-source movement in software began as a response to the limitations of proprietary code. The model is used for projects such as open-source appropriate technology and open-source drug discovery. The open-source model for software development inspired the use of the term to refer to other forms of open collaboration, such as in Internet forums, mailing lists and online communities. Open collaboration is also thought to be the operating principle underlying a gamut of diverse ventures, including TEDx and Wikipedia. Open collaboration is the principle underlying peer production, mass collaboration, and wikinomics. It was observed initially in open-source software, but can also be found in many other instances, such as in Internet forums, mailing lists, Internet communities, and many instances of open content, such as Creative Commons. It also explains some instances of crowdsourcing, collaborative consumption, and open innovation. Riehle et al. define open collaboration as collaboration based on the three principles of egalitarianism, meritocracy, and self-organization. Levine and Prietula define open collaboration as "any system of innovation or production that relies on goal-oriented yet loosely coordinated participants who interact to create a product (or service) of economic value, which they make available to contributors and noncontributors alike." This definition captures multiple instances, all joined by similar principles. For example, all of the elements – goods of economic value, open access to contribute and consume, interaction and exchange, purposeful yet loosely coordinated work – are present in an open-source software project, in Wikipedia, or in a user forum or community. They can also be present in a commercial website that is based on user-generated content. In all of these instances of open collaboration, anyone can contribute and anyone can freely partake in the fruits of sharing, which are produced by interacting participants who are loosely coordinated. An annual conference dedicated to the research and practice of open collaboration is the International Symposium on Wikis and Open Collaboration (OpenSym, formerly WikiSym). 
As per its website, the group defines open collaboration as "collaboration that is egalitarian (everyone can join, no principled or artificial barriers to participation exist), meritocratic (decisions and status are merit-based rather than imposed) and self-organizing (processes adapt to people rather than people adapt to pre-defined processes)." == Open-source license == Open source promotes universal access via an open-source or free license to a product's design or blueprint, and universal redistribution of that design or blueprint. Before the phrase open source became widely adopted, developers and producers used a variety of other terms. Open source took hold in part due to the rise of the Internet. The open-source software movement arose to clarify copyright, licensing, domain, and consumer issues. An open-source license is a type of license for computer software and other products that allows the source code, blueprint or design to be used, modified or shared (with or without modification) under defined terms and conditions. This allows end users and commercial companies to review and modify the source code, blueprint or design for their own customization, curiosity or troubleshooting needs. Open-source licensed software is mostly available free of charge, though this does not necessarily have to be the case. Licenses that permit only non-commercial redistribution, or modification of the source code for personal use only, are generally not considered open-source licenses. However, open-source licenses may have some restrictions, particularly regarding the expression of respect for the origin of the software, such as a requirement to preserve the name of the authors and a copyright statement within the code, or a requirement to redistribute the licensed software only under the same license (as in a copyleft license). One popular set of open-source software licenses consists of those approved by the Open Source Initiative (OSI) based on their Open Source Definition (OSD). == Applications == Social and political views have been affected by the growth of the concept of open source. Advocates in one field often support the expansion of open source in other fields. But Eric Raymond and other founders of the open-source movement have sometimes publicly argued against speculation about applications outside software, saying that strong arguments for software openness should not be weakened by overreaching into areas where the story may be less compelling. The broader impact of the open-source movement, and the extent of its role in the development of new information sharing procedures, remain to be seen. The open-source movement has inspired increased transparency and liberty in biotechnology research, for example CAMBIA. Even the research methodologies themselves can benefit from the application of open-source principles. It has also given rise to the rapidly expanding open-source hardware movement. === Computer software === Open-source software is software whose source code is published and made available to the public, enabling anyone to copy, modify and redistribute the source code without paying royalties or fees. LibreOffice and the GNU Image Manipulation Program are examples of open source software. As they do with proprietary software, users must accept the terms of a license when they use open source software—but the legal terms of open source licenses differ dramatically from those of proprietary licenses. Open-source code can evolve through community cooperation. 
These communities are composed of individual programmers as well as large companies. Some of the individual programmers who start an open-source project may end up establishing companies offering products or services incorporating open-source programs. Examples of open-source software products are: Linux (which runs much of the world's servers) MediaWiki (on which Wikipedia is based) Many more: List of free and open-source software packages List of formerly proprietary software The Google Summer of Code, often abbreviated to GSoC, is an international annual program in which Google awards stipends to contributors who successfully complete a free and open-source software coding project during the summer. GSoC is a large-scale project with 202 participating organizations in 2021. There are similar smaller-scale projects such as the Talawa Project run by the Palisadoes Foundation (a non-profit based in California, founded to promote the use of information technology in Jamaica but now also supporting underprivileged communities in the US). === Electronics === Open-source hardware is hardware whose initial specification, usually in a software format, is published and made available to the public, enabling anyone to copy, modify and redistribute the hardware and source code without paying royalties or fees. Open-source hardware evolves through community cooperation. These communities are composed of individual hardware and software developers and hobbyists, as well as very large companies. Examples of open-source hardware initiatives are: Openmoko: a family of open-source mobile phones, including the hardware specification and the operating system. OpenRISC: an open-source microprocessor family, with the architecture specification licensed under the GNU GPL and the implementation under the LGPL. Sun Microsystems's OpenSPARC T1 multicore processor, which Sun has released under the GPL. Arduino, a microcontroller platform for hobbyists, artists and designers. Simputer, an open hardware handheld computer, designed in India for use in environments where computing devices such as personal computers are deemed inappropriate. LEON: a family of open-source microprocessors distributed in a library with peripheral IP cores, an open SPARC V8 specification, and an implementation available under the GNU GPL. Tinkerforge: a system of open-source stackable microcontroller building blocks that allows motors to be controlled and sensors to be read out with the programming languages C, C++, C#, Object Pascal, Java, PHP, Python and Ruby over a USB or Wi-Fi connection on Windows, Linux and Mac OS X. All of the hardware is licensed under the CERN OHL (CERN Open Hardware License). Open Compute Project: designs for computer data centers, including power supply, Intel motherboard, AMD motherboard, chassis, racks, battery cabinet, and aspects of electrical and mechanical design. === Food and beverages === Some publishers of open-access journals have argued that data from food science and gastronomy studies should be freely available to aid reproducibility. A number of people have published Creative Commons-licensed recipe books. Open-source colas – cola soft drinks, similar to Coca-Cola and Pepsi, whose recipes are open source and developed by volunteers. The taste is said to be comparable to that of the standard beverages. Most corporations producing beverages keep their formulas secret and unknown to the general public. 
Free Beer (originally Vores Øl) – an open-source beer created by students at the IT-University in Copenhagen together with Superflex, an artist collective, to illustrate how open-source concepts might be applied outside the digital world. === Digital content === Open-content projects organized by the Wikimedia Foundation – Sites such as Wikipedia and Wiktionary have embraced the open-content Creative Commons content licenses. These licenses were designed to adhere to principles similar to various open-source software development licenses. Many of these licenses ensure that content remains free for re-use, that source documents are made readily available to interested parties, and that changes to content are accepted easily back into the system. Important sites embracing open-source-like ideals are Project Gutenberg and Wikisource, both of which post many books on which the copyright has expired and which are thus in the public domain, ensuring that anyone has free, unlimited access to that content. Open ICEcat is an open catalog for the IT, CE and Lighting sectors with product data-sheets based on the Open Content License agreement. The digital content is distributed in XML and URL formats. SketchUp's 3D Warehouse is an open-source design community centered on the use of proprietary software that is distributed free of charge. The University of Waterloo Stratford Campus invites students every year to use its three-storey Christie MicroTiles wall as a digital canvas for their creative work. === Medicine === Pharmaceuticals – There have been several proposals for open-source pharmaceutical development, which led to the establishment of the Tropical Disease Initiative and the Open Source Drug Discovery for Malaria Consortium. Genomics – The term "open-source genomics" refers to the combination of rapid release of sequence data (especially raw reads) and crowdsourced analyses from bioinformaticians around the world that characterized the analysis of the 2011 E. coli O104:H4 outbreak. OpenEMR – OpenEMR is an ONC-ATB Ambulatory EHR 2011-2012 certified electronic health records and medical practice management application. It features fully integrated electronic health records, practice management, scheduling, and electronic billing, and is the base for many EHR programs. === Science and engineering === Research – The Science Commons was created as an alternative to the expensive legal costs of sharing and reusing scientific works in journals, etc. Research – The Open Solar Outdoors Test Field (OSOTF) is a grid-connected photovoltaic test system, which continuously monitors the output of a number of photovoltaic modules and correlates their performance to a long list of highly accurate meteorological readings. The OSOTF is organized under open-source principles – all data and analysis are to be made freely available to the entire photovoltaic community and the general public. Engineering – Hyperloop, a form of high-speed transport proposed by entrepreneur Elon Musk, which he describes as "an elevated, reduced-pressure tube that contains pressurized capsules driven within the tube by a number of linear electric motors". Construction – WikiHouse is an open-source project for designing and building houses. Energy research – The Open Energy Modelling Initiative promotes open-source models and open data in energy research and policy advice. ==== Robotics ==== An open-source robot is a robot whose blueprints, schematics, or source code are released under an open-source model. 
=== Other === Open-source principles can be applied to technical areas such as digital communication protocols and data storage formats. Open design – applying open-source methodologies to the design of artifacts and systems in the physical world. The field is still nascent but has great potential. Open-source appropriate technology (OSAT) refers to technologies that are designed in the same fashion as free and open-source software. These technologies must be "appropriate technology" (AT) – meaning technology that is designed with special consideration for the environmental, ethical, cultural, social, political, and economic aspects of the community it is intended for. An example of this application is the use of open-source 3D printers like the RepRap to manufacture appropriate technology. Teaching – applying the concepts of open source to instruction, using a shared web space as a platform to address learning, organizational, and management challenges. An example of open-source courseware is the Java Education & Development Initiative (JEDI). Other examples include Khan Academy and Wikiversity. At the university level, the use of open-source-appropriate-technology classroom projects has been shown to be successful in forging the connection between science/engineering and social benefit: this approach has the potential to use university students' access to resources and testing equipment in furthering the development of appropriate technology. Similarly, OSAT has been used as a tool for improving service learning. There are few examples of business information (methodologies, advice, guidance, practices) using the open-source model, although this is another case where the potential is enormous. ITIL is close to open source. It uses the Cathedral model (no mechanism exists for user contribution) and the content must be bought for a fee that is small by business consulting standards (hundreds of British pounds). Various checklists are published by governments, banks or accounting firms. An open-source group emerged in 2012 that is attempting to design a firearm that may be downloaded from the internet and "printed" on a 3D printer. Calling itself Defense Distributed, the group wants to facilitate "a working plastic gun that could be downloaded and reproduced by anybody with a 3D printer". Agrecol, a German NGO, has developed an open-source license for seeds operating with copyleft and created OpenSourceSeeds as a respective service provider. Breeders who apply the license to their newly developed material protect it from the threat of privatisation and help to establish a commons-based breeding sector as an alternative to the commercial sector. Open Source Ecology – farm equipment and the Global Village Construction Set. == "Open" versus "free" versus "free and open" == Free and open-source software (FOSS) or free/libre and open-source software (FLOSS) is openly shared source code that is licensed without any restrictions on usage, modification, or distribution. Confusion persists about this definition because "free", also known as "libre", refers to the freedom of the product, not the price, expense, cost, or charge. For example, "being free to speak" is not the same as "free beer". Conversely, Richard Stallman argues the "obvious meaning" of the term "open source" is that the source code is public/accessible for inspection, without necessarily any other rights granted, although the proponents of the term say the conditions in the Open Source Definition must be fulfilled. 
"Free and open" should not be confused with public ownership (state ownership), deprivatization (nationalization), anti-privatization (anti-corporate activism), or transparent behavior. GNU GNU Manifesto Richard Stallman Gratis versus libre (no cost vs no restriction) == Software == Generally, open source refers to a computer program in which the source code is available to the general public for use for any (including commercial) purpose, or modification from its original design. Open-source code is meant to be a collaborative effort, where programmers improve upon the source code and share the changes within the community. Code is released under the terms of a software license. Depending on the license terms, others may then download, modify, and publish their version (fork) back to the community. List of free and open-source software packages Open-source license, a copyright license that makes the source code available with a product The Open Source Definition, as used by the Open Source Initiative for open source software Open-source model, a decentralized software development model that encourages open collaboration Open-source software, software which permits the use and modification of its source code History of free and open-source software Open-source software advocacy Open-source software development Open-source-software movement Open-source video games List of open-source video games Business models for open-source software Comparison of open-source and closed-source software Diversity in open-source software MapGuide Open Source, a web-based map-making platform to develop and deploy web mapping applications and geospatial web services (not to be confused with OpenStreetMap (OSM), a collaborative project to create a free editable map of the world). 
== Hardware == RISC-V == Agriculture, economy, manufacturing and production == Open-source appropriate technology (OSAT), is designed for environmental, ethical, cultural, social, political, economic, and community aspects Open-design movement, development of physical products, machines and systems via publicly shared design information, including free and open-source software and open-source hardware, among many others: Open Architecture Network, improving global living conditions through innovative sustainable design OpenCores, a community developing digital electronic open-source hardware Open Design Alliance, develops Teigha, a software development platform to create engineering applications including CAD software Open Hardware and Design Alliance (OHANDA), sharing open hardware and designs via free online services Open Source Ecology (OSE), a network of farmers, engineers, architects and supporters striving to manufacture the Global Village Construction Set (GVCS) OpenStructures (OSP), a modular construction model where everyone designs on the basis of one shared geometrical OS grid Open manufacturing or "Open Production" or "Design Global, Manufacture Local", a new socioeconomic production model to openly and collaboratively produce and distribute physical objects Open-source architecture (OSArc), emerging procedures in imagination and formation of virtual and real spaces within an inclusive universal infrastructure Open-source cola, cola soft drinks made to open-sourced recipes Open-source hardware, or open hardware, computer hardware, such as microprocessors, that is designed in the same fashion as open source software List of open-source hardware projects Open-source product development (OSPD), collaborative product and process openness of open-source hardware for any interested participants Open-source robotics, physical artifacts of the subject are offered by the open design movement Open Source Seed Initiative, open source varieties of crop seeds, as an alternative to patent-protected seeds sold by large agriculture companies. == Science and medicine == Open science, the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society, amateur or professional Open science data, a type of open data focused on publishing observations and results of scientific activities available for anyone to analyze and reuse Open Science Framework and the Center for Open Science Open Source Lab (disambiguation), several laboratories Open-Source Lab (book), a 2014 book by Joshua M. 
Pearce Open-notebook science, the practice of making the entire primary record of a research project publicly available online as it is recorded Open Source Physics (OSP), a National Science Foundation and Davidson College project to spread the use of open source code libraries that take care of much of the heavy lifting for physics Open Source Geospatial Foundation NASA Open Source Agreement (NOSA), an OSI-approved software license List of open-source software for mathematics List of open-source bioinformatics software List of open-source health software List of open-source health hardware == Media == Open-source film, open source movies List of open-source films Open Source Cinema, a collaborative website to produce a documentary film Open-source journalism, commonly describes a spectrum of online publications, forms of innovative publishing of online journalism, and content voting, rather than the sourcing of news stories by "professional" journalists Open-source investigation See also: Crowdsourcing, crowdsourced journalism, crowdsourced investigation, trutherism, and historical revisionism considered "fringe" by corporate media. Open-source record label, open source music "Open Source", a 1960s rock song performed by The Magic Mushrooms Open Source (radio show), a radio show using open content information gathering methods hosted by Christopher Lydon Open textbook, an openly copyright-licensed textbook made freely available online for students, teachers, and the public CAD libraries – such as SketchUp 3D Warehouse and GrabCAD == Organizations == Open Source Initiative (OSI), an organization dedicated to promoting open source Open Source Software Institute Journal of Open Source Software Open Source Day, an international conference for fans of open solutions from Central and Eastern Europe whose date varies from year to year Open Source Developers' Conference Open Source Development Labs (OSDL), a non-profit corporation that provides space for open-source projects Open Source Drug Discovery, a collaborative drug discovery platform for neglected tropical diseases Open Source Technology Group (OSTG), news, forums, and other SourceForge resources for IT Open source in Kosovo Open Source University Meetup New Zealand Open Source Awards == Procedures == Open security, application of open source philosophies to computer security Open Source Information System, the former name of an American unclassified network serving the U.S. intelligence community with open-source intelligence; since mid-2006 the content of OSIS has been known as Intelink-U, while the network portion is known as DNI-U Open-source intelligence, an intelligence gathering discipline based on information collected from open sources (not to be confused with open-source artificial intelligence such as Mycroft (software)). == Society == The rise of open-source culture in the 20th century resulted from a growing tension between creative practices that require access to content that is often copyrighted, and restrictive intellectual property laws and policies governing access to copyrighted content. The two main ways in which intellectual property laws became more restrictive in the 20th century were extensions to the term of copyright (particularly in the United States) and penalties, such as those articulated in the Digital Millennium Copyright Act (DMCA), placed on attempts to circumvent anti-piracy technologies. 
Although artistic appropriation is often permitted under fair-use doctrines, the complexity and ambiguity of these doctrines create an atmosphere of uncertainty among cultural practitioners. Also, the protective actions of copyright owners create what some call a "chilling effect" among cultural practitioners. The idea of an "open-source" culture runs parallel to "Free Culture", but is substantively different. Free culture is a term derived from the free software movement, and in contrast to that vision of culture, proponents of open-source culture (OSC) maintain that some intellectual property law needs to exist to protect cultural producers. Yet they propose a more nuanced position than corporations have traditionally sought. Instead of seeing intellectual property law as an expression of instrumental rules intended to uphold either natural rights or desirable outcomes, an argument for OSC takes into account diverse goods (as in "the Good life") and ends. Sites such as ccMixter offer up free web space for anyone willing to license their work under a Creative Commons license. The resulting cultural product is then available to download free (generally accessible) to anyone with an Internet connection. Older, analog technologies such as the telephone or television have limitations on the kind of interaction users can have. Through various technologies such as peer-to-peer networks and blogs, cultural producers can take advantage of vast social networks to distribute their products. As opposed to traditional media distribution, redistributing digital media on the Internet can be virtually costless. Technologies such as BitTorrent and Gnutella take advantage of various characteristics of the Internet protocol (TCP/IP) in an attempt to totally decentralize file distribution. === Government === Open politics (sometimes known as Open-source politics) is a political process that uses Internet technologies such as blogs, email and polling to provide for a rapid feedback mechanism between political organizations and their supporters. There is also an alternative conception of the term Open-source politics which relates to the development of public policy under a set of rules and processes similar to the open-source software movement. Open-source governance is similar to open-source politics, but it applies more to the democratic process and promotes the freedom of information. Open-source political campaigns refer specifically to political campaigns. The South Korean government wants to increase its use of free and open-source software, to decrease its dependence on proprietary software solutions. It plans to make open standards a requirement, to allow the government to choose between multiple operating systems and web browsers. Korea's Ministry of Science, ICT & Future Planning is also preparing ten pilots on using open-source software distributions. === Ethics === Open-source ethics is split into two strands: Open-source ethics as an ethical school – Charles Ess and David Berry are researching whether ethics can learn anything from an open-source approach. Ess famously even defined the AoIR Research Guidelines as an example of open-source ethics. Open-source ethics as a professional body of rules – This is based principally on the computer ethics school, studying the questions of ethics and professionalism in the computer industry in general and software development in particular. 
=== Religion === Irish philosopher Richard Kearney has used the term "open-source Hinduism" to refer to the way historical figures such as Mohandas Gandhi and Swami Vivekananda worked upon this ancient tradition. === Media === Open-source journalism formerly referred to the standard journalistic techniques of news gathering and fact checking, reflecting open-source intelligence, a similar term used in military intelligence circles. Now, open-source journalism commonly refers to forms of innovative publishing of online journalism, rather than the sourcing of news stories by a professional journalist. In the 25 December 2006 issue of TIME magazine this is referred to as user created content and listed alongside more traditional open-source projects such as OpenSolaris and Linux. Weblogs, or blogs, are another significant platform for open-source culture. Blogs consist of periodic, reverse chronologically ordered posts, using a technology that makes webpages easily updatable with no understanding of design, code, or file transfer required. While corporations, political campaigns and other formal institutions have begun using these tools to distribute information, many blogs are used by individuals for personal expression, political organizing, and socializing. Some, such as LiveJournal or WordPress, use open-source software that is open to the public and can be modified by users to fit their own tastes. Whether the code is open or not, this format represents a nimble tool for people to borrow and re-present culture; whereas traditional websites made the illegal reproduction of culture difficult to regulate, the mutability of blogs makes "open sourcing" even more uncontrollable since it allows a larger portion of the population to replicate material more quickly in the public sphere. Messageboards are another platform for open-source culture. Messageboards (also known as discussion boards or forums), are places online where people with similar interests can congregate and post messages for the community to read and respond to. Messageboards sometimes have moderators who enforce community standards of etiquette such as banning spammers. Other common board features are private messages (where users can send messages to one another) as well as chat (a way to have a real time conversation online) and image uploading. Some messageboards use phpBB, which is a free open-source package. Where blogs are more about individual expression and tend to revolve around their authors, messageboards are about creating a conversation amongst its users where information can be shared freely and quickly. Messageboards are a way to remove intermediaries from everyday life—for instance, instead of relying on commercials and other forms of advertising, one can ask other users for frank reviews of a product, movie or CD. By removing the cultural middlemen, messageboards help speed the flow of information and exchange of ideas. OpenDocument is an open document file format for saving and exchanging editable office documents such as text documents (including memos, reports, and books), spreadsheets, charts, and presentations. Organizations and individuals that store their data in an open format such as OpenDocument avoid being locked into a single software vendor, leaving them free to switch software if their current vendor goes out of business, raises their prices, changes their software, or changes their licensing terms to something less favorable. 
Open-source movie production is either an open call system in which a changing crew and cast collaborate in movie production, a system in which the result is made available for re-use by others or in which exclusively open-source products are used in the production. The 2006 movie Elephants Dream is said to be the "world's first open movie", created entirely using open-source technology. An open-source documentary film has a production process allowing the open contributions of archival material footage, and other filmic elements, both in unedited and edited form, similar to crowdsourcing. By doing so, on-line contributors become part of the process of creating the film, helping to influence the editorial and visual material to be used in the documentary, as well as its thematic development. The first open-source documentary film is the non-profit WBCN and the American Revolution, which went into development in 2006, and will examine the role media played in the cultural, social and political changes from 1968 to 1974 through the story of radio station WBCN-FM in Boston. The film is being produced by Lichtenstein Creative Media and the non-profit Center for Independent Documentary. Open Source Cinema is a website to create Basement Tapes, a feature documentary about copyright in the digital age, co-produced by the National Film Board of Canada. Open-source film-making refers to a form of film-making that takes a method of idea formation from open-source software, but in this case the 'source' for a filmmaker is raw unedited footage rather than programming code. It can also refer to a method of film-making where the process of creation is 'open' i.e. a disparate group of contributors, at different times contribute to the final piece. Open-IPTV is IPTV that is not limited to one recording studio, production studio, or cast. Open-IPTV uses the Internet or other means to pool efforts and resources together to create an online community that all contributes to a show. === Education === Within the academic community, there is discussion about expanding what could be called the "intellectual commons" (analogous to the Creative Commons). Proponents of this view have hailed the Connexions Project at Rice University, OpenCourseWare project at MIT, Eugene Thacker's article on "open-source DNA", the "Open Source Cultural Database", Salman Khan's Khan Academy and Wikipedia as examples of applying open source outside the realm of computer software. Open-source curricula are instructional resources whose digital source can be freely used, distributed and modified. Another strand to the academic community is in the area of research. Many funded research projects produce software as part of their work. Due to the benefits of sharing software openly in scientific endeavours, there is an increasing interest in making the outputs of research projects available under an open-source license. In the UK the Joint Information Systems Committee (JISC) has developed a policy on open-source software. JISC also funds a development service called OSS Watch which acts as an advisory service for higher and further education institutions wishing to use, contribute to and develop open-source software. 
On 30 March 2010, President Barack Obama signed the Health Care and Education Reconciliation Act, which included $2 billion over four years to fund the TAACCCT program, which is described as "the largest OER (open education resources) initiative in the world and uniquely focused on creating curricula in partnership with industry for credentials in vocational industry sectors like manufacturing, health, energy, transportation, and IT". === Innovation communities === The principle of sharing pre-dates the open-source movement; for example, the free sharing of information has been institutionalized in the scientific enterprise since at least the 19th century. Open-source principles have always been part of the scientific community. The sociologist Robert K. Merton described four basic norms of the (idealised) scientific community: universalism (an international perspective), communalism (sharing information), objectivity (removing one's personal views from the scientific inquiry), and organized skepticism (requirements of proof and review). These principles are, in part, complemented by US law's focus on protecting expression and method but not the ideas themselves. There is also a tradition of publishing research results to the scientific community instead of keeping all such knowledge proprietary. One of the recent initiatives in scientific publishing has been open access: the idea that research should be published in such a way that it is free and available to the public. There are currently many open access journals where the information is available free online; however, most journals charge a fee (either to users or to libraries) for access. The Budapest Open Access Initiative is an international effort with the goal of making all research articles available free on the Internet. The National Institutes of Health has recently proposed a policy on "Enhanced Public Access to NIH Research Information". This policy would provide a free, searchable resource of NIH-funded results to the public and to other international repositories six months after their initial publication. The NIH's move is an important one because there is a significant amount of public funding in scientific research. Many of the questions have yet to be answered: the balancing of profit vs. public access, and ensuring that desirable standards and incentives do not diminish with a shift to open access. Benjamin Franklin was an early contributor, eventually donating all his inventions, including the Franklin stove, bifocals, and the lightning rod, to the public domain. New NGO communities are starting to use open-source technology as a tool. One example is the Open Source Youth Network started in 2007 in Lisbon by ISCA members. Open innovation is also an emerging concept which advocates putting R&D into a common pool. The Eclipse platform openly presents itself as an open innovation network. === Arts and recreation === Copyright protection is used in the performing arts and even in athletic activities. Some groups have attempted to remove copyright from such practices. In 2012, Russian music composer, scientist and Russian Pirate Party member Victor Argonov presented detailed raw files of his electronic opera "2032" under the free license CC BY-NC 3.0 (later relicensed under CC BY-SA 4.0). This opera was originally composed and published in 2007 by the Russian label MC Entertainment as a commercial product, but then the author changed its status to free. 
In his blog he said that he decided to open the raw files (including WAV, MIDI and other formats used) to the public to support worldwide pirate actions against SOPA and PIPA. Several Internet resources called "2032" the first open-source musical opera in history. === Other related movements === Notable events and applications that have been developed via the open source community, and echo the ideologies of the open source movement, include the Open Education Consortium, Project Gutenberg, synthetic biology, and Wikipedia. The Open Education Consortium is an organization composed of various colleges that support open source and share some of their material online. This organization, headed by the Massachusetts Institute of Technology, was established to aid in the exchange of open source educational materials. Wikipedia is a user-generated online encyclopedia with sister projects in academic areas, such as Wikiversity, a community dedicated to the creation and exchange of learning materials. Prior to the existence of Google Scholar Beta, Project Gutenberg was the first supplier of electronic books and the first free library project. Synthetic biology is a new technology that promises to enable cheap, lifesaving new drugs, as well as helping to yield biofuels that may help to solve our energy problem. Although synthetic biology has not yet come out of its lab stage, it has the potential to become industrialized in the near future. To industrialize open-source science, some scientists are trying to build their own brand of it. === Ideologically-related movements === The open-access movement is a movement that is similar in ideology to the open source movement. Members of this movement maintain that academic material should be readily available to provide help with "future research, assist in teaching and aid in academic purposes." The open-access movement aims to eliminate subscription fees and licensing restrictions on academic materials. The free-culture movement is a movement that seeks to achieve a culture that engages in collective freedom via freedom of expression, free public access to knowledge and information, full demonstration of creativity and innovation in various arenas, and promotion of citizen liberties. Creative Commons is an organization that "develops, supports, and stewards legal and technical infrastructure that maximizes digital creativity, sharing, and innovation." It encourages the use of protected properties online for research, education, and creative purposes in pursuit of universal access. Creative Commons provides an infrastructure through a set of copyright licenses and tools that creates a better balance within the realm of "all rights reserved" properties. The Creative Commons license offers a slightly more lenient alternative to "all rights reserved" copyrights for those who do not wish to exclude the use of their material. The Zeitgeist Movement (TZM) is an international social movement that advocates a transition to a sustainable "resource-based economy" based on collaboration, in which monetary incentives are replaced by commons-based ones and everyone has access to everything (from code to products) as in "open source everything". While its activism and events are typically focused on media and education, TZM is a major supporter of open source projects worldwide, since they allow for the uninhibited advancement of science and technology, independent of constraints posed by institutions of patenting and capitalist investment. 
P2P Foundation is an "international organization focused on studying, researching, documenting and promoting peer to peer practices in a very broad sense." Its objectives incorporate those of the open source movement, whose principles are integrated in a larger socio-economic model. == See also == === Terms based on open source === Open implementation Open security Open-source record label Open standard Shared Source Source-available software === Other === Open Sources: Voices from the Open Source Revolution (book) Commons-based peer production Digital rights Diseconomies of scale Free content Gift economy Glossary of legal terms in technology Mass collaboration Network effect Open Source Initiative Openness Proprietary software Digital public goods == References == == External links ==
Wikipedia/Open-source_model
Citric acid is an organic compound with the formula C6H8O7. It is a colorless weak organic acid. It occurs naturally in citrus fruits. In biochemistry, it is an intermediate in the citric acid cycle, which occurs in the metabolism of all aerobic organisms. More than two million tons of citric acid are manufactured every year. It is used widely as an acidifier, flavoring, preservative, and chelating agent. A citrate is a derivative of citric acid; that is, the salts, esters, and the polyatomic anion found in solutions and salts of citric acid. An example of the former, a salt, is trisodium citrate; an example of the latter, an ester, is triethyl citrate. When the citrate trianion is part of a salt, its formula is written as C6H5O7^3− or C3H5O(COO)3^3−. == Natural occurrence and industrial production == Citric acid occurs in a variety of fruits and vegetables, most notably citrus fruits. Lemons and limes have particularly high concentrations of the acid; it can constitute as much as 8% of the dry weight of these fruits (about 47 g/L in the juices). The concentrations of citric acid in citrus fruits range from 0.005 mol/L for oranges and grapefruits to 0.30 mol/L in lemons and limes; these values vary within species depending upon the cultivar and the circumstances under which the fruit was grown. Citric acid was first isolated in 1784 by the chemist Carl Wilhelm Scheele, who crystallized it from lemon juice. Industrial-scale citric acid production first began in 1890, based on the Italian citrus fruit industry, where the juice was treated with hydrated lime (calcium hydroxide) to precipitate calcium citrate, which was isolated and converted back to the acid using diluted sulfuric acid. In 1893, C. Wehmer discovered that Penicillium mold could produce citric acid from sugar. However, microbial production of citric acid did not become industrially important until World War I disrupted Italian citrus exports. In 1917, American food chemist James Currie discovered that certain strains of the mold Aspergillus niger could be efficient citric acid producers, and the pharmaceutical company Pfizer began industrial-level production using this technique two years later, followed by Citrique Belge in 1929. In this production technique, which is still the major industrial route to citric acid used today, cultures of Aspergillus niger are fed on a sucrose- or glucose-containing medium to produce citric acid. The source of sugar is corn steep liquor, molasses, hydrolyzed corn starch, or another inexpensive carbohydrate solution. After the mold is filtered out of the resulting suspension, citric acid is isolated by precipitating it with calcium hydroxide to yield calcium citrate salt, from which citric acid is regenerated by treatment with sulfuric acid, as in the direct extraction from citrus fruit juice. In 1977, a patent was granted to Lever Brothers for the chemical synthesis of citric acid starting either from aconitate or isocitrate (also called alloisocitrate) calcium salts under high-pressure conditions; this produced citric acid in near-quantitative conversion under what appeared to be a reverse, non-enzymatic Krebs cycle reaction. Global production was in excess of 2,000,000 tons in 2018. More than 50% of this volume was produced in China. More than 50% was used as an acidity regulator in beverages, some 20% in other food applications, 20% for detergent applications, and 10% for applications other than food, such as cosmetics, pharmaceuticals, and the chemical industry. 
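As an illustrative cross-check of the figures above, the following minimal Python sketch converts the quoted molar concentrations into mass terms and estimates the pH of dilute solutions. It is a sketch only: it assumes standard atomic masses for C, H, and O, ideal (activity-free) behaviour, Kw = 1e-14, and the literature dissociation constants pKa = 3.128, 4.761, and 6.396 at 25 °C that are quoted in the Chemical characteristics section below.

from math import log10

# Molar mass of citric acid, C6H8O7, from standard atomic masses (g/mol).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}
FORMULA = {"C": 6, "H": 8, "O": 7}
MOLAR_MASS = sum(ATOMIC_MASS[el] * n for el, n in FORMULA.items())  # about 192.12 g/mol

# Successive dissociation constants (from the pKa values quoted below) and Kw at 25 °C.
KA = [10**-3.128, 10**-4.761, 10**-6.396]
KW = 1e-14

def charge_balance(h, c_total):
    """[H+] - [OH-] - [H2A-] - 2[HA2-] - 3[A3-] for a total acid concentration c_total (mol/L)."""
    k1, k2, k3 = KA
    d = h**3 + k1*h**2 + k1*k2*h + k1*k2*k3   # speciation denominator
    h2a = c_total * k1 * h**2 / d             # singly deprotonated citrate
    ha2 = c_total * k1 * k2 * h / d           # doubly deprotonated citrate
    a3 = c_total * k1 * k2 * k3 / d           # fully deprotonated citrate
    return h - KW / h - (h2a + 2 * ha2 + 3 * a3)

def ph_of_solution(c_total):
    """Find [H+] by bisection; the balance is negative at 1e-14 mol/L and positive at 1 mol/L."""
    lo, hi = 1e-14, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if charge_balance(lo, c_total) * charge_balance(mid, c_total) <= 0:
            hi = mid
        else:
            lo = mid
    return -log10((lo + hi) / 2)

print(f"molar mass: {MOLAR_MASS:.2f} g/mol")
for label, conc in [("orange/grapefruit juice", 0.005), ("lemon/lime juice", 0.30)]:
    print(f"{label}: {conc} mol/L is about {conc * MOLAR_MASS:.1f} g/L, pH about {ph_of_solution(conc):.1f}")
print(f"1 mM solution: pH about {ph_of_solution(1e-3):.1f}")  # about 3.2, as noted below

The 0.30 mol/L figure corresponds to roughly 58 g/L, the same order of magnitude as the ~47 g/L quoted above, and the computed pH of about 3.2 for a 1 mM solution matches the value noted in the next section. Real juices contain other buffering species besides citric acid, so their measured pH is somewhat higher than that calculated for the pure acid at the same concentration.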
== Chemical characteristics == Citric acid can be obtained in an anhydrous (water-free) form or as a monohydrate. The anhydrous form crystallizes from hot water, while the monohydrate forms when citric acid is crystallized from cold water. The monohydrate can be converted to the anhydrous form at about 78 °C. Citric acid also dissolves in absolute (anhydrous) ethanol (76 parts of citric acid per 100 parts of ethanol) at 15 °C. It decomposes with loss of carbon dioxide above about 175 °C. Citric acid is a triprotic acid, with pKa values, extrapolated to zero ionic strength, of 3.128, 4.761, and 6.396 at 25 °C. The pKa of the hydroxyl group has been found, by means of 13C NMR spectroscopy, to be 14.4. The speciation diagram shows that solutions of citric acid are buffer solutions between about pH 2 and pH 8. In biological systems around pH 7, the two species present are the citrate ion and the mono-hydrogen citrate ion. The SSC 20X hybridization buffer is an example in common use. Tables compiled for biochemical studies are available. For example, the pH of a 1 mM solution of citric acid is about 3.2. The pH of fruit juices from citrus fruits like oranges and lemons depends on the citric acid concentration, with a higher concentration of citric acid resulting in a lower pH. Acid salts of citric acid can be prepared by careful adjustment of the pH before crystallizing the compound. See, for example, sodium citrate. The citrate ion forms complexes with metallic cations. The stability constants for the formation of these complexes are quite large because of the chelate effect. Consequently, it forms complexes even with alkali metal cations. However, when a chelate complex is formed using all three carboxylate groups, the chelate rings have 7 and 8 members, which are generally less stable thermodynamically than smaller chelate rings. In consequence, the hydroxyl group can be deprotonated, forming part of a more stable 5-membered ring, as in ammonium ferric citrate, [NH4+]5Fe3+(C6H4O7^4−)2·2H2O. Citric acid can be esterified at one or more of its three carboxylic acid groups to form any of a variety of mono-, di-, tri-, and mixed esters. == Biochemistry == === Citric acid cycle === Citrate is an intermediate in the citric acid cycle, also known as the tricarboxylic acid (TCA) cycle or the Krebs cycle, a central metabolic pathway for animals, plants, and bacteria. In the Krebs cycle, citrate synthase catalyzes the condensation of oxaloacetate with acetyl-CoA to form citrate. Citrate then acts as the substrate for aconitase and is converted into aconitic acid. The cycle ends with regeneration of oxaloacetate. This series of chemical reactions is the source of two-thirds of the food-derived energy in higher organisms. The chemical energy released is made available in the form of adenosine triphosphate (ATP). Hans Adolf Krebs received the 1953 Nobel Prize in Physiology or Medicine for the discovery. === Other biological roles === Citrate can be transported out of the mitochondria and into the cytoplasm, then broken down into acetyl-CoA for fatty acid synthesis, and into oxaloacetate. Citrate is a positive modulator of this conversion, and allosterically regulates the enzyme acetyl-CoA carboxylase, which is the regulating enzyme in the conversion of acetyl-CoA into malonyl-CoA (the commitment step in fatty acid synthesis). 
In short, citrate is transported into the cytoplasm, converted into acetyl-CoA, which is then converted into malonyl-CoA by acetyl-CoA carboxylase, which is allosterically modulated by citrate. High concentrations of cytosolic citrate can inhibit phosphofructokinase, the catalyst of a rate-limiting step of glycolysis. This effect is advantageous: high concentrations of citrate indicate that there is a large supply of biosynthetic precursor molecules, so there is no need for phosphofructokinase to continue to send molecules of its substrate, fructose 6-phosphate, into glycolysis. Citrate acts by augmenting the inhibitory effect of high concentrations of ATP, another sign that there is no need to carry out glycolysis. Citrate is a vital component of bone, helping to regulate the size of apatite crystals. == Applications == === Food and drink === Because it is one of the stronger edible acids, the dominant use of citric acid is as a flavoring and preservative in food and beverages, especially soft drinks and candies. Within the European Union it is denoted by E number E330. Citrate salts of various metals are used to deliver those minerals in a biologically available form in many dietary supplements. Citric acid has 247 kcal per 100 g. In the United States the purity requirements for citric acid as a food additive are defined by the Food Chemicals Codex, which is published by the United States Pharmacopoeia (USP). Citric acid can be added to ice cream as an emulsifying agent to keep fats from separating, to caramel to prevent sucrose crystallization, or in recipes in place of fresh lemon juice. Citric acid is used with sodium bicarbonate in a wide range of effervescent formulae, both for ingestion (e.g., powders and tablets) and for personal care (e.g., bath salts, bath bombs, and cleaning of grease). Citric acid sold in a dry powdered form is commonly sold in markets and groceries as "sour salt", due to its physical resemblance to table salt. It has use in culinary applications, as an alternative to vinegar or lemon juice, where a pure acid is needed. Citric acid can be used in food coloring to balance the pH level of a normally basic dye. === Cleaning and chelating agent === Citric acid is an excellent chelating agent, binding metals by making them soluble. It is used to remove and discourage the buildup of limescale from boilers and evaporators. It can be used to treat water, which makes it useful in improving the effectiveness of soaps and laundry detergents. By chelating the metals in hard water, it lets these cleaners produce foam and work better without need for water softening. Citric acid is the active ingredient in some bathroom and kitchen cleaning solutions. A solution with a six percent concentration of citric acid will remove hard water stains from glass without scrubbing. Citric acid can be used in shampoo to wash out wax and coloring from the hair. Illustrative of its chelating abilities, citric acid was the first successful eluant used for total ion-exchange separation of the lanthanides, during the Manhattan Project in the 1940s. In the 1950s, it was replaced by the far more efficient EDTA. In industry, it is used to dissolve rust from steel, and to passivate stainless steels. === Cosmetics, pharmaceuticals, dietary supplements, and foods === Citric acid is used as an acidulant in creams, gels, and liquids. Used in foods and dietary supplements, it may be classified as a processing aid if it was added for a technical or functional effect (e.g. 
acidulant, chelator, viscosifier, etc.). If it is still present in insignificant amounts, and the technical or functional effect is no longer present, it may be exempt from labeling (21 CFR §101.100(c)). Citric acid is an alpha hydroxy acid and is an active ingredient in chemical skin peels. Citric acid is commonly used as a buffer to increase the solubility of brown heroin. Citric acid is used as one of the active ingredients in the production of facial tissues with antiviral properties. === Other uses === The buffering properties of citrates are used to control pH in household cleaners and pharmaceuticals. Citric acid is used as an odorless alternative to white vinegar for fabric dyeing with acid dyes. It can enhance the mordanting process, crosslinking fabrics and dyes through an esterification reaction. Sodium citrate is a component of Benedict's reagent, used for both qualitative and quantitative identification of reducing sugars. Citric acid can be used as an alternative to nitric acid in the passivation of stainless steel. Citric acid can be used as a lower-odor stop bath as part of the process for developing photographic film. Photographic developers are alkaline, so a mild acid is used to neutralize and stop their action quickly, but the commonly used acetic acid leaves a strong vinegar odor in the darkroom. Citric acid is an excellent soldering flux, either dry or as a concentrated solution in water. It should be removed after soldering, especially with fine wires, as it is mildly corrosive. It dissolves and rinses quickly in hot water. Alkali citrate can be used as an inhibitor of kidney stones by increasing urine citrate levels, useful for the prevention of calcium stones, and by increasing urine pH, useful for preventing uric acid and cystine stones. == Synthesis of other organic compounds == Citric acid is a versatile precursor to many other organic compounds. Dehydration routes give itaconic acid and its anhydride. Citraconic acid can be produced via thermal isomerization of itaconic acid anhydride. The required itaconic acid anhydride is obtained by dry distillation of citric acid. Aconitic acid can be synthesized by dehydration of citric acid using sulfuric acid: (HO2CCH2)2C(OH)CO2H → HO2CCH=C(CO2H)CH2CO2H + H2O Acetonedicarboxylic acid can also be prepared by decarboxylation of citric acid in fuming sulfuric acid. == Safety == Although citric acid is a weak acid, exposure to the pure compound can cause adverse effects. Inhalation may cause cough, shortness of breath, or sore throat. Over-ingestion may cause abdominal pain and sore throat. Exposure of concentrated solutions to skin and eyes can cause redness and pain. Long-term or repeated consumption may cause erosion of tooth enamel. == Compendial status == British Pharmacopoeia Japanese Pharmacopoeia == See also == Closely related acids: isocitric acid, aconitic acid, fluorocitric acid, chlorocitric acid, and propane-1,2,3-tricarboxylic acid (tricarballylic acid, carballylic acid) Acids in wine == Explanatory notes == == References == == External links ==
Wikipedia/Citrate
Bayesian inference ( BAY-zee-ən or BAY-zhən) is a method of statistical inference in which Bayes' theorem is used to calculate a probability of a hypothesis, given prior evidence, and update it as more information becomes available. Fundamentally, Bayesian inference uses a prior distribution to estimate posterior probabilities. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability". == Introduction to Bayes' rule == === Formal explanation === Bayesian inference derives the posterior probability as a consequence of two antecedents: a prior probability and a "likelihood function" derived from a statistical model for the observed data. Bayesian inference computes the posterior probability according to Bayes' theorem: P ( H ∣ E ) = P ( E ∣ H ) ⋅ P ( H ) P ( E ) , {\displaystyle P(H\mid E)={\frac {P(E\mid H)\cdot P(H)}{P(E)}},} where H stands for any hypothesis whose probability may be affected by data (called evidence below). Often there are competing hypotheses, and the task is to determine which is the most probable. P ( H ) {\displaystyle P(H)} , the prior probability, is the estimate of the probability of the hypothesis H before the data E, the current evidence, is observed. E, the evidence, corresponds to new data that were not used in computing the prior probability. P ( H ∣ E ) {\displaystyle P(H\mid E)} , the posterior probability, is the probability of H given E, i.e., after E is observed. This is what we want to know: the probability of a hypothesis given the observed evidence. P ( E ∣ H ) {\displaystyle P(E\mid H)} is the probability of observing E given H and is called the likelihood. As a function of E with H fixed, it indicates the compatibility of the evidence with the given hypothesis. The likelihood function is a function of the evidence, E, while the posterior probability is a function of the hypothesis, H. P ( E ) {\displaystyle P(E)} is sometimes termed the marginal likelihood or "model evidence". This factor is the same for all possible hypotheses being considered (as is evident from the fact that the hypothesis H does not appear anywhere in the symbol, unlike for all the other factors) and hence does not factor into determining the relative probabilities of different hypotheses. P ( E ) > 0 {\displaystyle P(E)>0} (Else one has 0 / 0 {\displaystyle 0/0} .) For different values of H, only the factors P ( H ) {\displaystyle P(H)} and P ( E ∣ H ) {\displaystyle P(E\mid H)} , both in the numerator, affect the value of P ( H ∣ E ) {\displaystyle P(H\mid E)} – the posterior probability of a hypothesis is proportional to its prior probability (its inherent likeliness) and the newly acquired likelihood (its compatibility with the new observed evidence). 
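As a minimal numerical sketch of the update just described, the following computes posterior probabilities for a set of competing hypotheses by multiplying each prior by its likelihood and normalizing by the (hypothesis-independent) evidence; the hypotheses and numbers are invented purely for illustration.

```python
def posteriors(priors, likelihoods):
    """Return P(H_i | E) for a list of competing hypotheses H_i.

    priors      -- list of P(H_i), summing to 1
    likelihoods -- list of P(E | H_i), same length
    """
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    evidence = sum(unnormalized)  # P(E), the same for every hypothesis
    return [u / evidence for u in unnormalized]

if __name__ == "__main__":
    # Three invented hypotheses with equal priors and different likelihoods.
    print(posteriors([1 / 3, 1 / 3, 1 / 3], [0.9, 0.3, 0.1]))
    # -> [0.692..., 0.230..., 0.076...]: the evidence shifts belief toward H_1.
```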
In cases where ¬ H {\displaystyle \neg H} ("not H"), the logical negation of H, is a valid likelihood, Bayes' rule can be rewritten as follows: P ( H ∣ E ) = P ( E ∣ H ) P ( H ) P ( E ) = P ( E ∣ H ) P ( H ) P ( E ∣ H ) P ( H ) + P ( E ∣ ¬ H ) P ( ¬ H ) = 1 1 + ( 1 P ( H ) − 1 ) P ( E ∣ ¬ H ) P ( E ∣ H ) {\displaystyle {\begin{aligned}P(H\mid E)&={\frac {P(E\mid H)P(H)}{P(E)}}\\\\&={\frac {P(E\mid H)P(H)}{P(E\mid H)P(H)+P(E\mid \neg H)P(\neg H)}}\\\\&={\frac {1}{1+\left({\frac {1}{P(H)}}-1\right){\frac {P(E\mid \neg H)}{P(E\mid H)}}}}\\\end{aligned}}} because P ( E ) = P ( E ∣ H ) P ( H ) + P ( E ∣ ¬ H ) P ( ¬ H ) {\displaystyle P(E)=P(E\mid H)P(H)+P(E\mid \neg H)P(\neg H)} and P ( H ) + P ( ¬ H ) = 1. {\displaystyle P(H)+P(\neg H)=1.} This focuses attention on the term ( 1 P ( H ) − 1 ) P ( E ∣ ¬ H ) P ( E ∣ H ) . {\displaystyle \left({\tfrac {1}{P(H)}}-1\right){\tfrac {P(E\mid \neg H)}{P(E\mid H)}}.} If that term is approximately 1, then the probability of the hypothesis given the evidence, P ( H ∣ E ) {\displaystyle P(H\mid E)} , is about 1 2 {\displaystyle {\tfrac {1}{2}}} , about 50% likely - equally likely or not likely. If that term is very small, close to zero, then the probability of the hypothesis, given the evidence, P ( H ∣ E ) {\displaystyle P(H\mid E)} is close to 1 or the conditional hypothesis is quite likely. If that term is very large, much larger than 1, then the hypothesis, given the evidence, is quite unlikely. If the hypothesis (without consideration of evidence) is unlikely, then P ( H ) {\displaystyle P(H)} is small (but not necessarily astronomically small) and 1 P ( H ) {\displaystyle {\tfrac {1}{P(H)}}} is much larger than 1 and this term can be approximated as P ( E ∣ ¬ H ) P ( E ∣ H ) ⋅ P ( H ) {\displaystyle {\tfrac {P(E\mid \neg H)}{P(E\mid H)\cdot P(H)}}} and relevant probabilities can be compared directly to each other. One quick and easy way to remember the equation would be to use rule of multiplication: P ( E ∩ H ) = P ( E ∣ H ) P ( H ) = P ( H ∣ E ) P ( E ) . {\displaystyle P(E\cap H)=P(E\mid H)P(H)=P(H\mid E)P(E).} === Alternatives to Bayesian updating === Bayesian updating is widely used and computationally convenient. However, it is not the only updating rule that might be considered rational. Ian Hacking noted that traditional "Dutch book" arguments did not specify Bayesian updating: they left open the possibility that non-Bayesian updating rules could avoid Dutch books. Hacking wrote: "And neither the Dutch book argument nor any other in the personalist arsenal of proofs of the probability axioms entails the dynamic assumption. Not one entails Bayesianism. So the personalist requires the dynamic assumption to be Bayesian. It is true that in consistency a personalist could abandon the Bayesian model of learning from experience. Salt could lose its savour." Indeed, there are non-Bayesian updating rules that also avoid Dutch books (as discussed in the literature on "probability kinematics") following the publication of Richard C. Jeffrey's rule, which applies Bayes' rule to the case where the evidence itself is assigned a probability. The additional hypotheses needed to uniquely require Bayesian updating have been deemed to be substantial, complicated, and unsatisfactory. == Inference over exclusive and exhaustive possibilities == If evidence is simultaneously used to update belief over a set of exclusive and exhaustive propositions, Bayesian inference may be thought of as acting on this belief distribution as a whole. 
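The simplest exclusive and exhaustive set of propositions is {H, ¬H} itself, so the rewritten form above serves as a first illustration of this idea; the sketch below evaluates it for invented numbers and checks it against direct normalization over {H, ¬H}.

```python
def posterior_odds_form(p_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) via the form 1 / (1 + (1/P(H) - 1) * P(E|~H) / P(E|H))."""
    term = (1.0 / p_h - 1.0) * p_e_given_not_h / p_e_given_h
    return 1.0 / (1.0 + term)

def posterior_direct(p_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) by expanding P(E) over the exhaustive set {H, not H}."""
    evidence = p_e_given_h * p_h + p_e_given_not_h * (1.0 - p_h)
    return p_e_given_h * p_h / evidence

if __name__ == "__main__":
    # Invented numbers: prior 0.2, evidence four times as likely under H.
    args = (0.2, 0.8, 0.2)
    print(posterior_odds_form(*args), posterior_direct(*args))  # both 0.5
```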
=== General formulation === Suppose a process is generating independent and identically distributed events E n , n = 1 , 2 , 3 , … {\displaystyle E_{n},\ n=1,2,3,\ldots } , but the probability distribution is unknown. Let the event space Ω {\displaystyle \Omega } represent the current state of belief for this process. Each model is represented by event M m {\displaystyle M_{m}} . The conditional probabilities P ( E n ∣ M m ) {\displaystyle P(E_{n}\mid M_{m})} are specified to define the models. P ( M m ) {\displaystyle P(M_{m})} is the degree of belief in M m {\displaystyle M_{m}} . Before the first inference step, { P ( M m ) } {\displaystyle \{P(M_{m})\}} is a set of initial prior probabilities. These must sum to 1, but are otherwise arbitrary. Suppose that the process is observed to generate E ∈ { E n } {\displaystyle E\in \{E_{n}\}} . For each M ∈ { M m } {\displaystyle M\in \{M_{m}\}} , the prior P ( M ) {\displaystyle P(M)} is updated to the posterior P ( M ∣ E ) {\displaystyle P(M\mid E)} . From Bayes' theorem: P ( M ∣ E ) = P ( E ∣ M ) ∑ m P ( E ∣ M m ) P ( M m ) ⋅ P ( M ) . {\displaystyle P(M\mid E)={\frac {P(E\mid M)}{\sum _{m}{P(E\mid M_{m})P(M_{m})}}}\cdot P(M).} Upon observation of further evidence, this procedure may be repeated. === Multiple observations === For a sequence of independent and identically distributed observations E = ( e 1 , … , e n ) {\displaystyle \mathbf {E} =(e_{1},\dots ,e_{n})} , it can be shown by induction that repeated application of the above is equivalent to P ( M ∣ E ) = P ( E ∣ M ) ∑ m P ( E ∣ M m ) P ( M m ) ⋅ P ( M ) , {\displaystyle P(M\mid \mathbf {E} )={\frac {P(\mathbf {E} \mid M)}{\sum _{m}{P(\mathbf {E} \mid M_{m})P(M_{m})}}}\cdot P(M),} where P ( E ∣ M ) = ∏ k P ( e k ∣ M ) . {\displaystyle P(\mathbf {E} \mid M)=\prod _{k}{P(e_{k}\mid M)}.} === Parametric formulation: motivating the formal description === By parameterizing the space of models, the belief in all models may be updated in a single step. The distribution of belief over the model space may then be thought of as a distribution of belief over the parameter space. The distributions in this section are expressed as continuous, represented by probability densities, as this is the usual situation. The technique is, however, equally applicable to discrete distributions. Let the vector θ {\displaystyle {\boldsymbol {\theta }}} span the parameter space. Let the initial prior distribution over θ {\displaystyle {\boldsymbol {\theta }}} be p ( θ ∣ α ) {\displaystyle p({\boldsymbol {\theta }}\mid {\boldsymbol {\alpha }})} , where α {\displaystyle {\boldsymbol {\alpha }}} is a set of parameters to the prior itself, or hyperparameters. Let E = ( e 1 , … , e n ) {\displaystyle \mathbf {E} =(e_{1},\dots ,e_{n})} be a sequence of independent and identically distributed event observations, where all e i {\displaystyle e_{i}} are distributed as p ( e ∣ θ ) {\displaystyle p(e\mid {\boldsymbol {\theta }})} for some θ {\displaystyle {\boldsymbol {\theta }}} . 
Bayes' theorem is applied to find the posterior distribution over θ {\displaystyle {\boldsymbol {\theta }}} : p ( θ ∣ E , α ) = p ( E ∣ θ , α ) p ( E ∣ α ) ⋅ p ( θ ∣ α ) = p ( E ∣ θ , α ) ∫ p ( E ∣ θ , α ) p ( θ ∣ α ) d θ ⋅ p ( θ ∣ α ) , {\displaystyle {\begin{aligned}p({\boldsymbol {\theta }}\mid \mathbf {E} ,{\boldsymbol {\alpha }})&={\frac {p(\mathbf {E} \mid {\boldsymbol {\theta }},{\boldsymbol {\alpha }})}{p(\mathbf {E} \mid {\boldsymbol {\alpha }})}}\cdot p({\boldsymbol {\theta }}\mid {\boldsymbol {\alpha }})\\&={\frac {p(\mathbf {E} \mid {\boldsymbol {\theta }},{\boldsymbol {\alpha }})}{\int p(\mathbf {E} \mid {\boldsymbol {\theta }},{\boldsymbol {\alpha }})p({\boldsymbol {\theta }}\mid {\boldsymbol {\alpha }})\,d{\boldsymbol {\theta }}}}\cdot p({\boldsymbol {\theta }}\mid {\boldsymbol {\alpha }}),\end{aligned}}} where p ( E ∣ θ , α ) = ∏ k p ( e k ∣ θ ) . {\displaystyle p(\mathbf {E} \mid {\boldsymbol {\theta }},{\boldsymbol {\alpha }})=\prod _{k}p(e_{k}\mid {\boldsymbol {\theta }}).} == Formal description of Bayesian inference == === Definitions === x {\displaystyle x} , a data point in general. This may in fact be a vector of values. θ {\displaystyle \theta } , the parameter of the data point's distribution, i.e., x ∼ p ( x ∣ θ ) {\displaystyle x\sim p(x\mid \theta )} . This may be a vector of parameters. α {\displaystyle \alpha } , the hyperparameter of the parameter distribution, i.e., θ ∼ p ( θ ∣ α ) {\displaystyle \theta \sim p(\theta \mid \alpha )} . This may be a vector of hyperparameters. X {\displaystyle \mathbf {X} } is the sample, a set of n {\displaystyle n} observed data points, i.e., x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} . x ~ {\displaystyle {\tilde {x}}} , a new data point whose distribution is to be predicted. === Bayesian inference === The prior distribution is the distribution of the parameter(s) before any data is observed, i.e. p ( θ ∣ α ) {\displaystyle p(\theta \mid \alpha )} . The prior distribution might not be easily determined; in such a case, one possibility may be to use the Jeffreys prior to obtain a prior distribution before updating it with newer observations. The sampling distribution is the distribution of the observed data conditional on its parameters, i.e. p ( X ∣ θ ) {\displaystyle p(\mathbf {X} \mid \theta )} . This is also termed the likelihood, especially when viewed as a function of the parameter(s), sometimes written L ⁡ ( θ ∣ X ) = p ( X ∣ θ ) {\displaystyle \operatorname {L} (\theta \mid \mathbf {X} )=p(\mathbf {X} \mid \theta )} . The marginal likelihood (sometimes also termed the evidence) is the distribution of the observed data marginalized over the parameter(s), i.e. p ( X ∣ α ) = ∫ p ( X ∣ θ ) p ( θ ∣ α ) d θ . {\displaystyle p(\mathbf {X} \mid \alpha )=\int p(\mathbf {X} \mid \theta )p(\theta \mid \alpha )d\theta .} It quantifies the agreement between data and expert opinion, in a geometric sense that can be made precise. If the marginal likelihood is 0 then there is no agreement between the data and expert opinion and Bayes' rule cannot be applied. The posterior distribution is the distribution of the parameter(s) after taking into account the observed data. This is determined by Bayes' rule, which forms the heart of Bayesian inference: p ( θ ∣ X , α ) = p ( θ , X , α ) p ( X , α ) = p ( X ∣ θ , α ) p ( θ , α ) p ( X ∣ α ) p ( α ) = p ( X ∣ θ , α ) p ( θ ∣ α ) p ( X ∣ α ) ∝ p ( X ∣ θ , α ) p ( θ ∣ α ) . 
{\displaystyle p(\theta \mid \mathbf {X} ,\alpha )={\frac {p(\theta ,\mathbf {X} ,\alpha )}{p(\mathbf {X} ,\alpha )}}={\frac {p(\mathbf {X} \mid \theta ,\alpha )p(\theta ,\alpha )}{p(\mathbf {X} \mid \alpha )p(\alpha )}}={\frac {p(\mathbf {X} \mid \theta ,\alpha )p(\theta \mid \alpha )}{p(\mathbf {X} \mid \alpha )}}\propto p(\mathbf {X} \mid \theta ,\alpha )p(\theta \mid \alpha ).} This is expressed in words as "posterior is proportional to likelihood times prior", or sometimes as "posterior = likelihood times prior, over evidence". In practice, for almost all complex Bayesian models used in machine learning, the posterior distribution p ( θ ∣ X , α ) {\displaystyle p(\theta \mid \mathbf {X} ,\alpha )} is not obtained in a closed form distribution, mainly because the parameter space for θ {\displaystyle \theta } can be very high, or the Bayesian model retains certain hierarchical structure formulated from the observations X {\displaystyle \mathbf {X} } and parameter θ {\displaystyle \theta } . In such situations, we need to resort to approximation techniques. General case: Let P Y x {\displaystyle P_{Y}^{x}} be the conditional distribution of Y {\displaystyle Y} given X = x {\displaystyle X=x} and let P X {\displaystyle P_{X}} be the distribution of X {\displaystyle X} . The joint distribution is then P X , Y ( d x , d y ) = P Y x ( d y ) P X ( d x ) {\displaystyle P_{X,Y}(dx,dy)=P_{Y}^{x}(dy)P_{X}(dx)} . The conditional distribution P X y {\displaystyle P_{X}^{y}} of X {\displaystyle X} given Y = y {\displaystyle Y=y} is then determined by P X y ( A ) = E ( 1 A ( X ) | Y = y ) {\displaystyle P_{X}^{y}(A)=E(1_{A}(X)|Y=y)} Existence and uniqueness of the needed conditional expectation is a consequence of the Radon–Nikodym theorem. This was formulated by Kolmogorov in his famous book from 1933. Kolmogorov underlines the importance of conditional probability by writing "I wish to call attention to ... and especially the theory of conditional probabilities and conditional expectations ..." in the Preface. The Bayes theorem determines the posterior distribution from the prior distribution. Uniqueness requires continuity assumptions. Bayes' theorem can be generalized to include improper prior distributions such as the uniform distribution on the real line. Modern Markov chain Monte Carlo methods have boosted the importance of Bayes' theorem including cases with improper priors. === Bayesian prediction === The posterior predictive distribution is the distribution of a new data point, marginalized over the posterior: p ( x ~ ∣ X , α ) = ∫ p ( x ~ ∣ θ ) p ( θ ∣ X , α ) d θ {\displaystyle p({\tilde {x}}\mid \mathbf {X} ,\alpha )=\int p({\tilde {x}}\mid \theta )p(\theta \mid \mathbf {X} ,\alpha )d\theta } The prior predictive distribution is the distribution of a new data point, marginalized over the prior: p ( x ~ ∣ α ) = ∫ p ( x ~ ∣ θ ) p ( θ ∣ α ) d θ {\displaystyle p({\tilde {x}}\mid \alpha )=\int p({\tilde {x}}\mid \theta )p(\theta \mid \alpha )d\theta } Bayesian theory calls for the use of the posterior predictive distribution to do predictive inference, i.e., to predict the distribution of a new, unobserved data point. That is, instead of a fixed point as a prediction, a distribution over possible points is returned. Only this way is the entire posterior distribution of the parameter(s) used. 
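A small sketch of the predictive distributions in the conjugate Beta–Bernoulli setting, where the integrals above have standard closed forms, may make this concrete; the prior hyperparameters and the observed counts are invented for illustration.

```python
def beta_bernoulli_predictive(a, b, successes=0, failures=0):
    """Predictive probability that the next Bernoulli trial is a success.

    Assumes a Beta(a, b) prior; after observing the given counts the
    posterior is Beta(a + successes, b + failures), and the predictive
    probability of a success is the mean of that Beta distribution.
    """
    return (a + successes) / (a + b + successes + failures)

if __name__ == "__main__":
    a, b = 2.0, 2.0  # invented Beta prior hyperparameters
    print(beta_bernoulli_predictive(a, b))        # prior predictive: 0.5
    print(beta_bernoulli_predictive(a, b, 7, 3))  # posterior predictive after 7 successes in 10 trials: ~0.643
```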
By comparison, prediction in frequentist statistics often involves finding an optimum point estimate of the parameter(s)—e.g., by maximum likelihood or maximum a posteriori estimation (MAP)—and then plugging this estimate into the formula for the distribution of a data point. This has the disadvantage that it does not account for any uncertainty in the value of the parameter, and hence will underestimate the variance of the predictive distribution. In some instances, frequentist statistics can work around this problem. For example, confidence intervals and prediction intervals in frequentist statistics when constructed from a normal distribution with unknown mean and variance are constructed using a Student's t-distribution. This correctly estimates the variance, due to the facts that (1) the average of normally distributed random variables is also normally distributed, and (2) the predictive distribution of a normally distributed data point with unknown mean and variance, using conjugate or uninformative priors, has a Student's t-distribution. In Bayesian statistics, however, the posterior predictive distribution can always be determined exactly—or at least to an arbitrary level of precision when numerical methods are used. Both types of predictive distributions have the form of a compound probability distribution (as does the marginal likelihood). In fact, if the prior distribution is a conjugate prior, such that the prior and posterior distributions come from the same family, it can be seen that both prior and posterior predictive distributions also come from the same family of compound distributions. The only difference is that the posterior predictive distribution uses the updated values of the hyperparameters (applying the Bayesian update rules given in the conjugate prior article), while the prior predictive distribution uses the values of the hyperparameters that appear in the prior distribution. == Mathematical properties == === Interpretation of factor === P ( E ∣ M ) P ( E ) > 1 ⇒ P ( E ∣ M ) > P ( E ) {\textstyle {\frac {P(E\mid M)}{P(E)}}>1\Rightarrow P(E\mid M)>P(E)} . That is, if the model were true, the evidence would be more likely than is predicted by the current state of belief. The reverse applies for a decrease in belief. If the belief does not change, P ( E ∣ M ) P ( E ) = 1 ⇒ P ( E ∣ M ) = P ( E ) {\textstyle {\frac {P(E\mid M)}{P(E)}}=1\Rightarrow P(E\mid M)=P(E)} . That is, the evidence is independent of the model. If the model were true, the evidence would be exactly as likely as predicted by the current state of belief. === Cromwell's rule === If P ( M ) = 0 {\displaystyle P(M)=0} then P ( M ∣ E ) = 0 {\displaystyle P(M\mid E)=0} . If P ( M ) = 1 {\displaystyle P(M)=1} and P ( E ) > 0 {\displaystyle P(E)>0} , then P ( M | E ) = 1 {\displaystyle P(M|E)=1} . This can be interpreted to mean that hard convictions are insensitive to counter-evidence. The former follows directly from Bayes' theorem. The latter can be derived by applying the first rule to the event "not M {\displaystyle M} " in place of " M {\displaystyle M} ", yielding "if 1 − P ( M ) = 0 {\displaystyle 1-P(M)=0} , then 1 − P ( M ∣ E ) = 0 {\displaystyle 1-P(M\mid E)=0} ", from which the result immediately follows. === Asymptotic behaviour of posterior === Consider the behaviour of a belief distribution as it is updated a large number of times with independent and identically distributed trials. 
For sufficiently nice prior probabilities, the Bernstein-von Mises theorem gives that in the limit of infinite trials, the posterior converges to a Gaussian distribution independent of the initial prior under some conditions firstly outlined and rigorously proven by Joseph L. Doob in 1948, namely if the random variable in consideration has a finite probability space. The more general results were obtained later by the statistician David A. Freedman who published in two seminal research papers in 1963 and 1965 when and under what circumstances the asymptotic behaviour of posterior is guaranteed. His 1963 paper treats, like Doob (1949), the finite case and comes to a satisfactory conclusion. However, if the random variable has an infinite but countable probability space (i.e., corresponding to a die with infinite many faces) the 1965 paper demonstrates that for a dense subset of priors the Bernstein-von Mises theorem is not applicable. In this case there is almost surely no asymptotic convergence. Later in the 1980s and 1990s Freedman and Persi Diaconis continued to work on the case of infinite countable probability spaces. To summarise, there may be insufficient trials to suppress the effects of the initial choice, and especially for large (but finite) systems the convergence might be very slow. === Conjugate priors === In parameterized form, the prior distribution is often assumed to come from a family of distributions called conjugate priors. The usefulness of a conjugate prior is that the corresponding posterior distribution will be in the same family, and the calculation may be expressed in closed form. === Estimates of parameters and predictions === It is often desired to use a posterior distribution to estimate a parameter or variable. Several methods of Bayesian estimation select measurements of central tendency from the posterior distribution. For one-dimensional problems, a unique median exists for practical continuous problems. The posterior median is attractive as a robust estimator. If there exists a finite mean for the posterior distribution, then the posterior mean is a method of estimation. θ ~ = E ⁡ [ θ ] = ∫ θ p ( θ ∣ X , α ) d θ {\displaystyle {\tilde {\theta }}=\operatorname {E} [\theta ]=\int \theta \,p(\theta \mid \mathbf {X} ,\alpha )\,d\theta } Taking a value with the greatest probability defines maximum a posteriori (MAP) estimates: { θ MAP } ⊂ arg ⁡ max θ p ( θ ∣ X , α ) . {\displaystyle \{\theta _{\text{MAP}}\}\subset \arg \max _{\theta }p(\theta \mid \mathbf {X} ,\alpha ).} There are examples where no maximum is attained, in which case the set of MAP estimates is empty. There are other methods of estimation that minimize the posterior risk (expected-posterior loss) with respect to a loss function, and these are of interest to statistical decision theory using the sampling distribution ("frequentist statistics"). The posterior predictive distribution of a new observation x ~ {\displaystyle {\tilde {x}}} (that is independent of previous observations) is determined by p ( x ~ | X , α ) = ∫ p ( x ~ , θ ∣ X , α ) d θ = ∫ p ( x ~ ∣ θ ) p ( θ ∣ X , α ) d θ . {\displaystyle p({\tilde {x}}|\mathbf {X} ,\alpha )=\int p({\tilde {x}},\theta \mid \mathbf {X} ,\alpha )\,d\theta =\int p({\tilde {x}}\mid \theta )p(\theta \mid \mathbf {X} ,\alpha )\,d\theta .} == Examples == === Probability of a hypothesis === Suppose there are two full bowls of cookies. Bowl #1 has 10 chocolate chip and 30 plain cookies, while bowl #2 has 20 of each. 
Our friend Fred picks a bowl at random, and then picks a cookie at random. We may assume there is no reason to believe Fred treats one bowl differently from another, likewise for the cookies. The cookie turns out to be a plain one. How probable is it that Fred picked it out of bowl #1? Intuitively, it seems clear that the answer should be more than a half, since there are more plain cookies in bowl #1. The precise answer is given by Bayes' theorem. Let H 1 {\displaystyle H_{1}} correspond to bowl #1, and H 2 {\displaystyle H_{2}} to bowl #2. It is given that the bowls are identical from Fred's point of view, thus P ( H 1 ) = P ( H 2 ) {\displaystyle P(H_{1})=P(H_{2})} , and the two must add up to 1, so both are equal to 0.5. The event E {\displaystyle E} is the observation of a plain cookie. From the contents of the bowls, we know that P ( E ∣ H 1 ) = 30 / 40 = 0.75 {\displaystyle P(E\mid H_{1})=30/40=0.75} and P ( E ∣ H 2 ) = 20 / 40 = 0.5. {\displaystyle P(E\mid H_{2})=20/40=0.5.} Bayes' formula then yields P ( H 1 ∣ E ) = P ( E ∣ H 1 ) P ( H 1 ) P ( E ∣ H 1 ) P ( H 1 ) + P ( E ∣ H 2 ) P ( H 2 ) = 0.75 × 0.5 0.75 × 0.5 + 0.5 × 0.5 = 0.6 {\displaystyle {\begin{aligned}P(H_{1}\mid E)&={\frac {P(E\mid H_{1})\,P(H_{1})}{P(E\mid H_{1})\,P(H_{1})\;+\;P(E\mid H_{2})\,P(H_{2})}}\\\\\ &={\frac {0.75\times 0.5}{0.75\times 0.5+0.5\times 0.5}}\\\\\ &=0.6\end{aligned}}} Before we observed the cookie, the probability we assigned for Fred having chosen bowl #1 was the prior probability, P ( H 1 ) {\displaystyle P(H_{1})} , which was 0.5. After observing the cookie, we must revise the probability to P ( H 1 ∣ E ) {\displaystyle P(H_{1}\mid E)} , which is 0.6. === Making a prediction === An archaeologist is working at a site thought to be from the medieval period, between the 11th century to the 16th century. However, it is uncertain exactly when in this period the site was inhabited. Fragments of pottery are found, some of which are glazed and some of which are decorated. It is expected that if the site were inhabited during the early medieval period, then 1% of the pottery would be glazed and 50% of its area decorated, whereas if it had been inhabited in the late medieval period then 81% would be glazed and 5% of its area decorated. How confident can the archaeologist be in the date of inhabitation as fragments are unearthed? The degree of belief in the continuous variable C {\displaystyle C} (century) is to be calculated, with the discrete set of events { G D , G D ¯ , G ¯ D , G ¯ D ¯ } {\displaystyle \{GD,G{\bar {D}},{\bar {G}}D,{\bar {G}}{\bar {D}}\}} as evidence. 
Assuming linear variation of glaze and decoration with time, and that these variables are independent, P ( E = G D ∣ C = c ) = ( 0.01 + 0.81 − 0.01 16 − 11 ( c − 11 ) ) ( 0.5 − 0.5 − 0.05 16 − 11 ( c − 11 ) ) {\displaystyle P(E=GD\mid C=c)=(0.01+{\frac {0.81-0.01}{16-11}}(c-11))(0.5-{\frac {0.5-0.05}{16-11}}(c-11))} P ( E = G D ¯ ∣ C = c ) = ( 0.01 + 0.81 − 0.01 16 − 11 ( c − 11 ) ) ( 0.5 + 0.5 − 0.05 16 − 11 ( c − 11 ) ) {\displaystyle P(E=G{\bar {D}}\mid C=c)=(0.01+{\frac {0.81-0.01}{16-11}}(c-11))(0.5+{\frac {0.5-0.05}{16-11}}(c-11))} P ( E = G ¯ D ∣ C = c ) = ( ( 1 − 0.01 ) − 0.81 − 0.01 16 − 11 ( c − 11 ) ) ( 0.5 − 0.5 − 0.05 16 − 11 ( c − 11 ) ) {\displaystyle P(E={\bar {G}}D\mid C=c)=((1-0.01)-{\frac {0.81-0.01}{16-11}}(c-11))(0.5-{\frac {0.5-0.05}{16-11}}(c-11))} P ( E = G ¯ D ¯ ∣ C = c ) = ( ( 1 − 0.01 ) − 0.81 − 0.01 16 − 11 ( c − 11 ) ) ( 0.5 + 0.5 − 0.05 16 − 11 ( c − 11 ) ) {\displaystyle P(E={\bar {G}}{\bar {D}}\mid C=c)=((1-0.01)-{\frac {0.81-0.01}{16-11}}(c-11))(0.5+{\frac {0.5-0.05}{16-11}}(c-11))} Assume a uniform prior of f C ( c ) = 0.2 {\textstyle f_{C}(c)=0.2} , and that trials are independent and identically distributed. When a new fragment of type e {\displaystyle e} is discovered, Bayes' theorem is applied to update the degree of belief for each c {\displaystyle c} : f C ( c ∣ E = e ) = P ( E = e ∣ C = c ) P ( E = e ) f C ( c ) = P ( E = e ∣ C = c ) ∫ 11 16 P ( E = e ∣ C = c ) f C ( c ) d c f C ( c ) {\displaystyle f_{C}(c\mid E=e)={\frac {P(E=e\mid C=c)}{P(E=e)}}f_{C}(c)={\frac {P(E=e\mid C=c)}{\int _{11}^{16}{P(E=e\mid C=c)f_{C}(c)dc}}}f_{C}(c)} A computer simulation of the changing belief as 50 fragments are unearthed is shown on the graph. In the simulation, the site was inhabited around 1420, or c = 15.2 {\displaystyle c=15.2} . By calculating the area under the relevant portion of the graph for 50 trials, the archaeologist can say that there is practically no chance the site was inhabited in the 11th and 12th centuries, about 1% chance that it was inhabited during the 13th century, 63% chance during the 14th century and 36% during the 15th century. The Bernstein-von Mises theorem asserts here the asymptotic convergence to the "true" distribution because the probability space corresponding to the discrete set of events { G D , G D ¯ , G ¯ D , G ¯ D ¯ } {\displaystyle \{GD,G{\bar {D}},{\bar {G}}D,{\bar {G}}{\bar {D}}\}} is finite (see above section on asymptotic behaviour of the posterior). == In frequentist statistics and decision theory == A decision-theoretic justification of the use of Bayesian inference was given by Abraham Wald, who proved that every unique Bayesian procedure is admissible. Conversely, every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures. Wald characterized admissible procedures as Bayesian procedures (and limits of Bayesian procedures), making the Bayesian formalism a central technique in such areas of frequentist inference as parameter estimation, hypothesis testing, and computing confidence intervals. For example: "Under some conditions, all admissible procedures are either Bayes procedures or limits of Bayes procedures (in various senses). These remarkable results, at least in their original form, are due essentially to Wald. They are useful because the property of being Bayes is easier to analyze than admissibility." "In decision theory, a quite general method for proving admissibility consists in exhibiting a procedure as a unique Bayes solution." 
"In the first chapters of this work, prior distributions with finite support and the corresponding Bayes procedures were used to establish some of the main theorems relating to the comparison of experiments. Bayes procedures with respect to more general prior distributions have played a very important role in the development of statistics, including its asymptotic theory." "There are many problems where a glance at posterior distributions, for suitable priors, yields immediately interesting information. Also, this technique can hardly be avoided in sequential analysis." "A useful fact is that any Bayes decision rule obtained by taking a proper prior over the whole parameter space must be admissible" "An important area of investigation in the development of admissibility ideas has been that of conventional sampling-theory procedures, and many interesting results have been obtained." === Model selection === Bayesian methodology also plays a role in model selection where the aim is to select one model from a set of competing models that represents most closely the underlying process that generated the observed data. In Bayesian model comparison, the model with the highest posterior probability given the data is selected. The posterior probability of a model depends on the evidence, or marginal likelihood, which reflects the probability that the data is generated by the model, and on the prior belief of the model. When two competing models are a priori considered to be equiprobable, the ratio of their posterior probabilities corresponds to the Bayes factor. Since Bayesian model comparison is aimed on selecting the model with the highest posterior probability, this methodology is also referred to as the maximum a posteriori (MAP) selection rule or the MAP probability rule. == Probabilistic programming == While conceptually simple, Bayesian methods can be mathematically and numerically challenging. Probabilistic programming languages (PPLs) implement functions to easily build Bayesian models together with efficient automatic inference methods. This helps separate the model building from the inference, allowing practitioners to focus on their specific problems and leaving PPLs to handle the computational details for them. == Applications == === Statistical data analysis === See the separate Wikipedia entry on Bayesian statistics, specifically the statistical modeling section in that page. === Computer applications === Bayesian inference has applications in artificial intelligence and expert systems. Bayesian inference techniques have been a fundamental part of computerized pattern recognition techniques since the late 1950s. There is also an ever-growing connection between Bayesian methods and simulation-based Monte Carlo techniques since complex models cannot be processed in closed form by a Bayesian analysis, while a graphical model structure may allow for efficient simulation algorithms like the Gibbs sampling and other Metropolis–Hastings algorithm schemes. Recently Bayesian inference has gained popularity among the phylogenetics community for these reasons; a number of applications allow many demographic and evolutionary parameters to be estimated simultaneously. As applied to statistical classification, Bayesian inference has been used to develop algorithms for identifying e-mail spam. Applications which make use of Bayesian inference for spam filtering include CRM114, DSPAM, Bogofilter, SpamAssassin, SpamBayes, Mozilla, XEAMS, and others. 
Spam classification is treated in more detail in the article on the naïve Bayes classifier. Solomonoff's Inductive inference is the theory of prediction based on observations; for example, predicting the next symbol based upon a given series of symbols. The only assumption is that the environment follows some unknown but computable probability distribution. It is a formal inductive framework that combines two well-studied principles of inductive inference: Bayesian statistics and Occam's Razor. Solomonoff's universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion. === Bioinformatics and healthcare applications === Bayesian inference has been applied in different Bioinformatics applications, including differential gene expression analysis. Bayesian inference is also used in a general cancer risk model, called CIRI (Continuous Individualized Risk Index), where serial measurements are incorporated to update a Bayesian model which is primarily built from prior knowledge. === In the courtroom === Bayesian inference can be used by jurors to coherently accumulate the evidence for and against a defendant, and to see whether, in totality, it meets their personal threshold for "beyond a reasonable doubt". Bayes' theorem is applied successively to all evidence presented, with the posterior from one stage becoming the prior for the next. The benefit of a Bayesian approach is that it gives the juror an unbiased, rational mechanism for combining evidence. It may be appropriate to explain Bayes' theorem to jurors in odds form, as betting odds are more widely understood than probabilities. Alternatively, a logarithmic approach, replacing multiplication with addition, might be easier for a jury to handle. If the existence of the crime is not in doubt, only the identity of the culprit, it has been suggested that the prior should be uniform over the qualifying population. For example, if 1,000 people could have committed the crime, the prior probability of guilt would be 1/1000. The use of Bayes' theorem by jurors is controversial. In the United Kingdom, a defence expert witness explained Bayes' theorem to the jury in R v Adams. The jury convicted, but the case went to appeal on the basis that no means of accumulating evidence had been provided for jurors who did not wish to use Bayes' theorem. The Court of Appeal upheld the conviction, but it also gave the opinion that "To introduce Bayes' Theorem, or any similar method, into a criminal trial plunges the jury into inappropriate and unnecessary realms of theory and complexity, deflecting them from their proper task." Gardner-Medwin argues that the criterion on which a verdict in a criminal trial should be based is not the probability of guilt, but rather the probability of the evidence, given that the defendant is innocent (akin to a frequentist p-value). He argues that if the posterior probability of guilt is to be computed by Bayes' theorem, the prior probability of guilt must be known. This will depend on the incidence of the crime, which is an unusual piece of evidence to consider in a criminal trial. Consider the following three propositions: A – the known facts and testimony could have arisen if the defendant is guilty. 
B – the known facts and testimony could have arisen if the defendant is innocent. C – the defendant is guilty. Gardner-Medwin argues that the jury should believe both A and not-B in order to convict. A and not-B implies the truth of C, but the reverse is not true. It is possible that B and C are both true, but in this case he argues that a jury should acquit, even though they know that they will be letting some guilty people go free. See also Lindley's paradox. === Bayesian epistemology === Bayesian epistemology is a movement that advocates for Bayesian inference as a means of justifying the rules of inductive logic. Karl Popper and David Miller have rejected the idea of Bayesian rationalism, i.e. using Bayes rule to make epistemological inferences: It is prone to the same vicious circle as any other justificationist epistemology, because it presupposes what it attempts to justify. According to this view, a rational interpretation of Bayesian inference would see it merely as a probabilistic version of falsification, rejecting the belief, commonly held by Bayesians, that high likelihood achieved by a series of Bayesian updates would prove the hypothesis beyond any reasonable doubt, or even with likelihood greater than 0. === Other === The scientific method is sometimes interpreted as an application of Bayesian inference. In this view, Bayes' rule guides (or should guide) the updating of probabilities about hypotheses conditional on new observations or experiments. The Bayesian inference has also been applied to treat stochastic scheduling problems with incomplete information by Cai et al. (2009). Bayesian search theory is used to search for lost objects. Bayesian inference in phylogeny Bayesian tool for methylation analysis Bayesian approaches to brain function investigate the brain as a Bayesian mechanism. Bayesian inference in ecological studies Bayesian inference is used to estimate parameters in stochastic chemical kinetic models Bayesian inference in econophysics for currency or prediction of trend changes in financial quotations Bayesian inference in marketing Bayesian inference in motor learning Bayesian inference is used in probabilistic numerics to solve numerical problems == Bayes and Bayesian inference == The problem considered by Bayes in Proposition 9 of his essay, "An Essay Towards Solving a Problem in the Doctrine of Chances", is the posterior distribution for the parameter a (the success rate) of the binomial distribution. == History == The term Bayesian refers to Thomas Bayes (1701–1761), who proved that probabilistic limits could be placed on an unknown event. However, it was Pierre-Simon Laplace (1749–1827) who introduced (as Principle VI) what is now called Bayes' theorem and used it to address problems in celestial mechanics, medical statistics, reliability, and jurisprudence. Early Bayesian inference, which used uniform priors following Laplace's principle of insufficient reason, was called "inverse probability" (because it infers backwards from observations to parameters, or from effects to causes). After the 1920s, "inverse probability" was largely supplanted by a collection of methods that came to be called frequentist statistics. In the 20th century, the ideas of Laplace were further developed in two different directions, giving rise to objective and subjective currents in Bayesian practice. 
In the objective or "non-informative" current, the statistical analysis depends on only the model assumed, the data analyzed, and the method assigning the prior, which differs from one objective Bayesian practitioner to another. In the subjective or "informative" current, the specification of the prior depends on the belief (that is, propositions on which the analysis is prepared to act), which can summarize information from experts, previous studies, etc. In the 1980s, there was a dramatic growth in research and applications of Bayesian methods, mostly attributed to the discovery of Markov chain Monte Carlo methods, which removed many of the computational problems, and an increasing interest in nonstandard, complex applications. Despite growth of Bayesian research, most undergraduate teaching is still based on frequentist statistics. Nonetheless, Bayesian methods are widely accepted and used, such as for example in the field of machine learning. == See also == == References == === Citations === === Sources === == Further reading == For a full report on the history of Bayesian statistics and the debates with frequentists approaches, read Vallverdu, Jordi (2016). Bayesians Versus Frequentists A Philosophical Debate on Statistical Reasoning. New York: Springer. ISBN 978-3-662-48638-2. Clayton, Aubrey (August 2021). Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science. Columbia University Press. ISBN 978-0-231-55335-3. === Elementary === The following books are listed in ascending order of probabilistic sophistication: Stone, JV (2013), "Bayes' Rule: A Tutorial Introduction to Bayesian Analysis", Download first chapter here, Sebtel Press, England. Dennis V. Lindley (2013). Understanding Uncertainty, Revised Edition (2nd ed.). John Wiley. ISBN 978-1-118-65012-7. Colin Howson & Peter Urbach (2005). Scientific Reasoning: The Bayesian Approach (3rd ed.). Open Court Publishing Company. ISBN 978-0-8126-9578-6. Berry, Donald A. (1996). Statistics: A Bayesian Perspective. Duxbury. ISBN 978-0-534-23476-8. Morris H. DeGroot & Mark J. Schervish (2002). Probability and Statistics (third ed.). Addison-Wesley. ISBN 978-0-201-52488-8. Bolstad, William M. (2007) Introduction to Bayesian Statistics: Second Edition, John Wiley ISBN 0-471-27020-2 Winkler, Robert L (2003). Introduction to Bayesian Inference and Decision (2nd ed.). Probabilistic. ISBN 978-0-9647938-4-2. Updated classic textbook. Bayesian theory clearly presented. Lee, Peter M. Bayesian Statistics: An Introduction. Fourth Edition (2012), John Wiley ISBN 978-1-1183-3257-3 Carlin, Bradley P. & Louis, Thomas A. (2008). Bayesian Methods for Data Analysis, Third Edition. Boca Raton, FL: Chapman and Hall/CRC. ISBN 978-1-58488-697-6. Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Dunson, David B.; Vehtari, Aki; Rubin, Donald B. (2013). Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC. ISBN 978-1-4398-4095-5. === Intermediate or advanced === Berger, James O (1985). Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics (Second ed.). Springer-Verlag. Bibcode:1985sdtb.book.....B. ISBN 978-0-387-96098-2. Bernardo, José M.; Smith, Adrian F. M. (1994). Bayesian Theory. Wiley. DeGroot, Morris H., Optimal Statistical Decisions. Wiley Classics Library. 2004. (Originally published (1970) by McGraw-Hill.) ISBN 0-471-68029-X. Schervish, Mark J. (1995). Theory of statistics. Springer-Verlag. ISBN 978-0-387-94546-0. Jaynes, E. T. (1998). Probability Theory: The Logic of Science. O'Hagan, A. and Forster, J. 
(2003). Kendall's Advanced Theory of Statistics, Volume 2B: Bayesian Inference. Arnold, New York. ISBN 0-340-52922-9. Robert, Christian P (2007). The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation (paperback ed.). Springer. ISBN 978-0-387-71598-8. Pearl, Judea. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, San Mateo, CA: Morgan Kaufmann. Pierre Bessière et al. (2013). "Bayesian Programming". CRC Press. ISBN 9781439880326 Francisco J. Samaniego (2010). "A Comparison of the Bayesian and Frequentist Approaches to Estimation". Springer. New York, ISBN 978-1-4419-5940-9 == External links == "Bayesian approach to statistical problems", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Bayesian Statistics from Scholarpedia. Introduction to Bayesian probability from Queen Mary University of London Mathematical Notes on Bayesian Statistics and Markov Chain Monte Carlo Bayesian reading list Archived 2011-06-25 at the Wayback Machine, categorized and annotated by Tom Griffiths A. Hajek and S. Hartmann: Bayesian Epistemology, in: J. Dancy et al. (eds.), A Companion to Epistemology. Oxford: Blackwell 2010, 93–106. S. Hartmann and J. Sprenger: Bayesian Epistemology, in: S. Bernecker and D. Pritchard (eds.), Routledge Companion to Epistemology. London: Routledge 2010, 609–620. Stanford Encyclopedia of Philosophy: "Inductive Logic" Bayesian Confirmation Theory (PDF) What is Bayesian Learning? Data, Uncertainty and Inference — Informal introduction with many examples, ebook (PDF) freely available at causaScientia
Wikipedia/Bayesian_method
Fluorescence is widely used in the life sciences as a powerful and minimally invasive method to track and analyze biological molecules in real time. Some proteins or small molecules in cells are naturally fluorescent, which is called intrinsic fluorescence or autofluorescence (examples include NADH, tryptophan, and endogenous chlorophyll, phycoerythrin or green fluorescent protein). The intrinsic fluorescence of DNA is very weak. Alternatively, specific or general proteins, nucleic acids, lipids or small molecules can be "labelled" with an extrinsic fluorophore, a fluorescent dye which can be a small molecule, protein or quantum dot. Several techniques exist to exploit additional properties of fluorophores, such as fluorescence resonance energy transfer, where the energy is passed non-radiatively to a particular neighbouring dye, allowing proximity or protein activation to be detected; another is the change in properties, such as intensity, of certain dyes depending on their environment, allowing their use in structural studies. == Fluorescence == The principle behind fluorescence is that the fluorescent moiety contains electrons which can absorb a photon and briefly enter an excited state before either dispersing the energy non-radiatively or emitting it as a photon, but with a lower energy, i.e., at a longer wavelength (wavelength and energy are inversely proportional). The difference between the excitation and emission wavelengths is called the Stokes shift, and the time that an excited electron takes to emit the photon is called the lifetime. The quantum yield is an indicator of the efficiency of the dye (it is the ratio of emitted photons per absorbed photon), and the extinction coefficient is a measure of how strongly a fluorophore absorbs light at a given wavelength. Both the quantum yield and the extinction coefficient are specific to each fluorophore, and their product gives the brightness of the fluorescent molecule. == Labelling == === Reactive dyes === Fluorophores can be attached to proteins via specific functional groups, such as amino groups (e.g. via succinimide or isothiocyanate), carboxyl groups (e.g. via activation with carbodiimide and subsequent coupling with an amine), thiols (e.g. via maleimide or iodoacetamides), or azides (e.g. via click chemistry with a terminal alkyne); they can also be attached non-specifically (e.g. with glutaraldehyde) or non-covalently (e.g. via hydrophobicity). These fluorophores are either small molecules, proteins or quantum dots. Organic fluorophores fluoresce thanks to delocalized electrons which can jump a band and stabilize the energy absorbed, hence most fluorophores are conjugated systems. Several families exist, and their excitation wavelengths range from the infrared to the ultraviolet. Chelated lanthanides are uniquely fluorescent metals, which emit via transitions involving 4f orbitals; because these transitions are forbidden, they have very low absorption coefficients and slow emission, requiring excitation through fluorescent organic chelators (e.g. dipicolinate-based terbium(III) chelators). A third class of small-molecule fluorophore is that of the transition metal-ligand complexes, which display molecular fluorescence from a partially forbidden metal-to-ligand charge transfer state; these are generally complexes of ruthenium, rhenium or osmium. === Quantum dots === Quantum dots are fluorescent semiconductor nanoparticles that are typically brighter than conventional stains. They are, however, generally more expensive and more toxic, do not permeate cell membranes, and cannot be manufactured by the cell.
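As a numerical illustration of the brightness relation described above (brightness given by extinction coefficient times quantum yield), the sketch below compares two hypothetical fluorophores; the values are invented and not measurements of real dyes, though they are of the order of magnitude typical for organic fluorophores.

```python
def brightness(extinction_coefficient, quantum_yield):
    """Relative brightness: molar extinction coefficient (M^-1 cm^-1) times quantum yield."""
    return extinction_coefficient * quantum_yield

if __name__ == "__main__":
    # Invented example values for two hypothetical dyes.
    dye_a = brightness(extinction_coefficient=80_000, quantum_yield=0.9)
    dye_b = brightness(extinction_coefficient=120_000, quantum_yield=0.3)
    print(dye_a, dye_b, dye_a / dye_b)  # 72000.0 36000.0 2.0
```

Under comparable excitation, dye A in this invented comparison would appear roughly twice as bright as dye B, despite absorbing less strongly.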
=== Fluorescent proteins === Several fluorescent proteins exist in nature, but the most important one as a research tool is Green Fluorescent Protein (GFP) from the jellyfish Aequorea victoria, which spontaneously fluoresces upon folding via specific serine-tyrosine-glycine residues. The benefit that GFP and other fluorescent proteins have over organic dyes or quantum dots is that they can be expressed exogenously in cells, alone or as a fusion protein, a protein that is created by ligating the fluorescent protein's gene (e.g., GFP) to another gene and whose expression is driven by a housekeeping gene promoter or another specific promoter. This approach allows fluorescent proteins to be used as reporters for any number of biological events, such as sub-cellular localization and expression patterns. A variant of GFP is naturally found in corals, specifically the Anthozoa, and several mutants have been created to span the visible spectrum and to fluoresce longer and more stably. Other proteins are fluorescent but require a fluorophore cofactor, and hence can only be used in vitro; these are often found in plants and algae (phytofluors, phycobiliproteins such as allophycocyanin). === Fluorescence for nucleic acid analyses === Several nucleic acid analysis techniques use fluorescence as a read-out. For example, in quantitative PCR, replication of a target nucleic acid sequence is monitored at each cycle by measuring fluorescence intensity. The progression of these measurements can be plotted with successive PCR cycles on the x-axis and, for each cycle, relative fluorescence units (RFU) on the y-axis. With successive PCR cycles, the target nucleic acid sequence replicates, which results in an increase in RFU. The cycle at which the reaction achieves an RFU distinguishable from background is known as the cycle threshold (Ct). Ct values can be compared to estimate the difference in the amount of starting material in different samples. The semi-quantitative amount of starting material in a sample can be used to estimate the abundance of a particular nucleic acid sequence. The abundance of a particular nucleic acid sequence (such as a gene) can indicate the expression of that gene. Within a single reaction, the amplification of multiple nucleic acid sequences can be monitored simultaneously by using fluorophores (e.g. FAM, VIC, Cy5) with distinguishable excitation and emission spectra; this is known as multiplexed qPCR. Fluorescence is also used for analyses of nucleic acids in techniques such as microarrays and in fluorometers. === Computational techniques === The above techniques can be combined with computational methods to estimate staining levels without staining the cell. These approaches generally rely on training a deep convolutional neural network to perform image remapping, converting a bright-field or phase image into a fluorescence image. By decoupling the training corpus from the cells under investigation, these approaches provide an avenue for using stains that are otherwise incompatible with live-cell imaging, such as antibody staining. == Bioluminescence and fluorescence == Fluorescence, chemiluminescence and phosphorescence are three different types of luminescence, i.e. emission of light from a substance. Fluorescence is a property where light is absorbed and re-emitted within a few nanoseconds (approx.
10 ns) at a lower energy (i.e. a longer wavelength), while bioluminescence is biological chemiluminescence, a property where light is generated by a chemical reaction of an enzyme on a substrate. Phosphorescence is a property of materials to absorb light and emit the energy several milliseconds or more later (due to forbidden transitions to the ground state from a triplet state, while fluorescence occurs from excited singlet states). Until recently, this was not applicable to life science research due to the size of the inorganic particles. However, the boundary between fluorescence and phosphorescence is not clear-cut, as transition metal-ligand complexes, which combine a metal and several organic moieties, have long lifetimes, up to several microseconds (as they display mixed singlet-triplet states). == Comparison with radioactivity == Before fluorescence came into widespread use over the past three decades, radioactivity was the most common label. The advantages of fluorescence over radioactive labels are as follows: Fluorescence is safer to use and does not require radiological controls. Several fluorescent molecules can be used simultaneously given that they do not overlap, cf. FRET, whereas with radioactivity two isotopes can be used (tritium and a low-energy isotope such as 33P due to different intensities), but they require special machinery (a tritium screen and a regular phosphor-imaging screen or a specific dual-channel detector). Note: a channel is similar to "colour" but distinct; it is the pair of excitation and emission filters specific for a dye, e.g. Agilent microarrays are dual channel, working on Cy3 and Cy5, colloquially referred to as green and red. Fluorescence is not necessarily more convenient to use because it requires specialized detection equipment of its own. For non-quantitative or relative quantification applications it can be useful, but it is poorly suited for making absolute measurements because of fluorescence quenching, whereas measuring radioactively labeled molecules is always direct and highly sensitive. Disadvantages of fluorophores include: significant changes to the properties of a fluorescently-labeled molecule; interference with normal biological processes; toxicity. == Additional useful properties == The basic properties of fluorescence are extensively used, such as a marker of labelled components in cells (fluorescence microscopy) or as an indicator in solution (fluorescence spectroscopy), but other additional properties, not found with radioactivity, make it even more extensively used. === FRET === FRET (Förster resonance energy transfer) is a property in which the energy of the excited electron of one fluorophore, called the donor, is passed on to a nearby acceptor dye, either a dark quencher or another fluorophore, whose excitation spectrum overlaps with the emission spectrum of the donor dye, resulting in reduced donor fluorescence. This can be used to: detect if two labelled proteins or nucleic acids come into contact or if a doubly labelled single molecule is hydrolysed; detect changes in conformation; and measure concentration by a competitive binding assay. === Sensitivity to environment === Environment-sensitive dyes change their properties (intensity, half-life, and excitation and emission spectra) depending on the polarity (hydrophobicity and charge) of their environments. Examples include: Indole, Cascade Yellow, prodan, Dansyl, Dapoxyl, NBD, PyMPO, Pyrene and diethylaminocoumarin.
This change is most pronounced when electron-donating and electron-withdrawing groups are placed at opposite ends of an aromatic ring system, as this results in a large change in dipole moment when excited. When a fluorophore is excited, it generally has a larger dipole moment (μE) than in the ground state (μG). Absorption of a photon by a fluorophore takes a few picoseconds. Before this energy is released (emission: 1–10 ns), the solvent molecules surrounding the fluorophore reorient (10–100 ps) due to the change in polarity in the excited singlet state; this process is called solvent relaxation. As a result of this relaxation, the energy of the excited state of the fluorophore is lowered (longer wavelength), hence fluorophores that have a large change in dipole moment show larger Stokes shift changes in different solvents. The difference between the energy levels can be roughly determined with the Lippert-Mataga equation. A hydrophobic dye is a dye which is insoluble in water, a property independent of solvatochromism. Additionally, the term environment-sensitive in chemistry actually describes changes due to one of a variety of different environmental factors, such as pH or temperature, not just polarity; however, in biochemistry environment-sensitive fluorophore and solvatochromic fluorophore are used interchangeably; this convention is so widespread that suppliers describe such dyes as environment-sensitive rather than solvatochromic. === Fluorescence lifetime === Fluorescent moieties emit photons several nanoseconds after absorption following an exponential decay curve, which differs between dyes and depends on the surrounding solvent. When the dye is attached to a macromolecule, the decay curve becomes multiexponential. Conjugated dyes generally have a lifetime of 1–10 ns; a small number of longer-lived exceptions exist, notably pyrene, with a lifetime of 400 ns in degassed solvents or 100 ns in lipids, and coronene, with 200 ns. A different category of fluorophores is the fluorescent organometallics (the lanthanides and transition metal-ligand complexes described previously), which have much longer lifetimes due to their restricted states: lanthanides have lifetimes of 0.5 to 3 ms, while transition metal-ligand complexes have lifetimes of 10 ns to 10 μs. Note that the fluorescence lifetime should not be confused with the photodestruction lifetime or the "shelf-life" of a dye. === Multiphoton excitation === Multiphoton excitation is a way of focusing the viewing plane of the microscope by taking advantage of the phenomenon where two simultaneous low-energy photons are absorbed by a fluorescent moiety which normally absorbs one photon with double their individual energy: say two NIR photons (800 nm) to excite a UV dye (400 nm). === Fluorescence anisotropy === A perfectly immobile fluorescent moiety, when excited with polarized light, will emit light which is also polarized. However, if a molecule is moving, it will tend to "scramble" the polarization of the light by radiating in a direction different from the incident light. === Fluorescent thermometry === Some fluorescent chemicals exhibit significant fluorescence quenching when exposed to increasing temperatures. This effect has been used to measure and examine the thermogenic properties of mitochondria. This involves placing mitochondria-targeting thermosensitive fluorophores inside cells, which naturally localise inside the mitochondria due to the inner mitochondrial membrane matrix-face's negative charge (as the fluorophores are cationic).
The fluorescence emission of these fluorophores is inversely proportional to their temperature, and thus by measuring the fluorescent output, the temperature of actively-respiring mitochondria can be deduced. The fluorophores used are typically lipophilic cations derived from Rhodamine B, such as ThermoFisher's MitoTracker probes. This technique has contributed significantly to the general scientific consensus that mitochondria are physiologically maintained at close to 50 °C, more than 10 °C above the rest of the cell. The inverse relationship between fluorescence and temperature can be explained by the change in the number of atomic collisions in the fluorophore's environment, depending on the kinetic energy. Collisions promote radiationless decay and the loss of extra energy as heat, so more frequent or more forceful collisions reduce fluorescence emission. This temperature-measurement technique is, however, limited. These cationic fluorophores are heavily influenced by the charge of the inner mitochondrial membrane matrix-face, which depends on the cell type. For example, the thermosensitive fluorophore MTY (MitoTracker Yellow) shows a sudden and drastic drop in fluorescence after the addition of oligomycin (an ATP synthase inhibitor) to the mitochondria of human primary fibroblasts. This would suggest a sharp increase in mitochondrial temperature but is, in reality, explained by the hyperpolarisation of the mitochondrial inner membrane by oligomycin, leading to the breakdown of the positively-charged MTY fluorophore. == Methods == Fluorescence microscopy of tissues, cells or subcellular structures is accomplished by labeling an antibody with a fluorophore and allowing the antibody to find its target antigen within the sample. Labeling multiple antibodies with different fluorophores allows visualization of multiple targets within a single image. Automated sequencing of DNA by the chain termination method; each of four different chain-terminating bases has its own specific fluorescent tag. As the labeled DNA molecules are separated, the fluorescent label is excited by a UV source, and the identity of the base terminating the molecule is identified by the wavelength of the emitted light. DNA detection: the compound ethidium bromide, when free to change its conformation in solution, has very little fluorescence. Ethidium bromide's fluorescence is greatly enhanced when it binds to DNA, so this compound is very useful in visualising the location of DNA fragments in agarose gel electrophoresis. Ethidium bromide can be toxic – a purportedly safer alternative is the dye SYBR Green. The DNA microarray. Immunology: An antibody has a fluorescent chemical group attached, and the sites (e.g., on a microscopic specimen) where the antibody has bound can be seen, and even quantified, by the fluorescence. FACS (fluorescence-activated cell sorting). Microscale thermophoresis (MST) uses fluorescence as a readout to quantify the directed movement of biomolecules in microscopic temperature gradients. Fluorescence has been used to study the structure and conformations of DNA and proteins with techniques such as fluorescence resonance energy transfer, which measures distance at the angstrom level. This is especially important in complexes of multiple biomolecules. Fluorescence can be applied to study colocalization of various proteins of interest. It can then be analyzed using specialized software, such as CoLocalizer Pro.
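The colocalization analysis mentioned above is often summarised with a correlation coefficient between two fluorescence channels. The following is a minimal sketch, assuming two same-sized single-channel images stored as NumPy arrays; the synthetic image data and background threshold are placeholders, and dedicated tools (such as CoLocalizer Pro or ImageJ plugins) implement more rigorous statistics such as Manders coefficients or Costes thresholding.

```python
import numpy as np

def pearson_colocalization(channel_a: np.ndarray, channel_b: np.ndarray,
                           background: float = 0.0) -> float:
    """Pearson correlation between two same-sized fluorescence images,
    considering only pixels above a background threshold in either channel."""
    a = channel_a.astype(float).ravel()
    b = channel_b.astype(float).ravel()
    mask = (a > background) | (b > background)   # keep pixels with signal in either channel
    a, b = a[mask], b[mask]
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical two-channel image data (e.g. green- and red-labelled proteins)
rng = np.random.default_rng(0)
green = rng.poisson(lam=50, size=(128, 128))
red = green * 0.8 + rng.poisson(lam=10, size=(128, 128))   # partially colocalized by construction

print(f"Pearson colocalization coefficient: {pearson_colocalization(green, red):.2f}")
```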
Also, many biological molecules have an intrinsic fluorescence that can sometimes be used without the need to attach a chemical tag. Sometimes this intrinsic fluorescence changes when the molecule is in a specific environment, so the distribution or binding of the molecule can be measured. Bilirubin, for instance, is highly fluorescent when bound to a specific site on serum albumin. Zinc protoporphyrin, formed in developing red blood cells instead of hemoglobin when iron is unavailable or lead is present, has a bright fluorescence and can be used to detect these problems. The number of fluorescence applications in the biomedical, biological and related sciences continuously expands. Methods of analysis in these fields are also growing, often with nomenclature in the form of acronyms such as: FLIM, FLI, FLIP, CALI, FLIE, FRET, FRAP, FCS, PFRAP, smFRET, FIONA, FRIPS, SHREK, SHRIMP or TIRF. Most of these techniques rely on fluorescence microscopes, which use high-intensity light sources, usually mercury or xenon lamps, LEDs, or lasers, to excite fluorescence in the samples under observation. Optical filters then separate excitation light from emitted fluorescence to be detected by eye or with a CCD camera or other light detectors (e.g., photomultiplier tubes, spectrographs). Considerable research is underway to improve the capabilities of such microscopes, the fluorescent probes used, and their applications. Of particular note are confocal microscopes, which use a pinhole to achieve optical sectioning, affording a quantitative, 3D view of the sample. == See also == Fluorophore Fluorescent microscopy Fluorescence imaging Fluorescent glucose biosensors Fluoroscopy == References ==
Wikipedia/Fluorescence_in_the_life_sciences
Nucleic acid methods are the techniques used to study nucleic acids: DNA and RNA. == Purification == DNA extraction Phenol–chloroform extraction Minicolumn purification RNA extraction Boom method Synchronous coefficient of drag alteration (SCODA) DNA purification == Quantification == Abundance in weight: spectroscopic nucleic acid quantitation Absolute abundance in number: real-time polymerase chain reaction (quantitative PCR) High-throughput relative abundance: DNA microarray High-throughput absolute abundance: serial analysis of gene expression (SAGE) Size: gel electrophoresis == Synthesis == De novo: oligonucleotide synthesis Amplification: polymerase chain reaction (PCR), loop-mediated isothermal amplification (LAMP), transcription-mediated amplification (TMA) == Kinetics == Multi-parametric surface plasmon resonance Dual-polarization interferometry Quartz crystal microbalance with dissipation monitoring (QCM-D) == Gene function == RNA interference == Other == Bisulfite sequencing DNA sequencing Expression cloning Fluorescence in situ hybridization Lab-on-a-chip Comparison of nucleic acid simulation software Northern blot Nuclear run-on assay Radioactivity in the life sciences Southern blot Differential centrifugation (sucrose gradient) Toeprinting assay Several bioinformatics methods, as seen in list of RNA structure prediction software == See also == CSH Protocols Current Protocols == References == == External links == Protocols for Recombinant DNA Isolation, Cloning, and Sequencing
Wikipedia/Nucleic_acid_methods
A case–control study (also known as case–referent study) is a type of observational study in which two existing groups differing in outcome are identified and compared on the basis of some supposed causal attribute. Case–control studies are often used to identify factors that may contribute to a medical condition by comparing subjects who have the condition with patients who do not have the condition but are otherwise similar. They require fewer resources but provide less evidence for causal inference than a randomized controlled trial. A case–control study is often used to produce an odds ratio. Some statistical methods make it possible to use a case–control study to also estimate relative risk, risk differences, and other quantities. == Definition == Porta's Dictionary of Epidemiology defines the case–control study as: "an observational epidemiological study of persons with the disease (or another outcome variable) of interest and a suitable control group of persons without the disease (comparison group, reference group). The potential relationship of a suspected risk factor or an attribute to the disease is examined by comparing the diseased and nondiseased subjects with regard to how frequently the factor or attribute is present (or, if quantitative, the levels of the attribute) in each of the groups (diseased and nondiseased)." The case–control study is frequently contrasted with cohort studies, wherein exposed and unexposed subjects are observed until they develop an outcome of interest. === Control group selection === Controls need not be in good health; inclusion of sick people is sometimes appropriate, as the control group should represent those at risk of becoming a case. Controls should come from the same population as the cases, and their selection should be independent of the exposures of interest. Controls can carry the same disease as the experimental group, but of another grade/severity, therefore being different from the outcome of interest. However, because the difference between the cases and the controls will be smaller, this results in a lower power to detect an exposure effect. As with any epidemiological study, greater numbers in the study will increase the power of the study. Numbers of cases and controls do not have to be equal. In many situations, it is much easier to recruit controls than to find cases. Increasing the number of controls above the number of cases, up to a ratio of about 4 to 1, may be a cost-effective way to improve the study. === Prospective vs. retrospective cohort studies === A prospective study watches for outcomes, such as the development of a disease, during the study period and relates this to other factors such as suspected risk or protection factor(s). The study usually involves taking a cohort of subjects and watching them over a long period. The outcome of interest should be common; otherwise, the number of outcomes observed will be too small to be statistically meaningful (indistinguishable from those that may have arisen by chance). All efforts should be made to avoid sources of bias such as the loss of individuals to follow up during the study. Prospective studies usually have fewer potential sources of bias and confounding than retrospective studies. A retrospective study, on the other hand, looks backwards and examines exposures to suspected risk or protection factors in relation to an outcome that is established at the start of the study. 
Many valuable case–control studies, such as Lane-Claypon's 1926 investigation of risk factors for breast cancer, were retrospective investigations. Most sources of error due to confounding and bias are more common in retrospective studies than in prospective studies. For this reason, retrospective investigations are often criticised. If the outcome of interest is uncommon, however, the size of prospective investigation required to estimate relative risk is often too large to be feasible. In retrospective studies the odds ratio provides an estimate of relative risk. One should take special care to avoid sources of bias and confounding in retrospective studies. == Strengths and weaknesses == Case–control studies are a relatively inexpensive and frequently used type of epidemiological study that can be carried out by small teams or individual researchers in single facilities in a way that more structured experimental studies often cannot be. They have pointed the way to a number of important discoveries and advances. The case–control study design is often used in the study of rare diseases or as a preliminary study where little is known about the association between the risk factor and disease of interest. Compared to prospective cohort studies they tend to be less costly and shorter in duration. In several situations, they have greater statistical power than cohort studies, which must often wait for a 'sufficient' number of disease events to accrue. Case–control studies are observational in nature and thus do not provide the same level of evidence as randomized controlled trials. The results may be confounded by other factors, to the extent of giving the opposite answer to better studies. A meta-analysis of what were considered to be 30 high-quality studies concluded that use of a product halved a risk, when in fact the risk was, if anything, increased. It may also be more difficult to establish the timeline of exposure to disease outcome in the setting of a case–control study than within a prospective cohort study design where the exposure is ascertained prior to following the subjects over time in order to ascertain their outcome status. The most important drawback in case–control studies relates to the difficulty of obtaining reliable information about an individual's exposure status over time. Case–control studies are therefore placed low in the hierarchy of evidence. == Examples == One of the most significant triumphs of the case–control study was the demonstration of the link between tobacco smoking and lung cancer, by Richard Doll and Bradford Hill. They showed a statistically significant association in a large case–control study. Opponents argued for many years that this type of study cannot prove causation, but the eventual results of cohort studies confirmed the causal link which the case–control studies suggested, and it is now accepted that tobacco smoking is the cause of about 87% of all lung cancer mortality in the US. == Analysis == Case–control studies were initially analyzed by testing whether or not there were significant differences between the proportion of exposed subjects among cases and controls. Subsequently, Cornfield pointed out that, when the disease outcome of interest is rare, the odds ratio of exposure can be used to estimate the relative risk (see rare disease assumption). The validity of the odds ratio depends highly on the nature of the disease studied, on the sampling methodology and on the type of follow-up.
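Before turning to extensions of the classical design, here is a minimal Python sketch of the exposure odds ratio described above, computed from a hypothetical 2×2 table of exposed/unexposed cases and controls together with an approximate 95% confidence interval (via the usual Woolf log-odds-ratio normal approximation); the counts are invented purely for illustration.

```python
import math

def odds_ratio_with_ci(exposed_cases: int, unexposed_cases: int,
                       exposed_controls: int, unexposed_controls: int,
                       z: float = 1.96):
    """Odds ratio and approximate 95% CI from a 2x2 case-control table."""
    a, b, c, d = exposed_cases, unexposed_cases, exposed_controls, unexposed_controls
    odds_ratio = (a * d) / (b * c)
    # Standard error of log(OR) using the Woolf approximation
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, (lower, upper)

# Hypothetical counts: 40 of 100 cases exposed, 20 of 100 controls exposed
or_, ci = odds_ratio_with_ci(exposed_cases=40, unexposed_cases=60,
                             exposed_controls=20, unexposed_controls=80)
print(f"Odds ratio = {or_:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Under the rare disease assumption noted above, this exposure odds ratio approximates the relative risk.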
Although in classical case–control studies it remains true that the odds ratio can only approximate the relative risk in the case of rare diseases, there are a number of other types of studies (case–cohort, nested case–control, cohort studies) in which it was later shown that the odds ratio of exposure can be used to estimate the relative risk or the incidence rate ratio of exposure without the need for the rare disease assumption. When the logistic regression model is used to model the case–control data and the odds ratio is of interest, both the prospective and retrospective likelihood methods will lead to identical maximum likelihood estimates for the covariates, except for the intercept. The usual methods of estimating more interpretable parameters than odds ratios, such as risk ratios, levels, and differences, are biased if applied to case–control data, but special statistical procedures provide easy-to-use consistent estimators. == Impact on longevity and public health == Tetlock and Gardner claimed that the contributions of medical science to increasing human longevity and public health were negligible, and too often negative, until Scottish physician Archie Cochrane was able to convince the medical establishment to adopt randomized controlled trials after World War II. == See also == Nested case–control study Retrospective cohort study Prospective cohort study Randomized controlled trial == References == == Further reading == Stolley, Paul D., Schlesselman, James J. (1982). Case–control studies: design, conduct, analysis. Oxford [Oxfordshire]: Oxford University Press. ISBN 0-19-502933-X. (Still a very useful book, and a great place to start, but now a bit out of date.) == External links == Wellcome Trust Case Control Consortium
Wikipedia/Case-control
The two-rays ground-reflection model is a multipath radio propagation model which predicts the path losses between a transmitting antenna and a receiving antenna when they are in line of sight (LOS). Generally, the two antennas each have a different height. The received signal has two components, the LOS component and the reflection component formed predominantly by a single ground-reflected wave. The 2-ray ground reflection model is a simplified propagation model used to estimate the path loss between a transmitter and a receiver in wireless communication systems, in order to estimate the actual communication paths used. It assumes that the signal propagates through two paths: 1) Direct path: a direct line-of-sight path between the transmitter and receiver antennas. 2) Reflected path: the path through which the signal reflects off the ground before reaching the receiver. == Mathematical derivation == From the figure, the received line-of-sight component may be written as {\displaystyle r_{los}(t)=Re\left\{{\frac {\lambda {\sqrt {G_{los}}}}{4\pi }}\times {\frac {s(t)e^{-j2\pi l/\lambda }}{l}}\right\}} and the ground-reflected component may be written as {\displaystyle r_{gr}(t)=Re\left\{{\frac {\lambda \Gamma (\theta ){\sqrt {G_{gr}}}}{4\pi }}\times {\frac {s(t-\tau )e^{-j2\pi (x+x')/\lambda }}{x+x'}}\right\}} where {\displaystyle s(t)} is the transmitted signal, {\displaystyle l} is the length of the direct line-of-sight (LOS) ray, {\displaystyle x+x'} is the length of the ground-reflected ray, {\displaystyle G_{los}} is the combined antenna gain along the LOS path, {\displaystyle G_{gr}} is the combined antenna gain along the ground-reflected path, {\displaystyle \lambda } is the wavelength of the transmission ({\displaystyle \lambda ={\frac {c}{f}}}, where {\displaystyle c} is the speed of light and {\displaystyle f} is the transmission frequency), {\displaystyle \Gamma (\theta )} is the ground reflection coefficient and {\displaystyle \tau } is the delay spread of the model, which equals {\displaystyle (x+x'-l)/c}. The ground reflection coefficient is {\displaystyle \Gamma (\theta )={\frac {\sin \theta -X}{\sin \theta +X}}} where {\displaystyle X=X_{h}} or {\displaystyle X=X_{v}} depending on whether the signal is horizontally or vertically polarized, respectively. {\displaystyle X} is computed as follows: {\displaystyle X_{h}={\sqrt {\varepsilon _{g}-{\cos }^{2}\theta }},\ X_{v}={\frac {\sqrt {\varepsilon _{g}-{\cos }^{2}\theta }}{\varepsilon _{g}}}={\frac {X_{h}}{\varepsilon _{g}}}} The constant {\displaystyle \varepsilon _{g}} is the relative permittivity of the ground (or, generally speaking, of the material where the signal is being reflected), and {\displaystyle \theta } is the angle between the ground and the reflected ray, as shown in the figure above.
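As a small numerical sketch of the reflection coefficient just defined, the Python snippet below evaluates Γ(θ) for horizontal and vertical polarization. The relative ground permittivity of 15 and the 1-degree grazing angle are placeholder assumptions, and a lossless, purely real permittivity is assumed for simplicity.

```python
import math

def reflection_coefficient(theta_rad: float, eps_g: float,
                           polarization: str = "horizontal") -> float:
    """Ground reflection coefficient Gamma(theta) for a real relative permittivity eps_g."""
    x_h = math.sqrt(eps_g - math.cos(theta_rad) ** 2)
    x = x_h if polarization == "horizontal" else x_h / eps_g   # X_v = X_h / eps_g
    return (math.sin(theta_rad) - x) / (math.sin(theta_rad) + x)

# Placeholder values: relative ground permittivity 15, grazing angle 1 degree
theta = math.radians(1.0)
for pol in ("horizontal", "vertical"):
    print(f"{pol}: Gamma = {reflection_coefficient(theta, eps_g=15.0, polarization=pol):+.3f}")
```

At small grazing angles both polarizations give a coefficient close to -1, which motivates the far-field approximation used below.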
From the geometry of the figure: {\displaystyle x+x'={\sqrt {(h_{t}+h_{r})^{2}+d^{2}}}} and {\displaystyle l={\sqrt {(h_{t}-h_{r})^{2}+d^{2}}}}. Therefore, the path-length difference between them is {\displaystyle \Delta d=x+x'-l={\sqrt {(h_{t}+h_{r})^{2}+d^{2}}}-{\sqrt {(h_{t}-h_{r})^{2}+d^{2}}}} and the phase difference between the waves is {\displaystyle \Delta \phi ={\frac {2\pi \Delta d}{\lambda }}} The power of the signal received is {\displaystyle P_{r}=E\{|r_{los}(t)+r_{gr}(t)|^{2}\}} where {\displaystyle E\{\cdot \}} denotes the average (over time) value. === Approximation === If the signal is narrowband relative to the inverse delay spread {\displaystyle 1/\tau }, so that {\displaystyle s(t)\approx s(t-\tau )}, the power equation may be simplified to {\displaystyle {\begin{aligned}P_{r}=E\{|s(t)|^{2}\}\left({\frac {\lambda }{4\pi }}\right)^{2}\times \left|{\frac {{\sqrt {G_{los}}}\times e^{-j2\pi l/\lambda }}{l}}+\Gamma (\theta ){\sqrt {G_{gr}}}{\frac {e^{-j2\pi (x+x')/\lambda }}{x+x'}}\right|^{2}&=P_{t}\left({\frac {\lambda }{4\pi }}\right)^{2}\times \left|{\frac {\sqrt {G_{los}}}{l}}+\Gamma (\theta ){\sqrt {G_{gr}}}{\frac {e^{-j\Delta \phi }}{x+x'}}\right|^{2}\end{aligned}}} where {\displaystyle P_{t}=E\{|s(t)|^{2}\}} is the transmitted power. When the distance between the antennas {\displaystyle d} is very large relative to the antenna heights we may expand {\displaystyle \Delta d=x+x'-l}: {\displaystyle {\begin{aligned}\Delta d=x+x'-l=d{\Bigg (}{\sqrt {{\frac {(h_{t}+h_{r})^{2}}{d^{2}}}+1}}-{\sqrt {{\frac {(h_{t}-h_{r})^{2}}{d^{2}}}+1}}{\Bigg )}\end{aligned}}} Using the Taylor series of {\displaystyle {\sqrt {1+x}}}: {\displaystyle {\sqrt {1+x}}=1+\textstyle {\frac {1}{2}}x-{\frac {1}{8}}x^{2}+\dots ,} and taking the first two terms only, {\displaystyle x+x'-l\approx {\frac {d}{2}}\times \left({\frac {(h_{t}+h_{r})^{2}}{d^{2}}}-{\frac {(h_{t}-h_{r})^{2}}{d^{2}}}\right)={\frac {2h_{t}h_{r}}{d}}} The phase difference can then be approximated as {\displaystyle \Delta \phi \approx {\frac {4\pi h_{t}h_{r}}{\lambda d}}} When {\displaystyle d} is large, {\displaystyle d\gg (h_{t}+h_{r})}, {\displaystyle {\begin{aligned}d&\approx l\approx x+x',\ \Gamma (\theta )\approx -1,\ G_{los}\approx G_{gr}=G\end{aligned}}} and hence {\displaystyle P_{r}\approx P_{t}\left({\frac {\lambda {\sqrt {G}}}{4\pi d}}\right)^{2}\times |1-e^{-j\Delta \phi }|^{2}} Expanding {\displaystyle e^{-j\Delta \phi }} using the Taylor series {\displaystyle e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}=1+x+{\frac {x^{2}}{2}}+{\frac {x^{3}}{6}}+\cdots } and retaining only the first two terms {\displaystyle e^{-j\Delta \phi }\approx 1+({-j\Delta \phi })+\cdots =1-j\Delta \phi } it follows that {\displaystyle {\begin{aligned}P_{r}&\approx P_{t}\left({\frac {\lambda {\sqrt {G}}}{4\pi d}}\right)^{2}\times |1-(1-j\Delta \phi )|^{2}\\&=P_{t}\left({\frac {\lambda {\sqrt {G}}}{4\pi d}}\right)^{2}\times \Delta \phi ^{2}\\&=P_{t}\left({\frac {\lambda {\sqrt {G}}}{4\pi d}}\right)^{2}\times \left({\frac {4\pi h_{t}h_{r}}{\lambda d}}\right)^{2}\\&=P_{t}{\frac {Gh_{t}^{2}h_{r}^{2}}{d^{4}}}\end{aligned}}} so that {\displaystyle P_{r}\approx P_{t}{\frac {Gh_{t}^{2}h_{r}^{2}}{d^{4}}}} and the path loss is {\displaystyle PL={\frac {P_{t}}{P_{r}}}={\frac {d^{4}}{Gh_{t}^{2}h_{r}^{2}}}} which is accurate in the far-field region, i.e. when {\displaystyle \Delta \phi \ll 1} (angles are measured here in radians, not degrees) or, equivalently, {\displaystyle d\gg {\frac {4\pi h_{t}h_{r}}{\lambda }}} and where the combined antenna gain is the product of the transmit and receive antenna gains, {\displaystyle G=G_{t}G_{r}}. This formula was first obtained by B.A. Vvedenskij. Note that the power decreases as the inverse fourth power of the distance in the far field, which is explained by the destructive combination of the direct and reflected paths, which are roughly the same in magnitude and are 180 degrees different in phase. {\displaystyle G_{t}P_{t}} is called the "effective isotropic radiated power" (EIRP), which is the transmit power required to produce the same received power if the transmit antenna were isotropic. == In logarithmic units == In logarithmic units: {\displaystyle P_{r_{\text{dBm}}}=P_{t_{\text{dBm}}}+10\log _{10}(Gh_{t}^{2}h_{r}^{2})-40\log _{10}(d)} Path loss: {\displaystyle PL\;=P_{t_{\text{dBm}}}-P_{r_{\text{dBm}}}\;=40\log _{10}(d)-10\log _{10}(Gh_{t}^{2}h_{r}^{2})} == Power vs. distance characteristics == When the distance {\displaystyle d} between the antennas is less than the transmitting antenna height, the two waves add constructively to yield bigger power. As the distance increases, these waves add up constructively and destructively, giving regions of up-fade and down-fade. As the distance increases beyond the critical distance {\displaystyle d_{c}} or first Fresnel zone, the power drops proportionally to the inverse fourth power of {\displaystyle d}. An approximation to the critical distance may be obtained by setting Δφ to π, as the critical distance corresponds to a local maximum.
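To illustrate these relations numerically, here is a minimal Python sketch comparing the exact narrowband two-ray received power with the far-field inverse-fourth-power approximation. The transmit power, frequency, antenna heights and gains are placeholder assumptions, and a perfect ground reflection (Γ = -1) with equal gains on both paths is assumed for simplicity.

```python
import cmath, math

def two_ray_received_power(pt_w, d, ht, hr, freq_hz, gain=1.0, gamma=-1.0):
    """Narrowband two-ray received power (W), assuming equal gains on both paths."""
    lam = 3e8 / freq_hz
    l = math.hypot(d, ht - hr)    # direct (LOS) path length
    r = math.hypot(d, ht + hr)    # ground-reflected path length
    field = (math.sqrt(gain) * cmath.exp(-2j * math.pi * l / lam) / l
             + gamma * math.sqrt(gain) * cmath.exp(-2j * math.pi * r / lam) / r)
    return pt_w * (lam / (4 * math.pi)) ** 2 * abs(field) ** 2

def far_field_approximation(pt_w, d, ht, hr, gain=1.0):
    """P_r ~ P_t * G * ht^2 * hr^2 / d^4, valid well beyond the critical distance."""
    return pt_w * gain * ht ** 2 * hr ** 2 / d ** 4

# Placeholder link parameters: 1 W transmitter at 900 MHz, 30 m and 1.5 m antennas
pt, f, ht, hr = 1.0, 900e6, 30.0, 1.5
for d in (100.0, 1_000.0, 10_000.0):
    exact = two_ray_received_power(pt, d, ht, hr, f)
    approx = far_field_approximation(pt, d, ht, hr)
    print(f"d = {d:7.0f} m   exact: {10*math.log10(exact):7.1f} dBW   "
          f"approx: {10*math.log10(approx):7.1f} dBW")
```

Below the critical distance the exact curve oscillates between up-fades and down-fades, while well beyond it the two results converge, consistent with the 40 dB/decade slope discussed in the multi-slope section below.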
== Propagation modeling for high-altitude platforms, UAVs, drones, etc. == The above large antenna height extension can be used for modeling a ground-to-air propagation channel, as in the case of an airborne communication node, e.g. a UAV, drone, or high-altitude platform. When the airborne node altitude is medium to high, the relationship {\displaystyle d\gg (h_{t}+h_{r})} does not hold anymore, the clearance angle is not small and, consequently, {\displaystyle \Gamma \approx -1} does not hold either. This has a profound impact on the propagation path loss, the typical fading depth and the fading margin required for reliable communication (low outage probability). == As a case of log distance path loss model == The standard expression of the log-distance path loss model in [dB] is {\displaystyle PL\;=P_{T_{dBm}}-P_{R_{dBm}}\;=\;PL_{0}\;+\;10\nu \;\log _{10}{\frac {d}{d_{0}}}\;+\;X_{g},} where {\displaystyle X_{g}} is the large-scale (log-normal) fading, {\displaystyle d_{0}} is a reference distance at which the path loss is {\displaystyle PL_{0}}, and {\displaystyle \nu } is the path loss exponent; typically {\displaystyle \nu =2...4}. This model is particularly well-suited for measurements, whereby {\displaystyle PL_{0}} and {\displaystyle \nu } are determined experimentally; {\displaystyle d_{0}} is selected for convenience of measurements and to have clear line-of-sight. This model is also a leading candidate for 5G and 6G systems and is also used for indoor communications, see e.g. and references therein. The path loss [dB] of the 2-ray model is formally a special case with {\displaystyle \nu =4}: {\displaystyle PL\;=P_{t_{dBm}}-P_{r_{dBm}}\;=40\log _{10}(d)-10\log _{10}(Gh_{t}^{2}h_{r}^{2})} where {\displaystyle d_{0}=1}, {\displaystyle X_{g}=0}, and {\displaystyle PL_{0}=-10\log _{10}(Gh_{t}^{2}h_{r}^{2})}, which is valid in the far field, {\displaystyle d>d_{c}=4\pi h_{r}h_{t}/\lambda } = the critical distance. == As a case of multi-slope model == The 2-ray ground-reflected model may be thought of as a case of the multi-slope model with a break point at the critical distance, with a slope of 20 dB/decade before the critical distance and a slope of 40 dB/decade after the critical distance. Using the free-space and two-ray models above, the propagation path loss can be expressed as {\displaystyle L=\max\{G,L_{min},L_{FS},L_{2-ray}\}} where {\displaystyle L_{FS}=(4\pi d/\lambda )^{2}} and {\displaystyle L_{2-ray}=d^{4}/(h_{t}h_{r})^{2}} are the free-space and 2-ray path losses; {\displaystyle L_{min}} is a minimum path loss (at the smallest distance); in practice, {\displaystyle L_{min}\approx 20} dB or so. Note that {\displaystyle L\geq G} and also {\displaystyle L\geq 1} follow from the law of energy conservation (since the Rx power cannot exceed the Tx power), so that both {\displaystyle L_{FS}=(4\pi d/\lambda )^{2}} and {\displaystyle L_{2-ray}=d^{4}/(h_{t}h_{r})^{2}} break down when {\displaystyle d} is small enough.
This should be kept in mind when using these approximations at small distances (ignoring this limitation sometimes produces absurd results). == See also == Path loss Radio propagation model Free-space path loss Friis transmission equation ITU-R P.525 Link budget Ray tracing (physics) Reflection (physics) Specular reflection Six-rays model Ten-rays model == References == == Further reading == S. Salous, Radio Propagation Measurement and Channel Modelling, Wiley, 2013. J.S. Seybold, Introduction to RF propagation, Wiley, 2005. K. Siwiak, Radiowave Propagation and Antennas for Personal Communications, Artech House, 1998. M.P. Doluhanov, Radiowave Propagation, Moscow: Sviaz, 1972. V.V. Nikolskij, T.I. Nikolskaja, Electrodynamics and Radiowave Propagation, Moscow: Nauka, 1989. 3GPP TR 38.901, Study on Channel Model for Frequencies from 0.5 to 100 GHz (Release 16), Sophia Antipolis, France, 2019 [2] Recommendation ITU-R P.1238-8: Propagation data and prediction methods for the planning of indoor radiocommunication systems and radio local area networks in the frequency range 300 MHz to 100 GHz [3] S. Loyka, ELG4179: Wireless Communication Fundamentals, Lecture Notes (Lec. 2-4), University of Ottawa, Canada, 2021 [4]
Wikipedia/Two-ray_ground-reflection_model
The Graduate School and University Center of the City University of New York (CUNY Graduate Center) is a public research institution and postgraduate university in New York City. Formed in 1961 as Division of Graduate Studies at City University of New York, it was renamed to Graduate School and University Center in 1969. Serving as the principal doctorate-granting institution of the City University of New York (CUNY) system, CUNY Graduate Center is classified among "R1: Doctoral Universities – Very High Research Activity". CUNY Graduate Center is located at the B. Altman and Company Building at 365 Fifth Avenue in Midtown Manhattan. It offers 32 doctoral programs, 18 master's programs, and operates over 30 research centers and institutes. The Graduate Center employs a core faculty of approximately 130, in addition to over 1,700 faculty members appointed from other CUNY campuses throughout New York City. As of fall 2025, the Graduate Center enrolls over 3,100 students, of which 2,600 are doctoral students. For the fall 2024 semester, the average acceptance rate across all doctoral programs at the CUNY Graduate Center was 16.3%. The Graduate Center's primary library, named after the American mathematician Mina Rees, is part of the CUNY library network of 31 colleges that collectively holds over 6.2 million volumes. Since 1968, the CUNY Graduate Center has maintained an agreement with the New York Public Library, which gives faculty and students increased borrowing privileges at NYPL's research collections at the Stephen A. Schwarzman Building. The Graduate Center building also houses the James Gallery, which is an independent exhibition space open to the public, and television studios for NYC Media and CUNY TV. The faculty of the CUNY Graduate Center include recipients of the Nobel Prize, the Abel Prize, Pulitzer Prize, the National Humanities Medal, the National Medal of Science, the National Endowment for the Humanities, the Rockefeller Fellowship, the Schock Prize, the Bancroft Prize, the Wolf Prize, Grammy Awards, the George Jean Nathan Award for Dramatic Criticism, Guggenheim Fellowships, the New York City Mayor's Award for Excellence in Science and Technology, the Presidential Early Career Awards for Scientists and Engineers, Presidential Award for Excellence in Science, Mathematics and Engineering Mentoring, and memberships in the American Academy of Arts and Sciences, the National Academy of Sciences, and the National Academy of Education. == History == CUNY began offering doctoral education through its Division of Graduate Studies in 1961, and awarded its first two PhD to Daniel Robinson and Barbara Stern in 1965. Robinson, formerly a professor of philosophy at the University of Oxford, received his Ph.D. in psychology, while Stern, late of Rutgers University, received her Ph.D. in English literature. In 1969, the Division of Graduate Studies formally became the Graduate School and University Center. Mathematician Mina S. Rees served as the institution's first president from 1969 until her retirement in 1972. Rees was succeeded as president of the Graduate Center by environmental psychologist Harold M. Proshansky, who served until his death in 1990. Provost Steven M. Cahn was named acting president in Spring 1991. Psychologist Frances Degen Horowitz was appointed president in September 1991. In 2005, Horowitz was succeeded by the school's provost, Professor of English Literature William P. Kelly. 
During Kelly's tenure at the Graduate Center, the university saw significant growth in revenue, funding opportunities for students, increased Distinguished Faculty, and a general resurgence. This is in accordance with three primary goals articulated in the Graduate Center's strategic plan. The first of these involves enhancing student support. In 2013, 83 dissertation-year fellowships were awarded at a total cost of $1.65 million. The Graduate Center is also developing new programs to advance research prior to the dissertation phase, including archival work. The fiscal stability of the university has enabled the chancellery to increase, on an incremental basis, the value of these fellowships. The packages extended for the 2013–14 years increase stipends and reduce teaching requirements. In 2001, the Graduate Center provided 14 million dollars in student support, and, in Fall 2013, 51 million in student support. On April 23, 2013, the CUNY Board of Trustees announced that President Kelly would serve as interim chancellor for the City University of New York beginning July 1 with the retirement of Chancellor Matthew Goldstein. GC Provost Chase F. Robinson, a historian, was appointed to serve as interim president of the Graduate Center in 2013, and then served as president from July 2014 to December 2018. Joy Connolly became provost in August 2016 and interim president in December 2018. Julia Wrigley was appointed as interim provost in December 2018. In July 2019, James Muyskens became interim president, as Connolly had been appointed president of the American Council of Learned Societies. On March 30, 2020, Robin L. Garrell, vice provost for graduate education and dean of graduate division at University of California, Los Angeles, was announced as the next president of the Graduate Center. She assumed office on August 1, 2020 and served until September 28, 2023. Steve Everett assumed the position of provost and senior vice president in August 2021. Norman Carey succeeded him as interim provost in August 2024. Joshua Brumberg assumed the position of interim president on October 2, 2023. He was appointed president of the CUNY Graduate Center in June 2024. == Campus == The CUNY Graduate Center's main campus is located in the B. Altman and Company Building at 34th Street and Fifth Avenue in the Midtown Manhattan neighborhood of New York City. CUNY shares the B. Altman Building with the Oxford University Press. Before 2000, the Graduate Center was housed in Aeolian Hall on West 42nd Street across from the New York Public Library Main Branch. In 2017, the CUNY Advanced Science Research Center at 85 St. Nicholas Terrace in Manhattan's Harlem neighborhood became part of the CUNY Graduate Center. === Advanced Science Research Center === The Advanced Science Research Center at the Graduate Center (CUNY ASRC) is an interdisciplinary STEM center for research and education. It covers five related fields: nanoscience, photonics, structural biology, neuroscience, and environmental science. The CUNY ASRC is located in a 200,000-square-foot (19,000 m2) building on the southern edge of City College's campus in Upper Manhattan. The CUNY ASRC, which opened in September 2014, is an outgrowth of CUNY's "Decade of Science" initiative, a multibillion-dollar project to elevating science research and education. The CUNY ASRC formally joined the CUNY Graduate Center in spring 2017. Today, the CUNY ASRC is one of the major pieces of CUNY's citywide research network. 
Five years after the center opened, over 200 graduate, undergraduate, and high school students had been mentored by CUNY ASRC scientists. In that time, the center also hosted over 400 conferences, seminars, and workshops and awarded over $600,000 in seed grants to CUNY faculty. ==== Research initiatives ==== The CUNY ASRC was founded on the principle that researchers across different disciplines would collaborate to make scientific advancements. Thus, it consists of five related fields: Nanoscience: Exploring on the tiniest scale, using the living world for inspiration to create new materials and devices that advance fields ranging from biomedicine to energy production Photonics: Discovering new ways to control light, heat, radio waves, and sound for future optical computers, ultrasensitive cameras, and cell phone technology Structural biology: Combining physics and chemistry to explore biology at the molecular and cellular levels, with the intention of identifying new ways to treat diseases Neuroscience: Investigating how the brain senses and responds to environmental and social experiences, with a focus on neural networks, metabolic changes, and molecular signals occurring in brain cells, with the goal of developing biosensors and innovative solutions to promote mental health Environmental sciences: Developing high-tech, interdisciplinary solutions to urgent environmental challenges, including air and water issues, climate change, and disease transmission Each research initiative occupies one floor of the CUNY ASRC building that hosts four faculty laboratories and between two and four core facilities. ==== Core facilities ==== The CUNY ASRC has 15 core facilities with a variety of equipment. These facilities are open to researchers from CUNY, other academic institutions, nonprofit organizations, and for-profit companies from around the world. The facilities include: Advanced Laboratory for Chemical and Isotopic Signatures (ALCIS) Facility Biomolecular Nuclear Magnetic Resonance (NMR) Facility Comparative Medicine Unit (CMU) Epigenetics Facilities Imaging Facility Live Imaging & Bioenergetics Facility MALDI Imaging Joint Facility Magnetic Resonance Imaging (MRI) Facility Macromolecular Crystallization Facility Mass Spectrometry Core Facility Nanofabrication Facility Next Generation Environmental Sensor (NGENS) Lab Photonics Core Facility Radio Frequency and mm-Wave Facility Surface Science Facility ==== Education and outreach ==== The CUNY ASRC has various scientific education programs. Students from CUNY's community and senior colleges participate in research during the academic year and over the summer through programs such as the CUNY Summer Undergraduate Research Program. Graduate students from master's and doctoral programs at the Graduate Center and from the Grove School of Engineering are members of CUNY ASRC research teams. ===== IlluminationSpace ===== The CUNY ASRC's IlluminationSpace is an interactive education center, which accommodates high school field trips and provides free community hours. It has numerous virtual programs and resources. The CUNY ASRC received a Public Interest Technology University Network 2021 Challenge Grant to establish the IlluminationSpace, STEM pathways, and science communications and outreach at CUNY. The funding is being used to increase participation of underrepresented demographic groups in STEM fields. 
===== Community Sensor Lab ===== The CUNY ASRC Community Sensor Lab teaches high school students and community members how to build inexpensive, homemade sensors that can monitor aspects of the environment from the level of carbon dioxide and pollutants in the air to acidity in the soil and water. ==== Faculty opportunities ==== The CUNY ASRC offers a seed grant program to fund collaborative research that supports tenured and tenure-track faculty at CUNY colleges. The program started in 2015 and currently awards six one-year, $20,000 grants annually. In addition, the center's National Science Foundation CAREER Bootcamp Program, which guides tenure-track faculty through the proposal writing process, has helped CUNY researchers secure substantial NSF CAREER grants. ===== Grants and research ===== Between 2014 and 2019, CUNY ASRC researchers secured 126 grants totaling $61 million. Several recent grants have set records for CUNY and the CUNY Graduate Center. Faculty, postdoctoral fellows, and graduate students at the CUNY ASRC also hold several patents. Professor Kevin Gardner, director of the CUNY ASRC Structural Biology Initiative, was instrumental in the identification of hypoxia-inducible factor 2-alpha (HIF-2α) as a druggable target and the drug development efforts that led to the FDA-approved first-in-kind kidney cancer drug from Merck, belzutifan. The CUNY ASRC is home to one of 15 Centers for Advanced Technology (CATs) designated by Empire State Development NYSTAR. Funded by a nearly $8.8 million grant, the CUNY ASRC Sensor CAT spurs academic-industry partnerships to develop sensor-based technology. Developing biomedical and environmental sensors is a particular focus, as is finding new approaches to sensing through photonics, materials, and nanoscience research. Supported by a 2020 grant of up to $16 million from the Simons Foundation, a team of scientists led by Professor Andrea Alù, director of the CUNY ASRC Photonics Initiative, is studying wave transport in metamaterials. The team's work could lead to greater sensing capabilities for the Internet of Things, improvements in biomedical applications, and extreme control of sound waves for medical imaging and wireless technology. Professors Rein Ulijn and Andrea Alù, the directors of the CUNY ASRC Nanoscience Initiative and the CUNY ASRC Photonics Initiative, each won a prestigious Vannevar Bush Faculty Fellowship from the U.S. Department of Defense, the agency's highest-ranking single-investigator award. Alù's $3 million fellowship, awarded in 2019, allowed him to develop new materials that enable extreme wave manipulation in the context of thermal radiation and heat management. Alù was also named the 2021 Blavatnik National Awards Laureate in Physical Sciences and Engineering. Ulijn's $3 million fellowship, awarded in 2021, allowed him to research how complex mixtures of molecules acquire functionality and to repurpose this understanding to create new nanotechnology that is inspired by living systems. === Mina Rees Library === The Mina Rees Library, named after former president Mina Rees, supports the research, teaching, and learning activities of the CUNY Graduate Center by connecting its community with print materials, electronic resources, research assistance and instruction, and expertise about the complexities of scholarly communication. Situated on three floors of the CUNY Graduate Center, the library is a hub for discovery, delivery, digitization, and a place for solitary study.
The library offers many services, including research consultations, a 24/7 online chat service with reference librarians, and workshops and webinars on using research tools. The library also serves as a gateway to the collections of other CUNY libraries, the New York Public Library (NYPL), and libraries worldwide. It participates in a CUNY-wide book delivery system and offers an interlibrary loan service to bring materials from outside CUNY to Graduate Center scholars. The main branch of NYPL is just a few blocks north on Fifth Avenue, and NYPL's Science, Industry and Business Library is just around the corner inside the B. Altman Building. CUNY Graduate Center students and faculty are NYPL's primary academic constituents, with borrowing privileges from NYPL research collections. NYPL's participation in the Manhattan Research Library Initiative (MaRLI) extends borrowing privileges for CUNY Graduate Center students to NYU and Columbia libraries as well. The Mina Rees Library is a key participant in the CUNY Graduate Center's digital initiatives. It supports the digital scholarship of students and faculty and promotes the understanding, creation, and use of open-access literature. Among its special collections is the Activist Women's Voices collection, an oral history project focused on unheralded New York City community-based women activists. === Cultural venues === The CUNY Graduate Center houses three performance spaces and two art galleries. The Harold M. Proshansky Auditorium, named for the institution's second president, is located on the concourse level and contains 389 seats. The Baisley Powell Elebash Recital Hall, located on the first floor, seats 180. The Martin E. Segal Theatre, also located on the first floor, seats 70. ==== James Gallery ==== The ground floor of the CUNY Graduate Center houses the Amie and Tony James Gallery, also known as the James Gallery, which the Center for the Humanities oversees. The James Gallery intends to bring scholars and artists into dialog with one another and serve as a site for interdisciplinary research. The James Gallery hosts numerous exhibitions annually, and has hosted solo exhibitions by notable American and international artists such as Alison Knowles and Dor Guez. === CUNY TV and NYC Media === ==== CUNY TV ==== The University's citywide cable channel, CUNY TV, broadcasts on cable and WNYE's digital terrestrial television subchannel 25.3. Its production studios and offices are located on the first floor, while the broadcast satellite dishes reside on the building's ninth floor (rooftop). ==== NYC Media ==== Sharing CUNY TV's main facilities is NYC Media, which is the official broadcast network and media production group of the NYC Mayor's Office of Media and Entertainment. The group includes WNYE-FM (91.5) radio station and WNYE-TV television channel (Channel 25), which also puts out "NYCLife" programming on 25.1 and "NYCGov" on 25.2, all broadcast 24/7 from within the building. == Academics == === Rankings === In 2023, two doctoral programs at CUNY Graduate Center (criminal justice and English) were ranked among the top 20 graduate programs in the U.S., and four (audiology, history, philosophy, and sociology) among the top 30. In the 2016 edition of QS World University Rankings, CUNY Graduate Center's PhD program in Philosophy was ranked 44th globally. The 2022 edition of the Philosophical Gourmet Report ranked CUNY Graduate Center's philosophy program 14th best in the United States and 16th best in English-speaking countries.
=== Faculty === Faculty members include the recipients of the Nobel Prize, Pulitzer Prize, the National Humanities Medal, the National Medal of Science, the Schock Prize, the Bancroft Prize, Grammy Awards, the George Jean Nathan Award for Dramatic Criticism, Guggenheim Fellowships, the New York City Mayor's Award for Excellence in Science and Technology, the Presidential Early Career Awards for Scientists and Engineers, and memberships in the American Academy of Arts and Sciences and the National Academy of Sciences. Many departments are recognized internationally for their level of scholarship. Courses in the social sciences, humanities, and mathematics, and courses in the sciences requiring no laboratory work convene at the Graduate Center. Due to the consortial nature of doctoral study at the CUNY Graduate Center, courses requiring laboratory work, courses for the clinical doctorates, and courses in business, criminal justice, engineering, and social welfare convene on CUNY college campuses. === Community === The CUNY Graduate Center pioneered the CUNY Academic Commons in 2009 to much praise. The CUNY Academic Commons is an online, academic social network for faculty, staff, and graduate students of the City University of New York (CUNY) system. Designed to foster conversation, collaboration, and connections among the 24 individual colleges that make up the university system, the site, founded in 2009, has quickly grown as a hub for the CUNY community, serving in the process to strengthen a growing group of digital scholars, teachers, and open-source projects at the university. The project has received awards and grants from the Alfred P. Sloan Foundation, the Sloan Consortium and was the winner of the 2013 Digital Humanities Award. Also affiliated with the institution are four University Center programs: CUNY Baccalaureate for Unique and Interdisciplinary Studies through which undergraduates can earn individualized bachelor's degrees by completing courses at any of the CUNY colleges; the CUNY School of Professional Studies and the associated Joseph S. Murphy Institute for Worker Education and Labor Studies; the CUNY Graduate School of Journalism, which offers a master's degree in journalism; and Macaulay Honors College. == Research == CUNY Graduate Center describes itself as "research-intensive" and is classified by the Carnegie Classification of Institutions of Higher Education to be an R1 or have "highest research activity". The CUNY Graduate Center's primary library, named after Mina Rees, is located on campus; however, its students also have borrowing privileges at the remaining 31 City University of New York libraries, which collectively house 6.2 million printed works and over 300,000 e-books. Beginning in 1968, the CUNY Graduate Center maintains a formal collaboration with the New York Public Library that allows faculty and students access to NYPL's extensive research collections, regular library resources, as well as three research study rooms located in the Stephen A. Schwarzman Building. Further, as of 2011, students have access to the libraries of Columbia University and New York University through the NYPL's Manhattan Research Library Initiative. The CUNY Graduate Center library also maintains an online repository called CUNY Academic Works, which hosts open-access faculty and student research. 
=== Initiatives and committees === The CUNY Graduate Center does additional work through its initiatives and committees, which include the Futures Initiative, Graduate Center Digital Initiatives, the Initiative for the Theoretical Sciences (ITS), the Revolutionizing American Studies Initiative, the Committee for the Study of Religion, the Committee on Globalization and Social Change, the Committee for Interdisciplinary Science Studies, the Endangered Language Initiative, and Intellectual Publics. === Centers and institutes === With over 30 research institutes and centers, the CUNY Graduate Center produces work on a range of social, cultural, scientific and civic issues. === American Social History Project === The American Social History Project/Center for Media and Learning (ASHP/CML) was established in 1981 to create and disseminate materials that help with understanding the diverse cultural and social history of the United States. It was founded by Stephen Brier and Herbert Gutman, who sought to teach the history of everyday Americans; early projects included the film 1877: The Grand Army of Starvation, about the 1877 railway strike. ASHP has created curricula grounded in the work of Howard Zinn, Herbert Gutman, and Stephen Brier which aim to teach social studies at the high school level with the inclusion of diverse viewpoints, including indigenous groups, enslaved Americans, immigrants, and the working class. Notable curricula and teaching tools have included Freedom's Unfinished Revolution: An Inquiry into the Civil War and Reconstruction, and Who Built America? Other curricula, such as Golden Lands, Working Hands, have focused on labor history; these types of ASHP materials emphasize collaborative teaching and learning strategies and have been popular in teaching districts that prioritize union labor. Digital teaching resources created by ASHP have included the History Matters website and the online resource Liberté, Égalité, Fraternité: Exploring the French Revolution. As teaching tools, these websites place an emphasis on inclusion of primary source material for use in the classroom, alongside teaching strategies for seamless use of these documents in classroom curriculum. The online resource September 11 Digital Archive has received acclaim for its comprehensive representation of historic perspectives. ASHP is also a partner of the Mission US project and co-produced Mission US: Cheyenne Odyssey, an award-winning video game about a Cheyenne tribesman whose way of life is challenged by western expansion. ASHP was established out of the success of a series of National Endowment for the Humanities summer seminars; seminar topics have included Learning to Look: Teaching Humanities with Visual Images and New Media, Visual Culture of the American Civil War and its Aftermath, and LGBTQ+ Histories of the United States. This focus on professional development opportunities for educators has included other workshops such as the Bridging Historias: Latino/a History and Culture in the Community College Classroom program. === Stone Center on Socio-Economic Inequality === The James M. and Cathleen D. Stone Center on Socio-Economic Inequality was launched on September 1, 2016. The Stone Center expanded and replaced the Luxembourg Income Study (LIS) Center, which opened its doors at the Graduate Center in 2009. It began a post-doctoral program in 2019. The Stone Center has hosted several scholarly convenings. One year after its launch, it hosted the 2017 meeting of the Society for the Study of Economic Inequality (ECINEQ). 
In 2021, it convened wealth inequality scholars for the two-day conference, From Understanding Inequality to Reducing Inequality. == Notable people == The CUNY Graduate Center has graduated 15,000 alumni worldwide, including numerous academics, politicians, artists, and entrepreneurs. As of 2016, the CUNY Graduate Center counted five MacArthur Foundation Fellows among its alumni, including writer Maggie Nelson as the most recent recipient. Among alumni graduated between 2003 and 2018, more than two-thirds are employed at educational institutions and over half have remained within New York City or its metro area. Among the CUNY Graduate Center's alumni are leading scholars across numerous disciplines, including art historian and ACT-UP activist Douglas Crimp, political scientist Douglas Hale, anthropologist Faye Ginsburg, sociologist Michael P. Jacobson, historian Maurice Berger, and philosopher Nancy Fraser. The City University of New York has been acknowledged for its exceptional number of faculty and students who have been awarded nationally recognized prizes in poetry. This group includes student Gregory Pardlo, winner of the 2015 Pulitzer Prize for Poetry. The CUNY Graduate Center holds a reputation for attracting established scholars to its faculty. In 2001, the CUNY Graduate Center initiated a five-year faculty recruitment campaign to hire additional renowned academics and public intellectuals in order to bolster the institution's faculty roster. Those recruited during the drive include André Aciman, Jean Anyon, Mitchell Duneier, Victor Kolyvagin, Robert Reid-Pharr and Saul Kripke. The CUNY Graduate Center utilizes a unique consortium model, which hosts 140 faculty with sole appointments at the CUNY Graduate Center, most of whom are senior scholars in their respective disciplines, while also drawing upon 1,800 faculty from across the other CUNY schools to both teach classes and advise graduate students. == Student life == Some CUNY Graduate Center students live in graduate housing in East Harlem. The eight-story building includes a gym, laundry facilities, lounge and rooftop terrace. The graduate housing opened in fall 2011 in conjunction with the construction of the Hunter College School of Social Work. The Doctoral and Graduate Students' Council (DGSC) is the sole policy-making body representing students in doctoral and master's programs at the CUNY Graduate Center. There are over forty doctoral student organizations ranging from the Middle Eastern Studies Organization and Africana Studies Group to the Prison Studies Group and the Immigration Working Group. These chartered organizations host conferences, publish online magazines, and create social events aimed at fostering a community for CUNY Graduate Center students. Doctoral students at the CUNY Graduate Center also produce a newspaper funded by the DGSC and run by a committee of editors from the various doctoral programs. The paper, entitled The GC Advocate, comes out six times per academic year and is free for students, faculty, staff, and visitors. == References == == External links == Official website Advanced Science Research Center
Wikipedia/Advanced_Science_Research_Center
The van Cittert–Zernike theorem, named after physicists Pieter Hendrik van Cittert and Frits Zernike, is a formula in coherence theory that states that under certain conditions the Fourier transform of the intensity distribution function of a distant, incoherent source is equal to its complex visibility. This implies that the wavefront from an incoherent source will appear mostly coherent at large distances. Intuitively, this can be understood by considering the wavefronts created by two incoherent sources. If we measure the wavefront immediately in front of one of the sources, our measurement will be dominated by the nearby source. If we make the same measurement far from the sources, our measurement will no longer be dominated by a single source; both sources will contribute almost equally to the wavefront at large distances. This reasoning can be easily visualized by dropping two stones in the center of a calm pond. Near the center of the pond, the disturbance created by the two stones will be very complicated. As the disturbance propagates towards the edge of the pond, however, the waves will smooth out and will appear to be nearly circular. The van Cittert–Zernike theorem has important implications for radio astronomy. With the exception of pulsars and masers, all astronomical sources are spatially incoherent. Nevertheless, because they are observed at distances large enough to satisfy the van Cittert–Zernike theorem, these objects exhibit a non-zero degree of coherence at different points in the imaging plane. By measuring the degree of coherence at different points in the imaging plane (the so-called "visibility function") of an astronomical object, a radio astronomer can thereby reconstruct the source's brightness distribution and make a two-dimensional map of the source's appearance. == Statement of the theorem == Consider two very distant parallel planes, both perpendicular to the line of sight, and call them the source plane and the observation plane. If $\Gamma_{12}(u,v,0)$ is the mutual coherence function between two points in the observation plane, then $\Gamma_{12}(u,v,0)=\iint I(l,m)e^{-2\pi i(ul+vm)}\,dl\,dm$ where $l$ and $m$ are the direction cosines of a point on a distant source in the source plane, $u$ and $v$ are respectively the x-distance and the y-distance between the two observation points on the observation plane in units of wavelength, and $I$ is the intensity of the source. This theorem was first derived by Pieter Hendrik van Cittert in 1934, with a simpler proof provided by Frits Zernike in 1938. The theorem can be confusing to some engineers and scientists because of its statistical nature and its difference from simple correlation or covariance processing methods. A good reference (which still might not clarify the issue for some users, but does have a great sketch to drive the method home) is Goodman, starting on page 207. 
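A minimal numerical sketch of this Fourier-transform relation is given below. The uniform-disk source model, grid sizes, and function names are assumptions chosen purely for illustration and are not taken from the article; the sketch simply evaluates the integral above for a few baselines.

```python
import numpy as np

# Model a small incoherent source: a uniform disk of angular radius 5e-6 rad
# sampled on a grid of direction cosines (l, m).  All values are example choices.
n = 256
lm_max = 2.0e-5                      # half-width of the (l, m) grid in radians
l = np.linspace(-lm_max, lm_max, n)
m = np.linspace(-lm_max, lm_max, n)
L, M = np.meshgrid(l, m)
I = np.where(L**2 + M**2 <= (5.0e-6)**2, 1.0, 0.0)   # uniform-disk intensity

# Van Cittert–Zernike: Gamma_12(u, v) = ∬ I(l, m) exp(-2πi(ul + vm)) dl dm.
# Evaluate the integral directly for one baseline (u, v) given in wavelengths.
def visibility(u, v):
    dl = l[1] - l[0]
    dm = m[1] - m[0]
    phase = np.exp(-2j * np.pi * (u * L + v * M))
    return np.sum(I * phase) * dl * dm

# Normalised visibility amplitude for a few baseline lengths (in wavelengths):
gamma0 = visibility(0.0, 0.0)        # total flux
for u in (0.0, 2e4, 5e4, 1e5):
    print(f"u = {u:8.0f} wavelengths, |V| = {abs(visibility(u, 0.0) / gamma0):.3f}")
```

As expected, the normalised visibility falls off as the baseline grows, i.e. the extended source is progressively "resolved".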
== The mutual coherence function == The mutual coherence function for some electric field $E(t)$ measured at two points in a plane of observation (call them 1 and 2), is defined to be $\Gamma_{12}(\tau)=\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}E_{1}(t)E_{2}^{*}(t-\tau)\,dt$ where $\tau$ is the time offset between the measurement of $E(t)$ at observation points 1 and 2. The mutual coherence of the field at two points may be thought of as the time-averaged cross-correlation between the electric fields at the two points separated in time by $\tau$. Thus, if we are observing two fully incoherent sources we should expect the mutual coherence function to be relatively small between the two random points in the observation plane, because the sources will interfere destructively as well as constructively. Far away from the sources, however, we should expect the mutual coherence function to be relatively large because the sum of the observed fields will be almost the same at any two points. Normalization of the mutual coherence function to the product of the square roots of the intensities of the two electric fields yields the complex degree of (second-order) coherence (correlation coefficient function): $\gamma_{12}(\tau)=\frac{\Gamma_{12}(\tau)}{\sqrt{I_{1}}\sqrt{I_{2}}}$ == Proof of the theorem == Let $XY$ and $xy$ be respectively the cartesian coordinates of the source plane and the observation plane. Suppose the electric field due to some point from the source in the source plane is measured at two points, $P_{1}$ and $P_{2}$, in the observation plane. The position of a point in the source may be referred to by its direction cosines $(l,m)$. (Since the source is distant, its direction should be the same at $P_{1}$ as at $P_{2}$.) The electric field measured at $P_{1}$ can then be written using phasors: $E_{1}(l,m,t)=A\left(l,m,t-\frac{R_{1}}{c}\right)\frac{e^{-i\omega\left(t-\frac{R_{1}}{c}\right)}}{R_{1}}$ where $R_{1}$ is the distance from the source to $P_{1}$, $\omega$ is the angular frequency of the light, and $A$ is the complex amplitude of the electric field. 
Similarly, the electric field measured at $P_{2}$ can be written as $E_{2}(l,m,t)=A\left(l,m,t-\frac{R_{2}}{c}\right)\frac{e^{-i\omega\left(t-\frac{R_{2}}{c}\right)}}{R_{2}}$ Let us now calculate the time-averaged cross-correlation between the electric field at $P_{1}$ and $P_{2}$: $\big\langle E_{1}(l,m,t)E_{2}^{*}(l,m,t)\big\rangle=\Big\langle A\left(l,m,t-\frac{R_{1}}{c}\right)A^{*}\left(l,m,t-\frac{R_{2}}{c}\right)\Big\rangle\times\frac{e^{i\omega\frac{R_{1}}{c}}}{R_{1}}\times\frac{e^{-i\omega\frac{R_{2}}{c}}}{R_{2}}$ Because the quantity in the angle brackets is time-averaged, an arbitrary offset to the temporal term of the amplitudes may be added as long as the same offset is added to both. Let us now add $\frac{R_{1}}{c}$ to the temporal term of both amplitudes. The time-averaged cross-correlation of the electric field at the two points therefore simplifies to $\big\langle E_{1}(l,m,t)E_{2}^{*}(l,m,t)\big\rangle=\Big\langle A(l,m,t)A^{*}\left(l,m,t-\frac{R_{2}-R_{1}}{c}\right)\Big\rangle\times\frac{e^{i\omega\left(\frac{R_{1}-R_{2}}{c}\right)}}{R_{1}R_{2}}$ But if the source is in the far field then the difference between $R_{1}$ and $R_{2}$ will be small compared to the distance light travels in time $t$. ($t$ is on the same order as the inverse bandwidth.) This small correction can therefore be neglected, further simplifying our expression for the cross-correlation of the electric field at $P_{1}$ and $P_{2}$ to $\langle E_{1}(l,m,t)E_{2}^{*}(l,m,t)\rangle=\langle A(l,m,t)A^{*}(l,m,t)\rangle\times\frac{e^{i\omega\left(\frac{R_{1}-R_{2}}{c}\right)}}{R_{1}R_{2}}$ Now, $\langle A(l,m,t)A^{*}(l,m,t)\rangle$ is simply the intensity of the source at a particular point, $I(l,m)$. So our expression for the cross-correlation simplifies further to $\langle E_{1}(l,m,t)E_{2}^{*}(l,m,t)\rangle=I(l,m)\frac{e^{i\omega\left(\frac{R_{1}-R_{2}}{c}\right)}}{R_{1}R_{2}}$ To calculate the mutual coherence function from this expression, simply integrate over the entire source. $\Gamma_{12}(u,v,0)=\iint_{\mathrm{source}}I(l,m)\frac{e^{i\omega\left(\frac{R_{1}-R_{2}}{c}\right)}}{R_{1}R_{2}}\,dS$ Note that cross terms of the form $\langle A_{1}(l,m,t)A_{2}^{*}(l,m,t)\rangle$ are not included due to the assumption that the source is incoherent. The time-averaged correlation between two different points from the source will therefore be zero. 
Next rewrite the $R_{2}-R_{1}$ term using $u,v,l$ and $m$. To do this, let $P_{1}=(x_{1},y_{1})$ and $P_{2}=(x_{2},y_{2})$. This gives $R_{1}=\sqrt{R^{2}+x_{1}^{2}+y_{1}^{2}}$ and $R_{2}=\sqrt{R^{2}+x_{2}^{2}+y_{2}^{2}}$ where $R$ is the distance between the center of the plane of observation and the center of the source. The difference between $R_{1}$ and $R_{2}$ thus becomes $R_{2}-R_{1}=R\sqrt{1+\frac{x_{2}^{2}}{R^{2}}+\frac{y_{2}^{2}}{R^{2}}}-R\sqrt{1+\frac{x_{1}^{2}}{R^{2}}+\frac{y_{1}^{2}}{R^{2}}}$ But because $x_{1},x_{2},y_{1}$ and $y_{2}$ are all much less than $R$, the square roots may be Taylor expanded, yielding, to first order, $R_{2}-R_{1}=R\left(1+\frac{1}{2}\left(\frac{x_{2}^{2}+y_{2}^{2}}{R^{2}}\right)\right)-R\left(1+\frac{1}{2}\left(\frac{x_{1}^{2}+y_{1}^{2}}{R^{2}}\right)\right)$ which, after some algebraic manipulation, simplifies to $R_{2}-R_{1}=\frac{1}{2R}\left((x_{2}-x_{1})(x_{2}+x_{1})+(y_{2}-y_{1})(y_{2}+y_{1})\right)$ Now, $\frac{1}{2}(x_{2}+x_{1})$ is the midpoint along the $x$-axis between $P_{1}$ and $P_{2}$, so $\frac{1}{2R}(x_{2}+x_{1})$ gives us $l$, one of the direction cosines to the sources. Similarly, $m=\frac{1}{2R}(y_{2}+y_{1})$. Moreover, recall that $u$ was defined to be the number of wavelengths along the $x$-axis between $P_{1}$ and $P_{2}$. So $u=\frac{\omega}{2\pi c}(x_{1}-x_{2})$ Similarly, $v$ is the number of wavelengths between $P_{1}$ and $P_{2}$ along the $y$-axis, so $v=\frac{\omega}{2\pi c}(y_{1}-y_{2})$ Hence $R_{2}-R_{1}=\frac{2\pi c}{\omega}(ul+vm)$ Because $x_{1},x_{2},y_{1},$ and $y_{2}$ are all much less than $R$, $R_{1}\simeq R_{2}\simeq R$. The differential area element, $dS$, may then be written as a differential element of solid angle of $R^{2}\,dl\,dm$. 
Our expression for the mutual coherence function becomes $\Gamma_{12}(u,v,0)=\iint_{\mathrm{source}}I(l,m)e^{-\frac{i\omega}{c}\frac{2\pi c}{\omega}(ul+vm)}\,dl\,dm$ which reduces to $\Gamma_{12}(u,v,0)=\iint_{\mathrm{source}}I(l,m)e^{-2\pi i(ul+vm)}\,dl\,dm$ But the limits of these two integrals can be extended to cover the entire plane of the source as long as the source's intensity function is set to be zero over these regions. Hence, $\Gamma_{12}(u,v,0)=\iint I(l,m)e^{-2\pi i(ul+vm)}\,dl\,dm$ which is the two-dimensional Fourier transform of the intensity function. This completes the proof. == Assumptions of the theorem == The van Cittert–Zernike theorem rests on a number of assumptions, all of which are approximately true for nearly all astronomical sources. The most important assumptions of the theorem and their relevance to astronomical sources are discussed here. === Incoherence of the source === A spatially coherent source does not obey the van Cittert–Zernike theorem. To see why this is, suppose we observe a source consisting of two points, $a$ and $b$. Let us calculate the mutual coherence function between $P_{1}$ and $P_{2}$ in the plane of observation. From the principle of superposition, the electric field at $P_{1}$ is $E_{1}=E_{a1}+E_{b1}$ and at $P_{2}$ is $E_{2}=E_{a2}+E_{b2}$ so the mutual coherence function is $\langle E_{1}(t)E_{2}^{*}(t-\tau)\rangle=\langle(E_{a1}(t)+E_{b1}(t))(E_{a2}^{*}(t-\tau)+E_{b2}^{*}(t-\tau))\rangle$ which becomes $\langle E_{1}(t)E_{2}^{*}(t-\tau)\rangle=\langle E_{a1}(t)E_{a2}^{*}(t-\tau)\rangle+\langle E_{a1}(t)E_{b2}^{*}(t-\tau)\rangle+\langle E_{b1}(t)E_{a2}^{*}(t-\tau)\rangle+\langle E_{b1}(t)E_{b2}^{*}(t-\tau)\rangle$ If points $a$ and $b$ are coherent then the cross terms in the above equation do not vanish. In this case, when we calculate the mutual coherence function for an extended coherent source, we would not be able to simply integrate over the intensity function of the source; the presence of non-zero cross terms would give the mutual coherence function no simple form. This assumption holds for most astronomical sources. Pulsars and masers are the only astronomical sources which exhibit coherence. === Distance to the source === In the proof of the theorem we assume that $R\gg x_{1}-x_{2}$ and $R\gg y_{1}-y_{2}$. That is, we assume that the distance to the source is much greater than the size of the observation area. More precisely, the van Cittert–Zernike theorem requires that we observe the source in the so-called far field. Hence if $D$ is the characteristic size of the observation area (e.g. 
in the case of a two-dish radio telescope, the length of the baseline between the two telescopes) then $R\gg\frac{D^{2}}{\lambda}$ Using a reasonable baseline of 20 km for the Very Large Array at a wavelength of 1 cm, the far field distance is of order $4\times10^{10}$ m. Hence any astronomical object farther away than a parsec is in the far field. Objects in the Solar System are not necessarily in the far field, however, and so the van Cittert–Zernike theorem does not apply to them. === Angular size of the source === In the derivation of the van Cittert–Zernike theorem we write the direction cosines $l$ and $m$ as $\frac{1}{2}(x_{1}+x_{2})/R$ and $\frac{1}{2}(y_{1}+y_{2})/R$. There is, however, a third direction cosine which is neglected since $R\gg\frac{1}{2}(x_{1}+x_{2})$ and $R\gg\frac{1}{2}(y_{1}+y_{2})$; under these assumptions it is very close to unity. But if the source has a large angular extent, we cannot neglect this third direction cosine and the van Cittert–Zernike theorem no longer holds. Because most astronomical sources subtend very small angles on the sky (typically much less than a degree), this assumption of the theorem is easily fulfilled in the domain of radio astronomy. === Quasi-monochromatic waves === The van Cittert–Zernike theorem assumes that the source is quasi-monochromatic. That is, if the source emits light over a range of frequencies, $\Delta\nu$, with mean frequency $\nu$, then it should satisfy $\frac{\Delta\nu}{\nu}\lesssim 1$ Moreover, the bandwidth must be narrow enough that $\frac{\Delta\nu}{\nu}\ll\frac{1}{lu}$ where $l$ is again the direction cosine indicating the size of the source and $u$ is the number of wavelengths between one end of the aperture and the other. Without this assumption, we cannot neglect $(R_{2}-R_{1})/c$ compared to $t$. This requirement implies that a radio astronomer must restrict signals through a bandpass filter. Because radio telescopes almost always pass the signal through a relatively narrow bandpass filter, this assumption is typically satisfied in practice. === Two-dimensional source === We assume that our source lies in a two-dimensional plane. In reality, astronomical sources are three-dimensional. However, because they are in the far field, their angular distribution does not change with distance. Therefore, when we measure an astronomical source, its three-dimensional structure becomes projected upon a two-dimensional plane. This means that the van Cittert–Zernike theorem may be applied to measurements of astronomical sources, but we cannot determine structure along the line of sight with such measurements. === Homogeneity of the medium === The van Cittert–Zernike theorem assumes that the medium between the source and the imaging plane is homogeneous. If the medium is not homogeneous then light from one region of the source will be differentially refracted relative to other regions of the source due to the difference in light travel time through the medium. In the case of a heterogeneous medium one must use a generalization of the van Cittert–Zernike theorem, called Hopkins's formula. 
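The far-field estimate quoted above can be checked with a few lines of arithmetic. The sketch below simply re-evaluates D²/λ for the stated example values and compares it with one parsec; the numbers are taken as given in the text, not as an authoritative telescope configuration.

```python
# Quick check of the far-field condition R >> D^2 / lambda for the example above.
baseline = 20e3          # characteristic aperture size D in metres (20 km)
wavelength = 1e-2        # observing wavelength in metres (1 cm)
parsec = 3.086e16        # one parsec in metres

far_field = baseline**2 / wavelength
print(f"D^2 / lambda = {far_field:.1e} m")                     # ~4e10 m
print(f"1 parsec / far-field distance = {parsec / far_field:.1e}")
```

A source one parsec away is therefore deeper in the far field by roughly six orders of magnitude, consistent with the statement in the text.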
Because the wavefront does not pass through a perfectly uniform medium as it travels through the interstellar (and possibly intergalactic) medium and into the Earth's atmosphere, the van Cittert–Zernike theorem does not hold exactly true for astronomical sources. In practice, however, variations in the refractive index of the interstellar and intergalactic media and Earth's atmosphere are small enough that the theorem is approximately true to within any reasonable experimental error. Such variations in the refractive index of the medium result only in slight perturbations from the case of a wavefront traveling through a homogeneous medium. == Hopkins' formula == Suppose we have a situation identical to that considered when the van Cittert–Zernike theorem was derived, except that the medium is now heterogeneous. We therefore introduce the transmission function of the medium, $K(l,m,P,\nu)$. Following a similar derivation as before, we find that $\Gamma_{12}(l,m,0)=\lambda^{2}\iint I(l,m)K(l,m,P_{1},\nu)K^{*}(l,m,P_{2},\nu)\,dS$ If we define $U(l,m,P_{1})\equiv i\lambda K(l,m,P_{1},\nu)\sqrt{I(l,m)}$ then the mutual coherence function becomes $\Gamma_{12}(l,m,0)=\iint U(l,m,P_{1})U^{*}(l,m,P_{2})\,dS$ which is Hopkins's generalization of the van Cittert–Zernike theorem. In the special case of a homogeneous medium, the transmission function becomes $K(l,m,P,\nu)=-\frac{ie^{ikR}}{\lambda R}$ in which case the mutual coherence function reduces to the Fourier transform of the brightness distribution of the source. The primary advantage of Hopkins's formula is that one may calculate the mutual coherence function of a source indirectly by measuring its brightness distribution. == Applications of the theorem == === Aperture synthesis === The van Cittert–Zernike theorem is crucial to the measurement of the brightness distribution of a source. With two telescopes, a radio astronomer (or an infrared or submillimeter astronomer) can measure the correlation between the electric field at the two dishes due to some point from the source. By measuring this correlation for many points on the source, the astronomer can reconstruct the visibility function of the source. By applying the van Cittert–Zernike theorem, the astronomer can then take the inverse Fourier transform of the visibility function to discover the brightness distribution of the source. This technique is known as aperture synthesis or synthesis imaging. In practice, radio astronomers rarely recover the brightness distribution of a source by directly taking the inverse Fourier transform of a measured visibility function. Such a process would require a sufficient number of samples to satisfy the Nyquist sampling theorem; this is many more observations than are needed to approximately reconstruct the brightness distribution of the source. Astronomers therefore take advantage of physical constraints on the brightness distribution of astronomical sources to reduce the number of observations which must be made. Because the brightness distribution must be real and positive everywhere, the visibility function cannot take on arbitrary values in unsampled regions. 
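As an idealised illustration of the synthesis-imaging idea described above (and not of any real interferometer pipeline), the sketch below assumes the visibility function is sampled on a complete regular grid, in which case a single inverse FFT recovers the brightness distribution; the model image and grid size are arbitrary choices made for the example.

```python
import numpy as np

# Idealised aperture synthesis: with a fully sampled visibility grid (never true
# for a real interferometer), the brightness map is recovered by an inverse FFT.
n = 128
brightness = np.zeros((n, n))
brightness[40, 50] = 1.0       # two point-like components as a toy sky model
brightness[80, 70] = 0.5

# Forward direction: visibilities are the 2D Fourier transform of the brightness.
visibilities = np.fft.fft2(brightness)

# Inverse direction: recover the brightness map from the (fully sampled) visibilities.
recovered = np.fft.ifft2(visibilities).real

print("max reconstruction error:", np.max(np.abs(recovered - brightness)))
```

Real arrays sample only a sparse set of (u, v) points, which is why the non-linear deconvolution methods mentioned next are needed in practice.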
Thus, a non-linear deconvolution algorithm like CLEAN or Maximum Entropy may be used to approximately reconstruct the brightness distribution of the source from a limited number of observations. === Adaptive optics === The van Cittert–Zernike theorem also places constraints on the sensitivity of an adaptive optics system. In an adaptive optics (AO) system, a distorted wavefront is provided and must be transformed to a distortion-free wavefront. An AO system must make a number of different corrections to remove the distortions from the wavefront. One such correction involves splitting the wavefront into two identical wavefronts and shifting one by some physical distance $s$ in the plane of the wavefront. The two wavefronts are then superimposed, creating a fringe pattern. By measuring the size and separation of the fringes, the AO system can determine phase differences along the wavefront. This technique is known as "shearing." The sensitivity of this technique is limited by the van Cittert–Zernike theorem. If an extended source is imaged, the contrast between the fringes will be reduced by a factor proportional to the Fourier transform of the brightness distribution of the source. The van Cittert–Zernike theorem implies that the mutual coherence of an extended source imaged by an AO system will be the Fourier transform of its brightness distribution. An extended source will therefore change the mutual coherence of the fringes, reducing their contrast. === Free-electron laser === The van Cittert–Zernike theorem can be used to calculate the partial spatial coherence of radiation from a free-electron laser. == See also == Degree of coherence Coherence theory Visibility Hanbury Brown and Twiss effect Bose–Einstein correlations == References == == Bibliography == Born, M. & Wolf, E.: Principles of optics, Pergamon Press, Oxford, 1987, p. 510 Klein, Miles V. & Furtak, Thomas E.: Optics, John Wiley & Sons, New York, 1986, 2nd edition, p. 544–545 == External links == Lecture on the Van Cittert–Zernike-theorem with applications. University of Berkeley, prof. David T. Attwood on YouTube (AST 210/EE 213 Lecture 23)
Wikipedia/Mutual_coherence_function
ABC News, also known as ABC News and Current Affairs, is a public news service produced by the Australian Broadcasting Corporation. The service covers both local and world affairs, broadcasting both nationally as ABC News, and across the Asia-Pacific under the ABC Australia title. The ABC News, Analysis and Investigations division of the organisation is responsible for all news-gathering and coverage across the Australian Broadcasting Corporation's various television, radio, and online platforms. Some of the services included under the auspices of the division are its 24-hour news channel ABC News Australia TV Channel (formerly ABC News 24), the long-running radio news programs AM, The World Today, and PM; ABC NewsRadio, a 24-hour continuous news radio channel; and radio news bulletins and programs on ABC Local Radio, ABC Radio National, ABC Classic FM, and Triple J. ABC News Online has an extensive online presence which includes many written news reports and videos available via ABC Online, an ABC News mobile app (ABC Listen), podcasts, and all of the ABC News television programs available via the video-on-demand platform, ABC iview. As of 2021, the ABC News website includes ABC Sport, ABC Health, ABC Science, ABC Arts & Culture, ABC Fact Check, ABC Environment, and news in other languages. Justin Stevens was appointed director of the division on 4 April 2022. == History == From its inception in 1932 with ABC radio, ABC News sourced its news from multiple sources, including cable news from London, its own bureaus in Europe, the Middle East, Greece and the Asia-Pacific, and, in a fashion similar to commercial radio stations, from local newspapers around Australia. Censorship was rife during the Second World War, particularly after the U.S. entered the conflict on 7 December 1941. After General Douglas MacArthur set up his headquarters in Australia, he wielded enormous power, including on matters of censorship. Inter alia, he declared that every Australian radio station would only broadcast three news bulletins per day and that these would be simultaneous on all stations (ABC and commercial) at 7.45 a.m., midday, and 7.00 p.m. Weather forecasts were banned because it was felt that these might assist the enemy. The 7:45 a.m. bulletin was the only one that did not commence on the hour or the half-hour. It was placed at this timeslot as initially the ABC sourced its news from newspapers in a deal which required that news would not be broadcast earlier, to ensure newspaper sales were not affected. This bulletin continued at this time on ABC Local Radio stations until 19 September 2020, before being cancelled to save costs. Notices were issued banning radio stations from broadcasting some major wartime events, but as the federal government did not have the same power over the printed press as it did over the radio, newspapers usually reported events that radio was not permitted to mention. The ABC launched its first independent news bulletin on 1 June 1947 after years of negotiations with the Australian Government. The Australian Broadcasting Corporation Act 1983 mandates that the ABC "shall develop and maintain an independent service for the broadcasting of news and information" both within Australia on a daily basis, and also to countries outside Australia. The name of the division and the director responsible have changed over the years. In 2004 it was the News and Current Affairs Division, when John Cameron took over as Director from Max Uechtritz. 
The financial year 2008–2009 saw many changes, both in the way that television content was produced and in an "expansion of international news programming and continuous news across platforms, new programs and a range of appointments to senior positions". Kate Torney became director of the News Division in April 2009. In November 2014, a cut of A$254 million to funding over the following five years meant that the ABC would have to shed about 10% of its total staff, around 400 people. There were several programming changes, with regional and local programming losing out to national programs, and the Adelaide TV production studio closed apart from the news and current affairs section. In late 2015 Gaven Morris was appointed Director of the News Division. The ABC announced in November 2016 that their 24-hour television news channel ABC News 24 and ABC NewsRadio would be rebranded under the ABC News division with an updated logo, commencing on 10 April 2017. On that day, ABC News 24 and ABC NewsRadio were both rebranded as ABC NEWS, with a new logo and visual branding. They would be distinguished by context or by descriptors, such as "the ABC News channel" for TV and "ABC News on radio" for radio. Social media accounts would be merged. The Director's role changed its name to Director, News, Analysis & Investigations in 2017–2018, and as of June 2021 Morris was still in the role. During the 2017 to 2018 financial year, the ABC launched the "Regional Connecting Communities" program, which provided funding for increased jobs in the regions, as well as more resources for local news, weather and live reporting. Justin Stevens was appointed director of the division of ABC News, Analysis and Investigations on 31 March 2022. Media executive and producer Kimberly Lynton "Kim" Williams AM was appointed chair of the ABC on 7 March 2024, with the term expected to conclude on 6 March 2029. == Functions == The division is responsible for all news-gathering and production of news output for ABC television, the ABC network of radio stations, and for its online services. In 2018 it was estimated that online ABC news and current affairs reached about 4.8 million users in Australia each month. As of 2021, the ABC News website includes ABC Sport, ABC Health, ABC Science, ABC Arts & Culture, ABC Fact Check, ABC Environment and news in other languages. == Theme music == The news theme used from the first days of ABC television in November 1956 until 1985 was "Majestic Fanfare", composed by Charles Williams. From 1956 until the early 1980s the version used was the abridged version performed by the Queen's Hall Light Orchestra, from a recording made in 1943. Each bulletin opened with a clip from the top story of the day, with the title "ABC News" superimposed over the footage. Later, this on-screen approach was replaced by a generic graphic title sequence. In 1982, to celebrate the ABC's 50th anniversary, a new version of the theme was commissioned, which incorporated both orchestral and new electronic elements. With the exception of a period in the mid-1980s, during which a synthesised theme ("Best Endeavours", written by Alan Hawkshaw, which was the theme for Channel 4 News in the UK) was used for around a year, this was used on radio until August 1988, and on television until early 1985. A reworking of "Majestic Fanfare" (essentially the original orchestration up one tone) was arranged by Richard Mills and recorded in 1988 by the Sydney Symphony Orchestra. 
From 1985, a theme composed by Tony Ansell and Peter Wall was used for 20 years, even after the 1998 brand refresh. In 2010, it was sampled and remixed by the group Pendulum and this revised work went on to be placed #11 on the Triple J Hottest 100 chart on Australia Day 2011. The theme for ABC News changed on Australia Day (26 January) 2005, to a piece written by Martin Armiger and John Gray, and for a couple of years it bore a resemblance to the original Peter Wall / Tony Ansell work in the opening signature notes. Wall challenged the ABC and was successful in reaching an agreement. The opening notes were removed and the work was re-arranged in 2010. The theme music from the 2005–2010 era was remixed by Armiger, giving it a more upbeat, synthesised feel. On 1 July 2022, ABC News used the 1985–2005 theme during the ABC's 90th Anniversary. That theme, by Wall and Ansell, was remixed from the original multi-track studio recording and re-introduced to news bulletins on 19 August 2024. == Television == === History === On 4 March 1985, the ABC refreshed its structure and look, when the 7 o'clock news and the following current affairs program (at that time, Nationwide) were combined to form The National, and moved to 6:30 pm until 8 December 1985. After The National was deemed unsuccessful, the news was refreshed again on 9 December 1985 with a new set, graphics, and theme. In 1998, the set was updated, a new opener featuring a light blue globe and the ABC logo was introduced, and the theme remained the same but was tweaked. The graphics also changed to match the new look. On Australia Day (26 January) 2005, a new look (along with theme music) was introduced. The new look made use of an orange and blue globe motif. At the same time the set and graphics received a major overhaul to fit in with this look. This package was used until 21 July 2010, a day before the launch of ABC News 24. In January 2010, the ABC announced that a dedicated 24-hour digital television news channel, named ABC News 24, would be launched during the year. The new channel commenced preliminary broadcasting with a promo loop in early July 2010, with the ABC re-numbering ABC HD channel 20 to logical channel number 24. The channel was officially launched as ABC News 24 at 7:30 pm Australian Eastern Standard Time on 22 July 2010, and simulcast its first hour of transmission on ABC1. With the launch of ABC News 24 on 22 July 2010, all 7 pm bulletins across Australia had a graphics overhaul to match the look of the new channel. The blue/orange globe style opener was replaced with a series of sliding panels, featuring images specific to each state. New sets were built in each capital city studio to match the ABC News 24 set and graphics were changed to match. === ABC News channel === News bulletins such as News Breakfast, ABC News Mornings, ABC News at Noon, ABC News Day, ABC News Afternoons, The World, ABC Late News and Weekend Breakfast are aired on the ABC News channel along with its own 30- and 15-minute hourly bulletins. === National bulletins and programs === National news updates are presented on ABC TV throughout the day, with evening updates at 7 pm presented live in most states by the respective state news presenters. Bulletins focus strongly on issues of state relevance, with a greater inclusion of national and international news items than are found in the news bulletins of commercial broadcasters. A national financial bulletin is presented on weeknights by Alan Kohler in Melbourne. 
The ABC's Ultimo studios produce the 8:30 pm weeknight update presented by Joe O'Brien. News Breakfast is broadcast on weekdays from 6 am – 9 am on ABC TV and the ABC News channel from ABC's Melbourne studio and is presented by James Glenday and Bridget Brennan, news presenter Emma Rebellato, sport presenter Catherine Murphy and weather presenter Nate Byrne. The program is also shown online and on ABC Australia in the Asia Pacific region. Weekend Breakfast is broadcast on weekends from 7 am – 11 am on ABC TV and the ABC News channel from ABC's main national news studio in Sydney at Ultimo and is presented by Johanna Nicholson and Fauziah Ibrahim. ABC News Mornings is presented by Gemma Veness and Dan Bourchier from the ABC's main national news studio in Sydney at Ultimo, and airs weekdays at 9 am on ABC TV and on the ABC News channel. Sport is presented by Tony Armstrong and weather is presented by Nate Byrne, both from the Melbourne studios. ABC News at Noon (launched in February 2005 to replace the less successful Midday News and Business, preceded in turn by the long-running World at Noon) is presented by Ros Childs (weekdays) and Dan Bourchier (weekends) from the ABC's main national news studio in the Sydney suburb of Ultimo, and airs on ABC TV and ABC News channel in each Australian state and territory at midday Australian Eastern Standard/Daylight Time. A separate edition of the bulletin is produced for Western Australia two to three hours after the original broadcast, as the time delay was deemed too long to remain up-to-date. 7.30 is presented by Sarah Ferguson from the ABC's main national news studio in Parramatta, Sydney on ABC TV at 7:30 pm, weeknights. However, when a big state political event happens, the national program can be pre-empted by the local edition. ABC Late News is presented by Jade Barker (Sunday–Thursday) and Craig Smart (Friday–Saturday), and is broadcast on ABC TV at 10:20 pm (eastern time). A separate edition is presented from Perth for Western Australia, also by Jade Barker, on ABC at 10:30 pm (western time) and then on the ABC News channel at 11 pm (eastern time) and 12:30 am. Later, they also host 15-minute News Overnight bulletins. Other news and current affairs programs broadcast nationally include Afternoon Briefing, ABC News at Five, 7.30, Insiders, Four Corners, Behind the News, Q&A, Landline, Offsiders, One Plus One, The Business, The World, Australian Story, Foreign Correspondent, Media Watch and Australia Wide. === State bulletins === ABC News ACT is presented from the ABC's Dickson studio by Greg Jennett from Sunday to Thursday and Adrienne Francis on Friday and Saturday. ABC News New South Wales is presented from the ABC's Parramatta studio (ABN) by Jeremy Fernandez from Sunday to Thursday and Lydia Feng and Nakari Thorpe on Friday and Saturday. Weather is presented by Tom Saunders on weeknights. ABC News Northern Territory is presented from the ABC's Darwin studio (ABD) by Kyle Dowling from Sunday to Thursday and Isabella Tolhurst on Friday and Saturday. ABC News Queensland is presented from the ABC's Queensland headquarters (ABQ) on Brisbane's South Bank by Jessica van Vonderen from Monday to Thursday and Lexy Hamilton-Smith from Friday to Sunday. Weather is presented by Jenny Woodward from Sunday to Thursday. ABC News South Australia is presented from the ABC's Collinswood studio (ABS) by Jessica Harmsen from Monday to Thursday and Richard Davies or Candice Prosser from Friday to Sunday. 
ABC News Tasmania is presented from the ABC's Hobart studio (ABT) by Guy Stayner on weeknights and Alexandra Alvaro on weekends. ABC News Victoria is presented from ABC Victoria's Southbank studio (ABV) by Tamara Oudyn from Sunday to Thursday and Iskhandar Razak on Friday and Saturday. Weather is presented by Dr Adam Morgan from Monday to Thursday. ABC News Western Australia is presented from ABC WA's East Perth studio by Pamela Medlen from Monday to Thursday and Charlotte Hamlyn from Friday to Sunday. === ABC Australia === News and current affairs programs are also broadcast on ABC Australia, a channel broadcast to the region outside Australia. These include Four Corners, 7:30 and Q+A. === Online === ABC news television programs are available via the video-on-demand platform, ABC iview. == Radio == ABC NewsRadio is a radio station dedicated to news and current affairs. ABC Radio Australia, which covers the Asia-Pacific region, broadcasts regular bulletins produced in Melbourne, featuring reports from foreign correspondents in the region. === National bulletins === ABC Classic FM broadcasts state bulletins every hour from 6 am until noon and then every 2 hours on the hour. Non-local streams of ABC Radio National broadcast national bulletins every hour, 24 hours a day. National youth radio station triple j broadcasts its own bulletins between 6:00 am and 6:00 pm on weekdays, and between 7:00 am and noon on weekends. === State bulletins === State bulletins are produced by the ABC Local Radio station from the capital city of each state and mainland territory. They are broadcast to all ABC Local Radio and ABC Radio National stations in each state, and focus strongly on issues of state relevance, but also feature national and international stories. National bulletins air when state bulletins are not produced. ABC Local Radio stations broadcast a flagship 15-minute state bulletin at 7:45 am, the only bulletin still introduced by the 18-second version of Majestic Fanfare. All other bulletins are introduced by a 9-second version of Majestic Fanfare. ABC Radio National and ABC Classic FM stations do not broadcast the 7:45 am bulletin, instead broadcasting an ordinary 8:00 am state bulletin and a 10-minute 7 am bulletin respectively, and continue to broadcast bulletins every hour when Local Radio stations broadcast bulletins every 30 minutes in the early morning. ABC News ACT is broadcast at 5:30 am and on the hour between 6 am and 10 pm each day from the studios of ABC Radio Canberra. ABC News New South Wales is broadcast at 5:30 am and on the hour between 6 am and 10 pm each day from the studios of ABC Radio Sydney. ABC News Northern Territory is broadcast at 5:30 am and on the hour between 6 am and 10 pm each day from the studios of ABC Radio Darwin. ABC News Queensland is broadcast at 5:30 am and on the hour between 6 am and 7 pm on weekdays from the studios of ABC Radio Brisbane. Weekend bulletins are broadcast on the hour between 6 am and midday. ABC News South Australia is broadcast at 5:30 am and on the hour between 6 am and 10 pm on weekdays from the studios of ABC Radio Adelaide. Weekend bulletins are broadcast on the hour between 6 am and midday. ABC News Tasmania is broadcast at 5:30 am and on the hour between 6 am and 9 pm on weekdays from the studios of ABC Radio Hobart. Weekend bulletins are broadcast on the hour between 6 am and midday. ABC News Victoria is broadcast at 5:30 am and on the hour between 6 am and 10 pm on weekdays from the studios of ABC Radio Melbourne. 
Weekend bulletins are broadcast on the hour between 6 am and midday. ABC News Western Australia is produced by ABC Radio Perth. Weekday bulletins are broadcast every 30 minutes between 5:00 am and 7:00 am, then at 7:45 am, then at 9:00 am, then every hour until 10:00 pm. Weekend bulletins are broadcast every 30 minutes between 6:00 am and 7:00 am, then at 7:45 am, then at 9:00 am, then every hour until 1:00 pm. === Current affairs === ABC News produces several current affairs programs for radio. All share a quasi-magazine format, and investigate stories in greater depth compared to news bulletins. AM is broadcast in three editions – a 10-minute edition at 6:05 am on ABC Local Radio, a 20-minute edition at 7:10 am on ABC Radio National, and a flagship 30 minute edition at 8:00 am on ABC Local Radio. The World Today is broadcast in one edition – a 30-minute edition at 12:10 pm on ABC Local Radio and ABC Radio National. PM is broadcast in two editions – a 30-minute edition at 5:00 pm on ABC Radio National, and a flagship 30 minute edition at 6:30 pm on ABC Local Radio. === Programs === Other news-related, factual and current affairs programs broadcast by the various radio stations of the ABC Radio network include: Sunday Extra, incorporating Background Briefing and Ockham's Razor, hosted by Julian Morrow (replacing Correspondents Report), RN Breakfast Late Night Live Hack Nightlife Awaye! Country Breakfast === Online === All ABC radio stations are available via an ABC News mobile app, ABC Listen, from which podcasts are also available. == Awards == In March 2024, ABC News won the 2023 Gold Lizzie for Best Title at the IT Journalism Awards, a shared honour between ABC News Story Lab and ABC Radio. ABC News also won Best Gaming Coverage and Best News Coverage. The Best News Coverage award was for three stories about data breaches affecting Australians: "See your identity pieced together from stolen data" (by Julian Fell, Ben Spraggon, and Matt Liddy) "Why the FBI calls this Gold Coast man when it finds a trove of stolen data" (by Julian Fell, with photography by Tim Leslie) "This is the most detailed portrait yet of data breaches in Australia" (by Julian Fell, Georgina Piper, and Matt Liddy) == Presenters == National ACT New South Wales Northern Territory Queensland South Australia Tasmania Victoria Western Australia Other Past == Notes == == References == == Further reading == ABC Bureaux and Foreign Correspondents 50 Years of ABC TV News and Current Affairs == External links == Official website News on iview
Wikipedia/ABC_Science
Applied Physics B: Lasers & Optics is a peer-reviewed scientific journal published by Springer Science+Business Media. The editor-in-chief is Jacob Mackenzie (University of Southampton). Topical coverage includes laser physics, optical & laser materials, linear optics, nonlinear optics, quantum optics, and photonic devices. Interest also includes laser spectroscopy pertaining to atoms, molecules, and clusters. The journal publishes original research articles, invited reviews, and rapid communications. == History == The journal Applied Physics was originally conceived and founded in 1972 by Helmut K.V. Lotsch at Springer-Verlag Berlin Heidelberg New York. Lotsch edited the journal up to volume 25 and thereafter split it into the two parts A 26 (Solids and Surfaces) and B 26 (Photophysics and Laser Chemistry). He continued his editorship up to volumes A 61 and B 61. Starting in 1995 the two journal parts were continued under separate editorships: Applied Physics B: Photophysics and Laser Chemistry (ISSN 0721-7269), in existence from September 1981 (volume B: 26 no. 1) to December 1993 (volume B: 57 no. 6). It partly continues Applied Physics (ISSN 0340-3793), in existence from January 1973 (volume 1 no. 1) to August 1981 (volume 25 no. 4). == Abstracting and indexing == The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2023 impact factor of 2.0. == References == == External links == Official website
Wikipedia/Applied_Physics_B
The dynamical theory of diffraction describes the interaction of waves with a regular lattice. The wave fields traditionally described are X-rays, neutrons or electrons, and the regular lattices are atomic crystal structures or nanometer-scale multi-layers or self-arranged systems. In a wider sense, similar treatment is related to the interaction of light with optical band-gap materials or related wave problems in acoustics. The sections below deal with dynamical diffraction of X-rays. == Principle == The dynamical theory of diffraction considers the wave field in the periodic potential of the crystal and takes into account all multiple scattering effects. Unlike the kinematic theory of diffraction, which describes the approximate position of Bragg or Laue diffraction peaks in reciprocal space, dynamical theory corrects for refraction, shape and width of the peaks, extinction and interference effects. Graphical representations are described in dispersion surfaces around reciprocal lattice points which fulfill the boundary conditions at the crystal interface. == Outcomes == The crystal potential by itself leads to refraction and specular reflection of the waves at the interface to the crystal and delivers the refractive index off the Bragg reflection. It also corrects for refraction at the Bragg condition and combined Bragg and specular reflection in grazing incidence geometries. A Bragg reflection is the splitting of the dispersion surface at the border of the Brillouin zone in reciprocal space. There is a gap between the dispersion surfaces in which no travelling waves are allowed. For a non-absorbing crystal, the reflection curve shows a range of total reflection, the so-called Darwin plateau. Regarding the quantum mechanical energy of the system, this leads to the band gap structure which is commonly well known for electrons. Upon Laue diffraction, intensity is shuffled from the forward diffracted beam into the Bragg diffracted beam until extinction. The diffracted beam itself fulfills the Bragg condition and shuffles intensity back into the primary direction. This round-trip period is called the Pendellösung period. The extinction length is related to the Pendellösung period. Even if a crystal is infinitely thick, only the crystal volume within the extinction length contributes considerably to the diffraction in Bragg geometry. In Laue geometry, beam paths lie within the Borrmann triangle. Kato fringes are the intensity patterns due to Pendellösung effects at the exit surface of the crystal. Anomalous absorption effects take place due to the standing wave patterns of the two wave fields. Absorption is stronger if the standing wave has its anti-nodes on the lattice planes, i.e. where the absorbing atoms are, and weaker if the anti-nodes are shifted between the planes. The standing wave shifts from one condition to the other on each side of the Darwin plateau, which gives the latter an asymmetric shape. == Applications == Applications include X-ray diffraction, neutron diffraction, electron diffraction and transmission electron microscopy, structure determination in crystallography, grazing incidence diffraction, X-ray standing waves, neutron and X-ray interferometry, synchrotron crystal optics, neutron and X-ray diffraction topography, X-ray imaging, crystal monochromators, and electronic band structures. == See also == Volume hologram == Further reading == J. Als-Nielsen, D. McMorrow: Elements of Modern X-ray physics. Wiley, 2001 (chapter 5: diffraction by perfect crystals). André Authier: Dynamical theory of X-ray diffraction. 
IUCr monographs on crystallography, no. 11. Oxford University Press (1st edition 2001/ 2nd edition 2003). ISBN 0-19-852892-2. R. W. James: The Optical Principles of the Diffraction of X-rays. Bell., 1948. M. von Laue: Röntgenstrahlinterferenzen. Akademische Verlagsanstalt, 1960 (German). Z. G. Pinsker: Dynamical Scattering of X-Rays in Crystals. Springer, 1978. B. E. Warren: X-ray diffraction. Addison-Wesley, 1969 (chapter 14: perfect crystal theory). W. H. Zachariasen: Theory of X-ray Diffraction in Crystals. Wiley, 1945. Boris W. Batterman, Henderson Cole: Dynamical Diffraction of X Rays by Perfect Crystals. Reviews of Modern Physics, Vol. 36, No. 3, 681-717, July 1964. H. Rauch, D. Petrascheck, “Grundlagen für ein Laue-Neutroneninterferometer Teil 1: Dynamische Beugung”, AIAU 74405b, Atominstitut der Österreichischen Universitäten, (1976) H. Rauch, D. Petrascheck, “Dynamical neutron diffraction and its application” in “Neutron Diffraction”, H. Dachs, Editor. (1978), Springer-Verlag: Berlin Heidelberg New York. p. 303. K.-D. Liss: "Strukturelle Charakterisierung und Optimierung der Beugungseigenschaften von Si(1-x)Ge(x) Gradientenkristallen, die aus der Gasphase gezogen wurden", Dissertation, Rheinisch Westfälische Technische Hochschule Aachen, (27 October 1994), urn:nbn:de:hbz:82-opus-2227
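The refractive-index statement in the Outcomes section above can be made concrete with a short numerical sketch. The snippet below is an illustration added here, not part of the encyclopedia text: it uses the standard X-ray optics relation n = 1 − δ with δ = r_e λ² n_e / (2π), and silicon at 8 keV with its usual lattice constant is an assumed example.

import math

# Illustrative numbers, not from the article: refractive-index decrement delta and
# critical angle for total external reflection of X-rays, using n = 1 - delta and
# delta = r_e * lambda^2 * n_e / (2*pi). Silicon at 8 keV is an assumed example.
r_e = 2.818e-15        # classical electron radius, m
hc = 1.23984e-6        # Planck constant times speed of light, eV*m

energy_eV = 8000.0                 # assumed photon energy (about Cu K-alpha)
wavelength = hc / energy_eV        # ~1.55e-10 m

a_si = 5.431e-10                   # silicon lattice constant, m (assumed example)
n_e = 8 * 14 / a_si**3             # electrons per m^3: 8 atoms per cell, Z = 14

delta = r_e * wavelength**2 * n_e / (2 * math.pi)   # ~7.5e-6
theta_c = math.sqrt(2 * delta)                      # critical angle, rad

print(f"refractive index n = 1 - {delta:.1e}")
print(f"critical angle for total external reflection = {math.degrees(theta_c):.2f} degrees")

The printed index differs from unity only in the sixth decimal place, which is why refraction corrections matter mainly very close to a Bragg reflection or at grazing incidence.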
Wikipedia/Dynamical_theory
High-energy X-rays or HEX-rays are very hard X-rays, with typical energies of 80–1000 keV (1 MeV), about one order of magnitude higher than conventional X-rays used for X-ray crystallography (and well into gamma-ray energies over 120 keV). They are produced at modern synchrotron radiation sources such as the Cornell High Energy Synchrotron Source, SPring-8, and the beamlines ID15 and BM18 at the European Synchrotron Radiation Facility (ESRF). The main benefit is the deep penetration into matter, which makes them a probe for thick samples in physics and materials science and permits an in-air sample environment and operation. Scattering angles are small, and the forward-directed diffraction allows for simple detector setups (a short numerical illustration is given at the end of this entry). High-energy (megavolt) X-rays are also used in cancer therapy, using beams generated by linear accelerators to treat tumors. == Advantages == High-energy X-rays (HEX-rays) between 100 and 300 keV have several advantages over conventional hard X-rays, which lie in the range of 5–20 keV. They can be listed as follows: High penetration into materials due to a strongly reduced photo-absorption cross-section. The photo-absorption strongly depends on the atomic number of the material and the X-ray energy. Several-centimeter-thick volumes can be accessed in steel and millimeters in lead-containing samples. No radiation damage to the sample; such damage can pin incommensurations or destroy the chemical compound to be analyzed. The Ewald sphere has a curvature ten times smaller than in the low-energy case, which allows whole regions of the reciprocal lattice to be mapped, similar to electron diffraction. Access to diffuse scattering, which is absorption- rather than extinction-limited at low energies, while volume enhancement takes place at high energies. Complete 3D maps over several Brillouin zones can be easily obtained. High momentum transfers are naturally accessible due to the high momentum of the incident wave. This is of particular importance for studies of liquid, amorphous and nanocrystalline materials as well as for pair distribution function analysis. Realization of the materials oscilloscope. Simple diffraction setups due to operation in air. Diffraction in the forward direction for easy registration with a 2D detector. Forward scattering and penetration make sample environments easy and straightforward. Negligible polarization effects due to relatively small scattering angles. Special non-resonant magnetic scattering. LLL (triple Laue) interferometry. Access to high-energy spectroscopic levels, both electronic and nuclear. Neutron-like but complementary studies, combined with high-precision spatial resolution. Cross-sections for Compton scattering are similar to coherent scattering or absorption cross-sections. == Applications == With these advantages, HEX-rays can be applied for a wide range of investigations. An overview, which is far from complete: Structural investigations of real materials, such as metals, ceramics, and liquids. In particular, in-situ studies of phase transitions at elevated temperatures up to the melting point of any metal. Phase transitions, recovery, chemical segregation, recrystallization, twinning and domain formation are a few aspects to follow in a single experiment. Materials in chemical or operating environments, such as electrodes in batteries, fuel cells, high-temperature reactors, electrolytes etc. The penetration and a well-collimated pencil beam allow focusing on the region and material of interest while it undergoes a chemical reaction.
Study of 'thick' layers, such as the oxidation of steel in its production and rolling process, which are too thick for classical reflectometry experiments. Interfaces and layers in complicated environments, such as the intermetallic reaction of Zincalume surface coating on industrial steel in the liquid bath. In situ studies of industry-like strip casting processes for light metals. A casting rig can be set up on a beamline and probed with the HEX-ray beam in real time. Bulk studies in single crystals, as distinct from studies of near-surface regions to which conventional X-rays are limited by their penetration. It has been found and confirmed in almost all studies that critical scattering and correlation lengths are strongly affected by the proximity of the surface. Combination of neutron and HEX-ray investigations on the same sample, such as contrast variations due to the different scattering lengths. Residual stress analysis in the bulk with unique spatial resolution in centimeter-thick samples, in situ under realistic load conditions. In-situ studies of thermo-mechanical deformation processes such as forging, rolling, and extrusion of metals. Real-time texture measurements in the bulk during a deformation, phase transition or annealing, such as in metal processing. Structures and textures of geological samples, which may contain heavy elements and be thick. High-resolution triple-crystal diffraction for the investigation of single crystals with all the advantages of high penetration and studies from the bulk. Compton spectroscopy for the investigation of the momentum distribution of the valence electron shells. Imaging and tomography with high energies. Dedicated sources can be strong enough to obtain 3D tomograms in a few seconds. Combination of imaging and diffraction is possible due to simple geometries, for example tomography combined with residual stress measurement or structural analysis. == See also == == Notes == == References == == Further reading == Liss, Klaus-Dieter; Bartels, Arno; Schreyer, Andreas; Clemens, Helmut (2003). "High-Energy X-Rays: A tool for Advanced Bulk Investigations in Materials Science and Physics". Textures and Microstructures. 35 (3–4): 219–252. doi:10.1080/07303300310001634952. Benmore, C. J. (2012). "A Review of High-Energy X-Ray Diffraction from Glasses and Liquids". ISRN Materials Science. 2012: 1–19. doi:10.5402/2012/852905. Eberhard Haug; Werner Nakel (2004). The elementary process of Bremsstrahlung. World Scientific Lecture Notes in Physics. Vol. 73. River Edge, NJ: World Scientific. ISBN 978-981-238-578-9. == External links == Liss, Klaus-Dieter; et al. (2006). "Recrystallization and phase transitions in a γ-Ti Al-based alloy as observed by ex situ and in situ high-energy X-ray diffraction". Acta Materialia. 54 (14): 3721–3735. Bibcode:2006AcMat..54.3721L. doi:10.1016/j.actamat.2006.04.004.
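As a rough numerical illustration of the forward-directed scattering mentioned in the introduction above (added here, not part of the encyclopedia text), the sketch below compares the Bragg angle for a typical 2 Å lattice-plane spacing at a conventional 10 keV beam and at a 100 keV high-energy beam, using λ[Å] ≈ 12.398 / E[keV] and nλ = 2d sin θ. The specific spacing and energies are assumed example values.

import math

def bragg_angle_deg(energy_keV: float, d_angstrom: float, n: int = 1) -> float:
    """Bragg angle in degrees for diffraction order n, given photon energy and d-spacing."""
    wavelength = 12.398 / energy_keV          # photon wavelength in Angstrom
    return math.degrees(math.asin(n * wavelength / (2.0 * d_angstrom)))

d = 2.0  # assumed typical lattice-plane spacing in Angstrom
for energy in (10.0, 100.0):
    theta = bragg_angle_deg(energy, d)
    print(f"{energy:5.0f} keV: lambda = {12.398/energy:.3f} A, "
          f"theta = {theta:5.2f} deg, scattering angle 2*theta = {2*theta:5.2f} deg")
# Roughly 36 deg at 10 keV versus about 3.6 deg at 100 keV: high-energy diffraction is
# concentrated in the forward direction, which is why flat 2D detectors placed downstream
# can collect whole patterns in simple transmission geometry.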
Wikipedia/High_energy_X-rays
In many areas of science, Bragg's law — also known as Wulff–Bragg's condition or Laue–Bragg interference — is a special case of Laue diffraction that gives the angles for coherent scattering of waves from a large crystal lattice. It describes how the superposition of wave fronts scattered by lattice planes leads to a strict relation between the wavelength and scattering angle. This law was initially formulated for X-rays, but it also applies to all types of matter waves including neutron and electron waves if there are a large number of atoms, as well as to visible light with artificial periodic microscale lattices. == History == Bragg diffraction (also referred to as the Bragg formulation of X-ray diffraction) was first proposed by Lawrence Bragg and his father, William Henry Bragg, in 1913 after their discovery that crystalline solids produced surprising patterns of reflected X-rays (in contrast to those produced with, for instance, a liquid). They found that these crystals, at certain specific wavelengths and incident angles, produced intense peaks of reflected radiation. Lawrence Bragg explained this result by modeling the crystal as a set of discrete parallel planes separated by a constant parameter d. He proposed that the incident X-ray radiation would produce a Bragg peak if reflections off the various planes interfered constructively. The interference is constructive when the phase difference between the wave reflected off different atomic planes is a multiple of 2π; this condition (see Bragg condition section below) was first presented by Lawrence Bragg on 11 November 1912 to the Cambridge Philosophical Society. Although simple, Bragg's law confirmed the existence of real particles at the atomic scale, as well as providing a powerful new tool for studying crystals. Lawrence Bragg and his father, William Henry Bragg, were awarded the Nobel Prize in physics in 1915 for their work in determining crystal structures beginning with NaCl, ZnS, and diamond. They are the only father-son team to jointly win. The concept of Bragg diffraction applies equally to neutron diffraction and approximately to electron diffraction. In both cases the wavelengths are comparable with inter-atomic distances (~150 pm). Many other types of matter waves have also been shown to diffract, and also light from objects with a larger ordered structure such as opals. == Bragg condition == Bragg diffraction occurs when radiation of a wavelength λ comparable to atomic spacings is scattered in a specular fashion (mirror-like reflection) by planes of atoms in a crystalline material, and undergoes constructive interference. When the scattered waves are incident at a specific angle, they remain in phase and constructively interfere. The glancing angle θ (see figure on the right, and note that this differs from the convention in Snell's law where θ is measured from the surface normal), the wavelength λ, and the "grating constant" d of the crystal are connected by the relation nλ = 2d sin θ, where n is the diffraction order (n = 1 is first order, n = 2 is second order, n = 3 is third order). This equation, Bragg's law, describes the condition on θ for constructive interference. A map of the intensities of the scattered waves as a function of their angle is called a diffraction pattern.
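As a worked illustration of this relation (the numbers are an example added here, not part of the encyclopedia text): for Cu Kα radiation with λ ≈ 1.54 Å incident on the (200) planes of rock salt (NaCl, a ≈ 5.64 Å, so d = a/2 ≈ 2.82 Å), the first-order condition gives sin θ = λ/(2d) ≈ 0.27, hence θ ≈ 15.9° and a scattering angle 2θ ≈ 31.7°, which is where the strong (200) line of NaCl powder patterns is indeed observed.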
Strong intensities known as Bragg peaks are obtained in the diffraction pattern when the scattering angles satisfy the Bragg condition. This is a special case of the more general Laue equations, and the Laue equations can be shown to reduce to the Bragg condition with additional assumptions. == Derivation == In Bragg's original paper he describes his approach as a Huygens' construction for a reflected wave. Suppose that a plane wave (of any type) is incident on planes of lattice points, with separation d, at an angle θ as shown in the Figure. Points A and C are on one plane, and B is on the plane below. Points ABCC' form a quadrilateral. There will be a path difference between the ray that gets reflected along AC' and the ray that gets transmitted along AB, then reflected along BC. This path difference is (AB + BC) − AC'. The two separate waves will arrive at a point (infinitely far from these lattice planes) with the same phase, and hence undergo constructive interference, if and only if this path difference is equal to an integer number of wavelengths, i.e. nλ = (AB + BC) − AC', where n and λ are an integer and the wavelength of the incident wave, respectively. Therefore, from the geometry, AB = BC = d / sin θ and AC = 2d / tan θ, from which it follows that AC' = AC · cos θ = (2d / tan θ) cos θ = (2d cos θ / sin θ) cos θ = (2d / sin θ) cos² θ. Putting everything together, nλ = 2d / sin θ − (2d / sin θ) cos² θ = (2d / sin θ)(1 − cos² θ) = (2d / sin θ) sin² θ, which simplifies to nλ = 2d sin θ, which is Bragg's law shown above. If only two planes of atoms were diffracting, as shown in the Figure, then the transition from constructive to destructive interference would be gradual as a function of angle, with gentle maxima at the Bragg angles. However, since many atomic planes are participating in most real materials, sharp peaks are typical. A rigorous derivation from the more general Laue equations is available (see page: Laue equations). == Beyond Bragg's law == The Bragg condition is correct for very large crystals. Because the scattering of X-rays and neutrons is relatively weak, in many cases quite large crystals with sizes of 100 nm or more are used. While there can be additional effects due to crystal defects, these are often quite small. In contrast, electrons interact thousands of times more strongly with solids than X-rays, and also lose energy (inelastic scattering). Therefore, samples used in transmission electron diffraction are much thinner. Typical diffraction patterns, for instance the Figure, show spots for different directions (plane waves) of the electrons leaving a crystal.
The angles that Bragg's law predicts are still approximately right, but in general there is a lattice of spots which are close to projections of the reciprocal lattice that is at right angles to the direction of the electron beam. (In contrast, Bragg's law predicts that only one or perhaps two would be present, not simultaneously tens to hundreds.) With low-energy electron diffraction, where the electron energies are typically 30–1000 electron volts, the result is similar, with the electrons reflected back from a surface. Also similar is reflection high-energy electron diffraction, which typically leads to rings of diffraction spots. With X-rays, the effect of having small crystals is described by the Scherrer equation. This leads to broadening of the Bragg peaks, which can be used to estimate the size of the crystals. == Bragg scattering of visible light by colloids == A colloidal crystal is a highly ordered array of particles that forms over a long range (from a few millimeters to one centimeter in length); colloidal crystals have appearance and properties roughly analogous to their atomic or molecular counterparts. It has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations, with interparticle separation distances often being considerably greater than the individual particle diameter. Periodic arrays of spherical particles give rise to interstitial voids (the spaces between the particles), which act as a natural diffraction grating for visible light waves, when the interstitial spacing is of the same order of magnitude as the incident lightwave. In these cases brilliant iridescence (or play of colours) is attributed to the diffraction and constructive interference of visible lightwaves according to Bragg's law, in a manner analogous to the scattering of X-rays in crystalline solids. The effects occur at visible wavelengths because the interplanar spacing d is much larger than for true crystals. Precious opal is one example of a colloidal crystal with optical effects. == Volume Bragg gratings == Volume Bragg gratings (VBG) or volume holographic gratings (VHG) consist of a volume where there is a periodic change in the refractive index. Depending on the orientation of the refractive index modulation, VBG can be used either to transmit or reflect a small bandwidth of wavelengths. Bragg's law (adapted for volume holograms) dictates which wavelength will be diffracted: 2Λ sin(θ + φ) = mλB, where m is the Bragg order (a positive integer), λB the diffracted wavelength, Λ the fringe spacing of the grating, θ the angle between the incident beam and the normal (N) of the entrance surface, and φ the angle between the normal and the grating vector (KG). Radiation that does not match Bragg's law will pass through the VBG undiffracted. The output wavelength can be tuned over a few hundred nanometers by changing the incident angle (θ). VBG are being used to produce widely tunable laser sources or to perform global hyperspectral imagery (see Photon etc.). == Selection rules and practical crystallography == The measurement of the angles can be used to determine crystal structure; see X-ray crystallography for more details.
As a simple example, Bragg's law, as stated above, can be used to obtain the lattice spacing of a particular cubic system through the following relation: d = a / √(h² + k² + ℓ²), where a is the lattice spacing of the cubic crystal, and h, k, and ℓ are the Miller indices of the Bragg plane. Combining this relation with Bragg's law gives: (λ / 2a)² = (λ / 2d)² · 1 / (h² + k² + ℓ²). One can derive selection rules for the Miller indices for different cubic Bravais lattices as well as many others; a few of the most common are: simple cubic allows all h, k, ℓ; body-centred cubic requires h + k + ℓ to be even; face-centred cubic requires h, k, ℓ to be all odd or all even; and the diamond structure requires them to be all odd, or all even with h + k + ℓ divisible by 4. These selection rules can be used for any crystal with the given crystal structure. KCl has a face-centered cubic Bravais lattice. However, the K+ and the Cl− ion have the same number of electrons and are quite close in size, so that the diffraction pattern becomes essentially the same as for a simple cubic structure with half the lattice parameter. Selection rules for other structures can be referenced elsewhere, or derived. Lattice spacings for the other crystal systems can be found in standard references. == See also == Bragg plane Crystal lattice Diffraction Distributed Bragg reflector Fiber Bragg grating Dynamical theory of diffraction Electron diffraction Georg Wulff Henderson limit Laue conditions Powder diffraction Radar angels Structure factor X-ray crystallography == References == == Further reading == Neil W. Ashcroft and N. David Mermin, Solid State Physics (Harcourt: Orlando, 1976). Bragg W (1913). "The Diffraction of Short Electromagnetic Waves by a Crystal". Proceedings of the Cambridge Philosophical Society. 17: 43–57. == External links == Nobel Prize in Physics – 1915 https://web.archive.org/web/20110608141639/http://www.physics.uoguelph.ca/~detong/phys3510_4500/xray.pdf Learning crystallography
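The cubic-lattice relation and selection rules above lend themselves to a short computational sketch. The snippet below is an illustration added here, not part of the encyclopedia text; the silicon lattice constant and the Cu Kα wavelength are assumed example values, and the selection rule encoded is the textbook diamond-structure rule mentioned above.

import math
from itertools import product

a = 5.431            # lattice constant in Angstrom (silicon, assumed example)
wavelength = 1.5406  # X-ray wavelength in Angstrom (Cu K-alpha, assumed example)

def allowed_diamond(h, k, l):
    """Diamond-structure rule: h, k, l all odd, or all even with h + k + l divisible by 4."""
    parities = {h % 2, k % 2, l % 2}
    return parities == {1} or (parities == {0} and (h + k + l) % 4 == 0)

lines = {}
for h, k, l in product(range(5), repeat=3):
    if (h, k, l) == (0, 0, 0) or not allowed_diamond(h, k, l):
        continue
    d = a / math.sqrt(h * h + k * k + l * l)           # cubic d-spacing
    if wavelength / (2 * d) <= 1.0:                    # reflection reachable at this wavelength
        lines.setdefault(round(d, 4), tuple(sorted((h, k, l), reverse=True)))

for d, hkl in sorted(lines.items(), reverse=True):
    two_theta = 2 * math.degrees(math.asin(wavelength / (2 * d)))
    print(f"{hkl}: d = {d:.3f} A, 2*theta = {two_theta:.2f} deg")
# Prints (1,1,1) near 28.4 deg, (2,2,0) near 47.3 deg, (3,1,1) near 56.1 deg, and so on,
# matching the familiar low-order lines of a silicon powder pattern.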
Wikipedia/Bragg_equation
Photographic emulsion is a light-sensitive colloid used in film-based photography. Most commonly, in silver-gelatin photography, it consists of silver halide crystals dispersed in gelatin. The emulsion is usually coated onto a substrate of glass, film (of cellulose nitrate, cellulose acetate or polyester), paper, or fabric. The substrate is often flexible and known as a film base. Photographic emulsion is not a true emulsion, but a suspension of solid particles (silver halide) in a fluid (gelatin in solution). However, the word emulsion is customarily used in a photographic context. Gelatin or gum arabic layers sensitized with dichromate, as used in the dichromated colloid processes of carbon and gum bichromate printing, are sometimes also called emulsions. Some processes do not have emulsions, such as platinum, cyanotype, salted paper, or kallitype. == Components == Photographic emulsion is a fine suspension of insoluble light-sensitive crystals in a colloid sol, usually consisting of gelatin. The light-sensitive component is one of, or a mixture of, the silver halides: silver bromide, chloride and iodide. The gelatin is used as a permeable binder, allowing processing agents (e.g., developer, fixer, toners, etc.) in aqueous solution to enter the colloid without dislodging the crystals. Other polymeric macromolecules are often blended in, but gelatin has not been entirely replaced. The light-exposed crystals are reduced by the developer to black metallic silver particles that form the image. Color films and papers have multiple layers of emulsion, made sensitive to different parts of the visible spectrum by different color sensitizers, and incorporating different dye couplers which produce superimposed yellow, magenta and cyan dye images during development. Panchromatic black-and-white film also includes color sensitizers, but as part of a single emulsion layer. == Manufacture == A solution of silver nitrate is mixed into a warm gelatin solution containing potassium bromide, sodium chloride or other alkali metal halides. A reaction precipitates fine crystals of insoluble silver halides that are light-sensitive. The silver halide is actually being 'peptized' by the gelatin. The type and quantity of gelatin used influences the final emulsion's properties. A pH buffer, crystal habit modifier, metal dopants, ripener, ripening restrainer, surfactants, defoamer, emulsion stabilizer and biocide are also used in emulsion making. Most modern emulsions are "washed" to remove some of the reaction byproducts (potassium nitrate and excess salts). The "washing" or desalting step can be performed by ultrafiltration, dialysis, coagulation (using acylated gelatin), or a classic noodle washing method. Emulsion making also incorporates steps to increase sensitivity by using chemical sensitizing agents and sensitizing dyes. == See also == Nuclear emulsion Tintype Photoresist == References == Reilly, James M. (1986). Care and Identification of 19th-Century Photographic Prints. Rochester, NY, USA: Eastman Kodak. == Further reading == "Film Emulsion Codes" (PDF). 1.14. evertz. 2012-05-01. Archived (PDF) from the original on 2017-06-09. Retrieved 2019-06-09. == External links == Contemporary handcrafted silver gelatin emulsions Working with liquid photographic emulsion in a nutshell Archived 2017-02-03 at the Wayback Machine "The Big Film Database".
Wikipedia/Photographic_emulsion
Industrial radiography is a modality of non-destructive testing that uses ionizing radiation to inspect materials and components with the objective of locating and quantifying defects and degradation in material properties that would lead to the failure of engineering structures. It plays an important role in the science and technology needed to ensure product quality and reliability. In Australia, industrial radiographic non-destructive testing is colloquially referred to as "bombing" a component with a "bomb". Industrial Radiography uses either X-rays, produced with X-ray generators, or gamma rays generated by the natural radioactivity of sealed radionuclide sources. Neutrons can also be used. After crossing the specimen, photons are captured by a detector, such as a silver halide film, a phosphor plate, flat panel detector or CdTe detector. The examination can be performed in static 2D (named radiography), in real time 2D (fluoroscopy), or in 3D after image reconstruction (computed tomography or CT). It is also possible to perform tomography nearly in real time (4-dimensional computed tomography or 4DCT). Particular techniques such as X-ray fluorescence (XRF), X-ray diffractometry (XRD), and several other ones complete the range of tools that can be used in industrial radiography. Inspection techniques can be portable or stationary. Industrial radiography is used in welding, casting parts or composite pieces inspection, in food inspection and luggage control, in sorting and recycling, in EOD and IED analysis, aircraft maintenance, ballistics, turbine inspection, in surface characterisation, coating thickness measurement, in counterfeit drug control, etc. == History == Radiography started in 1895 with the discovery of X-rays (later also called Röntgen rays after the man who first described their properties in detail), a type of electromagnetic radiation. Soon after the discovery of X-rays, radioactivity was discovered. By using radioactive sources such as radium, far higher photon energies could be obtained than those from normal X-ray generators. Soon these found various applications, with one of the earliest users being Loughborough College. X-rays and gamma rays were put to use very early, before the dangers of ionizing radiation were discovered. After World War II new isotopes such as caesium-137, iridium-192 and cobalt-60 became available for industrial radiography, and the use of radium and radon decreased. == Applications == === Inspection of products === Gamma radiation sources, most commonly iridium-192 and cobalt-60, are used to inspect a variety of materials. The vast majority of radiography concerns the testing and grading of welds on piping, pressure vessels, high-capacity storage containers, pipelines, and some structural welds. Other tested materials include concrete (locating rebar or conduit), welder's test coupons, machined parts, plate metal, or pipewall (locating anomalies due to corrosion or mechanical damage). Non-metal components such as ceramics used in the aerospace industries are also regularly tested. Theoretically, industrial radiographers could radiograph any solid, flat material (walls, ceilings, floors, square or rectangular containers) or any hollow cylindrical or spherical object. === Inspection of welding === The beam of radiation must be directed to the middle of the section under examination and must be normal to the material surface at that point, except in special techniques where known defects are best revealed by a different alignment of the beam. 
The length of weld under examination for each exposure shall be such that the thickness of the material at the diagnostic extremities, measured in the direction of the incident beam, does not exceed the actual thickness at that point by more than 6%. The specimen to be inspected is placed between the source of radiation and the detecting device, usually the film in a light-tight holder or cassette, and the radiation is allowed to penetrate the part for the required length of time to be adequately recorded. The result is a two-dimensional projection of the part onto the film, producing a latent image of varying densities according to the amount of radiation reaching each area. It is known as a radiograph, as distinct from a photograph produced by light. Because film is cumulative in its response (the exposure increasing as it absorbs more radiation), relatively weak radiation can be detected by prolonging the exposure until the film can record an image that will be visible after development. The radiograph is examined as a negative, without printing as a positive as in photography. This is because, in printing, some of the detail is always lost and no useful purpose is served. Before commencing a radiographic examination, it is always advisable to examine the component with one's own eyes, to eliminate any possible external defects. If the surface of a weld is too irregular, it may be desirable to grind it to obtain a smooth finish, but this is likely to be limited to those cases in which the surface irregularities (which will be visible on the radiograph) may make detecting internal defects difficult. After this visual examination, the operator will have a clear idea of the possibilities of access to the two faces of the weld, which is important both for the setting up of the equipment and for the choice of the most appropriate technique. Defects such as delaminations and planar cracks are difficult to detect using radiography, particularly to the untrained eye. Without overlooking the drawbacks of radiographic inspection, radiography does hold many significant benefits over ultrasonics: because a 'picture' is produced, keeping a semi-permanent record for the life cycle of the film, more accurate identification of the defect can be made, and by more interpreters. This is very important, as most construction standards permit some level of defect acceptance, depending on the type and size of the defect. To the trained radiographer, subtle variations in visible film density provide the technician the ability not only to accurately locate a defect, but to identify its type, size and location; an interpretation that can be physically reviewed and confirmed by others, possibly eliminating the need for expensive and unnecessary repairs. For purposes of inspection, including weld inspection, there exist several exposure arrangements. First, there is the panoramic, one of the four single-wall exposure/single-wall view (SWE/SWV) arrangements. This exposure is created when the radiographer places the source of radiation at the center of a sphere, cone, or cylinder (including tanks, vessels, and piping). Depending upon client requirements, the radiographer would then place film cassettes on the outside of the surface to be examined. This exposure arrangement is nearly ideal – when properly arranged and exposed, all portions of all exposed film will be of the same approximate density.
It also has the advantage of taking less time than other arrangements, since the source must only penetrate the total wall thickness (WT) once and must only travel the radius of the inspection item, not its full diameter. The major disadvantage of the panoramic is that it may be impractical to reach the center of the item (enclosed pipe) or the source may be too weak to perform in this arrangement (large vessels or tanks). The second SWE/SWV arrangement is an interior placement of the source in an enclosed inspection item without having the source centered up. The source does not come in direct contact with the item, but is placed a distance away, depending on client requirements. The third is an exterior placement with similar characteristics. The fourth is reserved for flat objects, such as plate metal, and is also radiographed without the source coming in direct contact with the item. In each case, the radiographic film is located on the opposite side of the inspection item from the source. In all four cases, only one wall is exposed, and only one wall is viewed on the radiograph. Of the other exposure arrangements, only the contact shot has the source located on the inspection item. This type of radiograph exposes both walls, but only resolves the image on the wall nearest the film. This exposure arrangement takes more time than a panoramic, as the source must first penetrate the WT twice and travel the entire outside diameter of the pipe or vessel to reach the film on the opposite side. This is a double-wall exposure/single-wall view (DWE/SWV) arrangement. Another is the superimposure (wherein the source is placed on one side of the item, not in direct contact with it, with the film on the opposite side). This arrangement is usually reserved for very small diameter piping or parts. The last DWE/SWV exposure arrangement is the elliptical, in which the source is offset from the plane of the inspection item (usually a weld in pipe) and the elliptical image of the weld furthest from the source is cast onto the film. === Airport security === Both hold luggage and carry-on hand luggage are normally examined by X-ray machines using X-ray radiography. See airport security for more details. === Non-intrusive cargo scanning === Gamma radiography and high-energy X-ray radiography are currently used to scan intermodal freight cargo containers in the US and other countries. Research is also being done on adapting other types of radiography, such as dual-energy X-ray radiography or muon radiography, for scanning intermodal cargo containers. === Art === The American artist Kathleen Gilje has painted copies of Artemisia Gentileschi's Susanna and the Elders and Gustave Courbet's Woman with a Parrot. She first painted, in lead white, similar pictures with deliberate differences: Susanna fights the intrusion of the elders, and a nude Courbet stands beyond the woman he paints. She then painted over them, reproducing the originals. Gilje's paintings are exhibited with radiographs that show the underpaintings, simulating the study of pentimentos and providing a comment on the old masters' work. == Sources == Many types of ionizing radiation sources exist for use in industrial radiography. === X-ray generators === X-ray generators produce X-rays by applying a high voltage between the cathode and the anode of an X-ray tube and by heating the tube filament to start the electron emission. The electrons are then accelerated in the resulting electric potential and collide with the anode, which is usually made of tungsten.
The X-rays that are emitted by this generator are directed towards the object to be inspected. They cross it and are absorbed according to the object material's attenuation coefficient. The attenuation coefficient is compiled from all the cross sections of the interactions that are happening in the material. The three most important inelastic interactions with X-rays at those energy levels are the photoelectric effect, Compton scattering and pair production. After having crossed the object, the photons are captured by a detector, such as a silver halide film, a phosphor plate or a flat panel detector. When an object is too thick, too dense, or its effective atomic number is too high, a linac can be used. Linacs work in a similar way to produce X-rays, by electron collisions on a metal anode; the difference is that they use a much more complex method to accelerate the electrons. === Sealed radioactive sources === Radionuclides are often used in industrial radiography. They have the advantage that they do not need a supply of electricity to function, but it also means that they cannot be turned off. The two most common radionuclides used in industrial radiography are iridium-192 and cobalt-60, but others are used in general industry as well: Am-241: Backscatter gauges, smoke detectors, fill height and ash content detectors. Sr-90: Thickness gauging for thick materials up to 3 mm. Kr-85: Thickness gauging for thin materials like paper, plastics, etc. Cs-137: Density and fill height level switches. Ra-226: Ash content Cf-252: Ash content Ir-192: Industrial radiography Se-75: Industrial radiography Yb-169: Industrial radiography Co-60: Density and fill height level switches, industrial radiography These isotopes emit radiation in a discrete set of energies, depending on the decay mechanism happening in the atomic nucleus. Each energy will have a different intensity depending on the probability of the particular decay interaction. The most prominent energies are 1.33 and 1.17 MeV for cobalt-60, and 0.31, 0.47 and 0.60 MeV for iridium-192. From a radiation safety point of view, this makes them more difficult to handle and manage. They always need to be enclosed in a shielded container, and because they are still radioactive after their normal life cycle, their ownership often requires a license and they are usually tracked by a governmental body. If this is the case, their disposal must be done in accordance with the national policies. The radionuclides used in industrial radiography are chosen for their high specific activity. This high activity means that only a small sample is required to obtain a good radiation flux. However, higher activity often means a higher dose in the case of an accidental exposure. ==== Radiographic cameras ==== A series of different designs have been developed for radiographic "cameras". Rather than the "camera" being a device that accepts photons to record a picture, the "camera" in industrial radiography is the radioactive photon source. Most industries are moving from film-based radiography to digital-sensor-based radiography, much the same way that traditional photography has made this move. Since the amount of radiation emerging from the opposite side of the material can be detected and measured, variations in this amount (or intensity) of radiation are used to determine thickness or composition of material. ===== Shutter design ===== One design uses a moving shutter to expose the source.
The radioactive source is placed inside a shielded box; a hinge allows part of the shielding to be opened, exposing the source and allowing photons to exit the radiography camera. Another design for a shutter is where the source is placed in a metal wheel, which can turn inside the camera to move between the expose and storage positions. Shutter-based devices require the entire device, including the heavy shielding, to be located at the exposure site. This can be difficult or impossible, so they have largely been replaced by cable-driven projectors. ===== Projector design ===== Modern projector designs use a cable drive mechanism to move the source along a hollow guide tube to the exposure location. The source is stored in a block of shielding that has an S-shaped tube-like hole through the block. In the safe position the source is in the center of the block. The source is attached to a flexible metal cable called a pigtail. To use the source, a guide tube is attached to one side of the device while a drive cable is attached to the pigtail. Using a hand-operated control, the source is then pushed out of the shield and along the source guide tube to the tip of the tube to expose the film, then cranked back into its fully shielded position. === Neutrons === In some rare cases, radiography is done with neutrons. This type of radiography is called neutron radiography (NR, Nray, N-ray) or neutron imaging. Neutron radiography provides different images than X-rays, because neutrons can pass with ease through lead and steel but are stopped by plastics, water and oils. Neutron sources include radioactive (241Am/Be and Cf) sources, electrically driven D-T reactions in vacuum tubes and conventional critical nuclear reactors. It might be possible to use a neutron amplifier to increase the neutron flux. == Safety == Radiation safety is a very important part of industrial radiography. The International Atomic Energy Agency has published a report describing the best practices in order to lower the amount of radiation dose the workers are exposed to. It also provides a list of national competent authorities responsible for approvals and authorizations regarding the handling of radioactive material. === Shielding === Shielding can be used to protect the user from the harmful properties of ionizing radiation. The type of material used for shielding depends on the type of radiation being used. National radiation safety authorities usually regulate the design, commissioning, maintenance and inspection of industrial radiography installations. === In the industry === Industrial radiographers are in many locations required by governing authorities to use certain types of safety equipment and to work in pairs. Depending on the location, industrial radiographers may be required to obtain permits and licenses and/or to undertake special training. Prior to conducting any testing, the nearby area should always first be cleared of all other persons and measures should be taken to ensure that workers do not accidentally enter into an area that may expose them to dangerous levels of radiation. The safety equipment usually includes four basic items: a radiation survey meter (such as a Geiger/Mueller counter), an alarming dosimeter or rate meter, a gas-charged dosimeter, and a film badge or thermoluminescent dosimeter (TLD). The easiest way to remember what each of these items does is to compare them to gauges on an automobile.
The survey meter could be compared to the speedometer, as it measures the speed, or rate, at which radiation is being picked up. When properly calibrated, used, and maintained, it allows the radiographer to see the current exposure to radiation at the meter. It can usually be set for different intensities, and is used to prevent the radiographer from being overexposed to the radioactive source, as well as for verifying the boundary that radiographers are required to maintain around the exposed source during radiographic operations. The alarming dosimeter could be most closely compared with the tachometer, as it alarms when the radiographer "redlines" or is exposed to too much radiation. When properly calibrated, activated, and worn on the radiographer's person, it will emit an alarm when the meter measures a radiation level in excess of a preset threshold. This device is intended to prevent the radiographer from inadvertently walking up on an exposed source. The gas-charged dosimeter is like a trip meter in that it measures the total radiation received, but can be reset. It is designed to help the radiographer measure his/her total periodic dose of radiation. When properly calibrated, recharged, and worn on the radiographer's person, it can tell the radiographer at a glance how much radiation the device has been exposed to since it was last recharged. Radiographers in many states are required to log their radiation exposures and generate an exposure report. In many countries personal dosimeters are not required to be used by radiographers, as the dose rates they show are not always correctly recorded. The film badge or TLD is more like a car's odometer. It is actually a specialized piece of radiographic film in a rugged container. It is meant to measure the radiographer's total exposure over time (usually a month) and is used by regulating authorities to monitor the total exposure of certified radiographers in a certain jurisdiction. At the end of the month, the film badge is turned in and is processed. A report of the radiographer's total dose is generated and is kept on file. When these safety devices are properly calibrated, maintained, and used, it is virtually impossible for a radiographer to be injured by a radioactive overexposure. The elimination of just one of these devices can jeopardize the safety of the radiographer and all those who are nearby. Without the survey meter, the radiation received may be just below the threshold of the rate alarm, and it may be several hours before the radiographer checks the dosimeter, and up to a month or more before the film badge is developed to detect a low-intensity overexposure. Without the rate alarm, one radiographer may inadvertently walk up on the source exposed by the other radiographer. Without the dosimeter, the radiographer may be unaware of an overexposure, or even a radiation burn, which may take weeks to result in noticeable injury. And without the film badge, the radiographer is deprived of an important tool designed to protect him or her from the effects of a long-term overexposure to occupationally obtained radiation, and thus may suffer long-term health problems as a result. There are three ways a radiographer will ensure they are not exposed to higher than required levels of radiation: time, distance, and shielding. The less time that a person is exposed to radiation, the lower their dose will be.
The further a person is from a radioactive source, the lower the level of radiation they receive; this is largely due to the inverse square law. Lastly, the more a radioactive source is shielded, by either better or greater amounts of shielding material, the lower the level of radiation that will escape from the testing area (a short numerical sketch illustrating distance and shielding is given at the end of this article). The most commonly used shielding materials are sand, lead (sheets or shot), steel, spent (non-radioactive) uranium, tungsten and, in suitable situations, water. Industrial radiography appears to have one of the worst safety profiles of the radiation professions, possibly because there are many operators using strong gamma sources (> 2 Ci) in remote sites with little supervision when compared with workers within the nuclear industry or within hospitals. Due to the levels of radiation present whilst they are working, many radiographers are also required to work late at night when there are few other people present, as most industrial radiography is carried out 'in the open' rather than in purpose-built exposure booths or rooms. Fatigue, carelessness and lack of proper training are the three most common factors attributed to industrial radiography accidents. Many of the "lost source" accidents commented on by the International Atomic Energy Agency involve radiography equipment. Lost source accidents have the potential to cause a considerable loss of human life. One scenario is that a passerby finds the radiography source and, not knowing what it is, takes it home. The person shortly afterwards becomes ill and dies as a result of the radiation dose. The source remains in their home, where it continues to irradiate other members of the household. Such an event occurred in March 1984 in Casablanca, Morocco. This is related to the more famous Goiânia accident, where a related chain of events caused members of the public to be exposed to radiation sources. == List of standards == === International Organization for Standardization (ISO) === ISO 4993, Steel and iron castings – Radiographic inspection ISO 5579, Non-destructive testing – Radiographic examination of metallic materials by X- and gamma-rays – Basic rules ISO 10675-1, Non-destructive testing of welds – Acceptance levels for radiographic testing – Part 1: Steel, nickel, titanium and their alloys ISO 11699-1, Non-destructive testing – Industrial radiographic films – Part 1: Classification of film systems for industrial radiography ISO 11699-2, Non-destructive testing – Industrial radiographic films – Part 2: Control of film processing by means of reference values ISO 14096-1, Non-destructive testing – Qualification of radiographic film digitisation systems – Part 1: Definitions, quantitative measurements of image quality parameters, standard reference film and qualitative control ISO 14096-2, Non-destructive testing – Qualification of radiographic film digitisation systems – Part 2: Minimum requirements ISO 17636-1: Non-destructive testing of welds. Radiographic testing. X- and gamma-ray techniques with film ISO 17636-2: Non-destructive testing of welds. Radiographic testing.
X- and gamma-ray techniques with digital detectors ISO 19232, Non-destructive testing – Image quality of radiographs === European Committee for Standardization (CEN) === EN 444, Non-destructive testing; general principles for the radiographic examination of metallic materials using X-rays and gamma-rays EN 462-1: Non-destructive testing – image quality of radiographs – Part 1: Image quality indicators (wire type) – determination of image quality value EN 462-2, Non-destructive testing – image quality of radiographs – Part 2: image quality indicators (step/hole type) determination of image quality value EN 462-3, Non-destructive testing – Image quality of radiogrammes – Part 3: Image quality classes for ferrous metals EN 462-4, Non-destructive testing – Image quality of radiographs – Part 4: Experimental evaluation of image quality values and image quality tables EN 462-5, Non-destructive testing – Image quality of radiographs – Part 5: Image quality of indicators (duplex wire type), determination of image unsharpness value EN 584-1, Non-destructive testing – Industrial radiographic film – Part 1: Classification of film systems for industrial radiography EN 584-2, Non-destructive testing – Industrial radiographic film – Part 2: Control of film processing by means of reference values EN 1330-3, Non-destructive testing – Terminology – Part 3: Terms used in industrial radiographic testing EN 2002–21, Aerospace series – Metallic materials; test methods – Part 21: Radiographic testing of castings EN 10246-10, Non-destructive testing of steel tubes – Part 10: Radiographic testing of the weld seam of automatic fusion arc welded steel tubes for the detection of imperfections EN 12517-1, Non-destructive testing of welds – Part 1: Evaluation of welded joints in steel, nickel, titanium and their alloys by radiography – Acceptance levels EN 12517-2, Non-destructive testing of welds – Part 2: Evaluation of welded joints in aluminium and its alloys by radiography – Acceptance levels EN 12679, Non-destructive testing – Determination of the size of industrial radiographic sources – Radiographic method EN 12681, Founding – Radiographic examination EN 13068, Non-destructive testing – Radioscopic testing EN 14096, Non-destructive testing – Qualification of radiographic film digitisation systems EN 14784-1, Non-destructive testing – Industrial computed radiography with storage phosphor imaging plates – Part 1: Classification of systems EN 14584-2, Non-destructive testing – Industrial computed radiography with storage phosphor imaging plates – Part 2: General principles for testing of metallic materials using X-rays and gamma rays === ASTM International (ASTM) === ASTM E 94, Standard Guide for Radiographic Examination ASTM E 155, Standard Reference Radiographs for Inspection of Aluminum and Magnesium Castings ASTM E 592, Standard Guide to Obtainable ASTM Equivalent Penetrameter Sensitivity for Radiography of Steel Plates 1/4 to 2 in. [6 to 51 mm] Thick with X Rays and 1 to 6 in. 
[25 to 152 mm] Thick with Cobalt-60 ASTM E 747, Standard Practice for Design, Manufacture and Material Grouping Classification of Wire Image Quality Indicators (IQI) Used for Radiology ASTM E 801, Standard Practice for Controlling Quality of Radiological Examination of Electronic Devices ASTM E 1030, Standard Test Method for Radiographic Examination of Metallic Castings ASTM E 1032, Standard Test Method for Radiographic Examination of Weldments ASTM 1161, Standard Practice for Radiologic Examination of Semiconductors and Electronic Components ASTM E 1648, Standard Reference Radiographs for Examination of Aluminum Fusion Welds ASTM E 1735, Standard Test Method for Determining Relative Image Quality of Industrial Radiographic Film Exposed to X-Radiation from 4 to 25 MeV ASTM E 1815, Standard Test Method for Classification of Film Systems for Industrial Radiography ASTM E 1817, Standard Practice for Controlling Quality of Radiological Examination by Using Representative Quality Indicators (RQIs) ASTM E 2104, Standard Practice for Radiographic Examination of Advanced Aero and Turbine Materials and Components === American Society of Mechanical Engineers (ASME) === BPVC Section V, Nondestructive Examination: Article 2 Radiographic Examination === American Petroleum Institute (API) === API 1104, Welding of Pipelines and Related Facilities: 11.1 Radiographic Test Methods == See also == Collimator Industrial computed tomography Medical radiography == Notes == == References == == External links == NIST's XAAMDI: X-Ray Attenuation and Absorption for Materials of Dosimetric Interest Database NIST's XCOM: Photon Cross Sections Database NIST's FAST: Attenuation and Scattering Tables List of incidents UN information on the security of industrial sources
Wikipedia/Industrial_radiography
Photographic paper is a paper coated with a light-sensitive chemical, used for making photographic prints. When photographic paper is exposed to light, it captures a latent image that is then developed to form a visible image; with most papers the image density from exposure can be sufficient to not require further development, aside from fixing and clearing, though latent exposure is also usually present. The light-sensitive layer of the paper is called the emulsion, and functions similarly to photographic film. The most common chemistry used is gelatin silver, but other alternatives have also been used. The print image is traditionally produced by interposing a photographic negative between the light source and the paper, either by direct contact with a large negative (forming a contact print) or by projecting the shadow of the negative onto the paper (producing an enlargement). The initial light exposure is carefully controlled to produce a grayscale image on the paper with appropriate contrast and gradation. Photographic paper may also be exposed to light using digital printers such as the LightJet, with a camera (to produce a photographic negative), by scanning a modulated light source over the paper, or by placing objects upon it (to produce a photogram). Despite the introduction of digital photography, photographic papers are still sold commercially. Photographic papers are manufactured in numerous standard sizes, paper weights and surface finishes. A range of emulsions are also available that differ in their light sensitivity, colour response and the warmth of the final image. Color papers are also available for making colour images. == History == The effect of light in darkening a prepared paper was discovered by Thomas Wedgwood in 1802. Photographic papers have been used since the beginning of all negative–positive photographic processes as developed and popularized by William Fox Talbot's 1841 calotype. After the early days of photography, papers have been manufactured on a large scale with improved consistency and greater light sensitivity. == Types of photographic papers == Photographic papers fall into one of three sub-categories: Papers used for negative-positive processes. This includes all current black-and-white papers and chromogenic colour papers. Papers used for positive-positive processes in which the "film" is the same as the final image (e.g., the Polaroid process, Imago direct positive paper). Papers used for positive-positive film-to-paper processes where a positive image on a film slide is enlarged and copied onto a photographic paper, for example the Ilfochrome process. == Structure == All photographic papers consist of a light-sensitive emulsion, consisting of silver halide salts suspended in a colloidal material – usually gelatin-coated onto a paper, resin coated paper or polyester support. In black-and-white papers, the emulsion is normally sensitised to blue and green light, but is insensitive to wavelengths longer than 600 nm in order to facilitate handling under red or orange safelighting. In chromogenic colour papers, the emulsion layers are sensitive to red, green and blue light, respectively producing cyan, magenta and yellow dye during processing. === Base materials === ==== Black-and-white papers ==== Modern black-and-white papers are coated on a small range of bases; baryta-coated paper, resin-coated paper or polyester. In the past, linen has been used as a base material. 
==== Fiber-based papers (FB) ==== Fiber-based (FB or Baryta) photographic papers consist of a paper base coated with baryta. Tints are sometimes added to the baryta to add subtle colour to the final print; however, most modern papers use optical brighteners to extend the paper's tonal range. Most fiber-based papers include a clear hardened gelatin layer above the emulsion which protects it from physical damage, especially during processing. This is called a supercoating. Papers without a supercoating are suitable for use with the bromoil process. Fiber-based papers are generally chosen as a medium for high-quality prints for exhibition, display and archiving purposes. These papers require careful processing and handling, especially when wet. However, they are easier to tone, hand-colour and retouch than resin-coated equivalents. ==== Resin-coated papers (RC) ==== The paper base of resin-coated papers is sealed by two polyethylene layers, making it impenetrable to liquids. Since no chemicals or water are absorbed into the paper base, the processing, washing and drying times are significantly reduced in comparison to fiber-based papers. Resin paper prints can be finished and dried within twenty to thirty minutes. Resin-coated papers have improved dimensional stability, and do not curl upon drying. ==== The baryta layer ==== The term baryta derives from the name of a common barium sulfate-containing mineral, barite. However, the substance used to coat photographic papers is usually not pure barium sulfate, but a mixture of barium and strontium sulfates. The ratio of strontium to barium differs among commercial photographic papers, so chemical analysis can be used to identify the maker of the paper used to make a print and sometimes when the paper was made. The baryta layer has two functions: 1) to brighten the image and 2) to prevent chemicals adsorbed on the fibers from infiltrating the gelatin layer. The brightening occurs because barium sulfate is in the form of a fine precipitate that scatters light back through the silver image layer. In the early days of photography, before baryta layers were used, impurities from the paper fibers could gradually diffuse into the silver layer and cause an uneven loss of sensitivity (before development) or mottle (unevenly discolour) the silver image (after development). ==== Colour papers ==== All colour photographic materials available today are coated on either RC (resin coated) paper or on solid polyester. The photographic emulsion used for colour photographic materials consists of three colour emulsion layers (cyan, yellow, and magenta) along with other supporting layers. The colour layers are sensitised to their corresponding colours. Although it is commonly believed that the layers in negative papers are shielded against the intrusion of light of a different wavelength than the actual layer by colour filters which dissolve during processing, this is not so. The colour layers in negative papers are actually produced to have speeds which increase from cyan (red sensitive) to magenta (green sensitive) to yellow (blue sensitive), and thus when filtered during printing, the blue light is "normalized" so that there is no crosstalk. Therefore, the yellow (blue sensitive) layer is nearly ISO 100 while the cyan (red) layer is about ISO 25. After adding enough yellow filtration to make a neutral, the blue sensitivity of the slow cyan layer is "lost". 
In negative-positive print systems, the blue sensitive layer is on the bottom, and the cyan layer is on the top. This is the reverse of the usual layer order in colour films. The emulsion layers can include the colour dyes, as in Ilfochrome; or they can include colour couplers, which react with colour developers to produce colour dyes, as in type C prints or chromogenic negative–positive prints. Type R prints, which are no longer made, were positive–positive chromogenic prints. == Black and white emulsion types == The emulsion contains light sensitive silver halide crystals suspended in gelatin. Black-and-white papers typically use relatively insensitive emulsions composed of silver bromide, silver chloride or a combination of both. The silver halide used affects the paper's sensitivity and the image tone of the resulting print. === Chloride papers === Popular in the past, chloride papers are nowadays unusual; a single manufacturer produces this material. These insensitive papers are suitable for contact printing, and yield warm toned images by development. Chloride emulsions are also used for printing-out papers, or POP, which require no further development after exposure. === Chlorobromide papers === Containing a blend of silver chloride and silver bromide salts, these emulsions produce papers sensitive enough to be used for enlarging. They produce warm-black to neutral image tones by development, which can be varied by using different developers. === Bromide papers === Papers with pure silver bromide emulsions are sensitive and produce neutral black or 'cold' blue-black image tones. === Contrast control === Fixed-grade – or graded – black-and-white papers were historically available in a range of grades, numbered 0 to 5, with 0 being the softest, or least contrasty, paper grade and 5 being the hardest, or most contrasty, paper grade. Low contrast negatives can be corrected by printing on a contrasty paper; conversely a very contrasty negative can be printed on a low contrast paper. Because of decreased demand, most extreme paper grades are now discontinued, and the few graded ranges still available include only middle contrast grades. Variable-contrast – or "VC" – papers account for the great majority of consumption of these papers in the 21st century. VC papers permit the selection of a wide range of contrast grades, in the case of the brand leader between 00 and 5. These papers are coated with a mixture of two or three emulsions, all of equal contrast and sensitivity to blue light. However, each emulsion is sensitised in different proportions to green light. Upon exposure to blue light, all emulsions act in an additive manner to produce a high contrast image. When exposed to green light alone, the emulsions produce a low contrast image because each is differently sensitised to green. By varying the ratio of blue to green light, the contrast of the print can be approximately continuously varied between these extremes, creating all contrast grades from 00 to 5. Filters in the enlarger's light path are a common method of achieving this control. Magenta filters absorb green and transmit blue and red, while yellow filters absorb blue and transmit green and red. The contrast of photographic papers can also be controlled during processing or by the use of bleaches or toners. === Panchromatic papers === Panchromatic black-and-white photographic printing papers are sensitive to all wavelengths of visible light. 
They were designed for the printing of full-tone black-and-white images from colour negatives; this is not possible with conventional orthochromatic papers. Panchromatic papers can also be used to produce paper negatives in large-format cameras. These materials must be handled and developed in near-complete darkness. Kodak Panalure Select RC is an example of a panchromatic black-and-white paper; it was discontinued in 2005. === Non Silver papers === Numerous photo sensitive papers that do not use silver chemistry exist. Most are hand made by enthusiasts but cyanotype prints are made on what was commonly sold as blueprint paper. Certain precious metal including platinum and other chemistries have also been in common use at certain periods. == Archival stability == The longevity of any photographic print media will depend upon the processing, display and storage conditions of the print. === Black-and-white prints === Fixing must convert all non-image silver into soluble silver compounds that can be removed by washing with water. Washing must remove these compounds and all residual fixing chemicals from the emulsion and paper base. A hypo-clearing solution, also referred to as hypo clearing agent, HCA, or a washing aid, and which can consist of a 2% solution of sodium sulfite, can be used to shorten the effective washing time by displacing the thiosulfate fixer, and the byproducts of the process of fixation, that are bound to paper fibers. Toners are sometimes used to convert the metallic silver into more stable compounds. Commonly used archival toners are: selenium, gold and sulfide. Prints on fiber-based papers that have been properly fixed and washed should last at least fifty years without fading. Some alternative non-silver processes – such as platinum prints – employ metals that are, if processed correctly, inherently more stable than gelatin-silver prints. === Colour prints === For colour images, Ilfochrome is often used because of its clarity and the stability of the colour dyes. == See also == Standard AD size Film format#Still photography film formats Paper size Photo print sizes == References ==
Wikipedia/Photographic_paper
Radiography is an imaging technique using X-rays, gamma rays, or similar ionizing radiation and non-ionizing radiation to view the internal form of an object. Applications of radiography include medical ("diagnostic" radiography and "therapeutic radiography") and industrial radiography. Similar techniques are used in airport security, (where "body scanners" generally use backscatter X-ray). To create an image in conventional radiography, a beam of X-rays is produced by an X-ray generator and it is projected towards the object. A certain amount of the X-rays or other radiation are absorbed by the object, dependent on the object's density and structural composition. The X-rays that pass through the object are captured behind the object by a detector (either photographic film or a digital detector). The generation of flat two-dimensional images by this technique is called projectional radiography. In computed tomography (CT scanning), an X-ray source and its associated detectors rotate around the subject, which itself moves through the conical X-ray beam produced. Any given point within the subject is crossed from many directions by many different beams at different times. Information regarding the attenuation of these beams is collated and subjected to computation to generate two-dimensional images on three planes (axial, coronal, and sagittal) which can be further processed to produce a three-dimensional image. == History == Radiography's origins and fluoroscopy's origins can both be traced to 8 November 1895, when German physics professor Wilhelm Conrad Röntgen discovered the X-ray and noted that, while it could pass through human tissue, it could not pass through bone or metal. Röntgen referred to the radiation as "X", to indicate that it was an unknown type of radiation. He received the first Nobel Prize in Physics for his discovery. There are conflicting accounts of his discovery because Röntgen had his lab notes burned after his death, but this is a likely reconstruction by his biographers: Röntgen was investigating cathode rays using a fluorescent screen painted with barium platinocyanide and a Crookes tube which he had wrapped in black cardboard to shield its fluorescent glow. He noticed a faint green glow from the screen, about 1 metre away. Röntgen realized some invisible rays coming from the tube were passing through the cardboard to make the screen glow: they were passing through an opaque object to affect the film behind it. Röntgen discovered X-rays' medical use when he made a picture of his wife's hand on a photographic plate formed due to X-rays. The photograph of his wife's hand was the first ever photograph of a human body part using X-rays. When she saw the picture, she said, "I have seen my death." The first use of X-rays under clinical conditions was by John Hall-Edwards in Birmingham, England, on 11 January 1896, when he radiographed a needle stuck in the hand of an associate. On 14 February 1896, Hall-Edwards also became the first to use X-rays in a surgical operation. The United States saw its first medical X-ray obtained using a discharge tube of Ivan Pulyui's design. In January 1896, on reading of Röntgen's discovery, Frank Austin of Dartmouth College tested all of the discharge tubes in the physics laboratory and found that only the Pulyui tube produced X-rays. This was a result of Pulyui's inclusion of an oblique "target" of mica, used for holding samples of fluorescent material, within the tube. 
On 3 February 1896 Gilman Frost, professor of medicine at the college, and his brother Edwin Frost, professor of physics, exposed the wrist of Eddie McCarthy, whom Gilman had treated some weeks earlier for a fracture, to the X-rays and collected the resulting image of the broken bone on gelatin photographic plates obtained from Howard Langill, a local photographer also interested in Röntgen's work. X-rays were put to diagnostic use very early; for example, Alan Archibald Campbell-Swinton opened a radiographic laboratory in the United Kingdom in 1896, before the dangers of ionizing radiation were discovered. Indeed, Marie Curie pushed for radiography to be used to treat wounded soldiers in World War I. Initially, many kinds of staff conducted radiography in hospitals, including physicists, photographers, physicians, nurses, and engineers. The medical speciality of radiology grew up over many years around the new technology. When new diagnostic tests were developed, it was natural for the radiographers to be trained in and to adopt this new technology. Radiographers now perform fluoroscopy, computed tomography, mammography, ultrasound, nuclear medicine and magnetic resonance imaging as well. Although a nonspecialist dictionary might define radiography quite narrowly as "taking X-ray images", this has long been only part of the work of "X-ray departments", radiographers, and radiologists. Initially, radiographs were known as roentgenograms, while skiagrapher (from the Ancient Greek words for "shadow" and "writer") was used until about 1918 to mean radiographer. The Japanese term for the radiograph, rentogen (レントゲン), shares its etymology with the original English term. == Medical uses == Since the body is made up of various substances with differing densities, ionising and non-ionising radiation can be used to reveal the internal structure of the body on an image receptor by highlighting these differences using attenuation, or in the case of ionising radiation, the absorption of X-ray photons by the denser substances (like calcium-rich bones). The discipline involving the study of anatomy through the use of radiographic images is known as radiographic anatomy. Medical radiography acquisition is generally carried out by radiographers, while image analysis is generally done by radiologists. Some radiographers also specialise in image interpretation. Medical radiography includes a range of modalities producing many different types of image, each of which has a different clinical application. === Projectional radiography === The creation of images by exposing an object to X-rays or other high-energy forms of electromagnetic radiation and capturing the resulting remnant beam (or "shadow") as a latent image is known as "projection radiography". The "shadow" may be converted to light using a fluorescent screen, which is then captured on photographic film, it may be captured by a phosphor screen to be "read" later by a laser (CR), or it may directly activate a matrix of solid-state detectors (DR—similar to a very large version of a CCD in a digital camera). Bone and some organs (such as lungs) especially lend themselves to projection radiography. It is a relatively low-cost investigation with a high diagnostic yield. The difference between soft and hard body parts stems mostly from the fact that carbon has a very low X-ray cross section compared to calcium. 
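The attenuation behind this contrast is described by the Beer–Lambert law, I = I0 exp(−μx), where μ is the material's linear attenuation coefficient and x the thickness traversed; bone attenuates the beam more strongly than soft tissue, so fewer photons reach the detector behind it. The short Python sketch below is illustrative only: the coefficient values are assumed round numbers for a diagnostic beam energy, not reference data.

```python
import math

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Beer-Lambert law: fraction of a narrow X-ray beam that passes
    through a uniform slab of material without interacting."""
    return math.exp(-mu_per_cm * thickness_cm)

# Assumed, round-number linear attenuation coefficients (1/cm) at a
# diagnostic beam energy -- illustrative only, not reference values.
MU_SOFT_TISSUE = 0.2
MU_BONE = 0.5

soft_only = transmitted_fraction(MU_SOFT_TISSUE, 10.0)       # 10 cm of soft tissue
with_bone = (transmitted_fraction(MU_SOFT_TISSUE, 8.0)
             * transmitted_fraction(MU_BONE, 2.0))            # same path, 2 cm of it bone

print(f"soft tissue only: {soft_only:.3f}")   # ~0.135
print(f"tissue plus bone: {with_bone:.3f}")   # ~0.074
# The ratio of these transmitted fractions is what the detector records as
# bone/soft-tissue contrast.  Doubling a thickness squares the transmitted
# fraction, which is also why modest extra shielding strongly reduces dose:
assert math.isclose(transmitted_fraction(MU_BONE, 4.0),
                    transmitted_fraction(MU_BONE, 2.0) ** 2)
```

The same exponential relationship reappears in the shielding discussion below, where doubling the thickness of a lead barrier squares its attenuating effect.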
=== Computed tomography === Computed tomography or CT scan (previously known as CAT scan, the "A" standing for "axial") uses ionizing radiation (x-ray radiation) in conjunction with a computer to create images of both soft and hard tissues. These images look as though the patient was sliced like bread (thus, "tomography" – "tomo" means "slice"). Though CT uses a higher amount of ionizing x-radiation than diagnostic x-rays (both utilising X-ray radiation), with advances in technology, levels of CT radiation dose and scan times have been reduced. CT exams are generally short, most lasting only as long as a breath-hold. Contrast agents are also often used, depending on the tissues that need to be seen. Radiographers perform these examinations, sometimes in conjunction with a radiologist (for instance, when a radiologist performs a CT-guided biopsy). === Dual energy X-ray absorptiometry === DEXA, or bone densitometry, is used primarily for osteoporosis tests. It is not projection radiography, as the X-rays are emitted in two narrow beams that are scanned across the patient, 90 degrees from each other. Usually the hip (head of the femur), lower back (lumbar spine), or heel (calcaneum) are imaged, and the bone density (amount of calcium) is determined and given a number (a T-score). It is not used for bone imaging, as the image quality is not good enough to make an accurate diagnostic image for fractures, inflammation, etc. It can also be used to measure total body fat, though this is not common. The radiation dose received from DEXA scans is very low, much lower than projection radiography examinations. === Fluoroscopy === Fluoroscopy is a term invented by Thomas Edison during his early X-ray studies. The name refers to the fluorescence he saw while looking at a glowing plate bombarded with X-rays. The technique provides moving projection radiographs. Fluoroscopy is mainly performed to view movement (of tissue or a contrast agent), or to guide a medical intervention, such as angioplasty, pacemaker insertion, or joint repair/replacement. The last can often be carried out in the operating theatre, using a portable fluoroscopy machine called a C-arm. It can move around the surgery table and make digital images for the surgeon. Biplanar fluoroscopy works the same as single-plane fluoroscopy except that it displays two planes at the same time. The ability to work in two planes is important for orthopedic and spinal surgery and can reduce operating times by eliminating re-positioning. Angiography is the use of fluoroscopy to view the cardiovascular system. An iodine-based contrast is injected into the bloodstream and watched as it travels around. Since liquid blood and the vessels are not very dense, a contrast with high density (like the large iodine atoms) is used to view the vessels under X-ray. Angiography is used to find aneurysms, leaks, blockages (thromboses), new vessel growth, and placement of catheters and stents. Balloon angioplasty is often done with angiography. === Contrast radiography === Contrast radiography uses a radiocontrast agent, a type of contrast medium, to make the structures of interest stand out visually from their background. Contrast agents are required in conventional angiography, and can be used in both projectional radiography and computed tomography (called contrast CT). 
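The T-score mentioned above expresses how far the measured bone mineral density (BMD) lies from the mean of a young-adult reference population, in units of that population's standard deviation. A minimal sketch of the calculation follows; the reference values are hypothetical, since real scanners use manufacturer- and population-specific tables.

```python
def t_score(bmd_g_cm2: float, ref_mean: float, ref_sd: float) -> float:
    """T-score: number of standard deviations between the measured bone
    mineral density and a young-adult reference mean."""
    return (bmd_g_cm2 - ref_mean) / ref_sd

# Hypothetical lumbar-spine reference values in g/cm^2 (assumed, not real data).
REF_MEAN, REF_SD = 1.00, 0.12

print(round(t_score(0.85, REF_MEAN, REF_SD), 2))  # -1.25
```

By the widely used WHO criterion, a T-score of −2.5 or lower indicates osteoporosis, which is why the scan result can be summarised usefully by this single number.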
=== Other medical imaging === Although not technically radiographic techniques due to not using X-rays, imaging modalities such as PET and MRI are sometimes grouped in radiography because the radiology departments of hospitals handle all forms of imaging. Treatment using radiation is known as radiotherapy. == Industrial radiography == Industrial radiography is a method of non-destructive testing where many types of manufactured components can be examined to verify the internal structure and integrity of the specimen. Industrial radiography can be performed utilizing either X-rays or gamma rays. Both are forms of electromagnetic radiation. The difference between various forms of electromagnetic energy is related to the wavelength. X and gamma rays have the shortest wavelength and this property leads to the ability to penetrate, travel through, and exit various materials such as carbon steel and other metals. Specific methods include industrial computed tomography. == Image quality == Image quality will depend on resolution and density. Resolution is the ability of an image to show closely spaced structure in the object as separate entities in the image while density is the blackening power of the image. Sharpness of a radiographic image is strongly determined by the size of the X-ray source. This is determined by the area of the electron beam hitting the anode. A large photon source results in more blurring in the final image and is worsened by an increase in image formation distance. This blurring can be measured as a contribution to the modulation transfer function of the imaging system. == Radiation dose == The dosage of radiation applied in radiography varies by procedure. For example, the effective dosage of a chest x-ray is 0.1 mSv, while an abdominal CT is 10 mSv. The American Association of Physicists in Medicine (AAPM) have stated that the "risks of medical imaging at patient doses below 50 mSv for single procedures or 100 mSv for multiple procedures over short time periods are too low to be detectable and may be nonexistent." Other scientific bodies sharing this conclusion include the International Organization of Medical Physicists, the UN Scientific Committee on the Effects of Atomic Radiation, and the International Commission on Radiological Protection. Nonetheless, radiological organizations, including the Radiological Society of North America (RSNA) and the American College of Radiology (ACR), as well as multiple government agencies, indicate safety standards to ensure that radiation dosage is as low as possible. === Shielding === Lead is the most common shield against X-rays because of its high density (11,340 kg/m3), stopping power, ease of installation and low cost. The maximum range of a high-energy photon such as an X-ray in matter is infinite; at every point in the matter traversed by the photon, there is a probability of interaction. Thus there is a very small probability of no interaction over very large distances. The shielding of a photon beam is therefore exponential (with an attenuation length being close to the radiation length of the material); doubling the thickness of shielding will square the shielding effect. Starting in the 1950s, personal lead shielding began to be placed directly on patients during X-rays of the abdomen, intending to protect the gonads (reproductive organs) or a fetus if the patient was pregnant. Dental X-rays would also typically use lead shielding to protect the thyroid. 
However, a consensus was reached between 2019 and 2021 that lead shielding for routine diagnostic X-rays is not necessary and may in some cases be harmful. Personal shielding for medical professionals and other people in the room is still recommended. Rooms where X-rays are performed are lined with lead. The table in this section shows the recommended thickness of lead shielding for a room where X-rays are performed as function of X-ray energy, from the Recommendations by the Second International Congress of Radiology. === Campaigns === In response to increased concern by the public over radiation doses and the ongoing progress of best practices, The Alliance for Radiation Safety in Pediatric Imaging was formed within the Society for Pediatric Radiology. In concert with the American Society of Radiologic Technologists, the American College of Radiology, and the American Association of Physicists in Medicine, the Society for Pediatric Radiology developed and launched the Image Gently campaign which is designed to maintain high quality imaging studies while using the lowest doses and best radiation safety practices available on pediatric patients. This initiative has been endorsed and applied by a growing list of various professional medical organizations around the world and has received support and assistance from companies that manufacture equipment used in radiology. Following upon the success of the Image Gently campaign, the American College of Radiology, the Radiological Society of North America, the American Association of Physicists in Medicine, and the American Society of Radiologic Technologists have launched a similar campaign to address this issue in the adult population called Image Wisely. The World Health Organization and International Atomic Energy Agency (IAEA) of the United Nations have also been working in this area and have ongoing projects designed to broaden best practices and lower patient radiation dose. === Provider payment === Contrary to advice that emphasises only conducting radiographs when in the patient's interest, recent evidence suggests that they are used more frequently when dentists are paid under fee-for-service. == Equipment == === Sources === In medicine and dentistry, projectional radiography and computed tomography images generally use X-rays created by X-ray generators, which generate X-rays from X-ray tubes. The resultant images from the radiograph (X-ray generator/machine) or CT scanner are correctly referred to as "radiograms"/"roentgenograms" and "tomograms" respectively. A number of other sources of X-ray photons are possible, and may be used in industrial radiography or research; these include betatrons, linear accelerators (linacs), and synchrotrons. For gamma rays, radioactive sources such as 192Ir, 60Co, or 137Cs are used. === Grid === An anti-scatter grid may be placed between the patient and the detector to reduce the quantity of scattered x-rays that reach the detector. This improves the contrast resolution of the image, but also increases radiation exposure for the patient. 
=== Detectors === Detectors can be divided into two major categories: imaging detectors (such as photographic plates and X-ray film (photographic film), now mostly replaced by various digitizing devices like image plates or flat panel detectors) and dose measurement devices (such as ionization chambers, Geiger counters, and dosimeters used to measure the local radiation exposure, dose, and/or dose rate, for example, for verifying that radiation protection equipment and procedures are effective on an ongoing basis). === Side markers === A radiopaque anatomical side marker is added to each image. For example, if the patient has their right hand x-rayed, the radiographer includes a radiopaque "R" marker within the field of the x-ray beam as an indicator of which hand has been imaged. If a physical marker is not included, the radiographer may add the correct side marker later as part of digital post-processing. === Image intensifiers and array detectors === As an alternative to X-ray detectors, image intensifiers are analog devices that readily convert the acquired X-ray image into one visible on a video screen. This device is made of a vacuum tube with a wide input surface coated on the inside with caesium iodide (CsI). When hit by X-rays, phosphor material causes the photocathode adjacent to it to emit electrons. These electrons are then focused using electron lenses inside the intensifier to an output screen coated with phosphorescent materials. The image from the output can then be recorded via a camera and displayed. Digital devices known as array detectors are becoming more common in fluoroscopy. These devices are made of discrete pixelated detectors known as thin-film transistors (TFT) which can either work indirectly by using photo detectors that detect light emitted from a scintillator material such as CsI, or directly by capturing the electrons produced when the X-rays hit the detector. Direct detectors do not tend to experience the blurring or spreading effect caused by phosphorescent scintillators or by film screens since the detectors are activated directly by X-ray photons. == Dual-energy == Dual-energy radiography is where images are acquired using two separate tube voltages. This is the standard method for bone densitometry. It is also used in CT pulmonary angiography to decrease the required dose of iodinated contrast. 
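One common way to combine the two exposures in dual-energy work, although not the only one, is weighted logarithmic subtraction: in log space the attenuation contributions of different materials add linearly, so a weighted difference of the high- and low-voltage images can largely suppress one material (typically soft tissue) and emphasise another (bone). The sketch below is a simplified illustration under assumed values; the weight, the toy arrays and the function name are all hypothetical and do not represent any particular scanner's processing chain.

```python
import numpy as np

def dual_energy_subtraction(low_kvp: np.ndarray, high_kvp: np.ndarray, w: float) -> np.ndarray:
    """Weighted logarithmic subtraction of two transmitted-intensity images.

    -log(I/I0) is the line integral of attenuation along each ray, so in log
    space the contributions of different materials add linearly.  Choosing w
    near the ratio of one material's attenuation at the two tube voltages
    cancels that material in the output image.
    """
    atten_low = -np.log(low_kvp)    # attenuation seen at the low tube voltage
    atten_high = -np.log(high_kvp)  # attenuation seen at the high tube voltage
    return atten_high - w * atten_low

# Toy 2x2 "images" of transmitted fractions (I/I0) at two tube voltages; the
# numbers are made up purely for illustration.
low_kvp = np.array([[0.10, 0.12],
                    [0.08, 0.30]])
high_kvp = np.array([[0.30, 0.33],
                     [0.25, 0.55]])

print(dual_energy_subtraction(low_kvp, high_kvp, w=0.5))
```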
== See also == Autoradiograph – Radiograph made by recording radiation emitted by samples on photographic plates Background radiation – Measure of ionizing radiation in the environment Computer-aided diagnosis – Type of diagnosis assisted by computers GXMO Imaging science – Representation or reproduction of an object's form List of civilian radiation accidents Medical imaging in pregnancy – Types of pregnancy imaging techniques Radiation – Waves or particles moving through space Digital radiography – Form of radiography Radiation contamination – Undesirable radioactive elements on surfaces or in gases, liquids, or solids Radiographer – Healthcare professional Thermography – Infrared imaging used to reveal temperature == References == == Further reading == == External links == MedPix Medical Image Database Video on X-ray inspection and industrial computed tomography, Karlsruhe University of Applied Sciences NIST's XAAMDI: X-Ray Attenuation and Absorption for Materials of Dosimetric Interest Database NIST's XCOM: Photon Cross Sections Database NIST's FAST: Attenuation and Scattering Tables A lost industrial radiography source event RadiologyInfo - The radiology information resource for patients: Radiography (X-rays)
Wikipedia/Medical_radiography
Lithography (from Ancient Greek λίθος (líthos) 'stone' and γράφω (gráphō) 'to write') is a planographic method of printing originally based on the immiscibility of oil and water. The printing is from a stone (lithographic limestone) or a metal plate with a smooth surface. It was invented in 1796 by the German author and actor Alois Senefelder and was initially used mostly for musical scores and maps. Lithography can be used to print text or images onto paper or other suitable material. A lithograph is something printed by lithography, but this term is only used for fine art prints and some other, mostly older, types of printed matter, not for those made by modern commercial lithography. Traditionally, the image to be printed was drawn with a greasy substance, such as oil, fat, or wax onto the surface of a smooth and flat limestone plate. The stone was then treated with a mixture of weak acid and gum arabic ("etch") that made the parts of the stone's surface that were not protected by the grease more hydrophilic (water attracting). For printing, the stone was first moistened. The water adhered only to the gum-treated parts, making them even more oil-repellant. An oil-based ink was then applied, and would stick only to the original drawing. The ink would finally be transferred to a blank sheet of paper, producing a printed page. This traditional technique is still used for fine art printmaking. In modern commercial lithography, the image is transferred or created as a patterned polymer coating applied to a flexible plastic or metal plate. The printing plates, made of stone or metal, can be created by a photographic process, a method that may be referred to as "photolithography" (although the term usually refers to a vaguely similar microelectronics manufacturing process). Offset printing or "offset lithography" is an elaboration of lithography in which the ink is transferred from the plate to the paper indirectly by means of a rubber plate or cylinder, rather than by direct contact. This technique keeps the paper dry and allows fully automated high-speed operation. It has mostly replaced traditional lithography for medium- and high-volume printing: since the 1960s, most books and magazines, especially when illustrated in colour, are printed with offset lithography from photographically created metal plates. As a printing technology, lithography is different from intaglio printing (gravure), wherein a plate is engraved, etched, or stippled to score cavities to contain the printing ink; and woodblock printing or letterpress printing, wherein ink is applied to the raised surfaces of letters or images. == The principle of lithography == Lithography uses simple chemical processes to create an image. For instance, the positive part of an image is a water-repelling ("hydrophobic") substance, while the negative image would be water-retaining ("hydrophilic"). Thus, when the plate is introduced to a compatible printing ink and water mixture, the ink will adhere to the positive image and the water will clean the negative image. This allows a flat print plate to be used, enabling much longer and more detailed print runs than the older physical methods of printing (e.g., intaglio printing, letterpress printing). Lithography was invented by Alois Senefelder in the Electorate of Bavaria in 1796. In the early days of lithography, a smooth piece of limestone was used (hence the name "lithography": "lithos" (λιθος) is the Ancient Greek word for "stone"). 
After the oil-based image was put on the surface, a solution of gum arabic in water was applied, the gum sticking only to the non-oily surface. During printing, water adhered to the gum arabic surfaces and was repelled by the oily parts, while the oily ink used for printing did the opposite. === Lithography on limestone === Lithography works because of the mutual repulsion of oil and water. The image is drawn on the surface of the print plate with a fat or oil-based medium (hydrophobic) such as a wax crayon, which may be pigmented to make the drawing visible. A wide range of oil-based media is available, but the durability of the image on the stone depends on the lipid content of the material being used, and its ability to withstand water and acid. After the drawing of the image, an aqueous solution of gum arabic, weakly acidified with nitric acid (HNO3) is applied to the stone. The function of this solution is to create a hydrophilic layer of calcium nitrate salt, Ca(NO3)2, and gum arabic on all non-image surfaces. The gum solution penetrates into the pores of the stone, completely surrounding the original image with a hydrophilic layer that will not accept the printing ink. Using lithographic turpentine, the printer then removes any excess of the greasy drawing material, but a hydrophobic molecular film of it remains tightly bonded to the surface of the stone, rejecting the gum arabic and water, but ready to accept the oily ink. When printing, the stone is kept wet with water. The water is naturally attracted to the layer of gum and salt created by the acid wash. Printing ink based on drying oils such as linseed oil and varnish loaded with pigment is then rolled over the surface. The water repels the greasy ink but the hydrophobic areas left by the original drawing material accept it. When the hydrophobic image is loaded with ink, the stone and paper are run through a press that applies even pressure over the surface, transferring the ink to the paper and off the stone. Senefelder had experimented during the early 19th century with multicolor lithography; in his 1819 book, he predicted that the process would eventually be perfected and used to reproduce paintings. Multi-color printing was introduced by a new process developed by Godefroy Engelmann (France) in 1837 known as chromolithography. A separate stone was used for each color, and a print went through the press separately for each stone. The main challenge was to keep the images aligned (in register). This method lent itself to images consisting of large areas of flat color, and resulted in the characteristic poster designs of this period. "Lithography, or printing from soft stone, largely took the place of engraving in the production of English commercial maps after about 1852. It was a quick, cheap process and had been used to print British army maps during the Peninsular War. Most of the commercial maps of the second half of the 19th century were lithographed and unattractive, though accurate enough." === Modern lithographic process === High-volume lithography is used to produce posters, maps, books, newspapers, and packaging—just about any smooth, mass-produced item with print and graphics on it. Most books, indeed all types of high-volume text, are printed using offset lithography. For offset lithography, which depends on photographic processes, flexible aluminum, polyester, mylar or paper printing plates are used instead of stone tablets. 
Modern printing plates have a brushed or roughened texture and are covered with a photosensitive emulsion. A photographic negative of the desired image is placed in contact with the emulsion and the plate is exposed to ultraviolet light. After development, the emulsion shows a reverse of the negative image, which is thus a duplicate of the original (positive) image. The image on the plate emulsion can also be created by direct laser imaging in a CTP (computer-to-plate) device known as a platesetter. The positive image is the emulsion that remains after imaging. Non-image portions of the emulsion have traditionally been removed by a chemical process, though in recent times, plates have become available that do not require such processing. The plate is affixed to a cylinder on a printing press. Dampening rollers apply water, which covers the blank portions of the plate but is repelled by the emulsion of the image area. Hydrophobic ink, which is repelled by the water and only adheres to the emulsion of the image area, is then applied by the inking rollers. If this image were transferred directly to paper, it would create a mirror-type image and the paper would become too wet. Instead, the plate rolls against a cylinder covered with a rubber blanket, which squeezes away the water, picks up the ink and transfers it to the paper with uniform pressure. The paper passes between the blanket cylinder and a counter-pressure or impression cylinder and the image is transferred to the paper. Because the image is first transferred, or offset to the rubber blanket cylinder, this reproduction method is known as offset lithography or offset printing. Many innovations and technical refinements have been made in printing processes and presses over the years, including the development of presses with multiple units (each containing one printing plate) that can print multi-color images in one pass on both sides of the sheet, and presses that accommodate continuous rolls (webs) of paper, known as web presses. Another innovation was the continuous dampening system first introduced by Dahlgren, instead of the old method (conventional dampening) which is still used on older presses, using rollers covered with molleton (cloth) that absorbs the water. This increased control of the water flow to the plate and allowed for better ink and water balance. Recent dampening systems include a "delta effect or vario", which slows the roller in contact with the plate, thus creating a sweeping movement over the ink image to clean impurities known as "hickies". This press is also called an ink pyramid because the ink is transferred through several layers of rollers with different purposes. Fast lithographic 'web' printing presses are commonly used in newspaper production. The advent of desktop publishing made it possible for type and images to be modified easily on personal computers for eventual printing by desktop or commercial presses. The development of digital imagesetters enabled print shops to produce negatives for platemaking directly from digital input, skipping the intermediate step of photographing an actual page layout. The development of the digital platesetter during the late 20th century eliminated film negatives altogether by exposing printing plates directly from digital input, a process known as computer-to-plate printing. 
== Lithography as an artistic medium == During the early years of the 19th century, lithography had only a limited effect on printmaking, mainly because technical difficulties remained to be overcome. Germany was the main center of production in this period. Godefroy Engelmann, who moved his press from Mulhouse to Paris in 1816, largely succeeded in resolving the technical problems, and during the 1820s lithography was adopted by artists such as Delacroix and Géricault. After early experiments such as Specimens of Polyautography (1803), which had experimental works by a number of British artists including Benjamin West, Henry Fuseli, James Barry, Thomas Barker of Bath, Thomas Stothard, Henry Richard Greville, Richard Cooper, Henry Singleton, and William Henry Pyne, London also became a center, and some of Géricault's prints were in fact produced there. Goya in Bordeaux produced his last series of prints by lithography—The Bulls of Bordeaux of 1828. By the mid-century the initial enthusiasm had somewhat diminished in both countries, although the use of lithography was increasingly favored for commercial applications, which included the prints of Daumier, published in newspapers. Rodolphe Bresdin and Jean-François Millet also continued to practice the medium in France, and Adolph Menzel in Germany. In 1862 the publisher Cadart tried to initiate a portfolio of lithographs by various artists, which was not successful but included several prints by Manet. The revival began during the 1870s, especially in France with artists such as Odilon Redon, Henri Fantin-Latour and Degas producing much of their work in this manner. The need for strictly limited editions to maintain the price had now been realized, and the medium became more accepted. In the 1890s, color lithography gained success in part by the emergence of Jules Chéret, known as the father of the modern poster, whose work went on to inspire a new generation of poster designers and painters, most notably Toulouse-Lautrec, and former student of Chéret, Georges de Feure. By 1900 the medium in both color and monotone was an accepted part of printmaking. During the 20th century, a group of artists, including Braque, Calder, Chagall, Dufy, Léger, Matisse, Miró, and Picasso, rediscovered the largely undeveloped artform of lithography thanks to the Mourlot Studios, also known as Atelier Mourlot, a Parisian printshop founded in 1852 by the Mourlot family. The Atelier Mourlot originally specialized in the printing of wallpaper; but it was transformed when the founder's grandson, Fernand Mourlot, invited a number of 20th-century artists to explore the complexities of fine art printing. Mourlot encouraged the painters to work directly on lithographic stones in order to create original artworks that could then be executed under the direction of master printers in small editions. The combination of modern artist and master printer resulted in lithographs that were used as posters to promote the artists' work. Grant Wood, George Bellows, Alphonse Mucha, Max Kahn, Pablo Picasso, Eleanor Coen, Jasper Johns, David Hockney, Susan Dorothea White, and Robert Rauschenberg are a few of the artists who have produced most of their prints in the medium. M. C. Escher is considered a master of lithography, and many of his prints were created using this process. 
More than other printmaking techniques, printmakers in lithography still largely depend on access to good printers, and the development of the medium has been greatly influenced by when and where these have been established. An American scene for lithography was founded by Robert Blackburn in New York City. As a special form of lithography, the serilith or seriolithograph is a mixed-media original print that combines both lithography and serigraphy (screen printing). In this technique, the artist hand-draws the separations for each process, ensuring a high level of craftsmanship. Seriliths are typically produced as limited-edition fine art prints and are published by artists and publishers around the world. They are widely recognized and collected within the art community. == See also == == References == == External links == About Lithography Twyman, Michael. Early Lithographed Books. Pinner, Middlesex: Private Libraries Association, 1990 Museum of Modern Art information on printing techniques and examples of prints The Invention of Lithography, Aloys Senefelder, (Eng. trans. 1911) (a searchable facsimile at the University of Georgia Libraries; DjVu and layered PDF format) Theo De Smedt's website, author of "What's lithography" Extensive information on Honoré Daumier and his life and work, including his entire output of lithographs Digital work catalog to 4000 lithographs and 1000 wood engravings Detailed examination of the processes involved in the creation of a typical scholarly lithographic illustration in the 19th century Nederlands Steendrukmuseum Delacroix's Faust lithographs at the Davison Art Center, Wesleyan University A brief historic overview of Lithography. University of Delaware Library. Includes citations for 19th century books using early lithographic illustrations. Philadelphia on Stone: The First Fifty Years of Commercial Lithography in Philadelphia. Library Company of Philadelphia. Provides an historic overview of the commercial trade in Philadelphia and links to a biographical dictionary of over 500 Philadelphia lithographers and catalog of more than 1300 lithographs documenting Philadelphia. Prints & People: A Social History of Printed Pictures, an exhibition catalog from The Metropolitan Museum of Art (fully available online as PDF), which contains material on lithography
Wikipedia/Lithography
A micrograph is an image, captured photographically or digitally, taken through a microscope or similar device to show a magnified image of an object. This is opposed to a macrograph or photomacrograph, an image which is also taken on a microscope but is only slightly magnified, usually less than 10 times. Micrography is the practice or art of using microscopes to make photographs. A photographic micrograph is a photomicrograph, and one taken with an electron microscope is an electron micrograph. A micrograph contains extensive details of microstructure. A wealth of information can be obtained from a simple micrograph like behavior of the material under different conditions, the phases found in the system, failure analysis, grain size estimation, elemental analysis and so on. Micrographs are widely used in all fields of microscopy. == Types == === Photomicrograph === A light micrograph or photomicrograph is a micrograph prepared using an optical microscope, a process referred to as photomicroscopy. At a basic level, photomicroscopy may be performed simply by connecting a camera to a microscope, thereby enabling the user to take photographs at reasonably high magnification. Scientific use began in England in 1850 by Richard Hill Norris FRSE for his studies of blood cells. Roman Vishniac was a pioneer in the field of photomicroscopy, specializing in the photography of living creatures in full motion. He also made major developments in light-interruption photography and color photomicroscopy. Photomicrographs may also be obtained using a USB microscope attached directly to a home computer or laptop. === Electron micrograph === An electron micrograph is a micrograph prepared using an electron microscope. == Magnification and micron bars == Micrographs usually have micron bars, or magnification ratios, or both. Magnification is a ratio between the size of an object on a picture and its real size. Magnification can be a misleading parameter as it depends on the final size of a printed picture and therefore varies with picture size. A scale bar, or micron bar, is a line of known length displayed on a picture. The bar can be used for measurements on a picture. When the picture is resized the bar is also resized making it possible to recalculate the magnification. Ideally, all pictures destined for publication/presentation should be supplied with a scale bar; the magnification ratio is optional. All but one (limestone) of the micrographs presented on this page do not have a micron bar; supplied magnification ratios are likely incorrect, as they were not calculated for pictures at the present size. == Micrography as art == The microscope has been mainly used for scientific discovery. It has also been linked to the arts since its invention in the 17th century. Early adopters of the microscope, such as Robert Hooke and Antonie van Leeuwenhoek, were excellent illustrators. Cornelius Varley's graphic microscope made sketching from a microscope easier with a camera-lucida-like mechanism. After the invention of photography in the 1820s the microscope was later combined with the camera to take pictures instead of relying on an artistic rendering. Since the early 1970s individuals have been using the microscope as an artistic instrument. Websites and traveling art exhibits such as the Nikon Small World and Olympus Bioscapes have featured a range of images for the sole purpose of artistic enjoyment. 
Some collaborative groups, such as the Paper Project have also incorporated microscopic imagery into tactile art pieces as well as 3D immersive rooms and dance performances. In 2015, photographer and gemologist Danny J. Sanchez photographed mineral and gemstone interiors in works referred to as "otherworldly". == Photomicrography in smartphones == A paper published in 2009 described a method of photomicrography in a smartphone using a free-hand technique. An operator only need focus the camera through the eyepiece of a microscope and capture a photo normally. Later, adapters were designed for the purpose and sold commercially or home-made. A home-made adapter was also made using scrap materials and a Coca-Cola aluminum can. == Gallery == == See also == Close-up Digital microscope Macro photography Microphotograph Microscopy USB microscope == References == == External links == Shots with a Microscope – a basic, comprehensive guide to photomicrography Scientific photomicrographs – free scientific quality photomicrographs by Doc. RNDr. Josef Reischig, CSc. Seeing Beyond the Human Eye Video produced by Off Book (web series) Solomon C. Fuller bio Charles Krebs Microscopic Images Photomicrography by Danny J. Sanchez Dennis Kunkel Microscopy Andrew Paul Leonard, APL Microscopic Cell Centered Database – Montage Nikon Small World Olympus Bioscapes Other examples Robert Berdan micrographs
Wikipedia/Electron_micrograph