Dataset columns: id (int64, values 39 to 79M); url (string, lengths 31 to 227); text (string, lengths 6 to 334k); source (string, lengths 1 to 150); categories (list, lengths 1 to 6); token_count (int64, values 3 to 71.8k); subcategories (list, lengths 0 to 30).
8,761,205
https://en.wikipedia.org/wiki/Compatibility%20%28chemical%29
Chemical compatibility is a rough measure of how stable a substance is when mixed with another substance. If two substances can mix without undergoing a chemical reaction, they are considered compatible. Incompatible chemicals react with each other and can cause corrosion, mechanical weakening, evolution of gas, fire, or other undesirable effects. Chemical compatibility is important when choosing materials for chemical storage or reactions, so that the vessel and other apparatus are not damaged by their contents. For purposes of chemical storage, incompatible chemicals should not be stored together, so that a leak does not create an even more dangerous situation through chemical reactions. Chemical compatibility also refers to whether a container material is suitable for storing a chemical, or whether a tool or object that comes into contact with a chemical will resist degradation. For example, when stirring a chemical, the stirrer must be stable in the chemical being stirred. Many companies publish chemical resistance charts and databases to help users select appropriate materials for handling chemicals. Such charts are particularly important for polymers, which are often not compatible with common chemical reagents; compatibility may even depend on how the polymers have been processed. For example, 3-D printed polymer tools used for chemical experiments must be chosen with care to ensure chemical compatibility. Chemical compatibility is also important when choosing among different chemicals that serve similar purposes. For example, bleach and ammonia, both commonly used as cleaners, can undergo a dangerous chemical reaction when combined, producing poisonous fumes. Even though each has a similar use, care must be taken not to allow these chemicals to mix. References External links Chemical compatibility database Chemical safety
Compatibility (chemical)
[ "Chemistry" ]
336
[ "Chemical safety", "Chemical accident", "nan" ]
8,761,262
https://en.wikipedia.org/wiki/Criticism%20of%20Linux
The criticism of Linux focuses on issues concerning the use of operating systems that use the Linux kernel. While the Linux-based Android operating system dominates the smartphone market in many countries, and Linux is used on the New York Stock Exchange and most supercomputers, it is used in few desktop and laptop computers. Much of the criticism of Linux is related to the lack of desktop and laptop adoption, although there has been growing unease with the project's perspective on security, and its adoption of systemd has been controversial. Linux kernel criticisms Kernel development politics Some security professionals say that the rise in prominence of operating system-level virtualization using Linux has raised the profile of attacks against the kernel, and that Linus Torvalds is reluctant to add mitigations against kernel-level attacks in official releases. Linux 4.12, released in 2017, enabled KASLR by default, but its effectiveness is debated. Con Kolivas, a former kernel developer, tried to optimize the kernel scheduler for interactive desktop use. He eventually stopped maintaining his patches, citing a lack of appreciation for his work. In the 2007 interview Why I quit: kernel developer Con Kolivas he stated: Kernel performance At LinuxCon 2009, Linux creator Linus Torvalds said that the Linux kernel has become "bloated and huge": At LinuxCon 2014, Torvalds said he thinks the bloat situation is better because modern PCs are a lot faster: Kernel code quality In an interview with the German newspaper Zeit Online in November 2011, Linus Torvalds stated that Linux has become "too complex" and he was concerned that developers would not be able to find their way through the software anymore. He complained that even subsystems have become very complex and he told the publication that he is "afraid of the day" when there will be an error that "cannot be evaluated anymore." Andrew Morton, one of the Linux kernel's lead developers, explains that many bugs identified in Linux are never fixed: Theo de Raadt, founder of OpenBSD, compares the OpenBSD development process to that of Linux: Desktop use Critics of Linux on the desktop have frequently argued that a lack of top-selling video games on the platform holds adoption back. For instance, , the Steam gaming service has 1,500 games available on Linux, compared to 2,323 games for Mac and 6,500 Windows games. As of October 2021, Proton, a Steam-backed development effort descended from Wine, provides compatibility with a large number of Windows-only games, and potentially better performance than Linux-native ports in some cases. ProtonDB is a community-maintained effort to gauge how well different versions of Proton work with a given game. As a desktop operating system, Linux has been criticized on a number of fronts, including: A confusing number of choices of distributions and desktop environments. Poor open source support for some hardware, in particular drivers for 3D graphics chips, where manufacturers were unwilling to provide full specifications. As a result, many video cards have both open and closed source drivers, usually with different levels of support. Limited availability of widely used commercial applications (such as Adobe Photoshop and Microsoft Word). This is a result of the software developers not supporting Linux rather than any fault of Linux itself. Sometimes this can be solved by running the Windows versions of these programs through Wine, a virtual machine, or dual-booting.
Even so, this creates a chicken-and-egg situation where developers make programs for Windows due to its market share, and consumers use Windows due to the availability of those programs. Distribution fragmentation Another common complaint levelled against Linux is the abundance of distributions available. , DistroWatch lists 275 distributions. While Linux advocates have defended the number as an example of freedom of choice, other critics cite the large number as a cause of confusion and lack of standardization in Linux operating systems. Alexander Wolfe wrote in InformationWeek: Caitlyn Martin from LinuxDevCenter has been critical of the number of Linux distributions: Hardware support In recent decades (since the established dominance of Microsoft Windows), hardware developers have often been reluctant to provide full technical documentation for their products to allow drivers to be written. This has meant that a Linux user had to carefully hand-pick the hardware that made up the system to ensure functionality and compatibility. These problems have largely been addressed: At one time, Linux systems required removable media, such as floppy discs and CD-ROMs, to be manually mounted before they could be accessed. Mounting media is now automatic in nearly all distributions, with the development of udev. Some companies, such as EmperorLinux, have addressed the problems of laptop hardware compatibility by making modified Linux distributions with specially selected hardware to ensure compatibility from delivery. Directory structure The traditional directory structure, which is a heritage from Linux's Unix roots in the 1970s, has been criticized as inappropriate for desktop end users. Some Linux distributions like GoboLinux and have proposed alternative hierarchies that were argued to be easier for end users, though they achieved little acceptance. Criticism by Microsoft In 2004, Microsoft initiated its Get the Facts marketing campaign, which specifically criticized Linux server usage. In particular, it claimed that the vulnerabilities of Windows are fewer in number than those of Linux distributions, that Windows is more reliable and secure than Linux, that the total cost of ownership of Linux is higher (due to complexity, acquisition costs, and support costs), that use of Linux places a burden of liability on businesses, and that "Linux vendors provide little, if any indemnification coverage." In addition, the corporation published various studies in an attempt to prove this – the factuality of which has been heavily disputed by different authors who claim that Microsoft's comparisons are flawed. Many Linux distributors now offer indemnification to customers. Internal Microsoft reports from the Halloween documents leak have presented conflicting views. In particular, documents from 1998 and 1999 conceded that "Linux ... is trusted in mission critical applications, and – due to its open source code – has a long term credibility which exceeds many other competitive OSs", "An advanced Win32 GUI user would have a short learning cycle to become productive [under Linux]", "Long term, my simple experiments do indicate that Linux has a chance at the desktop market ...", and "Overall respondents felt the most compelling reason to support OSS was that it 'Offers a low total cost of ownership (TCO)'." Responses to criticism The Linux community has had mixed responses to these and other criticisms.
As mentioned above, while some criticism has led to new features and better user-friendliness, the Linux community as a whole has a reputation for being resistant to criticism. Writing for PC World, Keir Thomas noted that "Most of the time the world of Linux tends to be anti-critical. If anybody in the community dares be critical, they get stomped upon." In a 2015 interview, Linus Torvalds also mentioned the tendency of Linux desktop environment projects to blame their users instead of themselves when criticized. See also Criticism of desktop Linux Criticism of Microsoft Windows The Unix-Haters Handbook References Linux Linux
Criticism of Linux
[ "Technology" ]
1,453
[ "Criticisms of software and websites" ]
8,761,319
https://en.wikipedia.org/wiki/Skorokhod%27s%20embedding%20theorem
In mathematics and probability theory, Skorokhod's embedding theorem is either or both of two theorems that allow one to regard any suitable collection of random variables as a Wiener process (Brownian motion) evaluated at a collection of stopping times. Both results are named for the Ukrainian mathematician A. V. Skorokhod. Skorokhod's first embedding theorem Let X be a real-valued random variable with expected value 0 and finite variance; let W denote a canonical real-valued Wiener process. Then there is a stopping time (with respect to the natural filtration of W), τ, such that Wτ has the same distribution as X and E(τ) = E(X²). Skorokhod's second embedding theorem Let X1, X2, ... be a sequence of independent and identically distributed random variables, each with expected value 0 and finite variance, and let Sn = X1 + X2 + ... + Xn. Then there is a sequence of stopping times τ1 ≤ τ2 ≤ ... such that the Wτn have the same joint distributions as the partial sums Sn, and τ1, τ2 − τ1, τ3 − τ2, ... are independent and identically distributed random variables satisfying E(τn − τn−1) = E(X1²). References (Theorems 37.6, 37.7) Probability theorems Wiener process Ukrainian inventions
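As a concrete illustration of the first theorem, the following is a standard two-point example added here for clarity (the choice of distribution is an illustration, not part of the original article):

```latex
% Two-point illustration of Skorokhod's first embedding theorem.
% Let X take the values a < 0 < b with E[X] = 0, so that
P(X=b)=\frac{-a}{b-a}, \qquad P(X=a)=\frac{b}{b-a}.
% Take \tau to be the first exit time of W from the interval (a,b):
\tau=\inf\{t\ge 0 : W_t\in\{a,b\}\}.
% The gambler's-ruin probabilities for Brownian motion give
P(W_\tau=b)=\frac{-a}{b-a}, \qquad P(W_\tau=a)=\frac{b}{b-a},
% so W_\tau has the same distribution as X, and
E[\tau]=-ab=E[X^2].
```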
Skorokhod's embedding theorem
[ "Mathematics" ]
266
[ "Theorems in probability theory", "Mathematical theorems", "Mathematical problems" ]
8,761,410
https://en.wikipedia.org/wiki/Zeta%20Crucis
Zeta Crucis, Latinized from ζ Crucis, is a binary star system in the southern constellation of Crux. It is visible to the naked eye with an apparent magnitude of 4.06m. ζ Crucis is located about 360 light-years from the Sun. It is a member of the Lower Centaurus–Crux subgroup of the Scorpius–Centaurus association. This is a double-lined spectroscopic binary star system. The spectrum matches a B-type main-sequence star with a stellar classification of B2.5 V. There is a faint visual companion with an apparent magnitude of 12.49. References B-type main-sequence stars Spectroscopic binaries Double stars Lower Centaurus Crux Crucis, Zeta Crux Durchmusterung objects 106983 060009 4679
Zeta Crucis
[ "Astronomy" ]
179
[ "Crux", "Constellations" ]
8,761,441
https://en.wikipedia.org/wiki/Eta%20Crucis
Eta Crucis (η Crucis) is a solitary star in the southern constellation of Crux. It can be seen with the naked eye, having an apparent visual magnitude of 4.14m. Based upon parallax measurements, η Crucis is located 64 light-years from the Sun. The system made its closest approach about 1.6 million years ago when it achieved perihelion at a distance of roughly 26 light years. This is an F-type main sequence star with a stellar classification of F2 V. It has 130% of the Sun's radius and shines with 7 times the luminosity of the Sun from an outer atmosphere with an effective temperature of 6,964 K. Observations of the system using the Spitzer Space Telescope show a statistically significant infrared excess of emission at a wavelength of 70 μm. This suggests the presence of a circumstellar disk. The temperature of this material is below 70 K. Eta Crucis has a pair of visual companions. Component B is a magnitude 11.80 star located at an angular separation of 48.30″ along a position angle of 300°, as of 2010. Component C has a magnitude of 12.16 and lies at an angular separation of 35.50″ along a position angle of 194°, as of 2000. References External links Crucis, Eta Crux F-type main-sequence stars 105211 059072 4616 Durchmusterung objects Gliese and GJ objects
Eta Crucis
[ "Astronomy" ]
313
[ "Crux", "Constellations" ]
8,761,520
https://en.wikipedia.org/wiki/Theta1%20Crucis
{{DISPLAYTITLE:Theta1 Crucis}} Theta1 Crucis (θ1 Cru, Theta1 Crucis) is a spectroscopic binary star system in the southern constellation of Crux. It is visible to the naked eye with an apparent visual magnitude of 4.30m. The distance to this star, as determined using parallax measurements, is around 235 light years. The pair orbit each other closely with a period of 24.5 days and an eccentricity of 0.61. The primary component is an Am star, which is a chemically peculiar A-type star that shows anomalous variations in absorption lines of certain elements. It has a stellar classification of A3(m)A8-A8. With a mass 157% that of the Sun, it radiates 81 times the Sun's luminosity from its outer atmosphere at an effective temperature of 7,341 K. Unusually for a fully radiative A-type star, X-ray emissions have been detected, which may instead be coming from the orbiting companion. References A-type main-sequence stars Crucis, Theta1 Crux Spectroscopic binaries 104671 058758 4599 Durchmusterung objects
Theta1 Crucis
[ "Astronomy" ]
262
[ "Crux", "Constellations" ]
8,761,679
https://en.wikipedia.org/wiki/Lambda%20Crucis
λ Crucis, Latinized as Lambda Crucis, is a single, variable star in the southern constellation Crux, near the constellation border with Centaurus. It is visible to the naked eye as a faint, blue-white hued point of light with an apparent visual magnitude that fluctuates around 4.62. The star is located approximately 384 light-years distant from the Sun based on parallax, and is drifting further away with a radial velocity of +12 km/s. It is a proper motion member of the Lower Centaurus–Crux sub-group in the Scorpius–Centaurus OB association, the nearest such association of co-moving massive stars to the Sun. λ Crucis is listed in the General Catalogue of Variable Stars as a possible β Cephei-type variable. Its brightness varies with an amplitude of 0m.02 over a period of 0.3951 days. However, it is currently thought more likely to be a different type of variable, possibly a λ Eridani variable or rotating ellipsoidal variable. This object is a B-type main-sequence star with a stellar classification of B4 Vne, where the suffix notation indicates "nebulous" (broad) lines due to rapid rotation, along with emission lines from circumstellar material, making it a Be star. It is around 53 million years old and is spinning rapidly with a projected rotational velocity of 341 km/s. The star has five times the mass of the Sun and about 3.0 times the Sun's radius. It is radiating 790 times the luminosity of the Sun from its photosphere at an effective temperature of . References B-type main-sequence stars Beta Cephei variables Lower Centaurus Crux Crux Crucis, Lambda Durchmusterung objects 112078 063007 4897 Be stars
Lambda Crucis
[ "Astronomy" ]
395
[ "Crux", "Constellations" ]
8,761,762
https://en.wikipedia.org/wiki/Iota%20Crucis
ι Crucis, Latinized as Iota Crucis, is a wide double star in the southern constellation of Crux. It is visible to the naked eye as a faint, orange-hued point of light with an apparent visual magnitude of 4m.69. This object is located 125 light-years from the Sun, based on parallax, and is drifting further away with a radial velocity of +7.5 km/s. The primary component is an aging giant star with a stellar classification of K0 III. Having exhausted the supply of hydrogen at its core, the star has cooled and expanded off the main sequence, and now has over seven times the girth of the Sun. It is radiating 24 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 4,824 K. The secondary is a magnitude 10.24 star at an angular separation of from the primary along a position angle of 2°, as of 2015. The Washington Double Star Catalog (2001) notes this is an "optical pair, based on study of relative motion of the components," whereas Eggleton and Tokovinin (2008) list it as a binary system. Gaia Data Release 2 gives a parallax of for the companion, implying a distance around . References K-type giants Double stars Crux Crucis, Iota Durchmusterung objects 110829 062268 4842
Iota Crucis
[ "Astronomy" ]
298
[ "Crux", "Constellations" ]
8,761,882
https://en.wikipedia.org/wiki/Theta2%20Crucis
{{DISPLAYTITLE:Theta2 Crucis}} Theta2 Crucis, Latinized from θ2 Crucis, is a spectroscopic binary star in the constellation Crux. This pair of stars completes an orbit every 3.4280 days with a low orbital eccentricity that is close to 0.0. Theta2 Crucis is located about 690 light-years from the Sun. Since a member of the system is a β Cephei-type variable star, the magnitude is not fixed but varies slightly between +4.70 and +4.74. The period of this variability is 0.0889 days. The system is categorized as a blue-white B-type main sequence star with a stellar classification of B3 V, although it has also been classified as a subgiant. Evolutionary models show it at a late stage of its main sequence life. References Crucis, Theta2 Beta Cephei variables Crux Spectroscopic binaries B-type subgiants 4603 104841 058867 Durchmusterung objects
Theta2 Crucis
[ "Astronomy" ]
229
[ "Crux", "Constellations" ]
8,761,903
https://en.wikipedia.org/wiki/Strain%20engineering
Strain engineering refers to a general strategy employed in semiconductor manufacturing to enhance device performance. Performance benefits are achieved by modulating strain, as one example, in the transistor channel, which enhances electron mobility (or hole mobility) and thereby conductivity through the channel. Another example is semiconductor photocatalysts strain-engineered for more effective use of sunlight. In CMOS manufacturing The use of various strain engineering techniques has been reported by many prominent microprocessor manufacturers, including AMD, IBM, and Intel, primarily with regard to sub-130 nm technologies. One key consideration in using strain engineering in CMOS technologies is that PMOS and NMOS respond differently to different types of strain. Specifically, PMOS performance is best served by applying compressive strain to the channel, whereas NMOS receives benefit from tensile strain. Many approaches to strain engineering induce strain locally, allowing both n-channel and p-channel strain to be modulated independently. One prominent approach involves the use of a strain-inducing capping layer. CVD silicon nitride is a common choice for a strained capping layer, because the magnitude and type of strain (e.g. tensile vs. compressive) can be adjusted by modulating the deposition conditions, especially temperature. Standard lithography patterning techniques can be used to selectively deposit strain-inducing capping layers, for example to deposit a compressive film over only the PMOS. Capping layers are key to the Dual Stress Liner (DSL) approach reported by IBM-AMD. In the DSL process, standard patterning and lithography techniques are used to selectively deposit a tensile silicon nitride film over the NMOS and a compressive silicon nitride film over the PMOS. A second prominent approach involves the use of a silicon-rich solid solution, especially silicon-germanium, to modulate channel strain. One manufacturing method involves epitaxial growth of silicon on top of a relaxed silicon-germanium underlayer. Tensile strain is induced in the silicon as the lattice of the silicon layer is stretched to mimic the larger lattice constant of the underlying silicon-germanium. Conversely, compressive strain could be induced by using a solid solution with a smaller lattice constant, such as silicon-carbon. See, e.g., U.S. Patent No. 7,023,018. Another closely related method involves replacing the source and drain regions of a MOSFET with silicon-germanium. In thin films Strain can be induced in thin films by either epitaxial growth or, more recently, topological growth. Epitaxial strain in thin films generally arises from lattice mismatch between the film and its substrate and from restructuring at the surface triple junction, and it arises either during film growth or from thermal expansion mismatch. Tuning this epitaxial strain can be used to moderate the properties of thin films and induce phase transitions. The misfit parameter () is given by the equation below: where is the lattice parameter of the epitaxial film and is the lattice parameter of the substrate. After some critical film thickness, it becomes energetically favorable to relieve some mismatch strain through the formation of misfit dislocations or microtwins. Misfit dislocations can be interpreted as a dangling bond at an interface between layers with different lattice constants.
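The displayed formulas for the misfit parameter above and for the critical thickness discussed immediately below did not survive extraction. The block below gives one commonly used convention for each; the symbol names are introduced here and the exact Matthews–Blakeslee prefactors vary between sources, so this should be read as an assumed standard form rather than the article's original expressions:

```latex
% Misfit parameter, one common sign convention (symbols introduced here):
% a_f = lattice parameter of the epitaxial film, a_s = lattice parameter of the substrate.
f = \frac{a_s - a_f}{a_f}

% Matthews-Blakeslee critical thickness, as commonly quoted (prefactors vary by source),
% with b, \nu, f, \alpha and \lambda as defined in the surrounding text:
h_c = \frac{b\,\bigl(1-\nu\cos^{2}\alpha\bigr)}{8\pi\,\lvert f\rvert\,(1+\nu)\cos\lambda}
      \left[\ln\!\left(\frac{h_c}{b}\right)+1\right]
```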
This critical thickness () was computed by Matthews and Blakeslee to be: where is the length of the Burgers vector, is the Poisson ratio, is the angle between the Burgers vector and misfit dislocation line, and is the angle between the Burgers vector and the vector normal to the dislocation's glide plane. The equilibrium in-plane strain for a thin film with a thickness () that exceeds is then given by the expression: Strain relaxation at thin film interfaces via misfit dislocation nucleation and multiplication occurs in three stages which are distinguishable based on the relaxation rate. The first stage is dominated by glide of pre-existing dislocations and is characterized by a slow relaxation rate. The second stage has a faster relaxation rate, which depends on the mechanisms for dislocation nucleation in the material. Finally, the last stage represents a saturation in strain relaxation due to strain hardening. Strain engineering has been well-studied in complex oxide systems, in which epitaxial strain can strongly influence the coupling between the spin, charge, and orbital degrees of freedom, and thereby impact the electrical and magnetic properties. Epitaxial strain has been shown to induce metal-insulator transitions and shift the Curie temperature for the antiferromagnetic-to-ferromagnetic transition in La1−xSrxMnO3. In alloy thin films, epitaxial strain has been observed to impact the spinodal instability, and therefore the driving force for phase separation. This is explained as a coupling between the imposed epitaxial strain and the system's composition-dependent elastic properties. Researchers have more recently achieved strain in thick oxide films larger than that achieved in epitaxial growth by incorporating nano-structured topologies (Guerra and Vezenov, 2002) and nanorods/nanopillars within an oxide film matrix. Following this work, researchers worldwide have created such self-organized, phase-separated, nanorod/nanopillar structures in numerous oxide films, as reviewed in the literature. In 2008, Thulin and Guerra published calculations of strain-modified anatase titania band structures, which indicated higher hole mobility with increasing strain. Additionally, in two-dimensional materials such as , strain has been shown to induce conversion from an indirect semiconductor to a direct semiconductor, allowing a hundred-fold increase in the light emission rate. In III-N LEDs Strain engineering plays a major role in III-N LEDs, one of the most ubiquitous and efficient LED varieties, which gained widespread popularity after the 2014 Nobel Prize in Physics. Most III-N LEDs utilize a combination of GaN and InGaN, the latter being used as the quantum well region. The composition of In within the InGaN layer can be tuned to change the color of the light emitted from these LEDs. However, the epilayers of the LED quantum well have inherently mismatched lattice constants, creating strain between the layers. Due to the quantum confined Stark effect (QCSE), the electron and hole wave functions are misaligned within the quantum well, resulting in a reduced overlap integral, decreased recombination probability, and increased carrier lifetime. As such, applying an external strain can negate the internal quantum well strain, reducing the carrier lifetime and making the LEDs a more attractive light source for communications and other applications requiring fast modulation speeds. With appropriate strain engineering, it is possible to grow III-N LEDs on Si substrates.
This can be accomplished via strain relaxed templates, superlattices, and pseudo-substrates. Furthermore, electro-plated metal substrates have also shown promise in applying an external counterbalancing strain to increase the overall LED efficiency. In DUV LEDs In addition to traditional strain engineering that takes place with III-N LEDs, Deep Ultraviolet (DUV) LEDs, which use AlN, AlGaN, and GaN, undergo a polarity switch from TE to TM at a critical Al composition within the active region. The polarity switch arises from the negative value of AlN’s crystal field splitting, which results in its valence bands switching character at this critical Al composition. Studies have established a linear relationship between this critical composition within the active layer and the Al composition used in the substrate templating region, underscoring the importance of strain engineering in the character of light emitted from DUV LEDs. Furthermore, any existing lattice mismatch causes phase separation and surface roughness, in addition to creating dislocations and point defects. The former results in local current leakage while the latter enhances the nonradiative recombination process, both reducing the device's internal quantum efficiency (IQE). Active layer thickness can trigger the bending and annihilation of threading dislocations, surface roughening, phase separation, misfit dislocation formation, and point defects. All of these mechanisms compete across different thicknesses. By delaying strain accumulation so that a thicker epilayer can be grown before the target degree of relaxation is reached, certain adverse effects can be reduced. In nano-scale materials Typically, the maximum elastic strain achievable in normal bulk materials ranges from 0.1% to 1%. This limits our ability to effectively modify material properties in a reversible and quantitative manner using strain. However, recent research on nanoscale materials has shown that the elastic strain range is much broader. Even the hardest material in nature, diamond, exhibits up to 9.0% uniform elastic strain at the nanoscale. Keeping in line with Moore's law, semiconductor devices are continuously shrinking in size to the nanoscale. With the concept of "smaller is stronger", elastic strain engineering can be fully exploited at the nanoscale. In nanoscale elastic strain engineering, the crystallographic direction plays a crucial role. Most materials are anisotropic, meaning their properties vary with direction. This is particularly true in elastic strain engineering, as applying strain in different crystallographic directions can have a significant impact on the material's properties. Taking diamond as an example, Density Functional Theory (DFT) simulations demonstrate distinct behaviors in the rate at which the bandgap decreases when the material is strained along different directions. Straining along the <110> direction causes the bandgap to decrease faster, while straining along the <111> direction causes it to decrease more slowly but produces a transition from an indirect to a direct bandgap. A similar indirect-direct bandgap transition can be observed in strained silicon. Theoretically, achieving this indirect-direct bandgap transition in silicon requires more than 14% uniaxial strain. In 2D materials In the case of elastic strain, when the limit is exceeded, plastic deformation occurs due to slip and dislocation movement in the microstructure of the material.
Plastic deformation is not commonly utilized in strain engineering due to the difficulty in controlling its uniform outcome. Plastic deformation is influenced more by local distortion than by the global stress field observed in elastic strain. However, 2D materials have a greater range of elastic strain compared to bulk materials because they lack typical plastic deformation mechanisms like slip and dislocation motion. Additionally, it is easier to apply strain along a specific crystallographic direction in 2D materials compared to bulk materials. Recent research has shown significant progress in strain engineering in 2D materials through techniques such as deforming the substrate, inducing material rippling, and creating lattice asymmetry. These methods of applying strain effectively enhance the electric, magnetic, thermal, and optical properties of the material. For example, in one reported study, the optical gap of monolayer and bilayer MoS2 decreases at rates of approximately 45 and 120 meV/%, respectively, under 0-2.2% uniaxial strain. Additionally, the photoluminescence intensity of monolayer MoS2 decreases at 1% strain, indicating a direct-to-indirect bandgap transition. Strain-engineered rippling in black phosphorus has likewise been shown to lead to bandgap variations between +10% and -30%. In the case of ReSe2, the literature shows the formation of local wrinkle structures when the substrate is relaxed after stretching. This folding process results in a redshift in the absorption spectrum peak, leading to increased light absorption and changes in magnetic properties and bandgap. The research team also conducted I-V curve tests on the stretched samples and found that a 30% stretch resulted in lower resistance compared to the unstretched samples. However, a 50% stretch showed the opposite effect, with higher resistance compared to the unstretched samples. This behavior can be attributed to the folding of ReSe2, with the folded regions being particularly weak. See also Strained silicon References Semiconductors
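To make the quoted strain response of MoS2 concrete, here is a minimal back-of-the-envelope sketch. The linear model, function name, and unstrained gap values are illustrative assumptions; only the meV/% rates come from the text above:

```python
# Linear estimate of the optical-gap shift of MoS2 under uniaxial strain,
# using the rates quoted above (~45 meV/% monolayer, ~120 meV/% bilayer).
# The unstrained gap values and the assumption of linearity are illustrative only.

RATE_MEV_PER_PERCENT = {"monolayer": 45.0, "bilayer": 120.0}
UNSTRAINED_GAP_EV = {"monolayer": 1.85, "bilayer": 1.60}  # assumed reference values

def optical_gap_ev(layers: str, strain_percent: float) -> float:
    """Estimate the optical gap (eV) at a given uniaxial strain (0-2.2 %)."""
    shift_ev = RATE_MEV_PER_PERCENT[layers] * strain_percent / 1000.0
    return UNSTRAINED_GAP_EV[layers] - shift_ev  # gap decreases with tensile strain

if __name__ == "__main__":
    for layers in ("monolayer", "bilayer"):
        for strain in (0.0, 1.0, 2.2):
            print(f"{layers:9s} {strain:4.1f}% strain -> {optical_gap_ev(layers, strain):.3f} eV")
```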
Strain engineering
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,511
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
8,762,069
https://en.wikipedia.org/wiki/Toter
A toter, or toter truck, is a tractor unit specifically designed for the modular and manufactured housing industries. Characteristics The toter is often confused with or mistaken for a semi-trailer tractor. The key difference between the two is in the method of coupling. Toters are equipped with a 2-5/16" (59 mm) diameter ball that couples with the tow hitch on the tongue of a mobile or manufactured home or the removable transport frame of a modular home. See also Heavy hauler References Tractors
Toter
[ "Engineering" ]
105
[ "Engineering vehicles", "Tractors" ]
8,762,082
https://en.wikipedia.org/wiki/Mark%20I%20Fire%20Control%20Computer
The Mark 1, and later the Mark 1A, Fire Control Computer was a component of the Mark 37 Gun Fire Control System deployed by the United States Navy during World War II and up to 1991 and possibly later. It was originally developed by Hannibal C. Ford of the Ford Instrument Company and William Newell. It was used on a variety of ships, ranging from destroyers (one per ship) to battleships (four per ship). The Mark 37 system used tachymetric target motion prediction to compute a fire control solution. It contained a target simulator which was updated by further target tracking until it matched. Weighing more than , the Mark 1 itself was installed in the plotting room, a watertight compartment that was located deep inside the ship's hull to provide as much protection against battle damage as possible. Essentially an electromechanical analog computer, the Mark 1 was electrically linked to the gun mounts and the Mark 37 gun director, the latter mounted as high on the superstructure as possible to afford maximum visual and radar range. The gun director was equipped with both optical and radar range finding, and was able to rotate on a small barbette-like structure. Using the range finders and telescopes for bearing and elevation, the director was able to produce a continuously varying set of outputs, referred to as line-of-sight (LOS) data, that were electrically relayed to the Mark 1 via synchro motors. The LOS data provided the target's present range, bearing, and in the case of aerial targets, altitude. Additional inputs to the Mark 1A were continuously generated from the stable element, a gyroscopic device that reacted to the roll and pitch of the ship, the pitometer log, which measured the ship's speed through the water, and an anemometer, which provided wind speed and direction. The Stable Element would now be called a vertical gyro. In "Plot" (the plotting room), a team of sailors stood around the Mark 1 and continuously monitored its operation. They would also be responsible for calculating and entering the average muzzle velocity of the projectiles to be fired before action started. This calculation was based on the type of propellant to be used and its temperature, the projectile type and weight, and the number of rounds fired through the guns to date. Given these inputs, the Mark 1 automatically computed the lead angles to the future position of the target at the end of the projectile's time of flight, adding in corrections for gravity, relative wind, the magnus effect of the spinning projectile, and parallax, the latter compensation necessary because the guns themselves were widely displaced along the length of the ship. Lead angles and corrections were added to the LOS data to generate the line-of-fire (LOF) data. The LOF data, bearing and elevation, as well as the projectile's fuze time, was sent to the mounts by synchro motors, whose motion actuated hydraulic servos with excellent dynamic accuracy to aim the guns. Once the system was "locked" on the target, it produced a continuous fire control solution. While these fire control systems greatly improved the long-range accuracy of ship-to-ship and ship-to-shore gunfire, especially on heavy cruisers and battleships, it was in the anti-aircraft warfare mode that the Mark 1 made the greatest contribution. 
However, the anti-aircraft value of analog computers such as the Mark 1 was greatly reduced with the introduction of jet aircraft, where the relative motion of the target became such that the computer's mechanism could not react quickly enough to produce accurate results. Furthermore, the target speed, originally limited to 300 knots by a mechanical stop, was twice doubled to 600, then 1,200 knots by gear ratio changes. The design of the postwar Mark 1A may have been influenced by the Bell Labs Mark 8, which was developed as an all electrical computer, incorporating technology from the M9 gun data computer as a safeguard to ensure adequate supplies of fire control computers for the USN during WW2. Surviving Mark 1 computers were upgraded to the Mark 1A standard after World War II ended. Among the upgrades were removing the vector solver from the Mark 1 and redesigning the reverse coordinate conversion scheme that updated target parameters. The scheme kept the four component integrators, obscure devices not included in explanations of basic fire control mechanisms. They worked like a ball–type computer mouse, but had shaft inputs to rotate the ball and to determine the angle of its axis of rotation. The round target course indicator on the right side of the star shell computer with the two panic buttons is a holdover from WW II days when early tracking data and initial angle–output position of the vector solver caused target speed to decrease. Pushbuttons slewed the vector solver quickly. See also Ship gun fire-control system Admiralty Fire Control Table High Angle Control System Gun data computer Stabilisierter Leitstand References External links Fire Control Fundamentals Manual for the Mark 1 and Mark 1a Computer Maintenance Manual for the Mark 1 Computer Manual for the Mark 6 Stable Element Gun Fire Control System Mark 37 Operating Instructions at ibiblio.org Director section of Mark 1 Mod 1 computer operations at NavSource.org Naval Ordnance and Gunnery, Vol. 2, Chapter 25, AA Fire Control Systems Artillery operation Mechanical computers Military computers Fire-control computers of World War II Military equipment introduced in the 1930s
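The lead-angle problem described above can be sketched numerically. The following is a deliberately simplified, hypothetical illustration (constant target velocity, constant projectile speed, no corrections for gravity, wind, spin, or parallax) and is not a model of the actual Mark 1 mechanisms; all numbers are made up for the example:

```python
import math

# Simplified lead-angle calculation: find where to aim so that a projectile with
# constant speed meets a target moving at constant velocity. The real Mark 1 also
# corrected for gravity, relative wind, the Magnus effect, and parallax.

def intercept_bearing(target_pos, target_vel, projectile_speed, iterations=20):
    """Return (bearing_deg, time_of_flight) for a constant-velocity target.

    target_pos: (x, y) present position in yards, relative to the gun.
    target_vel: (vx, vy) target velocity in yards per second.
    projectile_speed: average projectile speed in yards per second.
    """
    x, y = target_pos
    vx, vy = target_vel
    t = math.hypot(x, y) / projectile_speed   # first guess: flight time to present position
    for _ in range(iterations):               # fixed-point iteration on the time of flight
        fx, fy = x + vx * t, y + vy * t       # predicted future target position
        t = math.hypot(fx, fy) / projectile_speed
    fx, fy = x + vx * t, y + vy * t
    return math.degrees(math.atan2(fx, fy)), t  # bearing measured from the +y axis

if __name__ == "__main__":
    # Hypothetical crossing target 8,000 yards out at roughly 300 knots (~169 yd/s).
    bearing, tof = intercept_bearing((8000.0, 3000.0), (-169.0, 0.0), 880.0)
    print(f"aim bearing {bearing:.1f} deg, time of flight {tof:.1f} s")
```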
Mark I Fire Control Computer
[ "Physics", "Technology" ]
1,096
[ "Physical systems", "Machines", "Mechanical computers" ]
8,762,611
https://en.wikipedia.org/wiki/Global%20Language%20Monitor
The Global Language Monitor (GLM) is a company based in Austin, Texas, that analyzes trends in the English language. History Founded in Silicon Valley in 2003 by Paul J.J. Payack, the GLM describes its role as "a media analytics company that documents, analyzes and tracks cultural trends in language the world over, with a particular emphasis upon International and Global English". In April 2008, GLM moved its headquarters from San Diego to Austin. In July 2020, GLM announced that the word covid was its Top Word of 2020 for English. The company has been repeatedly criticized by linguists for promoting misinformation about language. Writing on Language Log, the linguist Ben Zimmer accused it of "hoodwink[ing] unsuspecting journalists on a range of pseudoscientific claims". References Companies based in Austin, Texas Human communication Companies established in 2003 Corpus linguistics Linguistic research institutes
Global Language Monitor
[ "Biology" ]
191
[ "Human communication", "Behavior", "Human behavior" ]
8,762,886
https://en.wikipedia.org/wiki/Hemophagocytosis
Hemophagocytosis is a dangerous form of phagocytosis in which histiocytes engulf red blood cells, white blood cells, platelets, and their precursors in bone marrow and other tissues. It is part of the presentation of hemophagocytic lymphohistiocytosis and macrophage activation syndrome. It has also been seen at autopsy of people who died of COVID-19. References Pathology
Hemophagocytosis
[ "Biology" ]
93
[ "Pathology" ]
8,763,022
https://en.wikipedia.org/wiki/Dice%20stacking
Dice stacking is a performance art, akin to juggling or sleight-of-hand, in which the performer scoops dice off a flat surface with a dice cup and then sets the cup down while moving it in a pattern that stacks the dice into a vertical column via centripetal force and inertia. Various dice arrangements, colors of dice, scooping patterns and props allow for many degrees of complexity and difficulty. Dice stacking is usually performed with canceled casino dice, as their square edges and heavy weight give them an advantage when being stacked. In Germany, the first national dice stacking championship tournament took place in May 2008. Tournament rules included the use of a specially designed DiceBoard for players, which shows the different prescribed moves. There are two disciplines: the "full-area" discipline and the "speed-area" discipline. Three of the most common dice stacks are the normal stack, the fast stack, and the point stack. See also Sport stacking References External links YouTube video of dice stacking Performing arts Object manipulation
Dice stacking
[ "Biology" ]
214
[ "Behavior", "Object manipulation", "Motor control" ]
8,763,689
https://en.wikipedia.org/wiki/Long-range%20Wi-Fi
Long-range Wi-Fi is used for low-cost, unregulated point-to-point computer network connections, as an alternative to other fixed wireless, cellular networks or satellite Internet access. Wi-Fi networks have a range that is limited by the frequency, transmission power, antenna type, the location in which they are used, and the environment. A typical wireless router in an indoor point-to-multipoint arrangement using 802.11n and a stock antenna might have a range of or less. Outdoor point-to-point arrangements, through use of directional antennas, can be extended with many kilometers between stations. Introduction Since the development of the IEEE 802.11 radio standard (marketed under the Wi-Fi brand name), the technology has become markedly less expensive and achieved higher bit rates. Long-range Wi-Fi, especially in the 2.4 GHz band (as the shorter-range, higher-bit-rate 5.8 GHz bands become popular alternatives to wired LAN connections), has proliferated with specialist devices. While Wi-Fi hotspots are ubiquitous in urban areas, some rural areas use more powerful longer-range transceivers as alternatives to cell (GSM, CDMA) or fixed wireless (Motorola Canopy and other 900 MHz) applications. The main drawbacks of 2.4 GHz vs. these lower-frequency options are: poor signal penetration – 2.4 GHz connections are effectively limited to line of sight or soft obstacles; far less range – GSM or CDMA cell phones can connect reliably at > distances; the range of GSM, imposed by the parameters of time-division multiple access, is set at ; few service providers commercially support long-distance Wi-Fi connections. Despite a lack of commercial service providers, applications for long-range Wi-Fi have cropped up around the world. It has also been used in experimental trials in the developing world to link communities separated by difficult geography with few or no other connectivity options. Some benefits of using long-range Wi-Fi for these applications include: unlicensed spectrum – avoiding negotiations with incumbent telecom providers, governments or others; smaller, simpler, cheaper antennas – 2.4 GHz antennas are less than half the size of comparable-strength 900 MHz antennas and require less lightning protection; availability of proven free software like OpenWrt, DD-WRT, and Tomato that works even on old routers (the WRT54G, for instance) and makes modes like WDS, OLSR, etc., available to anyone, including revenue-sharing models for hotspots. Nonprofit organizations operating widespread installations, such as forest services, also make extensive use of long-range Wi-Fi to augment or replace older communications technologies such as shortwave or microwave transceivers in licensed bands. Applications Business Provide coverage to a large office or business complex or campus. Establish point-to-point links between large skyscrapers, other office buildings, or airports. Bring Internet to remote construction sites or research labs. Simplify networking technologies by coalescing around a small number of widely used Internet-related technologies, limiting or eliminating legacy technologies such as shortwave radio so these can be dedicated to uses where they actually are needed. Bring Internet to a home if regular cable/DSL cannot be hooked up at the location. Bring Internet to a vacation home or cottage on a remote mountain or on a lake. Bring Internet to a yacht or large seafaring vessel. Share a neighborhood Wi-Fi network. Nonprofit and Government Connect widespread physical guard posts, e.g.
for foresters, that guard a physical area, without any new wiring In tourist regions, fill in cell dead zones with Wi-Fi coverage, and ensure connectivity for local tourist trade operators Reduce costs of dedicated network infrastructure and improve security by applying modern encryption and authentication. Military Connect critical opinion leaders and infrastructure such as schools and police stations in a network local authorities can maintain Build resilient infrastructure with cheaper equipment that an impoverished war-torn region can afford, i.e. using commercial-grade rather than military-class network technology, which may then be left with the developed-world military Reduce costs and simplify/protect supply chains by using cheaper, simpler equipment that draws less fuel and battery power; in general these are high priorities for commercial technologies like Wi-Fi, especially as they are used in mobile devices. Scientific research A long-range seismic sensor network was used during the Andean Seismic Project in Peru. A multi-hop span with a total length of was crossed with some segments around . The goal was to connect outlying stations to UCLA in order to receive seismic data in real time. Large-scale deployments The Technology and Infrastructure for Emerging Regions (TIER) project at the University of California, Berkeley, in collaboration with Intel, uses a modified Wi-Fi setup to create long-distance point-to-point links for several of its projects in the developing world. This technique, dubbed Wi-Fi over Long Distance (WiLD), is used to connect the Aravind Eye Hospital with several outlying clinics in Tamil Nadu state, India. Distances range from five to over fifteen kilometres (3–10 miles) with stations placed in line of sight of each other. These links allow specialists at the hospital to communicate with nurses and patients at the clinics through video conferencing. If the patient needs further examination or care, a hospital appointment can then be scheduled. Another network in Ghana links the University of Ghana's Legon campus to its remote campuses at the Korle Bu Medical School and the City campus; a further extension will feature links up to apart. The Tegola project of the University of Edinburgh is developing new technologies to bring high-speed, affordable broadband to rural areas beyond the reach of fibre. A 5-link ring connects Knoydart, the N. shore of Loch Hourn, and a remote community at Kilbeg to backhaul from the Gaelic College on Skye. All links pass over tidal waters; they range in length from . Increasing range in other ways Specialized Wi-Fi channels In most standard Wi-Fi routers, the three standards, a, b and g, are enough. But in long-range Wi-Fi, special technologies are used to get the most out of a Wi-Fi connection. The 802.11-2007 standard adds 10 MHz and 5 MHz OFDM modes to the 802.11a standard, and extends the cyclic prefix protection time from 0.8 μs to 3.2 μs, quadrupling the multipath distortion protection. Some commonly available 802.11a/g chipsets support the OFDM 'half-clocking' and 'quarter-clocking' defined in the 2007 standard, and 4.9 GHz and 5.0 GHz products are available with 10 MHz and 5 MHz channel bandwidths. It is likely that some 802.11n D.20 chipsets will also support 'half-clocking' for use in 10 MHz channel bandwidths, and at double the range of the 802.11n standard. 802.11n and MIMO Preliminary 802.11n support became available in many routers in 2008.
This technology can use multiple antennas to target one or more sources to increase speed. This is known as MIMO, Multiple Input Multiple Output. In tests, the speed increase was said to only occur over short distances rather than the long range needed for most point-to-point setups. On the other hand, using dual antennas with orthogonal polarities along with a 2x2 MIMO chipset effectively enables two independent carrier signals to be sent and received along the same long-distance path. Power increase or receiver sensitivity boosting Another way of adding range uses a power amplifier. Commonly known as "range extender amplifiers", these small devices usually supply around watt of power to the antenna. Such amplifiers may give more than five times the range to an existing network. Every 3 dB gain doubles the effective output power. An antenna receiving 1 watt of power and providing 6 dB of gain would have an effective power of 4 watts. Higher gain antennas and adapter placement Specially shaped directional antennas can increase the range of a Wi-Fi transmission without a drastic increase in transmission power. High-gain antennas may be of many designs, but all allow a narrow signal beam to be transmitted over a greater distance than a non-directional antenna, often nulling out nearby interference sources. Such "WokFi" techniques typically yield gains of more than 10 dB over the bare system, enough for line-of-sight (LOS) ranges of several kilometers (miles) and improvements in marginal locations. Protocol hacking The standard IEEE 802.11 protocol implementations can be modified to make them more suitable for long distance, point-to-point usage, at the risk of breaking interoperability with other Wi-Fi devices and suffering interference from transmitters located near the antenna. These approaches are used by the TIER project. In addition to power levels, it is also important to know how the 802.11 protocol acknowledges each received frame. If the acknowledgement is not received, the frame is re-transmitted. By default, the maximum distance between transmitter and receiver is . Over longer distances, the delay will force retransmissions. On standard firmware for some professional equipment such as the Cisco Aironet 1200, this parameter can be tuned for optimal throughput. OpenWrt, DD-WRT, and their derivatives also enable such tweaking. In general, open source software is vastly superior to commercial firmware for all purposes involving protocol hacking, as the philosophy is to expose all radio chipset capabilities and let the user modify them. This strategy has been especially effective with low-end routers such as the WRT54G, which had excellent hardware features that the commercial firmware did not support. As of 2011, many vendors still supported only a subset of chipset features that open source firmware unlocked, and most vendors actively encourage the use of open source firmware for protocol hacking, in part to avoid the difficulty of trying to support commercial firmware users attempting this. Packet fragmentation can also be used to improve throughput in noisy/congested conditions. Although packet fragmentation is often thought of as something bad, and does indeed add a large overhead, reducing throughput, it is sometimes necessary. For example, in a congested situation, ping times of 30-byte packets can be excellent, while ping times of 1450-byte packets can be very poor with high packet loss. Dividing the packet in half, by setting the fragmentation threshold to 750, can vastly improve the throughput.
The fragmentation threshold should be a divisor of the MTU, typically 1500, so it should be 750, 500, 375, etc. However, excessive fragmentation can make the problem worse, since the increased overhead will increase congestion. Obstacles to long-range Wi-Fi Methods that increase the range of a Wi-Fi connection may also make it fragile and volatile, due to various factors including: Landscape interference Obstacles are among the biggest problems when setting up a long-range Wi-Fi link. Trees and forests attenuate the microwave signal, and hills make it difficult to establish line-of-sight propagation. Rain and wet foliage can decrease range further, particularly with extreme amounts of rain. In a city, buildings will impact signal integrity, speed, and connectivity. Steel frames and sheet metal in walls or roofs may partially or fully reflect radio signals, causing signal loss or multipath problems. Concrete or plaster walls absorb microwave signals significantly, reducing the total signal. Hospitals, with their extreme amounts of shielding, can require extensive planning to produce a viable network. Tidal fading When point-to-point wireless connections cross tidal estuaries or archipelagos, multipath interference from reflections over tidal water can be considerably destructive. The Tegola project uses a slow frequency-hopping technique to mitigate tidal fading. 2.4 GHz interference Microwave ovens in residences dominate the 2.4 GHz band and will cause "meal time perturbations" of the noise floor. There are many other sources of interference that aggregate into a formidable obstacle to enabling long-range use in occupied areas. Residential wireless phones, USB 3.0 hubs, baby monitors, wireless cameras, remote car starters, and Bluetooth products are all capable of transmitting in the 2.4 GHz band. Due to the intended nature of the 2.4 GHz band, there are many users of this band, with potentially dozens of devices per household. By its very nature, "long range" implies an antenna system that can see many of these devices, which together produce a very high noise floor in which no single signal is usable but all are nonetheless received. The aim of a long-range system is to produce a system that overpowers these signals and/or uses directional antennas to prevent the receiver from "seeing" these devices, thereby reducing the noise floor. Notable links Italy The longest unamplified Wi-Fi link is a 304 km link achieved by CISAR (Italian Center for Radio Activities), a new world record for long-range wireless broadband. The link was first established on 2016-05-07 and 2016-05-08 and appears to be permanent, running from Monte Amiata (Tuscany) to Monte Limbara (Sardinia). Frequency: 5765 MHz, IEEE 802.11a (Wi-Fi), bandwidth 50 MHz. Data rates: up to 356.33 Mbit/s. Radio: Ubiquiti Networks AF-5X radios. Wireless routers: Ubiquiti airFiber. Length: . Antenna is 120 cm (4') with handmade waveguide, 35 dBi estimated. Venezuela Another notable unamplified Wi-Fi link is a link achieved by the Latin American Networking School Foundation: the Pico del Águila – El Baúl link. Frequency: 2412 MHz, IEEE 802.11 (Wi-Fi), channel 1, bandwidth 22 MHz; the link was established in 2006. Wireless routers: Linksys WRT54G, with OpenWrt firmware at El Águila and DD-WRT firmware at El Baúl. Length: . Parabolic dish antennas were used at both ends, recycled from satellite service: at the El Águila site an aluminum mesh reflector diameter, center-fed, and at El Baúl a fiberglass solid reflector, offset-fed, . On both ends the feeds were 12 dBi Yagis.
Linksys WRT54G series routers fed the antennas with short LMR400 cables, so the effective gain of the complete antenna is estimated at 30 dBi. This is the largest known range attained with this technology, improving on a previous US record of achieved the previous year in the U.S. The Swedish space agency attained , but used 6 watt amplifiers to reach an overhead stratospheric balloon. Peru Loreto, in the jungle region of Peru, is the location of the longest Wi-Fi-based multihop network in the world. This network has been implemented by the Rural Telecommunications Research Group of the Pontificia Universidad Católica del Perú (GTR PUCP). The Wi-Fi chain goes through many small villages and takes seventeen hops to cover the whole distance. It begins at Cabo Pantoja's health post and finishes in downtown Iquitos. Its length is about . The intervention zone was established in the lowland jungle with elevations under 500 meters (1600') above sea level. It is a flat zone, and for this reason GTR PUCP installed towers with an average height of 80 meters (260'). The link was established in 2007. GTR PUCP, the regional government of Loreto, and the Vicariate San José de Amazonas are working together on maintenance of the network. Frequency channels used: 1, 6 and 11, the non-interfering 802.11g channels. Doodle Labs wireless routers and L-com antennas were used. See also Low-power wide-area network WiMAX LoRaWAN Fresnel zone References Bibliography External links Long-Distance 802.11b Links: Performance Measurements and Experience How-to: Parabolic Dish with BiQuad feeder Building long distance wireless networking antennas A list of Long-Distance WiFi links from SeattleWireless Wifi cisar map Wi-Fi
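The decibel arithmetic in the "Power increase" section above can be made concrete with a small link-budget estimate. The antenna gains, frequency, and free-space-only assumption below are illustrative choices, not figures from the article, and a real long-range link must also budget for fade margin, cable loss, and legal EIRP limits:

```python
import math

# Toy 2.4 GHz link budget illustrating the dB arithmetic from the text:
# every 3 dB of gain doubles effective power, so 1 W into a 6 dBi antenna
# gives about 4 W EIRP. Free-space path loss only; no fade margin included.

def dbm(milliwatts: float) -> float:
    return 10.0 * math.log10(milliwatts)

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB."""
    return 20.0 * math.log10(distance_km) + 20.0 * math.log10(freq_mhz) + 32.44

tx_power_dbm = dbm(1000.0)   # 1 W transmitter = 30 dBm
tx_antenna_dbi = 6.0         # 6 dB of gain -> 4x effective power (36 dBm, about 4 W EIRP)
rx_antenna_dbi = 24.0        # e.g. a parabolic grid dish (illustrative value)

eirp_dbm = tx_power_dbm + tx_antenna_dbi
print(f"EIRP: {eirp_dbm:.1f} dBm (~{10 ** (eirp_dbm / 10) / 1000:.1f} W)")

for km in (1, 10, 50, 304):  # 304 km matches the CISAR record length (that link ran at 5.8 GHz)
    loss = fspl_db(km, 2437.0)
    rx_dbm = eirp_dbm - loss + rx_antenna_dbi
    print(f"{km:>4} km: path loss {loss:6.1f} dB, received {rx_dbm:6.1f} dBm")
```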
Long-range Wi-Fi
[ "Technology" ]
3,293
[ "Wireless networking", "Wi-Fi" ]
8,763,893
https://en.wikipedia.org/wiki/Three%20Stars%20%28Chinese%20constellation%29
The Three Stars mansion () is one of the twenty-eight mansions of the Chinese constellations. It is one of the western mansions of the White Tiger. This collection of seven bright stars is visible during winter in the Northern Hemisphere (summer in the Southern). Asterisms References Chinese constellations
Three Stars (Chinese constellation)
[ "Astronomy" ]
62
[ "Chinese constellations", "Constellations" ]
8,764,088
https://en.wikipedia.org/wiki/Lyman%20continuum%20photons
Lyman continuum photons (abbrev. LyC), shortened to Ly continuum photons or Lyc photons, are the photons emitted from stars or active galactic nuclei at photon energies above the Lyman limit. Hydrogen is ionized by absorbing LyC. Working from Victor Schumann's discovery of ultraviolet light, from 1906 to 1914, Theodore Lyman observed that atomic hydrogen absorbs light only at specific frequencies (or wavelengths) and the Lyman series is thus named after him. All the wavelengths in the Lyman series are in the ultraviolet band. This quantized absorption behavior occurs only up to an energy limit, known as the ionization energy. In the case of neutral atomic hydrogen, the minimum ionization energy is equal to the Lyman limit, where the photon has enough energy to completely ionize the atom, resulting in a free proton and a free electron. Above this energy (below this wavelength), all wavelengths of light may be absorbed. This forms a continuum in the energy spectrum; the spectrum is continuous rather than composed of many discrete lines, which are seen at lower energies. The Lyman limit is at the wavelength of 91.2 nm (912 Å), corresponding to a frequency of 3.29 million GHz and a photon energy of 13.6 eV. LyC energies are mostly in the ultraviolet C portion of the electromagnetic spectrum (see Lyman series). Although X-rays and gamma-rays will also ionize a hydrogen atom, there are far fewer of them emitted from a star's photosphere—LyC are predominantly UV-C. The photon absorption process leading to the ionization of atomic hydrogen can occur in reverse: an electron and a proton can collide and form atomic hydrogen. If the two particles were traveling slowly (so that kinetic energy can be ignored), then the photon the atom emits upon its creation will theoretically be 13.6 eV (in reality, the energy will be less if the atom is formed in an excited state). At faster speeds, the excess (kinetic) energy is radiated (but momentum must be conserved) as photons of lower wavelength (higher energy). Therefore, photons with energies above 13.6 eV are emitted by the combination of energetic protons and electrons forming atomic hydrogen, and emission from photoionized hydrogen. See also Balmer limit Lyman-alpha blob Lyman-alpha forest Lyman-break galaxy Lyman series Haro 11 - One of the two galaxies in the local universe that 'leaks' Lyman continuum photons. Tololo-1247-232 - The second galaxy in the local universe that 'leaks' Lyman continuum photons. Pea galaxy - Many nearby Green Peas are confirmed LyC 'leakers'. Reionization References Emission spectroscopy Hydrogen physics
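As a quick numerical check of the figures quoted above, the following sketch converts the Lyman-limit wavelength to frequency and photon energy via E = hf = hc/λ; the constants are standard CODATA values rather than numbers taken from the article.

```python
# Physical constants (CODATA, rounded)
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt

wavelength = 91.2e-9            # Lyman limit, m
frequency = c / wavelength      # Hz
energy_eV = h * frequency / eV  # photon energy in eV

print(f"frequency: {frequency / 1e9:.3e} GHz")   # about 3.29e6 GHz
print(f"photon energy: {energy_eV:.2f} eV")      # about 13.6 eV
```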
Lyman continuum photons
[ "Physics", "Chemistry" ]
564
[ "Emission spectroscopy", "Spectroscopy", "Spectrum (physical sciences)" ]
8,764,576
https://en.wikipedia.org/wiki/Presentation%20logic
In software development, presentation logic is concerned with how business objects are displayed to users of the software, e.g. the choice between a pop-up screen and a drop-down menu. The separation of business logic from presentation logic is an important concern for software development and an instance of the separation of content and presentation. One major rationale behind "effective separation" is the need for maximum flexibility in the code and resources dedicated to the presentation logic. Client demands, changing customer preferences and desire to present a "fresh face" for pre-existing content often result in the need to dramatically modify the public appearance of content while disrupting the underlying infrastructure as little as possible. The distinction between "presentation" (front end) and "business logic" is usually an important one, because: the presentation source code language may differ from other code assets; the production process for the application may require the work to be done at separate times and locations; different workers have different skill sets, and presentation skills do not always coincide with skills for coding business logic; code assets are easier to maintain and more readable when disparate components are kept separate and loosely coupled; References Software design Software architecture Software engineering terminology
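A minimal sketch of this separation in Python (the Invoice business object and both renderers are invented for illustration and do not come from any particular framework) keeps the display decision out of the business rules, so either view can change without touching the other.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    """Business object: knows amounts and rules, nothing about display."""
    customer: str
    net: float
    tax_rate: float

    def total(self) -> float:      # business logic
        return round(self.net * (1 + self.tax_rate), 2)

# Presentation logic: interchangeable views over the same business object.
def render_text(inv: Invoice) -> str:
    return f"Invoice for {inv.customer}: {inv.total():.2f}"

def render_html(inv: Invoice) -> str:
    return f"<p>Invoice for <b>{inv.customer}</b>: {inv.total():.2f}</p>"

inv = Invoice("Acme Ltd", net=100.0, tax_rate=0.2)
print(render_text(inv))
print(render_html(inv))
```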
Presentation logic
[ "Technology", "Engineering" ]
242
[ "Computing terminology", "Software engineering terminology", "Software engineering stubs", "Software engineering", "Design", "Software design" ]
8,765,022
https://en.wikipedia.org/wiki/Speed%20of%20electricity
The word electricity refers generally to the movement of electrons, or other charge carriers, through a conductor in the presence of a potential difference or an electric field. The speed of this flow has multiple meanings. In everyday electrical and electronic devices, the signals travel as electromagnetic waves typically at 50%–99% of the speed of light in vacuum. The electrons themselves move much more slowly. See drift velocity and electron mobility. Electromagnetic waves The speed at which energy or signals travel down a cable is actually the speed of the electromagnetic wave traveling along (guided by) the cable. I.e., a cable is a form of a waveguide. The propagation of the wave is affected by the interaction with the material(s) in and surrounding the cable, caused by the presence of electric charge carriers, interacting with the electric field component, and magnetic dipoles, interacting with the magnetic field component. These interactions are typically described using mean field theory by the permeability and the permittivity of the materials involved. The energy/signal usually flows overwhelmingly outside the electric conductor of a cable. The purpose of the conductor is thus not to conduct energy, but to guide the energy-carrying wave. Velocity of electromagnetic waves in good dielectrics The velocity of electromagnetic waves in a low-loss dielectric is given by v = 1/√(με) = c/√(μrεr), where c = speed of light in vacuum, μ0 = the permeability of free space = 4π x 10−7 H/m, μr = relative magnetic permeability of the material (usually in good dielectrics, e.g. vacuum, air, Teflon, μr ≈ 1), ε0 = the permittivity of free space = 8.854 x 10−12 F/m, and εr = relative permittivity of the material. Velocity of electromagnetic waves in good conductors The velocity of transverse electromagnetic (TEM) mode waves in a good conductor is given by v = √(2ω/(μσ)), where f = frequency, ω = angular frequency = 2πf, σcu = conductivity of annealed copper, σr = conductivity of the material relative to the conductivity of copper (for hard drawn copper σr may be as low as 0.97), so that σ = σrσcu, and permeability is defined as above: μ0 = the permeability of free space = 4π x 10−7 H/m and μr = relative magnetic permeability of the material (nonmagnetic conductive materials such as copper typically have μr near 1). This velocity is the speed with which electromagnetic waves penetrate into the conductor and is not the drift velocity of the conduction electrons. In copper at 60 Hz, it is about 3.2 m/s. As a consequence of Snell's Law and the extremely low speed, electromagnetic waves always enter good conductors in a direction that is within a milliradian of normal to the surface, regardless of the angle of incidence. Electromagnetic waves in circuits In the theoretical investigation of electric circuits, the velocity of propagation of the electromagnetic field through space is usually not considered; the field is assumed, as a precondition, to be present throughout space. The magnetic component of the field is considered to be in phase with the current, and the electric component is considered to be in phase with the voltage. The electric field starts at the conductor, and propagates through space at the velocity of light, which depends on the material it is traveling through. The electromagnetic fields do not move through space. It is the electromagnetic energy that moves. The corresponding fields simply grow and decline in a region of space in response to the flow of energy. 
At any point in space, the electric field corresponds not to the condition of the electric energy flow at that moment, but to that of the flow at a moment earlier. The latency is determined by the time required for the field to propagate from the conductor to the point under consideration. In other words, the greater the distance from the conductor, the more the electric field lags. Since the velocity of propagation is very high – about 300,000 kilometers per second – the wave of an alternating or oscillating current, even of high frequency, is of considerable length. At 60 cycles per second, the wavelength is 5,000 kilometers, and even at 100,000 hertz, the wavelength is 3 kilometers. This is a very large distance compared to those typically used in field measurement and application. The important part of the electric field of a conductor extends to the return conductor, which usually is only a few feet distant. At greater distance, the aggregate field can be approximated by the differential field between conductor and return conductor, which tend to cancel. Hence, the intensity of the electric field is usually inappreciable at a distance which is still small compared to the wavelength. Within the range in which an appreciable field exists, this field is practically in phase with the flow of energy in the conductor. That is, the velocity of propagation has no appreciable effect unless the return conductor is very distant, or entirely absent, or the frequency is so high that the distance to the return conductor is an appreciable portion of the wavelength. Charge carrier drift The drift velocity deals with the average velocity of a particle, such as an electron, due to an electric field. In general, an electron will propagate randomly in a conductor at the Fermi velocity. Free electrons in a conductor follow a random path. Without the presence of an electric field, the electrons have no net velocity. When a DC voltage is applied, the electron drift velocity will increase in speed proportionally to the strength of the electric field. The drift velocity in a 2 mm diameter copper wire in 1 ampere current is approximately 8 cm per hour. AC voltages cause no net movement. The electrons oscillate back and forth in response to the alternating electric field, over a distance of a few micrometers – see example calculation. See also Speed of light Speed of gravity Speed of sound Telegrapher's equations Reflections of signals on conducting lines References Further reading Alfvén, H. (1950). Cosmical electrodynamics. Oxford: Clarendon Press Alfvén, H. (1981). Cosmic plasma. Taylor & Francis US. "Velocity of Propagation of Electric Field", Theory and Calculation of Transient Electric Phenomena and Oscillations by Charles Proteus Steinmetz, Chapter VIII, p. 394-, McGraw-Hill, 1920. Fleming, J. A. (1911). Propagation of electric currents in telephone & telegraph conductors. New York: Van Nostrand Electromagnetism Electricity
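The figures quoted above can be reproduced with a few lines of arithmetic. The sketch below assumes a free-electron density of about 8.5 × 10^28 m^-3 and a conductivity of 5.8 × 10^7 S/m for copper (values not stated in the text) and estimates both the electron drift velocity and the velocity of the wave inside the conductor.

```python
import math

# Assumed material constants for copper (not given in the article text)
n = 8.5e28        # free-electron density, 1/m^3
sigma = 5.8e7     # conductivity, S/m
q = 1.602e-19     # elementary charge, C
mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

# Drift velocity for 1 A in a 2 mm diameter wire: v = I / (n * A * q)
I = 1.0
area = math.pi * (1e-3) ** 2
v_drift = I / (n * area * q)
print(f"drift velocity: {v_drift * 100 * 3600:.1f} cm/hour")   # roughly 8 cm/hour

# Phase velocity of a TEM wave inside a good conductor: v = sqrt(2*omega/(mu*sigma))
omega = 2 * math.pi * 60.0
v_wave = math.sqrt(2 * omega / (mu0 * sigma))
print(f"wave velocity in copper at 60 Hz: {v_wave:.1f} m/s")   # roughly 3.2 m/s
```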
Speed of electricity
[ "Physics" ]
1,340
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
8,765,284
https://en.wikipedia.org/wiki/Channel%20memory
An automatic channel memory system (ACMS) is a system in which a digitally controlled radio tuner such as a TV set or VCR could search and memorize TV channels automatically. While more common in television, it can also be used to store presets for radio stations. This is often called a channel scan, though that may also refer to a "preview" mode which plays each station it finds for a few seconds and then moves on to the next, without affecting memory. Channel scanning A typical TV device allows an automatic channel scan to be performed from a menu accessed by a button on the TV set, or sometimes only on the remote control. This applied first to analog TV sets — sometimes those with digital LED displays, or later always those with on-screen displays. These simply searched for the video carrier signal on every channel. (Before the advent of ACMS, many sets would search for the next channel every time it was changed.) It now also applies to digital TV, which must not only find the signal itself, but also decode its metadata enough to remap channel numbers to their proper locations. In the case of the American ATSC system, the ATSC tuner uses PSIP metadata to do this. The internal channel map for digital TV stations is different from the presets or "favorites" that the user has programmed. Just as with analog TV (which worked only by turning a preset on or off for each station/channel), users of digital television adapters and other similar tuners can choose to ignore channels that are still in the channel map. Analog station presets and digital channel maps are normally deleted when a new scan is started. On some tuners, digital channel maps can be added-to with an "easy-add" channel scan, which is useful for finding new stations without losing old ones that may be weak or currently off-air, or not aimed-at with an antenna rotator or other set-top TV antenna adjustment. If a station adds a digital subchannel, most digital TV tuners will find it automatically as soon as the user turns to another channel that is carried by that station, adding it to the channel map. Many will also automatically add a new digital station's subchannels by tuning manually to the station's physical channel, though if this conflicts with the virtual channel number of another station, a complete re-scan may be the only solution. (Choosing an unused subchannel number [i.e. 30.99] on that major channel number [i.e. 30] may avoid the remap on existing subchannels [i.e. 30.1] and force the tuner to listen on that physical channel.) This has often happened in the U.S., where stations (especially LPTV) find it easiest to place their digital operations on a vacated analog channel. The same problem also occurs when the same station moves its digital transmission back to its old analog channel. There is no way to delete a station from the internal channel map, and a re-scan may be needed; or if the new physical channel is found, it may leave the old mapping in place, causing duplicate channels that cannot be accessed through direct entry of the numbers without also pressing the channel-up/down buttons. Many modern TV sets do an ACMS scan automatically as part of the setup process the user is guided through when the device is initially plugged-in. While early sets often lost their memory in a power outage or by otherwise being disconnected from mains electricity, all of them now have non-volatile RAM or a backup of some sort. References Television technology
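A digital tuner's channel map can be thought of as a mapping from virtual major.minor numbers to physical RF channels. The toy sketch below is entirely hypothetical (it is not any tuner's actual firmware logic), but it shows why a full re-scan discards old entries while an "easy-add" scan merely merges new ones, and how a moved station can leave a stale duplicate behind.

```python
# Toy model of a digital TV channel map: virtual "major.minor" -> physical RF channel.
def full_rescan(found: dict) -> dict:
    """A full scan discards the old map and keeps only what was just found."""
    return dict(found)

def easy_add(channel_map: dict, found: dict) -> dict:
    """An easy-add scan merges newly found channels, keeping weak or off-air entries."""
    merged = dict(channel_map)
    merged.update(found)   # new or moved stations overwrite their old mappings
    return merged

stored = {"30.1": 19, "30.2": 19, "7.1": 7}   # previously scanned stations
latest = {"30.1": 36, "45.1": 36}             # station 30 moved; 45 is new

print(full_rescan(latest))         # 7.1 is lost entirely
print(easy_add(stored, latest))    # 7.1 kept, 30.1 remapped, 45.1 added, 30.2 left stale
```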
Channel memory
[ "Technology" ]
748
[ "Information and communications technology", "Television technology" ]
8,765,524
https://en.wikipedia.org/wiki/Cyanogen%20halide
A cyanogen halide is a molecule consisting of cyanide and a halogen. Cyanogen halides are chemically classified as pseudohalogens. The cyanogen halides are a group of chemically reactive compounds which contain a cyano group (-CN) attached to a halogen element, such as fluorine, chlorine, bromine or iodine. Cyanogen halides are colorless, volatile, lacrimatory (tear-producing) and highly poisonous compounds. Production Halogen cyanides can be obtained by the reaction of halogens with metal cyanides, MCN + X2 → XCN + MX, or by the halogenation of hydrocyanic acid, HCN + X2 → XCN + HX (M = metal, X = halogen). Cyanogen fluoride can be obtained by thermal decomposition of cyanuric fluoride. Properties Halogen cyanides are stable at normal pressure below 20 °C and in the absence of moisture or acids. In the presence of free halogens or Lewis acids they easily polymerize to cyanuric halides, for example cyanogen chloride to cyanuric chloride. They are very toxic and tear-inducing (lachrymatory). Cyanogen chloride melts at −6 °C and boils at about 13 °C. Bromine cyanide melts at 52 °C and boils at 61 °C. Iodine cyanide sublimes at normal pressure. Cyanogen fluoride boils at −46 °C and polymerizes at room temperature to cyanuric fluoride. In some of their reactions they resemble halogens. The hydrolysis of cyanogen halides takes place in different ways depending on the electronegativity of the halogen and the resulting different polarity of the X-C bond. Cyanogen fluoride is a gas produced by heating cyanuric fluoride. Cyanogen chloride is a liquid produced by reacting chlorine with hydrocyanic acid. Biomedical effects and metabolism of cyanogen halides Cyanide is naturally present in human tissues in very small quantities. It is metabolized by rhodanese, a liver enzyme, at a rate of approximately 17 μg/kg·min. Rhodanese catalyzes the irreversible reaction of cyanide with sulfane sulfur to form thiocyanate, which is non-toxic and can be excreted in the urine. Under normal conditions, the availability of sulfane, which acts as the sulfur-donating substrate for rhodanese, is the limiting factor. Sulfur can be administered therapeutically as sodium thiosulfate to accelerate the reaction. A lethal dose of cyanide is time-dependent because of the body's ability to detoxify and excrete small amounts of cyanide through rhodanese-catalyzed sulfur transfer. If an amount of cyanide is absorbed slowly, rhodanese may be able to render it non-toxic through catalysis to thiocyanate, whereas the same amount administered over a short period of time may be lethal. Use Halogen cyanides, in particular cyanogen chloride and cyanogen bromide, are important starting materials for the incorporation of the cyano group, the production of other carbonic acid derivatives and heterocycles. It has been suggested that cyanogen chloride be used by the military as poison gas. Cyanogen bromide is a solid that is prepared by reacting bromine with hydrocyanic acid salts; it has been used as a chemical pesticide against insects and rodents and as a reagent for the study of protein structure. Cyanogen halides have been found to act as electrolytes in liquid solvents such as sulfur dioxide, arsenous chloride, and sulfuryl chloride. See also Cyanogen fluoride Cyanogen chloride Cyanogen bromide Cyanogen iodide References Halides Triatomic molecules Cyano compounds Pseudohalogens
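The time dependence described above follows from simple rate arithmetic; the sketch below only scales the quoted detoxification rate of about 17 μg/kg·min by body mass and elapsed time. The 70 kg body mass is an assumption, and this is illustrative arithmetic only, not a toxicological or medical model.

```python
RATE_UG_PER_KG_MIN = 17.0   # approximate rhodanese-mediated detoxification rate

def cyanide_cleared_mg(body_mass_kg: float, minutes: float) -> float:
    """Rough upper bound on cyanide (mg) detoxified in a given time,
    assuming the quoted rate and ignoring other limits such as sulfane availability."""
    return RATE_UG_PER_KG_MIN * body_mass_kg * minutes / 1000.0

# For an assumed 70 kg adult, far more can be processed when absorption is slow:
for minutes in (1, 60, 240):
    print(f"{minutes:>4} min: ~{cyanide_cleared_mg(70, minutes):.1f} mg potentially cleared")
```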
Cyanogen halide
[ "Physics", "Chemistry" ]
811
[ "Pseudohalogens", "Inorganic compounds", "Molecules", "Triatomic molecules", "Matter" ]
8,766,186
https://en.wikipedia.org/wiki/Martin%20Head-Gordon
Martin Philip Head-Gordon (né Martin Philip Head) is a professor of chemistry at the University of California, Berkeley, and Lawrence Berkeley National Laboratory working in the area of computational quantum chemistry. He is a member of the International Academy of Quantum Molecular Science. Education A native of Australia, Head-Gordon received his Bachelor of Science and Master of Science degrees from Monash University, followed by a PhD from Carnegie Mellon University working under the supervision of John Pople developing a number of useful techniques including the Head-Gordon-Pople scheme for the evaluation of integrals, and the orbital rotation picture of orbital optimization. Career and research At Berkeley, Martin supervises a group interested in pairing methods, local correlation methods, dual-basis methods, scaled MP2 methods, new efficient algorithms, and very recently corrections to the Kohn-Sham density functional framework. Broadly speaking, wavefunction based methods are the focus of his research. Head-Gordon is one of the founders of Q-Chem Inc. Awards and honors In 2015, Head-Gordon was elected a Member of the National Academy of Sciences. References Living people Members of the International Academy of Quantum Molecular Science Australian emigrants to the United States Carnegie Mellon University alumni UC Berkeley College of Chemistry faculty Fellows of the American Academy of Arts and Sciences 1962 births Computational chemists Theoretical chemists Schrödinger Medal recipients
Martin Head-Gordon
[ "Chemistry" ]
274
[ "Quantum chemistry", "Physical chemists", "Computational chemists", "Theoretical chemistry", "Computational chemistry", "Theoretical chemists" ]
8,766,693
https://en.wikipedia.org/wiki/Exploratory%20programming
Exploratory programming, as opposed to implementation (programming), is an important part of the software engineering cycle: when a domain is not very well understood or open-ended, or it's not clear what algorithms and data structures might be needed for an implementation, it's useful to be able to interactively develop and debug a program without having to go through the usual constraints of the edit-compile-run-debug cycle. Languages such as APL, Cecil, Clojure, C#, Dylan, Factor, Forth, F#, J, Java, Julia, Lisp, Mathematica, Obliq, Oz, Prolog, Python, REBOL, Perl, R, Ruby, Scala, Self, Smalltalk, Tcl, and JavaScript, often in conjunction with an IDE, provide support for exploratory programming via interactivity, dynamicity, and extensibility. Formal specification versus exploratory programming For some software development projects, it makes sense to do a requirements analysis and a formal specification. For other software development projects, it makes sense to let the developers experiment with the technology and let the specification of the software evolve depending upon the exploratory programming. Similarity to Breadboarding A similar method of exploration is used in electronics development, called Breadboarding, in which various combinations can quickly be tried and revised, accepting the tradeoff that the result is definitely temporary in nature. See also Live coding Software Prototyping Notes References Programming paradigms User interface techniques
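As a minimal sketch of the working style (the JSON payload and variable names are invented for illustration), the lines below are the kind of one-at-a-time probes a programmer might type at an interactive Python prompt, collected here as a script.

```python
import json

# In exploratory programming these lines would typically be typed one at a time
# at an interactive prompt, inspecting each intermediate result before deciding
# what to do next; the eventual program structure emerges from what is learned.
payload = '{"users": [{"name": "Ada", "logins": 3}, {"name": "Bob", "logins": 0}]}'

data = json.loads(payload)
print(type(data), list(data.keys()))    # step 1: what shape is this data?
print(data["users"][0])                 # step 2: inspect one record
active = [u["name"] for u in data["users"] if u["logins"] > 0]
print(active)                           # step 3: a throwaway query that may later
                                        # harden into a real function with tests
```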
Exploratory programming
[ "Engineering" ]
316
[ "Software engineering", "Software engineering stubs" ]
8,767,075
https://en.wikipedia.org/wiki/Pre-sunrise%20and%20post-sunset%20authorization
In USA AM broadcasting, presunrise authorization (PSRA) and postsunset authorization (PSSA) are permission from the Federal Communications Commission to broadcast in AM on mediumwave using a power level higher than what would normally be permitted prior to sunrise/after sunset, or in the latter case, provide Class D stations with service into the evening where they would otherwise be required to sign off. Sunrise and sunset times are provided on the licensee's basic instrument of authorization. The power level for both PSRA and PSSA service cannot exceed 500 watts. Power calculations are based on co-channel stations. PSRA At 6:00am local time, stations may power up using the station's daytime antenna (if applicable). Daylight saving Provided the locale participates in daylight saving time, special provisions must be made since the PSRA time is based on local time. The exact wording of the rule states: Between the first Sunday in April and the end of the month of April, Class D stations will be permitted to conduct pre-sunrise operation beginning at 6 a.m. local time with a maximum power of 500 watts (not to exceed the station's regular daytime or critical hours power)... For example, if the instrument of authorization states sunrise as being at 5:30am local standard time in April, the station does not require PSRA operation since sunrise time is prior to the 6:00am rule. When the clocks advance, this becomes 6:30am local time. PSRA will permit the station to power up at 5:00am standard time, since that would be 6:00am advanced time and local time would reflect the advanced time. As of 2007 (when the new US daylight saving times went into effect), the FCC instructed licensees to use the April advanced times when DST goes into effect in March. PSSA At sunset, Class D stations must sign off if they do not possess a nighttime license. PSSA operation allows the station to remain on the air an additional two hours at reduced power level determined by several factors: International boundaries Class A Clear-channel stations Whether the station is on a Regional channel Daylight saving There are no specific provisions related to daylight saving time within PSSA operation. Exceptions PSSA operation must cease at local sunset time for the closest co-channel Class A located west of the Class D station. Class D stations west of a co-channel Class A do not qualify. History The first presunrise authorizations came from a proposed rulemaking in 1967 (Operation by Standard Broadcast Stations, 8 FCC 2d 698 (1967)). There were major concerns of skywave interference to clear channel stations, so only a handful of stations were permitted to apply. On February 25, 1981, the FCC determined that there were no detrimental effects to clear-channel stations in remote areas, therefore, they permitted even more stations to apply for authorization. Current authorization Applications for PSRA and PSSA operation are no longer required. The licensee must merely notify the FCC. External links 47 CFR 73.99 (Official) FCC Radio and Television Rules 47 CFR Part 73 (Official) FCC Rules and Regulations (unofficial) Broadcast engineering
Pre-sunrise and post-sunset authorization
[ "Engineering" ]
647
[ "Broadcast engineering", "Electronic engineering" ]
8,767,347
https://en.wikipedia.org/wiki/Instrument%20of%20authorization
The United States Federal Communications Commission uses the term instrument of authorization with its broadcast licensees. It may refer to: Any license or permit issued Information relating to items on the station license Copies of the original documents In most cases, licensees are required to post the instrument of authorization at the primary point of control. The public is permitted to see such authorizations and licensees are required to give the public access to those documents. References External links FCC Radio and Television Broadcast Rules, 47 CFR Part 73 (Official) FCC Rules and Regulations (unofficial) Broadcast engineering
Instrument of authorization
[ "Engineering" ]
114
[ "Broadcast engineering", "Electronic engineering" ]
8,767,432
https://en.wikipedia.org/wiki/Hemiepiphyte
A hemiepiphyte is a plant that spends part of its life cycle as an epiphyte. The seeds of primary hemiepiphytes germinate in the canopy and initially live epiphytically. They send roots downward, and these roots eventually make contact with the ground. Secondary hemiepiphytes are root-climbers that begin as rooted vines growing upward from the forest floor, but later break their connection to the ground. When this happens, they may send down long roots to the ground. Strangler figs are hemiepiphytic – they may begin life as epiphytes but after making contact with the ground they encircle their host tree and "strangle" it. This usually results in the death of the host tree, either through girdling or through competition for light. Strangler figs can also germinate and develop as independent trees, not reliant on the support of a host. References Ecology terminology Epiphytes Plant morphology Highlands Rainforests of Africa
Hemiepiphyte
[ "Biology" ]
214
[ "Plant morphology", "Ecology terminology", "Plants" ]
8,768,674
https://en.wikipedia.org/wiki/Ethanedisulfonic%20acid
Ethanedisulfonic acid is a diprotic sulfonic acid, with pKa values of -1.46 and -2.06, making it a very strong acid. When used in pharmaceutical formulations, the salts with the active ingredient are known as edisylates. See also Methanedisulfonic acid 1,3-Propanedisulfonic acid Isethionic acid References Sulfonic acids 1,2-Ethanediyl compounds
Ethanedisulfonic acid
[ "Chemistry" ]
97
[ "Functional groups", "Organic compounds", "Sulfonic acids", "Organic compound stubs", "Organic chemistry stubs" ]
8,768,741
https://en.wikipedia.org/wiki/London%20Design%20Festival
London Design Festival is a citywide cultural event that takes place over nine days every September across London. It was founded by John Sorrell and Ben Evans in 2003 and celebrated its 22nd edition in September 2024. In an article by Wallpaper, the festival chairman stated, "We consciously founded the London Design Festival to be public-spirited. Over the last 20 years, the Festival has had incredible depth of penetration and success in bringing people together and distilling new ideas." About The inaugural edition of the London Design Festival took place from the 20 to 28 September 2003, "bringing together 90 speakers in over 60 events throughout the capital". "In 2017, the Festival welcomed an estimated audience of 420,000 visitors". In 2019, it attracted an audience of over 600,000 visitors from over 75 countries. Over 2,000 design businesses participate each year, including brands and universities. The Festival comprises over 400 events and exhibitions staged by over 300 partner organisations across the design spectrum and from around the world. The Festival also commissions and curates its program of Landmark Projects, Projects at the Victoria and Albert Museum (V&A) and Special Commissions throughout the city. The Festival also has events including its thought-leadership programme the Global Design Forum, talks, keynotes, daily tours, and workshops. In 2019 it had 50 speakers from 18 countries and 2,800 visitors. Landmark Projects The Festival commissions and curates large-scale installations across the city in indoor and outdoor locations. The installations are developed and shown during the Festival, with many later being shown in other cities or locations in the following months or years. Working with businesses and designers, previous Landmark Projects have included Sclera by David Adjaye (2008), Endless Stair by Alex de Rijke (2013), The Smile by Alison Brooks Architects (2016), Medusa by Tin Drum and Sou Fujimoto (2021), INTO SIGHT by Sony Design (2022), and Sabine Marcelis's swivelling stone chairs on St Giles Square (2022). Location Since 2009, the Victoria and Albert Museum has been the central hub for the London Design Festival, celebrating fourteen years of partnership in 2022. It has been called the "true epicentre" of the festival. Museum director Tristram Hunt said that the “London Design Festival occupies a vital role in London's thriving design sector, reaffirming London's position as one of the world's leading global design capitals.” In 2022, twelve Design Districts across London participated – Bankside, Brompton, Pimlico Road, Clerkenwell, King's Cross, Design District (Greenwich Peninsula), Mayfair, Shoreditch, Islington, Park Royal, William Morris Design Line and Southwark. Other districts have participated in previous editions including Paddington Central, West Kensington, Marylebone, and Chelsea. Awards Each year a jury composed of established designers, industry commentators and previous winners choose recipients of the London Design Medals across four categories. Winners are chosen from a wide range of design disciplines and awarded for their contribution to their field. Festival Director Ben Evens stated “While there is no shortage of design awards, we wanted to do it differently. So we took the Nobel Prize route – there's no shortlist, just a winner. So that means there's no losers either.” The London Design Medal is designed each year by jewellery designer Hannah Martin. The Medals feature a London bird, the Cockney Sparrow, in flight. 
The London Design Medal categories London Design Medal: The highest accolade bestowed upon an individual who has distinguished themselves within the industry and demonstrated consistent design excellence. Design Innovation Medal: Celebrates entrepreneurship in all its forms, both locally and internationally. It honours an individual for whom design lies at the core of their development and success. Emerging Talent Medal: Recognises an impact made on the design scene within five or so years of graduation. Lifetime Achievement Medal: Honours a significant and fundamental contribution to the design industry throughout a career. Previous medal winners Sir Ken Adam, Lifetime Achievement Medal (2015) Sir David Adjaye, London Design Medal (2016) Pooja Agrawal, Design Innovation Medal (2023) Paola Antonelli, London Design Medal (2020) Ron Arad, London Design Medal (2011) Ross Atkin, Emerging Design Medal (2019) Edward Barber and Jay Osgerby, London Design Medal (2015) Grace Wales Bonner, Emerging Design Medal (2018) Ronan and Erwan Bouroullec, London Design Medal (2014) Peter Brewin and Will Crawford, Design Innovation Medal (2015) Margaret Calvert, Lifetime Achievement Medal, (2017) Hussein Chalayan, Panerai London Design Medal (2018) Daniel Charny, Design Innovation Medal (2019) Natsai Audrey Chieza, Design Innovation Medal (2024) Mac Collins, Emerging Design Medal (2021) Sir Terence Conran, Lifetime Achievement Medal (2012) David Constantine, Design Entrepreneur Medal (2013) Ilse Crawford, The London Design Medal (2021) Es Devlin, Panerai London Design Medal (2017) Tom Dixon, London Design Medal (2019) Ken Garland, Lifetime Achievement Medal (2020) Alexandra Daisy Ginsberg, Emerging Design Medal (2012) Kenneth Grange, Lifetime Achievement Medal (2016) Dame Zaha Hadid, London Design Medal (2007) Thomas Heatherwick, London Design Medal (2010) Harry Blackiston Houston, Emerging Design Medal (2024) Rosario Hurtado and Robert Feo, London Design Medal (2012) Yinka Ilori, Emerging Design Medal (2020) Eva Jiricna, Lifetime Achievement Medal (2018) Indy Johar, Design Innovation Medal (2022) Rei Kawakubo, Lifetime Achievement Medal (2024) Hanif Kara, London Design Medal (2023) Roland Lamb, Emerging Design Medal (2014) Joycelyn Longdon, Emerging Design Medal (2022) Dame Ellen MacArthur, Design Innovation Medal (2020) Sir Don McCullin, Lifetime Achievement Medal (2022) Pat McGrath, London Design Medal (2024) Dame Magdalene Odundo, Lifetime Achievement Medal (2023) Julian Melchiorri, Emerging Design Medal (2017) Marc Newson, London Design Medal (2008) Jane Ni Dhulchaointigh, Design Entrepreneur Medal (2012) Neri Oxman, Design Innovation Medal (2018) Sandy Powell, London Design Medal (2022) Paul Priestman, Design Innovation Medal (2017) POoR Collective, Emerging Design Medal (2023) Dieter Rams, Lifetime Achievement Medal (2013) Lord Richard Rogers, Lifetime Achievement Medal (2014) Nicolas Roope, Design Innovation Medal (2014) Daan Roosegaard, Design Innovation Medal (2016) , Emerging Design Medal (2013) Vidal Sassoon, Lifetime Achievement Medal (2011) Peter Saville, London Design Medal (2013) Sir Paul Smith, London Design Medal (2009) Marjan Van Aubel, Emerging Design Medal (2015) Eyal Weizman, Design Innovation Medal (2021) Dame Vivienne Westwood, Lifetime Achievement Medal (2019) Michael Wolf, Lifetime Achievement Medal (2021) Bethan Laura Wood, Emerging Talent Medal (2016) See also British Council Crafts Council Cultural diplomacy List of design awards References External links London Design Biennale 
(related, similar event) Design Festival British design exhibitions Festivals established in 2003 Design events Annual events in London Victoria and Albert Museum Design awards
London Design Festival
[ "Engineering" ]
1,499
[ "Design", "Design events", "Design awards" ]
8,768,899
https://en.wikipedia.org/wiki/Obstructionism
Obstructionism is the practice of deliberately delaying or preventing a process or change, especially in politics. In politics Obstructionism or policy of obstruction denotes the deliberate interference with the progress of a legislation by various means such as the filibuster which consists of extending the debate upon a proposal in order to delay or completely prevent a vote on its passage. Another form of parliamentary obstruction practiced in the United States and other countries is called "slow walking". It specifically refers to the extremely slow speed with which legislators walk to the podium to cast their ballots. For example, in Japan this tactic is known as a "cow walk", and in Hawaii it is known as a "Devil's Gambit". Consequently, slow walking is also used as a synonym for obstructionism itself. Obstructionism can also take the form of widespread agreement to oppose policies from the other side of a political debate or dispute. Notable obstructionists John O'Connor Power, Joe Biggar, Frank Hugh O'Donnell, and Charles Stewart Parnell, Irish nationalists; all were famous for making long speeches in the British House of Commons. In a letter to Cardinal Cullen, 6 August 1877, The O'Donoghue, MP for County Kerry, denounced the obstruction policy: "It is Fenianism in a new form." The tactic deadlocked legislation and 'the autumn Session of 1882 was entirely devoted to the reform of the Rules of Procedure with a view to facilitating the despatch of business.' Sir Leslie Ward's "Spy" cartoon of John O'Connor Power appeared in Vanity Fairs "Men of the Day" series, 25 December 1886, and was captioned "the brains of Obstruction". Mitch McConnell, a United States Senator, has been described as an obstructionist for his filibusters of federal judge nominations. He has referred to himself as the "Grim Reaper" of the Democratic agenda. Mass media In September 2010, Jon Stewart of The Daily Show announced the Rally to Restore Sanity and/or Fear, an event dedicated to ending political obstructionism in American mass media. As workplace aggression An obstructionist causes problems. Neuman and Baron (1998) identify obstructionism as one of the three dimensions that encompass the range of workplace aggression. In this context, obstructionism refers to "behaviors intended to hinder an employee from performing their job or the organization from accomplishing its objectives". See also Filibuster Government failure Justice delayed is justice denied Judicial reform Judicial review Jury tampering Law reform Liberum veto Political corruption References Further reading Görne, Frank (2020). Die Obstruktionen in der Römischen Republik [Obstructions in the Roman Republic]. Historia Einzelschriften, vol. 264. Stuttgart: Franz Steiner, (with a general classification of obstructions). Human behavior Political philosophy Abuse of the legal system
Obstructionism
[ "Biology" ]
582
[ "Behavior", "Human behavior" ]
8,769,106
https://en.wikipedia.org/wiki/Specific-pathogen-free
Specific-pathogen-free (SPF) is a term used for laboratory animals that are guaranteed free of particular pathogens. Use of SPF animals ensures that specified diseases do not interfere with an experiment. For example, absence of respiratory pathogens such as influenza is desirable when investigating a drug's effect on lung function. Practical Completely germ-free The animals can be born through a caesarian section then special care taken so the newborn does not acquire infections, such as use of sterile isolation units with a positive pressure differential to keep all outside air and pathogens from entering. Everything that needs to be inserted into the isolator, such as food, water and equipment needs to be completely sterilized and disinfected, and inserted through an airlock that can be disinfected before opening from the inside. A disadvantage is that any contact with pathogens may be fatal. This is because the animals have no protective bacterial microbiota on the skin or in the intestine or respiratory tract, and because they have no natural immunity to common infections as they have never been exposed to them. Specific-pathogen-free To certify SPF, the population is checked for presence of (antibodies against) the specified pathogens. For SPF eggs the specific pathogens are: Avian Adenovirus Group I, Avian Adenovirus Group II (HEV), Avian Adenovirus Group III (EDS), Avian Encephalomyelitis, Avian Influenza (Type A), Avian Nephritis Virus, Avian Paramyxovirus Type 2, Avian Reovirus S 1133, Avian Rhinotracheitis Virus; Avian Rotavirus; Avian Tuberculosis M. avium; Chicken Anemia Virus; Endogenous GS Antigen; Fowl Pox; Hemophilus paragallinarum Serovars A, B, C; Infectious Bronchitis - Ark; Infectious Bronchitis - Conn; Infectious Bronchitis - JMK; Infectious Bronchitis - Mass; Infectious Bursal Disease Type 1; Infectious Bursal Disease Type 2; Infectious Laryngotracheitis; Lymphoid Leukosis A, B; Avian Lymphoid Leukosis Virus; Lymphoid Leukosis Viruses A, B, C, D, E, J; Marek's Disease (Serotypes 1,2, 3); Mycoplasma gallisepticum; Mycoplasma synoviae; Newcastle Disease LaSota; Reticuloendotheliosis Virus; Salmonella pullorum-gallinarum; Salmonella species; Minimal disease status When by accident some infection does occur, the population is said to have minimal disease status. Monitoring The population is regularly checked to ensure the status still holds. Applications SPF eggs can be used to make vaccines. Mice raised under SPF conditions (no Helicobacter pylori) were shown to develop colitis rather than enterocolitis. See also Filtered Air Positive Pressure Gnotobiotic animal References Animal testing Animal models
Specific-pathogen-free
[ "Chemistry", "Biology" ]
646
[ "Animal testing", "Animal models", "Model organisms" ]
8,769,160
https://en.wikipedia.org/wiki/List%20of%20dump%20truck%20manufacturers
List of dump truck manufacturers. Tractor units Ashok Leyland BelAZ BEML India Ltd CAMC Chevy DAF CF, 95 XF Davis Trailer & Equipment Dongfeng Liuqi Ford GAZ GMC HEPCO Hino Motors (Toyota) Leader Trucks International Isuzu Iveco Eurotrakker, Eurostar KamAZ Kenworth Kress Corporation:200CII. Mack MAN MAZ Mahindra Mercedes-Benz Actros, Axor Micro-Vett Mitsubishi Fuso Truck and Bus Corporation MoAZ Peterbilt Perlini Renault Trucks ROMAN Scania AB 220, 340, 360, 460 SISU ST Kinetics Sterling Trucks Tata Motors Tata Daewoo TBEI Tonar UD Ural Automotive Plant Volvo FH12 and Volvo FM12 Semi-trailer Schmitz Cargobull Seri Zenith Voltrailer Wielton See also Dump truck Tractor unit Semi-trailer and semi-trailer truck List of American truck manufacturers External links Manufacturers Dump truck Truck-related lists Dump truck
List of dump truck manufacturers
[ "Engineering" ]
203
[ "Engineering vehicles", "Dump trucks" ]
8,769,719
https://en.wikipedia.org/wiki/Fast%20low%20angle%20shot%20magnetic%20resonance%20imaging
Fast low angle shot magnetic resonance imaging (FLASH MRI) is a particular sequence of magnetic resonance imaging. It is a gradient echo sequence which combines a low-flip angle radio-frequency excitation of the nuclear magnetic resonance signal (recorded as a spatially encoded gradient echo) with a short repetition time. It is the generic form of steady-state free precession imaging. Different manufacturers of MRI equipment use different names for this experiment. Siemens uses the name FLASH, General Electric used the name SPGR (Spoiled Gradient Echo), and Philips uses the name CE-FFE-T1 (Contrast-Enhanced Fast Field Echo) or T1-FFE. Depending on the desired contrast, the generic FLASH technique provides spoiled versions that destroy transverse coherences and yield T1 contrast as well as refocused versions (constant phase per repetition) and fully balanced versions (zero phase per repetition) that incorporate transverse coherences into the steady-state signal and offer T1/T2 contrast. Physical basis The physical basis of MRI is the spatial encoding of the nuclear magnetic resonance (NMR) signal obtainable from water protons (i.e. hydrogen nuclei) in biologic tissue. In terms of MRI, signals with different spatial encodings that are required for the reconstruction of a full image need to be acquired by generating multiple signals – usually in a repetitive way using multiple radio-frequency excitations. The generic FLASH technique emerges as a gradient echo sequence which combines a low-flip angle radio-frequency excitation of the NMR signal (recorded as a spatially encoded gradient echo) with a rapid repetition of the basic sequence. The repetition time is usually much shorter than the typical T1 relaxation time of the protons in biologic tissue. Only the combination of (i) a low-flip angle excitation which leaves unused longitudinal magnetization for an immediate next excitation with (ii) the acquisition of a gradient echo which does not need a further radio-frequency pulse that would affect the residual longitudinal magnetization, allows for the rapid repetition of the basic sequence interval and the resulting speed of the entire image acquisition. In fact, the FLASH sequence eliminated all waiting periods previously included to accommodate effects from T1 saturation. FLASH reduced the typical sequence interval to what is minimally required for imaging: a slice-selective radio-frequency pulse and gradient, a phase-encoding gradient, and a (reversed) frequency-encoding gradient generating the echo for data acquisition. For radial data sampling, the phase- and frequency-encoding gradients are replaced by two simultaneously applied frequency-encoding gradients that rotate the Fourier lines in data space. In either case, repetition times are as short as 2 to 10 milliseconds, so that the use of 64 to 256 repetitions results in image acquisition times of about 0.1 to 2.5 seconds for a two-dimensional image. Most recently, highly undersampled radial FLASH MRI acquisitions have been combined with an iterative image reconstruction by regularized nonlinear inversion to achieve real-time MRI at a temporal resolution of 20 to 30 milliseconds for images with a spatial resolution of 1.5 to 2.0 millimeters. This method allows for a visualization of the beating heart in real time – without synchronization to the electrocardiogram and during free breathing. 
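The steady-state signal of a spoiled (T1-weighted) gradient echo is commonly written as S ∝ sin(α)(1 − E1)/(1 − E1·cos(α)) with E1 = exp(−TR/T1), and it peaks at the Ernst angle αE = arccos(E1). The short sketch below evaluates this textbook expression; the TR of 5 ms and T1 of 1000 ms are illustrative assumptions rather than values taken from the article, and the T2* and proton-density factors are omitted.

```python
import math

def spoiled_gre_signal(alpha_rad: float, tr: float, t1: float) -> float:
    """Relative steady-state signal of a spoiled gradient echo (T2*/M0 factors omitted)."""
    e1 = math.exp(-tr / t1)
    return math.sin(alpha_rad) * (1 - e1) / (1 - math.cos(alpha_rad) * e1)

# Illustrative parameters (assumed, not from the article): TR = 5 ms, T1 = 1000 ms
tr, t1 = 0.005, 1.0
ernst = math.degrees(math.acos(math.exp(-tr / t1)))
print(f"Ernst angle: {ernst:.1f} deg")   # about 5.7 degrees for these values

for deg in (2, 5, 6, 10, 30, 90):
    s = spoiled_gre_signal(math.radians(deg), tr, t1)
    print(f"flip angle {deg:>2} deg -> relative signal {s:.4f}")
```

For such short repetition times the optimum flip angle is only a few degrees, which is the quantitative reason the low-flip-angle excitation described above yields far more signal per unit time than a conventional 90° pulse.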
Applications Applications can include: cross-sectional images with acquisition times of a few seconds enable MRI studies of the thorax and abdomen within a single breathhold, dynamic acquisitions synchronized to the electrocardiogram generate movies of the beating heart, sequential acquisitions monitor physiological processes such as the differential uptake of contrast media into body tissues, three-dimensional acquisitions visualize complex anatomic structures (brain, joints) at unprecedented high spatial resolution in all three dimensions and along arbitrary view directions, and Magnetic resonance angiography (MRA) yields three-dimensional representations of the vasculature. History FLASH MRI was invented in 1985 by Jens Frahm, Axel Haase, W Hänicke, KD Merboldt, and D Matthaei (German Patent Application P 35 04 734.8, 12 February 1985) at the Max-Planck-Institut für biophysikalische Chemie in Göttingen, Germany. The technique is revolutionary in shortening MRI measuring times by up to two orders of magnitude. FLASH was very rapidly adopted commercially. RARE was slower, and echo-planar imaging (EPI) – for technical reasons – took even more time. Echo-planar imaging had been proposed by Mansfield's group in 1977, and the first crude images were shown by Mansfield and Ian Pykett in the same year. Roger Ordidge presented the first movie in 1981. Its breakthrough came with the invention of shielded gradients. The introduction of FLASH MRI sequences in diagnostic imaging for the first time allowed for a drastic shortening of the measuring times without a substantial loss in image quality. In addition, the measuring principle led to a broad range of completely new imaging modalities. In 2010, an extended FLASH method with highly undersampled radial data encoding and iterative image reconstruction achieved real-time MRI with a temporal resolution of 20 milliseconds (1/50th of a second). Taken together, this latest development corresponds to an acceleration by a factor of 10,000 compared to the MRI situation before 1985. In general, FLASH denoted a breakthrough in clinical MRI that stimulated further technical as well as scientific developments up to date. References External links Biomedizinische NMR Forschungs GmbH offers further detailed information about FLASH MRI and related applications (neurobiology, cardiovascular imaging) Press Release of the Max Planck Society http://www.mtbeurope.info/news/2010/1009005.htm Magnetic resonance imaging
Fast low angle shot magnetic resonance imaging
[ "Chemistry" ]
1,181
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
8,769,764
https://en.wikipedia.org/wiki/Cryochemistry
Cryochemistry is the study of chemical interactions at temperatures below . It is derived from the Greek word cryos, meaning 'cold'. It overlaps with many other sciences, including chemistry, cryobiology, condensed matter physics, and even astrochemistry. Cryochemistry has been a topic of interest since liquid nitrogen, which freezes at −210°C, became commonly available. Cryogenic-temperature chemical interactions are an important mechanism for studying the detailed pathways of chemical reactions by reducing the confusion introduced by thermal fluctuations. Cryochemistry forms the foundation for cryobiology, which uses slowed or stopped biological processes for medical and research purposes. Low temperature behaviours As a material cools, the relative motion of its component molecules/atoms decreases - its temperature decreases. Cooling can continue until all motion ceases, and its kinetic energy, or energy of motion, disappears. This condition is known as absolute zero and it forms the basis for the Kelvin temperature scale, which measures the temperature above absolute zero. Zero degrees Celsius (°C) coincides with 273 Kelvin. At absolute zero most elements become a solid, but not all behave as predictably as this; for instance, helium becomes a highly unusual liquid. The chemistry between substances, however, does not disappear, even near absolute zero temperatures, since separated molecules/atom can always combine to lower their total energy. Almost every molecule or element will show different properties at different temperatures; if cold enough, some functions are lost entirely. Cryogenic chemistry can lead to very different results compared with standard chemistry, and new chemical routes to substances may be available at cryogenic temperatures, such as the formation of argon fluorohydride, which is only a stable compound at or below . Methods of cooling One method that used to cool molecules to temperatures near absolute zero is laser cooling. In the Doppler cooling process, lasers are used to remove energy from electrons of a given molecule to slow or cool the molecule down. This method has applications in quantum mechanics and is related to particle traps and the Bose–Einstein condensate. All of these methods use a "trap" consisting of lasers pointed at opposite equatorial angles on a specific point in space. The wavelengths from the laser beams eventually hit the gaseous atoms and their outer spinning electrons. This clash of wavelengths decreases the kinetic energy state fraction by fraction to slow or cool the molecules down. Laser cooling has also been used to help improve atomic clocks and atom optics. Ultracold studies are not usually focused on chemical interactions, but rather on fundamental chemical properties. Because of the extremely low temperatures, diagnosing the chemical status is a major issue when studying low temperature physics and chemistry. The primary techniques in use today are optical - many types of spectroscopy are available, but these require special apparatus with vacuum windows that provide room temperature access to cryogenic processes. See also Thermochemistry Cryogenics Bose–Einstein condensate References Moskovits, M., and Ozin, G.A., (1976) Cryochemistry, J. Wiley & Sons, New York Dillinger, J. R. (1957). Low temperature physics & chemistry (edited by Joseph R. Dillinger.) Madison, Wisconsin: University of Wisconsin Press. Naduvalath, B. (2013). "Ultracold molecules." Phillips, W. D. (2012). "Laser cooling" Parpia, J. M., & Lee, D.M. (2012). 
"Absolute zero" Hasegawa, Y., Nakamura, D., Murata, M., Yamamoto, H., & Komine, T. (2010). "High-precision temperature control and stabilization using a cryocooler. Review of Scientific Instruments", doi:10.1063/1.3484192 External links Physical chemistry Thermochemistry Cryobiology Condensed matter physics Astrochemistry
Cryochemistry
[ "Physics", "Chemistry", "Materials_science", "Astronomy", "Engineering", "Biology" ]
805
[ "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Thermochemistry", "Astronomical sub-disciplines", "Phases of matter", "Cryobiology", "Materials science", "Astrochemistry", "Condensed matter physics", "nan", "Biochemistry", "Physical chemistry", "Matter"...
8,770,352
https://en.wikipedia.org/wiki/HD%20Tach
HD Tach is a software program for Microsoft Windows (2000 or XP) that tests and graphs the sequential read, random access and interface burst speeds of attached storage devices (hard drive, flash drive, removable drive etc.). Drive technologies such as SCSI, IDE/ATA, IEEE 1394, USB, SATA and RAID are supported. A prominent feature of the software was an included library of drive benchmarks, as well as the option to save your own drive's benchmarks locally or submit them to an online database. The company's website also had a forum with over 2000 user posts. On December 5, 2011, citing the lack of time to devote to the project, Simpli Software formally announced on its website that HD Tach had reached end-of-life and was no longer being supported. The domain has since expired. The latest version of this application (3.0.4.0) is not fully compatible with Windows Vista, Windows 7, or Windows 8. However, HD Tach works in these operating systems by running it in Windows XP SP2 or SP3 compatibility mode. HD Tach 2.70 is the last version to work on Windows NT 4.0. History HD Tach was originally developed by TCD Labs, Inc. In 2000 the company was acquired by Oak Technology, Inc. Simpli Software, Inc. was formed by the original group of TCD Labs employees and acquired all rights to the benchmarks and source code from Oak Technology in 2003. The domain name displayed in the software, simplisoftware.com, began resolving to a domain reseller landing page in November 2012. Bibliography References Benchmarks (computing) Storage software Utilities for Windows
HD Tach
[ "Technology" ]
352
[ "Benchmarks (computing)", "Computing comparisons", "Computer performance" ]
8,770,538
https://en.wikipedia.org/wiki/Shpolskii%20matrix
Shpolskii systems are low-temperature host–guest systems – they are typically rapidly frozen solutions of polycyclic aromatic hydrocarbons in suitable low molecular weight normal alkanes. The emission and absorption spectra of lowest energy electronic transitions in the Shpolskii systems exhibit narrow lines instead of the inhomogeneously broadened features normally associated with spectra of chromophores in liquids and amorphous solids. The effect was first described by Eduard Shpolskii in the 1950s and 1960's in the journals Transactions of the U.S.S.R. Academy of Sciences and Soviet Physics Uspekhi. Subsequent detailed studies of concentration and speed of cooling behavior of Shpolskii systems by L. A. Nakhimovsky and coauthors led to a hypothesis that these systems are metastable segregational solid solutions formed when one or more chromophores replace two or more molecules in the host crystalline lattice. The solid state quasi-equilibrium solubility in most Shpolskii systems is very low. When the Shpolskii effect is manifested, the solid state solubility increases two to three orders of magnitude. Isothermic annealing of the supersaturated rapidly frozen solutions of dibenzofuran in heptane was performed, and it was shown that the return of the metastable system to equilibrium in time reasonable for laboratory observation required the annealing temperature to be close to the melting temperature of the metastable frozen solution. Thus the Shpolskii systems are an example of a persistent metastable state. A good match between the chromophore and the host lattice leads to a uniform environment for all the chromophores and hence greatly reduces the inhomogeneous broadening of the electronic transition's pure electronic and vibronic lines. In addition to the weak inhomogeneous broadening of the transitions, the quasi-lines observed at very low temperatures are phonon-less transitions. Since phonons originate in the lattice, an additional requirement is weak chromophore-lattice coupling. Weak coupling increases the probability of phonon-less transitions and hence favors the narrow zero phonon lines. The weak coupling is usually expressed in terms of the Debye-Waller factor, where a maximum value of one indicates no coupling between the chromophore and the lattice phonons. The narrow lines characteristic of the Shpolskii systems are only observed at cryogenic temperatures because at higher temperatures many phonons are active in the lattice and all of the amplitude of the transition shifts to the broad phonon sideband. The original observation of the Shpolskii effect was made at liquid nitrogen temperature (77 kelvins), but using temperatures close to that of liquid helium (4.2 K) yields much sharper spectral lines and is the usual practice. Low molecular weight normal alkanes absorb light at energies higher than the absorption of all pi-pi electronic transitions of aromatic hydrocarbons. They interact weakly with the chromophores and crystallize when frozen. The length of the alkanes is often chosen to approximately match at least one of the dimensions of the chromophore, and are usually in the size range between n-pentane and n-dodecane. See also Zero-phonon line and phonon sideband Phonon References DOI Link External links Aleksander Rebane's site at Montana State University, Bozeman Hyperphysics: Molecular Spectra Spectroscopy Optical materials Condensed matter physics
Shpolskii matrix
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
732
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Phases of matter", "Materials science", "Materials", "Optical materials", "Condensed matter physics", "Spectroscopy", "Matter" ]
6,794
https://en.wikipedia.org/wiki/Comet%20Shoemaker%E2%80%93Levy%209
Comet Shoemaker–Levy 9 (formally designated D/1993 F2) was a comet that broke apart in July 1992 and collided with Jupiter in July 1994, providing the first direct observation of an extraterrestrial collision of Solar System objects. This generated a large amount of coverage in the popular media, and the comet was closely observed by astronomers worldwide. The collision provided new information about Jupiter and highlighted its possible role in reducing space debris in the inner Solar System. The comet was discovered by astronomers Carolyn and Eugene M. Shoemaker, and David Levy in 1993. Shoemaker–Levy 9 (SL9) had been captured by Jupiter and was orbiting the planet at the time. It was located on the night of March 24 in a photograph taken with the Schmidt telescope at the Palomar Observatory in California. It was the first active comet observed to be orbiting a planet, and had probably been captured by Jupiter around 20 to 30 years earlier. Calculations showed that its unusual fragmented form was due to a previous closer approach to Jupiter in July 1992. At that time, the orbit of Shoemaker–Levy 9 passed within Jupiter's Roche limit, and Jupiter's tidal forces had acted to pull the comet apart. The comet was later observed as a series of fragments ranging up to in diameter. These fragments collided with Jupiter's southern hemisphere between July 16 and 22, 1994 at a speed of approximately (Jupiter's escape velocity) or . The prominent scars from the impacts were more visible than the Great Red Spot and persisted for many months. Discovery While conducting a program of observations designed to uncover near-Earth objects, the Shoemakers and Levy discovered Comet Shoemaker–Levy 9 on the night of March 24, 1993, in a photograph taken with the Schmidt telescope at the Palomar Observatory in California. The comet was thus a serendipitous discovery, but one that quickly overshadowed the results from their main observing program. Comet Shoemaker–Levy 9 was the ninth periodic comet (a comet whose orbital period is 200 years or less) discovered by the Shoemakers and Levy, thence its name. It was their eleventh comet discovery overall including their discovery of two non-periodic comets, which use a different nomenclature. The discovery was announced in IAU Circular 5725 on March 26, 1993. The discovery image gave the first hint that comet Shoemaker–Levy 9 was an unusual comet, as it appeared to show multiple nuclei in an elongated region about 50 arcseconds long and 10 arcseconds wide. Brian G. Marsden of the Central Bureau for Astronomical Telegrams noted that the comet lay only about 4 degrees from Jupiter as seen from Earth, and that although this could be a line-of-sight effect, its apparent motion in the sky suggested that the comet was physically close to the planet. Comet with a Jovian orbit Orbital studies of the new comet soon revealed that it was orbiting Jupiter rather than the Sun, unlike all other comets known at the time. Its orbit around Jupiter was very loosely bound, with a period of about 2 years and an apoapsis (the point in the orbit farthest from the planet) of . Its orbit around the planet was highly eccentric (e = 0.9986). Tracing back the comet's orbital motion revealed that it had been orbiting Jupiter for some time. It is likely that it was captured from a solar orbit in the early 1970s, although the capture may have occurred as early as the mid-1960s. 
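A rough sense of why such a capture is possible comes from Jupiter's Hill sphere (discussed further below); the standard estimate of its radius, given here as general celestial-mechanics background rather than a figure from this article, is:

```latex
% Hill radius of a planet of mass m orbiting the Sun (mass M) on an orbit
% with semi-major axis a and eccentricity e.
\[
  r_{\mathrm{H}} \;\approx\; a\,(1 - e)\left(\frac{m}{3M}\right)^{1/3}
\]
% For Jupiter (a ~ 5.2 AU, e ~ 0.05, m/M ~ 9.5e-4) this gives
% r_H ~ 0.35 AU, roughly 5 x 10^7 km.
```

A comet drifting slowly past Jupiter near the edge of this region, as SL9 apparently did, can be pulled onto a very elongated orbit around the planet.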
Several other observers found images of the comet in precovery images obtained before March 24, including Kin Endate from a photograph exposed on March 15, Satoru Otomo on March 17, and a team led by Eleanor Helin from images on March 19. An image of the comet on a Schmidt photographic plate taken on March 19 was identified on March 21 by M. Lindgren, in a project searching for comets near Jupiter. However, as his team were expecting comets to be inactive or at best exhibit a weak dust coma, and SL9 had a peculiar morphology, its true nature was not recognised until the official announcement 5 days later. No precovery images dating back to earlier than March 1993 have been found. Before the comet was captured by Jupiter, it was probably a short-period comet with an aphelion just inside Jupiter's orbit, and a perihelion interior to the asteroid belt. The volume of space within which an object can be said to orbit Jupiter is defined by Jupiter's Hill sphere. When the comet passed Jupiter in the late 1960s or early 1970s, it happened to be near its aphelion, and found itself slightly within Jupiter's Hill sphere. Jupiter's gravity nudged the comet towards it. Because the comet's motion with respect to Jupiter was very small, it fell almost straight toward Jupiter, which is why it ended up on a Jove-centric orbit of very high eccentricity—that is to say, the ellipse was nearly flattened out. The comet had apparently passed extremely close to Jupiter on July 7, 1992, just over above its cloud tops—a smaller distance than Jupiter's radius of , and well within the orbit of Jupiter's innermost moon Metis and the planet's Roche limit, inside which tidal forces are strong enough to disrupt a body held together only by gravity. Although the comet had approached Jupiter closely before, the July 7 encounter seemed to be by far the closest, and the fragmentation of the comet is thought to have occurred at this time. Each fragment of the comet was denoted by a letter of the alphabet, from "fragment A" through to "fragment W", a practice already established from previously observed fragmented comets. More exciting for planetary astronomers was that the best orbital calculations suggested that the comet would pass within of the center of Jupiter, a distance smaller than the planet's radius, meaning that there was an extremely high probability that SL9 would collide with Jupiter in July 1994. Studies suggested that the train of nuclei would plow into Jupiter's atmosphere over a period of about five days. Predictions for the collision The discovery that the comet was likely to collide with Jupiter caused great excitement within the astronomical community and beyond, as astronomers had never before seen two significant Solar System bodies collide. Intense studies of the comet were undertaken, and as its orbit became more accurately established, the possibility of a collision became a certainty. The collision would provide a unique opportunity for scientists to look inside Jupiter's atmosphere, as the collisions were expected to cause eruptions of material from the layers normally hidden beneath the clouds. Astronomers estimated that the visible fragments of SL9 ranged in size from a few hundred metres (around ) to across, suggesting that the original comet may have had a nucleus up to across—somewhat larger than Comet Hyakutake, which became very bright when it passed close to the Earth in 1996. 
One of the great debates in advance of the impact was whether the effects of the impact of such small bodies would be noticeable from Earth, apart from a flash as they disintegrated like giant meteors. The most optimistic prediction was that large, asymmetric ballistic fireballs would rise above the limb of Jupiter and into sunlight to be visible from Earth. Other suggested effects of the impacts were seismic waves travelling across the planet, an increase in stratospheric haze on the planet due to dust from the impacts, and an increase in the mass of the Jovian ring system. However, given that observing such a collision was completely unprecedented, astronomers were cautious with their predictions of what the event might reveal. Impacts Anticipation grew as the predicted date for the collisions approached, and astronomers trained terrestrial telescopes on Jupiter. Several space observatories did the same, including the Hubble Space Telescope, the ROSAT X-ray-observing satellite, the W. M. Keck Observatory, and the Galileo spacecraft, then on its way to a rendezvous with Jupiter scheduled for 1995. Although the impacts took place on the side of Jupiter hidden from Earth, Galileo, then at a distance of from the planet, was able to see the impacts as they occurred. Jupiter's rapid rotation brought the impact sites into view for terrestrial observers a few minutes after the collisions. Two other space probes made observations at the time of the impact: the Ulysses spacecraft, primarily designed for solar observations, was pointed toward Jupiter from its location away, and the distant Voyager 2 probe, some from Jupiter and on its way out of the Solar System following its encounter with Neptune in 1989, was programmed to look for radio emission in the 1–390 kHz range and make observations with its ultraviolet spectrometer. Astronomer Ian Morison described the impacts as follows: The first impact occurred at 20:13 UTC on July 16, 1994, when fragment A of the [comet's] nucleus slammed into Jupiter's southern hemisphere at about . Instruments on Galileo detected a fireball that reached a peak temperature of about , compared to the typical Jovian cloud-top temperature of about . It then expanded and cooled rapidly to about . The plume from the fireball quickly reached a height of over and was observed by the HST. A few minutes after the impact fireball was detected, Galileo measured renewed heating, probably due to ejected material falling back onto the planet. Earth-based observers detected the fireball rising over the limb of the planet shortly after the initial impact. Despite published predictions, astronomers had not expected to see the fireballs from the impacts and did not have any idea how visible the other atmospheric effects of the impacts would be from Earth. Observers soon saw a huge dark spot after the first impact; the spot was visible from Earth. This and subsequent dark spots were thought to have been caused by debris from the impacts, and were markedly asymmetric, forming crescent shapes in front of the direction of impact. Over the next six days, 21 distinct impacts were observed, with the largest coming on July 18 at 07:33 UTC when fragment G struck Jupiter. This impact created a giant dark spot over (almost one Earth diameter) across, and was estimated to have released an energy equivalent to 6,000,000 megatons of TNT (600 times the world's nuclear arsenal).
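As a rough illustration of where figures like the 6,000,000-megaton estimate come from, the sketch below computes the kinetic energy of a single spherical fragment. The diameter, bulk density, and impact speed used here are placeholder assumptions for illustration only (they are not the article's own values), so the printed number is not a reconstruction of the fragment G estimate.

```python
import math

MEGATON_TNT_J = 4.184e15  # joules per megaton of TNT

def impact_energy_megatons(diameter_m, density_kg_m3, speed_m_s):
    """Kinetic energy E = 1/2 m v^2 of a spherical fragment, in Mt of TNT."""
    radius = diameter_m / 2.0
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius ** 3
    return 0.5 * mass * speed_m_s ** 2 / MEGATON_TNT_J

# Illustrative (assumed) inputs: a ~2 km fragment of low-density cometary
# material arriving at roughly Jupiter's escape velocity (~60 km/s).
print(f"{impact_energy_megatons(2_000, 500, 60_000):,.0f} Mt TNT equivalent")
```

Because the mass scales with the cube of the diameter, changing the assumed size by a factor of two changes the energy by nearly an order of magnitude, which is one reason published estimates for the larger fragments span such a wide range.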
Two impacts 12 hours apart on July 19 created impact marks of similar size to that caused by fragment G, and impacts continued until July 22, when fragment W struck the planet. Observations and discoveries Chemical studies Observers hoped that the impacts would give them a first glimpse of Jupiter beneath the cloud tops, as lower material was exposed by the comet fragments punching through the upper atmosphere. Spectroscopic studies revealed absorption lines in the Jovian spectrum due to diatomic sulfur (S2) and carbon disulfide (CS2), the first detection of either in Jupiter, and only the second detection of S2 in any astronomical object. Other molecules detected included ammonia (NH3) and hydrogen sulfide (H2S). The amount of sulfur implied by the quantities of these compounds was much greater than the amount that would be expected in a small cometary nucleus, showing that material from within Jupiter was being revealed. Oxygen-bearing molecules such as sulfur dioxide were not detected, to the surprise of astronomers. As well as these molecules, emissions from heavy atoms such as iron, magnesium, and silicon were detected, with abundances consistent with what would be found in a cometary nucleus. Although a substantial amount of water was detected spectroscopically, it was not as much as predicted, meaning that either the water layer thought to exist below the clouds was thinner than predicted, or that the cometary fragments did not penetrate deeply enough. Waves As predicted, the collisions generated enormous waves that swept across Jupiter at speeds of and were observed for over two hours after the largest impacts. The waves were thought to be travelling within a stable layer acting as a waveguide, and some scientists thought the stable layer must lie within the hypothesised tropospheric water cloud. However, other evidence seemed to indicate that the cometary fragments had not reached the water layer, and the waves were instead propagating within the stratosphere. Other observations Radio observations revealed a sharp increase in continuum emission at a wavelength of after the largest impacts, which peaked at 120% of the normal emission from the planet. This was thought to be due to synchrotron radiation, caused by the injection of relativistic electrons—electrons with velocities near the speed of light—into the Jovian magnetosphere by the impacts. About an hour after fragment K entered Jupiter, observers recorded auroral emission near the impact region, as well as at the antipode of the impact site with respect to Jupiter's strong magnetic field. The cause of these emissions was difficult to establish due to a lack of knowledge of Jupiter's internal magnetic field and of the geometry of the impact sites. One possible explanation was that upwardly accelerating shock waves from the impact accelerated charged particles enough to cause auroral emission, a phenomenon more typically associated with fast-moving solar wind particles striking a planetary atmosphere near a magnetic pole. Some astronomers had suggested that the impacts might have a noticeable effect on the Io torus, a torus of high-energy particles connecting Jupiter with the highly volcanic moon Io. High resolution spectroscopic studies found that variations in the ion density, rotational velocity, and temperatures at the time of impact and afterwards were within the normal limits.
Voyager 2 failed to detect anything, with calculations showing that the fireballs were just below the craft's limit of detection; no abnormal levels of UV radiation or radio signals were registered after the blast. Ulysses also failed to detect any abnormal radio frequencies. Post-impact analysis Several models were devised to compute the density and size of Shoemaker–Levy 9. Its average density was calculated to be about ; the breakup of a much less dense comet would not have resembled the observed string of objects. The size of the parent comet was calculated to be about in diameter. These predictions were among the few that were actually confirmed by subsequent observation. One of the surprises of the impacts was the small amount of water revealed compared to prior predictions. Before the impact, models of Jupiter's atmosphere had indicated that the break-up of the largest fragments would occur at atmospheric pressures of anywhere from 30 kilopascals to a few tens of megapascals (from 0.3 to a few hundred bar), with some predictions that the comet would penetrate a layer of water and create a bluish shroud over that region of Jupiter. Astronomers did not observe large amounts of water following the collisions, and later impact studies found that fragmentation and destruction of the cometary fragments in a meteor air burst probably occurred at much higher altitudes than previously expected, with even the largest fragments being destroyed when the pressure reached , well above the expected depth of the water layer. The smaller fragments were probably destroyed before they even reached the cloud layer. Longer-term effects The visible scars from the impacts could be seen on Jupiter for many months. They were extremely prominent, and observers described them as more easily visible than the Great Red Spot. A search of historical observations revealed that the spots were probably the most prominent transient features ever seen on the planet, and that although the Great Red Spot is notable for its striking color, no spots of the size and darkness of those caused by the SL9 impacts had ever been recorded before or since. Spectroscopic observers found that ammonia and carbon disulfide persisted in the atmosphere for at least fourteen months after the collisions, with a considerable amount of ammonia being present in the stratosphere as opposed to its normal location in the troposphere. Counterintuitively, the atmospheric temperature dropped to normal levels much more quickly at the larger impact sites than at the smaller sites: at the larger impact sites, temperatures were elevated over a region wide, but dropped back to normal levels within a week of the impact. At smaller sites, temperatures 10 K (10 °C; 18 °F) higher than the surroundings persisted for almost two weeks. Global stratospheric temperatures rose immediately after the impacts, then fell to below pre-impact temperatures 2–3 weeks afterwards, before rising slowly to normal temperatures. Frequency of impacts SL9 is not unique in having orbited Jupiter for a time; five comets, including 82P/Gehrels, 147P/Kushida–Muramatsu, and 111P/Helin–Roman–Crockett, are known to have been temporarily captured by the planet. Cometary orbits around Jupiter are unstable, as they will be highly elliptical and likely to be strongly perturbed by the Sun's gravity at apojove (the farthest point on the orbit from the planet).
By far the most massive planet in the Solar System, Jupiter can capture objects relatively frequently, but the size of SL9 makes it a rarity: one post-impact study estimated that comets in diameter impact the planet once in approximately 500 years and those in diameter do so just once in every 6,000 years. There is very strong evidence that comets have previously been fragmented and collided with Jupiter and its satellites. During the Voyager missions to the planet, planetary scientists identified 13 crater chains on Callisto and three on Ganymede, the origin of which was initially a mystery. Crater chains seen on the Moon often radiate from large craters, and are thought to be caused by secondary impacts of the original ejecta, but the chains on the Jovian moons did not lead back to a larger crater. The impact of SL9 strongly implied that the chains were due to trains of disrupted cometary fragments crashing into the satellites. Impact of July 19, 2009 On July 19, 2009, exactly 15 years after the SL9 impacts, a new black spot about the size of the Pacific Ocean appeared in Jupiter's southern hemisphere. Thermal infrared measurements showed the impact site was warm and spectroscopic analysis detected the production of excess hot ammonia and silica-rich dust in the upper regions of Jupiter's atmosphere. Scientists have concluded that another impact event had occurred, but this time a more compact and stronger object, probably a small undiscovered asteroid, was the cause. Jupiter's role in protection of the inner Solar System The events of SL9's interaction with Jupiter greatly highlighted Jupiter's role in protecting the inner planets from both interstellar and in-system debris by acting as a "cosmic vacuum cleaner" for the Solar System (Jupiter barrier). The planet's strong gravitational influence attracts many small comets and asteroids and the rate of cometary impacts on Jupiter is thought to be between 2,000 and 8,000 times higher than the rate on Earth. The extinction of the non-avian dinosaurs at the end of the Cretaceous period is generally thought to have been caused by the Cretaceous–Paleogene impact event, which created the Chicxulub crater, demonstrating that cometary impacts are indeed a serious threat to life on Earth. Astronomers have speculated that without Jupiter's immense gravity, extinction events might have been more frequent on Earth and complex life might not have been able to develop. This is part of the argument used in the Rare Earth hypothesis. In 2009, it was shown that the presence of a smaller planet at Jupiter's position in the Solar System might increase the impact rate of comets on the Earth significantly. A planet of Jupiter's mass still seems to provide increased protection against asteroids, but the total effect on all orbital bodies within the Solar System is unclear. This and other recent models call into question the nature of Jupiter's influence on Earth impacts. 
See also List of Jupiter events Impact events on Jupiter Atmosphere of Jupiter 73P/Schwassmann–Wachmann, a near-Earth comet in the process of disintegrating References Notes Bibliography External links First Comet Shoemaker-Levy 9 website that collected photos submitted from observatories around the world and from Galileo spacecraft, curated by Ron Baalke, Jet Propulsion Laboratory software engineer Comet Shoemaker–Levy 9 FAQ Comet Shoemaker–Levy 9 Photo Gallery Downloadable gif Animation showing time course of impact and size relative to earthsize Comet Shoemaker-Levy 9 Dan Bruton, Texas A&M University Jupiter Swallows Comet Shoemaker Levy 9 APOD: November 5, 2000 Comet Shoemaker–Levy Collision with Jupiter National Space Science Data Center information Simulation of the orbit of SL-9 showing the passage that fragmented the comet and the collision 2 years later Interactive space simulator that includes accurate 3D simulation of the Shoemaker Levy 9 collision Shoemaker-Levy 9 Jupiter Impact Observing Campaign Archive at the NASA Planetary Data System, Small Bodies Node Destroyed comets Discoveries by Carolyn S. Shoemaker Discoveries by Eugene Merle Shoemaker 1993 F2 Collision Jupiter impact events 1994 in science 19930324 Predicted impact events
Comet Shoemaker–Levy 9
[ "Physics" ]
4,207
[ "Collision", "Mechanics" ]
6,804
https://en.wikipedia.org/wiki/Charge-coupled%20device
A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging. Overview In a CCD image sensor, pixels are represented by p-doped metal–oxide–semiconductor (MOS) capacitors. These MOS capacitors, the basic building blocks of a CCD, are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges. Although CCDs are not the only technology to allow for light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data are required. In applications with less exacting quality demands, such as consumer and professional digital cameras, active pixel sensors, also known as CMOS sensors (complementary MOS sensors), are generally used. However, the large quality advantage CCDs enjoyed early on has narrowed over time and since the late 2010s CMOS sensors are the dominant technology, having largely if not completely replaced CCD image sensors. History The basis for the CCD is the metal–oxide–semiconductor (MOS) structure, with MOS capacitors being the basic building blocks of a CCD, and a depleted MOS structure used as the photodetector in early CCD devices. In the late 1960s, Willard Boyle and George E. Smith at Bell Labs were researching MOS technology while working on semiconductor bubble memory. They realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. This led to the invention of the charge-coupled device by Boyle and Smith in 1969. They conceived of the design of what they termed, in their notebook, "Charge 'Bubble' Devices". The initial paper describing the concept in April 1970 listed possible uses as memory, a delay line, and an imaging device. The device could also be used as a shift register. The essence of the design was the ability to transfer charge along the surface of a semiconductor from one storage capacitor to the next. The concept was similar in principle to the bucket-brigade device (BBD), which was developed at Philips Research Labs during the late 1960s. The first experimental device demonstrating the principle was a row of closely spaced metal squares on an oxidized silicon surface electrically accessed by wire bonds. It was demonstrated by Gil Amelio, Michael Francis Tompsett and George Smith in April 1970. This was the first experimental application of the CCD in image sensor technology, and used a depleted MOS structure as the photodetector. The first patent () on the application of CCDs to imaging was assigned to Tompsett, who filed the application in 1971. The first working CCD made with integrated circuit technology was a simple 8-bit shift register, reported by Tompsett, Amelio and Smith in August 1970. This device had input and output circuits and was used to demonstrate its use as a shift register and as a crude eight pixel linear imaging device. Development of the device progressed at a rapid rate. 
By 1971, Bell researchers led by Michael Tompsett were able to capture images with simple linear devices. Several companies, including Fairchild Semiconductor, RCA and Texas Instruments, picked up on the invention and began development programs. Fairchild's effort, led by ex-Bell researcher Gil Amelio, was the first with commercial devices, and by 1974 had a linear 500-element device and a 2D 100 × 100 pixel device. Peter Dillon, a scientist at Kodak Research Labs, invented the first color CCD image sensor by overlaying a color filter array on this Fairchild 100 x 100 pixel Interline CCD starting in 1974. Steven Sasson, an electrical engineer working for the Kodak Apparatus Division, invented a digital still camera using this same Fairchild CCD in 1975. The interline transfer (ILT) CCD device was proposed by L. Walsh and R. Dyck at Fairchild in 1973 to reduce smear and eliminate a mechanical shutter. To further reduce smear from bright light sources, the frame-interline-transfer (FIT) CCD architecture was developed by K. Horii, T. Kuroda and T. Kunii at Matsushita (now Panasonic) in 1981. The first KH-11 KENNEN reconnaissance satellite equipped with charge-coupled device array ( pixels) technology for imaging was launched in December 1976. Under the leadership of Kazuo Iwama, Sony started a large development effort on CCDs involving a significant investment. Eventually, Sony managed to mass-produce CCDs for their camcorders. Before this happened, Iwama died in August 1982. Subsequently, a CCD chip was placed on his tombstone to acknowledge his contribution. The first mass-produced consumer CCD video camera, the CCD-G5, was released by Sony in 1983, based on a prototype developed by Yoshiaki Hagiwara in 1981. Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. They recognized that lag can be eliminated if the signal carriers could be transferred from the photodiode to the CCD. This led to their invention of the pinned photodiode, a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. It was first publicly reported by Teranishi and Ishihara with A. Kohono, E. Oda and K. Arai in 1982, with the addition of an anti-blooming structure. The new photodetector structure invented at NEC was given the name "pinned photodiode" (PPD) by B.C. Burkey at Kodak in 1984. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors. In January 2006, Boyle and Smith were awarded the National Academy of Engineering Charles Stark Draper Prize, and in 2009 they were awarded the Nobel Prize for Physics for their invention of the CCD concept. Michael Tompsett was awarded the 2010 National Medal of Technology and Innovation, for pioneering work and electronic technologies including the design and development of the first CCD imagers. He was also awarded the 2012 IEEE Edison Medal for "pioneering contributions to imaging devices including CCD Imagers, cameras and thermal imagers". Basics of operation In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register (the CCD, properly speaking). 
An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, whereas a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages. In a digital device, these voltages are then sampled, digitized, and usually stored in memory; in an analog device (such as an analog video camera), they are processed into a continuous analog signal (e.g. by feeding the output of the charge amplifier into a low-pass filter), which is then processed and fed out to other circuits for transmission, recording, or other processing. Detailed physics of operation Charge generation Before the MOS capacitors are exposed to light, they are biased into the depletion region; in n-channel CCDs, the silicon under the bias gate is slightly p-doped or intrinsic. The gate is then biased at a positive potential, above the threshold for strong inversion, which will eventually result in the creation of an n channel below the gate as in a MOSFET. However, it takes time to reach this thermal equilibrium: up to hours in high-end scientific cameras cooled at low temperature. Initially after biasing, the holes are pushed far into the substrate, and no mobile electrons are at or near the surface; the CCD thus operates in a non-equilibrium state called deep depletion. Then, when electron–hole pairs are generated in the depletion region, they are separated by the electric field, the electrons move toward the surface, and the holes move toward the substrate. Four pair-generation processes can be identified: photo-generation (up to 95% of quantum efficiency), generation in the depletion region, generation at the surface, and generation in the neutral bulk. The last three processes are known as dark-current generation, and add noise to the image; they can limit the total usable integration time. The accumulation of electrons at or near the surface can proceed either until image integration is over and charge begins to be transferred, or thermal equilibrium is reached. In this case, the well is said to be full. The maximum capacity of each well is known as the well depth, typically about 10^5 electrons per pixel. CCDs are normally susceptible to ionizing radiation and energetic particles, which cause noise in the output of the CCD, and this must be taken into consideration in satellites using CCDs. Design and manufacturing The photoactive region of a CCD is, generally, an epitaxial layer of silicon. It is lightly p-doped (usually with boron) and is grown upon a substrate material, often p++. In buried-channel devices, the type of design utilized in most modern CCDs, certain areas of the surface of the silicon are ion implanted with phosphorus, giving them an n-doped designation. This region defines the channel in which the photogenerated charge packets will travel.
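A minimal sketch of the row-by-row, pixel-by-pixel readout described under "Basics of operation" above. This is only an illustration of the idea, not code for any real CCD controller; the full-well capacity and conversion gain are arbitrary assumptions of the same order as the figures quoted above.

```python
import numpy as np

FULL_WELL_E = 100_000     # assumed well depth, electrons (order 10^5)
GAIN_E_PER_ADU = 2.0      # assumed conversion gain of the output amplifier

def read_out(charge_e):
    """Read a 2-D array of per-pixel charge (electrons), mimicking the
    parallel (row) and serial (pixel) shifts of a CCD shift register."""
    frame = np.clip(charge_e, 0, FULL_WELL_E).astype(float)
    rows, cols = frame.shape
    digital = np.zeros_like(frame)
    for r in range(rows):                  # parallel shift: a row enters ...
        serial_register = frame[r].copy()  # ... the serial register
        for c in range(cols):              # serial shift: one pixel at a time
            electrons = serial_register[c]
            digital[r, c] = electrons / GAIN_E_PER_ADU  # charge -> digital number
    return digital

image = np.random.default_rng(1).poisson(5_000, size=(4, 6))
print(read_out(image))
```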
Simon Sze details the advantages of a buried-channel device: This thin layer (= 0.2–0.3 micron) is fully depleted and the accumulated photogenerated charge is kept away from the surface. This structure has the advantages of higher transfer efficiency and lower dark current, from reduced surface recombination. The penalty is smaller charge capacity, by a factor of 2–3 compared to the surface-channel CCD. The gate oxide, i.e. the capacitor dielectric, is grown on top of the epitaxial layer and substrate. Later in the process, polysilicon gates are deposited by chemical vapor deposition, patterned with photolithography, and etched in such a way that the separately phased gates lie perpendicular to the channels. The channels are further defined by utilization of the LOCOS process to produce the channel stop region. Channel stops are thermally grown oxides that serve to isolate the charge packets in one column from those in another. These channel stops are produced before the polysilicon gates are, as the LOCOS process utilizes a high-temperature step that would destroy the gate material. The channel stops are parallel to, and exclusive of, the channel, or "charge carrying", regions. Channel stops often have a p+ doped region underlying them, providing a further barrier to the electrons in the charge packets (this discussion of the physics of CCD devices assumes an electron transfer device, though hole transfer is possible). The clocking of the gates, alternately high and low, will forward and reverse bias the diode that is provided by the buried channel (n-doped) and the epitaxial layer (p-doped). This will cause the CCD to deplete, near the p–n junction and will collect and move the charge packets beneath the gates—and within the channels—of the device. CCD manufacturing and operation can be optimized for different uses. The above process describes a frame transfer CCD. While CCDs may be manufactured on a heavily doped p++ wafer it is also possible to manufacture a device inside p-wells that have been placed on an n-wafer. This second method, reportedly, reduces smear, dark current, and infrared and red response. This method of manufacture is used in the construction of interline-transfer devices. Another version of CCD is called a peristaltic CCD. In a peristaltic charge-coupled device, the charge-packet transfer operation is analogous to the peristaltic contraction and dilation of the digestive system. The peristaltic CCD has an additional implant that keeps the charge away from the silicon/silicon dioxide interface and generates a large lateral electric field from one gate to the next. This provides an additional driving force to aid in transfer of the charge packets. Architecture The CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer, and interline. The distinguishing characteristic of each of these architectures is their approach to the problem of shuttering. In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor or the image smears as the device is clocked or read out. With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminum). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. 
That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area. Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much. The interline architecture extends this concept one step further and masks every other column of the image sensor for storage. In this device, only one pixel shift has to occur to transfer from image area to storage area; thus, shutter times can be less than a microsecond and smear is essentially eliminated. The advantage is not free, however, as the imaging area is now covered by opaque strips dropping the fill factor to approximately 50 percent and the effective quantum efficiency by an equivalent amount. Modern designs have addressed this deleterious characteristic by adding microlenses on the surface of the device to direct light away from the opaque regions and on the active area. Microlenses can bring the fill factor back up to 90 percent or more depending on pixel size and the overall system's optical design. The choice of architecture comes down to one of utility. If the application cannot tolerate an expensive, failure-prone, power-intensive mechanical shutter, an interline device is the right choice. Consumer snap-shot cameras have used interline devices. On the other hand, for those applications that require the best possible light collection and issues of money, power and time are less important, the full-frame device is the right choice. Astronomers tend to prefer full-frame devices. The frame-transfer falls in between and was a common choice before the fill-factor issue of interline devices was addressed. Today, frame-transfer is usually chosen when an interline architecture is not available, such as in a back-illuminated device. CCDs containing grids of pixels are used in digital cameras, optical scanners, and video cameras as light-sensing devices. They commonly respond to 70 percent of the incident light (meaning a quantum efficiency of about 70 percent) making them far more efficient than photographic film, which captures only about 2 percent of the incident light. Most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero lux (or near zero lux) video-recording/photography. For normal silicon-based detectors, the sensitivity is limited to 1.1 μm. One other consequence of their sensitivity to infrared is that infrared from remote controls often appears on CCD-based digital cameras or camcorders if they do not have infrared blockers. Cooling reduces the array's dark current, improving the sensitivity of the CCD to low light intensities, even for ultraviolet and visible wavelengths. Professional observatories often cool their detectors with liquid nitrogen to reduce the dark current, and therefore the thermal noise, to negligible levels. Frame transfer CCD The frame transfer CCD imager was the first imaging structure proposed for CCD Imaging by Michael Tompsett at Bell Laboratories. A frame transfer CCD is a specialized CCD, often used in astronomy and some professional video cameras, designed for high exposure efficiency and correctness. The normal functioning of a CCD, astronomical or otherwise, can be divided into two phases: exposure and readout. 
During the first phase, the CCD passively collects incoming photons, storing electrons in its cells. After the exposure time is passed, the cells are read out one line at a time. During the readout phase, cells are shifted down the entire area of the CCD. While they are shifted, they continue to collect light. Thus, if the shifting is not fast enough, errors can result from light that falls on a cell holding charge during the transfer. These errors are referred to as rolling shutter effect, making fast moving objects appear distorted. In addition, the CCD cannot be used to collect light while it is being read out. A faster shifting requires a faster readout, and a faster readout can introduce errors in the cell charge measurement, leading to a higher noise level. A frame transfer CCD solves both problems: it has a shielded, not light sensitive, area containing as many cells as the area exposed to light. Typically, this area is covered by a reflective material such as aluminium. When the exposure time is up, the cells are transferred very rapidly to the hidden area. Here, safe from any incoming light, cells can be read out at any speed one deems necessary to correctly measure the cells' charge. At the same time, the exposed part of the CCD is collecting light again, so no delay occurs between successive exposures. The disadvantage of such a CCD is the higher cost: the cell area is basically doubled, and more complex control electronics are needed. Intensified charge-coupled device An intensified charge-coupled device (ICCD) is a CCD that is optically connected to an image intensifier that is mounted in front of the CCD. An image intensifier includes three functional elements: a photocathode, a micro-channel plate (MCP) and a phosphor screen. These three elements are mounted one close behind the other in the mentioned sequence. The photons which are coming from the light source fall onto the photocathode, thereby generating photoelectrons. The photoelectrons are accelerated towards the MCP by an electrical control voltage, applied between photocathode and MCP. The electrons are multiplied inside of the MCP and thereafter accelerated towards the phosphor screen. The phosphor screen finally converts the multiplied electrons back to photons which are guided to the CCD by a fiber optic or a lens. An image intensifier inherently includes a shutter functionality: If the control voltage between the photocathode and the MCP is reversed, the emitted photoelectrons are not accelerated towards the MCP but return to the photocathode. Thus, no electrons are multiplied and emitted by the MCP, no electrons are going to the phosphor screen and no light is emitted from the image intensifier. In this case no light falls onto the CCD, which means that the shutter is closed. The process of reversing the control voltage at the photocathode is called gating and therefore ICCDs are also called gateable CCD cameras. Besides the extremely high sensitivity of ICCD cameras, which enable single photon detection, the gateability is one of the major advantages of the ICCD over the EMCCD cameras. The highest performing ICCD cameras enable shutter times as short as 200 picoseconds. ICCD cameras are in general somewhat higher in price than EMCCD cameras because they need the expensive image intensifier. On the other hand, EMCCD cameras need a cooling system to cool the EMCCD chip down to temperatures around . 
This cooling system adds additional costs to the EMCCD camera and often yields heavy condensation problems in the application. ICCDs are used in night vision devices and in various scientific applications. Electron-multiplying CCD An electron-multiplying CCD (EMCCD, also known as an L3Vision CCD, a product commercialized by e2v Ltd., GB, L3CCD or Impactron CCD, a now-discontinued product offered in the past by Texas Instruments) is a charge-coupled device in which a gain register is placed between the shift register and the output amplifier. The gain register is split up into a large number of stages. In each stage, the electrons are multiplied by impact ionization in a similar way to an avalanche diode. The gain probability at every stage of the register is small (P < 2%), but as the number of elements is large (N > 500), the overall gain can be very high, with single input electrons giving many thousands of output electrons. Reading a signal from a CCD gives a noise background, typically a few electrons. In an EMCCD, this noise is superimposed on many thousands of electrons rather than a single electron; the devices' primary advantage is thus their negligible readout noise. The use of avalanche breakdown for amplification of photo charges had already been described in 1973 by George E. Smith of Bell Telephone Laboratories. EMCCDs show a similar sensitivity to intensified CCDs (ICCDs). However, as with ICCDs, the gain that is applied in the gain register is stochastic and the exact gain that has been applied to a pixel's charge is impossible to know. At high gains (> 30), this uncertainty has the same effect on the signal-to-noise ratio (SNR) as halving the quantum efficiency (QE) with respect to operation with a gain of unity. This effect is referred to as the Excess Noise Factor (ENF). However, at very low light levels (where the quantum efficiency is most important), it can be assumed that a pixel either contains an electron—or not. This removes the noise associated with the stochastic multiplication at the risk of counting multiple electrons in the same pixel as a single electron. To avoid multiple counts in one pixel due to coincident photons in this mode of operation, high frame rates are essential. The dispersion in the gain is large: for multiplication registers with many elements and large gains it is well modelled by the equation P(n) = n^(m-1) e^(-n/g) / ((m-1)! g^m), where P is the probability of getting n output electrons given m input electrons and a total mean multiplication register gain of g. For very large numbers of input electrons, this complex distribution function converges towards a Gaussian. Because of the lower costs and better resolution, EMCCDs are capable of replacing ICCDs in many applications. ICCDs still have the advantage that they can be gated very fast and thus are useful in applications like range-gated imaging. EMCCD cameras indispensably need a cooling system—using either thermoelectric cooling or liquid nitrogen—to cool the chip down to temperatures in the range of . This cooling system adds additional costs to the EMCCD imaging system and may yield condensation problems in the application. However, high-end EMCCD cameras are equipped with a permanent hermetic vacuum system confining the chip to avoid condensation issues. The low-light capabilities of EMCCDs find use in astronomy and biomedical research, among other fields.
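The stochastic multiplication and the resulting excess noise factor can be illustrated with a small Monte Carlo sketch (my own illustration; the stage count and per-stage probability are assumptions chosen within the ranges quoted above). Each electron entering a stage independently gains one secondary electron with probability p; over many stages the mean gain approaches (1 + p)^N, the single-electron gain distribution becomes broad and roughly exponential, and the squared excess noise factor tends towards about 2 — consistent with the statement above that the multiplication noise costs the same as halving the quantum efficiency.

```python
import numpy as np

rng = np.random.default_rng(42)

def em_register(n_in, stages=500, p=0.015):
    """Pass n_in electrons through an EM gain register; at each stage every
    electron independently adds one secondary electron with probability p."""
    n = int(n_in)
    for _ in range(stages):
        n += rng.binomial(n, p)
    return n

outputs = np.array([em_register(1) for _ in range(5_000)])
mean_gain = outputs.mean()                      # ~ (1 + p)**stages, roughly 1.7e3
enf_sq = 1.0 + outputs.var() / mean_gain ** 2   # tends towards ~2 at high gain
print(f"mean gain ~ {mean_gain:.0f}, ENF^2 ~ {enf_sq:.2f}")
```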
In particular, their low noise at high readout speeds makes them very useful for a variety of astronomical applications involving low light sources and transient events such as lucky imaging of faint stars, high speed photon counting photometry, Fabry-Pérot spectroscopy and high-resolution spectroscopy. More recently, these types of CCDs have broken into the field of biomedical research in low-light applications including small animal imaging, single-molecule imaging, Raman spectroscopy, super resolution microscopy, as well as a wide variety of modern fluorescence microscopy techniques thanks to greater SNR in low-light conditions in comparison with traditional CCDs and ICCDs. In terms of noise, commercial EMCCD cameras typically have clock-induced charge (CIC) and dark current (dependent on the extent of cooling) that together lead to an effective readout noise ranging from 0.01 to 1 electrons per pixel read. However, recent improvements in EMCCD technology have led to a new generation of cameras capable of producing significantly less CIC, higher charge transfer efficiency and an EM gain 5 times higher than what was previously available. These advances in low-light detection lead to an effective total background noise of 0.001 electrons per pixel read, a noise floor unmatched by any other low-light imaging device. Use in astronomy Due to the high quantum efficiencies of charge-coupled devices (the ideal quantum efficiency is 100%, one generated electron per incident photon), linearity of their outputs, ease of use compared to photographic plates, and a variety of other reasons, CCDs were very rapidly adopted by astronomers for nearly all UV-to-infrared applications. Thermal noise and cosmic rays may alter the pixels in the CCD array. To counter such effects, astronomers take several exposures with the CCD shutter closed and opened. The average of images taken with the shutter closed is necessary to lower the random noise. Once developed, the dark frame average image is then subtracted from the open-shutter image to remove the dark current and other systematic defects (dead pixels, hot pixels, etc.) in the CCD. Newer Skipper CCDs counter noise by measuring the same collected charge multiple times and have applications in precision searches for light dark matter and in neutrino measurements. The Hubble Space Telescope, in particular, has a highly developed series of steps ("data reduction pipeline") to convert the raw CCD data to useful images. CCD cameras used in astrophotography often require sturdy mounts to cope with vibrations from wind and other sources, along with the tremendous weight of most imaging platforms. To take long exposures of galaxies and nebulae, many astronomers use a technique known as auto-guiding. Most autoguiders use a second CCD chip to monitor deviations during imaging. This chip can rapidly detect errors in tracking and command the mount motors to correct for them. An unusual astronomical application of CCDs, called drift-scanning, uses a CCD to make a fixed telescope behave like a tracking telescope and follow the motion of the sky. The charges in the CCD are transferred and read in a direction parallel to the motion of the sky, and at the same speed. In this way, the telescope can image a larger region of the sky than its normal field of view. The Sloan Digital Sky Survey is the most famous example of this, using the technique to produce a survey of over a quarter of the sky.
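A toy model of the drift-scanning just described (an illustration only, with made-up numbers): charge is clocked down a detector column at the same rate the sky drifts across it, so each sky element is integrated over the full column height before being read out.

```python
import numpy as np

def drift_scan(sky, n_rows=8):
    """Drift-scan (time-delay-and-integrate) readout of one detector column:
    the charge pattern is shifted down one row per step, in step with the
    drifting sky, so each sky element accumulates signal over n_rows steps."""
    column = np.zeros(n_rows)               # charge currently in the column
    samples = []
    for step in range(len(sky) + n_rows):
        samples.append(column[-1])          # read out the bottom pixel
        column = np.roll(column, 1)         # shift every charge packet down one row
        column[0] = 0.0                     # an empty packet enters at the top
        for row in range(n_rows):           # each pixel integrates the sky above it;
            k = step - row                  # sky element k entered the column at step k
            if 0 <= k < len(sky):
                column[row] += sky[k]
    return np.array(samples)

sky = np.array([1.0, 5.0, 2.0, 0.0, 3.0])
print(drift_scan(sky))   # after n_rows empty reads, each sky value appears times n_rows
```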
The Gaia space telescope is another instrument operating in this mode, rotating about its axis at a constant rate of 1 revolution in 6 hours and scanning a 360° by 0.5° strip on the sky during this time; a star traverses the entire focal plane in about 40 seconds (effective exposure time). In addition to imagers, CCDs are also used in an array of analytical instrumentation including spectrometers and interferometers. Color cameras Digital color cameras, including the digital color cameras in smartphones, generally use an integral color image sensor, which has a color filter array fabricated on top of the monochrome pixels of the CCD. The most popular CFA pattern is known as the Bayer filter, which is named for its inventor, Kodak scientist Bryce Bayer. In the Bayer pattern, each square of four pixels has one filtered red, one blue, and two green pixels (the human eye has greater acuity for luminance, which is more heavily weighted in green than in either red or blue). As a result, the luminance information is collected in each row and column using a checkerboard pattern, and the color resolution is lower than the luminance resolution. Better color separation can be reached by three-CCD devices (3CCD) and a dichroic beam splitter prism, that splits the image into red, green and blue components. Each of the three CCDs is arranged to respond to a particular color. Many professional video camcorders, and some semi-professional camcorders, use this technique, although developments in competing CMOS technology have made CMOS sensors, both with beam-splitters and Bayer filters, increasingly popular in high-end video and digital cinema cameras. Another advantage of 3CCD over a Bayer mask device is higher quantum efficiency (higher light sensitivity), because most of the light from the lens enters one of the silicon sensors, while a Bayer mask absorbs a high proportion (more than 2/3) of the light falling on each pixel location. For still scenes, for instance in microscopy, the resolution of a Bayer mask device can be enhanced by microscanning technology. During the process of color co-site sampling, several frames of the scene are produced. Between acquisitions, the sensor is moved in pixel dimensions, so that each point in the visual field is acquired consecutively by elements of the mask that are sensitive to the red, green, and blue components of its color. Eventually every pixel in the image has been scanned at least once in each color and the resolution of the three channels become equivalent (the resolutions of red and blue channels are quadrupled while the green channel is doubled). Sensor sizes Sensors (CCD / CMOS) come in various sizes, or image sensor formats. These sizes are often referred to with an inch fraction designation such as 1/1.8″ or 2/3″ called the optical format. This measurement originates back in the 1950s and the time of Vidicon tubes. Blooming When a CCD exposure is long enough, eventually the electrons that collect in the "bins" in the brightest part of the image will overflow the bin, resulting in blooming. The structure of the CCD allows the electrons to flow more easily in one direction than another, resulting in vertical streaking. Some anti-blooming features that can be built into a CCD reduce its sensitivity to light by using some of the pixel area for a drain structure. James M. Early developed a vertical anti-blooming drain that would not detract from the light collection area, and so did not reduce light sensitivity. 
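A toy model of the blooming overflow just described (an illustration with arbitrary numbers; real devices typically streak in both directions along a column, and anti-blooming drains change the behaviour): charge above the full-well capacity spills into the next pixel down the column, producing the characteristic vertical streak.

```python
import numpy as np

def bloom_column(column_e, full_well=100_000):
    """Spill charge exceeding full_well into the next pixel down the column;
    charge pushed past the end of the column is simply lost in this sketch."""
    col = np.asarray(column_e, dtype=float).copy()
    for i in range(len(col) - 1):
        excess = max(col[i] - full_well, 0.0)
        col[i] -= excess
        col[i + 1] += excess
    col[-1] = min(col[-1], full_well)
    return col

column = np.full(9, 10_000.0)
column[4] = 600_000.0                    # one grossly over-exposed pixel
print(bloom_column(column).astype(int))  # saturated streak below the hot pixel
```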
See also References External links Journal Article On Basics of CCDs Nikon microscopy introduction to CCDs Concepts in Digital Imaging Technology More statistical properties L3CCDs used in astronomy American inventions Integrated circuits Image processing Image sensors Image scanners Astronomical imaging MOSFETs
Charge-coupled device
[ "Technology", "Engineering" ]
6,617
[ "Computer engineering", "Integrated circuits" ]
6,806
https://en.wikipedia.org/wiki/Computer%20memory
Computer memory stores information, such as data and programs, for immediate use in the computer. The term memory is often synonymous with the terms RAM, main memory, or primary storage. Archaic synonyms for main memory include core (for magnetic core memory) and store. Main memory operates at a high speed compared to mass storage, which is slower but less expensive per bit and higher in capacity. Besides storing opened programs and data being actively processed, computer memory serves as a mass storage cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software. If needed, contents of the computer memory can be transferred to storage; a common way of doing this is through a memory management technique called virtual memory. Modern computer memory is implemented as semiconductor memory, where data is stored within memory cells built from MOS transistors and other components on an integrated circuit. There are two main kinds of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash memory and ROM, PROM, EPROM, and EEPROM memory. Examples of volatile memory are dynamic random-access memory (DRAM) used for primary storage and static random-access memory (SRAM) used mainly for CPU cache. Most semiconductor memory is organized into memory cells each storing one bit (0 or 1). Flash memory organization includes both one bit per memory cell and a multi-level cell capable of storing multiple bits per cell. The memory cells are grouped into words of fixed word length, for example, 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. History In the early 1940s, memory technology often permitted a capacity of a few bytes. The first electronic programmable digital computer, the ENIAC, using thousands of vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits stored in the vacuum tubes. The next significant advance in computer memory came with acoustic delay-line memory, developed by J. Presper Eckert in the early 1940s. Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information in the form of sound waves propagating through the mercury, with the quartz crystals acting as transducers to read and write bits. Delay-line memory was limited to a capacity of up to a few thousand bits. Two alternatives to the delay line, the Williams tube and Selectron tube, originated in 1946, both using electron beams in glass tubes as means of storage. Using cathode-ray tubes, Fred Williams invented the Williams tube, which was the first random-access computer memory. The Williams tube was able to store more information than the Selectron tube (the Selectron was limited to 256 bits, while the Williams tube could store thousands) and was less expensive. The Williams tube was nevertheless frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Magnetic-core memory allowed for memory recall after power loss. It was developed by Frederick W. Viehe and An Wang in the late 1940s, and improved by Jay Forrester and Jan A. Rajchman in the early 1950s, before being commercialized with the Whirlwind I computer in 1953.
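The addressing rule stated above (an N-bit binary address selects one of 2^N words) can be made concrete with a couple of lines; this is a generic illustration, not tied to any particular machine:

```python
def addressable_words(address_bits: int) -> int:
    """Number of distinct words an N-bit binary address can select."""
    return 2 ** address_bits

for n in (10, 16, 20, 32):
    print(f"{n:2d} address bits -> {addressable_words(n):,} words")
# 10 -> 1,024;  16 -> 65,536;  20 -> 1,048,576;  32 -> 4,294,967,296
```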
Magnetic-core memory was the dominant form of memory until the development of MOS semiconductor memory in the 1960s. The first semiconductor memory was implemented as a flip-flop circuit in the early 1960s using bipolar transistors. Semiconductor memory made from discrete devices was first shipped by Texas Instruments to the United States Air Force in 1961. In the same year, the concept of solid-state memory on an integrated circuit (IC) chip was proposed by applications engineer Bob Norman at Fairchild Semiconductor. The first bipolar semiconductor memory IC chip was the SP95 introduced by IBM in 1965. While semiconductor memory offered improved performance over magnetic-core memory, it remained larger and more expensive and did not displace magnetic-core memory until the late 1960s. MOS memory The invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) enabled the practical use of metal–oxide–semiconductor (MOS) transistors as memory cell storage elements. MOS memory was developed by John Schmidt at Fairchild Semiconductor in 1964. In addition to higher performance, MOS semiconductor memory was cheaper and consumed less power than magnetic core memory. In 1965, J. Wood and R. Ball of the Royal Radar Establishment proposed digital storage systems that use CMOS (complementary MOS) memory cells, in addition to MOSFET power devices for the power supply, switched cross-coupling, switches and delay-line storage. The development of silicon-gate MOS integrated circuit (MOS IC) technology by Federico Faggin at Fairchild in 1968 enabled the production of MOS memory chips. NMOS memory was commercialized by IBM in the early 1970s. MOS memory overtook magnetic core memory as the dominant memory technology in the early 1970s. The two main types of volatile random-access memory (RAM) are static random-access memory (SRAM) and dynamic random-access memory (DRAM). Bipolar SRAM was invented by Robert Norman at Fairchild Semiconductor in 1963, followed by the development of MOS SRAM by John Schmidt at Fairchild in 1964. SRAM became an alternative to magnetic-core memory, but requires six transistors for each bit of data. Commercial use of SRAM began in 1965, when IBM introduced their SP95 SRAM chip for the System/360 Model 95. Toshiba introduced bipolar DRAM memory cells for its Toscal BC-1411 electronic calculator in 1965. While it offered improved performance, bipolar DRAM could not compete with the lower price of the then dominant magnetic-core memory. MOS technology is the basis for modern DRAM. In 1966, Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found it was possible to build capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent for a single-transistor DRAM memory cell based on MOS technology. This led to the first commercial DRAM IC chip, the Intel 1103 in October 1970. Synchronous dynamic random-access memory (SDRAM) later debuted with the Samsung KM48SL2000 chip in 1992. The term memory is also often used to refer to non-volatile memory including read-only memory (ROM) through modern flash memory. Programmable read-only memory (PROM) was invented by Wen Tsing Chow in 1956, while working for the Arma Division of the American Bosch Arma Corporation. 
In 1967, Dawon Kahng and Simon Sze of Bell Labs proposed that the floating gate of a MOS semiconductor device could be used for the cell of a reprogrammable ROM, which led to Dov Frohman of Intel inventing EPROM (erasable PROM) in 1971. EEPROM (electrically erasable PROM) was developed by Yasuo Tarui, Yutaka Hayashi and Kiyoko Naga at the Electrotechnical Laboratory in 1972. Flash memory was invented by Fujio Masuoka at Toshiba in the early 1980s. Masuoka and colleagues presented the invention of NOR flash in 1984, and then NAND flash in 1987. Toshiba commercialized NAND flash memory in 1987. Developments in technology and economies of scale have made possible so-called very large memory (VLM) computers. Volatility categories Volatile memory Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). DRAM dominates for desktop system memory. SRAM is used for CPU cache. SRAM is also found in small embedded systems requiring little memory. SRAM retains its contents as long as the power is connected and may use a simpler interface, but commonly uses six transistors per bit. Dynamic RAM is more complicated for interfacing and control, needing regular refresh cycles to prevent losing its contents, but uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much cheaper per-bit costs. Non-volatile memory Non-volatile memory can retain the stored information even when not powered. Examples of non-volatile memory include read-only memory, flash memory, most types of magnetic computer storage devices (e.g. hard disk drives, floppy disks and magnetic tape), optical discs, and early computer storage methods such as magnetic drum, paper tape and punched cards. Non-volatile memory technologies under development include ferroelectric RAM, programmable metallization cell, Spin-transfer torque magnetic RAM, SONOS, resistive random-access memory, racetrack memory, Nano-RAM, 3D XPoint, and millipede memory. Semi-volatile memory A third category of memory is semi-volatile. The term is used to describe a memory that has some limited non-volatile duration after power is removed, but then data is ultimately lost. A typical goal when using a semi-volatile memory is to provide the high performance and durability associated with volatile memories while providing some benefits of non-volatile memory. For example, some non-volatile memory types experience wear when written. A worn cell has increased volatility but otherwise continues to work. Data locations which are written frequently can thus be directed to use worn circuits. As long as the location is updated within some known retention time, the data stays valid. After a period of time without update, the value is copied to a less-worn circuit with longer retention. Writing first to the worn area allows a high write rate while avoiding wear on the not-worn circuits. As a second example, an STT-RAM can be made non-volatile by building large cells, but doing so raises the cost per bit and power requirements and reduces the write speed. Using small cells improves cost, power, and speed, but leads to semi-volatile behavior. 
In some applications, the increased volatility can be managed to provide many benefits of a non-volatile memory, for example by removing power but forcing a wake-up before data is lost; or by caching read-only data and discarding the cached data if the power-off time exceeds the non-volatile threshold. The term semi-volatile is also used to describe semi-volatile behavior constructed from other memory types, such as nvSRAM, which combines SRAM and a non-volatile memory on the same chip, where an external signal copies data from the volatile memory to the non-volatile memory, but if power is removed before the copy occurs, the data is lost. Another example is battery-backed RAM, which uses an external battery to power the memory device in case of external power loss. If power is off for an extended period of time, the battery may run out, resulting in data loss. Management Proper management of memory is vital for a computer system to operate properly. Modern operating systems have complex systems to properly manage memory. Failure to do so can lead to bugs or slow performance. Bugs Improper management of memory is a common cause of bugs and security vulnerabilities, including the following types: A memory leak occurs when a program requests memory from the operating system and never returns the memory when it is done with it. A program with this bug will gradually require more and more memory until the program fails as the operating system runs out. A segmentation fault results when a program tries to access memory that it does not have permission to access. Generally, a program doing so will be terminated by the operating system. A buffer overflow occurs when a program writes data to the end of its allocated space and then continues to write data beyond this to memory that has been allocated for other purposes. This may result in erratic program behavior, including memory access errors, incorrect results, a crash, or a breach of system security. They are thus the basis of many software vulnerabilities and can be maliciously exploited. Virtual memory Virtual memory is a system where physical memory is managed by the operating system typically with assistance from a memory management unit, which is part of many modern CPUs. It allows multiple types of memory to be used. For example, some data can be stored in RAM while other data is stored on a hard drive (e.g. in a swapfile), functioning as an extension of the cache hierarchy. This offers several advantages. Computer programmers no longer need to worry about where their data is physically stored or whether the user's computer will have enough memory. The operating system will place actively used data in RAM, which is much faster than hard disks. When the amount of RAM is not sufficient to run all the current programs, it can result in a situation where the computer spends more time moving data from RAM to disk and back than it does accomplishing tasks; this is known as thrashing. Protected memory Protected memory is a system where each program is given an area of memory to use and is prevented from going outside that range. If the operating system detects that a program has tried to alter memory that does not belong to it, the program is terminated (or otherwise restricted or redirected). This way, only the offending program crashes, and other programs are not affected by the misbehavior (whether accidental or intentional). Use of protected memory greatly enhances both the reliability and security of a computer system. 
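As a rough illustration of the bug classes described above, the following minimal C sketch (not part of the original article; the buffer sizes and the bad address are arbitrary) shows a memory leak, a buffer overflow, and an access to memory the process does not own, which a system with protected memory would normally stop by terminating the program with a segmentation fault:

```c
#include <stdlib.h>
#include <string.h>

void leak_example(void) {
    /* Memory leak: the buffer is allocated but never freed, so each
       call permanently consumes 1 MiB until the process runs out. */
    char *buffer = malloc(1 << 20);
    if (buffer == NULL) return;
    memset(buffer, 0, 1 << 20);
    /* free(buffer) is missing here -- this is the leak. */
}

void overflow_example(void) {
    /* Buffer overflow: writing 16 bytes into an 8-byte array spills
       into whatever happens to be stored after it (undefined behavior). */
    char small[8];
    memcpy(small, "0123456789ABCDEF", 16);
}

int main(void) {
    leak_example();
    overflow_example();
    /* Segmentation fault: dereferencing an address the process does not
       own; with protected memory the operating system terminates the
       program here instead of letting it corrupt other programs. */
    int *not_ours = (int *)0x1;
    *not_ours = 42;
    return 0;
}
```

The sketch is intentionally broken: compiled and run, it typically ends with the operating system killing the process at the final write, which is exactly the containment behavior that protected memory provides.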
Without protected memory, it is possible that a bug in one program will alter the memory used by another program. This will cause that other program to run on corrupted memory with unpredictable results. If the operating system's memory is corrupted, the entire computer system may crash and need to be rebooted. At times programs intentionally alter the memory used by other programs. This is done by viruses and malware to take over computers. It may also be done benignly by legitimate programs that are intended to modify other programs, such as debuggers inserting breakpoints or hooks. See also Memory geometry Memory hierarchy Memory organization Processor registers store data but normally are not considered as memory, since they only store one word and do not include an addressing mechanism. Universal memory, memory combining both large capacity and high speed Notes References Further reading MOSFETs Digital electronics
Computer memory
[ "Engineering" ]
3,018
[ "Electronic engineering", "Digital electronics" ]
6,813
https://en.wikipedia.org/wiki/Chandrasekhar%20limit
The Chandrasekhar limit is the maximum mass of a stable white dwarf star. The currently accepted value of the Chandrasekhar limit is about 1.4 solar masses (roughly 2.8 × 10^30 kg). The limit was named after Subrahmanyan Chandrasekhar. White dwarfs resist gravitational collapse primarily through electron degeneracy pressure, compared to main sequence stars, which resist collapse through thermal pressure. The Chandrasekhar limit is the mass above which electron degeneracy pressure in the star's core is insufficient to balance the star's own gravitational self-attraction. Physics Normal stars fuse gravitationally compressed hydrogen into helium, generating vast amounts of heat. As the hydrogen is consumed, the star's core compresses further, allowing the helium and heavier nuclei to fuse, ultimately resulting in stable iron nuclei, a process called stellar evolution. The next step depends upon the mass of the star. Stars below the Chandrasekhar limit become stable white dwarf stars, remaining that way throughout the rest of the history of the universe (assuming the absence of external forces). Stars above the limit can become neutron stars or black holes. The Chandrasekhar limit is a consequence of competition between gravity and electron degeneracy pressure. Electron degeneracy pressure is a quantum-mechanical effect arising from the Pauli exclusion principle. Since electrons are fermions, no two electrons can be in the same state, so not all electrons can be in the minimum-energy level. Rather, electrons must occupy a band of energy levels. Compression of the electron gas increases the number of electrons in a given volume and raises the maximum energy level in the occupied band. Therefore, the energy of the electrons increases on compression, so pressure must be exerted on the electron gas to compress it, producing electron degeneracy pressure. With sufficient compression, electrons are forced into nuclei in the process of electron capture, relieving the pressure. In the nonrelativistic case, electron degeneracy pressure gives rise to an equation of state of the form $P = K_1 \rho^{5/3}$, where $P$ is the pressure, $\rho$ is the mass density, and $K_1$ is a constant. Solving the hydrostatic equation leads to a model white dwarf that is a polytrope of index 3/2 – and therefore has radius inversely proportional to the cube root of its mass, and volume inversely proportional to its mass. As the mass of a model white dwarf increases, the typical energies to which degeneracy pressure forces the electrons are no longer negligible relative to their rest masses. The velocities of the electrons approach the speed of light, and special relativity must be taken into account. In the strongly relativistic limit, the equation of state takes the form $P = K_2 \rho^{4/3}$. This yields a polytrope of index 3, which has a total mass, $M_{\rm limit}$, depending only on $K_2$. For a fully relativistic treatment, the equation of state used interpolates between the equations $P = K_1 \rho^{5/3}$ for small $\rho$ and $P = K_2 \rho^{4/3}$ for large $\rho$. When this is done, the model radius still decreases with mass, but becomes zero at $M_{\rm limit}$. This is the Chandrasekhar limit. The curves of radius against mass for the non-relativistic and relativistic models are shown in the graph. They are colored blue and green, respectively; $\mu_e$ has been set equal to 2. Radius is measured in standard solar radii or kilometers, and mass in standard solar masses. Calculated values for the limit vary depending on the nuclear composition of the mass. 
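As a rough numerical illustration of that composition dependence, the following C sketch (not part of the original article) evaluates the standard expression quoted in the next paragraph for two assumed compositions. The physical constants are approximate SI values and the choices of mean molecular weight per electron are merely illustrative: $\mu_e = 2$ for a helium or carbon–oxygen white dwarf, and $\mu_e = 56/26$ for a hypothetical pure-iron composition.

```c
#include <stdio.h>
#include <math.h>

/* Evaluates the Chandrasekhar mass for a given mean molecular weight
   per electron, using the expression quoted in the following section.
   All constants are approximate SI values. */
int main(void) {
    const double PI    = 3.141592653589793;
    const double hbar  = 1.054571817e-34;  /* reduced Planck constant, J s  */
    const double c     = 2.99792458e8;     /* speed of light, m/s           */
    const double G     = 6.67430e-11;      /* gravitational constant        */
    const double m_H   = 1.6735575e-27;    /* mass of hydrogen atom, kg     */
    const double omega = 2.018236;         /* Lane-Emden (n = 3) constant   */
    const double M_sun = 1.98892e30;       /* solar mass, kg                */

    const double mu_e[] = { 2.0, 56.0 / 26.0 };  /* illustrative compositions */

    for (int i = 0; i < 2; i++) {
        double m = omega * sqrt(3.0 * PI) / 2.0
                 * pow(hbar * c / G, 1.5)
                 / pow(mu_e[i] * m_H, 2.0);
        printf("mu_e = %.3f  ->  M_limit = %.2f solar masses\n",
               mu_e[i], m / M_sun);
    }
    return 0;
}
```

Run as written, the sketch gives about 1.4 solar masses for $\mu_e = 2$ and a noticeably smaller value for the heavier composition, since the limit scales as $1/\mu_e^2$.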
Chandrasekhar gives the following expression, based on the equation of state for an ideal Fermi gas: $M_{\rm limit} = \frac{\omega_3^0 \sqrt{3\pi}}{2} \left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{(\mu_e m_H)^2}$, where: $\hbar$ is the reduced Planck constant, $c$ is the speed of light, $G$ is the gravitational constant, $\mu_e$ is the average molecular weight per electron, which depends upon the chemical composition of the star, $m_H$ is the mass of the hydrogen atom, and $\omega_3^0 \approx 2.018$ is a constant connected with the solution to the Lane–Emden equation. As $\sqrt{\hbar c / G}$ is the Planck mass, the limit is of the order of $M_{\rm Pl}^3 / m_H^2$. The limiting mass can be obtained formally from Chandrasekhar's white dwarf equation by taking the limit of large central density. A more accurate value of the limit than that given by this simple model requires adjusting for various factors, including electrostatic interactions between the electrons and nuclei and effects caused by nonzero temperature. Lieb and Yau have given a rigorous derivation of the limit from a relativistic many-particle Schrödinger equation. History In 1926, the British physicist Ralph H. Fowler observed that the relationship between the density, energy, and temperature of white dwarfs could be explained by viewing them as a gas of nonrelativistic, non-interacting electrons and nuclei that obey Fermi–Dirac statistics. This Fermi gas model was then used by the British physicist Edmund Clifton Stoner in 1929 to calculate the relationship among the mass, radius, and density of white dwarfs, assuming they were homogeneous spheres. Wilhelm Anderson applied a relativistic correction to this model, giving rise to a maximum possible mass of approximately . In 1930, Stoner derived the internal energy–density equation of state for a Fermi gas, and was then able to treat the mass–radius relationship in a fully relativistic manner, giving a limiting mass of approximately (for ). Stoner went on to derive the pressure–density equation of state, which he published in 1932. These equations of state were also previously published by the Soviet physicist Yakov Frenkel in 1928, together with some other remarks on the physics of degenerate matter. Frenkel's work, however, was ignored by the astronomical and astrophysical community. A series of papers published between 1931 and 1935 had its beginning on a trip from India to England in 1930, where the Indian physicist Subrahmanyan Chandrasekhar worked on the calculation of the statistics of a degenerate Fermi gas. In these papers, Chandrasekhar solved the hydrostatic equation together with the nonrelativistic Fermi gas equation of state, and also treated the case of a relativistic Fermi gas, giving rise to the value of the limit shown above. Chandrasekhar reviews this work in his Nobel Prize lecture. The existence of a related limit, based on the conceptual breakthrough of combining relativity with Fermi degeneracy, was first established in separate papers published by Wilhelm Anderson and E. C. Stoner for a uniform density star in 1929. Eric G. Blackman wrote that the roles of Stoner and Anderson in the discovery of mass limits were overlooked when Freeman Dyson wrote a biography of Chandrasekhar. Michael Nauenberg claims that Stoner established the mass limit first. The priority dispute has also been discussed at length by Virginia Trimble who writes that: "Chandrasekhar famously, perhaps even notoriously did his critical calculation on board ship in 1930, and ... was not aware of either Stoner's or Anderson's work at the time. 
His work was therefore independent, but, more to the point, he adopted Eddington's polytropes for his models which could, therefore, be in hydrostatic equilibrium, which constant density stars cannot, and real ones must be." This value was also computed in 1932 by the Soviet physicist Lev Landau, who, however, did not apply it to white dwarfs and concluded that quantum laws might be invalid for stars heavier than 1.5 solar masses. Chandrasekhar–Eddington dispute Chandrasekhar's work on the limit aroused controversy, owing to the opposition of the British astrophysicist Arthur Eddington. Eddington was aware that the existence of black holes was theoretically possible, and also realized that the existence of the limit made their formation possible. However, he was unwilling to accept that this could happen. After a talk by Chandrasekhar on the limit in 1935, Eddington publicly dismissed the result, insisting that some law of nature must prevent a star from behaving in such an absurd way. Eddington's proposed solution to the perceived problem was to modify relativistic mechanics so as to make the law $P = K_1 \rho^{5/3}$ universally applicable, even for large $\rho$. Although Niels Bohr, Fowler, Wolfgang Pauli, and other physicists agreed with Chandrasekhar's analysis, at the time, owing to Eddington's status, they were unwilling to publicly support Chandrasekhar. Through the rest of his life, Eddington held to his position in his writings, including his work on his fundamental theory. The drama associated with this disagreement is one of the main themes of Empire of the Stars, Arthur I. Miller's biography of Chandrasekhar. Chandrasekhar himself eventually chose to move on, leaving the study of stellar structure to focus on stellar dynamics. In 1983, in recognition of his work, Chandrasekhar shared a Nobel prize "for his theoretical studies of the physical processes of importance to the structure and evolution of the stars" with William Alfred Fowler. Applications The core of a star is kept from collapsing by the heat generated by the fusion of nuclei of lighter elements into heavier ones. At various stages of stellar evolution, the nuclei required for this process are exhausted, and the core collapses, causing it to become denser and hotter. A critical situation arises when iron accumulates in the core, since iron nuclei are incapable of generating further energy through fusion. If the core becomes sufficiently dense, electron degeneracy pressure will play a significant part in stabilizing it against gravitational collapse. If a main-sequence star is not too massive (less than approximately 8 solar masses), it eventually sheds enough mass to form a white dwarf having mass below the Chandrasekhar limit, which consists of the former core of the star. For more-massive stars, electron degeneracy pressure does not keep the iron core from collapsing to very great density, leading to formation of a neutron star, black hole, or, speculatively, a quark star. (For very massive, low-metallicity stars, it is also possible that instabilities destroy the star completely.) During the collapse, neutrons are formed by the capture of electrons by protons in the process of electron capture, leading to the emission of neutrinos. The decrease in gravitational potential energy of the collapsing core releases a large amount of energy on the order of 10^46 joules (100 foes). Most of this energy is carried away by the emitted neutrinos and the kinetic energy of the expanding shell of gas; only about 1% is emitted as optical light. This process is believed to be responsible for supernovae of types Ib, Ic, and II. 
Type Ia supernovae derive their energy from runaway fusion of the nuclei in the interior of a white dwarf. This fate may befall carbon–oxygen white dwarfs that accrete matter from a companion giant star, leading to a steadily increasing mass. As the white dwarf's mass approaches the Chandrasekhar limit, its central density increases, and, as a result of compressional heating, its temperature also increases. This eventually ignites nuclear fusion reactions, leading to an immediate carbon detonation, which disrupts the star and causes the supernova. A strong indication of the reliability of Chandrasekhar's formula is that the absolute magnitudes of supernovae of Type Ia are all approximately the same; at maximum luminosity, the absolute magnitude $M_V$ is approximately −19.3, with a standard deviation of no more than 0.3. A 1-sigma interval therefore represents a factor of less than 2 in luminosity. This seems to indicate that all type Ia supernovae convert approximately the same amount of mass to energy. Super-Chandrasekhar mass supernovas In April 2003, the Supernova Legacy Survey observed a type Ia supernova, designated SNLS-03D3bb, in a galaxy approximately 4 billion light years away. According to a group of astronomers at the University of Toronto and elsewhere, the observations of this supernova are best explained by assuming that it arose from a white dwarf that had grown to twice the mass of the Sun before exploding. They believe that the star, dubbed the "Champagne Supernova", may have been spinning so fast that a centrifugal tendency allowed it to exceed the limit. Alternatively, the supernova may have resulted from the merger of two white dwarfs, so that the limit was only violated momentarily. Nevertheless, they point out that this observation poses a challenge to the use of type Ia supernovae as standard candles. Since the observation of the Champagne Supernova in 2003, several more type Ia supernovae have been observed that are very bright, and thought to have originated from white dwarfs whose masses exceeded the Chandrasekhar limit. These include SN 2006gz, SN 2007if, and SN 2009dc. The super-Chandrasekhar mass white dwarfs that gave rise to these supernovae are believed to have had masses up to 2.4–2.8 solar masses. One way to potentially explain the problem of the Champagne Supernova was to consider it the result of an aspherical explosion of a white dwarf. However, spectropolarimetric observations of SN 2009dc showed it had a polarization smaller than 0.3, making the large asphericity theory unlikely. Tolman–Oppenheimer–Volkoff limit Stars sufficiently massive to pass the Chandrasekhar limit provided by electron degeneracy pressure do not become white dwarf stars. Instead they explode as supernovae. If the final mass is below the Tolman–Oppenheimer–Volkoff limit, then neutron degeneracy pressure contributes to the balance against gravity and the result will be a neutron star; but if the total mass is above the Tolman–Oppenheimer–Volkoff limit, the result will be a black hole. See also Bekenstein bound Chandrasekhar's white dwarf equation Schönberg–Chandrasekhar limit Tolman–Oppenheimer–Volkoff limit References Further reading On Stars, Their Evolution and Their Stability, Nobel Prize lecture, Subrahmanyan Chandrasekhar, December 8, 1983. White dwarf stars and the Chandrasekhar limit, Masters' thesis, Dave Gentile, DePaul University, 1995. Estimating Stellar Parameters from Energy Equipartition, sciencebits.com. 
Discusses how to find mass-radius relations and mass limits for white dwarfs using simple energy arguments. Astrophysics White dwarfs Neutron stars Stellar dynamics
Chandrasekhar limit
[ "Physics", "Astronomy" ]
2,886
[ "Astronomical sub-disciplines", "Astrophysics", "Stellar dynamics" ]
6,818
https://en.wikipedia.org/wiki/Citric%20acid%20cycle
The citric acid cycle—also known as the Krebs cycle, Szent–Györgyi–Krebs cycle, or TCA cycle (tricarboxylic acid cycle)—is a series of biochemical reactions that release the energy stored in nutrients through the oxidation of acetyl-CoA derived from carbohydrates, fats, proteins, and alcohol. The chemical energy released is available in the form of ATP. The Krebs cycle is used by organisms that respire (as opposed to organisms that ferment) to generate energy, either by anaerobic respiration or aerobic respiration. In addition, the cycle provides precursors of certain amino acids, as well as the reducing agent NADH, that are used in numerous other reactions. Its central importance to many biochemical pathways suggests that it was one of the earliest components of metabolism. Even though it is branded as a "cycle", it is not necessary for metabolites to follow only one specific route; at least three alternative segments of the citric acid cycle have been recognized. The name of this metabolic pathway is derived from the citric acid (a tricarboxylic acid, often called citrate, as the ionized form predominates at biological pH) that is consumed and then regenerated by this sequence of reactions to complete the cycle. The cycle consumes acetate (in the form of acetyl-CoA) and water, reduces NAD+ to NADH, and releases carbon dioxide. The NADH generated by the citric acid cycle is fed into the oxidative phosphorylation (electron transport) pathway. The net result of these two closely linked pathways is the oxidation of nutrients to produce usable chemical energy in the form of ATP. In eukaryotic cells, the citric acid cycle occurs in the matrix of the mitochondrion. In prokaryotic cells, such as bacteria, which lack mitochondria, the citric acid cycle reaction sequence is performed in the cytosol with the proton gradient for ATP production being across the cell's surface (plasma membrane) rather than the inner membrane of the mitochondrion. For each pyruvate molecule (from glycolysis), the overall yield of energy-containing compounds from the citric acid cycle is three NADH, one FADH2, and one GTP. Discovery Several of the components and reactions of the citric acid cycle were established in the 1930s by the research of Albert Szent-Györgyi, who received the Nobel Prize in Physiology or Medicine in 1937 specifically for his discoveries pertaining to fumaric acid, a component of the cycle. He made this discovery by studying pigeon breast muscle. Because this tissue maintains its oxidative capacity well after being broken down in the Latapie mincer and released into aqueous solutions, pigeon breast muscle was very well suited to the study of oxidative reactions. The citric acid cycle itself was finally identified in 1937 by Hans Adolf Krebs and William Arthur Johnson while at the University of Sheffield, for which the former received the Nobel Prize for Physiology or Medicine in 1953, and for whom the cycle is sometimes named the "Krebs cycle". Overview The citric acid cycle is a metabolic pathway that connects carbohydrate, fat, and protein metabolism. The reactions of the cycle are carried out by eight enzymes that completely oxidize acetate (a two carbon molecule), in the form of acetyl-CoA, into two molecules each of carbon dioxide and water. Through catabolism of sugars, fats, and proteins, the two-carbon organic product acetyl-CoA is produced which enters the citric acid cycle. 
The reactions of the cycle also convert three equivalents of nicotinamide adenine dinucleotide (NAD+) into three equivalents of reduced NAD (NADH), one equivalent of flavin adenine dinucleotide (FAD) into one equivalent of FADH2, and one equivalent each of guanosine diphosphate (GDP) and inorganic phosphate (Pi) into one equivalent of guanosine triphosphate (GTP). The NADH and FADH2 generated by the citric acid cycle are, in turn, used by the oxidative phosphorylation pathway to generate energy-rich ATP. One of the primary sources of acetyl-CoA is the breakdown of sugars by glycolysis, which yields pyruvate that in turn is decarboxylated by the pyruvate dehydrogenase complex, generating acetyl-CoA according to the following reaction scheme: pyruvate + CoA + NAD+ → acetyl-CoA + CO2 + NADH. The product of this reaction, acetyl-CoA, is the starting point for the citric acid cycle. Acetyl-CoA may also be obtained from the oxidation of fatty acids. Below is a schematic outline of the cycle: The citric acid cycle begins with the transfer of a two-carbon acetyl group from acetyl-CoA to the four-carbon acceptor compound (oxaloacetate) to form a six-carbon compound (citrate). The citrate then goes through a series of chemical transformations, losing two carboxyl groups as CO2. The carbons lost as CO2 originate from what was oxaloacetate, not directly from acetyl-CoA. The carbons donated by acetyl-CoA become part of the oxaloacetate carbon backbone after the first turn of the citric acid cycle. Loss of the acetyl-CoA-donated carbons as CO2 requires several turns of the citric acid cycle. However, because of the role of the citric acid cycle in anabolism, they might not be lost, since many citric acid cycle intermediates are also used as precursors for the biosynthesis of other molecules. Most of the electrons made available by the oxidative steps of the cycle are transferred to NAD+, forming NADH. For each acetyl group that enters the citric acid cycle, three molecules of NADH are produced. The citric acid cycle includes a series of redox reactions in mitochondria. In addition, electrons from the succinate oxidation step are transferred first to the FAD cofactor of succinate dehydrogenase, reducing it to FADH2, and eventually to ubiquinone (Q) in the mitochondrial membrane, reducing it to ubiquinol (QH2) which is a substrate of the electron transfer chain at the level of Complex III. For every NADH and FADH2 that are produced in the citric acid cycle, 2.5 and 1.5 ATP molecules are generated in oxidative phosphorylation, respectively. At the end of each cycle, the four-carbon oxaloacetate has been regenerated, and the cycle continues. Steps There are ten basic steps in the citric acid cycle, as outlined below. The cycle is continuously supplied with new carbon in the form of acetyl-CoA, entering at step 0 in the table. Two carbon atoms are oxidized to CO2, and the energy from these reactions is transferred to other metabolic processes through GTP (or ATP), and as electrons in NADH and QH2. The NADH generated in the citric acid cycle may later be oxidized (donate its electrons) to drive ATP synthesis in a type of process called oxidative phosphorylation. FADH2 is covalently attached to succinate dehydrogenase, an enzyme which functions both in the citric acid cycle and the mitochondrial electron transport chain in oxidative phosphorylation. 
FADH2, therefore, facilitates transfer of electrons to coenzyme Q, which is the final electron acceptor of the reaction catalyzed by the succinate:ubiquinone oxidoreductase complex, also acting as an intermediate in the electron transport chain. Mitochondria in animals, including humans, possess two succinyl-CoA synthetases: one that produces GTP from GDP, and another that produces ATP from ADP. Plants have the type that produces ATP (ADP-forming succinyl-CoA synthetase). Several of the enzymes in the cycle may be loosely associated in a multienzyme protein complex within the mitochondrial matrix. The GTP that is formed by GDP-forming succinyl-CoA synthetase may be utilized by nucleoside-diphosphate kinase to form ATP (the catalyzed reaction is GTP + ADP → GDP + ATP). Products Products of the first turn of the cycle are one GTP (or ATP), three NADH, one FADH2 and two CO2. Because two acetyl-CoA molecules are produced from each glucose molecule, two cycles are required per glucose molecule. Therefore, at the end of two cycles, the products are: two GTP, six NADH, two FADH2, and four CO2. The above reactions are balanced if Pi represents the H2PO4− ion, ADP and GDP the ADP2− and GDP2− ions, respectively, and ATP and GTP the ATP3− and GTP3− ions, respectively. The total number of ATP molecules obtained after complete oxidation of one glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is estimated to be between 30 and 38. Efficiency The theoretical maximum yield of ATP through oxidation of one molecule of glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is 38 (assuming 3 molar equivalents of ATP per equivalent NADH and 2 ATP per FADH2). In eukaryotes, two equivalents of NADH and two equivalents of ATP are generated in glycolysis, which takes place in the cytoplasm. If transported using the glycerol phosphate shuttle rather than the malate–aspartate shuttle, transport of two of these equivalents of NADH into the mitochondria effectively consumes two equivalents of ATP, thus reducing the net production of ATP to 36. Furthermore, inefficiencies in oxidative phosphorylation due to leakage of protons across the mitochondrial membrane and slippage of the ATP synthase/proton pump commonly reduces the ATP yield from NADH and FADH2 to less than the theoretical maximum yield. The observed yields are, therefore, closer to ~2.5 ATP per NADH and ~1.5 ATP per FADH2, further reducing the total net production of ATP to approximately 30. An assessment of the total ATP yield with newly revised proton-to-ATP ratios provides an estimate of 29.85 ATP per glucose molecule. Variation While the citric acid cycle is in general highly conserved, there is significant variability in the enzymes found in different taxa (note that the diagrams on this page are specific to the mammalian pathway variant). Some differences exist between eukaryotes and prokaryotes. The conversion of D-threo-isocitrate to 2-oxoglutarate is catalyzed in eukaryotes by the NAD+-dependent EC 1.1.1.41, while prokaryotes employ the NADP+-dependent EC 1.1.1.42. Similarly, the conversion of (S)-malate to oxaloacetate is catalyzed in eukaryotes by the NAD+-dependent EC 1.1.1.37, while most prokaryotes utilize a quinone-dependent enzyme, EC 1.1.5.4. A step with significant variability is the conversion of succinyl-CoA to succinate. Most organisms utilize EC 6.2.1.5, succinate–CoA ligase (ADP-forming) (despite its name, the enzyme operates in the pathway in the direction of ATP formation). 
In mammals a GTP-forming enzyme, succinate–CoA ligase (GDP-forming) (EC 6.2.1.4) also operates. The level of utilization of each isoform is tissue dependent. In some acetate-producing bacteria, such as Acetobacter aceti, an entirely different enzyme catalyzes this conversion – EC 2.8.3.18, succinyl-CoA:acetate CoA-transferase. This specialized enzyme links the TCA cycle with acetate metabolism in these organisms. Some bacteria, such as Helicobacter pylori, employ yet another enzyme for this conversion – succinyl-CoA:acetoacetate CoA-transferase (EC 2.8.3.5). Some variability also exists at the previous step – the conversion of 2-oxoglutarate to succinyl-CoA. While most organisms utilize the ubiquitous NAD+-dependent 2-oxoglutarate dehydrogenase, some bacteria utilize a ferredoxin-dependent 2-oxoglutarate synthase (EC 1.2.7.3). Other organisms, including obligately autotrophic and methanotrophic bacteria and archaea, bypass succinyl-CoA entirely, and convert 2-oxoglutarate to succinate via succinate semialdehyde, using EC 4.1.1.71, 2-oxoglutarate decarboxylase, and EC 1.2.1.79, succinate-semialdehyde dehydrogenase. In cancer, there are substantial metabolic derangements that occur to ensure the proliferation of tumor cells, and consequently metabolites can accumulate which serve to facilitate tumorigenesis, dubbed oncometabolites. Among the best characterized oncometabolites is 2-hydroxyglutarate which is produced through a heterozygous gain-of-function mutation (specifically a neomorphic one) in isocitrate dehydrogenase (IDH) (which under normal circumstances catalyzes the oxidation of isocitrate to oxalosuccinate, which then spontaneously decarboxylates to alpha-ketoglutarate, as discussed above; in this case an additional reduction step occurs after the formation of alpha-ketoglutarate via NADPH to yield 2-hydroxyglutarate), and hence IDH is considered an oncogene. Under physiological conditions, 2-hydroxyglutarate is a minor product of several metabolic pathways as an error but readily converted to alpha-ketoglutarate via hydroxyglutarate dehydrogenase enzymes (L2HGDH and D2HGDH) but does not have a known physiologic role in mammalian cells; of note, in cancer, 2-hydroxyglutarate is likely a terminal metabolite as isotope labelling experiments of colorectal cancer cell lines show that its conversion back to alpha-ketoglutarate is too low to measure. In cancer, 2-hydroxyglutarate serves as a competitive inhibitor for a number of enzymes that facilitate reactions via alpha-ketoglutarate in alpha-ketoglutarate-dependent dioxygenases. This mutation results in several important changes to the metabolism of the cell. For one thing, because there is an extra NADPH-catalyzed reduction, this can contribute to depletion of cellular stores of NADPH and also reduce levels of alpha-ketoglutarate available to the cell. In particular, the depletion of NADPH is problematic because NADPH is highly compartmentalized and cannot freely diffuse between the organelles in the cell. It is produced largely via the pentose phosphate pathway in the cytoplasm. The depletion of NADPH results in increased oxidative stress within the cell as it is a required cofactor in the production of GSH, and this oxidative stress can result in DNA damage. There are also changes on the genetic and epigenetic level through the function of histone lysine demethylases (KDMs) and ten-eleven translocation (TET) enzymes; ordinarily TETs hydroxylate 5-methylcytosines to prime them for demethylation. 
However, in the absence of alpha-ketoglutarate this cannot be done and there is hence hypermethylation of the cell's DNA, serving to promote epithelial-mesenchymal transition (EMT) and inhibit cellular differentiation. A similar phenomenon is observed for the Jumonji C family of KDMs which require a hydroxylation to perform demethylation at the epsilon-amino methyl group. Additionally, the inability of prolyl hydroxylases to catalyze reactions results in stabilization of hypoxia-inducible factor alpha, which is necessary to promote degradation of the latter (as under conditions of low oxygen there will not be adequate substrate for hydroxylation). This results in a pseudohypoxic phenotype in the cancer cell that promotes angiogenesis, metabolic reprogramming, cell growth, and migration. Regulation Allosteric regulation by metabolites. The regulation of the citric acid cycle is largely determined by product inhibition and substrate availability. If the cycle were permitted to run unchecked, large amounts of metabolic energy could be wasted in overproduction of reduced coenzyme such as NADH and ATP. The major eventual substrate of the cycle is ADP which gets converted to ATP. A reduced amount of ADP causes accumulation of precursor NADH which in turn can inhibit a number of enzymes. NADH, a product of all dehydrogenases in the citric acid cycle with the exception of succinate dehydrogenase, inhibits pyruvate dehydrogenase, isocitrate dehydrogenase, α-ketoglutarate dehydrogenase, and also citrate synthase. Acetyl-coA inhibits pyruvate dehydrogenase, while succinyl-CoA inhibits alpha-ketoglutarate dehydrogenase and citrate synthase. When tested in vitro with TCA enzymes, ATP inhibits citrate synthase and α-ketoglutarate dehydrogenase; however, ATP levels do not change more than 10% in vivo between rest and vigorous exercise. There is no known allosteric mechanism that can account for large changes in reaction rate from an allosteric effector whose concentration changes less than 10%. Citrate is used for feedback inhibition, as it inhibits phosphofructokinase, an enzyme involved in glycolysis that catalyses formation of fructose 1,6-bisphosphate, a precursor of pyruvate. This prevents a constant high rate of flux when there is an accumulation of citrate and a decrease in substrate for the enzyme. Regulation by calcium. Calcium is also used as a regulator in the citric acid cycle. Calcium levels in the mitochondrial matrix can reach up to the tens of micromolar levels during cellular activation. It activates pyruvate dehydrogenase phosphatase which in turn activates the pyruvate dehydrogenase complex. Calcium also activates isocitrate dehydrogenase and α-ketoglutarate dehydrogenase. This increases the reaction rate of many of the steps in the cycle, and therefore increases flux throughout the pathway. Transcriptional regulation. There is a link between intermediates of the citric acid cycle and the regulation of hypoxia-inducible factors (HIF). HIF plays a role in the regulation of oxygen homeostasis, and is a transcription factor that targets angiogenesis, vascular remodeling, glucose utilization, iron transport and apoptosis. HIF is synthesized constitutively, and hydroxylation of at least one of two critical proline residues mediates their interaction with the von Hippel Lindau E3 ubiquitin ligase complex, which targets them for rapid degradation. This reaction is catalysed by prolyl 4-hydroxylases. 
Fumarate and succinate have been identified as potent inhibitors of prolyl hydroxylases, thus leading to the stabilisation of HIF. Major metabolic pathways converging on the citric acid cycle Several catabolic pathways converge on the citric acid cycle. Most of these reactions add intermediates to the citric acid cycle, and are therefore known as anaplerotic reactions, from the Greek meaning to "fill up". These increase the amount of acetyl CoA that the cycle is able to carry, increasing the mitochondrion's capability to carry out respiration if this is otherwise a limiting factor. Processes that remove intermediates from the cycle are termed "cataplerotic" reactions. In this section and in the next, the citric acid cycle intermediates are indicated in italics to distinguish them from other substrates and end-products. Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix. Here they can be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, as in the normal cycle. However, it is also possible for pyruvate to be carboxylated by pyruvate carboxylase to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle, and is therefore an anaplerotic reaction, increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in muscle) are suddenly increased by activity. In the citric acid cycle all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate, and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that that additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence the addition of any one of them to the cycle has an anaplerotic effect, and its removal has a cataplerotic effect. These anaplerotic and cataplerotic reactions will, during the course of the cycle, increase or decrease the amount of oxaloacetate available to combine with acetyl-CoA to form citric acid. This in turn increases or decreases the rate of ATP production by the mitochondrion, and thus the availability of ATP to the cell. Acetyl-CoA, on the other hand, derived from pyruvate oxidation, or from the beta-oxidation of fatty acids, is the only fuel to enter the citric acid cycle. With each turn of the cycle one molecule of acetyl-CoA is consumed for every molecule of oxaloacetate present in the mitochondrial matrix, and is never regenerated. It is the oxidation of the acetate portion of acetyl-CoA that produces CO2 and water, with the energy thus released captured in the form of ATP. The three steps of beta-oxidation resemble the steps that occur in the production of oxaloacetate from succinate in the TCA cycle. Acyl-CoA is oxidized to trans-Enoyl-CoA while FAD is reduced to FADH2, which is similar to the oxidation of succinate to fumarate. Following, trans-enoyl-CoA is hydrated across the double bond to beta-hydroxyacyl-CoA, just like fumarate is hydrated to malate. Lastly, beta-hydroxyacyl-CoA is oxidized to beta-ketoacyl-CoA while NAD+ is reduced to NADH, which follows the same process as the oxidation of malate to oxaloacetate. 
In the liver, the carboxylation of cytosolic pyruvate into intra-mitochondrial oxaloacetate is an early step in the gluconeogenic pathway which converts lactate and de-aminated alanine into glucose, under the influence of high levels of glucagon and/or epinephrine in the blood. Here the addition of oxaloacetate to the mitochondrion does not have a net anaplerotic effect, as another citric acid cycle intermediate (malate) is immediately removed from the mitochondrion to be converted into cytosolic oxaloacetate, which is ultimately converted into glucose, in a process that is almost the reverse of glycolysis. In protein catabolism, proteins are broken down by proteases into their constituent amino acids. Their carbon skeletons (i.e. the de-aminated amino acids) may either enter the citric acid cycle as intermediates (e.g. alpha-ketoglutarate derived from glutamate or glutamine), having an anaplerotic effect on the cycle, or, in the case of leucine, isoleucine, lysine, phenylalanine, tryptophan, and tyrosine, they are converted into acetyl-CoA which can be burned to CO2 and water, or used to form ketone bodies, which too can only be burned in tissues other than the liver where they are formed, or excreted via the urine or breath. These latter amino acids are therefore termed "ketogenic" amino acids, whereas those that enter the citric acid cycle as intermediates can only be cataplerotically removed by entering the gluconeogenic pathway via malate which is transported out of the mitochondrion to be converted into cytosolic oxaloacetate and ultimately into glucose. These are the so-called "glucogenic" amino acids. De-aminated alanine, cysteine, glycine, serine, and threonine are converted to pyruvate and can consequently either enter the citric acid cycle as oxaloacetate (an anaplerotic reaction) or as acetyl-CoA to be disposed of as CO2 and water. In fat catabolism, triglycerides are hydrolyzed to break them into fatty acids and glycerol. In the liver the glycerol can be converted into glucose via dihydroxyacetone phosphate and glyceraldehyde-3-phosphate by way of gluconeogenesis. In skeletal muscle, glycerol is used in glycolysis by converting glycerol into glycerol-3-phosphate, then into dihydroxyacetone phosphate (DHAP), then into glyceraldehyde-3-phosphate. In many tissues, especially heart and skeletal muscle tissue, fatty acids are broken down through a process known as beta oxidation, which results in the production of mitochondrial acetyl-CoA, which can be used in the citric acid cycle. Beta oxidation of fatty acids with an odd number of methylene bridges produces propionyl-CoA, which is then converted into succinyl-CoA and fed into the citric acid cycle as an anaplerotic intermediate. The total energy gained from the complete breakdown of one (six-carbon) molecule of glucose by glycolysis, the formation of 2 acetyl-CoA molecules, their catabolism in the citric acid cycle, and oxidative phosphorylation equals about 30 ATP molecules, in eukaryotes. The number of ATP molecules derived from the beta oxidation of a 6 carbon segment of a fatty acid chain, and the subsequent oxidation of the resulting 3 molecules of acetyl-CoA is 40. Citric acid cycle intermediates serve as substrates for biosynthetic processes In this subheading, as in the previous one, the TCA intermediates are identified by italics. Several of the citric acid cycle intermediates are used for the synthesis of important compounds, which will have significant cataplerotic effects on the cycle. 
Acetyl-CoA cannot be transported out of the mitochondrion. To obtain cytosolic acetyl-CoA, citrate is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate (and then converted back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). The cytosolic acetyl-CoA is used for fatty acid synthesis and the production of cholesterol. Cholesterol can, in turn, be used to synthesize the steroid hormones, bile salts, and vitamin D. The carbon skeletons of many non-essential amino acids are made from citric acid cycle intermediates. To turn them into amino acids, the alpha keto-acids formed from the citric acid cycle intermediates have to acquire their amino groups from glutamate in a transamination reaction, in which pyridoxal phosphate is a cofactor. In this reaction the glutamate is converted into alpha-ketoglutarate, which is a citric acid cycle intermediate. The intermediates that can provide the carbon skeletons for amino acid synthesis are oxaloacetate which forms aspartate and asparagine; and alpha-ketoglutarate which forms glutamine, proline, and arginine. Of these amino acids, aspartate and glutamine are used, together with carbon and nitrogen atoms from other sources, to form the purines that are used as the bases in DNA and RNA, as well as in ATP, AMP, GTP, NAD, FAD and CoA. The pyrimidines are partly assembled from aspartate (derived from oxaloacetate). The pyrimidines, thymine, cytosine and uracil, form the complementary bases to the purine bases in DNA and RNA, and are also components of CTP, UMP, UDP and UTP. The majority of the carbon atoms in the porphyrins come from the citric acid cycle intermediate, succinyl-CoA. These molecules are an important component of the hemoproteins, such as hemoglobin, myoglobin and various cytochromes. During gluconeogenesis, mitochondrial oxaloacetate is reduced to malate, which is then transported out of the mitochondrion, to be oxidized back to oxaloacetate in the cytosol. Cytosolic oxaloacetate is then decarboxylated to phosphoenolpyruvate by phosphoenolpyruvate carboxykinase, which is the rate-limiting step in the conversion of nearly all the gluconeogenic precursors (such as the glucogenic amino acids and lactate) into glucose by the liver and kidney. Because the citric acid cycle is involved in both catabolic and anabolic processes, it is known as an amphibolic pathway. Glucose feeds the TCA cycle via circulating lactate The metabolic role of lactate is well recognized as a fuel for tissues, and it is of interest in mitochondrial cytopathies such as DPH cytopathy and in the scientific field of oncology (tumors). In the classical Cori cycle, muscles produce lactate which is then taken up by the liver for gluconeogenesis. New studies suggest that lactate can be used as a source of carbon for the TCA cycle. Evolution It is believed that components of the citric acid cycle were derived from anaerobic bacteria, and that the TCA cycle itself may have evolved more than once. It may even predate biosis: the substrates appear to undergo most of the reactions spontaneously in the presence of persulfate radicals. Theoretically, several alternatives to the TCA cycle exist; however, the TCA cycle appears to be the most efficient. If several TCA alternatives had evolved independently, they all appear to have converged to the TCA cycle. 
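As a rough bookkeeping sketch (not part of the original article), the following C program simply tallies the per-turn stoichiometry and phosphorylation ratios quoted in the Products and Efficiency sections above. The upstream glycolysis and pyruvate dehydrogenase contributions are included for the whole-glucose total, and the cost of shuttling cytosolic NADH into the mitochondrion is ignored, which is why the total lands near the upper end of the approximately 30 ATP estimate given in the Efficiency section:

```c
#include <stdio.h>

int main(void) {
    const double atp_per_nadh  = 2.5;   /* from the Efficiency section */
    const double atp_per_fadh2 = 1.5;

    /* Per turn of the citric acid cycle (one acetyl-CoA). */
    const int nadh_per_turn  = 3;
    const int fadh2_per_turn = 1;
    const int gtp_per_turn   = 1;       /* substrate-level, counted as ATP */

    double per_turn = nadh_per_turn * atp_per_nadh
                    + fadh2_per_turn * atp_per_fadh2
                    + gtp_per_turn;
    double cycle_total = 2.0 * per_turn;   /* two acetyl-CoA per glucose */

    /* Upstream of the cycle: glycolysis (2 ATP + 2 NADH) and
       pyruvate dehydrogenase (2 NADH) per glucose; shuttle costs ignored. */
    double upstream = 2.0 + 2.0 * atp_per_nadh + 2.0 * atp_per_nadh;

    printf("ATP per cycle turn:      %.1f\n", per_turn);
    printf("ATP from 2 cycle turns:  %.1f\n", cycle_total);
    printf("ATP per glucose (total): %.1f\n", cycle_total + upstream);
    return 0;
}
```

With these assumptions the program reports 10 ATP per turn, 20 ATP from the two turns per glucose, and about 32 ATP overall; applying the shuttle and proton-leak corrections discussed in the Efficiency section brings the whole-glucose figure down toward 30.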
See also Calvin cycle Glyoxylate cycle Reverse (reductive) Krebs cycle Krebs cycle (simple English) References External links An animation of the citric acid cycle at Smith College Citric acid cycle variants at MetaCyc Pathways connected to the citric acid cycle at Kyoto Encyclopedia of Genes and Genomes metpath: Interactive representation of the citric acid cycle Cellular respiration Exercise biochemistry Exercise physiology Metabolic pathways 1937 in biology
Citric acid cycle
[ "Chemistry", "Biology" ]
6,890
[ "Carbohydrate metabolism", "Cellular respiration", "Biochemistry", "Exercise biochemistry", "Metabolic pathways", "Metabolism", "Citric acid cycle" ]
6,821
https://en.wikipedia.org/wiki/Military%20engineering%20vehicle
A military engineering vehicle is a vehicle built for construction work or for the transportation of combat engineers on the battlefield. These vehicles may be modified civilian equipment (such as the armoured bulldozers that many nations field) or purpose-built military vehicles (such as the AVRE). The first appearance of such vehicles coincided with the appearance of the first tanks; these early examples were modified Mark V tanks adapted for bridging and mine clearance. Modern military engineering vehicles are expected to fulfill numerous roles, such as bulldozer, crane, grader, excavator, dump truck, breaching vehicle, bridging vehicle, military ferry, amphibious crossing vehicle, and combat engineer section carrier. History World War One A Heavy RE tank was developed shortly after World War I by Major Giffard LeQuesne Martel RE. This vehicle was a modified Mark V tank. Two support functions for these Engineer Tanks were developed: bridging and mine clearance. The bridging component involved an assault bridge, designed by Major Charles Inglis RE, called the Canal Lock Bridge, which had sufficient length to span a canal lock. Major Martel mated the bridge with the tank and used hydraulic power generated by the tank's engine to maneuver the bridge into place. For mine clearance, the tanks were equipped with 2-ton rollers. 1918-1939 Between the wars, the Experimental Bridging Establishment (EBE) developed various experimental bridging tanks to test a series of methods for bridging obstacles. Captain SG Galpin RE conceived a prototype Light Tank Mk V to test the Scissors Assault Bridge. This concept was realised by Captain SA Stewart RE with significant input from a Mr DM Delany, a scientific civil servant in the employ of the EBE. MB Wild & Co, Birmingham, also developed a bridge that could span gaps of 26 feet using a complex system of steel wire ropes and a traveling jib, where the front section was projected and then attached to the rear section prior to launching the bridge. This system had to be abandoned due to lack of success in getting it to work; however, the idea was later used successfully on the Beaver Bridge Laying Tank. Early World War Two Once World War Two had begun, the development of armoured vehicles for use by engineers in the field was accelerated under Delaney's direction. The EBE rapidly developed an assault bridge carried on a modified Covenanter tank capable of deploying a 24-ton tracked load capacity bridge (Class 24) that could span gaps of 30 feet. However, it did not see service in the British armed forces, and all vehicles were passed on to Allied forces such as Australia and Czechoslovakia. A Class 30 design superseded the Class 24 with no real re-design, simply the substitution of the Covenanter tank with a suitably modified Valentine. As tanks in the war got heavier, a new bridge capable of supporting them was developed. A heavily modified Churchill used a single-piece bridge mounted on a turret-less tank and was able to lay the bridge in 90 seconds; this bridge was able to carry a 60-ton tracked or 40-ton wheeled load. Late World War 2: Hobart's 'Funnies' and D-Day Hobart's Funnies were a number of unusually modified tanks operated during the Second World War by the 79th Armoured Division of the British Army or by specialists from the Royal Engineers. 
They were designed in light of problems that more standard tanks experienced during the amphibious Dieppe Raid, so that the new models would be able to overcome the problems of the planned Invasion of Normandy. These tanks played a major part on the Commonwealth beaches during the landings. They were forerunners of the modern combat engineering vehicle and were named after their commander, Major General Percy Hobart. Hobart's unusual, specialized tanks, nicknamed "funnies", included: AVRE (Assault Vehicle Royal Engineer), used to protect engineers in an assault role and to enable combat engineering. ARK (Armoured Ramp Carrier), where the tank itself was the "bridge". Multiple vehicles could be used together to span gaps both vertically and horizontally. The tank had the turret removed and trackways fitted to the hull. Ramps were attached at each end of the trackways, extending the bridging potential and allowing its use in difficult terrain. The tank would need recovery after its use was no longer required. Crab: A modified Sherman tank equipped with a mine flail, a rotating cylinder of weighted chains that exploded mines in the path of the tank. Armored bulldozer: A conventional Caterpillar D7 bulldozer fitted with armour to protect the driver and the engine. Their job was to clear the invasion beaches of obstacles and to make roads accessible by clearing rubble and filling in bomb craters. Conversions were carried out by Caterpillar importer Jack Olding & Company Ltd of Hatfield. Centaur bulldozer: A Centaur tank with the turret removed and fitted with a simple winch-operated bulldozer blade. These were produced because of a need for a well-armoured obstacle-clearing vehicle that, unlike a conventional bulldozer, would be fast enough to keep up with tank formations. They were not used on D-Day but were issued to the 79th Armoured Division in Belgium during the latter part of 1944. In U.S. forces, Sherman tanks were also fitted with dozer blades, and anti-mine roller devices were developed, enabling engineering operations and providing similar capabilities. Post war Post war, the value of combat engineering vehicles had been proven, and armoured multi-role engineering vehicles have been added to the majority of armoured forces. Types Civilian and militarized heavy equipment Military engineering can employ a wide variety of heavy equipment in much the same ways as this equipment is used outside the military. Bulldozers, cranes, graders, excavators, dump trucks, loaders, and backhoes all see extensive use by military engineers. Military engineers may also use civilian heavy equipment which has been modified for military applications. Typically, this involves adding armour for protection from battlefield hazards such as artillery, unexploded ordnance, mines, and small arms fire. Often this protection is provided by armour plates and steel jackets. Some examples of armoured civilian heavy equipment are the IDF Caterpillar D9, American D7 TPK, Canadian D6 armoured bulldozer, cranes, graders, excavators, and M35 2-1/2 ton cargo truck. Militarized heavy equipment may also take the form of traditional civilian equipment designed and built to unique military specifications. These vehicles typically sacrifice some of the capability of their civilian counterparts in order to gain greater speed and independence from prime movers. 
Examples of this type of vehicle include high speed backhoes such as the Australian Army's High Mobility Engineering Vehicle (HMEV) from Thales or the Canadian Army's Multi-Purpose Engineer Vehicle (MPEV) from Arva. The main article for civilian heavy equipment is: Heavy equipment (construction) Armoured engineering vehicle Typically based on the platform of a main battle tank, these vehicles go by different names depending upon the country of use or manufacture. In the US the term "combat engineer vehicle (CEV)" is used, in the UK the terms "Armoured Vehicle Royal Engineers (AVRE)" or Armoured Repair and Recovery Vehicle (ARRV) are used, while in Canada and other commonwealth nations the term "armoured engineer vehicle (AEV)" is used. There is no set template for what such a vehicle will look like, yet likely features include a large dozer blade or mine ploughs, a large caliber demolition cannon, augers, winches, excavator arms and cranes or lifting booms. These vehicles are designed to directly conduct obstacle breaching operations and to conduct other earth-moving and engineering work on the battlefield. Good examples of this type of vehicle include the UK Trojan AVRE, the Russian IMR, and the US M728 Combat Engineer Vehicle. Although the term "armoured engineer vehicle" is used specifically to describe these multi-purpose tank based engineering vehicles, that term is also used more generically in British and Commonwealth militaries to describe all heavy tank based engineering vehicles used in the support of mechanized forces. Thus, "armoured engineer vehicle" used generically would refer to AEV, AVLB, Assault Breachers, and so on. Armoured earth mover Lighter and less multi-functional than the CEVs or AEVs described above, these vehicles are designed to conduct earth-moving work on the battlefield and generally be anti-tank explosive proof. These vehicles have greater high speed mobility than traditional heavy equipment and are protected against the effects of blast and fragmentation. Good examples are the American M9 ACE and the UK FV180 Combat Engineer Tractor. Breaching vehicle These vehicles are equipped with mechanical or other means for the breaching of man-made obstacles. Common types of breaching vehicles include mechanical flails, mine plough vehicles, and mine roller vehicles. In some cases, these vehicles will also mount mine-clearing line charges. Breaching vehicles may be either converted armoured fighting vehicles or purpose built vehicles. In larger militaries, converted AFV are likely to be used as assault breachers while the breached obstacle is still covered by enemy observation and fire, and then purpose built breaching vehicles will create additional lanes for following forces. Good examples of breaching vehicles include the US M1150 assault breacher vehicle, the UK Aardvark JSFU, and the Singaporean Trailblazer. Bridging vehicles Several types of military bridging vehicles have been developed. An armoured vehicle-launched bridge (AVLB) is typically a modified tank hull converted to carry a bridge into battle in order to support crossing ditches, small waterways, or other gap obstacles. Another type of bridging vehicle is the truck launched bridge. The Soviet TMM bridging truck could carry and launch a 10-meter bridge that could be daisy-chained with other TMM bridges to cross larger obstacles. 
More recent developments have seen AVLB and truck-launched bridge designs converge on launching systems that can be mounted on either a tank or a truck, carrying bridges that are capable of supporting heavy main battle tanks. Earlier examples of bridging vehicles include a type in which a converted tank hull is the bridge. On these vehicles, the hull deck comprises the main portion of the treadway, while ramps extend from the front and rear of the vehicle to allow other vehicles to climb over the bridging vehicle and cross obstacles. An example of this type of armoured bridging vehicle was the Churchill Ark used in the Second World War. Combat engineer section carriers Another type of combat engineering vehicle is the armoured fighting vehicle which is used to transport sappers (combat engineers) and can be fitted with a bulldozer's blade and other mine-breaching devices. They are often used as APCs because of their carrying ability and heavy protection. They are usually armed with machine guns and grenade launchers and are usually tracked to provide enough tractive force to push blades and rakes. Some examples are the U.S. M113 APC, IDF Puma, Nagmachon, Husky, and U.S. M1132 ESV (a Stryker variant). Military ferries and amphibious crossing vehicles One of the major tasks of military engineering is crossing major rivers. Several military engineering vehicles have been developed in various nations to achieve this task. One of the more common types is the amphibious ferry such as the M3 Amphibious Rig. These vehicles are self-propelled on land, can transform into raft-type ferries when in the water, and often multiple vehicles can connect to form larger rafts or floating bridges. Other types of military ferries, such as the Soviet Plavayushij Transportyor - Srednyj, are able to load while still on land and transport other vehicles cross-country and over water. In addition to amphibious crossing vehicles, military engineers may also employ several types of boats. Military assault boats are small boats propelled by oars or an outboard motor and used to ferry dismounted infantry across water. Tank-based combat engineering vehicles Most CEVs are armoured fighting vehicles that may be based on a tank chassis and have special attachments in order to breach obstacles. Such attachments may include dozer blades, mine rollers, cranes, etc. An example of an engineering vehicle of this kind is a bridgelaying tank, which replaces the turret with a segmented hydraulic bridge. Hobart's Funnies of the Second World War were a wide variety of armoured vehicles for combat engineering tasks. They were allocated to the initial beachhead assaults by the British and Commonwealth forces in the D-Day landings. Churchill tank The British Churchill tank, because of its good cross-country performance and capacious interior with side hatches, became the most widely adapted, the base unit being the AVRE carrying a large demolition gun. M4 Sherman Dozer: The bulldozer blade was a valuable battlefield tool on the WWII M4 Sherman tank. A 1943 field modification added the hydraulic dozer blade from a Caterpillar D8 to a Sherman. The later M1 dozer blade was standardized to fit any Sherman with VVSS suspension and the M1A1 would fit the wider HVSS. Some M4s made for the Engineer Corps had the blades fitted permanently and the turrets removed. In the early stages of the 1944 Battle of Normandy, before the Culin Cutter, breaking through the Bocage hedgerows relied heavily on Sherman dozers. 
M4 Doozit: Engineer Corps' Sherman dozer with demolition charge on wooden platform and T40 Whizbang rocket launcher (the Doozit did not see combat but the Whizbang did). Bridgelayer: The US field-converted a few M4s in Italy with an A-frame-supported bridge and heavy rear counter-weight to make the Mobile Assault Bridge. British developments for Shermans included the fascine (used by 79th Armoured Division), Crib, Twaby Ark, Octopus, Plymouth (Bailey bridge), and AVRE (SBG bridge). Mine-clearing: British conversions included the Sherman Crab. The US developed an extensive array of experimental types: T15/E1/E2: Series of mine-resistant Shermans based on the T14 kit. Cancelled at war's end. Mine exploder T1E1 roller (Earthworm): Three sets of 6 discs made from armor plate. Mine exploder T1E2 roller: Two forward units with 7 discs only. Experimental. Mine exploder T1E3/M1 roller (Aunt Jemima): Two forward units with five 10' discs. Most widely used T1 variant, adopted as the M1. Mine exploder T1E4 roller: 16 discs. Mine exploder T1E5 roller: T1E3/M1 w/ smaller wheels. Experimental. Mine exploder T1E6 roller: T1E3/M1 w/ serrated-edged discs. Experimental. Mine exploder T2 flail: British Crab I mine flail. Mine exploder T3 flail: Based on British Scorpion flail. Development stopped in 1943. Mine exploder T3E1 flail: T3 w/ longer arms and sand-filled rotor. Cancelled. Mine exploder T3E2 flail: E1 variant, rotor replaced with steel drum of larger diameter. Development terminated at war's end. Mine exploder T4: British Crab II mine flail. Mine exploder T7: Frame with small rollers with two discs each. Abandoned. Mine exploder T8 (Johnny Walker): Steel plungers on a pivot frame designed to pound on the ground. Vehicle steering was adversely affected. Mine exploder T9: 6' roller. Difficult to maneuver. Mine exploder T9E1: Lightened version, but proved unsatisfactory because it failed to explode all mines. Mine exploder T10: Remote control unit designed to be controlled by the following tank. Cancelled. Mine exploder T11: Six forward-firing mortars to set off mines. Experimental. Mine exploder T12: 23 forward-firing mortars. Apparently effective, but cancelled. Mine exploder T14: Direct modification to a Sherman tank, upgraded belly armor and reinforced tracks. Cancelled. Mine excavator T4: Plough device. Developed during 1942, but abandoned. Mine excavator T5/E1/E2: T4 variant w/ v-shaped plough. E1/E2 was a further improvement. Mine excavator T5E3: T5E1/E2 rigged to the hydraulic lift mechanism from the M1 dozer kit to control depth. Mine excavator T6: Based on the v-shape/T5, unable to control depth. Mine excavator T2/E1/E2: Based on the T4/T5's, but rigged to the hydraulic lift mechanism from the M1 dozer kit to control depth. M60 M60A1 AVLB – Armored vehicle launched bridge, scissors bridge on M60A1 chassis. M60 AVLM – armored vehicle launched MICLIC (mine-clearing line charge), modified M60 AVLB with up to 2 MICLIC mounted over the rear of the vehicle. M60 Panther – M60 modified into a remotely controlled mine clearing tank. The turret is removed with the turret ring sealed, and the front of the vehicle is fitted with mine rollers. M728 CEV – M60A1-based combat engineer vehicle fitted with a folding A-frame crane and winch attached to the front of the turret, and an M135 165 mm demolition gun. Commonly fitted with the D7 bulldozer blade or mine-clearing equipment. M728A1 – Upgraded version of the M728 CEV. 
M1 M1 Grizzly combat mobility vehicle (CMV) M1 Panther II remote-controlled mine clearing vehicle M104 Wolverine heavy assault bridge M1074 Joint Assault Bridge System M1150 assault breacher vehicle Leopard 1 Biber (Beaver) armoured vehicle-launched bridge Pionierpanzer 1 Pionierpanzer 2 Dachs (Badger) armoured engineer vehicle Leopard 2 Panzerschnellbrücke 2 (Bridge layer) Pionierpanzer 3 Kodiak T-55/54 T-54 dozer - T-54 fitted with bulldozer blades for clearing soil, obstacles and snow. ALT-55 - Bulldozer version of the T-55 with large flat-plate superstructure, angular concave dozer blade on front and prominent hydraulic rams for the dozer blade. T-55 hull fitted with an excavator body and armoured cab. T-55 MARRS - Fitted with a Vickers armoured recovery vehicle kit. It has a large flat-plate turret with slightly chamfered sides, vertical rear and very chamfered front, and a large A-frame crane on the front of the turret. The crane has a cylindrical winch rope fed between the legs of the crane. A dozer blade is fitted to the hull front. MT-55 or MTU-55 (Tankoviy Mostoukladchik) - Soviet designator for the Czechoslovakian MT-55A bridge-layer tank with scissors bridge. MTU-12 (Tankoviy Mostoukladchik) - Bridge-layer tank with 12 m single-span bridge that can carry 50 tonnes. The system entered service in 1955; today only a very small number remains in service. Combat weight: 34 tonnes. MTU-20 (Ob'yekt 602) (Tankoviy Mostoukladchik) - The MTU-20 consists of a twin-treadway superstructure mounted on a modified T-54 tank chassis. Each treadway is made up of a box-type aluminum girder with a folding ramp attached to both ends to save space in the travel position. Because of that the vehicle with the bridge on board is only 11.6 m long, but the overall span length is 20 m. This is an increase of about 62% over that of the older MTU-1. The bridge is launched by the cantilever method. First the ramps are lowered and fully extended before the treadways are moved forward, with the full load of the bridge resting on the forward support plate during launch. The span is moved out over the launching girder until the far end reaches the far bank. Next the near end is lowered onto the near bank. This method of launching gives the bridgelayer a low silhouette which makes it less vulnerable to detection and destruction. MTU-20 based on the T-55 chassis. BTS-1 (Bronetankoviy Tyagach Sredniy - Medium Armoured Tractor) - This is basically a turretless T-54A with a stowage basket. BTS-1M - improved or remanufactured BTS-1. BTS-2 (Ob'yekt 9) (Bronetankoviy Tyagach Sredniy - Medium Armoured Tractor) - BTS-1 upgraded with a hoist and a small folding crane with a capacity of 3,000 kg. It was developed on the T-54 hull in 1951; series production started in 1955. The prototype Ob.9 had a commander's cupola with DShK 1938/46 machine gun, but the production model has a square commander's hatch, opening to the right. Combat weight: 32 tons. Only a very small number remains in service. BTS-3 (Bronetankoviy Tyagach Sredniy - Medium Armoured Tractor) - JVBT-55A in service with the Soviet Army. BTS-4 (Bronetankoviy Tyagach Sredniy - Medium Armoured Tractor) - Similar to BTS-2 but with snorkel. In the West generally known as T-54T. There are many different models, based on the T-44, T-54, T-55 and T-62. BTS-4B - Dozer-blade-equipped armoured recovery vehicle converted from the early odd-shaped turret versions of the T-54. 
BTS-4BM - Experimental version of the BTS-4B with the capacity to winch over the front of the vehicle. IMR (Ob'yekt 616) (Inzhenernaya Mashina Razgrazhdeniya) - Combat engineer vehicle. It is a T-55 that had its turret replaced with a hydraulically operated 2t crane. The crane can also be fitted with a small bucket or a pair of pincer-type grabs for removing trees and other obstacles. A hydraulically operated dozer blade mounts to the front of the hull; it can be used in a straight or V-configuration only. The IMR was developed in 1969 and entered service five years later. SPK-12G (Samokhodniy Pod’yomniy Kran) - Heavy crane mounted on T-55 chassis. Only two were built. BMR-2 (Boyevaya Mashina Razminirovaniya) - Mine-clearing tank based on T-55 chassis. This vehicle has no turret but a fixed superstructure, armed with an NSVT machine gun. It is fitted with a KMT-7 mine-clearing set and entered service around 1987 during the war in Afghanistan. Improved version of BMR-2 that has been seen fitted with a wide variety of mine roller designs. T-64 BAT-2 – Fast combat engineering vehicle with the lower hull and "small roadwheels" & suspension of the T-64. KMDB - Vehicles Based on the MT-T Prime Mover Chassis. The vehicle is powered by a V-64-4 multi-fuel diesel engine, developing 700 hp. This engine is derived from that used on the T-72 main battle tank. The 40-ton tractor sports a very large, all-axis-adjustable V-shaped hydraulic dozer blade at the front, a single soil ripper spike at the rear and a 2-ton crane on the top. The crew compartment holds 8 persons (driver, commander, radio operators plus a five-man sapper squad for dismounted tasks). The highly capable BAT-2 was designed to replace the old T-54/AT-T based BAT-M, but Warsaw Pact allies received only small numbers due to its high price, and the old and new vehicles served alongside each other. T-72 IMR-2 (Inzhenernaya Mashina Razgrashdeniya) - Combat engineering vehicle (CEV). It has a telescoping crane arm which can lift between 5 and 11 metric tons and utilizes pincers for uprooting trees. Pivoted at the front of the vehicle is a dozer blade that can be used in a V-configuration or as a straight dozer blade. When not required it is raised clear of the ground. On the vehicle's rear, a mine-clearing system is mounted. IMR-2M1 - Simplified model without the mine-clearing system. Entered service in 1987. IMR-2M2 - Improved version that is better suited for operations in dangerous situations, for example in contaminated areas. It entered service in 1990 and has a modified crane arm with a bucket instead of the pincers. IMR-2MA - Latest version with a bigger operator's cabin armed with a 12.7 mm NSV machine gun. Klin-1 - Remote-controlled IMR-2. MTU-72 (Ob'yekt 632) (Tankovyj Mostoukladchik) - Bridge layer based on the T-72 chassis. The overall layout and operating method of the system are similar to those of the MTU-20 and MTU bridgelayers. The bridge, when laid, has an overall length of 20 meters. The bridge has a maximum capacity of 50,000 kg, is 3.3 meters wide, and can span a gap of 18 m. By itself, the bridge weighs 6400 kg. The time required to lay the bridge is 3 minutes, and 8 minutes for retrieval. BLP 72 (Brückenlegepanzer) - The East German army had plans to develop a new bridgelayer tank that should have been ready for series production from 1987, but after several difficulties the project was canceled. 
See also AM 50 automatically launched assault bridge Armored bulldozer Armoured recovery vehicle Armoured Vehicle Royal Engineers Bulldozer Caterpillar D9 Combat engineer Hobart's funnies Sapper Terrier armoured combat engineer vehicle References External links Australian Provincial Reconstruction Team - Afghanistan Kodiak Armoured Engineer Vehicle English inventions
Military engineering vehicle
[ "Engineering" ]
5,476
[ "Engineering vehicles", "Military engineering", "Military engineering vehicles" ]
6,824
https://en.wikipedia.org/wiki/Carl%20Sagan
Carl Edward Sagan (November 9, 1934 – December 20, 1996) was an American astronomer, planetary scientist and science communicator. His best-known scientific contribution is his research on the possibility of extraterrestrial life, including experimental demonstration of the production of amino acids from basic chemicals by exposure to light. He assembled the first physical messages sent into space, the Pioneer plaque and the Voyager Golden Record, which were universal messages that could potentially be understood by any extraterrestrial intelligence that might find them. He argued in favor of the hypothesis, which has since been accepted, that the high surface temperatures of Venus are the result of the greenhouse effect. Initially an assistant professor at Harvard, Sagan later moved to Cornell University, where he spent most of his career. He published more than 600 scientific papers and articles and was author, co-author or editor of more than 20 books. He wrote many popular science books, such as The Dragons of Eden, Broca's Brain, Pale Blue Dot and The Demon-Haunted World. He also co-wrote and narrated the award-winning 1980 television series Cosmos: A Personal Voyage, which became the most widely watched series in the history of American public television: Cosmos has been seen by at least 500 million people in 60 countries. A book, also called Cosmos, was published to accompany the series. Sagan also wrote a science-fiction novel, published in 1985, called Contact, which became the basis for the 1997 film Contact. His papers, comprising 595,000 items, are archived in the Library of Congress. Sagan was a popular public advocate of skeptical scientific inquiry and the scientific method; he pioneered the field of exobiology and promoted the search for extraterrestrial intelligent life (SETI). He spent most of his career as a professor of astronomy at Cornell University, where he directed the Laboratory for Planetary Studies. Sagan and his works received numerous awards and honors, including the NASA Distinguished Public Service Medal, the National Academy of Sciences Public Welfare Medal, the Pulitzer Prize for General Nonfiction (for his book The Dragons of Eden), and (for Cosmos: A Personal Voyage) two Emmy Awards, the Peabody Award, and the Hugo Award. He married three times and had five children. After developing myelodysplasia, Sagan died of pneumonia at the age of 62 on December 20, 1996. Early life Childhood Carl Edward Sagan was born on November 9, 1934, in the Bensonhurst neighborhood of New York City's Brooklyn borough. His mother, Rachel Molly Gruber (1906–1982), was a housewife from New York City; his father, Samuel Sagan (1905–1979), was a Ukrainian-born garment worker who had emigrated from Kamianets-Podilskyi (then in the Russian Empire). Sagan was named in honor of his maternal grandmother, Chaiya Clara, who had died while giving birth to her second child; she was, in Sagan's words, "the mother she [Rachel] never knew." Sagan's maternal grandfather later married a woman named Rose, whom Sagan's sister, Carol, would later say was "never accepted" as Rachel's mother because Rachel "knew she [Rose] wasn't her birth mother." Sagan's family lived in a modest apartment in Bensonhurst. He later described his family as Reform Jews, one of the more liberal of Judaism's four main branches. He and his sister agreed that their father was not especially religious, but that their mother "definitely believed in God, and was active in the temple [...] and served only kosher meat." 
During the worst years of the Depression, his father worked as a movie theater usher. According to biographer Keay Davidson, Sagan experienced a kind of "inner war" as a result of his close relationship with both his parents, who were in many ways "opposites." He traced his analytical inclinations to his mother, who had been extremely poor as a child in New York City during World War I and the 1920s, and whose later intellectual ambitions were sabotaged by her poverty, status as a woman and wife, and Jewish ethnicity. Davidson suggested she "worshipped her only son, Carl" because "he would fulfill her unfulfilled dreams." Sagan believed that he had inherited his sense of wonder from his father, who spent his free time giving apples to the poor or helping soothe tensions between workers and management within New York City's garment industry. Although awed by his son's intellectual abilities, Sagan's father also took his inquisitiveness in stride, viewing it as part of growing up. Later, during his career, Sagan would draw on his childhood memories to illustrate scientific points, as he did in his book Shadows of Forgotten Ancestors. Describing his parents' influence on his later thinking, Sagan said: "My parents were not scientists. They knew almost nothing about science. But in introducing me simultaneously to skepticism and to wonder, they taught me the two uneasily cohabiting modes of thought that are central to the scientific method." He recalled that a defining moment in his development came when his parents took him, at age four, to the 1939 New York World's Fair. He later described his vivid memories of several exhibits there. One, titled America of Tomorrow, included a moving map, which, as he recalled, "showed beautiful highways and cloverleaves and little General Motors cars all carrying people to skyscrapers, buildings with lovely spires, flying buttresses—and it looked great!" Another involved a flashlight shining on a photoelectric cell, which created a crackling sound, and another showed how the sound from a tuning fork became a wave on an oscilloscope. He also saw an exhibit of the then-nascent medium known as television. Remembering it, he later wrote: "Plainly, the world held wonders of a kind I had never guessed. How could a tone become a picture and light become a noise?" Sagan also saw one of the fair's most publicized events: the burial at Flushing Meadows of a time capsule, which contained mementos from the 1930s to be recovered by Earth's descendants in a future millennium. Davidson wrote that this "thrilled Carl." As an adult, inspired by his memories of the World's Fair, Sagan and his colleagues would create similar time capsules to be sent out into the galaxy: the Pioneer plaque and the Voyager Golden Record précis. During World War II, Sagan's parents worried about the fate of their European relatives, but he was generally unaware of the details of the ongoing war. He wrote, "Sure, we had relatives who were caught up in the Holocaust. Hitler was not a popular fellow in our household... but on the other hand, I was fairly insulated from the horrors of the war." His sister, Carol, said that their mother "above all wanted to protect Carl... she had an extraordinarily difficult time dealing with World War II and the Holocaust." Sagan's book The Demon-Haunted World (1996) included his memories of this conflicted period, when his family dealt with the realities of the war in Europe, but tried to prevent it from undermining his optimistic spirit. 
Soon after entering elementary school, Sagan began to express his strong inquisitiveness about nature. He recalled taking his first trips to the public library alone, at age five, when his mother got him a library card. He wanted to learn what stars were, since none of his friends or their parents could give him a clear answer: "I went to the librarian and asked for a book about stars [...] and the answer was stunning. It was that the Sun was a star, but really close. The stars were suns, but so far away they were just little points of light. The scale of the universe suddenly opened up to me. It was a kind of religious experience. There was a magnificence to it, a grandeur, a scale which has never left me. Never ever left me." When he was about six or seven, he and a close friend took trips to the American Museum of Natural History, in Manhattan. While there, they visited the Hayden Planetarium and walked around exhibits of space objects, such as meteorites, as well as displays of dinosaur skeletons and naturalistic scenes with animals. As Sagan later wrote, "I was transfixed by the dioramas—lifelike representations of animals and their habitats all over the world. Penguins on the dimly lit Antarctic ice [...] a family of gorillas, the male beating his chest [...] an American grizzly bear standing on his hind legs, ten or twelve feet tall, and staring me right in the eye." Sagan's parents nurtured his growing interest in science, buying him chemistry sets and reading matter. But his fascination with outer space emerged as his primary focus, especially after he had read science fiction by such writers as H. G. Wells and Edgar Rice Burroughs, stirring his curiosity about the possibility of life on Mars and other planets. According to biographer Ray Spangenburg, Sagan's efforts in his early years to understand the mysteries of the planets became a "driving force in his life, a continual spark to his intellect, and a quest that would never be forgotten." In 1947, Sagan discovered the magazine Astounding Science Fiction, which introduced him to more hard science fiction speculations than those in the Burroughs novels. That same year, mass hysteria developed about the possibility that extraterrestrial visitors had arrived in flying saucers, and the young Sagan joined in the speculation that the flying "discs" people reported seeing in the sky might be alien spaceships. Education Sagan attended David A. Boody Junior High School in his native Bensonhurst and had his bar mitzvah when he turned 13. In 1948, when he was 14, his father's work took the family to the older semi-industrial town of Rahway, New Jersey, where he attended Rahway High School. He was a straight-A student but was bored because his classes did not challenge him and his teachers did not inspire him. His teachers realized this and tried to convince his parents to send him to a private school, with an administrator telling them, "This kid ought to go to a school for gifted children, he has something really remarkable." However, his parents could not afford to do so. Sagan became president of the school's chemistry club, and set up his own laboratory at home. He taught himself about molecules by making cardboard cutouts to help him visualize how they were formed: "I found that about as interesting as doing [chemical] experiments." He was mostly interested in astronomy, learning about it in his spare time. 
In his junior year of high school, he discovered that professional astronomers were paid for doing something he always enjoyed, and decided on astronomy as a career goal: "That was a splendid day—when I began to suspect that if I tried hard I could do astronomy full-time, not just part-time." Sagan graduated from Rahway High School in 1951. Before the end of high school, Sagan entered an essay writing contest in which he explored the idea that human contact with advanced life forms from another planet might be as disastrous for people on Earth as Native Americans' first contact with Europeans had been for Native Americans. The subject was considered controversial, but his rhetorical skill won over the judges and they awarded him first prize. When he was about to graduate from high school, his classmates voted him "most likely to succeed" and put him in line to be valedictorian. He attended the University of Chicago because, despite his excellent high school grades, it was one of the very few colleges he had applied to that would consider accepting a 16-year-old. Its chancellor, Robert Maynard Hutchins, had recently retooled the undergraduate College of the University of Chicago into an "ideal meritocracy" built on Great Books, Socratic dialogue, comprehensive examinations, and early entrance to college with no age requirement. As an honors-program undergraduate, Sagan worked in the laboratory of geneticist H. J. Muller and wrote a thesis on the origins of life with physical chemist Harold Urey. He also joined the Ryerson Astronomical Society. In 1954, he was awarded a Bachelor of Liberal Arts with general and special honors in what he quipped was "nothing." In 1955, he earned a Bachelor of Science in physics. He went on to do graduate work at the University of Chicago, earning a Master of Science in physics in 1956 and a Doctor of Philosophy in astronomy and astrophysics in 1960. His doctoral thesis, submitted to the Department of Astronomy and Astrophysics, was entitled Physical Studies of the Planets. During his graduate studies, he used the summer months to work with planetary scientist Gerard Kuiper, who was his dissertation director, as well as physicist George Gamow and chemist Melvin Calvin. The title of Sagan's dissertation reflected interests he had in common with Kuiper, who had been president of the International Astronomical Union's commission on "Physical Studies of Planets and Satellites" throughout the 1950s. In 1958, Sagan and Kuiper worked on the classified military Project A119, a secret United States Air Force plan to detonate a nuclear warhead on the Moon and document its effects. Sagan had a Top Secret clearance at the Air Force and a Secret clearance with NASA. In 1999, an article published in the journal Nature revealed that Sagan had included the classified titles of two Project A119 papers in his 1959 application for a scholarship to University of California, Berkeley. A follow-up letter to the journal by project leader Leonard Reiffel confirmed Sagan's security leak. Career and research From 1960 to 1962 Sagan was a Miller Fellow at the University of California, Berkeley. Meanwhile, he published an article in 1961 in the journal Science on the atmosphere of Venus, while also working with NASA's Mariner 2 team, and served as a "Planetary Sciences Consultant" to the RAND Corporation. 
After the publication of Sagan's Science article, in 1961 Harvard University astronomers Fred Whipple and Donald Menzel offered Sagan the opportunity to give a colloquium at Harvard and subsequently offered him a lecturer position at the institution. Sagan instead asked to be made an assistant professor, and eventually Whipple and Menzel were able to convince Harvard to offer Sagan the assistant professor position he requested. Sagan lectured, performed research, and advised graduate students at the institution from 1963 until 1968, as well as working at the Smithsonian Astrophysical Observatory, also located in Cambridge, Massachusetts. In 1968, Sagan was denied academic tenure at Harvard. He later indicated that the decision was very unexpected. The denial has been blamed on several factors, including that he focused his interests too broadly across a number of areas (while the norm in academia is to become a renowned expert in a narrow specialty), and perhaps because of his well-publicized scientific advocacy, which some scientists perceived as borrowing the ideas of others for little more than self-promotion. An advisor from his years as an undergraduate student, Harold Urey, wrote a letter to the tenure committee recommending strongly against tenure for Sagan. As Sagan later wrote in The Demon-Haunted World (1995): "Science is more than a body of knowledge; it is a way of thinking. I have a foreboding of an America in my children's or grandchildren's time – when the United States is a service and information economy; when nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness... The dumbing down of America is most evident in the slow decay of substantive content in the enormously influential media, the 30 second sound bites (now down to 10 seconds or less), lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance." Long before the ill-fated tenure process, Cornell University astronomer Thomas Gold had courted Sagan to move to Ithaca, New York, and join the recently hired astronomer Frank Drake among the faculty at Cornell. Following the denial of tenure from Harvard, Sagan accepted Gold's offer and remained a faculty member at Cornell for nearly 30 years until his death in 1996. Unlike Harvard, the smaller and more laid-back astronomy department at Cornell welcomed Sagan's growing celebrity status. Following two years as an associate professor, Sagan became a full professor at Cornell in 1970 and directed the Laboratory for Planetary Studies there. From 1972 to 1981, he was associate director of the Center for Radiophysics and Space Research (CRSR) at Cornell. In 1976, he became the David Duncan Professor of Astronomy and Space Sciences, a position he held for the remainder of his life. Sagan was associated with the U.S. space program from its inception. 
From the 1950s onward, he worked as an advisor to NASA, where one of his duties was briefing the Apollo astronauts before their flights to the Moon. Sagan contributed to many of the robotic spacecraft missions that explored the Solar System, arranging experiments on many of the expeditions. Sagan assembled the first physical message that was sent into space: a gold-plated plaque, attached to the space probe Pioneer 10, launched in 1972. Pioneer 11, also carrying another copy of the plaque, was launched the following year. He continued to refine his designs; the most elaborate message he helped to develop and assemble was the Voyager Golden Record, which was sent out with the Voyager space probes in 1977. Sagan often challenged the decisions to fund the Space Shuttle and the International Space Station at the expense of further robotic missions. Scientific achievements Former student David Morrison described Sagan as "an 'idea person' and a master of intuitive physical arguments and 'back of the envelope' calculations", and Gerard Kuiper said that "Some persons work best in specializing on a major program in the laboratory; others are best in liaison between sciences. Dr. Sagan belongs in the latter group." Sagan's contributions were central to the discovery of the high surface temperatures of the planet Venus. In the early 1960s no one knew for certain the basic conditions of Venus' surface, and Sagan listed the possibilities in a report that was later adapted for popularization in the Time-Life book Planets. His own view was that Venus was dry and very hot, as opposed to the balmy paradise others had imagined. He had investigated radio waves from Venus and concluded that the surface temperature was extremely high. As a visiting scientist at NASA's Jet Propulsion Laboratory, he contributed to the first Mariner missions to Venus, working on the design and management of the project. Mariner 2 confirmed his conclusions on the surface conditions of Venus in 1962. Sagan was among the first to hypothesize that Saturn's moon Titan might possess oceans of liquid compounds on its surface and that Jupiter's moon Europa might possess subsurface oceans of water. This would make Europa potentially habitable. Europa's subsurface ocean of water was later indirectly confirmed by the spacecraft Galileo. The mystery of Titan's reddish haze was also solved with Sagan's help. The reddish haze was revealed to be due to complex organic molecules constantly raining down onto Titan's surface. Sagan further contributed insights regarding the atmospheres of Venus and Jupiter, as well as seasonal changes on Mars. He also perceived global warming as a growing, man-made danger and likened it to the natural development of Venus into a hot, life-hostile planet through a kind of runaway greenhouse effect. He testified to the US Congress in 1985 that the greenhouse effect would change the Earth's climate system. Sagan and his Cornell colleague Edwin Ernest Salpeter speculated about life in Jupiter's clouds, given the planet's dense atmospheric composition rich in organic molecules. He studied the observed color variations on Mars' surface and concluded that they were not seasonal or vegetational changes as most believed, but shifts in surface dust caused by windstorms. Sagan is also known for his research on the possibilities of extraterrestrial life, including experimental demonstration of the production of amino acids from basic chemicals by radiation. 
He is also the 1994 recipient of the Public Welfare Medal, the highest award of the National Academy of Sciences, for "distinguished contributions in the application of science to the public welfare." He was denied membership in the academy, reportedly because his media activities made him unpopular with many other scientists. Sagan is the most cited SETI scientist and one of the most cited planetary scientists. Cosmos: popularizing science on TV In 1980 Sagan co-wrote and narrated the award-winning 13-part PBS television series Cosmos: A Personal Voyage, which became the most widely watched series in the history of American public television until 1990. The show has been seen by at least 500 million people across 60 countries. The book, Cosmos, written by Sagan, was published to accompany the series. Because of his earlier popularity as a science writer from his best-selling books, including The Dragons of Eden, which won him a Pulitzer Prize in 1977, he was asked to write and narrate the show. It was targeted to a general audience of viewers, who Sagan felt had lost interest in science, partly due to a stifled educational system. Each of the 13 episodes was created to focus on a particular subject or person, thereby demonstrating the synergy of the universe. They covered a wide range of scientific subjects including the origin of life and a perspective of humans' place on Earth. The show won an Emmy, along with a Peabody Award, and transformed Sagan from an obscure astronomer into a pop-culture icon. Time magazine ran a cover story about Sagan soon after the show was broadcast, referring to him as "creator, chief writer and host-narrator of the show." In 2000, Cosmos was released on a remastered set of DVDs. "Billions and billions" After Cosmos aired, Sagan became associated with the catchphrase "billions and billions", although he never actually used the phrase in the Cosmos series. He rather used the term "billions upon billions." Richard Feynman, a precursor to Sagan, used the phrase "billions and billions" many times in his "red books." However, Sagan's frequent use of the word billions and distinctive delivery emphasizing the "b" (which he did intentionally, in place of more cumbersome alternatives such as "billions with a 'b'", in order to distinguish the word from "millions") made him a favorite target of comic performers, including Johnny Carson, Gary Kroeger, Mike Myers, Bronson Pinchot, Penn Jillette, Harry Shearer, and others. Frank Zappa satirized the line in the song "Be in My Video", noting as well "atomic light." Sagan took this all in good humor, and his final book was titled Billions and Billions, which opened with a tongue-in-cheek discussion of this catchphrase, observing that Carson was an amateur astronomer and that Carson's comic caricature often included real science. As a humorous tribute to Sagan and his association with the catchphrase "billions and billions", a sagan has been defined as a unit of measurement equivalent to a very large number of anything. Sagan's number Sagan's number is the number of stars in the observable universe. This number is reasonably well defined, because it is known what stars are and what the observable universe is, but its value is highly uncertain. In 1980, Sagan estimated it to be 10 sextillion in short scale (10²²). In 2003, it was estimated to be 70 sextillion (7 × 10²²). In 2010, it was estimated to be 300 sextillion (3 × 10²³). 
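The order of magnitude of such estimates can be reproduced with a rough back-of-the-envelope calculation: multiply an assumed number of galaxies in the observable universe by an assumed average number of stars per galaxy. The sketch below uses illustrative round figures that are assumptions for the example, not values taken from this article.

```python
# Back-of-the-envelope sketch of "Sagan's number" (stars in the observable universe).
# Both inputs are illustrative assumptions, not figures from the article.
galaxies = 2e12            # assumed number of galaxies in the observable universe
stars_per_galaxy = 1e11    # assumed average number of stars per galaxy

stars_total = galaxies * stars_per_galaxy
print(f"Roughly {stars_total:.0e} stars")  # prints "Roughly 2e+23 stars"
```

With these round inputs the product comes out near 2 × 10²³, the same order of magnitude as the 2010 estimate, which illustrates why the value is quoted only to within a factor of a few.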
Scientific and critical thinking advocacy Sagan's ability to convey his ideas allowed many people to understand the cosmos better—simultaneously emphasizing the value and worthiness of the human race, and the relative insignificance of the Earth in comparison to the Universe. He delivered the 1977 series of Royal Institution Christmas Lectures in London. Sagan was a proponent of the search for extraterrestrial life. He urged the scientific community to listen with radio telescopes for signals from potential intelligent extraterrestrial life-forms. Sagan was so persuasive that by 1982 he was able to get a petition advocating SETI published in the journal Science, signed by 70 scientists, including seven Nobel Prize winners. This signaled a tremendous increase in the respectability of a then-controversial field. Sagan also helped Frank Drake write the Arecibo message, a radio message beamed into space from the Arecibo radio telescope on November 16, 1974, aimed at informing potential extraterrestrials about Earth. Sagan was editor of the professional planetary research journal Icarus for 12 years. He co-founded The Planetary Society and was a member of the SETI Institute Board of Trustees. Sagan served as Chairman of the Division for Planetary Science of the American Astronomical Society, as President of the Planetology Section of the American Geophysical Union, and as Chairman of the Astronomy Section of the American Association for the Advancement of Science (AAAS). At the height of the Cold War, Sagan became involved in nuclear disarmament efforts by promoting hypotheses on the effects of nuclear war, when Paul Crutzen's "Twilight at Noon" concept suggested that a substantial nuclear exchange could trigger a nuclear twilight and upset the delicate balance of life on Earth by cooling the surface. In 1983 he was one of five authors—the "S"—in the follow-up "TTAPS" model (as the research article came to be known), which contained the first use of the term "nuclear winter", which his colleague Richard P. Turco had coined. In 1984 he co-authored the book The Cold and the Dark: The World after Nuclear War and in 1990 the book A Path Where No Man Thought: Nuclear Winter and the End of the Arms Race, which explains the nuclear-winter hypothesis and advocates nuclear disarmament. Sagan received a great deal of skepticism and disdain for the use of media to disseminate a very uncertain hypothesis. A personal correspondence with nuclear physicist Edward Teller around 1983 began amicably, with Teller expressing support for continued research to ascertain the credibility of the winter hypothesis. However, Sagan and Teller's correspondence would ultimately result in Teller writing: "A propagandist is one who uses incomplete information to produce maximum persuasion. I can compliment you on being, indeed, an excellent propagandist, remembering that a propagandist is the better the less he appears to be one." Biographers of Sagan would also comment that from a scientific viewpoint, nuclear winter was a low point for Sagan, although, politically speaking, it popularized his image among the public. The adult Sagan remained a fan of science fiction, although disliking stories that were not realistic (such as ignoring the inverse-square law) or, he said, did not include "thoughtful pursuit of alternative futures." 
He wrote books to popularize science, such as Cosmos, which reflected and expanded upon some of the themes of A Personal Voyage and became the best-selling science book ever published in English; The Dragons of Eden: Speculations on the Evolution of Human Intelligence, which won a Pulitzer Prize; and Broca's Brain: Reflections on the Romance of Science. Sagan also wrote the best-selling science fiction novel Contact in 1985, based on a film treatment he wrote with his wife, Ann Druyan, in 1979, but he did not live to see the book's 1997 motion-picture adaptation, which starred Jodie Foster and won the 1998 Hugo Award for Best Dramatic Presentation. Sagan wrote a sequel to Cosmos, Pale Blue Dot: A Vision of the Human Future in Space, which was selected as a notable book of 1995 by The New York Times. He appeared on PBS's Charlie Rose program in January 1995. Sagan also wrote the introduction for Stephen Hawking's bestseller A Brief History of Time. Sagan was also known for his popularization of science, his efforts to increase scientific understanding among the general public, and his positions in favor of scientific skepticism and against pseudoscience, such as his debunking of the Betty and Barney Hill abduction. To mark the tenth anniversary of Sagan's death, David Morrison, a former student of Sagan, recalled "Sagan's immense contributions to planetary research, the public understanding of science, and the skeptical movement" in Skeptical Inquirer. Following Saddam Hussein's threats to light Kuwait's oil wells on fire in response to any physical challenge to Iraqi control of the oil assets, Sagan together with his "TTAPS" colleagues and Paul Crutzen, warned in January 1991 in The Baltimore Sun and Wilmington Morning Star newspapers that if the fires were left to burn over a period of several months, enough smoke from the 600 or so 1991 Kuwaiti oil fires "might get so high as to disrupt agriculture in much of South Asia ..." and that this possibility should "affect the war plans"; these claims were also the subject of a televised debate between Sagan and physicist Fred Singer on January 22, aired on the ABC News program Nightline. In the televised debate, Sagan argued that the effects of the smoke would be similar to the effects of a nuclear winter, with Singer arguing to the contrary. After the debate, the fires burnt for many months before extinguishing efforts were complete. The results of the smoke did not produce continental-sized cooling. Sagan later conceded in The Demon-Haunted World that the prediction did not turn out to be correct: "it was pitch black at noon and temperatures dropped 4–6 °C over the Persian Gulf, but not much smoke reached stratospheric altitudes and Asia was spared." In his later years, Sagan advocated the creation of an organized search for asteroids/near-Earth objects (NEOs) that might impact the Earth but to forestall or postpone developing the technological methods that would be needed to defend against them. He argued that all of the numerous methods proposed to alter the orbit of an asteroid, including the employment of nuclear detonations, created a deflection dilemma: if the ability to deflect an asteroid away from the Earth exists, then one would also have the ability to divert a non-threatening object towards Earth, creating an immensely destructive weapon. 
In a 1994 paper he co-authored, he ridiculed a three-day-long "Near-Earth Object Interception Workshop" held by Los Alamos National Laboratory (LANL) in 1993 that did not, "even in passing" state that such interception and deflection technologies could have these "ancillary dangers." Sagan remained hopeful that the natural NEO impact threat and the intrinsically double-edged essence of the methods to prevent these threats would serve as a "new and potent motivation to maturing international relations." Later acknowledging that, with sufficient international oversight, in the future a "work our way up" approach to implementing nuclear explosive deflection methods could be fielded, and when sufficient knowledge was gained, to use them to aid in mining asteroids. His interest in the use of nuclear detonations in space grew out of his work in 1958 for the Armour Research Foundation's Project A119, concerning the possibility of detonating a nuclear device on the lunar surface. Sagan was a critic of Plato, having said of the ancient Greek philosopher: "Science and mathematics were to be removed from the hands of the merchants and the artisans. This tendency found its most effective advocate in a follower of Pythagoras named Plato" and In 1995 (as part of his book The Demon-Haunted World), Sagan popularized a set of tools for skeptical thinking called the "baloney detection kit", a phrase first coined by Arthur Felberbaum, a friend of his wife Ann Druyan. Popularizing science Speaking about his activities in popularizing science, Sagan said that there were at least two reasons for scientists to share the purposes of science and its contemporary state. Simple self-interest was one: much of the funding for science came from the public, and the public therefore had the right to know how the money was being spent. If scientists increased public admiration for science, there was a good chance of having more public supporters. The other reason was the excitement of communicating one's own excitement about science to others. Following the success of Cosmos, Sagan set up his own publishing firm, Cosmos Store, to publish science books for the general public. It was not successful. Criticisms While Sagan was widely adored by the general public, his reputation in the scientific community was more polarized. Critics sometimes characterized his work as fanciful, non-rigorous, and self-aggrandizing, and others complained in his later years that he neglected his role as a faculty member to foster his celebrity status. One of Sagan's harshest critics, Harold Urey, felt that Sagan was getting too much publicity for a scientist and was treating some scientific theories too casually. Urey and Sagan were said to have different philosophies of science, according to Davidson. While Urey was an "old-time empiricist" who avoided theorizing about the unknown, Sagan was by contrast willing to speculate openly about such matters. Fred Whipple wanted Harvard to keep Sagan there, but learned that because Urey was a Nobel laureate, his opinion was an important factor in Harvard denying Sagan tenure. Sagan's Harvard friend Lester Grinspoon also stated: "I know Harvard well enough to know there are people there who certainly do not like people who are outspoken." Grinspoon added: Some, like Urey, later believed that Sagan's popular brand of scientific advocacy was beneficial to the science as a whole. 
Urey especially liked Sagan's 1977 book The Dragons of Eden and wrote Sagan with his opinion: "I like it very much and am amazed that someone like you has such an intimate knowledge of the various features of the problem... I congratulate you... You are a man of many talents." Sagan was accused of borrowing some ideas of others for his own benefit and countered these claims by explaining that the misappropriation was an unfortunate side effect of his role as a science communicator and explainer, and that he attempted to give proper credit whenever possible. Social concerns Sagan believed that the Drake equation, on substitution of reasonable estimates, suggested that a large number of extraterrestrial civilizations would form, but that the lack of evidence of such civilizations highlighted by the Fermi paradox suggests technological civilizations tend to self-destruct. This stimulated his interest in identifying and publicizing ways that humanity could destroy itself, with the hope of avoiding such a cataclysm and eventually becoming a spacefaring species. Sagan's deep concern regarding the potential destruction of human civilization in a nuclear holocaust was conveyed in a memorable cinematic sequence in the final episode of Cosmos, called "Who Speaks for Earth?" Sagan had already resigned from the Air Force Scientific Advisory Board's UFO-investigating Condon Committee and voluntarily surrendered his top-secret clearance in protest over the Vietnam War. Following his marriage to his third wife (novelist Ann Druyan) in June 1981, Sagan became more politically active—particularly in opposing escalation of the nuclear arms race under President Ronald Reagan. In March 1983, Reagan announced the Strategic Defense Initiative—a multibillion-dollar project to develop a comprehensive defense against attack by nuclear missiles, which was quickly dubbed the "Star Wars" program. Sagan spoke out against the project, arguing that it was technically impossible to develop a system with the level of perfection required, and far more expensive to build such a system than it would be for an enemy to defeat it through decoys and other means—and that its construction would seriously destabilize the "nuclear balance" between the United States and the Soviet Union, making further progress toward nuclear disarmament impossible. When Soviet leader Mikhail Gorbachev declared a unilateral moratorium on the testing of nuclear weapons, which would begin on August 6, 1985—the 40th anniversary of the atomic bombing of Hiroshima—the Reagan administration dismissed the dramatic move as nothing more than propaganda and refused to follow suit. In response, US anti-nuclear and peace activists staged a series of protest actions at the Nevada Test Site, beginning on Easter Sunday in 1986 and continuing through 1987. Hundreds of people in the "Nevada Desert Experience" group were arrested, including Sagan, who was arrested on two separate occasions as he climbed over a chain-link fence at the test site during the underground Operation Charioteer and United States's Musketeer nuclear test series of detonations. Sagan was also a vocal advocate of the controversial notion of testosterone poisoning, arguing in 1992 that human males could become gripped by an "unusually severe [case of] testosterone poisoning" and this could compel them to become genocidal. 
In his review of Moondance magazine writer Daniela Gioseffi's 1990 book Women on War, he argued that females are the only half of humanity "untainted by testosterone poisoning." One chapter of his 1993 book Shadows of Forgotten Ancestors is dedicated to testosterone and its alleged poisonous effects. In a 1989 interview, Ted Turner asked Sagan whether he believed in socialism; Sagan responded: "I'm not sure what a socialist is. But I believe the government has a responsibility to care for the people... I'm talking about making the people self-reliant." Personal life and beliefs Sagan was married three times. In 1957, he married biologist Lynn Margulis. The couple had two children, Jeremy and Dorion Sagan. Their marriage ended in 1964. Sagan married artist Linda Salzman in 1968; they had a child together, Nick Sagan, and divorced in 1981. During these marriages, Carl Sagan focused heavily on his career, a factor which may have contributed to his first divorce. In 1981, Sagan married author Ann Druyan and they later had two children, Alexandra (known as Sasha) and Samuel Sagan. Carl Sagan and Druyan remained married until his death in 1996. While teaching at Cornell, he lived in an Egyptian revival house in Ithaca perched on the edge of a cliff that had formerly been the headquarters of a Cornell University secret society. While there he drove a red Porsche 911 Targa and an orange 1970 Porsche 914 with the license plate PHOBOS. In 1994, engineers at Apple Computer code-named the Power Macintosh 7100 "Carl Sagan" in the hope that Apple would make "billions and billions" with its sale. The name was only used internally, but Sagan was concerned that it would become a product endorsement and sent Apple a cease-and-desist letter. Apple complied, but engineers retaliated by changing the internal codename to "BHA" for "Butt-Head Astronomer." In November 1995, after a further legal battle, an out-of-court settlement was reached and Apple's office of trademarks and patents released a conciliatory statement that "Apple has always had great respect for Dr. Sagan. It was never Apple's intention to cause Dr. Sagan or his family any embarrassment or concern." In 2019, Carl Sagan's daughter Sasha Sagan released For Small Creatures Such as We: Rituals for Finding Meaning in our Unlikely World, which depicts life with her parents and her father's death when she was fourteen. Building on a theme in her father's work, Sasha Sagan argues in For Small Creatures Such as We that skepticism does not imply pessimism. Sagan was acquainted with science fiction fandom through his friendship with Isaac Asimov, and he spoke at the Nebula Awards ceremony in 1969. Asimov described Sagan as one of only two people he ever met whose intellect surpassed his own, the other being computer scientist and artificial intelligence expert Marvin Minsky. Naturalism Sagan wrote frequently about religion and the relationship between religion and science, expressing his skepticism about the conventional conceptualization of God as a sapient being. For example: In another description of his view on the concept of God, Sagan wrote: On atheism, Sagan commented in 1981: Sagan also commented on Christianity and the Jefferson Bible, stating "My long-time view about Christianity is that it represents an amalgam of two seemingly immiscible parts, the religion of Jesus and the religion of Paul. Thomas Jefferson attempted to excise the Pauline parts of the New Testament. 
There wasn't much left when he was done, but it was an inspiring document." Sagan thought that spirituality should be scientifically informed and that traditional religions should be abandoned and replaced with belief systems that revolve around the scientific method, but also the mystery and incompleteness of scientific fields. Regarding spirituality and its relationship with science, Sagan stated: An environmental appeal, "Preserving and Cherishing the Earth", primarily written by Sagan and signed by him and other noted scientists as well as religious leaders, and published in January 1990, stated that "The historical record makes clear that religious teaching, example, and leadership are powerfully able to influence personal conduct and commitment... Thus, there is a vital role for religion and science." In reply to a question in 1996 about his religious beliefs, Sagan answered, "I'm agnostic." Sagan maintained that the idea of a creator God of the Universe was difficult to prove or disprove and that the only conceivable scientific discovery that could challenge it would be an infinitely old universe. His son, Dorion Sagan said, "My father believed in the God of Spinoza and Einstein, God not behind nature but as nature, equivalent to it." His last wife, Ann Druyan, stated: In 2006, Ann Druyan edited Sagan's 1985 Glasgow Gifford Lectures in Natural Theology into a book, The Varieties of Scientific Experience: A Personal View of the Search for God, in which he elaborates on his views of divinity in the natural world. Sagan is also widely regarded as a freethinker or skeptic; one of his most famous quotations, in Cosmos, was, "Extraordinary claims require extraordinary evidence" (called the "Sagan standard" by some). This was based on a nearly identical statement by fellow founder of the Committee for the Scientific Investigation of Claims of the Paranormal, Marcello Truzzi, "An extraordinary claim requires extraordinary proof." This idea had been earlier aphorized in Théodore Flournoy's work From India to the Planet Mars (1899) from a longer quote by Pierre-Simon Laplace (1749–1827), a French mathematician and astronomer, as the Principle of Laplace: "The weight of the evidence should be proportioned to the strangeness of the facts." Late in his life, Sagan's books elaborated on his naturalistic view of the world. In The Demon-Haunted World, he presented tools for testing arguments and detecting fallacious or fraudulent ones, essentially advocating the wide use of critical thinking and of the scientific method. The compilation Billions and Billions: Thoughts on Life and Death at the Brink of the Millennium, published in 1997 after Sagan's death, contains essays written by him, on topics such as his views on abortion, and also an essay by his widow, Ann Druyan, about the relationship between his agnostic and freethinking beliefs and his death. Sagan warned against humans' tendency towards anthropocentrism. He was the faculty adviser for the Cornell Students for the Ethical Treatment of Animals. In the Cosmos chapter "Blues For a Red Planet", Sagan wrote, "If there is life on Mars, I believe we should do nothing with Mars. Mars then belongs to the Martians, even if the Martians are only microbes." Marijuana advocacy Sagan was a user and advocate of marijuana. Under the pseudonym "Mr. X", he contributed an essay about smoking cannabis to the 1971 book Marihuana Reconsidered. 
The essay explained that marijuana use had helped to inspire some of Sagan's works and enhance sensual and intellectual experiences. After Sagan's death, his friend Lester Grinspoon disclosed this information to Sagan's biographer, Keay Davidson. The publishing of the biography Carl Sagan: A Life, in 1999 brought media attention to this aspect of Sagan's life. Not long after his death, his widow Ann Druyan went on to preside over the board of directors of the National Organization for the Reform of Marijuana Laws (NORML), a non-profit organization dedicated to reforming cannabis laws. UFOs In 1947, the year that inaugurated the "flying saucer" craze, the young Sagan suspected the "discs" might be alien spaceships. Sagan's interest in UFO reports prompted him on August 3, 1952, to write a letter to U.S. Secretary of State Dean Acheson to ask how the United States would respond if flying saucers turned out to be extraterrestrial. He later had several conversations on the subject in 1964 with Jacques Vallée. Though quite skeptical of any extraordinary answer to the UFO question, Sagan thought scientists should study the phenomenon, at least because there was widespread public interest in UFO reports. Stuart Appelle notes that Sagan "wrote frequently on what he perceived as the logical and empirical fallacies regarding UFOs and the abduction experience. Sagan rejected an extraterrestrial explanation for the phenomenon but felt there were both empirical and pedagogical benefits for examining UFO reports and that the subject was, therefore, a legitimate topic of study." In 1966, Sagan was a member of the Ad Hoc Committee to Review Project Blue Book, the U.S. Air Force's UFO investigation project. The committee concluded Blue Book had been lacking as a scientific study, and recommended a university-based project to give the UFO phenomenon closer scientific scrutiny. The result was the Condon Committee (1966–68), led by physicist Edward Condon, and in their final report they formally concluded that UFOs, regardless of what any of them actually were, did not behave in a manner consistent with a threat to national security. Sociologist Ron Westrum writes that "The high point of Sagan's treatment of the UFO question was the AAAS' symposium in 1969. A wide range of educated opinions on the subject were offered by participants, including not only proponents such as James McDonald and J. Allen Hynek but also skeptics like astronomers William Hartmann and Donald Menzel. The roster of speakers was balanced, and it is to Sagan's credit that this event was presented in spite of pressure from Edward Condon." With physicist Thornton Page, Sagan edited the lectures and discussions given at the symposium; these were published in 1972 as UFO's: A Scientific Debate. Some of Sagan's many books examine UFOs (as did one episode of Cosmos) and he claimed a religious undercurrent to the phenomenon. Sagan again revealed his views on interstellar travel in his 1980 Cosmos series. In one of his last written works, Sagan argued that the chances of extraterrestrial spacecraft visiting Earth are vanishingly small. However, Sagan did think it plausible that Cold War concerns contributed to governments misleading their citizens about UFOs, and wrote that "some UFO reports and analyses, and perhaps voluminous files, have been made inaccessible to the public which pays the bills ... It's time for the files to be declassified and made generally available." 
He cautioned against jumping to conclusions about suppressed UFO data and stressed that there was no strong evidence that aliens were visiting the Earth either in the past or present. Sagan briefly served as an adviser on Stanley Kubrick's film 2001: A Space Odyssey. Sagan proposed that the film suggest, rather than depict, extraterrestrial superintelligence. Sagan's paradox Sagan's contribution to the 1969 AAAS symposium was an attack on the belief that UFOs are piloted by extraterrestrial beings. Using the Drake equation and applying several logical assumptions, Sagan calculated the possible number of advanced civilizations capable of interstellar travel to be about one million. He projected that any civilization wishing to check on all the others on a regular basis of, say, once a year would have to launch 10,000 spacecraft annually. Not only does that seem like an unreasonable number of launchings, but it would take all the material in one percent of the universe's stars to produce all the spaceships needed for all the civilizations to seek each other out. To argue that the Earth was being chosen for regular visitations, Sagan said, one would have to assume that the planet is somehow unique, and that assumption "goes exactly against the idea that there are lots of civilizations around. Because if there are then our sort of civilization must be pretty common. And if we're not pretty common then there aren't going to be many civilizations advanced enough to send visitors." This argument, which some called Sagan's paradox, helped to establish a new school of thought, namely the belief that extraterrestrial life exists but has nothing to do with UFOs. The new belief had a salutary effect on UFO studies. It gave scientists opportunities to search the universe for intelligent life unencumbered by the stigma associated with UFOs. Death After suffering from myelodysplasia for two years and receiving three bone marrow transplants from his sister, Sagan died from pneumonia at the age of 62 at the Fred Hutchinson Cancer Research Center in Seattle on December 20, 1996. He was buried at Lake View Cemetery in Ithaca, New York. Awards and honors Annual Award for Television Excellence—1981—Ohio State University—PBS series Cosmos: A Personal Voyage Apollo Achievement Award—National Aeronautics and Space Administration NASA Distinguished Public Service Medal—National Aeronautics and Space Administration (1977) Emmy—Outstanding Individual Achievement—1981—PBS series Cosmos: A Personal Voyage Emmy—Outstanding Informational Series—1981—PBS series Cosmos: A Personal Voyage Fellow of the American Physical Society–1989 Exceptional Scientific Achievement Medal—National Aeronautics and Space Administration Helen Caldicott Leadership Award – Awarded by Women's Action for Nuclear Disarmament Hugo Award—1981—Best Dramatic Presentation—Cosmos: A Personal Voyage Hugo Award—1981—Best Related Non-Fiction Book—Cosmos Hugo Award—1998—Best Dramatic Presentation—Contact Humanist of the Year—1981—Awarded by the American Humanist Association American Philosophical Society—1995—Elected to membership. In Praise of Reason Award—1987—Committee for Skeptical Inquiry Isaac Asimov Award—1994—Committee for Skeptical Inquiry John F. 
Kennedy Astronautics Award—1982—American Astronautical Society Special non-fiction Campbell Memorial Award—1974—The Cosmic Connection: An Extraterrestrial Perspective Joseph Priestley Award—"For distinguished contributions to the welfare of mankind" Klumpke-Roberts Award of the Astronomical Society of the Pacific—1974 Golden Plate Award of the American Academy of Achievement—1975 Konstantin Tsiolkovsky Medal—Awarded by the Soviet Cosmonauts Federation Locus Award 1986—Contact Los Angeles Times Book Prize's 1996 Science and Technology category for The Demon-Haunted World: Science as a Candle in the Dark. Lowell Thomas Award—The Explorers Club—75th Anniversary Masursky Award—American Astronomical Society Miller Research Fellowship—Miller Institute (1960–1962) Oersted Medal—1990—American Association of Physics Teachers Peabody Award—1980—PBS series Cosmos: A Personal Voyage Le Prix Galabert d'astronautique—International Astronautical Federation (IAF) Public Welfare Medal—1994—National Academy of Sciences Pulitzer Prize for General Nonfiction—1978—The Dragons of Eden Science Fiction Chronicle Award—1998—Dramatic Presentation—Contact UCLA Medal–1991 Inductee to International Space Hall of Fame in 2004 Named the "99th Greatest American" on June 5, 2005, Greatest American television series on the Discovery Channel Named an honorary member of the Demosthenian Literary Society on November 10, 2011 New Jersey Hall of Fame—2009—Inductee. Committee for Skeptical Inquiry (CSI) Pantheon of Skeptics—April 2011—Inductee Grand-Cross of the Order of Saint James of the Sword, Portugal (November 23, 1998) Honorary Doctor of Science (Sc.D.) degree from Whittier College in 1978. Was given the 2012 Science Fiction and Fantasy Writers Association's Kate Wilhelm Solstice Award Posthumous recognition The 1997 film Contact was based on the only novel Sagan wrote and finished after his death. It ends with the dedication "For Carl." His photo can also be seen in the film. In 1997, the Sagan Planet Walk was opened in Ithaca, New York. It is a walking-scale model of the Solar System, extending 1.2 km from the center of The Commons in downtown Ithaca to the Sciencenter, a hands-on museum. The exhibition was created in memory of Carl Sagan, who was an Ithaca resident and Cornell Professor. Professor Sagan had been a founding member of the museum's advisory board. The landing site of the uncrewed Mars Pathfinder spacecraft was renamed the Carl Sagan Memorial Station on July 5, 1997. Asteroid 2709 Sagan is named in his honor, as is the Carl Sagan Institute for the search of habitable planets. Sagan's son, Nick Sagan, wrote several episodes in the Star Trek franchise. In an episode of Star Trek: Enterprise entitled "Terra Prime", a quick shot is shown of the relic rover Sojourner, part of the Mars Pathfinder mission, placed by a historical marker at Carl Sagan Memorial Station on the Martian surface. The marker displays a quote from Sagan: "Whatever the reason you're on Mars, I'm glad you're there, and I wish I was with you." Sagan's student Steve Squyres led the team that landed the rovers Spirit and Opportunity successfully on Mars in 2004. On November 9, 2001, on what would have been Sagan's 67th birthday, the Ames Research Center dedicated the site for the Carl Sagan Center for the Study of Life in the Cosmos. 
"Carl was an incredible visionary, and now his legacy can be preserved and advanced by a 21st century research and education laboratory committed to enhancing our understanding of life in the universe and furthering the cause of space exploration for all time", said NASA Administrator Daniel Goldin. Ann Druyan was at the center as it opened its doors on October 22, 2006. Sagan has at least three awards named in his honor: The Carl Sagan Memorial Award presented jointly since 1997 by the American Astronomical Society and The Planetary Society, The Carl Sagan Medal for Excellence in Public Communication in Planetary Science presented since 1998 by the American Astronomical Society's Division for Planetary Sciences (AAS/DPS) for outstanding communication by an active planetary scientist to the general public—Carl Sagan was one of the original organizing committee members of the DPS, and The Carl Sagan Award for Public Understanding of Science presented by the Council of Scientific Society presidents (CSSP)—Sagan was the first recipient of the CSSP award in 1993. August 2007 the Independent Investigations Group (IIG) awarded Sagan posthumously a Lifetime Achievement Award. This honor has also been awarded to Harry Houdini and James Randi. In September 2008, a musical compositor Benn Jordan released his album Pale Blue Dot as a tribute to Carl Sagan's life. Beginning in 2009, a musical project known as Symphony of Science sampled several excerpts of Sagan from his series Cosmos and remixed them to electronic music. To date, the videos have received over 21 million views worldwide on YouTube. The 2014 Swedish science fiction short film Wanderers uses excerpts of Sagan's narration in 1994 of his book Pale Blue Dot, played over digitally-created visuals of humanity's possible future expansion into outer space. In February 2015, the Finnish-based symphonic metal band Nightwish released the song "Sagan" as a non-album bonus track for their single "Élan." The song, written by the band's songwriter/composer/keyboardist Tuomas Holopainen, is an homage to the life and work of the late Carl Sagan. In August 2015, it was announced that a biopic of Sagan's life was being planned by Warner Bros. On October 21, 2019, the Carl Sagan and Ann Druyan Theater was opened at the Center for Inquiry West in Los Angeles. In 2022, Sagan was posthumously awarded the Future of Life Award "for reducing the risk of nuclear war by developing and popularizing the science of nuclear winter." The honor, shared by seven other recipients involved in nuclear winter research, was accepted by his widow, Ann Druyan. In 2022, the audiobook recording of Sagan's 1994 book Pale Blue Dot was selected by the U.S. Library of Congress for inclusion in the National Recording Registry for being "culturally, historically, or aesthetically significant." In 2023, a movie Voyagers by Sebastián Lelio was announced with Sagan played by Andrew Garfield and with Daisy Edgar-Jones playing Sagan's third wife, Ann Druyan. Books (Note: errata slip inserted.) See also List of peace activists Sagan effect Neil deGrasse Tyson Explanatory notes References Citations Cited references External links FBI Records: The Vault – Carl Sagan at fbi.gov David Morrison, "Carl Sagan", Biographical Memoirs of the National Academy of Sciences (2014) Scientist of the Day – Carl Sagan at Linda Hall Library Sagan interviewed by Ted Turner, CNN, 1989, video: 44 minutes. via YouTube. 
Carl Sagan – Great Lives, BBC Radio, December 15, 2017 "A man whose time has come" (archived) – Interview with Carl Sagan by Ian Ridpath, New Scientist, July 4, 1974 "Carl Sagan's Life and Legacy as Scientist, Teacher, and Skeptic" (archived), by David Morrison, Committee for Skeptical Inquiry "NASA Technical Reports Server (NTRS) 19630011050: Direct Contact Among Galactic Civilizations by Relativistic Interstellar Spaceflight", Carl Sagan, when he was at Stanford University, in 1962, produced a controversial paper funded by a NASA research grant that concludes ancient alien intervention may have sparked human civilization. Carl Sagan demonstrates how Eratosthenes determined that the Earth was round and the approximate circumference of the earth (via YouTube) 1934 births 1996 deaths 20th-century American astronomers 20th-century American male writers 20th-century American novelists 20th-century American naturalists American agnostics American anti–nuclear weapons activists American anti–Vietnam War activists American astrophysicists American cannabis activists American cosmologists American critics of alternative medicine American critics of creationism American humanists American male non-fiction writers American male novelists American naturalists American nature writers American pacifists American people of Russian-Jewish descent American people of Ukrainian-Jewish descent American planetary scientists American science fiction writers American science writers American skeptics American UFO writers Articles containing video clips American astrobiologists Astrochemists Cornell University faculty Critics of parapsychology Deaths from myelodysplastic syndrome Deaths from pneumonia in Washington (state) Fellows of the American Physical Society Grand Crosses of the Order of Saint James of the Sword Harvard University faculty Hugo Award–winning writers Interstellar messages Jewish agnostics Jewish American activists Jewish American scientists Jewish astronomers Jewish skeptics Members of the American Philosophical Society Novelists from New York (state) Pantheists People associated with the American Museum of Natural History People from Bensonhurst, Brooklyn Presidents of The Planetary Society Pulitzer Prize for General Nonfiction winners Rahway High School alumni Sagan family Scientists from New York (state) Search for extraterrestrial intelligence Secular humanists Space advocates University of California, Berkeley fellows University of Chicago alumni Writers about religion and science Writers from Brooklyn
Carl Sagan
[ "Chemistry", "Astronomy" ]
12,545
[ "Astronomers", "Astrochemists", "Jewish astronomers" ]
6,829
https://en.wikipedia.org/wiki/Cache%20%28computing%29
In computing, a cache is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs. To be cost-effective, caches must be relatively small. Nevertheless, caches are effective in many areas of computing because typical computer applications access data with a high degree of locality of reference. Such access patterns exhibit temporal locality, where data is requested that has been recently requested, and spatial locality, where data is requested that is stored near data that has already been requested. Motivation In memory design, there is an inherent trade-off between capacity and speed because larger capacity implies larger size and thus greater physical distances for signals to travel causing propagation delays. There is also a tradeoff between high-performance technologies such as SRAM and cheaper, easily mass-produced commodities such as DRAM, flash, or hard disks. The buffering provided by a cache benefits one or both of latency and throughput (bandwidth). A larger resource incurs a significant latency for access – e.g. it can take hundreds of clock cycles for a modern 4 GHz processor to reach DRAM. This is mitigated by reading large chunks into the cache, in the hope that subsequent reads will be from nearby locations and can be read from the cache. Prediction or explicit prefetching can be used to guess where future reads will come from and make requests ahead of time; if done optimally, the latency is bypassed altogether. The use of a cache also allows for higher throughput from the underlying resource, by assembling multiple fine-grain transfers into larger, more efficient requests. In the case of DRAM circuits, the additional throughput may be gained by using a wider data bus. Operation Hardware implements cache as a block of memory for temporary storage of data likely to be used again. Central processing units (CPUs), solid-state drives (SSDs) and hard disk drives (HDDs) frequently include hardware-based cache, while web browsers and web servers commonly rely on software caching. A cache is made up of a pool of entries. Each entry has associated data, which is a copy of the same data in some backing store. Each entry also has a tag, which specifies the identity of the data in the backing store of which the entry is a copy. When the cache client (a CPU, web browser, operating system) needs to access data presumed to exist in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired data, the data in the entry is used instead. This situation is known as a cache hit. For example, a web browser program might check its local cache on disk to see if it has a local copy of the contents of a web page at a particular URL. In this example, the URL is the tag, and the content of the web page is the data. The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache. The alternative situation, when the cache is checked and found not to contain any entry with the desired tag, is known as a cache miss. 
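The tag lookup and hit/miss bookkeeping just described can be illustrated with a short sketch. The following Python fragment is only an illustration, assuming an ordinary dictionary stands in for the slow backing store; the class and variable names are invented for the example and do not come from any particular caching library.

```python
# Minimal illustration of tags, hits, misses, and hit ratio.
# The backing store, key names, and contents are illustrative only.

class SimpleCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # a dict standing in for a slow store
        self.entries = {}                   # tag -> data
        self.hits = 0
        self.misses = 0

    def get(self, tag):
        if tag in self.entries:             # cache hit: serve from the fast copy
            self.hits += 1
            return self.entries[tag]
        self.misses += 1                    # cache miss: fall back to the backing store
        data = self.backing_store[tag]
        self.entries[tag] = data            # keep a copy for future requests
        return data

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0


store = {"https://example.com/index.html": "<html>...</html>"}
cache = SimpleCache(store)
cache.get("https://example.com/index.html")   # miss: fetched from the backing store
cache.get("https://example.com/index.html")   # hit: served from the cache
print(cache.hit_ratio())                      # 0.5
```

In the sketch, each lookup whose tag is not already present counts as a miss.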
This requires a more expensive access of data from the backing store. Once the requested data is retrieved, it is typically copied into the cache, ready for the next access. During a cache miss, some other previously existing cache entry is typically removed in order to make room for the newly retrieved data. The heuristic used to select the entry to replace is known as the replacement policy. One popular replacement policy, least recently used (LRU), replaces the oldest entry, the entry that was accessed less recently than any other entry. More sophisticated caching algorithms also take into account the frequency of use of entries. Write policies Cache writes must eventually be propagated to the backing store. The timing for this is governed by the write policy. The two primary write policies are: Write-through: Writes are performed synchronously to both the cache and the backing store. Write-back: Initially, writing is done only to the cache. The write to the backing store is postponed until the modified content is about to be replaced by another cache block. A write-back cache is more complex to implement since it needs to track which of its locations have been written over and mark them as dirty for later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, a process referred to as a lazy write. For this reason, a read miss in a write-back cache may require two memory accesses to the backing store: one to write back the dirty data, and one to retrieve the requested data. Other policies may also trigger data write-back. The client may make many changes to data in the cache, and then explicitly notify the cache to write back the data. Write operations do not return data. Consequently, a decision needs to be made for write misses: whether or not to load the data into the cache. This is determined by these write-miss policies: Write allocate (also called fetch on write): Data at the missed-write location is loaded to cache, followed by a write-hit operation. In this approach, write misses are similar to read misses. No-write allocate (also called write-no-allocate or write around): Data at the missed-write location is not loaded to cache, and is written directly to the backing store. In this approach, data is loaded into the cache on read misses only. While both write policies can be combined with either write-miss policy, they are typically paired as follows: A write-back cache typically employs write allocate, anticipating that subsequent writes or reads to the same location will benefit from having the data already in the cache. A write-through cache uses no-write allocate. Here, subsequent writes have no advantage, since they still need to be written directly to the backing store. Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of that data in other caches will become stale. Communication protocols between the cache managers that keep the data consistent are associated with cache coherence. Prefetch On a cache read miss, caches with a demand paging policy read the minimum amount from the backing store. A typical demand-paging virtual memory implementation reads one page of virtual memory (often 4 KB) from disk into the disk cache in RAM. 
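As a brief aside before continuing with prefetching: the replacement and write policies described above can be made concrete in a few lines of code. This is a minimal sketch, assuming a plain Python dictionary as the backing store; the class name, capacity, and method names are invented for the illustration rather than taken from any real cache implementation.

```python
# Rough sketch of a bounded cache with LRU replacement and a write-back, write-allocate policy.
# Names and the dict-based backing store are illustrative assumptions, not a real API.

from collections import OrderedDict

class WriteBackLRUCache:
    def __init__(self, backing_store, capacity=4):
        self.backing_store = backing_store
        self.capacity = capacity
        self.entries = OrderedDict()   # key -> value, ordered from oldest to most recent use
        self.dirty = set()             # keys modified in the cache but not yet written back

    def _evict_if_full(self):
        if len(self.entries) >= self.capacity:
            victim, value = self.entries.popitem(last=False)  # least recently used entry
            if victim in self.dirty:                          # lazy write of dirty data
                self.backing_store[victim] = value
                self.dirty.discard(victim)

    def read(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)        # mark as most recently used
            return self.entries[key]
        self._evict_if_full()                    # a read miss may trigger a write-back first
        value = self.backing_store[key]
        self.entries[key] = value
        return value

    def write(self, key, value):                 # write allocate: the new data enters the cache
        if key not in self.entries:
            self._evict_if_full()
        self.entries[key] = value
        self.entries.move_to_end(key)
        self.dirty.add(key)                      # deferred: backing store updated only on eviction

    def flush(self):                             # explicit request to write back all dirty entries
        for key in list(self.dirty):
            self.backing_store[key] = self.entries[key]
        self.dirty.clear()
```

In this sketch, eviction writes an entry back only if it is marked dirty, mirroring the lazy write described above; a write-through variant would instead update the backing store inside write itself, and a no-write-allocate variant would skip inserting the entry on a write miss. To return to the read granularity discussed above: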
A typical CPU reads a single L2 cache line of 128 bytes from DRAM into the L2 cache, and a single L1 cache line of 64 bytes from the L2 cache into the L1 cache. Caches with a prefetch input queue or more general anticipatory paging policy go further—they not only read the data requested, but guess that the next chunk or two of data will soon be required, and so prefetch that data into the cache ahead of time. Anticipatory paging is especially helpful when the backing store has a long latency to read the first chunk and much shorter times to sequentially read the next few chunks, such as disk storage and DRAM. A few operating systems go further with a loader that always pre-loads the entire executable into RAM. A few caches go even further, not only pre-loading an entire file, but also starting to load other related files that may soon be requested, such as the page cache associated with a prefetcher or the web cache associated with link prefetching. Examples of hardware caches CPU cache Small memories on or close to the CPU can operate faster than the much larger main memory. Most CPUs since the 1980s have used one or more caches, sometimes in cascaded levels; modern high-end embedded, desktop and server microprocessors may have as many as six types of cache (between levels and functions). Some examples of caches with a specific function are the D-cache, I-cache and the translation lookaside buffer for the memory management unit (MMU). GPU cache Earlier graphics processing units (GPUs) often had limited read-only texture caches and used swizzling to improve 2D locality of reference. Cache misses would drastically affect performance, e.g. if mipmapping was not used. Caching was important to leverage 32-bit (and wider) transfers for texture data that was often as little as 4 bits per pixel. As GPUs advanced, supporting general-purpose computing on graphics processing units and compute kernels, they have developed progressively larger and increasingly general caches, including instruction caches for shaders, exhibiting functionality commonly found in CPU caches. These caches have grown to handle synchronization primitives between threads and atomic operations, and interface with a CPU-style MMU. DSPs Digital signal processors have similarly generalized over the years. Earlier designs used scratchpad memory fed by direct memory access, but modern DSPs such as Qualcomm Hexagon often include a very similar set of caches to a CPU (e.g. Modified Harvard architecture with shared L2, split L1 I-cache and D-cache). Translation lookaside buffer A memory management unit (MMU) that fetches page table entries from main memory has a specialized cache, used for recording the results of virtual address to physical address translations. This specialized cache is called a translation lookaside buffer (TLB). In-network cache Information-centric networking Information-centric networking (ICN) is an approach to evolve the Internet infrastructure away from a host-centric paradigm, based on perpetual connectivity and the end-to-end principle, to a network architecture in which the focal point is identified information. Due to the inherent caching capability of the nodes in an ICN, it can be viewed as a loosely connected network of caches, which has unique requirements for caching policies. However, ubiquitous content caching introduces the challenge to content protection against unauthorized access, which requires extra care and solutions. Unlike proxy servers, in ICN the cache is a network-level solution. 
Therefore, it has rapidly changing cache states and higher request arrival rates; moreover, smaller cache sizes impose different requirements on the content eviction policies. In particular, eviction policies for ICN should be fast and lightweight. Various cache replication and eviction schemes for different ICN architectures and applications have been proposed. Policies Time aware least recently used The time aware least recently used (TLRU) is a variant of LRU designed for the situation where the stored contents in cache have a valid lifetime. The algorithm is suitable in network cache applications, such as ICN, content delivery networks (CDNs) and distributed networks in general. TLRU introduces a new term: time to use (TTU). TTU is a time stamp on content which stipulates the usability time for the content based on the locality of the content and information from the content publisher. Owing to this locality-based time stamp, TTU provides more control to the local administrator to regulate in-network storage. In the TLRU algorithm, when a piece of content arrives, a cache node calculates the local TTU value based on the TTU value assigned by the content publisher. The local TTU value is calculated by using a locally-defined function. Once the local TTU value is calculated the replacement of content is performed on a subset of the total content stored in cache node. The TLRU ensures that less popular and short-lived content should be replaced with incoming content. Least frequent recently used The least frequent recently used (LFRU) cache replacement scheme combines the benefits of LFU and LRU schemes. LFRU is suitable for network cache applications, such as ICN, CDNs and distributed networks in general. In LFRU, the cache is divided into two partitions called privileged and unprivileged partitions. The privileged partition can be seen as a protected partition. If content is highly popular, it is pushed into the privileged partition. Replacement of the privileged partition is done by first evicting content from the unprivileged partition, then pushing content from the privileged partition to the unprivileged partition, and finally inserting new content into the privileged partition. In the above procedure, the LRU is used for the privileged partition and an approximated LFU (ALFU) scheme is used for the unprivileged partition. The basic idea is to cache the locally popular content with the ALFU scheme and push the popular content to the privileged partition. Weather forecast In 2011, the use of smartphones with weather forecasting options was overly taxing AccuWeather servers; two requests from the same area would generate separate requests. An optimization by edge-servers to truncate the GPS coordinates to fewer decimal places meant that the cached results from a nearby query would be used. The number of to-the-server lookups per day dropped by half. Software caches Disk cache While CPU caches are generally managed entirely by hardware, a variety of software manages other caches. The page cache in main memory is managed by the operating system kernel. While the disk buffer, which is an integrated part of the hard disk drive or solid state drive, is sometimes misleadingly referred to as disk cache, its main functions are write sequencing and read prefetching. High-end disk controllers often have their own on-board cache for the hard disk drive's data blocks. 
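Returning briefly to the time-aware eviction policies described above, the flavour of TLRU in which each item carries a locally computed time to use (TTU) can be sketched as follows. This is only a rough illustration under assumed names; the local TTU function, the 60-second cap, and the eviction order are invented for the example and are not taken from a published TLRU implementation.

```python
# Sketch of a time-aware LRU (TLRU-like) cache: entries expire after a locally computed TTU,
# and eviction removes expired content before falling back to the least recently used entry.

import time
from collections import OrderedDict

LOCAL_TTU_CAP = 60.0   # seconds; an assumed, locally defined bound on how long content is kept

def local_ttu(publisher_ttu):
    """Compute the local time-to-use from the publisher-assigned TTU (illustrative function)."""
    return min(publisher_ttu, LOCAL_TTU_CAP)

class TimeAwareLRUCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()   # key -> (value, expiry timestamp)

    def put(self, key, value, publisher_ttu):
        now = time.monotonic()
        # Drop expired content first, then make room by evicting the least recently used entry.
        for k in [k for k, (_, expiry) in self.entries.items() if expiry <= now]:
            del self.entries[k]
        if key not in self.entries and len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)
        self.entries[key] = (value, now + local_ttu(publisher_ttu))
        self.entries.move_to_end(key)

    def get(self, key):
        item = self.entries.get(key)
        if item is None:
            return None
        value, expiry = item
        if expiry <= time.monotonic():    # stale content is treated as a miss
            del self.entries[key]
            return None
        self.entries.move_to_end(key)     # recently used content survives longer
        return value
```

To return to the disk caches discussed above: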
Finally, a fast local hard disk drive can also cache information held on even slower data storage devices, such as remote servers (web cache) or local tape drives or optical jukeboxes; such a scheme is the main concept of hierarchical storage management. Also, fast flash-based solid-state drives (SSDs) can be used as caches for slower rotational-media hard disk drives, working together as hybrid drives. Web cache Web browsers and web proxy servers employ web caches to store previous responses from web servers, such as web pages and images. Web caches reduce the amount of information that needs to be transmitted across the network, as information previously stored in the cache can often be re-used. This reduces bandwidth and processing requirements of the web server, and helps to improve responsiveness for users of the web. Web browsers employ a built-in web cache, but some Internet service providers (ISPs) or organizations also use a caching proxy server, which is a web cache that is shared among all users of that network. Another form of cache is P2P caching, where the files most sought for by peer-to-peer applications are stored in an ISP cache to accelerate P2P transfers. Similarly, decentralised equivalents exist, which allow communities to perform the same task for P2P traffic, for example, Corelli. Memoization A cache can store data that is computed on demand rather than retrieved from a backing store. Memoization is an optimization technique that stores the results of resource-consuming function calls within a lookup table, allowing subsequent calls to reuse the stored results and avoid repeated computation. It is related to the dynamic programming algorithm design methodology, which can also be thought of as a means of caching. Content delivery network A content delivery network (CDN) is a network of distributed servers that deliver pages and other Web content to a user, based on the geographic locations of the user, the origin of the web page and the content delivery server. CDNs began in the late 1990s as a way to speed up the delivery of static content, such as HTML pages, images and videos. By replicating content on multiple servers around the world and delivering it to users based on their location, CDNs can significantly improve the speed and availability of a website or application. When a user requests a piece of content, the CDN will check to see if it has a copy of the content in its cache. If it does, the CDN will deliver the content to the user from the cache. Cloud storage gateway A cloud storage gateway, also known as an edge filer, is a hybrid cloud storage device that connects a local network to one or more cloud storage services, typically object storage services such as Amazon S3. It provides a cache for frequently accessed data, providing high speed local access to frequently accessed data in the cloud storage service. Cloud storage gateways also provide additional benefits such as accessing cloud object storage through traditional file serving protocols as well as continued access to cached data during connectivity outages. Other caches The BIND DNS daemon caches a mapping of domain names to IP addresses, as does a resolver library. Write-through operation is common when operating over unreliable networks (like an Ethernet LAN), because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. 
For instance, web page caches and client-side network file system caches (like those in NFS or SMB) are typically read-only or write-through specifically to keep the network protocol simple and reliable. Search engines also frequently make web pages they have indexed available from their cache. For example, Google provides a "Cached" link next to each search result. This can prove useful when web pages from a web server are temporarily or permanently inaccessible. Database caching can substantially improve the throughput of database applications, for example in the processing of indexes, data dictionaries, and frequently used subsets of data. A distributed cache uses networked hosts to provide scalability, reliability and performance to the application. The hosts can be co-located or spread over different geographical regions. Buffer vs. cache The semantics of a "buffer" and a "cache" are not totally different; even so, there are fundamental differences in intent between the process of caching and the process of buffering. Fundamentally, caching realizes a performance increase for transfers of data that is being repeatedly transferred. While a caching system may realize a performance increase upon the initial (typically write) transfer of a data item, this performance increase is due to buffering occurring within the caching system. With read caches, a data item must have been fetched from its residing location at least once in order for subsequent reads of the data item to realize a performance increase by virtue of being able to be fetched from the cache's (faster) intermediate storage rather than the data's residing location. With write caches, a performance increase of writing a data item may be realized upon the first write of the data item by virtue of the data item immediately being stored in the cache's intermediate storage, deferring the transfer of the data item to its residing storage at a later stage or else occurring as a background process. Contrary to strict buffering, a caching process must adhere to a (potentially distributed) cache coherency protocol in order to maintain consistency between the cache's intermediate storage and the location where the data resides. Buffering, on the other hand, reduces the number of transfers for otherwise novel data amongst communicating processes, which amortizes overhead involved for several small transfers over fewer, larger transfers, provides an intermediary for communicating processes which are incapable of direct transfers amongst each other, or ensures a minimum data size or representation required by at least one of the communicating processes involved in a transfer. With typical caching implementations, a data item that is read or written for the first time is effectively being buffered; and in the case of a write, mostly realizing a performance increase for the application from where the write originated. Additionally, the portion of a caching protocol where individual writes are deferred to a batch of writes is a form of buffering. The portion of a caching protocol where individual reads are deferred to a batch of reads is also a form of buffering, although this form may negatively impact the performance of at least the initial reads (even though it may positively impact the performance of the sum of the individual reads). In practice, caching almost always involves some form of buffering, while strict buffering does not involve caching. 
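The benefit for repeatedly accessed data that distinguishes caching is exactly what the memoization technique mentioned earlier exploits at the software level. The following is a minimal sketch, assuming a Python dictionary as the lookup table; the decorator name is illustrative.

```python
# Minimal memoization sketch: results of an expensive function are kept in a lookup table,
# so repeated calls with the same argument are served from the cache instead of recomputed.

def memoize(func):
    table = {}                      # argument -> previously computed result
    def wrapper(arg):
        if arg in table:            # cache hit: reuse the stored result
            return table[arg]
        result = func(arg)          # cache miss: compute once, then store
        table[arg] = result
        return result
    return wrapper

@memoize
def fibonacci(n):
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(35))   # fast, because intermediate results are reused rather than recomputed
```

Python's standard library offers functools.lru_cache, which combines this idea with a bounded LRU eviction similar to the replacement policy sketched earlier. Buffering, by contrast, serves somewhat different purposes.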
A buffer is a temporary memory location that is traditionally used because CPU instructions cannot directly address data stored in peripheral devices. Thus, addressable memory is used as an intermediate stage. Additionally, such a buffer may be feasible when a large block of data is assembled or disassembled (as required by a storage device), or when data may be delivered in a different order than that in which it is produced. Also, a whole buffer of data is usually transferred sequentially (for example to hard disk), so buffering itself sometimes increases transfer performance or reduces the variation or jitter of the transfer's latency as opposed to caching where the intent is to reduce the latency. These benefits are present even if the buffered data are written to the buffer once and read from the buffer once. A cache also increases transfer performance. A part of the increase similarly comes from the possibility that multiple small transfers will combine into one large block. But the main performance-gain occurs because there is a good chance that the same data will be read from cache multiple times, or that written data will soon be read. A cache's sole purpose is to reduce accesses to the underlying slower storage. Cache is also usually an abstraction layer that is designed to be invisible from the perspective of neighboring layers. See also Cache coloring Cache hierarchy Cache-oblivious algorithm Cache stampede Cache language model Cache manifest in HTML5 Dirty bit Five-minute rule Materialized view Memory hierarchy Pipeline burst cache Temporary file References Further reading "What Every Programmer Should Know About Memory" "Caching in the Distributed Environment" Computer architecture
Cache (computing)
[ "Technology", "Engineering" ]
4,580
[ "Computers", "Computer engineering", "Computer architecture" ]
6,834
https://en.wikipedia.org/wiki/List%20of%20computer%20scientists
This is a list of computer scientists, people who do work in computer science, in particular researchers and authors. Some persons notable as programmers are included here because they work in research as well as program. A few of these people pre-date the invention of the digital computer; they are now regarded as computer scientists because their work can be seen as leading to the invention of the computer. Others are mathematicians whose work falls within what would now be called theoretical computer science, such as complexity theory and algorithmic information theory. A Wil van der Aalst – business process management, process mining, Petri nets Scott Aaronson – quantum computing and complexity theory Rediet Abebe – algorithms, artificial intelligence Hal Abelson – intersection of computing and teaching Serge Abiteboul – database theory Samson Abramsky – game semantics Leonard Adleman – RSA, DNA computing Manindra Agrawal – polynomial-time primality testing Luis von Ahn – human-based computation Alfred Aho – compilers book, the 'a' in AWK Frances E. Allen – compiler optimization Gene Amdahl – supercomputer developer, Amdahl Corporation founder David P. Anderson – volunteer computing Lisa Anthony – natural user interfaces Andrew Appel – compiler of text books Cecilia R. Aragon – invented treap, human-centered data science Bruce Arden – programming language compilers (GAT, Michigan Algorithm Decoder (MAD)), virtual memory architecture, Michigan Terminal System (MTS) Kevin Ashton – pioneered and named The Internet of Things at M.I.T. Sanjeev Arora – PCP theorem Winifred "Tim" Alice Asprey – established the computer science curriculum at Vassar College John Vincent Atanasoff – computer pioneer, creator of Atanasoff Berry Computer (ABC) Shakuntala Atre – database theory Lennart Augustsson – languages (Lazy ML, Cayenne), compilers (HBC Haskell, parallel Haskell front end, Bluespec SystemVerilog early), LPMud pioneer, NetBSD device drivers B Charles Babbage (1791–1871) – invented first mechanical computer called the supreme mathematician Charles Bachman – American computer scientist, known for Integrated Data Store Roland Carl Backhouse – mathematics of computer program construction, algorithmic problem solving, ALGOL John Backus – FORTRAN, Backus–Naur form, first complete compiler David F. Bacon – programming languages, garbage collection David Bader Victor Bahl Anthony James Barr – SAS System Jean Bartik (1924–2011) – one of the first computer programmers, on ENIAC (1946), one of the first Vacuum tube computers, back when "programming" involved using cables, dials, and switches to physically rewire the machine; worked with John Mauchly toward BINAC (1949), EDVAC (1949), UNIVAC (1951) to develop early "stored program" computers Andrew Barto Friedrich L. Bauer – Stack (data structure), Sequential Formula Translation, ALGOL, software engineering, Bauer–Fike theorem Rudolf Bayer – B-tree Gordon Bell (1934–2024) – computer designer DEC VAX, author: Computer Structures Steven M. Bellovin – network security Cecilia Berdichevsky (1925–2010) – pioneering Argentinian computer scientist Tim Berners-Lee – World Wide Web Daniel J. 
Bernstein – qmail, software as protected speech Peter Bernus Abhay Bhushan Dines Bjørner – Vienna Development Method (VDM), RAISE Gerrit Blaauw – one of the principal designers of the IBM System 360 line of computers Sue Black David Blei Dorothy Blum – National Security Agency Lenore Blum – complexity Manuel Blum – cryptography Barry Boehm – software engineering economics, spiral development Corrado Böhm – author of the structured program theorem Kurt Bollacker Jeff Bonwick – invented slab allocation and ZFS Grady Booch – Unified Modeling Language, Object Management Group George Boole – Boolean logic Andrew Booth – developed the first rotating drum storage device Kathleen Booth – developed the first assembly language Anita Borg (1949–2003) – American computer scientist, founder of Anita Borg Institute for Women and Technology Bert Bos – Cascading Style Sheets Mikhail Botvinnik – World Chess Champion, computer scientist and electrical engineer, pioneered early expert system AI and computer chess Jonathan Bowen – Z notation, formal methods Stephen R. Bourne – Bourne shell, portable ALGOL 68C compiler Harry Bouwman (born 1953) – Dutch Information systems researcher, and Professor at the Åbo Akademi University Robert S. Boyer – string searching, ACL2 theorem prover Karlheinz Brandenburg – Main mp3 contributor Gilles Brassard – BB84 protocol and quantum cryptography pioneer Lawrence M. Breed – implementation of Iverson Notation (APL), co-developed APL\360, Scientific Time Sharing Corporation cofounder Jack E. Bresenham – early computer-graphics contributions, including Bresenham's algorithm Sergey Brin – co-founder of Google David J. Brown – unified memory architecture, binary compatibility Per Brinch Hansen (surname "Brinch Hansen") – RC 4000 multiprogramming system, operating system kernels, microkernels, monitors, concurrent programming, Concurrent Pascal, distributed computing & processes, parallel computing Sjaak Brinkkemper – methodology of product software development Fred Brooks – System 360, OS/360, The Mythical Man-Month, No Silver Bullet Rod Brooks Margaret Burnett – visual programming languages, end-user software engineering, and gender-inclusive software Rod Burstall – languages COWSEL (renamed POP-1), POP-2, NPL, Hope; ACM SIGPLAN 2009 PL Achievement Award Michael Butler – Event-B C Pino Caballero Gil – cryptography Tracy Camp – wireless computing Martin Campbell-Kelly – history of computing Rosemary Candlin Rod Canion – cofounder of Compaq Computer Corporation Bryan Cantrill – invented DTrace Luca Cardelli John Carmack – codeveloped Doom Michael Caspersen – programming methodology, education in OO programming, leadership in developing informatics education Edwin Catmull – computer graphics Vint Cerf – Internet, TCP/IP Gregory Chaitin Robert Cailliau – Belgian computer scientist Zhou Chaochen – duration calculus Peter Chen – entity-relationship model, data modeling, conceptual model Leonardo Chiariglione – founder of MPEG Tracy Chou – computer scientist and activist Alonzo Church – mathematics of combinators, lambda calculus Alberto Ciaramella – speech recognition, patent informatics Edmund M. Clarke – model checking John Cocke – reduced instruction set computer (RISC) Edgar F. Codd (1923–2003) – formulated the database relational model Jacques Cohen – computer science professor Ian Coldwater – computer security Simon Colton – computational creativity Alain Colmerauer – Prolog Douglas Comer – Xinu Paul Justin Compton – Ripple-down rules Richard W. 
Conway – CORC, CUPL, and PL/C languages and dialects; programming textbooks Stephen Cook – NP-completeness James Cooley – Fast Fourier transform (FFT) Steven Anson Coons – conic section analyses, Bézier surface patches (includes Coons patch), The Little Red Book (1967), computer graphics Danese Cooper – open-source software Fernando J. Corbató – Compatible Time-Sharing System (CTSS), Multics Gordon Cormack – co-invented dynamic Markov compression Kit Cosper – open-source software Patrick Cousot – abstract interpretation Ingemar Cox – digital watermarking Damien Coyle – computational neuroscience, neuroimaging, neurotechnology, and brain-computer interface Seymour Cray – Cray Research, supercomputer Nello Cristianini – machine learning, pattern analysis, artificial intelligence Jon Crowcroft – networking W. Bruce Croft Glen Culler – interactive computing, computer graphics, high performance computing Haskell Curry D Luigi Dadda – designer of the Dadda multiplier Ole-Johan Dahl – Simula, object-oriented programming Ryan Dahl – founder of node.js project Andries van Dam – computer graphics, hypertext Samir Das – Wireless Networks, Mobile Computing, Vehicular ad hoc network, Sensor Networks, Mesh networking, Wireless ad hoc network Neil Daswani – computer security, co-founder and co-director of Stanford Advanced Computer Security Program, co-founder of Dasient (acquired by Twitter), former chief information security officer of LifeLock and Symantec's Consumer Business Unit Christopher J. Date – proponent of database relational model Terry A. Davis – creator of TempleOS Jeff Dean – Bigtable, MapReduce, Spanner of Google Erik Demaine – computational origami Tom DeMarco Richard DeMillo – computer security, software engineering, educational technology Dorothy E. Denning – computer security Peter J. Denning – identified the use of an operating system's working set and balance set, President of ACM Michael Dertouzos – Director of Massachusetts Institute of Technology (MIT) Laboratory for Computer Science (LCS) from 1974 to 2001 Alexander Dewdney Robert Dewar – IFIP WG 2.1 member, ALGOL 68, chairperson; AdaCore cofounder, president, CEO Vinod Dham – P5 Pentium processor Jan Dietz (born 1945) – information systems theory and Design & Engineering Methodology for Organizations Whitfield Diffie (born 1944) – public key cryptography, Diffie–Hellman key exchange Edsger W. 
Dijkstra – algorithms, Dijkstra's algorithm, Go To Statement Considered Harmful, semaphore (programming), IFIP WG 2.1 member Matthew Dillon – DragonFly BSD with LWKT, vkernel OS-level virtualisation, file systems: HAMMER1, HAMMER2 Alan Dix – wrote important university level textbook on human–computer interaction Jack Dongarra – linear algebra high performance computing (HCI) Marco Dorigo – ant colony optimization Paul Dourish – human computer interaction Charles Stark Draper (1901–1987) – designer of Apollo Guidance Computer, "father of inertial navigation", MIT professor Susan Dumais – information retrieval Adam Dunkels – Contiki, lwIP, uIP, protothreads Jon Michael Dunn – founding dean of Indiana University School of Informatics, information based logics especially relevance logic Schahram Dustdar – Distributed Systems, TU Wien, Austria E Peter Eades – graph drawing Annie Easley Wim Ebbinkhuijsen – COBOL John Presper Eckert – ENIAC Alan Edelman – Edelman's Law, stochastic operator, Interactive Supercomputing, Julia (programming language) cocreator, high performance computing, numerical computing Brendan Eich – JavaScript, Mozilla Philip Emeagwali – supercomputing E. Allen Emerson – model checking Douglas Engelbart – tiled windows, hypertext, computer mouse Barbara Engelhardt – latent variable models, genomics, quantitative trait locus (QTL) David Eppstein Andrey Ershov – languages ALPHA, Rapira; first Soviet time-sharing system AIST-0, electronic publishing system RUBIN, multiprocessing workstation MRAMOR, IFIP WG 2.1 member, Aesthetics and the Human Factor in Programming Don Estridge (1937–1985) – led development of original IBM Personal Computer (PC); known as "father of the IBM PC" Oren Etzioni – MetaCrawler, Netbot Christopher Riche Evans David C. Evans – computer graphics Shimon Even F Scott Fahlman Edward Feigenbaum – intelligence Edward Felten – computer security Tim Finin Raphael Finkel Donald Firesmith Gary William Flake Tommy Flowers – Colossus computer Robert Floyd – NP-completeness Sally Floyd – Internet congestion control Lawrence J. Fogel – evolutionary programming James D. Foley Ken Forbus L. R. Ford, Jr. Lance Fortnow Mahmoud Samir Fayed – PWCT, Ring Martin Fowler Robert France Herbert W. Franke Edward Fredkin Yoav Freund Daniel P. Friedman Charlotte Froese Fischer – computational theoretical physics Ping Fu Xiaoming Fu Kunihiko Fukushima – neocognitron, artificial neural networks, convolutional neural network architecture, unsupervised learning, deep learning D. R. Fulkerson G Richard P. Gabriel – Maclisp, Common Lisp, Worse is Better, League for Programming Freedom, Lucid Inc., XEmacs Zvi Galil Bernard Galler – MAD (programming language) Hector Garcia-Molina Michael Garey – NP-completeness Hugo de Garis Bill Gates – cofounder of Microsoft David Gelernter Lisa Gelobter – was the Chief Digital Service Officer for the U.S. Department of Education, founder of teQuitable Charles Geschke Zoubin Ghahramani Sanjay Ghemawat Jeremy Gibbons – generic programming, functional programming, formal methods, computational biology, bioinformatics Juan E. Gilbert – human-centered computing Lee Giles – CiteSeer Seymour Ginsburg – formal languages, automata theory, AFL theory, database theory Robert L. Glass Kurt Gödel – computability; not a computer scientist per se, but his work was invaluable in the field Ashok Goel Joseph Goguen E. Mark Gold – Language identification in the limit Adele Goldberg – Smalltalk Andrew V. 
Goldberg – algorithms, algorithm engineering Ian Goldberg – cryptographer, off-the-record messaging Judy Goldsmith – computational complexity theory, decision theory, and computer ethics Oded Goldreich – cryptography, computational complexity theory Shafi Goldwasser – cryptography, computational complexity theory Gene Golub – Matrix computation Martin Charles Golumbic – algorithmic graph theory Gastón Gonnet – cofounder of Waterloo Maple Inc. Ian Goodfellow – machine learning James Gosling – Network extensible Window System (NeWS), Java Paul Graham – Viaweb, On Lisp, Arc Robert M. Graham – programming language compilers (GAT, Michigan Algorithm Decoder (MAD)), virtual memory architecture, Multics Susan L. Graham – compilers, programming environments Jim Gray – database Sheila Greibach – Greibach normal form, Abstract family of languages (AFL) theory David Gries – The Science of Programming, Interference freedom, Member Emeritus, IFIP WG 2.3 on Programming Methodology Robert Griesemer – Go language Ralph Griswold – SNOBOL Bill Gropp – Message Passing Interface, Portable, Extensible Toolkit for Scientific Computation (PETSc) Tom Gruber – ontology engineering Shelia Guberman – handwriting recognition Ramanathan V. Guha – Resource Description Framework (RDF), Netscape, RSS, Epinions Neil J. Gunther – computer performance analysis, capacity planning Jürg Gutknecht – with Niklaus Wirth: Lilith computer; Modula-2, Oberon, Zonnon programming languages; Oberon operating system Michael Guy – Phoenix, work on number theory, computer algebra, higher dimension polyhedra theory; with John Horton Conway H Nico Habermann – operating systems, software engineering, inter-process communication, process synchronization, deadlock avoidance, software verification, programming languages: ALGOL 60, BLISS, Pascal, Ada Philipp Matthäus Hahn – mechanical calculator Eldon C. Hall – Apollo Guidance Computer Wendy Hall Joseph Halpern Margaret Hamilton – ultra-reliable software design, Apollo program space missions Richard Hamming – Hamming code, founder of the Association for Computing Machinery Jiawei Han – data mining Frank Harary – graph theory Brian Harris – machine translation research, Canada's first computer-assisted translation course, natural translation theory, community interpreting (Critical Link) Juris Hartmanis – computational complexity theory Johan Håstad – computational complexity theory Les Hatton – software failure and vulnerabilities Igor Hawryszkiewycz (born 1948) – American computer scientist and organizational theorist He Jifeng – provably correct systems Eric Hehner – predicative programming, formal methods, quote notation, ALGOL Martin Hellman – encryption Gernot Heiser – operating system teaching, research, commercialising, Open Kernel Labs, OKL4, Wombat James Hendler – Semantic Web John L. Hennessy – computer architecture Andrew Herbert Carl Hewitt Kelsey Hightower – open source, cloud computing Danny Hillis – Connection Machine Geoffrey Hinton Julia Hirschberg Tin Kam Ho – artificial intelligence, machine learning C. A. R.
Hoare – logic, rigor, communicating sequential processes (CSP) Louis Hodes (1934–2008) – Lisp, pattern recognition, logic programming, cancer research Betty Holberton – ENIAC programmer, developed the first Sort Merge Generator John Henry Holland – genetic algorithms Herman Hollerith (1860–1929) – invented recording of data on a machine readable medium, using punched cards Gerard Holzmann – software verification, logic model checking (SPIN) John Hopcroft – compilers Admiral Grace Hopper (1906–1992) – developed early compilers: FLOW-Matic, COBOL; worked on UNIVAC; gave speeches on computer history, where she gave out nano-seconds Eric Horvitz – artificial intelligence Alston Householder Paul Hudak (1952–2015) – Haskell language design, textbooks on it and computer music David A. Huffman (1925–1999) – Huffman coding, used in data compression John Hughes – structuring computations with arrows; QuickCheck randomized program testing framework; Haskell language design Roger Hui – co-created J language Watts Humphrey (1927–2010) – Personal Software Process (PSP), Software quality, Team Software Process (TSP) Sandra Hutchins (born 1946) – speech recognition I Jean Ichbiah – Ada Roberto Ierusalimschy – Lua (programming language) Dan Ingalls – Smalltalk, BitBlt, Lively Kernel Mary Jane Irwin Kenneth E. Iverson – APL, J J Ivar Jacobson – Unified Modeling Language, Object Management Group Anil K. Jain (born 1948) Ramesh Jain Jonathan James Jordi Ustrell Aguilà David S. Johnson Stephen C. Johnson Angie Jones – software engineer and automation architect. Holds 26 patented inventions in the United States of America and Japan Cliff Jones – Vienna Development Method (VDM) Michael I. Jordan Mathai Joseph Aravind K. Joshi Bill Joy (born 1954) – Sun Microsystems, BSD UNIX, vi, csh Dan Jurafsky – natural language processing K William Kahan – numerical analysis Robert E. Kahn – TCP/IP Avinash Kak – digital image processing Poul-Henning Kamp – invented GBDE, FreeBSD Jails, Varnish cache David Karger Richard Karp – NP-completeness Narendra Karmarkar – Karmarkar's algorithm Marek Karpinski – NP optimization problems Ted Kaehler – Smalltalk, Squeak, HyperCard Alan Kay – Dynabook, Smalltalk, overlapping windows Neeraj Kayal – AKS primality test Manolis Kellis – computational biology John George Kemeny – the language BASIC Ken Kennedy – compiling for parallel and vector machines Brian Kernighan (born 1942) – Unix, the 'k' in AWK Carl Kesselman – grid computing Gregor Kiczales – CLOS, reflective programming, aspect-oriented programming Peter T. Kirstein – Internet Stephen Cole Kleene – Kleene closure, recursion theory Dan Klein – Natural language processing, Machine translation Leonard Kleinrock – ARPANET, queueing theory, packet switching, hierarchical routing Donald Knuth – The Art of Computer Programming, MIX/MMIX, TeX, literate programming Andrew Koenig – C++ Daphne Koller – Artificial intelligence, bayesian network Michael Kölling – BlueJ Andrey Nikolaevich Kolmogorov – algorithmic complexity theory Janet L. Kolodner – case-based reasoning David Korn – KornShell Kees Koster – ALGOL 68 Robert Kowalski – logic programming John Koza – genetic programming John Krogstie – SEQUAL framework Joseph Kruskal – Kruskal's algorithm Maarja Kruusmaa – underwater roboticist Thomas E. Kurtz (1928–2024) – BASIC programming language; Dartmouth College computer professor L Richard E. Ladner Monica S. 
Lam Leslie Lamport – algorithms for distributed computing, LaTeX Butler Lampson – SDS 940, founding member Xerox PARC, Xerox Alto, Turing Award Peter Landin – ISWIM, J operator, SECD machine, off-side rule, syntactic sugar, ALGOL, IFIP WG 2.1 member, advanced lambda calculus to model programming languages (aided functional programming), denotational semantics Tom Lane – Independent JPEG Group, PostgreSQL, Portable Network Graphics (PNG) Börje Langefors Chris Lattner – creator of Swift (programming language) and LLVM compiler infrastructure Steve Lawrence Edward D. Lazowska Joshua Lederberg Manny M Lehman Charles E. Leiserson – cache-oblivious algorithms, provably good work-stealing, coauthor of Introduction to Algorithms Douglas Lenat – artificial intelligence, Cyc Yann LeCun Rasmus Lerdorf – PHP Max Levchin – Gausebeck–Levchin test and PayPal Leonid Levin – computational complexity theory Kevin Leyton-Brown – artificial intelligence J.C.R. Licklider David Liddle Jochen Liedtke – microkernel operating systems Eumel, L3, L4 John Lions – Lions' Commentary on UNIX 6th Edition, with Source Code (Lions Book) Charles H. Lindsey – IFIP WG 2.1 member, Revised Report on ALGOL 68 Richard J. Lipton – computational complexity theory Barbara Liskov – programming languages Yanhong Annie Liu – programming languages, algorithms, program design, program optimization, software systems, optimizing, analysis, and transformations, intelligent systems, distributed computing, computer security, IFIP WG 2.1 member Darrell Long – computer data storage, computer security Patricia D. Lopez – broadening participation in computing Gillian Lovegrove Ada Lovelace – first programmer David Luckham – Lisp, Automated theorem proving, Stanford Pascal Verifier, Complex event processing, Rational Software cofounder (Ada compiler) Eugene Luks Nancy Lynch M Nadia Magnenat Thalmann – computer graphics, virtual actor Tom Maibaum George Mallen – creative computing, computer arts Simon Marlow – Haskell developer, book author; co-developer: Glasgow Haskell Compiler, Haxl remote data access library Zohar Manna – fuzzy logic James Martin – information engineering Robert C. 
Martin (Uncle Bob) – software craftsmanship John Mashey Yuri Matiyasevich – solving Hilbert's tenth problem Yukihiro Matsumoto – Ruby (programming language) John Mauchly (1907–1980) – designed ENIAC, first general-purpose electronic digital computer, and EDVAC, BINAC and UNIVAC I, the first commercial computer; worked with Jean Bartik on ENIAC and Grace Murray Hopper on UNIVAC Ujjwal Maulik (born 1965) multi-objective clustering and Bioinformatics Derek McAuley – ubiquitous computing, computer architecture, networking Conor McBride – researches type theory, functional programming; cocreated Epigram (programming language) with James McKinna; member IFIP Working Group 2.1 on Algorithmic Languages and Calculi John McCarthy – Lisp (programming language), ALGOL, IFIP WG 2.1 member, artificial intelligence Andrew McCallum Douglas McIlroy – macros, pipes, Unix philosophy Chris McKinstry – artificial intelligence, Mindpixel Marshall Kirk McKusick – BSD, Berkeley Fast File System Lambert Meertens – ALGOL 68, IFIP WG 2.1 member, ABC (programming language) Kurt Mehlhorn – algorithms, data structures, LEDA Dora Metcalf – entrepreneur, engineer and mathematician Bertrand Meyer – Eiffel (programming language) Silvio Micali – cryptography Robin Milner – ML (programming language) Jack Minker – database logic Marvin Minsky – artificial intelligence, perceptrons, Society of Mind James G. Mitchell – WATFOR compiler, Mesa (programming language), Spring (operating system), ARM architecture Tom M. Mitchell Arvind Mithal – formal verification of large digital systems, developing dynamic dataflow architectures, parallel computing programming languages (Id, pH), compiling on parallel machines Paul Mockapetris – Domain Name System (DNS) Cleve Moler – numerical analysis, MATLAB Faron Moller – concurrency theory John P. Moon – inventor, Apple Inc. Charles H. Moore – Forth language Edward F. Moore – Moore machine Gordon Moore – Moore's law J Strother Moore – string searching, ACL2 theorem prover Roger Moore – co-developed APL\360, created IPSANET, co-founded I. P. Sharp Associates Hans Moravec – robotics Carroll Morgan – formal methods Robert Tappan Morris – Morris worm Joel Moses – Macsyma Rajeev Motwani – randomized algorithm Oleg A. Mukhanov – quantum computing developer, co-founder and CTO of SeeQC Stephen Muggleton – Inductive Logic Programming Klaus-Robert Müller – machine learning, artificial intelligence Alan Mycroft – programming languages Brad A. Myers – human-computer interaction N Mihai Nadin – anticipation research Makoto Nagao – machine translation, natural language processing, digital library Frieder Nake – pioneered computer arts Bonnie Nardi – human–computer interaction Peter Naur (1928–2016) – Backus–Naur form (BNF), ALGOL 60, IFIP WG 2.1 member Roger Needham – computer security James G. 
Nell – Generalised Enterprise Reference Architecture and Methodology (GERAM) Greg Nelson (1953–2015) – satisfiability modulo theories, extended static checking, program verification, Modula-3 committee, Simplify theorem prover in ESC/Java Bernard de Neumann – massively parallel autonomous cellular processor, software engineering research Klara Dan von Neumann (1911–1963) – early computers, ENIAC programmer and control designer John von Neumann (1903–1957) – early computers, von Neumann machine, set theory, functional analysis, mathematics pioneer, linear programming, quantum mechanics Allen Newell – artificial intelligence, Computer Structures Max Newman – Colossus computer, MADM Andrew Ng – artificial intelligence, machine learning, robotics Nils John Nilsson (1933–2019) – artificial intelligence G.M. Nijssen – Nijssen's Information Analysis Methodology (NIAM) object–role modeling Tobias Nipkow – proof assistance Maurice Nivat – theoretical computer science, Theoretical Computer Science journal, ALGOL, IFIP WG 2.1 member Jerre Noe – computerized banking Peter Nordin – artificial intelligence, genetic programming, evolutionary robotics Donald Norman – user interfaces, usability Peter Norvig – artificial intelligence, Director of Research at Google George Novacky – University of Pittsburgh: assistant department chair, senior lecturer in computer science, assistant dean of CAS for undergraduate studies Kristen Nygaard – Simula, object-oriented programming O Martin Odersky – Scala programming language Peter O'Hearn – separation logic, bunched logic, Infer Static Analyzer T. William Olle – Ferranti Mercury Steve Omohundro Severo Ornstein John O'Sullivan – Wi-Fi John Ousterhout – Tcl programming language Mark Overmars – video game programming Susan Owicki – interference freedom P Larry Page – co-founder of Google Sankar Pal Paritosh Pandya Christos Papadimitriou David Park (1935–1990) – first Lisp implementation, expert in fairness, program schemas, bisimulation in concurrent computing David Parnas – information hiding, modular programming DJ Patil – former Chief Data Scientist of United States Yale Patt – Instruction-level parallelism, speculative architectures David Patterson – reduced instruction set computer (RISC), RISC-V, redundant arrays of inexpensive disks (RAID), Berkeley Network of Workstations (NOW) Mike Paterson – algorithms, analysis of algorithms (complexity) Mihai Pătraşcu – data structures Lawrence Paulson – ML Randy Pausch (1960–2008) – human–computer interaction, Carnegie professor, "Last Lecture" Juan Pavón – software agents Judea Pearl – artificial intelligence, search algorithms Alan Perlis – Programming Pearls Radia Perlman – spanning tree protocol Pier Giorgio Perotto – computer designer at Olivetti, designer of the Programma 101 programmable calculator Rózsa Péter – recursive function theory Simon Peyton Jones – functional programming, Glasgow Haskell Compiler, C-- Kathy Pham – data, artificial intelligence, civic technology, healthcare, ethics Roberto Pieraccini – speech technologist, engineering director at Google Keshav Pingali – IEEE Computer Society Charles Babbage Award, ACM Fellow (2012) Gordon Plotkin Amir Pnueli – temporal logic Willem van der Poel – computer graphics, robotics, geographic information systems, imaging, multimedia, virtual environments, games Robin Popplestone – COWSEL (renamed POP-1), POP-2, POP-11 languages, Poplog IDE; Freddy II robot Cicely Popplewell (1920–1995) – British software engineer in 1960s Emil Post – mathematics Jon Postel – 
Internet Franco Preparata – computer engineering, computational geometry, parallel algorithms, computational biology William H. Press – numerical algorithms R Rapelang Rabana Grzegorz Rozenberg – natural computing, automata theory, graph transformations and concurrent systems Michael O. Rabin – nondeterministic machine Dragomir R. Radev – natural language processing, information retrieval T. V. Raman – accessibility, Emacspeak Brian Randell – ALGOL 60, software fault tolerance, dependability, pre-1950 history of computing hardware Anders P. Ravn – Duration Calculus Raj Reddy – artificial intelligence David P. Reed Trygve Reenskaug – model–view–controller (MVC) software architecture pattern John C. Reynolds – continuations, definitional interpreters, defunctionalization, Forsythe, Gedanken language, intersection types, polymorphic lambda calculus, relational parametricity, separation logic, ALGOL Joyce K. Reynolds – Internet Reinder van de Riet – Editor: Europe of Data and Knowledge Engineering, COLOR-X event modeling language Bernard Richards – medical informatics Martin Richards – Basic Combined Programming Language (BCPL) Adam Ries – advocate for Arabic numerals to replace Roman numerals C. J. van Rijsbergen Dennis Ritchie – C (programming language), Unix Ron Rivest – RSA, MD5, RC4 Lawrence Roberts – ARPANET program manager, Internet cofounder Ken Robinson – formal methods Colette Rolland – REMORA methodology, meta modelling John Romero – codeveloped Doom Azriel Rosenfeld Douglas T. Ross – Automatically Programmed Tools (APT), Computer-aided design, structured analysis and design technique, ALGOL X Guido van Rossum – Python (programming language) M. A. Rothman – UEFI Winston W. Royce – waterfall model Rudy Rucker – mathematician, writer, educator Steven Rudich – complexity theory, cryptography Jeff Rulifson James Rumbaugh – Unified Modeling Language, Object Management Group Peter Ružička – Slovak computer scientist and mathematician S George Sadowsky Mehrnoosh Sadrzadeh – compositional models of meaning, machine learning Umar Saif Gerard Salton – information retrieval Jean E. Sammet – programming languages Claude Sammut – artificial intelligence researcher Carl Sassenrath – operating systems, programming languages, Amiga, REBOL Mahadev Satyanarayanan – file systems, distributed systems, mobile computing, pervasive computing Walter Savitch – discovery of complexity class NL, Savitch's theorem, natural language processing, mathematical linguistics Nitin Saxena – AKS Primality test for polynomial time primality testing, computational complexity theory Jonathan Schaeffer Wilhelm Schickard – one of the first calculating machines Jürgen Schmidhuber – artificial intelligence, deep learning, artificial neural networks, recurrent neural networks, Gödel machine, artificial curiosity, meta-learning Steve Schneider – formal methods, security Bruce Schneier – cryptography, security Fred B. Schneider – concurrent and distributed computing Sarita Schoenebeck – human–computer interaction Glenda Schroeder – command-line shell, e-mail Bernhard Schölkopf – machine learning, artificial intelligence Dana Scott – domain theory Michael L. Scott – programming languages, algorithms, distributed computing Robert Sedgewick – algorithms, data structures Ravi Sethi – compilers, 2nd Dragon Book Nigel Shadbolt Adi Shamir – RSA, cryptanalysis Claude Shannon – information theory David E. 
Shaw – computational finance, computational biochemistry, parallel architectures Cliff Shaw – systems programmer, artificial intelligence Scott Shenker – networking Shashi Shekhar – spatial computing Ben Shneiderman – human–computer interaction, information visualization Edward H. Shortliffe – MYCIN (medical diagnostic expert system) Daniel Siewiorek – electronic design automation, reliability computing, context aware mobile computing, wearable computing, computer-aided design, rapid prototyping, fault tolerance Joseph Sifakis – model checking Herbert A. Simon – artificial intelligence Munindar P. Singh – multiagent systems, software engineering, artificial intelligence, social networks Ramesh Sitaraman – helped build Akamai's high performance network Daniel Sleator – splay tree, amortized analysis Aaron Sloman – artificial intelligence and cognitive science Arne Sølvberg – information modelling Brian Cantwell Smith – reflective programming, 3lisp David Canfield Smith – invented interface icons, programming by demonstration, developed graphical user interface, Xerox Star; Xerox PARC researcher, cofounded Dest Systems, Cognition Steven Spewak – enterprise architecture planning Carol Spradling Robert Sproull Rohini Kesavan Srihari – information retrieval, text analytics, multilingual text mining Sargur Srihari – pattern recognition, machine learning, computational criminology, CEDAR-FOX Maciej Stachowiak – GNOME, Safari, WebKit Richard Stallman (born 1953) – GNU Project Ronald Stamper Thad Starner Richard E. Stearns – computational complexity theory Guy L. Steele, Jr. – Scheme, Common Lisp Thomas Sterling – creator of Beowulf clusters Alexander Stepanov – generic programming W. Richard Stevens (1951–1999) – author of books, including TCP/IP Illustrated and Advanced Programming in the Unix Environment Larry Stockmeyer – computational complexity, distributed computing Salvatore Stolfo – computer security, machine learning Michael Stonebraker – relational database practice and theory Olaf Storaasli – finite element machine, linear algebra, high performance computing Christopher Strachey – denotational semantics Volker Strassen – matrix multiplication, integer multiplication, Solovay–Strassen primality test Bjarne Stroustrup – C++ Madhu Sudan – computational complexity theory, coding theory Gerald Jay Sussman – Scheme Bert Sutherland – graphics, Internet Ivan Sutherland – graphics Latanya Sweeney – data privacy and algorithmic fairness Mario Szegedy – complexity theory, quantum computing T Parisa Tabriz – Google Director of Engineering, also known as the Security Princess Roberto Tamassia – computational geometry, computer security Andrew S. Tanenbaum – operating systems, MINIX Austin Tate – Artificial Intelligence Applications, AI Planning, Virtual Worlds Bernhard Thalheim – conceptual modelling foundation Éva Tardos Gábor Tardos Robert Tarjan – splay tree Valerie Taylor Mario Tchou – Italian engineer, of Chinese descent, leader of Olivetti Elea project Jaime Teevan Shang-Hua Teng – analysis of algorithms Larry Tesler – human–computer interaction, graphical user interface, Apple Macintosh Avie Tevanian – Mach kernel team, NeXT, Mac OS X Charles P. 
Thacker – Xerox Alto, Microsoft Research Daniel Thalmann – computer graphics, virtual actor Ken Thompson – mainly designed and authored Unix, Plan 9 and Inferno operating systems, B and Bon languages (precursors of C), created UTF-8 character encoding, introduced regular expressions in QED, co-authored Go language Simon Thompson – functional programming research, textbooks; Cardano domain-specific languages: Marlowe Sebastian Thrun – AI researcher, pioneered autonomous driving Walter F. Tichy – RCS Seinosuke Toda – computation complexity, recipient of 1998 Gödel Prize Chai Keong Toh – mobile ad hoc networks pioneer Linus Torvalds – Linux kernel, Git Leonardo Torres Quevedo (1852–1936) – invented El Ajedrecista (the chess player) in 1912, a true automaton built to play chess without human guidance. In his work Essays on Automatics (1913), introduced the idea of floating-point arithmetic. In 1920, built an early electromechanical device of the Analytical Engine. Godfried Toussaint – computational geometry, computational music theory Gloria Townsend Edwin E. Tozer – business information systems Joseph F Traub – computational complexity of scientific problems John V. Tucker – computability theory John Tukey – founder of FFT algorithm, box plot, exploratory data analysis and Coining the term 'bit' Alan Turing (1912–1954) – British computing pioneer, Turing machine, algorithms, cryptology, computer architecture David Turner – SASL, Kent Recursive Calculator, Miranda, IFIP WG 2.1 member Murray Turoff – computer-mediated communication U Jeffrey D. Ullman – compilers, databases, complexity theory V Leslie Valiant – computational complexity theory, computational learning theory Vladimir Vapnik – pattern recognition, computational learning theory Moshe Vardi – professor of computer science at Rice University Dorothy Vaughan Bernard Vauquois – pioneered computer science in France, machine translation (MT) theory and practice including Vauquois triangle, ALGOL 60 Umesh Vazirani Manuela M. Veloso François Vernadat – enterprise modeling Richard Veryard – enterprise modeling Sergiy Vilkomir – software testing, RC/DC Paul Vitanyi – Kolmogorov complexity, Information distance, Normalized compression distance, Normalized Google distance Andrew Viterbi – Viterbi algorithm Jeffrey Scott Vitter – external memory algorithms, compressed data structures, data compression, databases Paul Vixie – DNS, BIND, PAIX, Internet Software Consortium, MAPS, DNSBL W Eiiti Wada – ALGOL N, IFIP WG 2.1 member, Japanese Industrial Standards (JIS) X 0208, 0212, Happy Hacking Keyboard David Wagner – security, cryptography David Waltz James Z. Wang Steve Ward Manfred K. Warmuth – computational learning theory David H. D. Warren – AI, logic programming, Prolog, Warren Abstract Machine (WAM) Kevin Warwick – artificial intelligence Jan Weglarz Philip Wadler – functional programming, Haskell, Monad, Java, logic Peter Wegner – object-oriented programming, interaction (computer science) Joseph Henry Wegstein – ALGOL 58, ALGOL 60, IFIP WG 2.1 member, data processing technical standards, fingerprint analysis Peter J. Weinberger – programming language design, the 'w' in AWK Mark Weiser – ubiquitous computing Joseph Weizenbaum – artificial intelligence, ELIZA David Wheeler – EDSAC, subroutines Franklin H. 
Westervelt – use of computers in engineering education, conversational use of computers, Michigan Terminal System (MTS), ARPANET, distance learning Steve Whittaker – human computer interaction, computer support for cooperative work, social media Jennifer Widom – nontraditional data management Gio Wiederhold – database management systems Norbert Wiener – Cybernetics Adriaan van Wijngaarden – Dutch pioneer; ARRA, ALGOL, IFIP WG 2.1 member Mary Allen Wilkes – LINC developer, assembler-linker designer Maurice Vincent Wilkes – microprogramming, EDSAC Yorick Wilks – computational linguistics, artificial intelligence James H. Wilkinson – numerical analysis Sophie Wilson – ARM architecture Shmuel Winograd – Coppersmith–Winograd algorithm Terry Winograd – artificial intelligence, SHRDLU Patrick Winston – artificial intelligence Niklaus Wirth – ALGOL W, IFIP WG 2.1 member, Pascal, Modula, Oberon Neil Wiseman – computer graphics Dennis E. Wisnosky – Integrated Computer-Aided Manufacturing (ICAM), IDEF Stephen Wolfram – Mathematica Mike Woodger – Pilot ACE, ALGOL 60, Ada (programming language) Philip Woodward – ambiguity function, sinc function, comb operator, rep operator, ALGOL 68-R Beatrice Helen Worsley – wrote the first PhD dissertation involving modern computers; was one of the people who wrote Transcode Steve Wozniak – engineered first generation personal computers at Apple Computer Jie Wu – computer networks William Wulf – BLISS system programming language + optimizing compiler, Hydra operating system, Tartan Laboratories Y Mihalis Yannakakis Andrew Chi-Chih Yao John Yen Nobuo Yoneda – Yoneda lemma, Yoneda product, ALGOL, IFIP WG 2.1 member Edward Yourdon – Structured Systems Analysis and Design Method Moti Yung Z Lotfi Zadeh – fuzzy logic Hans Zantema – termination analysis Arif Zaman – pseudo-random number generator Stanley Zdonik — database management systems Hussein Zedan – formal methods and real-time systems Shlomo Zilberstein – artificial intelligence, anytime algorithms, automated planning, and decentralized POMDPs Jill Zimmerman – James M. Beall Professor of Mathematics and Computer Science at Goucher College Mark Zuckerberg – cofounder of Facebook and Meta Platforms Konrad Zuse – German pioneer of hardware and software See also List of computing people List of Jewish American computer scientists List of members of the National Academy of Sciences (computer and information sciences) List of pioneers in computer science List of programmers List of programming language researchers List of Russian IT developers List of Slovenian computer scientists List of Indian computer scientists References External links CiteSeer list of the most cited authors in computer science Computer scientists with h-index >= 40 Lists of people by occupation
List of computer scientists
[ "Technology" ]
8,801
[ "Computing-related lists", "Computer science", "Computer scientists", "Lists of computer scientists" ]
6,839
https://en.wikipedia.org/wiki/Reaction%20kinetics%20in%20uniform%20supersonic%20flow
Reaction kinetics in uniform supersonic flow (French: Cinétique de Réaction en Ecoulement Supersonique Uniforme, CRESU) is an experiment investigating chemical reactions taking place at very low temperatures. The technique involves the expansion of a gas or mixture of gases through a de Laval nozzle from a high-pressure reservoir into a vacuum chamber. As it expands, the nozzle collimates the gas into a uniform supersonic flow, which is isolated from the walls of the chamber and has a temperature that, in the frame moving with the flow, can be significantly below that of the reservoir gas. Each nozzle produces a characteristic temperature. This way, any temperature between room temperature and about 10 K can be achieved. Apparatus There are relatively few CRESU apparatuses in existence for the simple reason that the gas throughput and pumping requirements are huge, which makes them expensive to run. Two of the leading centres have been the University of Rennes (France) and the University of Birmingham (UK). A more recent development has been a pulsed version of the CRESU, which requires far less gas and therefore smaller pumps. Kinetics Most species have a negligible vapour pressure at such low temperatures, which means that in a conventionally cooled reaction vessel they would quickly condense on the walls of the apparatus. Essentially, the CRESU technique provides a "wall-less flow tube", which allows the kinetics of gas-phase reactions to be investigated at much lower temperatures than otherwise possible. Chemical kinetics experiments can then be carried out in a pump–probe fashion, using a laser to initiate the reaction (for example, by preparing one of the reagents by photolysis of a precursor), followed by observation of that same species (for example, by laser-induced fluorescence) after a known time delay. The fluorescence signal is captured by a photomultiplier a known distance downstream of the de Laval nozzle. The time delay can be varied up to the maximum corresponding to the flow time over that known distance. By studying how quickly the reagent species disappears in the presence of differing concentrations of a (usually stable) co-reagent species, the reaction rate constant at the low temperature of the CRESU flow can be determined. Reactions studied by the CRESU technique typically have no significant activation energy barrier. In the case of neutral–neutral reactions (i.e., not involving any charged species, ions), such barrier-free reactions usually involve free radical species, such as molecular oxygen (O2), the cyanide radical (CN) or the hydroxyl radical (OH). The energetic driving force for these reactions is typically an attractive long-range intermolecular potential. CRESU experiments have been used to show deviations from Arrhenius kinetics at low temperatures: as the temperature is reduced, the rate constant actually increases. Such results help explain why chemistry is so prevalent in the interstellar medium, where many different polyatomic species have been detected (by radio astronomy). See also Cryochemistry References Chemistry experiments Chemical kinetics
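The pseudo-first-order analysis implied above (following the decay of the laser-prepared reagent at several known co-reagent concentrations, then taking the second-order rate constant from the slope of the fitted decay constants) can be sketched numerically. The following Python snippet is a minimal illustration with invented numbers (rate constant, concentrations, delays and noise level); it is not code from any CRESU laboratory.

    import numpy as np

    # Illustrative values only: a typical barrierless rate constant and a few
    # co-reagent number densities (molecule cm^-3).
    k_true = 3.0e-11
    conc = np.array([0.5, 1.0, 2.0, 4.0]) * 1e14
    t = np.linspace(0.0, 300e-6, 60)          # pump-probe delays up to 300 microseconds
    rng = np.random.default_rng(1)

    k_prime = []
    for n in conc:
        decay = k_true * n                     # pseudo-first-order decay constant (s^-1)
        signal = np.exp(-decay * t) * (1.0 + 0.02 * rng.standard_normal(t.size))
        slope, _ = np.polyfit(t, np.log(signal), 1)   # ln(signal) falls linearly with t
        k_prime.append(-slope)

    # The second-order rate constant is the slope of k' against co-reagent concentration.
    k_fit, intercept = np.polyfit(conc, np.array(k_prime), 1)
    print(f"recovered k = {k_fit:.2e} cm^3 molecule^-1 s^-1 (input {k_true:.2e})")

In practice the intercept of the second fit absorbs loss processes, such as diffusion out of the probed volume, that do not depend on the co-reagent concentration.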
Reaction kinetics in uniform supersonic flow
[ "Chemistry" ]
618
[ "Chemical reaction engineering", "nan", "Chemical kinetics" ]
6,840
https://en.wikipedia.org/wiki/Cygwin
Cygwin is a free and open-source Unix-like environment and command-line interface (CLI) for Microsoft Windows. The project also provides a software repository containing open-source packages. Cygwin allows source code for Unix-like operating systems to be compiled and run on Windows. Cygwin also provides native integration of Windows-based applications and data with the Unix-like environment. The terminal emulator Mintty is the default command-line interface (CLI) provided to interact with the environment. The Cygwin installation directory layout mimics the root file system of Unix-like systems, with directories such as /bin, /home, /etc, /usr, and /var. Cygwin is released under the GNU Lesser General Public License version 3. It was originally developed by Cygnus Solutions, which was later acquired by Red Hat (now part of IBM), to port the GNU toolchain to Win32, including the GNU Compiler Collection (GCC). Rather than rewrite the tools to use the Win32 runtime environment, Cygwin implemented a POSIX-compatible environment in the form of a DLL. The brand motto is "Get that Linux feeling – on Windows", although Cygwin contains no Linux code. History Cygwin began in 1995 as a project of Steve Chamberlain, a Cygnus engineer who observed that Windows NT and 95 used COFF as their object file format, and that GNU already included support for x86 and COFF, and the C library newlib. He thought that it would be possible to retarget GCC and produce a cross compiler generating executables that could run on Windows. A prototype was later developed, and Chamberlain bootstrapped the compiler on a Windows system by emulating enough of a Unix environment to let the GNU configure shell script run. Initially, Cygwin was called gnuwin32. When Microsoft registered the trademark Win32, the "32" was dropped to simply become Cygwin. In 1999, Cygnus offered Cygwin 1.0 as a commercial product; no later commercial versions have been released, and development has instead continued through open-source releases. Geoffrey Noer was the project lead from 1996 to 1999. Christopher Faylor was lead from 1999 to 2004; after leaving Red Hat he became co-lead with Corinna Vinschen. Corinna Vinschen has been the project lead since mid-2014 (as of September 2024). Starting with library version 2.5.2 (June 23, 2016), Cygwin has been licensed under the GNU Lesser General Public License (LGPL) version 3. Description Cygwin is provided in two versions: the full 64-bit version and a stripped-down 32-bit version, whose final release was in 2022. Cygwin consists of a library that implements the POSIX system call API in terms of Windows system calls to enable the running of a large number of application programs equivalent to those on Unix systems, and a GNU development toolchain (including GCC and GDB). Programmers have ported the X Window System, K Desktop Environment 3, GNOME, Apache, and TeX. Cygwin permits installing inetd, syslogd, sshd, Apache, and other daemons as standard Windows services. Cygwin programs have full access to the Windows API and other Windows libraries. Cygwin programs are installed by running Cygwin's "setup" program, which downloads them from repositories on the Internet. The Cygwin API library is licensed under the GNU Lesser General Public License version 3 (or later), with an exception to allow linking to any free and open-source software whose license conforms to the Open Source Definition.
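As a minimal illustration of the POSIX layer just described, the script below uses only POSIX-style calls (fork, uname, waitpid) and runs unchanged under a Cygwin-built Python interpreter as it does on Linux or BSD, because the Cygwin DLL supplies those calls on top of Win32. It is a generic sketch of portable POSIX code, not part of Cygwin itself, and the sys.platform value noted in the comment assumes an interpreter built against the Cygwin library.

    import os
    import sys

    print("platform:", sys.platform)      # expected to report "cygwin" for a Cygwin-built Python
    print("kernel:", os.uname().sysname)  # os.uname() is a POSIX-only API emulated by the DLL

    pid = os.fork()                        # fork(2); Cygwin implements it without copy-on-write
    if pid == 0:
        print("child pid", os.getpid())
        os._exit(0)
    else:
        os.waitpid(pid, 0)
        print("parent: child", pid, "exited")

The same script fails on a native Windows build of Python, where os.fork and os.uname are unavailable, which is precisely the gap the Cygwin DLL fills.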
Cygwin consists of two parts: A dynamic-link library in the form of a C standard library that acts as a compatibility layer for the POSIX API and A collection of software tools and applications that provide a Unix-like look and feel. Cygwin supports POSIX symbolic links, representing them as plain-text files with the system attribute set. Cygwin 1.5 represented them as Windows Explorer shortcuts, but this was changed for reasons of performance and POSIX correctness. Cygwin also recognises NTFS junction points and symbolic links and treats them as POSIX symbolic links, but it does not create them. The POSIX API for handling access control lists (ACLs) is supported. Technical details A Cygwin-specific version of the Unix mount command allows mounting Windows paths as "filesystems" in the Unix file space. Initial mount points can be configured in /etc/fstab, which has a format very similar to Unix systems, except that Windows paths appear in place of devices. Filesystems can be mounted in binary mode (by default), or in text mode, which enables automatic conversion between LF and CRLF endings (which only affects programs that open files without explicitly specifying text or binary mode). Cygwin 1.7 introduced comprehensive support for POSIX locales, and the UTF-8 Unicode encoding became the default. The fork system call for duplicating a process is fully implemented, but the copy-on-write optimization strategy could not be used. The Cygwin DLL contains a console driver that emulates a Unix-style terminal within the Windows console. Cygwin's default user interface is the bash shell running in the Cygwin console. The DLL also implements pseudo terminal (pty) devices. Cygwin ships with a number of terminal emulators that are based on them, including mintty, rxvt/urxvt, and xterm. The version of GCC that comes with Cygwin has various extensions for creating Windows DLLs, such as specifying whether a program is a windowing or console-mode program. Support for compiling programs that do not require the POSIX compatibility layer provided by the Cygwin DLL used to be included in the default GCC, but , it is provided by cross-compilers contributed by the MinGW-w64 project. Software packages Cygwin's base package selection is approximately 100MB, containing the bash (interactive user) and dash (installation) shells and the core file and text manipulation utilities. Additional packages are available as optional installs from within the Cygwin "setup" program and package manager ("setup-x86_64.exe" – 64 bit). The Cygwin Ports project provided additional packages that were not available in the Cygwin distribution itself. Examples included GNOME, K Desktop Environment 3, MySQL database, and the PHP scripting language. Most ports have been adopted by volunteer maintainers as Cygwin packages, and Cygwin Ports are no longer maintained. Cygwin ships with GTK+ and Qt. The Cygwin/X project allows graphical Unix programs to display their user interfaces on the Windows desktop for both local and remote programs. See also Notes References External links 1995 software Compatibility layers Programming tools Free and open source compilers Free emulation software Free software programmed in C Free software programmed in C++ Red Hat software System administration Unix emulators Windows-only free software
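To make the mount-table idea above concrete, here is a toy sketch in Python of the default /cygdrive mapping and of the line-ending conversion performed by text-mode mounts. It only mimics the behaviour: real Cygwin consults /etc/fstab and its internal mount table, and the function name and prefix below are illustrative, not Cygwin APIs.

    from pathlib import PureWindowsPath

    def to_posix_style(win_path: str, prefix: str = "/cygdrive") -> str:
        # Toy version of the default drive mapping; real Cygwin also honours
        # custom mount points configured in /etc/fstab.
        p = PureWindowsPath(win_path)
        if not p.drive:
            raise ValueError("expected an absolute Windows path such as C:\\dir\\file")
        letter = p.drive.rstrip(":").lower()
        return "/".join([prefix, letter, *p.parts[1:]])

    print(to_posix_style(r"C:\Users\demo\hello.txt"))   # /cygdrive/c/Users/demo/hello.txt

    # Text-mode mounts convert line endings on the fly; in spirit:
    windows_text = b"line one\r\nline two\r\n"
    print(windows_text.replace(b"\r\n", b"\n"))

The real mapping is configurable: the mount command and /etc/fstab entries described above can attach any Windows path at any POSIX location.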
Cygwin
[ "Technology" ]
1,501
[ "Information systems", "System administration" ]
6,854
https://en.wikipedia.org/wiki/Church%E2%80%93Turing%20thesis
In computability theory, the Church–Turing thesis (also known as computability thesis, the Turing–Church thesis, the Church–Turing conjecture, Church's thesis, Church's conjecture, and Turing's thesis) is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability: In 1933, Kurt Gödel, with Jacques Herbrand, formalized the definition of the class of general recursive functions: the smallest class of functions (with arbitrarily many arguments) that is closed under composition, recursion, and minimization, and includes zero, successor, and all projections. In 1936, Alonzo Church created a method for defining functions called the λ-calculus. Within λ-calculus, he defined an encoding of the natural numbers called the Church numerals. A function on the natural numbers is called λ-computable if the corresponding function on the Church numerals can be represented by a term of the λ-calculus. Also in 1936, before learning of Church's work, Alan Turing created a theoretical model for machines, now called Turing machines, that could carry out calculations from inputs by manipulating symbols on a tape. Given a suitable encoding of the natural numbers as sequences of symbols, a function on the natural numbers is called Turing computable if some Turing machine computes the corresponding function on encoded natural numbers. Church, Kleene, and Turing proved that these three formally defined classes of computable functions coincide: a function is λ-computable if and only if it is Turing computable, and if and only if it is general recursive. This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. Other formal attempts to characterize computability have subsequently strengthened this belief (see below). On the other hand, the Church–Turing thesis states that the above three formally-defined classes of computable functions coincide with the informal notion of an effectively calculable function. Although the thesis has near-universal acceptance, it cannot be formally proven, as the concept of effective calculability is only informally defined. Since its inception, variations on the original thesis have arisen, including statements about what can physically be realized by a computer in our universe (physical Church-Turing thesis) and what can be efficiently computed (Church–Turing thesis (complexity theory)). These variations are not due to Church or Turing, but arise from later work in complexity theory and digital physics. The thesis also has implications for the philosophy of mind (see below). Statement in Church's and Turing's words addresses the notion of "effective computability" as follows: "Clearly the existence of CC and RC (Church's and Rosser's proofs) presupposes a precise definition of 'effective'. 
'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps". Thus the adverb-adjective "effective" is used in a sense of "1a: producing a decided, decisive, or desired effect", and "capable of producing a result". In the following, the words "effectively calculable" will mean "produced by any intuitively 'effective' means whatsoever" and "effectively computable" will mean "produced by a Turing-machine or equivalent mechanical device". Turing's "definitions" given in a footnote in his 1938 Ph.D. thesis Systems of Logic Based on Ordinals, supervised by Church, are virtually the same: We shall use the expression "computable function" to mean a function calculable by a machine, and let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions. The thesis can be stated as: Every effectively calculable function is a computable function. Church also stated that "No computational procedure will be considered as an algorithm unless it can be represented as a Turing Machine". Turing stated it this way: It was stated ... that "a function is effectively calculable if its values can be found by some purely mechanical process". We may take this literally, understanding that by a purely mechanical process one which could be carried out by a machine. The development ... leads to ... an identification of computability with effective calculability. [ is the footnote quoted above.] History One of the important problems for logicians in the 1930s was the Entscheidungsproblem of David Hilbert and Wilhelm Ackermann, which asked whether there was a mechanical procedure for separating mathematical truths from mathematical falsehoods. This quest required that the notion of "algorithm" or "effective calculability" be pinned down, at least well enough for the quest to begin. But from the very outset Alonzo Church's attempts began with a debate that continues to this day. the notion of "effective calculability" to be (i) an "axiom or axioms" in an axiomatic system, (ii) merely a definition that "identified" two or more propositions, (iii) an empirical hypothesis to be verified by observation of natural events, or (iv) just a proposal for the sake of argument (i.e. a "thesis")? Circa 1930–1952 In the course of studying the problem, Church and his student Stephen Kleene introduced the notion of λ-definable functions, and they were able to prove that several large classes of functions frequently encountered in number theory were λ-definable. The debate began when Church proposed to Gödel that one should define the "effectively computable" functions as the λ-definable functions. Gödel, however, was not convinced and called the proposal "thoroughly unsatisfactory". Rather, in correspondence with Church (c. 1934–1935), Gödel proposed axiomatizing the notion of "effective calculability"; indeed, in a 1935 letter to Kleene, Church reported that: But Gödel offered no further guidance. Eventually, he would suggest his recursion, modified by Herbrand's suggestion, that Gödel had detailed in his 1934 lectures in Princeton NJ (Kleene and Rosser transcribed the notes). But he did not think that the two ideas could be satisfactorily identified "except heuristically". Next, it was necessary to identify and prove the equivalence of two notions of effective calculability. 
Equipped with the λ-calculus and "general" recursion, Kleene with help of Church and J. Barkley Rosser produced proofs (1933, 1935) to show that the two calculi are equivalent. Church subsequently modified his methods to include use of Herbrand–Gödel recursion and then proved (1936) that the Entscheidungsproblem is unsolvable: there is no algorithm that can determine whether a well formed formula has a beta normal form. Many years later in a letter to Davis (c. 1965), Gödel said that "he was, at the time of these [1934] lectures, not at all convinced that his concept of recursion comprised all possible recursions". By 1963–1964 Gödel would disavow Herbrand–Gödel recursion and the λ-calculus in favor of the Turing machine as the definition of "algorithm" or "mechanical procedure" or "formal system". A hypothesis leading to a natural law?: In late 1936 Alan Turing's paper (also proving that the Entscheidungsproblem is unsolvable) was delivered orally, but had not yet appeared in print. On the other hand, Emil Post's 1936 paper had appeared and was certified independent of Turing's work. Post strongly disagreed with Church's "identification" of effective computability with the λ-calculus and recursion, stating: Rather, he regarded the notion of "effective calculability" as merely a "working hypothesis" that might lead by inductive reasoning to a "natural law" rather than by "a definition or an axiom". This idea was "sharply" criticized by Church. Thus Post in his 1936 paper was also discounting Kurt Gödel's suggestion to Church in 1934–1935 that the thesis might be expressed as an axiom or set of axioms. Turing adds another definition, Rosser equates all three: Within just a short time, Turing's 1936–1937 paper "On Computable Numbers, with an Application to the Entscheidungsproblem" appeared. In it he stated another notion of "effective computability" with the introduction of his a-machines (now known as the Turing machine abstract computational model). And in a proof-sketch added as an "Appendix" to his 1936–1937 paper, Turing showed that the classes of functions defined by λ-calculus and Turing machines coincided. Church was quick to recognise how compelling Turing's analysis was. In his review of Turing's paper he made clear that Turing's notion made "the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately". In a few years (1939) Turing would propose, like Church and Kleene before him, that his formal definition of mechanical computing agent was the correct one. Thus, by 1939, both Church (1934) and Turing (1939) had individually proposed that their "formal systems" should be definitions of "effective calculability"; neither framed their statements as theses. Rosser (1939) formally identified the three notions-as-definitions: Kleene proposes Thesis I: This left the overt expression of a "thesis" to Kleene. In 1943 Kleene proposed his "Thesis I": The Church–Turing Thesis: Stephen Kleene, in Introduction To Metamathematics, finally goes on to formally name "Church's Thesis" and "Turing's Thesis", using his theory of recursive realizability. Kleene having switched from presenting his work in the terminology of Church-Kleene lambda definability, to that of Gödel-Kleene recursiveness (partial recursive functions). In this transition, Kleene modified Gödel's general recursive functions to allow for proofs of the unsolvability of problems in the Intuitionism of E. J. Brouwer. 
In his graduate textbook on logic, "Church's thesis" is introduced and basic mathematical results are demonstrated to be unrealizable. Next, Kleene proceeds to present "Turing's thesis", where results are shown to be uncomputable, using his simplified derivation of a Turing machine based on the work of Emil Post. Both theses are proven equivalent by use of "Theorem XXX". Kleene, finally, uses for the first time the term the "Church-Turing thesis" in a section in which he helps to give clarifications to concepts in Alan Turing's paper "The Word Problem in Semi-Groups with Cancellation", as demanded in a critique from William Boone. Later developments An attempt to better understand the notion of "effective computability" led Robin Gandy (Turing's student and friend) in 1980 to analyze machine computation (as opposed to human-computation acted out by a Turing machine). Gandy's curiosity about, and analysis of, cellular automata (including Conway's game of life), parallelism, and crystalline automata, led him to propose four "principles (or constraints) ... which it is argued, any machine must satisfy". His most-important fourth, "the principle of causality" is based on the "finite velocity of propagation of effects and signals; contemporary physics rejects the possibility of instantaneous action at a distance". From these principles and some additional constraints—(1a) a lower bound on the linear dimensions of any of the parts, (1b) an upper bound on speed of propagation (the velocity of light), (2) discrete progress of the machine, and (3) deterministic behavior—he produces a theorem that "What can be calculated by a device satisfying principles I–IV is computable." In the late 1990s Wilfried Sieg analyzed Turing's and Gandy's notions of "effective calculability" with the intent of "sharpening the informal notion, formulating its general features axiomatically, and investigating the axiomatic framework". In his 1997 and 2002 work Sieg presents a series of constraints on the behavior of a computor—"a human computing agent who proceeds mechanically". These constraints reduce to: "(B.1) (Boundedness) There is a fixed bound on the number of symbolic configurations a computor can immediately recognize. "(B.2) (Boundedness) There is a fixed bound on the number of internal states a computor can be in. "(L.1) (Locality) A computor can change only elements of an observed symbolic configuration. "(L.2) (Locality) A computor can shift attention from one symbolic configuration to another one, but the new observed configurations must be within a bounded distance of the immediately previously observed configuration. "(D) (Determinacy) The immediately recognizable (sub-)configuration determines uniquely the next computation step (and id [instantaneous description])"; stated another way: "A computor's internal state together with the observed configuration fixes uniquely the next computation step and the next internal state." The matter remains in active discussion within the academic community. The thesis as a definition The thesis can be viewed as nothing but an ordinary mathematical definition. Comments by Gödel on the subject suggest this view, e.g. "the correct definition of mechanical computability was established beyond any doubt by Turing". The case for viewing the thesis as nothing more than a definition is made explicitly by Robert I. Soare, where it is also argued that Turing's definition of computability is no less likely to be correct than the epsilon-delta definition of a continuous function. 
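To make the notion of λ-computability sketched in the overview above concrete, the snippet below encodes Church numerals as ordinary Python lambdas, following the standard textbook encoding; it illustrates the idea of representing arithmetic purely by function application and is not meant to reproduce Church's original notation.

    # Church numerals: the number n is the function that applies f to x exactly n times.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    mul = lambda m: lambda n: lambda f: m(n(f))

    def to_int(n):
        # Decode by counting how many times the "successor" is applied to 0.
        return n(lambda k: k + 1)(0)

    two = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two)(three)))   # 5
    print(to_int(mul(two)(three)))   # 6

A function on the natural numbers is λ-computable in the sense described earlier when the corresponding transformation of Church numerals can be expressed by such a term.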
Success of the thesis Other formalisms (besides recursion, the λ-calculus, and the Turing machine) have been proposed for describing effective calculability/computability. Kleene (1952) adds to the list the functions "reckonable in the system S1" of Kurt Gödel 1936, and Emil Post's (1943, 1946) "canonical [also called normal] systems". In the 1950s Hao Wang and Martin Davis greatly simplified the one-tape Turing-machine model (see Post–Turing machine). Marvin Minsky expanded the model to two or more tapes and greatly simplified the tapes into "up-down counters", which Melzak and Lambek further evolved into what is now known as the counter machine model. In the late 1960s and early 1970s researchers expanded the counter machine model into the register machine, a close cousin to the modern notion of the computer. Other models include combinatory logic and Markov algorithms. Gurevich adds the pointer machine model of Kolmogorov and Uspensky (1953, 1958): "... they just wanted to ... convince themselves that there is no way to extend the notion of computable function." All these contributions involve proofs that the models are computationally equivalent to the Turing machine; such models are said to be Turing complete. Because all these different attempts at formalizing the concept of "effective calculability/computability" have yielded equivalent results, it is now generally assumed that the Church–Turing thesis is correct. In fact, Gödel (1936) proposed something stronger than this; he observed that there was something "absolute" about the concept of "reckonable in S1": Informal usage in proofs Proofs in computability theory often invoke the Church–Turing thesis in an informal way to establish the computability of functions while avoiding the (often very long) details which would be involved in a rigorous, formal proof. To establish that a function is computable by Turing machine, it is usually considered sufficient to give an informal English description of how the function can be effectively computed, and then conclude "by the Church–Turing thesis" that the function is Turing computable (equivalently, partial recursive). Dirk van Dalen gives the following example for the sake of illustrating this informal use of the Church–Turing thesis: In order to make the above example completely rigorous, one would have to carefully construct a Turing machine, or λ-function, or carefully invoke recursion axioms, or at best, cleverly invoke various theorems of computability theory. But because the computability theorist believes that Turing computability correctly captures what can be computed effectively, and because an effective procedure is spelled out in English for deciding the set B, the computability theorist accepts this as proof that the set is indeed recursive. Variations The success of the Church–Turing thesis prompted variations of the thesis to be proposed. For example, the physical Church–Turing thesis states: "All physically computable functions are Turing-computable." The Church–Turing thesis says nothing about the efficiency with which one model of computation can simulate another. It has been proved for instance that a (multi-tape) universal Turing machine only suffers a logarithmic slowdown factor in simulating any Turing machine. A variation of the Church–Turing thesis addresses whether an arbitrary but "reasonable" model of computation can be efficiently simulated. 
This is called the feasibility thesis, also known as the (classical) complexity-theoretic Church–Turing thesis or the extended Church–Turing thesis, which is not due to Church or Turing, but rather was realized gradually in the development of complexity theory. It states: "A probabilistic Turing machine can efficiently simulate any realistic model of computation." The word 'efficiently' here means up to polynomial-time reductions. This thesis was originally called computational complexity-theoretic Church–Turing thesis by Ethan Bernstein and Umesh Vazirani (1997). The complexity-theoretic Church–Turing thesis, then, posits that all 'reasonable' models of computation yield the same class of problems that can be computed in polynomial time. Assuming the conjecture that probabilistic polynomial time (BPP) equals deterministic polynomial time (P), the word 'probabilistic' is optional in the complexity-theoretic Church–Turing thesis. A similar thesis, called the invariance thesis, was introduced by Cees F. Slot and Peter van Emde Boas. It states: Reasonable' machines can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space." The thesis originally appeared in a paper at STOC'84, which was the first paper to show that polynomial-time overhead and constant-space overhead could be simultaneously achieved for a simulation of a Random Access Machine on a Turing machine. If BQP is shown to be a strict superset of BPP, it would invalidate the complexity-theoretic Church–Turing thesis. In other words, there would be efficient quantum algorithms that perform tasks that do not have efficient probabilistic algorithms. This would not however invalidate the original Church–Turing thesis, since a quantum computer can always be simulated by a Turing machine, but it would invalidate the classical complexity-theoretic Church–Turing thesis for efficiency reasons. Consequently, the quantum complexity-theoretic Church–Turing thesis states: "A quantum Turing machine can efficiently simulate any realistic model of computation." Eugene Eberbach and Peter Wegner claim that the Church–Turing thesis is sometimes interpreted too broadly, stating "Though [...] Turing machines express the behavior of algorithms, the broader assertion that algorithms precisely capture what can be computed is invalid". They claim that forms of computation not captured by the thesis are relevant today, terms which they call super-Turing computation. Philosophical implications Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind. B. Jack Copeland states that it is an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, he states that it is an open empirical question whether any such processes are involved in the working of the human brain. There are also some important open questions which cover the relationship between the Church–Turing thesis and physics, and the possibility of hypercomputation. When applied to physics, the thesis has several possible meanings: The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has been termed the strong Church–Turing thesis, or Church–Turing–Deutsch principle, and is a foundation of digital physics. 
The universe is not equivalent to a Turing machine (i.e., the laws of physics are not Turing-computable), but incomputable physical events are not "harnessable" for the construction of a hypercomputer. For example, a universe in which physics involves random real numbers, as opposed to computable reals, would fall into this category. The universe is a hypercomputer, and it is possible to build physical devices to harness this property and calculate non-recursive functions. For example, it is an open question whether all quantum mechanical events are Turing-computable, although it is known that rigorous models such as quantum Turing machines are equivalent to deterministic Turing machines. (They are not necessarily efficiently equivalent; see above.) John Lucas and Roger Penrose have suggested that the human mind might be the result of some kind of quantum-mechanically enhanced, "non-algorithmic" computation. There are many other technical possibilities which fall outside or between these three categories, but these serve to illustrate the range of the concept. Philosophical aspects of the thesis, regarding both physical and biological computers, are also discussed in Odifreddi's 1989 textbook on recursion theory. Non-computable functions One can formally define functions that are not computable. A well-known example of such a function is the Busy Beaver function. This function takes an input n and returns the largest number of symbols that a Turing machine with n states can print before halting, when run with no input. Finding an upper bound on the busy beaver function is equivalent to solving the halting problem, a problem known to be unsolvable by Turing machines. Since the busy beaver function cannot be computed by Turing machines, the Church–Turing thesis states that this function cannot be effectively computed by any method. Several computational models allow for the computation of (Church–Turing) non-computable functions. These are known as hypercomputers. Mark Burgin argues that super-recursive algorithms such as inductive Turing machines disprove the Church–Turing thesis. His argument relies on a definition of algorithm broader than the ordinary one, so that non-computable functions obtained from some inductive Turing machines are called computable. This interpretation of the Church–Turing thesis differs from the interpretation commonly accepted in computability theory, discussed above. The argument that super-recursive algorithms are indeed algorithms in the sense of the Church–Turing thesis has not found broad acceptance within the computability research community. See also Abstract machine Church's thesis in constructive mathematics Church–Turing–Deutsch principle, which states that every physical process can be simulated by a universal computing device Computability logic Computability theory Decidability Hypercomputation Model of computation Oracle (computer science) Super-recursive algorithm Turing completeness Footnotes References Includes original papers by Gödel, Church, Turing, Rosser, Kleene, and Post mentioned in this section. Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis and name it "Church's Thesis" (i.e., the Church thesis). External links —a comprehensive philosophical treatment of relevant issues. A special issue (Vol. 28, No. 
4, 1987) of the Notre Dame Journal of Formal Logic was devoted to the Church–Turing thesis. Computability theory Alan Turing Theory of computation Philosophy of computer science
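To make the reduction mentioned in the Non-computable functions section concrete, here is a minimal sketch (not part of the article) in Python. It uses the maximum-shifts variant of the busy beaver idea, i.e. a bound on the number of steps a halting n-state machine can take rather than on the symbols it prints, and the bound function passed in is necessarily hypothetical: by the argument above, no such computable function can exist.

```python
def run_turing_machine(delta, start, halt_state, max_steps):
    """Simulate a Turing machine on a blank tape for at most max_steps steps.

    delta maps (state, symbol) -> (symbol_to_write, "L" or "R", next_state).
    Returns True if the machine reached halt_state within the budget."""
    tape, head, state = {}, 0, start
    for _ in range(max_steps):
        if state == halt_state:
            return True
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
    return state == halt_state

def halts_on_blank_tape(delta, start, halt_state, n_states, step_bound):
    """Decide blank-tape halting, given a (hypothetical) computable upper bound
    step_bound(n) on how many steps any halting n-state machine can run."""
    budget = step_bound(n_states) + 1
    return run_turing_machine(delta, start, halt_state, budget)
```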
Church–Turing thesis
[ "Mathematics", "Technology" ]
5,222
[ "Computability theory", "Philosophy of computer science", "Mathematical logic", "Computer science" ]
6,857
https://en.wikipedia.org/wiki/Computer%20multitasking
In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time. New tasks can interrupt already started ones before they finish, instead of waiting for them to end. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory. Multitasking automatically interrupts the running program, saving its state (partial results, memory contents and computer register contents) and loading the saved state of another program and transferring control to it. This "context switch" may be initiated at fixed time intervals (pre-emptive multitasking), or the running program may be coded to signal to the supervisory software when it can be interrupted (cooperative multitasking). Multitasking does not require parallel execution of multiple tasks at exactly the same time; instead, it allows more than one task to advance over a given period of time. Even on multiprocessor computers, multitasking allows many more tasks to be run than there are CPUs. Multitasking is a common feature of computer operating systems since at least the 1960s. It allows more efficient use of the computer hardware; when a program is waiting for some external event such as a user input or an input/output transfer with a peripheral to complete, the central processor can still be used with another program. In a time-sharing system, multiple human operators use the same processor as if it was dedicated to their use, while behind the scenes the computer is serving many users by multitasking their individual programs. In multiprogramming systems, a task runs until it must wait for an external event or until the operating system's scheduler forcibly swaps the running task out of the CPU. Real-time systems such as those designed to control industrial robots, require timely processing; a single processor might be shared between calculations of machine movement, communications, and user interface. Often multitasking operating systems include measures to change the priority of individual tasks, so that important jobs receive more processor time than those considered less significant. Depending on the operating system, a task might be as large as an entire application program, or might be made up of smaller threads that carry out portions of the overall program. A processor intended for use with multitasking operating systems may include special hardware to securely support multiple tasks, such as memory protection, and protection rings that ensure the supervisory software cannot be damaged or subverted by user-mode program errors. The term "multitasking" has become an international term, as the same word is used in many other languages such as German, Italian, Dutch, Romanian, Czech, Danish and Norwegian. Multiprogramming In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the central processing unit (CPU) would have to stop executing program instructions while the peripheral processed the data. This was usually very inefficient. Multiprogramming is a computing technique that enables multiple programs to be concurrently loaded and executed into a computer's memory, allowing the CPU to switch between them swiftly. 
This optimizes CPU utilization by keeping it engaged with the execution of tasks, particularly useful when one program is waiting for I/O operations to complete. The Bull Gamma 60, initially designed in 1957 and first released in 1960, was the first computer designed with multiprogramming in mind. Its architecture featured a central memory and a Program Distributor feeding up to twenty-five autonomous processing units with code and data, and allowing concurrent operation of multiple clusters. Another such computer was the LEO III, first released in 1961. During batch processing, several different programs were loaded in the computer memory, and the first one began to run. When the first program reached an instruction waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running. The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, nonexistent. Multiprogramming gives no guarantee that a program will run in a timely manner. Indeed, the first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed. Cooperative multitasking Early multitasking systems used applications that voluntarily ceded time to one another. This approach, which was eventually supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the only scheduling scheme employed by Microsoft Windows and classic Mac OS to enable multiple applications to run simultaneously. Cooperative multitasking is still used today on RISC OS systems. As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that makes the entire environment unacceptably fragile. Preemptive multitasking Preemptive multitasking allows the computer system to more reliably guarantee to each process a regular "slice" of operating time. It also allows the system to deal rapidly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of these hardware capabilities and run multiple processes preemptively. Preemptive multitasking was implemented in the PDP-6 Monitor and Multics in 1964, in OS/360 MFT in 1967, and in Unix in 1969, and was available in some operating systems for computers as small as DEC's PDP-8; it is a core feature of all Unix-like operating systems, such as Linux, Solaris and BSD with its derivatives, as well as modern versions of Windows. 
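A minimal sketch (not from the article) of the cooperative model described above, using Python generators as tasks: each task runs until it voluntarily yields, and a task that never yielded would starve the others, which is exactly the fragility noted in the text.

```python
def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # voluntarily give up the processor

def cooperative_scheduler(tasks):
    """Round-robin over generator-based tasks until all of them have finished."""
    queue = list(tasks)
    while queue:
        current = queue.pop(0)
        try:
            next(current)          # run the task until its next yield
            queue.append(current)  # re-queue it for another turn
        except StopIteration:
            pass                   # task finished; drop it

cooperative_scheduler([task("A", 3), task("B", 2)])
```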
At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In primitive systems, the software would often "poll", or "busywait" while waiting for requested input (such as disk, keyboard or network input). During this time, the system was not performing useful work. With the advent of interrupts and preemptive multitasking, I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution. Possibly the earliest preemptive multitasking OS available to home users was Microware's OS-9, available for computers based on the Motorola 6809 such as the TRS-80 Color Computer 2, with the operating system supplied by Tandy as an upgrade for disk-equipped systems. Sinclair QDOS on the Sinclair QL followed in 1984, but it was not a big success. Commodore's Amiga was released the following year, offering a combination of multitasking and multimedia capabilities. Microsoft made preemptive multitasking a core feature of their flagship operating system in the early 1990s when developing Windows NT 3.1 and then Windows 95. In 1988 Apple offered A/UX as a UNIX System V-based alternative to the Classic Mac OS. In 2001 Apple switched to the NeXTSTEP-influenced Mac OS X. A similar model is used in Windows 9x and the Windows NT family, where native 32-bit applications are multitasked preemptively. 64-bit editions of Windows, both for the x86-64 and Itanium architectures, no longer support legacy 16-bit applications, and thus provide preemptive multitasking for all supported applications. Real time Another reason for multitasking was in the design of real-time computing systems, where there are a number of possibly unrelated external activities needed to be controlled by a single processor system. In such systems a hierarchical interrupt system is coupled with process prioritization to ensure that key activities were given a greater share of available process time. Multithreading As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e. g., one process gathering input data, one process processing input data, one process writing out results on disk). This, however, required some tools to allow processes to efficiently exchange data. Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are effectively processes that run in the same memory context and share other resources with their parent processes, such as open files. Threads are described as lightweight processes because switching between threads does not involve changing the memory context. While threads are scheduled preemptively, some operating systems provide a variant to threads, named fibers, that are scheduled cooperatively. On operating systems that do not provide fibers, an application may implement its own fibers using repeated calls to worker functions. Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors. Some systems directly support multithreading in hardware. 
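As a small illustration of threads sharing their parent process's memory, here is a sketch (not from the article) using Python's standard threading module: two threads update the same counter, and a lock serializes access to the shared variable so the result stays consistent.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:                 # serialize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                     # 200000 -- both threads saw the same memory
```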
Memory protection Essential to any multitasking system is to safely and effectively share access to system resources. Access to memory must be strictly managed to ensure that no process can inadvertently or deliberately read or write to memory locations outside the process's address space. This is done for the purpose of general system stability and data integrity, as well as data security. In general, memory access management is a responsibility of the operating system kernel, in combination with hardware mechanisms that provide supporting functionalities, such as a memory management unit (MMU). If a process attempts to access a memory location outside its memory space, the MMU denies the request and signals the kernel to take appropriate actions; this usually results in forcibly terminating the offending process. Depending on the software and kernel design and the specific error in question, the user may receive an access violation error message such as "segmentation fault". In a well designed and correctly implemented multitasking system, a given process can never directly access memory that belongs to another process. An exception to this rule is in the case of shared memory; for example, in the System V inter-process communication mechanism the kernel allocates memory to be mutually shared by multiple processes. Such features are often used by database management software such as PostgreSQL. Inadequate memory protection mechanisms, either due to flaws in their design or poor implementations, allow for security vulnerabilities that may be potentially exploited by malicious software. Memory swapping Use of a swap file or swap partition is a way for the operating system to provide more memory than is physically available by keeping portions of the primary memory in secondary storage. While multitasking and memory swapping are two completely unrelated techniques, they are very often used together, as swapping memory allows more tasks to be loaded at the same time. Typically, a multitasking system allows another process to run when the running process hits a point where it has to wait for some portion of memory to be reloaded from secondary storage. Programming Various concurrent computing techniques are used to avoid potential problems caused by multiple tasks attempting to access the same resource. Bigger systems were sometimes built with a central processor(s) and some number of I/O processors, a kind of asymmetric multiprocessing. Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities. See also Process state Task switching References Concurrent computing Operating system technology
Computer multitasking
[ "Technology" ]
2,541
[ "Computing platforms", "Concurrent computing", "IT infrastructure" ]
6,863
https://en.wikipedia.org/wiki/Compression%20ratio
The compression ratio is the ratio between the maximum and minimum volume during the compression stage of the power cycle in a piston or Wankel engine. A fundamental specification for such engines, it can be measured in two different ways. The simpler way is the static compression ratio: in a reciprocating engine, this is the ratio of the volume of the cylinder when the piston is at the bottom of its stroke to that volume when the piston is at the top of its stroke. The dynamic compression ratio is a more advanced calculation which also takes into account gases entering and exiting the cylinder during the compression phase. Effect and typical ratios A high compression ratio is desirable because it allows an engine to extract more mechanical energy from a given mass of air–fuel mixture due to its higher thermal efficiency. This occurs because internal combustion engines are heat engines, and higher compression ratios permit the same combustion temperature to be reached with less fuel, while giving a longer expansion cycle, creating more mechanical power output and lowering the exhaust temperature. Petrol engines In petrol (gasoline) engines used in passenger cars for the past 20 years, compression ratios have typically been between 8:1 and 12:1. Several production engines have used higher compression ratios, including: Cars built from 1955 to 1972 which were designed for high-octane leaded gasoline, which allowed compression ratios up to 13:1. Some Mazda SkyActiv engines released since 2012 have compression ratios up to 16:1. The SkyActiv engine achieves this compression ratio with ordinary unleaded gasoline (95 RON in the United Kingdom) through improved scavenging of exhaust gases (which ensures cylinder temperature is as low as possible before the intake stroke), in addition to direct injection. Toyota Dynamic Force engine has a compression ratio up to 14:1. The 2014 Ferrari 458 Speciale also has a compression ratio of 14:1. When forced induction (e.g. a turbocharger or supercharger) is used, the compression ratio is often lower than naturally aspirated engines. This is due to the turbocharger or supercharger already having compressed the air before it enters the cylinders. Engines using port fuel-injection typically run lower boost pressures and/or compression ratios than direct injected engines because port fuel injection causes the air–fuel mixture to be heated together, leading to detonation. Conversely, directly injected engines can run higher boost because heated air will not detonate without a fuel being present. Higher compression ratios can make gasoline (petrol) engines subject to engine knocking (also known as "detonation", "pre-ignition", or "pinging") if lower octane-rated fuel is used. This can reduce efficiency or damage the engine if knock sensors are not present to modify the ignition timing. Diesel engines Diesel engines use higher compression ratios than petrol engines, because the lack of a spark plug means that the compression ratio must increase the temperature of the air in the cylinder sufficiently to ignite the diesel using compression ignition. Compression ratios are often between 14:1 and 23:1 for direct injection diesel engines, and between 18:1 and 23:1 for indirect injection diesel engines. At the lower end of 14:1, NOx emissions are reduced at a cost of more difficult cold-start. Mazda's Skyactiv-D, the first such commercial engine from 2013, used adaptive fuel injectors among other techniques to ease cold start. 
Other fuels The compression ratio may be higher in engines running exclusively on liquefied petroleum gas (LPG or "propane autogas") or compressed natural gas, due to the higher octane rating of these fuels. Kerosene engines typically use a compression ratio of 6.5 or lower. The petrol-paraffin engine version of the Ferguson TE20 tractor had a compression ratio of 4.5:1 for operation on tractor vaporising oil with an octane rating between 55 and 70. Motorsport engines Motorsport engines often run on high-octane petrol and can therefore use higher compression ratios. For example, motorcycle racing engines can use compression ratios as high as 14.7:1, and it is common to find motorcycles with compression ratios above 12.0:1 designed for 95 or higher octane fuel. Ethanol and methanol can take significantly higher compression ratios than gasoline. Racing engines burning methanol and ethanol fuel often have a compression ratio of 14:1 to 16:1. Mathematical formula In a reciprocating engine, the static compression ratio ($CR$) is the ratio between the volume of the cylinder and combustion chamber when the piston is at the bottom of its stroke, and the volume of the combustion chamber when the piston is at the top of its stroke. It is therefore calculated by the formula $CR = \frac{V_d + V_c}{V_c}$, where $V_d$ is the displacement volume. This is the volume inside the cylinder displaced by the piston from the beginning of the compression stroke to the end of the stroke. $V_c$ is the clearance volume. This is the volume of the space in the cylinder left at the end of the compression stroke. $V_d$ can be estimated by the cylinder volume formula $V_d = \frac{\pi}{4} b^2 s$, where $b$ is the cylinder bore (diameter) and $s$ is the piston stroke length. Because of the complex shape of $V_c$ it is usually measured directly. This is often done by filling the cylinder with liquid and then measuring the volume of the used liquid. Variable compression ratio engines Most engines use a fixed compression ratio; however, a variable compression ratio engine is able to adjust the compression ratio while the engine is in operation. The first production engine with a variable compression ratio was introduced in 2019. Variable compression ratio is a technology to adjust the compression ratio of an internal combustion engine while the engine is in operation. This is done to increase fuel efficiency while under varying loads. Variable compression engines allow the volume above the piston at top dead centre to be changed. Higher loads require lower ratios to increase power, while lower loads need higher ratios to increase efficiency, i.e. to lower fuel consumption. For automotive use this needs to be done as the engine is running in response to the load and driving demands. The 2019 Infiniti QX50 is the first commercially available car that uses a variable compression ratio engine. Dynamic compression ratio The static compression ratio discussed above — calculated solely based on the cylinder and combustion chamber volumes — does not take into account any gases entering or exiting the cylinder during the compression phase. In most automotive engines, the intake valve closure (which seals the cylinder) takes place during the compression phase (i.e. after bottom dead centre, BDC), which can cause some of the gases to be pushed back out through the intake valve. On the other hand, intake port tuning and scavenging can cause a greater amount of gas to be trapped in the cylinder than the static volume would suggest. The dynamic compression ratio accounts for these factors. 
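As a numerical illustration of the static compression ratio formula above, a short sketch (not part of the article; the bore, stroke, and clearance volume are made-up example figures):

```python
import math

def swept_volume_cc(bore_mm, stroke_mm):
    """V_d = (pi/4) * bore^2 * stroke, converted from mm^3 to cc."""
    return math.pi / 4 * bore_mm**2 * stroke_mm / 1000.0

def static_compression_ratio(bore_mm, stroke_mm, clearance_cc):
    v_d = swept_volume_cc(bore_mm, stroke_mm)
    return (v_d + clearance_cc) / clearance_cc

# e.g. an 86 mm x 86 mm cylinder with 50 cc of clearance volume:
print(round(static_compression_ratio(86, 86, 50), 1))  # roughly 11.0, i.e. about 11:1
```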
The dynamic compression ratio is higher with more conservative intake camshaft timing (i.e. soon after BDC), and lower with more radical intake camshaft timing (i.e. later after BDC). Regardless, the dynamic compression ratio is always lower than the static compression ratio. Absolute cylinder pressure is used to calculate the dynamic compression ratio, using the following formula: $CR_{dynamic} = \left(\frac{P_{cylinder}}{P_{atmospheric}}\right)^{1/\gamma}$, where $\gamma$ is a polytropic value for the ratio of specific heats for the combustion gases at the temperatures present (this compensates for the temperature rise caused by compression, as well as heat lost to the cylinder). Under ideal (adiabatic) conditions, the ratio of specific heats would be 1.4, but a lower value, generally between 1.2 and 1.3, is used, since the amount of heat lost will vary among engines based on design, size and materials used. For example, if the static compression ratio is 10:1, and the dynamic compression ratio is 7.5:1, a useful value for cylinder pressure would be $7.5^{1.3}$ × atmospheric pressure, or 13.7 bar (relative to atmospheric pressure). The two corrections for dynamic compression ratio affect cylinder pressure in opposite directions, but not in equal strength. An engine with high static compression ratio and late intake valve closure will have a dynamic compression ratio similar to an engine with lower compression but earlier intake valve closure. See also Mean effective pressure References Engine technology Engineering ratios Piston engines
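The worked example above can be checked with a one-line calculation; this sketch (not part of the article) treats end-of-compression cylinder pressure as the dynamic compression ratio raised to the polytropic exponent, in multiples of atmospheric pressure:

```python
def cylinder_pressure_atm(dynamic_cr, gamma=1.3):
    """Estimated pressure at the end of compression, in multiples of atmospheric pressure."""
    return dynamic_cr ** gamma

print(round(cylinder_pressure_atm(7.5), 1))  # about 13.7, matching the example in the text
```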
Compression ratio
[ "Mathematics", "Technology", "Engineering" ]
1,660
[ "Metrics", "Engines", "Engineering ratios", "Piston engines", "Quantity", "Engine technology" ]
6,867
https://en.wikipedia.org/wiki/Context-free%20language
In formal language theory, a context-free language (CFL), also called a Chomsky type-2 language, is a language generated by a context-free grammar (CFG). Context-free languages have many applications in programming languages; in particular, most arithmetic expressions are generated by context-free grammars. Background Context-free grammar Different context-free grammars can generate the same context-free language. Intrinsic properties of the language can be distinguished from extrinsic properties of a particular grammar by comparing multiple grammars that describe the language. Automata The set of all context-free languages is identical to the set of languages accepted by pushdown automata, which makes these languages amenable to parsing. Further, for a given CFG, there is a direct way to produce a pushdown automaton for the grammar (and thereby the corresponding language), though going the other way (producing a grammar given an automaton) is not as direct. Examples An example context-free language is $L = \{a^n b^n : n \ge 1\}$, the language of all non-empty even-length strings, the entire first halves of which are $a$'s, and the entire second halves of which are $b$'s. $L$ is generated by the grammar $S \to aSb \mid ab$. This language is not regular. It is accepted by a pushdown automaton that pushes a symbol onto its stack for each $a$ it reads, pops one symbol for each $b$, and accepts exactly when the input is exhausted with an empty stack. Unambiguous CFLs are a proper subset of all CFLs: there are inherently ambiguous CFLs. An example of an inherently ambiguous CFL is the union of $\{a^n b^n c^m d^m : n, m \ge 1\}$ with $\{a^n b^m c^m d^n : n, m \ge 1\}$. This set is context-free, since the union of two context-free languages is always context-free. But there is no way to unambiguously parse strings in the (non-context-free) subset $\{a^n b^n c^n d^n : n \ge 1\}$ which is the intersection of these two languages. Dyck language The language of all properly matched parentheses is generated by the grammar $S \to SS \mid (S) \mid \varepsilon$. Properties Context-free parsing The context-free nature of the language makes it simple to parse with a pushdown automaton. Determining an instance of the membership problem; i.e. given a string $w$, determine whether $w \in L(G)$, where $L(G)$ is the language generated by a given grammar $G$; is also known as recognition. Context-free recognition for Chomsky normal form grammars was shown by Leslie G. Valiant to be reducible to Boolean matrix multiplication, thus inheriting its complexity upper bound of $O(n^{2.3728596})$. Conversely, Lillian Lee has shown $O(n^{3-\varepsilon})$ Boolean matrix multiplication to be reducible to $O(n^{3-3\varepsilon})$ CFG parsing, thus establishing some kind of lower bound for the latter. Practical uses of context-free languages also require producing a derivation tree that exhibits the structure that the grammar associates with the given string. The process of producing this tree is called parsing. Known parsers have a time complexity that is cubic in the size of the string that is parsed. Formally, the set of all context-free languages is identical to the set of languages accepted by pushdown automata (PDA). Parser algorithms for context-free languages include the CYK algorithm and Earley's Algorithm. A special subclass of context-free languages is the deterministic context-free languages, which are defined as the set of languages accepted by a deterministic pushdown automaton and can be parsed by an LR(k) parser. See also parsing expression grammar as an alternative approach to grammar and parser. Closure properties The class of context-free languages is closed under the following operations. 
That is, if L and P are context-free languages, the following languages are context-free as well: the union of L and P, the reversal of L, the concatenation of L and P, the Kleene star of L, the image of L under a homomorphism, the image of L under an inverse homomorphism, the circular shift of L (the language $\{vu : uv \in L\}$), the prefix closure of L (the set of all prefixes of strings from L), and the quotient L/R of L by a regular language R. Nonclosure under intersection, complement, and difference The context-free languages are not closed under intersection. This can be seen by taking the languages $A = \{a^n b^n c^m : m, n \ge 0\}$ and $B = \{a^m b^n c^n : m, n \ge 0\}$, which are both context-free. Their intersection is $A \cap B = \{a^n b^n c^n : n \ge 0\}$, which can be shown to be non-context-free by the pumping lemma for context-free languages. As a consequence, context-free languages cannot be closed under complementation, as for any languages A and B, their intersection can be expressed by union and complement: $A \cap B = \overline{\overline{A} \cup \overline{B}}$. In particular, context-free languages cannot be closed under difference, since complement can be expressed by difference: $\overline{A} = \Sigma^* \setminus A$. However, if L is a context-free language and D is a regular language then both their intersection and their difference are context-free languages. Decidability In formal language theory, questions about regular languages are usually decidable, but ones about context-free languages are often not. It is decidable whether such a language is finite, but not whether it contains every possible string, is regular, is unambiguous, or is equivalent to a language with a different grammar. The following problems are undecidable for arbitrarily given context-free grammars A and B: Equivalence: is $L(A) = L(B)$? Disjointness: is $L(A) \cap L(B) = \emptyset$? However, the intersection of a context-free language and a regular language is context-free, hence the variant of the problem where B is a regular grammar is decidable (see "Emptiness" below). Containment: is $L(A) \subseteq L(B)$? Again, the variant of the problem where B is a regular grammar is decidable, while that where A is regular is generally not. Universality: is $L(A) = \Sigma^*$? Regularity: is $L(A)$ a regular language? Ambiguity: is every grammar for $L(A)$ ambiguous? The following problems are decidable for arbitrary context-free languages: Emptiness: Given a context-free grammar A, is $L(A) = \emptyset$? Finiteness: Given a context-free grammar A, is $L(A)$ finite? Membership: Given a context-free grammar G, and a word $w$, does $w \in L(G)$? Efficient polynomial-time algorithms for the membership problem are the CYK algorithm and Earley's Algorithm. According to Hopcroft, Motwani, Ullman (2003), many of the fundamental closure and (un)decidability properties of context-free languages were shown in the 1961 paper of Bar-Hillel, Perles, and Shamir. Languages that are not context-free The set $\{a^n b^n c^n : n \ge 1\}$ is a context-sensitive language, but there does not exist a context-free grammar generating this language. So there exist context-sensitive languages which are not context-free. To prove that a given language is not context-free, one may employ the pumping lemma for context-free languages or a number of other methods, such as Ogden's lemma or Parikh's theorem. Notes References Works cited Further reading Formal languages Syntax
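To make the membership (recognition) problem discussed under Context-free parsing concrete, here is a minimal sketch (not from the article) of a CYK recognizer in Python. The grammar is assumed to be in Chomsky normal form; the toy grammar below generates the a^n b^n language from the Examples section, and the running time is cubic in the length of the input string, matching the bound quoted above.

```python
def cyk_recognize(word, binary_rules, terminal_rules, start="S"):
    """Return True if `word` is derivable from `start` in a CNF grammar."""
    n = len(word)
    if n == 0:
        return False  # the CNF grammars used here do not derive the empty string
    # table[i][j] holds the variables deriving word[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = {lhs for lhs, t in terminal_rules if t == ch}
    for length in range(2, n + 1):            # length of the span
        for i in range(n - length + 1):       # start of the span
            for split in range(1, length):    # split into two smaller spans
                left = table[i][split - 1]
                right = table[i + split][length - split - 1]
                for lhs, (b, c) in binary_rules:
                    if b in left and c in right:
                        table[i][length - 1].add(lhs)
    return start in table[0][n - 1]

# CNF grammar for the a^n b^n example: S -> AB | AT, T -> SB, A -> a, B -> b
binary_rules = [("S", ("A", "B")), ("S", ("A", "T")), ("T", ("S", "B"))]
terminal_rules = [("A", "a"), ("B", "b")]
print(cyk_recognize("aabb", binary_rules, terminal_rules))  # True
print(cyk_recognize("aab", binary_rules, terminal_rules))   # False
```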
Context-free language
[ "Mathematics" ]
1,408
[ "Formal languages", "Mathematical logic" ]
6,868
https://en.wikipedia.org/wiki/Caffeine
Caffeine is a central nervous system (CNS) stimulant of the methylxanthine class and is the most commonly consumed psychoactive substance globally. It is mainly used for its eugeroic (wakefulness promoting), ergogenic (physical performance-enhancing), or nootropic (cognitive-enhancing) properties. Caffeine acts by blocking binding of adenosine at a number of adenosine receptor types, inhibiting the centrally depressant effects of adenosine and enhancing the release of acetylcholine. Caffeine has a three-dimensional structure similar to that of adenosine, which allows it to bind and block its receptors. Caffeine also increases cyclic AMP levels through nonselective inhibition of phosphodiesterase, increases calcium release from intracellular stores, and antagonizes GABA receptors, although these mechanisms typically occur at concentrations beyond usual human consumption. Caffeine is a bitter, white crystalline purine, a methylxanthine alkaloid, and is chemically related to the adenine and guanine bases of deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). It is found in the seeds, fruits, nuts, or leaves of a number of plants native to Africa, East Asia and South America and helps to protect them against herbivores and from competition by preventing the germination of nearby seeds, as well as encouraging consumption by select animals such as honey bees. The best-known source of caffeine is the coffee bean, the seed of the Coffea plant. People may drink beverages containing caffeine to relieve or prevent drowsiness and to improve cognitive performance. To make these drinks, caffeine is extracted by steeping the plant product in water, a process called infusion. Caffeine-containing drinks, such as coffee, tea, and cola, are consumed globally in high volumes. In 2020, almost 10 million tonnes of coffee beans were consumed globally. Caffeine is the world's most widely consumed psychoactive drug. Unlike most other psychoactive substances, caffeine remains largely unregulated and legal in nearly all parts of the world. Caffeine is also an outlier as its use is seen as socially acceptable in most cultures with it even being encouraged. Caffeine has both positive and negative health effects. It can treat and prevent the premature infant breathing disorders bronchopulmonary dysplasia of prematurity and apnea of prematurity. Caffeine citrate is on the WHO Model List of Essential Medicines. It may confer a modest protective effect against some diseases, including Parkinson's disease. Some people experience sleep disruption or anxiety if they consume caffeine, but others show little disturbance. Evidence of a risk during pregnancy is equivocal; some authorities recommend that pregnant women limit caffeine to the equivalent of two cups of coffee per day or less. Caffeine can produce a mild form of drug dependence – associated with withdrawal symptoms such as sleepiness, headache, and irritability – when an individual stops using caffeine after repeated daily intake. Tolerance to the autonomic effects of increased blood pressure and heart rate, and increased urine output, develops with chronic use (i.e., these symptoms become less pronounced or do not occur following consistent use). Caffeine is classified by the U.S. Food and Drug Administration (FDA) as generally recognized as safe. Toxic doses, over 10 grams per day for an adult, are much higher than the typical dose of under 500 milligrams per day. 
The European Food Safety Authority reported that up to 400 mg of caffeine per day (around 5.7 mg/kg of body mass per day) does not raise safety concerns for non-pregnant adults, while intakes up to 200 mg per day for pregnant and lactating women do not raise safety concerns for the fetus or the breast-fed infants. A cup of coffee contains 80–175 mg of caffeine, depending on what "bean" (seed) is used, how it is roasted, and how it is prepared (e.g., drip, percolation, or espresso). Thus it requires roughly 50–100 ordinary cups of coffee to reach the toxic dose. However, pure powdered caffeine, which is available as a dietary supplement, can be lethal in tablespoon-sized amounts. Uses Medical Caffeine is used for both prevention and treatment of bronchopulmonary dysplasia in premature infants. It may improve weight gain during therapy and reduce the incidence of cerebral palsy as well as reduce language and cognitive delay. On the other hand, subtle long-term side effects are possible. Caffeine is used as a primary treatment for apnea of prematurity, but not prevention. It is also used for orthostatic hypotension treatment. Some people use caffeine-containing beverages such as coffee or tea to try to treat their asthma. Evidence to support this practice is poor. It appears that caffeine in low doses improves airway function in people with asthma, increasing forced expiratory volume (FEV1) by 5% to 18% for up to four hours. The addition of caffeine (100–130 mg) to commonly prescribed pain relievers such as paracetamol or ibuprofen modestly improves the proportion of people who achieve pain relief. Consumption of caffeine after abdominal surgery shortens the time to recovery of normal bowel function and shortens length of hospital stay. Caffeine was formerly used as a second-line treatment for ADHD. It is considered less effective than methylphenidate or amphetamine but more so than placebo for children with ADHD. Children, adolescents, and adults with ADHD are more likely to consume caffeine, perhaps as a form of self-medication. Enhancing performance Cognitive performance Caffeine is a central nervous system stimulant that may reduce fatigue and drowsiness. At normal doses, caffeine has variable effects on learning and memory, but it generally improves reaction time, wakefulness, concentration, and motor coordination. The amount of caffeine needed to produce these effects varies from person to person, depending on body size and degree of tolerance. The desired effects arise approximately one hour after consumption, and the desired effects of a moderate dose usually subside after about three or four hours. Caffeine can delay or prevent sleep and improves task performance during sleep deprivation. Shift workers who use caffeine make fewer mistakes that could result from drowsiness. Caffeine in a dose dependent manner increases alertness in both fatigued and normal individuals. A systematic review and meta-analysis from 2014 found that concurrent caffeine and -theanine use has synergistic psychoactive effects that promote alertness, attention, and task switching; these effects are most pronounced during the first hour post-dose. Physical performance Caffeine is a proven ergogenic aid in humans. Caffeine improves athletic performance in aerobic (especially endurance sports) and anaerobic conditions. 
Moderate doses of caffeine (around 5 mg/kg) can improve sprint performance, cycling and running time trial performance, endurance (i.e., it delays the onset of muscle fatigue and central fatigue), and cycling power output. Caffeine increases basal metabolic rate in adults. Caffeine ingestion prior to aerobic exercise increases fat oxidation, particularly in persons with low physical fitness. Caffeine improves muscular strength and power, and may enhance muscular endurance. Caffeine also enhances performance on anaerobic tests. Caffeine consumption before constant load exercise is associated with reduced perceived exertion. While this effect is not present during exercise-to-exhaustion exercise, performance is significantly enhanced. This is congruent with caffeine reducing perceived exertion, because exercise-to-exhaustion should end at the same point of fatigue. Caffeine also improves power output and reduces time to completion in aerobic time trials, an effect positively (but not exclusively) associated with longer duration exercise. Specific populations Adults For the general population of healthy adults, Health Canada advises a daily intake of no more than 400 mg. This limit was found to be safe by a 2017 systematic review on caffeine toxicology. Children In healthy children, moderate caffeine intake under 400 mg produces effects that are "modest and typically innocuous". As early as six months old, infants can metabolize caffeine at the same rate as that of adults. Higher doses of caffeine (>400 mg) can cause physiological, psychological and behavioral harm, particularly for children with psychiatric or cardiac conditions. There is no evidence that coffee stunts a child's growth. The American Academy of Pediatrics recommends that caffeine consumption, particularly in the case of energy and sports drinks, is not appropriate for children and adolescents and should be avoided. This recommendation is based on a clinical report released by the American Academy of Pediatrics in 2011 with a review of 45 publications from 1994 to 2011 and includes inputs from various stakeholders (Pediatricians, Committee on nutrition, Canadian Pediatric Society, Centers for Disease Control & Prevention, Food and Drug Administration, Sports Medicine & Fitness committee, National Federations of High School Associations). For children age 12 and under, Health Canada recommends a maximum daily caffeine intake of no more than 2.5 milligrams per kilogram of body weight. Based on average body weights of children, this translates into specific age-based daily intake limits. Adolescents Health Canada has not developed advice for adolescents because of insufficient data. However, they suggest that daily caffeine intake for this age group be no more than 2.5 mg/kg body weight. This is because the maximum adult caffeine dose may not be appropriate for light-weight adolescents or for younger adolescents who are still growing. The daily dose of 2.5 mg/kg body weight would not cause adverse health effects in the majority of adolescent caffeine consumers. This is a conservative suggestion since older and heavier-weight adolescents may be able to consume adult doses of caffeine without experiencing adverse effects. Pregnancy and breastfeeding The metabolism of caffeine is reduced in pregnancy, especially in the third trimester, and the half-life of caffeine during pregnancy can be increased up to 15 hours (as compared to 2.5 to 4.5 hours in non-pregnant adults). 
Evidence regarding the effects of caffeine on pregnancy and for breastfeeding are inconclusive. There is limited primary and secondary advice for, or against, caffeine use during pregnancy and its effects on the fetus or newborn. The UK Food Standards Agency has recommended that pregnant women should limit their caffeine intake, out of prudence, to less than 200 mg of caffeine a day – the equivalent of two cups of instant coffee, or one and a half to two cups of fresh coffee. The American Congress of Obstetricians and Gynecologists (ACOG) concluded in 2010 that caffeine consumption is safe up to 200 mg per day in pregnant women. For women who breastfeed, are pregnant, or may become pregnant, Health Canada recommends a maximum daily caffeine intake of no more than 300 mg, or a little over two 8 oz (237 mL) cups of coffee. A 2017 systematic review on caffeine toxicology found evidence supporting that caffeine consumption up to 300 mg/day for pregnant women is generally not associated with adverse reproductive or developmental effect. There are conflicting reports in the scientific literature about caffeine use during pregnancy. A 2011 review found that caffeine during pregnancy does not appear to increase the risk of congenital malformations, miscarriage or growth retardation even when consumed in moderate to high amounts. Other reviews, however, concluded that there is some evidence that higher caffeine intake by pregnant women may be associated with a higher risk of giving birth to a low birth weight baby, and may be associated with a higher risk of pregnancy loss. A systematic review, analyzing the results of observational studies, suggests that women who consume large amounts of caffeine (greater than 300 mg/day) prior to becoming pregnant may have a higher risk of experiencing pregnancy loss. Adverse effects Physiological Caffeine in coffee and other caffeinated drinks can affect gastrointestinal motility and gastric acid secretion. In postmenopausal women, high caffeine consumption can accelerate bone loss. Caffeine, alongside other factors such as stress and fatigue, can also increase the pressure in various muscles, including the eyelids. Acute ingestion of caffeine in large doses (at least 250–300 mg, equivalent to the amount found in 2–3 cups of coffee or 5–8 cups of tea) results in a short-term stimulation of urine output in individuals who have been deprived of caffeine for a period of days or weeks. This increase is due to both a diuresis (increase in water excretion) and a natriuresis (increase in saline excretion); it is mediated via proximal tubular adenosine receptor blockade. The acute increase in urinary output may increase the risk of dehydration. However, chronic users of caffeine develop a tolerance to this effect and experience no increase in urinary output. Psychological Minor undesired symptoms from caffeine ingestion not sufficiently severe to warrant a psychiatric diagnosis are common and include mild anxiety, jitteriness, insomnia, increased sleep latency, and reduced coordination. Caffeine can have negative effects on anxiety disorders. According to a 2011 literature review, caffeine use may induce anxiety and panic disorders in people with Parkinson's disease. At high doses, typically greater than 300 mg, caffeine can both cause and worsen anxiety. For some people, discontinuing caffeine use can significantly reduce anxiety. In moderate doses, caffeine has been associated with reduced symptoms of depression and lower suicide risk. 
Two reviews indicate that increased consumption of coffee and caffeine may reduce the risk of depression. Some textbooks state that caffeine is a mild euphoriant, while others state that it is not a euphoriant. Caffeine-induced anxiety disorder is a subclass of the DSM-5 diagnosis of substance/medication-induced anxiety disorder. Reinforcement disorders Addiction Whether caffeine can result in an addictive disorder depends on how addiction is defined. Compulsive caffeine consumption under any circumstances has not been observed, and caffeine is therefore not generally considered addictive. However, some diagnostic models, such as the ICDM-9 and ICD-10, include a classification of caffeine addiction under a broader diagnostic model. Some state that certain users can become addicted and therefore unable to decrease use even though they know there are negative health effects. Caffeine does not appear to be a reinforcing stimulus, and some degree of aversion may actually occur, with people preferring placebo over caffeine in a study on drug abuse liability published in an NIDA research monograph. Some state that research does not provide support for an underlying biochemical mechanism for caffeine addiction. Other research states it can affect the reward system. "Caffeine addiction" was added to the ICDM-9 and ICD-10. However, its addition was contested with claims that this diagnostic model of caffeine addiction is not supported by evidence. The American Psychiatric Association's DSM-5 does not include the diagnosis of a caffeine addiction but proposes criteria for the disorder for more study. Dependence and withdrawal Withdrawal can cause mild to clinically significant distress or impairment in daily functioning. The frequency at which this occurs is self-reported at 11%, but in lab tests only half of the people who report withdrawal actually experience it, casting doubt on many claims of dependence; most reported cases of caffeine withdrawal were of moderate severity (13%). Moderate physical dependence and withdrawal symptoms may occur upon abstinence after daily intake of greater than 100 mg of caffeine, although these symptoms last no longer than a day. Some symptoms associated with psychological dependence may also occur during withdrawal. The diagnostic criteria for caffeine withdrawal require a previous prolonged daily use of caffeine. Following 24 hours of a marked reduction in consumption, a minimum of 3 of these signs or symptoms is required to meet withdrawal criteria: difficulty concentrating, depressed mood/irritability, flu-like symptoms, headache, and fatigue. Additionally, the signs and symptoms must disrupt important areas of functioning and are not associated with effects of another condition. The ICD-11 includes caffeine dependence as a distinct diagnostic category, which closely mirrors the DSM-5's proposed set of criteria for "caffeine-use disorder". Caffeine use disorder refers to dependence on caffeine characterized by failure to control caffeine consumption despite negative physiological consequences. The APA, which published the DSM-5, acknowledged that there was sufficient evidence in order to create a diagnostic model of caffeine dependence for the DSM-5, but they noted that the clinical significance of the disorder is unclear. Due to this inconclusive evidence on clinical significance, the DSM-5 classifies caffeine-use disorder as a "condition for further study". 
Tolerance to the effects of caffeine occurs for caffeine-induced elevations in blood pressure and the subjective feelings of nervousness. Sensitization, the process whereby effects become more prominent with use, may occur for positive effects such as feelings of alertness and wellbeing. Tolerance varies for daily, regular caffeine users and high caffeine users. High doses of caffeine (750 to 1200 mg/day spread throughout the day) have been shown to produce complete tolerance to some, but not all of the effects of caffeine. Doses as low as 100 mg/day, such as a cup of coffee or two to three servings of caffeinated soft-drink, may continue to cause sleep disruption, among other intolerances. Non-regular caffeine users have the least caffeine tolerance for sleep disruption. Some coffee drinkers develop tolerance to its undesired sleep-disrupting effects, but others apparently do not. Risk of other diseases A neuroprotective effect of caffeine against Alzheimer's disease and dementia is possible but the evidence is inconclusive. Caffeine may lessen the severity of acute mountain sickness if taken a few hours prior to attaining a high altitude. One meta analysis has found that caffeine consumption is associated with a reduced risk of type 2 diabetes. Regular caffeine consumption may reduce the risk of developing Parkinson's disease and may slow the progression of Parkinson's disease. Caffeine increases intraocular pressure in those with glaucoma but does not appear to affect normal individuals. The DSM-5 also includes other caffeine-induced disorders consisting of caffeine-induced anxiety disorder, caffeine-induced sleep disorder and unspecified caffeine-related disorders. The first two disorders are classified under "Anxiety Disorder" and "Sleep-Wake Disorder" because they share similar characteristics. Other disorders that present with significant distress and impairment of daily functioning that warrant clinical attention but do not meet the criteria to be diagnosed under any specific disorders are listed under "Unspecified Caffeine-Related Disorders". Energy crash Caffeine is reputed to cause a fall in energy several hours after drinking, but this is not well researched. Overdose Consumption of per day is associated with a condition known as caffeinism. Caffeinism usually combines caffeine dependency with a wide range of unpleasant symptoms including nervousness, irritability, restlessness, insomnia, headaches, and palpitations after caffeine use. Caffeine overdose can result in a state of central nervous system overstimulation known as caffeine intoxication, a clinically significant temporary condition that develops during, or shortly after, the consumption of caffeine. This syndrome typically occurs only after ingestion of large amounts of caffeine, well over the amounts found in typical caffeinated beverages and caffeine tablets (e.g., more than 400–500 mg at a time). According to the DSM-5, caffeine intoxication may be diagnosed if five (or more) of the following symptoms develop after recent consumption of caffeine: restlessness, nervousness, excitement, insomnia, flushed face, diuresis, gastrointestinal disturbance, muscle twitching, rambling flow of thought and speech, tachycardia or cardiac arrhythmia, periods of inexhaustibility, and psychomotor agitation. According to the International Classification of Diseases (ICD-11), cases of very high caffeine intake (e.g. 
> 5 g) may result in caffeine intoxication with symptoms including mania, depression, lapses in judgment, disorientation, disinhibition, delusions, hallucinations or psychosis, and rhabdomyolysis. Energy drinks High caffeine consumption in energy drinks (at least one liter or 320 mg of caffeine) was associated with short-term cardiovascular side effects including hypertension, prolonged QT interval, and heart palpitations. These cardiovascular side effects were not seen with smaller amounts of caffeine consumption in energy drinks (less than 200 mg). Severe intoxication there is no known antidote or reversal agent for caffeine intoxication. Treatment of mild caffeine intoxication is directed toward symptom relief; severe intoxication may require peritoneal dialysis, hemodialysis, or hemofiltration. Intralipid infusion therapy is indicated in cases of imminent risk of cardiac arrest in order to scavenge the free serum caffeine. Lethal dose Death from caffeine ingestion appears to be rare, and most commonly caused by an intentional overdose of medications. In 2016, 3702 caffeine-related exposures were reported to Poison Control Centers in the United States, of which 846 required treatment at a medical facility, and 16 had a major outcome; and several caffeine-related deaths are reported in case studies. The LD50 of caffeine in rats is 192 milligrams per kilogram of body mass. The fatal dose in humans is estimated to be 150–200 milligrams per kilogram, which is 10.5–14 grams for a typical adult, equivalent to about 75–100 cups of coffee. There are cases where doses as low as 57 milligrams per kilogram have been fatal. A number of fatalities have been caused by overdoses of readily available powdered caffeine supplements, for which the estimated lethal amount is less than a tablespoon. The lethal dose is lower in individuals whose ability to metabolize caffeine is impaired due to genetics or chronic liver disease. A death was reported in 2013 of a man with liver cirrhosis who overdosed on caffeinated mints. Interactions Caffeine is a substrate for CYP1A2, and interacts with many substances through this and other mechanisms. Alcohol According to DSST, alcohol causes a decrease in performance on their standardized tests, and caffeine causes a significant improvement. When alcohol and caffeine are consumed jointly, the effects of the caffeine are changed, but the alcohol effects remain the same. For example, consuming additional caffeine does not reduce the effect of alcohol. However, the jitteriness and alertness given by caffeine is decreased when additional alcohol is consumed. Alcohol consumption alone reduces both inhibitory and activational aspects of behavioral control. Caffeine antagonizes the effect of alcohol on the activational aspect of behavioral control, but has no effect on the inhibitory behavioral control. The Dietary Guidelines for Americans recommend avoidance of concomitant consumption of alcohol and caffeine, as taking them together may lead to increased alcohol consumption, with a higher risk of alcohol-associated injury. Smoking Smoking tobacco has been shown to increase caffeine clearance by 56% as a result of polycyclic aromatic hydrocarbons inducing the CYP1A2 enzyme. The CYP1A2 enzyme that is induced by smoking is responsible for the metabolism of caffeine; increased enzyme activity leads to increased caffeine clearance, and is associated with greater coffee consumption for regular smokers. 
Birth control Birth control pills can extend the half-life of caffeine by as much as 40%, requiring greater attention to caffeine consumption. Medications Caffeine sometimes increases the effectiveness of some medications, such as those for headaches. Caffeine was determined to increase the potency of some over-the-counter analgesic medications by 40%. The pharmacological effects of adenosine may be blunted in individuals taking large quantities of methylxanthines like caffeine. Some other examples of methylxanthines include the medications theophylline and aminophylline, which are prescribed to relieve symptoms of asthma or COPD. Pharmacology Pharmacodynamics In the absence of caffeine and when a person is awake and alert, little adenosine is present in CNS neurons. With a continued wakeful state, over time adenosine accumulates in the neuronal synapse, in turn binding to and activating adenosine receptors found on certain CNS neurons; when activated, these receptors produce a cellular response that ultimately increases drowsiness. When caffeine is consumed, it antagonizes adenosine receptors; in other words, caffeine prevents adenosine from activating the receptor by blocking the location on the receptor where adenosine binds to it. As a result, caffeine temporarily prevents or relieves drowsiness, and thus maintains or restores alertness. Receptor and ion channel targets Caffeine is an antagonist of adenosine A2A receptors, and knockout mouse studies have specifically implicated antagonism of the A2A receptor as responsible for the wakefulness-promoting effects of caffeine. Antagonism of A2A receptors in the ventrolateral preoptic area (VLPO) reduces inhibitory GABA neurotransmission to the tuberomammillary nucleus, a histaminergic projection nucleus that activation-dependently promotes arousal. This disinhibition of the tuberomammillary nucleus is the downstream mechanism by which caffeine produces wakefulness-promoting effects. Caffeine is an antagonist of all four adenosine receptor subtypes (A1, A2A, A2B, and A3), although with varying potencies. The affinity (KD) values of caffeine for the human adenosine receptors are 12 μM at A1, 2.4 μM at A2A, 13 μM at A2B, and 80 μM at A3. Antagonism of adenosine receptors by caffeine also stimulates the medullary vagal, vasomotor, and respiratory centers, which increases respiratory rate, reduces heart rate, and constricts blood vessels. Adenosine receptor antagonism also promotes neurotransmitter release (e.g., monoamines and acetylcholine), which endows caffeine with its stimulant effects; adenosine acts as an inhibitory neurotransmitter that suppresses activity in the central nervous system. Heart palpitations are caused by blockade of the A1 receptor. Because caffeine is both water- and lipid-soluble, it readily crosses the blood–brain barrier that separates the bloodstream from the interior of the brain. Once in the brain, the principal mode of action is as a nonselective antagonist of adenosine receptors (in other words, an agent that reduces the effects of adenosine). The caffeine molecule is structurally similar to adenosine, and is capable of binding to adenosine receptors on the surface of cells without activating them, thereby acting as a competitive antagonist. In addition to its activity at adenosine receptors, caffeine is an inositol trisphosphate receptor 1 antagonist and a voltage-independent activator of the ryanodine receptors (RYR1, RYR2, and RYR3). It is also a competitive antagonist of the ionotropic glycine receptor. 
Effects on striatal dopamine While caffeine does not directly bind to any dopamine receptors, it influences the binding activity of dopamine at its receptors in the striatum by binding to adenosine receptors that have formed GPCR heteromers with dopamine receptors, specifically the A1–D1 receptor heterodimer (this is a receptor complex with one adenosine A1 receptor and one dopamine D1 receptor) and the A2A–D2 receptor heterotetramer (this is a receptor complex with two adenosine A2A receptors and two dopamine D2 receptors). The A2A–D2 receptor heterotetramer has been identified as a primary pharmacological target of caffeine, primarily because it mediates some of its psychostimulant effects and its pharmacodynamic interactions with dopaminergic psychostimulants. Caffeine also causes the release of dopamine in the dorsal striatum and nucleus accumbens core (a substructure within the ventral striatum), but not the nucleus accumbens shell, by antagonizing A1 receptors in the axon terminal of dopamine neurons and A1–A2A heterodimers (a receptor complex composed of one adenosine A1 receptor and one adenosine A2A receptor) in the axon terminal of glutamate neurons. During chronic caffeine use, caffeine-induced dopamine release within the nucleus accumbens core is markedly reduced due to drug tolerance. Enzyme targets Caffeine, like other xanthines, also acts as a phosphodiesterase inhibitor. As a competitive nonselective phosphodiesterase inhibitor, caffeine raises intracellular cyclic AMP, activates protein kinase A, inhibits TNF-alpha and leukotriene synthesis, and reduces inflammation and innate immunity. Caffeine also affects the cholinergic system where it is a moderate inhibitor of the enzyme acetylcholinesterase. Pharmacokinetics Caffeine from coffee or other beverages is absorbed by the small intestine within 45 minutes of ingestion and distributed throughout all bodily tissues. Peak blood concentration is reached within 1–2 hours. It is eliminated by first-order kinetics. Caffeine can also be absorbed rectally, evidenced by suppositories of ergotamine tartrate and caffeine (for the relief of migraine) and of chlorobutanol and caffeine (for the treatment of hyperemesis). However, rectal absorption is less efficient than oral: the maximum concentration (Cmax) and total amount absorbed (AUC) are both about 30% (i.e., 1/3.5) of the oral amounts. Caffeine's biological half-life – the time required for the body to eliminate one-half of a dose – varies widely among individuals according to factors such as pregnancy, other drugs, liver enzyme function level (needed for caffeine metabolism) and age. In healthy adults, caffeine's half-life is between 3 and 7 hours. The half-life is decreased by 30-50% in adult male smokers, approximately doubled in women taking oral contraceptives, and prolonged in the last trimester of pregnancy. In newborns the half-life can be 80 hours or more, dropping rapidly with age, possibly to less than the adult value by age 6 months. The antidepressant fluvoxamine (Luvox) reduces the clearance of caffeine by more than 90%, and increases its elimination half-life more than tenfold, from 4.9 hours to 56 hours. Caffeine is metabolized in the liver by the cytochrome P450 oxidase enzyme system (particularly by the CYP1A2 isozyme) into three dimethylxanthines, each of which has its own effects on the body: Paraxanthine (84%): Increases lipolysis, leading to elevated glycerol and free fatty acid levels in blood plasma. 
Theobromine (12%): Dilates blood vessels and increases urine volume. Theobromine is also the principal alkaloid in the cocoa bean (chocolate). Theophylline (4%): Relaxes smooth muscles of the bronchi, and is used to treat asthma. The therapeutic dose of theophylline, however, is many times greater than the levels attained from caffeine metabolism. 1,3,7-Trimethyluric acid is a minor caffeine metabolite. 7-Methylxanthine is also a metabolite of caffeine. Each of the above metabolites is further metabolized and then excreted in the urine. Caffeine can accumulate in individuals with severe liver disease, increasing its half-life. A 2011 review found that increased caffeine intake was associated with a variation in two genes that increase the rate of caffeine catabolism. Subjects who had this mutation on both chromosomes consumed 40 mg more caffeine per day than others. This is presumably due to the need for a higher intake to achieve a comparable desired effect, rather than to the gene variants producing a greater disposition toward habituation. Chemistry Pure anhydrous caffeine is a bitter-tasting, white, odorless powder with a melting point of 235–238 °C. Caffeine is moderately soluble in water at room temperature (2 g/100 mL), but highly soluble in boiling water (66 g/100 mL). It is also moderately soluble in ethanol (1.5 g/100 mL). It is weakly basic (pKa of conjugate acid = ~0.6), requiring a strong acid to protonate it. Caffeine does not contain any stereogenic centers and hence is classified as an achiral molecule. The xanthine core of caffeine contains two fused rings, a pyrimidinedione and imidazole. The pyrimidinedione in turn contains two amide functional groups that exist predominantly in a zwitterionic resonance form, in which the nitrogen atoms are double-bonded to their adjacent amide carbon atoms. Hence all six of the atoms within the pyrimidinedione ring system are sp2 hybridized and planar. The imidazole ring also exhibits resonance. Therefore, the fused 5,6 ring core of caffeine contains a total of ten pi electrons and hence according to Hückel's rule is aromatic. Synthesis The biosynthesis of caffeine is an example of convergent evolution among different species. Caffeine may be synthesized in the lab starting with 1,3-dimethylurea and malonic acid. Production of synthesized caffeine largely takes place in pharmaceutical plants in China. Synthetic and natural caffeine are chemically identical and nearly indistinguishable. The primary distinction is that synthetic caffeine is manufactured from urea and chloroacetic acid, while natural caffeine is extracted from plant sources, a process known as decaffeination. Despite the different production methods, the final product and its effects on the body are similar. Research on synthetic caffeine indicates that it has the same stimulating effects on the body as natural caffeine. Although many claim that natural caffeine is absorbed more slowly and therefore leads to a gentler caffeine crash, there is little scientific evidence supporting this notion. Decaffeination Germany, the birthplace of decaffeinated coffee, is home to several decaffeination plants, including the world's largest, Coffein Compagnie. Over half of the decaf coffee sold in the U.S. first travels from the tropics to Germany for caffeine removal before making its way to American consumers. Extraction of caffeine from coffee, to produce caffeine and decaffeinated coffee, can be performed using a number of solvents.
The following are the main methods: Water extraction: Coffee beans are soaked in water. The water, which contains many other compounds in addition to caffeine and contributes to the flavor of coffee, is then passed through activated charcoal, which removes the caffeine. The water can then be put back with the beans and evaporated dry, leaving decaffeinated coffee with its original flavor. Coffee manufacturers recover the caffeine and resell it for use in soft drinks and over-the-counter caffeine tablets. Supercritical carbon dioxide extraction: Supercritical carbon dioxide is an excellent nonpolar solvent for caffeine, and is safer than the organic solvents that are otherwise used. The extraction process is simple: CO2 is forced through the green coffee beans at temperatures above 31.1 °C and pressures above 73 atm. Under these conditions, CO2 is in a "supercritical" state: it has gas-like properties that allow it to penetrate deep into the beans but also liquid-like properties that dissolve 97–99% of the caffeine. The caffeine-laden CO2 is then sprayed with high-pressure water to remove the caffeine. The caffeine can then be isolated by charcoal adsorption (as above) or by distillation, recrystallization, or reverse osmosis. Extraction by organic solvents: Certain organic solvents such as ethyl acetate present much less health and environmental hazard than the chlorinated and aromatic organic solvents used formerly. Another method is to use triglyceride oils obtained from spent coffee grounds. "Decaffeinated" coffees do in fact contain caffeine in many cases – some commercially available decaffeinated coffee products contain considerable levels. One study found that decaffeinated coffee contained 10 mg of caffeine per cup, compared to approximately 85 mg of caffeine per cup for regular coffee. Detection in body fluids Caffeine can be quantified in blood, plasma, or serum to monitor therapy in neonates, confirm a diagnosis of poisoning, or facilitate a medicolegal death investigation. Plasma caffeine levels are usually in the range of 2–10 mg/L in coffee drinkers, 12–36 mg/L in neonates receiving treatment for apnea, and 40–400 mg/L in victims of acute overdosage. Urinary caffeine concentration is frequently measured in competitive sports programs, for which a level in excess of 15 mg/L is usually considered to represent abuse. Analogs Some analog substances have been created which mimic caffeine's properties in either function or structure, or both. Of the latter group are the xanthines DMPX and 8-chlorotheophylline, which is an ingredient in Dramamine. Members of a class of nitrogen-substituted xanthines are often proposed as potential alternatives to caffeine. Many other xanthine analogues constituting the adenosine receptor antagonist class have also been elucidated. Some other caffeine analogs: Dipropylcyclopentylxanthine 8-Cyclopentyl-1,3-dimethylxanthine 8-Phenyltheophylline Precipitation of tannins Caffeine, like other alkaloids such as cinchonine, quinine, or strychnine, precipitates polyphenols and tannins. This property can be used in a quantitation method. Natural occurrence Around thirty plant species are known to contain caffeine. Common sources are the "beans" (seeds) of the two cultivated coffee plants, Coffea arabica and Coffea canephora (the quantity varies, but 1.3% is a typical value); and of the cocoa plant, Theobroma cacao; the leaves of the tea plant; and kola nuts.
Other sources include the leaves of yaupon holly, South American holly yerba mate, and Amazonian holly guayusa; and seeds from Amazonian maple guarana berries. Temperate climates around the world have produced unrelated caffeine-containing plants. Caffeine in plants acts as a natural pesticide: it can paralyze and kill predator insects feeding on the plant. High caffeine levels are found in coffee seedlings when they are developing foliage and lack mechanical protection. In addition, high caffeine levels are found in the surrounding soil of coffee seedlings, which inhibits seed germination of nearby coffee seedlings, thus giving seedlings with the highest caffeine levels fewer competitors for existing resources for survival. Caffeine is stored in tea leaves in two places. Firstly, in the cell vacuoles where it is complexed with polyphenols. This caffeine probably is released into the mouth parts of insects, to discourage herbivory. Secondly, around the vascular bundles, where it probably inhibits pathogenic fungi from entering and colonizing the vascular bundles. Caffeine in nectar may improve the reproductive success of the pollen producing plants by enhancing the reward memory of pollinators such as honey bees. The differing perceptions in the effects of ingesting beverages made from various plants containing caffeine could be explained by the fact that these beverages also contain varying mixtures of other methylxanthine alkaloids, including the cardiac stimulants theophylline and theobromine, and polyphenols that can form insoluble complexes with caffeine. Products Products containing caffeine include coffee, tea, soft drinks ("colas"), energy drinks, other beverages, chocolate, caffeine tablets, other oral products, and inhalation products. According to a 2020 study in the United States, coffee is the major source of caffeine intake in middle-aged adults, while soft drinks and tea are the major sources in adolescents. Energy drinks are more commonly consumed as a source of caffeine in adolescents as compared to adults. Beverages Coffee The world's primary source of caffeine is the coffee "bean" (the seed of the coffee plant), from which coffee is brewed. Caffeine content in coffee varies widely depending on the type of coffee bean and the method of preparation used; even beans within a given bush can show variations in concentration. In general, one serving of coffee ranges from 80 to 100 milligrams, for a single shot (30 milliliters) of arabica-variety espresso, to approximately 100–125 milligrams for a cup (120 milliliters) of drip coffee. Arabica coffee typically contains half the caffeine of the robusta variety. In general, dark-roast coffee has slightly less caffeine than lighter roasts because the roasting process reduces caffeine content of the bean by a small amount. Tea Tea contains more caffeine than coffee by dry weight. A typical serving, however, contains much less, since less of the product is used as compared to an equivalent serving of coffee. Also contributing to caffeine content are growing conditions, processing techniques, and other variables. Thus, teas contain varying amounts of caffeine. Tea contains small amounts of theobromine and slightly higher levels of theophylline than coffee. Preparation and many other factors have a significant impact on tea, and color is a poor indicator of caffeine content. 
Teas like the pale Japanese green tea, gyokuro, for example, contain far more caffeine than much darker teas like lapsang souchong, which has minimal caffeine content. Soft drinks and energy drinks Caffeine is also a common ingredient of soft drinks, such as cola, originally prepared from kola nuts. Soft drinks typically contain 0 to 55 milligrams of caffeine per 12-ounce (355 mL) serving. By contrast, energy drinks, such as Red Bull, can start at 80 milligrams of caffeine per serving. The caffeine in these drinks either originates from the ingredients used or is an additive derived from the product of decaffeination or from chemical synthesis. Guarana, a primary ingredient of energy drinks, contains large amounts of caffeine with small amounts of theobromine and theophylline in a naturally occurring slow-release excipient. Other beverages Maté is a drink popular in many parts of South America. Its preparation consists of filling a gourd with the leaves of the South American holly yerba mate, pouring hot but not boiling water over the leaves, and drinking with a straw, the bombilla, which acts as a filter so as to draw only the liquid and not the yerba leaves. Guaraná is a soft drink originating in Brazil made from the seeds of the Guaraná fruit. The leaves of Ilex guayusa, the Ecuadorian holly tree, are placed in boiling water to make a guayusa tea. The leaves of Ilex vomitoria, the yaupon holly tree, are placed in boiling water to make a yaupon tea. Commercially prepared coffee-flavoured milk beverages are popular in Australia. Examples include Oak's Ice Coffee and Farmers Union Iced Coffee. The amount of caffeine in these beverages can vary widely. Caffeine concentrations can differ significantly from the manufacturer's claims. Cacao solids Cocoa solids (derived from the cocoa bean) contain 230 mg caffeine per 100 g. The caffeine content varies between cocoa bean strains. Caffeine content in mg/g (sorted from lowest to highest): Forastero (defatted): 1.3 mg/g Nacional (defatted): 2.4 mg/g Trinitario (defatted): 6.3 mg/g Criollo (defatted): 11.3 mg/g Chocolate Caffeine per 100 g: Dark chocolate, 70–85% cacao solids: 80 mg Dark chocolate, 60–69% cacao solids: 86 mg Dark chocolate, 45–59% cacao solids: 43 mg Milk chocolate: 20 mg The stimulant effect of chocolate may be due to a combination of theobromine and theophylline, as well as caffeine. Tablets Tablets offer several advantages over coffee, tea, and other caffeinated beverages, including convenience, known dosage, and avoidance of concomitant intake of sugar, acids, and fluids. The use of caffeine in this form is said to improve mental alertness. These tablets are commonly used by students studying for their exams and by people who work or drive for long hours. Other oral products One U.S. company is marketing orally dissolvable caffeine strips. Another intake route is SpazzStick, a caffeinated lip balm. Alert Energy Caffeine Gum was introduced in the United States in 2013, but was voluntarily withdrawn after an announcement of an investigation by the FDA of the health effects of added caffeine in foods. Inhalants Similar to an e-cigarette, a caffeine inhaler may be used to deliver caffeine or a stimulant like guarana by vaping. In 2012, the FDA sent a warning letter to one of the companies marketing an inhaler, expressing concerns about the lack of safety information available about inhaled caffeine. Combinations with other drugs Some beverages combine alcohol with caffeine to create a caffeinated alcoholic drink.
The stimulant effects of caffeine may mask the depressant effects of alcohol, potentially reducing the user's awareness of their level of intoxication. Such beverages have been the subject of bans due to safety concerns. In particular, the United States Food and Drug Administration has classified caffeine added to malt liquor beverages as an "unsafe food additive". Ya ba contains a combination of methamphetamine and caffeine. Painkillers such as propyphenazone/paracetamol/caffeine combine caffeine with an analgesic. History Discovery and spread of use According to Chinese legend, the Chinese emperor Shennong, reputed to have reigned in about 3000 BCE, inadvertently discovered tea when he noted that when certain leaves fell into boiling water, a fragrant and restorative drink resulted. Shennong is also mentioned in Lu Yu's Cha Jing, a famous early work on the subject of tea. The earliest credible evidence of either coffee drinking or knowledge of the coffee plant appears in the middle of the fifteenth century, in the Sufi monasteries of the Yemen in southern Arabia. From Mocha, coffee spread to Egypt and North Africa, and by the 16th century, it had reached the rest of the Middle East, Persia and Turkey. From the Middle East, coffee drinking spread to Italy, then to the rest of Europe, and coffee plants were transported by the Dutch to the East Indies and to the Americas. Kola nut use appears to have ancient origins. It is chewed in many West African cultures, in both private and social settings, to restore vitality and ease hunger pangs. The earliest evidence of cocoa bean use comes from residue found in an ancient Mayan pot dated to 600 BCE. Also, chocolate was consumed in a bitter and spicy drink called xocolatl, often seasoned with vanilla, chile pepper, and achiote. Xocolatl was believed to fight fatigue, a belief probably attributable to the theobromine and caffeine content. Chocolate was an important luxury good throughout pre-Columbian Mesoamerica, and cocoa beans were often used as currency. Xocolatl was introduced to Europe by the Spaniards, and became a popular beverage by 1700. The Spaniards also introduced the cacao tree into the West Indies and the Philippines. The leaves and stems of the yaupon holly (Ilex vomitoria) were used by Native Americans to brew a tea called asi or the "black drink". Archaeologists have found evidence of this use far into antiquity, possibly dating to Late Archaic times. Chemical identification, isolation, and synthesis In 1819, the German chemist Friedlieb Ferdinand Runge isolated caffeine for the first time; he called it "Kaffebase" (i.e., a base that exists in coffee). According to Runge, he did this at the behest of Johann Wolfgang von Goethe. In 1821, caffeine was isolated both by the French chemist Pierre Jean Robiquet and by another pair of French chemists, Pierre-Joseph Pelletier and Joseph Bienaimé Caventou, according to Swedish chemist Jöns Jacob Berzelius in his yearly journal. Furthermore, Berzelius stated that the French chemists had made their discoveries independently of any knowledge of Runge's or each other's work. 
However, Berzelius later acknowledged Runge's priority in the extraction of caffeine, stating: "However, at this point, it should not remain unmentioned that Runge (in his Phytochemical Discoveries, 1820, pages 146–147) specified the same method and described caffeine under the name Caffeebase a year earlier than Robiquet, to whom the discovery of this substance is usually attributed, having made the first oral announcement about it at a meeting of the Pharmacy Society in Paris." Pelletier's article on caffeine was the first to use the term in print (in the French form caféine, from the French word for coffee, café). It corroborates Berzelius's account: Robiquet was one of the first to isolate and describe the properties of pure caffeine, whereas Pelletier was the first to perform an elemental analysis. In 1827, M. Oudry isolated "théine" from tea, but in 1838 it was proved by Mulder and by Carl Jobst that theine was actually the same as caffeine. In 1895, German chemist Hermann Emil Fischer (1852–1919) first synthesized caffeine from its chemical components (i.e. a "total synthesis"), and two years later, he also derived the structural formula of the compound. This was part of the work for which Fischer was awarded the Nobel Prize in 1902. Historic regulations Because it was recognized that coffee contained some compound that acted as a stimulant, first coffee and later also caffeine have sometimes been subject to regulation. For example, in the 16th century religious authorities in Mecca and in the Ottoman Empire made coffee illegal for some classes. Charles II of England tried to ban it in 1676, Frederick II of Prussia banned it in 1777, and coffee was banned in Sweden at various times between 1756 and 1823. In 1911, caffeine became the focus of one of the earliest documented health scares, when the US government seized 40 barrels and 20 kegs of Coca-Cola syrup in Chattanooga, Tennessee, alleging the caffeine in its drink was "injurious to health". Although the trial court ruled in favor of Coca-Cola in United States v. Forty Barrels and Twenty Kegs of Coca-Cola, two bills were introduced to the U.S. House of Representatives in 1912 to amend the Pure Food and Drug Act, adding caffeine to the list of "habit-forming" and "deleterious" substances, which must be listed on a product's label. Society and culture Regulations United States The US Food and Drug Administration (FDA) considers beverages containing less than 0.02% caffeine to be safe, but caffeine powder, which is sold as a dietary supplement, is unregulated. It is a regulatory requirement that the label of most prepackaged foods must declare a list of ingredients, including food additives such as caffeine, in descending order of proportion. However, there is no regulatory provision for mandatory quantitative labeling of caffeine (e.g., milligrams of caffeine per stated serving size). There are a number of food ingredients that naturally contain caffeine. These ingredients must appear in food ingredient lists. However, as is the case for "food additive caffeine", there is no requirement to identify the quantitative amount of caffeine in composite foods containing ingredients that are natural sources of caffeine. While coffee or chocolate are broadly recognized as caffeine sources, some ingredients (e.g., guarana, yerba maté) are likely less recognized as caffeine sources. For these natural sources of caffeine, there is no regulatory provision requiring that a food label identify the presence of caffeine nor state the amount of caffeine present in the food.
The FDA guidance was updated in 2018. Consumption Global consumption of caffeine has been estimated at 120,000 tonnes per year, making it the world's most popular psychoactive substance. The consumption of caffeine has remained stable between 1997 and 2015. Coffee, tea and soft drinks are the most common caffeine sources, with energy drinks contributing little to the total caffeine intake across all age groups. Religions The Seventh-day Adventist Church asked its members to "abstain from caffeinated drinks", but has removed this from baptismal vows (while still recommending abstention as policy). Some adherents of these religions believe that one is not supposed to consume a non-medical, psychoactive substance, or believe that one is not supposed to consume a substance that is addictive. The Church of Jesus Christ of Latter-day Saints has said the following with regard to caffeinated beverages: "... the Church revelation spelling out health practices (Doctrine and Covenants 89) does not mention the use of caffeine. The Church's health guidelines prohibit alcoholic drinks, smoking or chewing of tobacco, and 'hot drinks' – taught by Church leaders to refer specifically to tea and coffee." Gaudiya Vaishnavas generally also abstain from caffeine, because they believe it clouds the mind and overstimulates the senses. To be initiated under a guru, one must have had no caffeine, alcohol, nicotine, or other drugs for at least a year. Caffeinated beverages are widely consumed by Muslims. In the 16th century, some Muslim authorities made unsuccessful attempts to ban them as forbidden "intoxicating beverages" under Islamic dietary laws. Other organisms The bacterium Pseudomonas putida CBB5 can live on pure caffeine and can cleave caffeine into carbon dioxide and ammonia. Caffeine is toxic to birds and to dogs and cats, and has a pronounced adverse effect on mollusks, various insects, and spiders. This is at least partly due to a poor ability to metabolize the compound, causing higher levels for a given dose per unit weight. Caffeine has also been found to enhance the reward memory of honey bees. Research Caffeine has been used to double chromosomes in haploid wheat. See also Adderall Amphetamine Cocaine Health effects of coffee Health effects of tea List of chemical compounds in coffee Low caffeine coffee Methylliberine Nootropic Theobromine Theophylline Wakefulness-promoting agent References Notes Citations Bibliography External links GMD MS Spectrum Caffeine: ChemSub Online Caffeine at The Periodic Table of Videos (University of Nottingham) Acetylcholinesterase inhibitors Adenosine receptor antagonists Alkaloids found in plants Anxiogenics Bitter compounds Components of chocolate Diuretics Ergogenic aids Glycine receptor antagonists IARC Group 3 carcinogens Mutagens Phosphodiesterase inhibitors Plant toxin insecticides Pro-motivational agents Stimulants Wakefulness-promoting agents Vasoconstrictors Xanthines
Caffeine
[ "Chemistry", "Technology" ]
12,104
[ "Plant toxin insecticides", "Chemical ecology", "Xanthines", "Alkaloids by chemical classification", "Components of chocolate", "Components" ]
6,874
https://en.wikipedia.org/wiki/Cyc
Cyc (pronounced like "psych") is a long-term artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base that spans the basic concepts and rules about how the world works. Hoping to capture common sense knowledge, Cyc focuses on implicit knowledge. The project began in July 1984 at MCC and was developed later by the Cycorp company. The name "Cyc" (from "encyclopedia") is a registered trademark owned by Cycorp. CycL has a publicly released specification, and dozens of HL (Heuristic Level) modules were described in Lenat and Guha's textbook, but the Cyc inference engine code and the full list of HL modules are Cycorp-proprietary. History The project was begun in July 1984 by Douglas Lenat as a project of the Microelectronics and Computer Technology Corporation (MCC), a research consortium started by two dozen United States–based corporations "to counter a then ominous Japanese effort in AI, the so-called 'fifth-generation' project." The US passed the National Cooperative Research Act of 1984, which for the first time allowed US companies to "collude" on long-term research. Since January 1995, the project has been under active development by Cycorp, where Douglas Lenat was the CEO. The CycL representation language started as an extension of RLL (the Representation Language Language, developed in 1979–1980 by Lenat and his graduate student Russell Greiner while at Stanford University). By 1989, CycL had expanded in expressive power to higher-order logic (HOL). Cyc's ontology grew to about 100,000 terms in 1994, and as of 2017, it contained about 1,500,000 terms. The Cyc knowledge base of assertions involving those ontological terms was largely created by hand axiom-writing; it contained about 1 million assertions in 1994 and about 24.5 million as of 2017. In 2008, Cyc resources were mapped to many Wikipedia articles. Cyc is presently connected to Wikidata. Knowledge base The knowledge base is divided into microtheories. Unlike the knowledge base as a whole, each microtheory must be free from monotonic contradictions. Each microtheory is a first-class object in the Cyc ontology; it has a name that is a regular constant. The concept names in Cyc are CycL terms or constants. Constants start with an optional #$ and are case-sensitive. There are constants for: Individual items known as individuals, such as #$BillClinton or #$France. Collections, such as #$Tree-ThePlant (containing all trees) or #$EquivalenceRelation (containing all equivalence relations). A member of a collection is called an instance of that collection. Functions, which produce new terms from given ones. For example, #$FruitFn, when provided with an argument describing a type (or collection) of plants, will return the collection of its fruits. By convention, function constants start with an upper-case letter and end with the string Fn. Truth functions, which can apply to one or more other concepts and return either true or false. For example, #$siblings is the sibling relationship, true if the two arguments are siblings. By convention, truth function constants start with a lowercase letter. For example, the knowledge base asserts that for every instance of the collection #$ChordataPhylum (i.e., for every chordate), there exists a female animal (instance of #$FemaleAnimal), which is its mother (described by the predicate #$biologicalMother); a CycL sketch of this rule is shown below. Inference engine An inference engine is a computer program that tries to derive answers from a knowledge base. The Cyc inference engine performs general logical deduction.
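As an illustration of the kind of assertion the engine reasons over, the "every chordate has a biological mother" rule mentioned above might be written in CycL roughly as follows – a sketch only, in which the variable names are arbitrary and the exact form used in the released knowledge base may differ:

 (#$implies
    (#$isa ?OBJ #$ChordataPhylum)
    (#$thereExists ?MOM
      (#$and
        (#$biologicalMother ?OBJ ?MOM)
        (#$isa ?MOM #$FemaleAnimal))))

Deduction over assertions of this kind is the basic service the inference engine provides.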
Besides deduction, the inference engine also performs inductive reasoning, statistical machine learning and symbolic machine learning, and abductive reasoning. The Cyc inference engine separates the epistemological problem from the heuristic problem. For the latter, Cyc used a community-of-agents architecture in which specialized modules, each with its own algorithm, became prioritized if they could make progress on the sub-problem. Releases OpenCyc The first version of OpenCyc was released in spring 2002 and contained only 6,000 concepts and 60,000 facts. The knowledge base was released under the Apache License. Cycorp stated its intention to release OpenCyc under parallel, unrestricted licences to meet the needs of its users. The CycL and SubL interpreter (the program that allows users to browse and edit the database as well as to draw inferences) was released free of charge, but only as a binary, without source code. It was made available for Linux and Microsoft Windows. The open source Texai project released the RDF-compatible content extracted from OpenCyc. The user interface was in Java 6. Cycorp was a participant in a working group for the Semantic Web, the Standard Upper Ontology Working Group, which was active from 2001 to 2003. A Semantic Web version of OpenCyc was available starting in 2008 but was discontinued sometime after 2016. OpenCyc 4.0 was released in June 2012. OpenCyc 4.0 contained 239,000 concepts and 2,093,000 facts; however, these are mainly taxonomic assertions. 4.0 was the last released version; around March 2017, OpenCyc was shut down, reportedly because such "fragmenting" led to divergence and created confusion among its users and the technical community, in which the OpenCyc fragment was mistaken for Cyc itself. ResearchCyc In July 2006, Cycorp released the executable of ResearchCyc 1.0, a version of Cyc aimed at the research community, at no charge. (ResearchCyc was in beta stage of development during all of 2004; a beta version was released in February 2005.) In addition to the taxonomic information, ResearchCyc includes more semantic knowledge; it also includes a large lexicon, English parsing and generation tools, and Java-based interfaces for knowledge editing and querying. It contains a system for ontology-based data integration. Applications In 2001, GlaxoSmithKline was funding the Cyc project, though for unknown applications. In 2007, the Cleveland Clinic used Cyc to develop a natural-language query interface to biomedical information on cardiothoracic surgeries. A query is parsed into a set of CycL fragments with open variables. The Terrorism Knowledge Base was an application of Cyc that aimed to capture knowledge describing "terrorist" groups and related events. The knowledge is stored as statements in mathematical logic. The project lasted from 2004 to 2008. Lycos used Cyc for search term disambiguation, but stopped in 2001. CycSecure, a network vulnerability assessment tool based on Cyc, was produced in 2002, with trials at the US STRATCOM Computer Emergency Response Team. One Cyc application has the stated aim of helping students doing math at a 6th-grade level. The application, called MathCraft, was supposed to play the role of a fellow student who is slightly more confused than the user about the subject. As the user gives good advice, Cyc allows the avatar to make fewer mistakes. Criticisms The Cyc project has been described as "one of the most controversial endeavors of the artificial intelligence history".
Catherine Havasi, CEO of Luminoso, says that Cyc is the predecessor project to IBM's Watson. Machine-learning scientist Pedro Domingos refers to the project as a "catastrophic failure" for the unending amount of data required to produce any viable results and the inability for Cyc to evolve on its own. Gary Marcus, a cognitive scientist and the cofounder of an AI company called Geometric Intelligence, says "it represents an approach that is very different from all the deep-learning stuff that has been in the news." This is consistent with Doug Lenat's position that "Sometimes the veneer of intelligence is not enough". Notable employees This is a list of some of the notable people who work or have worked on Cyc either while it was a project at MCC (where Cyc was first started) or Cycorp. Douglas Lenat Michael Witbrock Pat Hayes Ramanathan V. Guha Stuart J. Russell Srinija Srinivasan Jared Friedman John McCarthy See also BabelNet DARPA Agent Markup Language DBpedia Fifth generation computer Freebase List of notable artificial intelligence projects References Further reading Alan Belasco et al. (2004). "Representing Knowledge Gaps Effectively". In: D. Karagiannis, U. Reimer (Eds.): Practical Aspects of Knowledge Management, Proceedings of PAKM 2004, Vienna, Austria, December 2–3, 2004. Springer-Verlag, Berlin Heidelberg. John Cabral & others (2005). "Converting Semantic Meta-Knowledge into Inductive Bias". In: Proceedings of the 15th International Conference on Inductive Logic Programming. Bonn, Germany, August 2005. Jon Curtis et al. (2005). "On the Effective Use of Cyc in a Question Answering System". In: Papers from the IJCAI Workshop on Knowledge and Reasoning for Answering Questions. Edinburgh, Scotland: 2005. Chris Deaton et al. (2005). "The Comprehensive Terrorism Knowledge Base in Cyc". In: Proceedings of the 2005 International Conference on Intelligence Analysis, McLean, Virginia, May 2005. Kenneth Forbus et al. (2005) ."Combining analogy, intelligent information retrieval, and knowledge integration for analysis: A preliminary report". In: Proceedings of the 2005 International Conference on Intelligence Analysis, McLean, Virginia, May 2005 douglas foxvog (2010), "Cyc". In: Theory and Applications of Ontology: Computer Applications , Springer. Fritz Lehmann and d. foxvog (1998), "Putting Flesh on the Bones: Issues that Arise in Creating Anatomical Knowledge Bases with Rich Relational Structures". In: Knowledge Sharing across Biological and Medical Knowledge Based Systems, AAAI. Douglas Lenat and R. V. Guha (1990). Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project. Addison-Wesley. . James Masters (2002). "Structured Knowledge Source Integration and its applications to information fusion". In: Proceedings of the Fifth International Conference on Information Fusion. Annapolis, MD, July 2002. James Masters and Z. Güngördü (2003). ."Structured Knowledge Source Integration: A Progress Report" In: Integration of Knowledge Intensive Multiagent Systems. Cambridge, Massachusetts, USA, 2003. Cynthia Matuszek et al. (2006). "An Introduction to the Syntax and Content of Cyc.". In: Proc. of the 2006 AAAI Spring Symposium on Formalizing and Compiling Background Knowledge and Its Applications to Knowledge Representation and Question Answering. Stanford, 2006 Cynthia Matuszek et al. (2005) ."Searching for Common Sense: Populating Cyc from the Web". In: Proceedings of the Twentieth National Conference on Artificial Intelligence. Pittsburgh, Pennsylvania, July 2005. 
Tom O'Hara et al. (2003). "Inducing criteria for mass noun lexical mappings using the Cyc Knowledge Base and its Extension to WordNet". In: Proceedings of the Fifth International Workshop on Computational Semantics. Tilburg, 2003. Fabrizio Morbini and Lenhart Schubert (2009). "Evaluation of EPILOG: a Reasoner for Episodic Logic". University of Rochester, Commonsense '09 Conference (describes Cyc's library of ~1600 'Commonsense Tests') Kathy Panton et al. (2002). "Knowledge Formation and Dialogue Using the KRAKEN Toolset". In: Eighteenth National Conference on Artificial Intelligence. Edmonton, Canada, 2002. Deepak Ramachandran P. Reagan & K. Goolsbey (2005). "First-Orderized ResearchCyc: Expressivity and Efficiency in a Common-Sense Ontology" . In: Papers from the AAAI Workshop on Contexts and Ontologies: Theory, Practice and Applications. Pittsburgh, Pennsylvania, July 2005. Stephen Reed and D. Lenat (2002). "Mapping Ontologies into Cyc". In: AAAI 2002 Conference Workshop on Ontologies For The Semantic Web. Edmonton, Canada, July 2002. Benjamin Rode et al. (2005). "Towards a Model of Pattern Recovery in Relational Data". In: Proceedings of the 2005 International Conference on Intelligence Analysis. McLean, Virginia, May 2005. Dave Schneider et al. (2005). "Gathering and Managing Facts for Intelligence Analysis". In: Proceedings of the 2005 International Conference on Intelligence Analysis. McLean, Virginia, May 2005. Schneider, D., & Witbrock, M. J. (2015, May). "Semantic construction grammar: bridging the NL/Logic divide" In Proceedings of the 24th International Conference on World Wide Web (pp. 673–678). Blake Shepard et al. (2005). "A Knowledge-Based Approach to Network Security: Applying Cyc in the Domain of Network Risk Assessment". In: Proceedings of the Seventeenth Innovative Applications of Artificial Intelligence Conference. Pittsburgh, Pennsylvania, July 2005. Nick Siegel et al. (2004). "Agent Architectures: Combining the Strengths of Software Engineering and Cognitive Systems". In: Papers from the AAAI Workshop on Intelligent Agent Architectures: Combining the Strengths of Software Engineering and Cognitive Systems. Technical Report WS-04-07, pp. 74–79. Menlo Park, California: AAAI Press, 2004. Nick Siegel et al. (2005). Hypothesis Generation and Evidence Assembly for Intelligence Analysis: Cycorp's Nooscape Application". In Proceedings of the 2005 International Conference on Intelligence Analysis, McLean, Virginia, May 2005. Michael Witbrock et al. (2002). "An Interactive Dialogue System for Knowledge Acquisition in Cyc". In: Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence. Acapulco, Mexico, 2003. Michael Witbrock et al. (2004). "Automated OWL Annotation Assisted by a Large Knowledge Base". In: Workshop Notes of the 2004 Workshop on Knowledge Markup and Semantic Annotation at the 3rd International Semantic Web Conference ISWC2004. Hiroshima, Japan, November 2004, pp. 71–80. Michael Witbrock et al. (2005). "Knowledge Begets Knowledge: Steps towards Assisted Knowledge Acquisition in Cyc". In: Papers from the 2005 AAAI Spring Symposium on Knowledge Collection from Volunteer Contributors (KCVC). pp. 99–105. Stanford, California, March 2005. William Jarrold (2001). "Validation of Intelligence in Large Rule-Based Systems with Common Sense". "Model-Based Validation of Intelligence: Papers from the 2001 AAAI Symposium" (AAAI Technical Report SS-01-04). William Jarrold. (2003). 
Using an Ontology to Evaluate a Large Rule Based Ontology: Theory and Practice. Performance Metrics for Intelligent Systems PerMIS '03 (NIST Special Publication 1014). External links Cycorp website Common Lisp (programming language) software Ontology (information science) Knowledge bases Cognitive architecture
Cyc
[ "Engineering" ]
3,143
[ "Artificial intelligence engineering", "Cognitive architecture" ]
6,896
https://en.wikipedia.org/wiki/Outline%20of%20chemistry
The following outline acts as an overview of and topical guide to chemistry: Chemistry is the science of atomic matter (matter that is composed of chemical elements), especially its chemical reactions, but also including its properties, structure, composition, behavior, and changes as they relate to the chemical reactions. Chemistry is centrally concerned with atoms and their interactions with other atoms, and particularly with the properties of chemical bonds. Summary Chemistry can be described as all of the following: An academic discipline – one with academic departments, curricula and degrees; national and international societies; and specialized journals. A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published. There are several chemistry-related scientific journals. A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific method. A physical science – one that studies non-living systems. A biological science – one that studies the role of chemicals and chemical processes in living organisms. See Outline of biochemistry. Branches Physical chemistry – study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, spectroscopy, and more recently, astrochemistry. Physical chemistry has large overlap with molecular physics. Physical chemistry involves the use of infinitesimal calculus in deriving equations. It is usually associated with quantum chemistry and theoretical chemistry. Physical chemistry is a distinct discipline from chemical physics, but again, there is very strong overlap. Chemical kinetics – study of rates of chemical processes. Chemical physics – investigates physicochemical phenomena using techniques from atomic and molecular physics and condensed matter physics; it is the branch of physics that studies chemical processes. Electrochemistry – branch of chemistry that studies chemical reactions which take place in a solution at the interface of an electron conductor (the electrode: a metal or a semiconductor) and an ionic conductor (the electrolyte), and which involve electron transfer between the electrode and the electrolyte or species in solution. Femtochemistry – area of physical chemistry that studies chemical reactions on extremely short timescales, approximately 10−15 seconds (one femtosecond). Geochemistry – chemical study of the mechanisms behind major systems studied in geology. Photochemistry – study of chemical reactions that proceed with the absorption of light by atoms or molecules. Quantum chemistry – branch of chemistry whose primary focus is the application of quantum mechanics in physical models and experiments of chemical systems. Solid-state chemistry – study of the synthesis, structure, and properties of solid phase materials, particularly, but not necessarily exclusively of, non-molecular solids. Spectroscopy – study of the interaction between matter and radiated energy. 
Stereochemistry – study of the relative spatial arrangement of atoms that form the structure of molecules. Surface science – study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. Thermochemistry – the branch of chemistry that studies the relation between chemical action and the amount of heat absorbed or generated. Calorimetry – the study of heat changes in physical and chemical processes. Organic chemistry (outline) – study of the structure, properties, composition, mechanisms, and reactions of organic compounds. An organic compound is defined as any compound based on a carbon skeleton. Biochemistry – study of the chemicals, chemical reactions and chemical interactions that take place in living organisms. Biochemistry and organic chemistry are closely related, as in medicinal chemistry or neurochemistry. Biochemistry is also associated with molecular biology and genetics. Neurochemistry – study of neurochemicals; including transmitters, peptides, proteins, lipids, sugars, and nucleic acids; their interactions, and the roles they play in forming, maintaining, and modifying the nervous system. Molecular biochemistry and genetic engineering – an area of biochemistry and molecular biology that studies genes, their heredity, and their expression. Bioorganic chemistry – combines organic chemistry and biochemistry toward biology. Biophysical chemistry – is a physical science that uses the concepts of physics and physical chemistry for the study of biological systems. Medicinal chemistry – discipline which applies chemistry for medical or drug-related purposes. Organometallic chemistry – is the study of organometallic compounds, chemical compounds containing at least one chemical bond between a carbon atom of an organic molecule and a metal, including alkaline, alkaline earth, and transition metals, and sometimes broadened to include metalloids like boron, silicon, and tin. Physical organic chemistry – study of the interrelationships between structure and reactivity in organic molecules. Inorganic chemistry – study of the properties and reactions of inorganic compounds. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry. Bioinorganic chemistry – is a field that examines the role of metals in biology. Cluster chemistry – focuses on crystalline materials that most often exist on the 0–2 nanometer scale, characterizing their crystal structures and understanding their role in the nucleation and growth mechanisms of larger materials. Nuclear chemistry – study of how subatomic particles come together and make nuclei. Modern transmutation is a large component of nuclear chemistry, and the table of nuclides is an important result and tool for this field. Analytical chemistry – analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry. Other Astrochemistry – study of the abundance and reactions of chemical elements and molecules in the universe, and their interaction with radiation. Cosmochemistry – study of the chemical composition of matter in the universe and the processes that led to those compositions.
Computational chemistry – is a branch of chemistry that uses computer simulations for solving chemical problems. Environmental chemistry – study of chemical and biochemical phenomena that occur in diverse aspects of the environment such as the air, soil, and water. It also studies the effects of human activity on the environment. Green chemistry is a philosophy of chemical research and engineering that encourages the design of products and processes that minimize the use and generation of hazardous substances. Supramolecular chemistry – refers to the domain of chemistry beyond that of molecules and focuses on the chemical systems made up of a discrete number of assembled molecular subunits or components. Theoretical chemistry – study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with (theoretical and experimental) condensed matter physics and molecular physics. Polymer chemistry – multidisciplinary science that deals with the chemical synthesis and chemical properties of polymers or macromolecules. Wet chemistry – is a form of analytical chemistry that uses classical methods such as observation to analyze materials, usually in the liquid phase. Agrochemistry – study and application of both chemistry and biochemistry for agricultural production, the processing of raw products into foods and beverages, and environmental monitoring and remediation. Atmospheric chemistry – branch of atmospheric science which studies the chemistry of the Earth's atmosphere and that of other planets. Chemical biology – scientific discipline spanning the fields of chemistry and biology that involves the application of chemical techniques and tools, often compounds produced through synthetic chemistry, to the analysis and manipulation of biological systems. Chemo-informatics – use of computer and informational techniques applied to a range of problems in the field of chemistry. Flow chemistry – study of chemical reactions in continuous flow, not as stationary batches, in industry and macro processing equipment. Immunohistochemistry – involves the process of detecting antigens (e.g., proteins) in cells of a tissue section by exploiting the principle of antibodies binding specifically to antigens in biological tissues. Immunochemistry – is a branch of chemistry that involves the study of the reactions and components of the immune system. Chemical oceanography – study of ocean chemistry: the behavior of the chemical elements within the Earth's oceans. Mathematical chemistry – area of study engaged in novel applications of mathematics to chemistry. It concerns itself principally with the mathematical modeling of chemical phenomena. Mechanochemistry – coupling of mechanical and chemical phenomena on a molecular scale. Molecular biology – study of interactions between the various systems of a cell. It overlaps with biochemistry. Petrochemistry – study of the transformation of petroleum and natural gas into useful products or raw materials. Phytochemistry – study of phytochemicals which come from plants. Radiochemistry – chemistry of radioactive materials.
Sonochemistry – study of the effect of sonic waves and wave properties on chemical systems. Synthetic chemistry – study of chemical synthesis. History History of chemistry Precursors to chemistry Alchemy (outline) History of alchemy History of the branches of chemistry History of analytical chemistry – history of the study of the separation, identification, and quantification of the chemical components of natural and artificial materials. History of astrochemistry – history of the study of the abundance and reactions of chemical elements and molecules in the universe, and their interaction with radiation. History of cosmochemistry – history of the study of the chemical composition of matter in the universe and the processes that led to those compositions History of atmospheric chemistry – history of the branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. It is a multidisciplinary field of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology and volcanology, and other disciplines History of biochemistry – history of the study of chemical processes in living organisms, including, but not limited to, living matter. Biochemistry governs all living organisms and living processes. History of agrochemistry – history of the study of both chemistry and biochemistry which are important in agricultural production, the processing of raw products into foods and beverages, and in environmental monitoring and remediation. History of bioinorganic chemistry – history of the field that examines the role of metals in biology. History of bioorganic chemistry – history of the rapidly growing scientific discipline that combines organic chemistry and biochemistry. History of biophysical chemistry – history of the new branch of chemistry that covers a broad spectrum of research activities involving biological systems. History of environmental chemistry – history of the scientific study of the chemical and biochemical phenomena that occur in natural places. History of immunochemistry – history of the branch of chemistry that involves the study of the reactions and components of the immune system. History of medicinal chemistry – history of the discipline at the intersection of chemistry, especially synthetic organic chemistry, and pharmacology and various other biological specialties, where they are involved with design, chemical synthesis and development for market of pharmaceutical agents (drugs). History of natural product chemistry – history of the study of chemical compounds or substances produced by living organisms – compounds found in nature that usually have a pharmacological or biological activity, for use in pharmaceutical drug discovery and drug design. History of neurochemistry – history of the specific study of neurochemicals, which include neurotransmitters and other molecules such as neuro-active drugs that influence neuron function. History of computational chemistry – history of the branch of chemistry that uses principles of computer science to assist in solving chemical problems. History of chemo-informatics – history of the use of computer and informational techniques, applied to a range of problems in the field of chemistry. History of molecular mechanics – history of the approach that uses Newtonian mechanics to model molecular systems. History of flavor chemistry – history of the use of chemistry to engineer artificial and natural flavors.
History of flow chemistry – history of the technique in which a chemical reaction is run in a continuously flowing stream rather than in batch production. History of geochemistry – history of the study of the mechanisms behind major geological systems using chemistry History of aqueous geochemistry – history of the study of the role of various elements in watersheds, including copper, sulfur, mercury, and how elemental fluxes are exchanged through atmospheric-terrestrial-aquatic interactions History of isotope geochemistry – history of the study of the relative and absolute concentrations of the elements and their isotopes using chemistry and geology History of ocean chemistry – history of the study of the chemistry of marine environments, including the influences of different variables. History of organic geochemistry – history of the study of the impacts and processes that organisms have had on Earth History of regional, environmental and exploration geochemistry – history of the study of the spatial variation in the chemical composition of materials at the surface of the Earth History of inorganic chemistry – history of the branch of chemistry concerned with the properties and behavior of inorganic compounds. History of nuclear chemistry – history of the subfield of chemistry dealing with radioactivity, nuclear processes and nuclear properties. History of radiochemistry – history of the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable). History of organic chemistry – history of the study of the structure, properties, composition, reactions, and preparation (by synthesis or by other means) of carbon-based compounds, hydrocarbons, and their derivatives. History of petrochemistry – history of the branch of chemistry that studies the transformation of crude oil (petroleum) and natural gas into useful products or raw materials. History of organometallic chemistry – history of the study of chemical compounds containing bonds between carbon and a metal. History of photochemistry – history of the study of chemical reactions that proceed with the absorption of light by atoms or molecules. History of physical chemistry – history of the study of macroscopic, atomic, subatomic, and particulate phenomena in chemical systems in terms of physical laws and concepts. History of chemical kinetics – history of the study of rates of chemical processes. History of chemical thermodynamics – history of the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. History of electrochemistry – history of the branch of chemistry that studies chemical reactions which take place in a solution at the interface of an electron conductor (a metal or a semiconductor) and an ionic conductor (the electrolyte), and which involve electron transfer between the electrode and the electrolyte or species in solution. History of femtochemistry – history of the science that studies chemical reactions on extremely short timescales, approximately 10⁻¹⁵ seconds (one femtosecond, hence the name).
History of mathematical chemistry – history of the area of research engaged in novel applications of mathematics to chemistry; it concerns itself principally with the mathematical modeling of chemical phenomena. History of mechanochemistry – history of the coupling of mechanical and chemical phenomena on a molecular scale, including mechanical breakage, chemical behaviour of mechanically stressed solids (e.g., stress-corrosion cracking), tribology, polymer degradation under shear, cavitation-related phenomena (e.g., sonochemistry and sonoluminescence), shock wave chemistry and physics, and even the burgeoning field of molecular machines. History of physical organic chemistry – history of the study of the interrelationships between structure and reactivity in organic molecules. History of quantum chemistry – history of the branch of chemistry whose primary focus is the application of quantum mechanics in physical models and experiments of chemical systems. History of sonochemistry – history of the study of the effect of sonic waves and wave properties on chemical systems. History of stereochemistry – history of the study of the relative spatial arrangement of atoms within molecules. History of supramolecular chemistry – history of the area of chemistry beyond the molecules, focusing on the chemical systems made up of a discrete number of assembled molecular subunits or components. History of thermochemistry – history of the study of the energy and heat associated with chemical reactions and/or physical transformations. History of phytochemistry – history of the study of phytochemicals in the strict sense of the word. History of polymer chemistry – history of the multidisciplinary science that deals with the chemical synthesis and chemical properties of polymers or macromolecules. History of solid-state chemistry – history of the study of the synthesis, structure, and properties of solid phase materials, particularly, but not necessarily exclusively of, non-molecular solids. History of multidisciplinary fields involving chemistry: History of chemical biology – history of the scientific discipline spanning the fields of chemistry and biology that involves the application of chemical techniques and tools, often compounds produced through synthetic chemistry, to the study and manipulation of biological systems. History of chemical oceanography – history of the study of the behavior of the chemical elements within the Earth's oceans. History of chemical physics – history of the branch of physics that studies chemical processes from the point of view of physics. History of oenology – history of the science and study of all aspects of wine and winemaking except vine-growing and grape-harvesting, which is a subfield called viticulture. History of spectroscopy – history of the study of the interaction between matter and radiated energy. History of surface science – history of the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. 
History of chemicals History of chemical elements History of carbon History of hydrogen Timeline of hydrogen technologies History of oxygen History of chemical products History of aspirin History of cosmetics History of gunpowder History of pharmaceutical drugs History of vitamins History of chemical processes History of manufactured gas History of the Haber process History of the chemical industry History of the petroleum industry History of the pharmaceutical industry History of the periodic table Chemicals Dictionary of chemical formulas List of biomolecules List of inorganic compounds Periodic table Atomic theory Atomic theory Atomic models Atomism – natural philosophy that theorizes that the world is composed of indivisible pieces. Plum pudding model Rutherford model Bohr model Thermochemistry Thermochemistry Terminology Thermochemistry – Chemical kinetics – the study of the rates of chemical reactions, investigating how different experimental conditions can influence the speed of a chemical reaction and yield information about the reaction's mechanism and transition states, as well as the construction of mathematical models that can describe the characteristics of a chemical reaction. Exothermic – a process or reaction in which the system releases energy to its surroundings in the form of heat. Such processes are denoted by negative heat flow. Endothermic – a process or reaction in which the system absorbs energy from its surroundings in the form of heat. Such processes are denoted by positive heat flow. Thermochemical equation – Enthalpy change – the change in a system's enthalpy (its internal energy plus the product of pressure and volume); at constant pressure this change is equal to the heat supplied to the system. Enthalpy of reaction – Temperature – an objective comparative measure of heat. Calorimeter – an object used for calorimetry, or the process of measuring the heat of chemical reactions or physical changes as well as heat capacity. Heat – a form of energy associated with the kinetic energy of atoms or molecules and capable of being transmitted through solid and fluid media by conduction, through fluid media by convection, and through empty space by radiation. Joule – a unit of energy. Calorie – Specific heat – Specific heat capacity – Latent heat – Heat of fusion – Heat of vaporization – Collision theory – Activation energy – Activated complex – Reaction rate – Catalyst – Thermochemical Equations Chemical equations that include the heat involved in a reaction, either on the reactant side or the product side. 
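A brief worked illustration may help show how such an equation is read quantitatively. The sketch below is an addition for clarity, not part of the original outline; it uses the ammonia-synthesis equation listed among the examples that follow, the 0.5 mol quantity is chosen purely for illustration, and the sign convention matches the exothermic/endothermic definitions above.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Thermochemical equation for ammonia synthesis (heat written on the product side,
% equivalently \Delta H = -92 kJ per mole of reaction as written):
\[
\mathrm{N_2(g)} + 3\,\mathrm{H_2(g)} \rightarrow 2\,\mathrm{NH_3(g)} + 92\ \mathrm{kJ},
\qquad \Delta H = -92\ \mathrm{kJ}
\]
% Heat released when an illustrative 0.5 mol of N2 reacts completely:
\[
q = 0.5\ \mathrm{mol\ N_2} \times \frac{-92\ \mathrm{kJ}}{1\ \mathrm{mol\ N_2}} = -46\ \mathrm{kJ}
\quad \text{(negative sign: exothermic, 46 kJ released).}
\]
\end{document}
```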
Examples: H₂O(l) + 240 kJ → H₂O(g); N₂ + 3H₂ → 2NH₃ + 92 kJ Joule (J) – Enthalpy Enthalpy and Thermochemical Equations Endothermic Reactions Exothermic Reactions Potential Energy Diagrams Thermochemistry Stoichiometry Chemists For more chemists, see: Nobel Prize in Chemistry and List of chemists Amedeo Avogadro Elias James Corey Marie Curie John Dalton Humphry Davy George Eastman Michael Faraday Rosalind Franklin Eleuthère Irénée du Pont Dmitriy Mendeleyev Alfred Nobel Wilhelm Ostwald Louis Pasteur Linus Pauling Joseph Priestley Robert Burns Woodward Karl Ziegler Ahmed Zewail Chemistry literature Scientific literature – Scientific journal – Academic journal – List of important publications in chemistry List of scientific journals in chemistry List of science magazines Scientific American Lists Chemical elements data references List of chemical elements – atomic mass, atomic number, symbol, name List of minerals – Minerals Electron configurations of the elements (data page) – electron configuration, electrons per shell Densities of the elements (data page) – density (solid, liquid, gas) Electron affinity (data page) – electron affinity Melting points of the elements (data page) – melting point Boiling points of the elements (data page) – boiling point Critical points of the elements (data page) – critical point Heats of fusion of the elements (data page) – heat of fusion Heats of vaporization of the elements (data page) – heat of vaporization Heat capacities of the elements (data page) – heat capacity Vapor pressures of the elements (data page) – vapor pressure Electronegativities of the elements (data page) – electronegativity (Pauling scale) Ionization energies of the elements (data page) – ionization energies (in eV) and molar ionization energies (in kJ/mol) Atomic radii of the elements (data page) – atomic radius (empirical), atomic radius (calculated), van der Waals radius, covalent radius Electrical resistivities of the elements (data page) – electrical resistivity Thermal conductivities of the elements (data page) – thermal conductivity Thermal expansion coefficients of the elements (data page) – thermal expansion Speeds of sound of the elements (data page) – speed of sound Elastic properties of the elements (data page) – Young's modulus, Poisson ratio, bulk modulus, shear modulus Hardnesses of the elements (data page) – Mohs hardness, Vickers hardness, Brinell hardness Abundances of the elements (data page) – Earth's crust, sea water, Sun and solar system List of oxidation states of the elements – oxidation states List of compounds List of CAS numbers by chemical compound List of Extremely Hazardous Substances List of inorganic compounds List of organic compounds List of alkanes List of alloys Other List of thermal conductivities List of purification methods in chemistry List of unsolved problems in chemistry See also Outline of biochemistry Outline of physics References External links International Union of Pure and Applied Chemistry IUPAC Nomenclature Home Page, see especially the "Gold Book" containing definitions of standard chemical terms Interactive Mind Map of Chemistry / Chemical energetics Chemistry Chemistry
Outline of chemistry
[ "Chemistry" ]
4,846
[ "nan" ]
6,907
https://en.wikipedia.org/wiki/Chakra
A chakra is one of the various focal points used in a variety of ancient meditation practices, collectively denominated as Tantra, part of the inner traditions of Hinduism and Buddhism. The concept of the chakra arose in Hinduism. Beliefs differ between the Indian religions, with many Buddhist texts consistently mentioning five chakras, while Hindu sources reference six or seven. Early Sanskrit texts speak of them both as meditative visualizations combining flowers and mantras and as physical entities in the body. Within Kundalini yoga, the techniques of breathing exercises, visualizations, mudras, bandhas, kriyas, and mantras are focused on manipulating the flow of subtle energy through chakras. The modern "Western chakra system" arose from multiple sources, starting in the 1880s with H. P. Blavatsky and other Theosophists, followed by Sir John Woodroffe's 1919 book The Serpent Power, and Charles W. Leadbeater's 1927 book The Chakras. Psychological and other attributes, rainbow colours, and a wide range of supposed correspondences with other systems such as alchemy, astrology, gemstones, homeopathy, Kabbalah and Tarot were added later. Etymology Lexically, chakra is the Indic reflex of an ancestral Indo-European form *kʷékʷlos, whence also "wheel" and "cycle". It has both literal and metaphorical uses, as in the "wheel of time" or "wheel of dharma", such as in Rigveda hymn verse 1.164.11, pervasive in the earliest Vedic texts. In Buddhism, especially in Theravada, the Pali noun cakka connotes "wheel". Within the Buddhist scriptures referred to as the Tripitaka, Shakyamuni Buddha variously refers to the "dhammacakka", or "wheel of dharma", connoting that this dharma, universal in its advocacy, should bear the marks characteristic of any temporal dispensation. Shakyamuni Buddha spoke of freedom from cycles in and of themselves, whether karmic, reincarnative, liberative, cognitive or emotional. In Jainism, the term chakra also means "wheel" and appears in various contexts in its ancient literature. As in other Indian religions, chakra in esoteric theories in Jainism such as those by Buddhisagarsuri means a yogic energy center. Ancient history The word chakra appears to first emerge within the Vedas, though not in the sense of psychic energy centers, rather as chakravartin or the king who "turns the wheel of his empire" in all directions from a center, representing his influence and power. The iconography popular in representing the Chakras, states the scholar David Gordon White, traces back to the five symbols of yajna, the Vedic fire altar: "square, circle, triangle, half moon and dumpling". The hymn 10.136 of the Rigveda mentions a renunciate yogi with a female named kunannamā. Literally, it means "she who is bent, coiled", representing both a minor goddess and one of many embedded enigmas and esoteric riddles within the Rigveda. Some scholars, such as D.G. White and Georg Feuerstein, have suggested that she may be a reference to kundalini shakti and a precursor to the terminology associated with the chakras in later tantric traditions. Breath channels (nāḍi) are mentioned in the classical Upanishads of Hinduism from the 1st millennium BCE, but not psychic-energy chakra theories. Three classical Nadis are Ida, Pingala and Sushumna, of which the central channel, Sushumna, is said to be foremost, as per the Kṣurikā-Upaniṣhad. 
The latter, states David Gordon White, were introduced around the 8th century CE in Buddhist texts as hierarchies of inner energy centers, such as in the Hevajra Tantra and Caryāgiti. These are called by various terms such as cakka, padma (lotus) or pitha (mound). These medieval Buddhist texts mention only four chakras, while later Hindu texts such as the Kubjikāmata and Kaulajñānanirnaya expanded the list to many more. In contrast to White, according to Feuerstein, early Upanishads of Hinduism do mention chakras in the sense of "psychospiritual vortices", along with other terms found in tantra: prana or vayu (life energy) along with nadi (energy carrying arteries). According to Gavin Flood, the ancient texts do not present chakra and kundalini-style yoga theories although these words appear in the earliest Vedic literature in many contexts. The chakra in the sense of four or more vital energy centers appears in the medieval era Hindu and Buddhist texts. Overview The Chakras are part of esoteric ideas and concepts about physiology and psychic centers that emerged across Indian traditions. The belief held that human life simultaneously exists in two parallel dimensions, one the "physical body" (sthula sarira) and the other the "psychological, emotional, mind, non-physical" dimension, called the "subtle body" (sukshma sarira). This subtle body is energy, while the physical body is mass. The psyche or mind plane corresponds to and interacts with the body plane, and the belief holds that the body and the mind mutually affect each other. The subtle body consists of nadi (energy channels) connected by nodes of psychic energy called chakra. The belief grew into extensive elaboration, with some suggesting 88,000 chakras throughout the subtle body. The number of major chakras varied between various traditions, but they typically ranged between four and seven. Nyingmapa Vajrayana Buddhist teachings mention eight chakras and there is a complete yogic system for each of them. The important chakras are stated in Hindu and Buddhist texts to be arranged in a column along the spinal cord, from its base to the top of the head, connected by vertical channels. The tantric traditions sought to master them, awaken and energize them through various breathing exercises or with assistance of a teacher. These chakras were also symbolically mapped to specific human physiological capacity, seed syllables (bija), sounds, subtle elements (tanmatra), in some cases deities, colors and other motifs. Belief in the chakra system of Hinduism and Buddhism differs from the historic Chinese system of meridians in acupuncture. Unlike the latter, the chakra relates to the subtle body, wherein it has a position but no definite nervous node or precise physical connection. The tantric systems envision it as continually present, highly relevant and a means to psychic and emotional energy. It is useful in a type of yogic ritual and meditative discovery of radiant inner energy (prana flows) and mind-body connections. The meditation is aided by extensive symbology, mantras, diagrams, models (deity and mandala). The practitioner proceeds step by step from perceptible models, to increasingly abstract models where deity and external mandala are abandoned, inner self and internal mandalas are awakened. These ideas are not unique to Hindu and Buddhist traditions. 
Similar and overlapping concepts emerged in other cultures in the East and the West, and these are variously called by other names such as subtle body, spirit body, esoteric anatomy, sidereal body and etheric body. According to Geoffrey Samuel and Jay Johnston, professors of Religious studies known for their studies on Yoga and esoteric traditions: Contrast with classical yoga Chakra and related beliefs have been important to the esoteric traditions, but they are not directly related to mainstream yoga. According to the Indologist Edwin Bryant and other scholars, the goals of classical yoga such as spiritual liberation (freedom, self-knowledge, moksha) are "attained entirely differently in classical yoga, and the cakra / nadi / kundalini physiology is completely peripheral to it." Number of chakras There is no consensus in Hinduism about the number of chakras because the concept of chakras has evolved and been interpreted differently by various sects, schools of thought, and spiritual traditions within Hinduism over the centuries. While some traditions follow the seven main chakra system, others recognize additional chakras or a different number of chakras. The lack of a universally accepted standard has led to variation and diversity in the interpretation and understanding of chakras within Hinduism. There are several sects within Hinduism that have their own unique interpretations and understandings of the concept of chakras. Here are some of the major sects that have different perspectives on chakras: Bhakti Yoga: In Bhakti Yoga, the number of chakras varies, but the focus is often on the heart chakra as the center of spiritual devotion. Ayurveda (3): In Ayurveda, there are three main chakras, known as the "Marmas," which are considered to be the focal points of the physical, mental, and spiritual energies in the body. Shaivism (5): In Shaivism, there are five chakras, with the focus being on the heart and crown chakras. Tantra (6): In Tantra, there are traditionally said to be four to six chakras, with the crown chakra being considered the highest. Kashmir Shaivism (6–7): In Kashmir Shaivism, there are six or seven chakras, with the focus being on the awakening of the divine energy within. Hatha Yoga (7): In Hatha Yoga, there are seven main chakras, but some Hatha Yoga traditions also recognize additional chakras. Kundalini Yoga (7): In Kundalini Yoga, there are seven main chakras, but additional minor chakras are also recognized. Nath Tradition (8): In the Nath tradition, there are eight main chakras, with the emphasis being on the awakening of the divine energy through these centers. Vaishnavism (12): In Vaishnavism, there are twelve chakras, with the emphasis being on the spiritual ascent through these centers. Classical traditions The classical eastern traditions, particularly those that developed in India during the 1st millennium AD, primarily describe nadi and chakra in a "subtle body" context. To them, they are in the same dimension as the psyche-mind reality, which is invisible yet real. In the nadi and cakra flow the prana (breath, life energy). The concept of "life energy" varies between the texts, ranging from simple inhalation-exhalation to far more complex association with breath-mind-emotions-sexual energy. This prana or essence is what vanishes when a person dies, leaving a gross body. Some versions of this concept hold that this subtle body is what withdraws within when one sleeps. 
All of it is believed to be reachable, awake-able and important for an individual's body-mind health, and how one relates to other people in one's life. This subtle body network of nadi and chakra is, according to some later Indian theories and many New Age speculations, closely associated with emotions. Hindu tantra Esoteric traditions in Hinduism mention numerous numbers and arrangements of chakras, of which a classical system of six-plus-one, the last being the Sahasrara, is most prevalent. This seven-part system, central to the core texts of hatha yoga, is one among many systems found in Hindu tantric literature. Hindu Tantra associates six Yoginis with six places in the subtle body, corresponding to the six chakras of the six-plus-one system. The Chakra methodology is extensively developed in the goddess tradition of Hinduism called Shaktism. It is an important concept along with yantras, mandalas and kundalini yoga in its practice. Chakra in Shakta tantrism means circle, an "energy center" within, as well as being a term for group rituals such as in chakra-puja (worship within a circle) which may or may not involve tantra practice. The cakra-based system is a part of the meditative exercises that came to be known as yoga. Buddhist tantra The esoteric traditions in Buddhism generally teach four chakras. In some early Buddhist sources, these chakras are identified as: manipura (navel), anahata (heart), vishuddha (throat) and ushnisha kamala (crown). In one development within the Nyingma lineage of the Mantrayana of Tibetan Buddhism a popular conceptualization of chakras in increasing subtlety and increasing order is as follows: Nirmanakaya (gross self), Sambhogakaya (subtle self), Dharmakaya (causal self), and Mahasukhakaya (non-dual self), each vaguely and indirectly corresponding to the categories within the Shaiva Mantramarga universe, i.e., Svadhisthana, Anahata, Visuddha, Sahasrara, etc. However, depending on the meditational tradition, these vary between three and six. The chakras are considered psycho-spiritual constituents, each bearing meaningful correspondences to cosmic processes and their postulated Buddha counterpart. A system of five chakras is common among the Mother class of Tantras and these five chakras along with their correspondences are: Basal chakra (Element: Earth, Buddha: Amoghasiddhi, Bija mantra: LAM) Abdominal chakra (Element: Water, Buddha: Ratnasambhava, Bija mantra: VAM) Heart chakra (Element: Fire, Buddha: Akshobhya, Bija mantra: RAM) Throat chakra (Element: Wind, Buddha: Amitabha, Bija mantra: YAM) Crown chakra (Element: Space, Buddha: Vairochana, Bija mantra: KHAM) Chakras clearly play a key role in Tibetan Buddhism, and are considered to be the pivotal providence of Tantric thinking. And, the precise use of the chakras across the gamut of tantric sadhanas gives little space to doubt the primary efficacy of Tibetan Buddhism as distinct religious agency, that being that precise revelation that, without Tantra there would be no Chakras, but more importantly, without Chakras, there is no Tibetan Buddhism. The highest practices in Tibetan Buddhism point to the ability to bring the subtle pranas of an entity into alignment with the central channel, and to thus penetrate the realisation of the ultimate unity, namely, the "organic harmony" of one's individual consciousness of Wisdom with the co-attainment of All-embracing Love, thus synthesizing a direct cognition of absolute Buddhahood. 
According to Samuel, the Buddhist esoteric systems developed cakra and nadi as "central to their soteriological process". The theories were sometimes, but not always, coupled with a unique system of physical exercises, called yantra yoga or phrul khor. Chakras, according to the Bon tradition, enable the gestalt of experience, with each of the five major chakras being psychologically linked with the five experiential qualities of unenlightened consciousness, the six realms of woe. The tsa lung practice embodied in the Trul khor lineage unbaffles the primary channels, thus activating and circulating liberating prana. Yoga awakens the deep mind, thus bringing forth positive attributes, inherent gestalts, and virtuous qualities. In a computer analogy, the screen of one's consciousness is slated and an attribute-bearing file is called up that contains necessary positive or negative, supportive qualities. Tantric practice is said to eventually transform all experience into clear light. The practice aims to liberate from all negative conditioning, and to reach the deep cognitive salvation of freedom from control and unity of perception and cognition. Seven chakra system The most studied chakra system incorporates six major chakras along with a seventh centre generally not regarded as a chakra. These points are arranged vertically along the axial channel (sushumna nadi in Hindu texts, Avadhuti in some Buddhist texts). According to Gavin Flood, this system of six chakras plus the sahasrara "center" at the crown first appears in the Kubjikāmata-tantra, an 11th-century Kaula work. It was this chakra system that was translated in the early 20th century by Sir John Woodroffe (also called Arthur Avalon) in his book The Serpent Power. Avalon translated the Hindu text Ṣaṭ-Cakra-Nirūpaṇa meaning the examination (nirūpaṇa) of the six (ṣaṭ) chakras (cakra). The Chakras are traditionally considered meditation aids. The yogi progresses from lower chakras to the highest chakra blossoming in the crown of the head, internalizing the journey of spiritual ascent. In both the Hindu kundalini and Buddhist candali traditions, the chakras are pierced by a dormant energy residing near or in the lowest chakra. In Hindu texts she is known as Kundalini, while in Buddhist texts she is called Candali or Tummo (Tibetan: gtum mo, "fierce one"). Below is the common New Age description of these six chakras and the seventh point known as sahasrara. This New Age version incorporates the Newtonian colours of the rainbow not found in any ancient Indian system. Western chakra system History Kurt Leland, for the Theosophical Society in America, concluded that the western chakra system was produced by an "unintentional collaboration" of many groups of people: esotericists and clairvoyants, often theosophical; Indologists; the scholar of myth, Joseph Campbell; the founders of the Esalen Institute and the psychological tradition of Carl Jung; the colour system of Charles W. Leadbeater's 1927 book The Chakras, treated as traditional lore by some modern Indian yogis; and energy healers such as Barbara Brennan. Leland states that far from being traditional, the two main elements of the modern system, the rainbow colours and the list of qualities, first appeared together only in 1977. The concept of a set of seven chakras came to the West in the 1880s; at that time each chakra was associated with a nerve plexus. 
In 1918, Sir John Woodroffe, alias Arthur Avalon, translated two Indian texts, the Ṣaṭ-Cakra-Nirūpaṇa and the Pādukā-Pañcaka, and in his book The Serpent Power drew Western attention to the seven chakra theory. In the 1920s, each of the seven chakras was associated with an endocrine gland, a tradition that has persisted. More recently, the lower six chakras have been linked to both nerve plexuses and glands. The seven rainbow colours were added by Leadbeater in 1927; a variant system in the 1930s proposed six colours plus white. Leadbeater's theory was influenced by Johann Georg Gichtel's 1696 book Theosophia Practica, which mentioned inner "force centres". Psychological and other attributes such as layers of the aura, developmental stages, associated diseases, Aristotelian elements, emotions, and states of consciousness were added still later. A wide range of supposed correspondences such as with alchemical metals, astrological signs and planets, foods, herbs, gemstones, homeopathic remedies, Kabbalistic spheres, musical notes, totem animals, and Tarot cards have also been proposed. New Age In Anatomy of the Spirit (1996), Caroline Myss described the function of chakras as follows: "Every thought and experience you've ever had in your life gets filtered through these chakra databases. Each event is recorded into your cells...". The chakras are described as being aligned in an ascending column from the base of the spine to the top of the head. New Age practices often associate each chakra with a certain colour. In various traditions, each chakra is associated with physiological functions, an aspect of consciousness, and a classical element; these do not correspond to those used in ancient Indian systems. The chakras are visualised as lotuses or flowers with a different number of petals in every chakra. The chakras are thought to vitalise the physical body and to be associated with interactions of a physical, emotional and mental nature. They are considered loci of life energy, or prana, which is thought to flow among them along pathways called nadi. The function of the chakras is to spin and draw in this energy to keep the spiritual, mental, emotional and physical health of the body in balance. Rudolf Steiner considered the chakra system to be dynamic and evolving. He suggested that this system has become different for modern people than it was in ancient times and that it will, in turn, be radically different in future. Skeptical response The not-for-profit Edinburgh Skeptics Society states that despite their popularity, "there has never been any evidence for these meridian lines or chakras". It adds that while practitioners sometimes cite "scientific evidence" for their claims, such evidence is often "incredibly shaky". See also Aura Dantian—energy centre in Chinese Taoist systems Surya Namaskar—the Sun Salutation, in which each posture is sometimes associated with a chakra and a mantra Indonesian Army's main combat auxiliary corps, known as Kostrad, uses "Cakra" as its main war cry and symbol Notes References Further reading Banerji, S. C. Tantra in Bengal. Second Revised and Enlarged Edition. (Manohar: Delhi, 1992) Goswami, Shyam Sundar. Layayoga: The Definitive Guide to the Chakras and Kundalini, Routledge & Kegan Paul, 1980. Khalsa, Guru Dharam Singh; O'Keeffe, Darryl. The Kundalini Yoga Experience Simon & Schuster, 2002. Judith, Anodea (1996). Eastern Body Western Mind: Psychology and the Chakra System As A Path to the Self. 
Berkeley, California, USA: Celestial Arts Publishing. Lowndes, Florin. 'Enlivening the Chakra of the Heart: The Fundamental Spiritual Exercises of Rudolf Steiner', first English edition 1998 from the original German edition of 1996, comparing 'traditional' chakra teaching, and that of C.W. Leadbeater, with that of Rudolf Steiner. Consciousness–matter dualism Hindu philosophical concepts Meditation New Age Spiritual practice Theosophical philosophical concepts Vitalism Eastern esotericism
Chakra
[ "Biology" ]
4,726
[ "Non-Darwinian evolution", "Vitalism", "Biology theories" ]
6,910
https://en.wikipedia.org/wiki/Cloning
Cloning is the process of producing individual organisms with identical genomes, either by natural or artificial means. In nature, some organisms produce clones through asexual reproduction; this reproduction of an organism by itself without a mate is known as parthenogenesis. In the field of biotechnology, cloning is the process of creating clones of organisms or copies of cells and DNA fragments. The artificial cloning of organisms, sometimes known as reproductive cloning, is often accomplished via somatic-cell nuclear transfer (SCNT), a cloning method in which a viable embryo is created from a somatic cell and an egg cell. In 1996, Dolly the sheep achieved notoriety for being the first mammal cloned from a somatic cell. Another example of artificial cloning is molecular cloning, a technique in molecular biology in which a single living cell is used to clone a large population of cells that contain identical DNA molecules. In bioethics, there are a variety of ethical positions regarding the practice and possibilities of cloning. The use of embryonic stem cells, which can be produced through SCNT, in some stem cell research has attracted controversy. Cloning has been proposed as a means of reviving extinct species. In popular culture, the concept of cloning—particularly human cloning—is often depicted in science fiction; depictions commonly involve themes related to identity, the recreation of historical figures or extinct species, or cloning for exploitation (e.g. cloning soldiers for warfare). Etymology Coined by Herbert J. Webber, the term clone derives from the Ancient Greek word for "twig", referring to the process whereby a new plant is created from a twig. In botany, the term lusus was used. In horticulture, the spelling clon was used until the early twentieth century; the final e came into use to indicate the vowel is a "long o" instead of a "short o". Since the term entered the popular lexicon in a more general context, the spelling clone has been used exclusively. Natural cloning Natural cloning is the production of clones without the involvement of genetic engineering techniques or human intervention (i.e. artificial cloning). Natural cloning occurs through a variety of natural mechanisms, from single-celled organisms to complex multicellular organisms, and has allowed life forms to spread for hundreds of millions of years. Versions of this reproduction method are used by plants, fungi, and bacteria, and it is also the way that clonal colonies reproduce themselves. Some of the mechanisms explored and used in plants and animals are binary fission, budding, fragmentation, and parthenogenesis. It can also occur during some forms of asexual reproduction, when a single parent organism produces genetically identical offspring by itself. Many plants are well known for their natural cloning ability, including blueberry plants, hazel trees, the Pando trees, the Kentucky coffeetree, Myrica, and the American sweetgum. It also occurs accidentally in the case of identical twins, which are formed when a fertilized egg splits, creating two or more embryos that carry identical DNA. Molecular cloning Molecular cloning refers to the process of making multiple molecules. Cloning is commonly used to amplify DNA fragments containing whole genes, but it can also be used to amplify any DNA sequence such as promoters, non-coding sequences and randomly fragmented DNA. It is used in a wide array of biological experiments and practical applications ranging from genetic fingerprinting to large scale protein production. 
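In practice, the fragmentation and ligation steps described in the paragraphs that follow are often sketched in silico before any bench work. Below is a minimal, purely illustrative Python model of that logic: the recognition site, the toy sequences, and the function names are invented for this example (they do not come from the article or from any particular software package), and the single-strand string model deliberately ignores the complementary strand and sticky-end chemistry.

```python
# Minimal, illustrative simulation of the "fragmentation" and "ligation" steps of
# molecular cloning. All sequences and names are invented for this sketch; the model
# works on one strand only and ignores sticky-end chemistry and the reverse strand.

SITE = "GAATTC"   # EcoRI-style recognition sequence
CUT_OFFSET = 1    # cut position within the site (after the first G)


def digest(linear_seq: str) -> list[str]:
    """Fragment a linear DNA string at every occurrence of the recognition site."""
    fragments, start = [], 0
    pos = linear_seq.find(SITE)
    while pos != -1:
        fragments.append(linear_seq[start:pos + CUT_OFFSET])
        start = pos + CUT_OFFSET
        pos = linear_seq.find(SITE, pos + 1)
    fragments.append(linear_seq[start:])
    return fragments


def linearize_vector(circular_vector: str) -> str:
    """Open a circular vector at its single recognition site, returning a linear string."""
    pos = circular_vector.find(SITE)
    if pos == -1:
        raise ValueError("vector has no recognition site")
    cut = pos + CUT_OFFSET
    # Rotate the circular sequence so that it starts immediately after the cut.
    return circular_vector[cut:] + circular_vector[:cut]


def ligate(opened_vector: str, insert: str) -> str:
    """Join the insert onto the opened vector; the result stands for the recombinant plasmid."""
    return opened_vector + insert


if __name__ == "__main__":
    donor_dna = "TTGAATTCAAACCCGGGTTTGAATTCGG"   # hypothetical donor sequence
    vector = "ATATGAATTCATAT"                    # hypothetical circular vector

    insert = digest(donor_dna)[1]                # take the middle fragment as the insert
    recombinant = ligate(linearize_vector(vector), insert)
    print("insert:     ", insert)
    print("recombinant:", recombinant)
```

Running the script prints the chosen insert and the string standing in for the recombinant plasmid; a real workflow would instead use a dedicated sequence-analysis toolkit and verified sequence data.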
Occasionally, the term cloning is misleadingly used to refer to the identification of the chromosomal location of a gene associated with a particular phenotype of interest, such as in positional cloning. In practice, localization of the gene to a chromosome or genomic region does not necessarily enable one to isolate or amplify the relevant genomic sequence. To amplify any DNA sequence in a living organism, that sequence must be linked to an origin of replication, which is a sequence of DNA capable of directing the propagation of itself and any linked sequence. However, a number of other features are needed, and a variety of specialised cloning vectors (small piece of DNA into which a foreign DNA fragment can be inserted) exist that allow protein production, affinity tagging, single-stranded RNA or DNA production and a host of other molecular biology tools. Cloning of any DNA fragment essentially involves four steps: fragmentation – breaking apart a strand of DNA; ligation – gluing together pieces of DNA in a desired sequence; transfection – inserting the newly formed pieces of DNA into cells; and screening/selection – selecting out the cells that were successfully transfected with the new DNA. Although these steps are invariable among cloning procedures, a number of alternative routes can be selected; these are summarized as a cloning strategy. Initially, the DNA of interest needs to be isolated to provide a DNA segment of suitable size. Subsequently, a ligation procedure is used where the amplified fragment is inserted into a vector (piece of DNA). The vector (which is frequently circular) is linearised using restriction enzymes, and incubated with the fragment of interest under appropriate conditions with an enzyme called DNA ligase. Following ligation, the vector with the insert of interest is transfected into cells. A number of alternative techniques are available, such as chemical sensitisation of cells, electroporation, optical injection and biolistics. Finally, the transfected cells are cultured. As the aforementioned procedures are of particularly low efficiency, there is a need to identify the cells that have been successfully transfected with the vector construct containing the desired insertion sequence in the required orientation. Modern cloning vectors include selectable antibiotic resistance markers, which allow only cells in which the vector has been transfected to grow. Additionally, the cloning vectors may contain colour selection markers, which provide blue/white screening (alpha-complementation) on X-gal medium. Nevertheless, these selection steps do not absolutely guarantee that the DNA insert is present in the cells obtained. Further investigation of the resulting colonies is required to confirm that cloning was successful. This may be accomplished by means of PCR, restriction fragment analysis and/or DNA sequencing. Cell cloning Cloning unicellular organisms Cloning a cell means to derive a population of cells from a single cell. In the case of unicellular organisms such as bacteria and yeast, this process is remarkably simple and essentially only requires the inoculation of the appropriate medium. However, in the case of cell cultures from multi-cellular organisms, cell cloning is an arduous task as these cells will not readily grow in standard media. A useful tissue culture technique used to clone distinct lineages of cell lines involves the use of cloning rings (cylinders). 
In this technique a single-cell suspension of cells that have been exposed to a mutagenic agent or drug used to drive selection is plated at high dilution to create isolated colonies, each arising from a single and potentially clonal distinct cell. At an early growth stage when colonies consist of only a few cells, sterile polystyrene rings (cloning rings), which have been dipped in grease, are placed over an individual colony and a small amount of trypsin is added. Cloned cells are collected from inside the ring and transferred to a new vessel for further growth. Cloning stem cells Somatic-cell nuclear transfer, popularly known as SCNT, can also be used to create embryos for research or therapeutic purposes. The most likely purpose for this is to produce embryos for use in stem cell research. This process is also called "research cloning" or "therapeutic cloning". The goal is not to create cloned human beings (called "reproductive cloning"), but rather to harvest stem cells that can be used to study human development and to potentially treat disease. While a clonal human blastocyst has been created, stem cell lines are yet to be isolated from a clonal source. Therapeutic cloning is achieved by creating embryonic stem cells in the hopes of treating diseases such as diabetes and Alzheimer's. The process begins by removing the nucleus (containing the DNA) from an egg cell and inserting a nucleus from the adult cell to be cloned. In the case of someone with Alzheimer's disease, the nucleus from a skin cell of that patient is placed into an empty egg. The reprogrammed cell begins to develop into an embryo because the egg reacts with the transferred nucleus. The embryo will become genetically identical to the patient. The embryo will then form a blastocyst which has the potential to form/become any cell in the body. SCNT is used for cloning because somatic cells can be easily acquired and cultured in the lab. This process can either add or delete specific genomes of farm animals. A key point to remember is that cloning is achieved when the oocyte maintains its normal functions and instead of using sperm and egg genomes to replicate, the donor's somatic cell nucleus is inserted into the oocyte. The oocyte will react to the somatic cell nucleus, the same way it would to a sperm cell's nucleus. The process of cloning a particular farm animal using SCNT is relatively the same for all animals. The first step is to collect the somatic cells from the animal that will be cloned. The somatic cells could be used immediately or stored in the laboratory for later use. The hardest part of SCNT is removing maternal DNA from an oocyte at metaphase II. Once this has been done, the somatic nucleus can be inserted into an egg cytoplasm. This creates a one-cell embryo. The grouped somatic cell and egg cytoplasm are then introduced to an electrical current. This energy will hopefully allow the cloned embryo to begin development. The successfully developed embryos are then placed in surrogate recipients, such as a cow or sheep in the case of farm animals. SCNT is seen as a good method for producing agricultural animals for food consumption. It has been used successfully to clone sheep, cattle, goats, and pigs. Another benefit is that SCNT is seen as a solution for cloning endangered species that are on the verge of going extinct. However, stresses placed on both the egg cell and the introduced nucleus can be enormous, which led to a high loss in resulting cells in early research. 
For example, the cloned sheep Dolly was born after 277 eggs were used for SCNT, which created 29 viable embryos. Only three of these embryos survived until birth, and only one survived to adulthood. As the procedure could not be automated, and had to be performed manually under a microscope, SCNT was very resource intensive. The biochemistry involved in reprogramming the differentiated somatic cell nucleus and activating the recipient egg was also far from being well understood. However, by 2014 researchers were reporting cloning success rates of seven to eight out of ten and in 2016, a Korean Company Sooam Biotech was reported to be producing 500 cloned embryos per day. In SCNT, not all of the donor cell's genetic information is transferred, as the donor cell's mitochondria that contain their own mitochondrial DNA are left behind. The resulting hybrid cells retain those mitochondrial structures which originally belonged to the egg. As a consequence, clones such as Dolly that are born from SCNT are not perfect copies of the donor of the nucleus. Organism cloning Organism cloning (also called reproductive cloning) refers to the procedure of creating a new multicellular organism, genetically identical to another. In essence this form of cloning is an asexual method of reproduction, where fertilization or inter-gamete contact does not take place. Asexual reproduction is a naturally occurring phenomenon in many species, including most plants and some insects. Scientists have made some major achievements with cloning, including the asexual reproduction of sheep and cows. There is a lot of ethical debate over whether or not cloning should be used. However, cloning, or asexual propagation, has been common practice in the horticultural world for hundreds of years. Horticultural The term clone is used in horticulture to refer to descendants of a single plant which were produced by vegetative reproduction or apomixis. Many horticultural plant cultivars are clones, having been derived from a single individual, multiplied by some process other than sexual reproduction. As an example, some European cultivars of grapes represent clones that have been propagated for over two millennia. Other examples are potato and banana. Grafting can be regarded as cloning, since all the shoots and branches coming from the graft are genetically a clone of a single individual, but this particular kind of cloning has not come under ethical scrutiny and is generally treated as an entirely different kind of operation. Many trees, shrubs, vines, ferns and other herbaceous perennials form clonal colonies naturally. Parts of an individual plant may become detached by fragmentation and grow on to become separate clonal individuals. A common example is in the vegetative reproduction of moss and liverwort gametophyte clones by means of gemmae. Some vascular plants e.g. dandelion and certain viviparous grasses also form seeds asexually, termed apomixis, resulting in clonal populations of genetically identical individuals. Parthenogenesis Clonal derivation exists in nature in some animal species and is referred to as parthenogenesis (reproduction of an organism by itself without a mate). This is an asexual form of reproduction that is only found in females of some insects, crustaceans, nematodes, fish (for example the hammerhead shark), Cape honeybees, and lizards including the Komodo dragon and several whiptails. The growth and development occurs without fertilization by a male. 
In plants, parthenogenesis means the development of an embryo from an unfertilized egg cell, and is a component process of apomixis. In species that use the XY sex-determination system, the offspring will always be female. An example is the little fire ant (Wasmannia auropunctata), which is native to Central and South America but has spread throughout many tropical environments. Artificial cloning of organisms Artificial cloning of organisms may also be called reproductive cloning. First steps Hans Spemann, a German embryologist, was awarded a Nobel Prize in Physiology or Medicine in 1935 for his discovery of the effect now known as embryonic induction, exercised by various parts of the embryo, that directs the development of groups of cells into particular tissues and organs. In 1924 he and his student, Hilde Mangold, were the first to perform somatic-cell nuclear transfer using amphibian embryos – one of the first steps towards cloning. Methods Reproductive cloning generally uses "somatic cell nuclear transfer" (SCNT) to create animals that are genetically identical. This process entails the transfer of a nucleus from a donor adult cell (somatic cell) to an egg from which the nucleus has been removed, or to a cell from a blastocyst from which the nucleus has been removed. If the egg begins to divide normally it is transferred into the uterus of the surrogate mother. Such clones are not strictly identical since the somatic cells may contain mutations in their nuclear DNA. Additionally, the mitochondria in the cytoplasm also contain DNA and during SCNT this mitochondrial DNA is wholly from the cytoplasmic donor's egg, thus the mitochondrial genome is not the same as that of the nucleus donor cell from which it was produced. This may have important implications for cross-species nuclear transfer in which nuclear-mitochondrial incompatibilities may lead to death. Artificial embryo splitting or embryo twinning, a technique that creates monozygotic twins from a single embryo, is not considered in the same fashion as other methods of cloning. During that procedure, a donor embryo is split into two distinct embryos, which can then be transferred via embryo transfer. It is optimally performed at the 6- to 8-cell stage, where it can be used as an expansion of IVF to increase the number of available embryos. If both embryos are successful, it gives rise to monozygotic (identical) twins. Dolly the sheep Dolly, a Finn-Dorset ewe, was the first mammal to have been successfully cloned from an adult somatic cell. Dolly was formed by taking a cell from the udder of her 6-year-old biological mother. Dolly's embryo was created by taking the cell and inserting it into a sheep ovum. It took 435 attempts before an embryo was successful. The embryo was then placed inside a female sheep that went through a normal pregnancy. She was cloned at the Roslin Institute in Scotland by British scientists Sir Ian Wilmut and Keith Campbell and lived there from her birth in 1996 until her death in 2003 when she was six. She was born on 5 July 1996 but not announced to the world until 22 February 1997. Her stuffed remains were placed at Edinburgh's Royal Museum, part of the National Museums of Scotland. Dolly was publicly significant because the effort showed that genetic material from a specific adult cell, designed to express only a distinct subset of its genes, can be redesigned to grow an entirely new organism. 
Before this demonstration, it had been shown by John Gurdon that nuclei from differentiated cells could give rise to an entire organism after transplantation into an enucleated egg. However, this concept was not yet demonstrated in a mammalian system. The first mammalian cloning (resulting in Dolly) had a success rate of 29 embryos per 277 fertilized eggs, which produced three lambs at birth, one of which lived. In a bovine experiment involving 70 cloned calves, one-third of the calves died quite young. The first successfully cloned horse, Prometea, took 814 attempts. Notably, although the first clones were frogs, no adult cloned frog has yet been produced from a somatic adult nucleus donor cell. There were early claims that Dolly had pathologies resembling accelerated aging. Scientists speculated that Dolly's death in 2003 was related to the shortening of telomeres, DNA-protein complexes that protect the end of linear chromosomes. However, other researchers, including Ian Wilmut who led the team that successfully cloned Dolly, argue that Dolly's early death due to respiratory infection was unrelated to problems with the cloning process. This idea that the nuclei have not irreversibly aged was shown in 2013 to be true for mice. Dolly was named after performer Dolly Parton because the cells cloned to make her were from a mammary gland cell, and Parton is known for her ample cleavage. Species cloned and applications The modern cloning techniques involving nuclear transfer have been successfully performed on several species. Notable experiments include: Tadpole: (1952) Robert Briggs and Thomas J. King successfully cloned northern leopard frogs: thirty-five complete embryos and twenty-seven tadpoles from one-hundred and four successful nuclear transfers. Carp: (1963) In China, embryologist Tong Dizhou produced the world's first cloned fish by inserting the DNA from a cell of a male carp into an egg from a female carp. He published the findings in a Chinese science journal. Zebrafish: (1981) George Streisinger produced the first cloned vertebrate. Sheep: (1984) Steen Willadsen produced the first cloned mammal from early embryonic cells. In June 1995, the Roslin Institute cloned Megan and Morag from differentiated embryonic cells. In July 1996, PPL Therapeutics and the Roslin Institute cloned Dolly the sheep from a somatic cell. Mouse: (1986) A mouse was successfully cloned from an early embryonic cell. In 1987, Soviet scientists Levon Chaylakhyan, Veprencev, Sviridova, and Nikitin cloned Masha, a mouse. Rhesus monkey: (October 1999) The Oregon National Primate Research Center cloned Tetra from embryo splitting and not nuclear transfer: a process more akin to artificial formation of twins. Pig: (March 2000) PPL Therapeutics cloned five piglets. By 2014, BGI in China was producing 500 cloned pigs a year to test new medicines. Gaur: (2001) was the first endangered species cloned. Cattle: Alpha and Beta (males, 2001) and (2005), Brazil In 2023, Chinese scientists reported the cloning of three supercows with a milk productivity "nearly 1.7 times the amount of milk an average cow in the United States produced in 2021" and a plan for 1,000 of such super cows in the near-term. According to a news report "[i]n many countries, including the United States, farmers breed clones with conventional animals to add desirable traits, such as high milk production or disease resistance, into the gene pool". 
Cat: CopyCat "CC" (female, late 2001), Little Nicky, 2004, was the first cat cloned for commercial reasons Rat: Ralph, the first cloned rat (2003) Mule: Idaho Gem, a john mule born 4 May 2003, was the first horse-family clone. Horse: Prometea, a Haflinger female born 28 May 2003, was the first horse clone. Przewalksi's Horse: An ongoing cloning program by the San Diego Zoo Wildlife Alliance and Revive & Restore attempts to reintroduce genetic diversity to this endangered species. Kurt, the first cloned Przewalski's horse, was born in 2020. He was cloned from the skin tissue of a stallion which was preserved in 1980. "Trey" was born in 2023. He was cloned from the same stallion's tissue as Kurt. Dog: Snuppy, a male Afghan hound was the first cloned dog (2005). In 2017, the world's first gene-editing clone dog, Apple, was created by Sinogene Biotechnology. Sooam Biotech, South Korea, was reported in 2015 to have cloned 700 dogs to date for their owners, including two Yakutian Laika hunting dogs, which are seriously endangered due to crossbreeding. Cloning of super sniffer dogs was reported in 2011, four years afterwards when the dogs started working. Cloning of a successful rescue dog was also reported in 2009 and of a similar police dog in 2019. Cancer-sniffing dogs have also been cloned. A review concluded that "qualified elite working dogs can be produced by cloning a working dog that exhibits both an appropriate temperament and good health." Wolf: Snuwolf and Snuwolffy, the first two cloned female wolves (2005). Water buffalo: Samrupa was the first cloned water buffalo. It was born on 6 February 2009, at India's Karnal National Dairy Research Institute but died five days later due to lung infection. Pyrenean ibex: (2009) was the first extinct animal to be cloned back to life; the clone lived for seven minutes before dying of lung defects. The extinct Pyrenean ibex is a sub-species of the still-thriving Spanish ibex. Camel: (2009) Injaz, was the first cloned camel. Pashmina goat: (2012) Noori, is the first cloned pashmina goat. Scientists at the faculty of veterinary sciences and animal husbandry of Sher-e-Kashmir University of Agricultural Sciences and Technology of Kashmir successfully cloned the first Pashmina goat (Noori) using the advanced reproductive techniques under the leadership of Riaz Ahmad Shah. Goat: (2001) Scientists of Northwest A&F University successfully cloned the first goat which use the adult female cell. Gastric brooding frog: (2013) The gastric brooding frog, Rheobatrachus silus, thought to have been extinct since 1983 was cloned in Australia, although the embryos died after a few days. Macaque monkey: (2017) First successful cloning of a primate species using nuclear transfer, with the birth of two live clones named Zhong Zhong and Hua Hua. Conducted in China in 2017, and reported in January 2018. In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, using the same cloning technique that was used with Zhong Zhong and Hua Hua and Dolly the sheep, and the gene-editing Crispr-Cas9 technique allegedly used by He Jiankui in creating the first ever gene-modified human babies Lulu and Nana. The monkey clones were made to study several medical diseases. Black-footed ferret: (2020) A team of scientists cloned a female named Willa, who died in the mid-1980s and left no living descendants. Her clone, a female named Elizabeth Ann, was born on 10 December. 
Scientists hope that the contribution of this individual will alleviate the effects of inbreeding and help black-footed ferrets better cope with plague. Experts estimate that this female's genome contains three times as much genetic diversity as any of the modern black-footed ferrets. First artificial parthenogenesis in mammals: (2022) Viable mouse offspring were born from unfertilized eggs via targeted DNA methylation editing of seven imprinting control regions. Human cloning Human cloning is the creation of a genetically identical copy of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissues. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. These ethical concerns have prompted several nations to pass legislation regarding human cloning and its legality. At present, scientists have no intention of trying to clone people and they believe their results should spark a wider discussion about the laws and regulations the world needs to regulate cloning. Two commonly discussed types of theoretical human cloning are therapeutic cloning and reproductive cloning. Therapeutic cloning would involve cloning cells from a human for use in medicine and transplants, and is an active area of research, but is not in medical practice anywhere in the world. Two common methods of therapeutic cloning that are being researched are somatic-cell nuclear transfer and, more recently, pluripotent stem cell induction. Reproductive cloning would involve making an entire cloned human, instead of just specific cells or tissues. Ethical issues of cloning There are a variety of ethical positions regarding the possibilities of cloning, especially human cloning. While many of these views are religious in origin, the questions raised by cloning are faced by secular perspectives as well. Perspectives on human cloning are theoretical, as human therapeutic and reproductive cloning are not commercially used; animals are currently cloned in laboratories and in livestock production. Advocates support development of therapeutic cloning to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, to avoid the need for immunosuppressive drugs, and to stave off the effects of aging. Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology. Opponents of cloning have concerns that the technology is not yet developed enough to be safe and that it could be prone to abuse (leading to the generation of humans from whom organs and tissues would be harvested), as well as concerns about how cloned individuals could integrate with families and with society at large. Cloning humans could lead to serious violations of human rights. Religious groups are divided, with some opposing the technology as usurping "God's place" and, to the extent embryos are used, destroying a human life; others support therapeutic cloning's potential life-saving benefits. There is at least one religion, Raëlism, in which cloning plays a major role. Contemporary work on this topic is concerned with the ethics, adequate regulation and issues of any cloning carried out by humans, not potentially by extraterrestrials (including in the future), and largely also not replication – also described as mind cloning – of potential whole brain emulations. 
Cloning of animals is opposed by animal rights groups due to the number of cloned animals that suffer from malformations before they die, and while food from cloned animals has been approved as safe by the US FDA, its use is opposed by groups concerned about food safety. In practical terms, the inclusion of "licensing requirements for embryo research projects and fertility clinics, restrictions on the commodification of eggs and sperm, and measures to prevent proprietary interests from monopolizing access to stem cell lines" in international cloning regulations has been proposed, although, for example, effective oversight mechanisms or cloning requirements have not been described. Cloning extinct and endangered species Cloning, or more precisely the reconstruction of functional DNA from extinct species, has for decades been a dream. Possible implications of this were dramatized in the 1984 novel Carnosaur and the 1990 novel Jurassic Park. The best current cloning techniques have an average success rate of 9.4 percent (and as high as 25 percent) when working with familiar species such as mice, while cloning wild animals is usually less than 1 percent successful. Conservation cloning Several tissue banks have come into existence, including the "Frozen zoo" at the San Diego Zoo, to store frozen tissue from the world's rarest and most endangered species. This is also referred to as "Conservation cloning". In 2021, engineers proposed a "lunar ark" – storing millions of seed, spore, sperm and egg samples from Earth's contemporary species in a network of lava tubes on the Moon as a genetic backup. Similar proposals have been made since at least 2008. These also include sending customers' human DNA, and a proposal by Avi Loeb et al. for "a lunar backup record of humanity" that includes genetic information. Scientists at the University of Newcastle and University of New South Wales announced in March 2013 that the very recently extinct gastric-brooding frog would be the subject of a cloning attempt to resurrect the species. Many such "De-extinction" projects are being championed by the non-profit Revive & Restore. De-extinction One of the most anticipated targets for cloning was once the woolly mammoth, but attempts to extract DNA from frozen mammoths have been unsuccessful, though a joint Russo-Japanese team is currently working toward this goal. In January 2011, it was reported by Yomiuri Shimbun that a team of scientists headed by Akira Iritani of Kyoto University had built upon research by Dr. Wakayama, saying that they would extract DNA from a mammoth carcass that had been preserved in a Russian laboratory and insert it into the egg cells of an Asian elephant in hopes of producing a mammoth embryo. The researchers said they hoped to produce a baby mammoth within six years. The challenges are formidable. Extensively degraded DNA that may be suitable for sequencing may not be suitable for cloning; it would have to be synthetically reconstituted. In any case, with currently available technology, DNA alone is not suitable for mammalian cloning; intact viable cell nuclei are required. Patching pieces of reconstituted mammoth DNA into an Asian elephant cell nucleus would result in an elephant-mammoth hybrid rather than a true mammoth. Moreover, true de-extinction of the woolly mammoth species would require a breeding population, which would require cloning of multiple genetically distinct but reproductively compatible individuals, multiplying both the amount of work and the uncertainties involved in the project.
There are potentially other post-cloning problems associated with the survival of a reconstructed mammoth, such as the requirement of ruminants for specific symbiotic microbiota in their stomachs for digestion. In 2022, scientists demonstrated major limitations of, and the scale of the challenge facing, gene-editing-based de-extinction, suggesting that resources spent on more comprehensive de-extinction projects, such as that of the woolly mammoth, may currently not be well allocated and that such projects remain substantially limited. Their analyses "show that even when the extremely high-quality Norway brown rat (R. norvegicus) is used as a reference, nearly 5% of the genome sequence is unrecoverable, with 1,661 genes recovered at lower than 90% completeness, and 26 completely absent", complicated further by the finding that "distribution of regions affected is not random, but for example, if 90% completeness is used as the cutoff, genes related to immune response and olfaction are excessively affected", as a result of which "a reconstructed Christmas Island rat would lack attributes likely critical to surviving in its natural or natural-like environment". In a 2021 online session of the Russian Geographical Society, Russia's defense minister Sergei Shoigu mentioned using the DNA of 3,000-year-old Scythian warriors to potentially bring them back to life. News reports described the idea as absurd, at least at this point, and noted that Scythians likely were not skilled warriors by default. The idea of cloning Neanderthals or bringing them back to life in general is controversial, but some scientists have stated that it may be possible in the future and have outlined several issues or problems with doing so, as well as broad rationales for it. Unsuccessful attempts In 2001, a cow named Bessie gave birth to a cloned Asian gaur, an endangered species, but the calf died after two days. In 2003, a banteng was successfully cloned, followed by three African wildcats from a thawed frozen embryo. These successes provided hope that similar techniques (using surrogate mothers of another species) might be used to clone extinct species. Anticipating this possibility, tissue samples from the last bucardo (Pyrenean ibex) were frozen in liquid nitrogen immediately after it died in 2000. Researchers are also considering cloning endangered species such as the giant panda and cheetah. In 2002, geneticists at the Australian Museum announced that they had replicated DNA of the thylacine (Tasmanian tiger), at the time extinct for about 65 years, using polymerase chain reaction. However, on 15 February 2005 the museum announced that it was stopping the project after tests showed the specimens' DNA had been too badly degraded by the (ethanol) preservative. On 15 May 2005 it was announced that the thylacine project would be revived, with new participation from researchers in New South Wales and Victoria. In 2003, for the first time, an extinct animal, the Pyrenean ibex mentioned above, was cloned, at the Centre of Food Technology and Research of Aragon, using the preserved frozen cell nucleus of the skin samples from 2001 and domestic goat egg-cells. The ibex died shortly after birth due to physical defects in its lungs. Lifespan After an eight-year project involving the use of a pioneering cloning technique, Japanese researchers created 25 generations of healthy cloned mice with normal lifespans, demonstrating that clones are not intrinsically shorter-lived than naturally born animals.
Other sources have noted that the offspring of clones tend to be healthier than the original clones and indistinguishable from animals produced naturally. Some posited that Dolly the sheep may have aged more quickly than naturally born animals, as she died relatively early for a sheep at the age of six. Ultimately, her death was attributed to a respiratory illness, and the "advanced aging" theory is disputed. A 2016 study indicated that once cloned animals survive the first month or two of life they are generally healthy. However, early pregnancy loss and neonatal losses are still greater with cloning than with natural conception or assisted reproduction (IVF). Current research is attempting to overcome these problems. In popular culture Discussion of cloning in the popular media often presents the subject negatively. In an article in the 8 November 1993 issue of Time, cloning was portrayed in a negative way, modifying Michelangelo's Creation of Adam to depict Adam with five identical hands. Newsweek's 10 March 1997 issue also critiqued the ethics of human cloning, and included a graphic depicting identical babies in beakers. The concept of cloning, particularly human cloning, has featured in a wide variety of science fiction works. An early fictional depiction of cloning is Bokanovsky's Process, which features in Aldous Huxley's 1931 dystopian novel Brave New World. The process is applied to fertilized human eggs in vitro, causing them to split into identical genetic copies of the original. Following renewed interest in cloning in the 1950s, the subject was explored further in works such as Poul Anderson's 1953 story UN-Man, which describes a technology called "exogenesis", and Gordon Rattray Taylor's book The Biological Time Bomb, which popularised the term "cloning" in 1963. Cloning is a recurring theme in a number of contemporary science fiction films, ranging from action films such as Anna to the Infinite Power, The Boys from Brazil, Jurassic Park (1993), Alien Resurrection (1997), The 6th Day (2000), Resident Evil (2002), Star Wars: Episode II – Attack of the Clones (2002), The Island (2005), Tales of the Abyss (2006), and Moon (2009) to comedies such as Woody Allen's 1973 film Sleeper. The process of cloning is represented variously in fiction. Many works depict the artificial creation of humans by a method of growing cells from a tissue or DNA sample; the replication may be instantaneous, or take place through slow growth of human embryos in artificial wombs. In the long-running British television series Doctor Who, the Fourth Doctor and his companion Leela were cloned in a matter of seconds from DNA samples ("The Invisible Enemy", 1977) and then – in an apparent homage to the 1966 film Fantastic Voyage – shrunk to microscopic size to enter the Doctor's body to combat an alien virus. The clones in this story are short-lived, and can only survive a matter of minutes before they expire. Science fiction films such as The Matrix and Star Wars: Episode II – Attack of the Clones have featured scenes of human foetuses being cultured on an industrial scale in mechanical tanks. Cloning humans from body parts is also a common theme in science fiction. Cloning features strongly among the science fiction conventions parodied in Woody Allen's Sleeper, the plot of which centres around an attempt to clone an assassinated dictator from his disembodied nose.
In the 2008 Doctor Who story "Journey's End", a duplicate version of the Tenth Doctor spontaneously grows from his severed hand, which had been cut off in a sword fight during an earlier episode. After the death of her beloved 14-year-old Coton de Tulear named Samantha in late 2017, Barbra Streisand announced that she had cloned the dog, and was now "waiting for [the two cloned pups] to get older so [she] can see if they have [Samantha's] brown eyes and her seriousness". The operation cost $50,000 through the pet cloning company ViaGen. In films such as Roger Spottiswoode's 2000 The 6th Day, which makes use of the trope of a "vast clandestine laboratory ... filled with row upon row of 'blank' human bodies kept floating in tanks of nutrient liquid or in suspended animation", the aim is clearly to incite fear. In Michael Clark's view, the biotechnology is typically "given fantastic but visually arresting forms" while the science is either relegated to the background or fictionalised to suit a young audience. Genetic engineering methods are weakly represented in film; Clark, writing for The Wellcome Trust, calls the portrayal of genetic engineering and biotechnology "seriously distorted". Cloning and identity Science fiction has used cloning, most commonly and specifically human cloning, to raise questions of identity. A Number is a 2002 play by English playwright Caryl Churchill which addresses the subject of human cloning and identity, especially nature and nurture. The story, set in the near future, is structured around the conflict between a father (Salter) and his sons (Bernard 1, Bernard 2, and Michael Black) – two of whom are clones of the first one. A Number was adapted by Caryl Churchill for television, in a co-production between the BBC and HBO Films. In 2012, a Japanese television series named "Bunshin" was created. The story's main character, Mariko, is a woman studying child welfare in Hokkaido. She grew up always doubtful about the love from her mother, who looked nothing like her and who died nine years before. One day, she finds some of her mother's belongings at a relative's house, and heads to Tokyo to seek out the truth behind her birth. She later discovers that she was a clone. In the 2013 television series Orphan Black, cloning is depicted as part of a scientific study on the behavioral adaptation of the clones. In a similar vein, the book The Double by Nobel Prize winner José Saramago explores the emotional experience of a man who discovers that he is a clone. Cloning as resurrection Cloning has been used in fiction as a way of recreating historical figures. In the 1976 Ira Levin novel The Boys from Brazil and its 1978 film adaptation, Josef Mengele uses cloning to create copies of Adolf Hitler. In Michael Crichton's 1990 novel Jurassic Park, which spawned a series of Jurassic Park feature films, the bioengineering company InGen develops a technique to resurrect extinct species of dinosaurs by creating cloned creatures using DNA extracted from fossils. The cloned dinosaurs are used to populate the Jurassic Park wildlife park for the entertainment of visitors. The scheme goes disastrously wrong when the dinosaurs escape their enclosures. Despite being selectively cloned as females to prevent them from breeding, the dinosaurs develop the ability to reproduce through parthenogenesis. Cloning for warfare The use of cloning for military purposes has also been explored in several fictional works.
In Doctor Who, an alien race of armour-clad, warlike beings called Sontarans was introduced in the 1973 serial "The Time Warrior". Sontarans are depicted as squat, bald creatures who have been genetically engineered for combat. Their weak spot is a "probic vent", a small socket at the back of their neck which is associated with the cloning process. The concept of cloned soldiers being bred for combat was revisited in "The Doctor's Daughter" (2008), when the Doctor's DNA is used to create a female warrior called Jenny. The 1977 film Star Wars was set against the backdrop of a historical conflict called the Clone Wars. The events of this war were not fully explored until the prequel films Attack of the Clones (2002) and Revenge of the Sith (2005), which depict a space war waged by a massive army of heavily armoured clone troopers that leads to the foundation of the Galactic Empire. Cloned soldiers are "manufactured" on an industrial scale, genetically conditioned for obedience and combat effectiveness. It is also revealed that the popular character Boba Fett originated as a clone of Jango Fett, a mercenary who served as the genetic template for the clone troopers. Cloning for exploitation A recurring sub-theme of cloning fiction is the use of clones as a supply of organs for transplantation. The 2005 Kazuo Ishiguro novel Never Let Me Go and the 2010 film adaptation are set in an alternate history in which cloned humans are created for the sole purpose of providing organ donations to naturally born humans, despite the fact that they are fully sentient and self-aware. The 2005 film The Island revolves around a similar plot, with the exception that the clones are unaware of the reason for their existence. The exploitation of human clones for dangerous and undesirable work was examined in the 2009 British science fiction film Moon. In the futuristic novel Cloud Atlas and subsequent film, one of the story lines focuses on a genetically engineered fabricant clone named Sonmi~451, one of millions raised in an artificial "wombtank", destined to serve from birth. She is one of thousands created for manual and emotional labor; Sonmi herself works as a server in a restaurant. She later discovers that the sole source of food for clones, called 'Soap', is manufactured from the clones themselves. In the film Us, at some point prior to the 1980s, the US Government creates clones of every citizen of the United States with the intention of using them to control their original counterparts, akin to voodoo dolls. This fails, as the government is able to copy bodies but unable to copy the souls of those it cloned. The project is abandoned and the clones are trapped exactly mirroring their above-ground counterparts' actions for generations. In the present day, the clones launch a surprise attack and manage to complete a mass genocide of their unaware counterparts. See also Frozen Ark The President's Council on Bioethics Notes References Further reading Guo, Owen. "World's Biggest Animal Cloning Center Set for '16 in a Skeptical China". The New York Times, 26 November 2015. Lerner, K. Lee. "Animal cloning". The Gale Encyclopedia of Science, edited by K. Lee Lerner and Brenda Wilmoth Lerner, 5th ed., Gale, 2014. Science in Context. Dutchen, Stephanie (11 July 2018). "Rise of the Clones". Harvard Medical School. External links Cloning Fact Sheet from Human Genome Project Information website.
'Cloning' Freeview video by the Vega Science Trust and the BBC/OU Cloning in Focus, an accessible and comprehensive look at cloning research from the University of Utah's Genetic Science Learning Center Click and Clone. Try it yourself in the virtual mouse cloning laboratory, from the University of Utah's Genetic Science Learning Center "Cloning Addendum: A statement on the cloning report issued by the President's Council on Bioethics". National Review, 15 July 2002 Molecular biology Cryobiology Applied genetics Asexual reproduction Selection
Cloning
[ "Physics", "Chemistry", "Engineering", "Biology" ]
9,477
[ "Evolutionary processes", "Physical phenomena", "Phase transitions", "Behavior", "Selection", "Reproduction", "Cloning", "Genetic engineering", "Cryobiology", "Asexual reproduction", "Molecular biology", "Biochemistry" ]
6,911
https://en.wikipedia.org/wiki/Cellulose
Cellulose is an organic compound with the formula (C6H10O5)n, a polysaccharide consisting of a linear chain of several hundred to many thousands of β(1→4) linked D-glucose units. Cellulose is an important structural component of the primary cell wall of green plants, many forms of algae and the oomycetes. Some species of bacteria secrete it to form biofilms. Cellulose is the most abundant organic polymer on Earth. The cellulose content of cotton fibre is 90%, that of wood is 40–50%, and that of dried hemp is approximately 57%. Cellulose is mainly used to produce paperboard and paper. Smaller quantities are converted into a wide variety of derivative products such as cellophane and rayon. Conversion of cellulose from energy crops into biofuels such as cellulosic ethanol is under development as a renewable fuel source. Cellulose for industrial use is mainly obtained from wood pulp and cotton. Cellulose is also greatly affected by direct interaction with several organic liquids. Some animals, particularly ruminants and termites, can digest cellulose with the help of symbiotic micro-organisms that live in their guts, such as Trichonympha. In human nutrition, cellulose is a non-digestible constituent of insoluble dietary fiber, acting as a hydrophilic bulking agent for feces and potentially aiding in defecation. History Cellulose was discovered in 1838 by the French chemist Anselme Payen, who isolated it from plant matter and determined its chemical formula. Cellulose was used to produce the first successful thermoplastic polymer, celluloid, by Hyatt Manufacturing Company in 1870. Production of rayon ("artificial silk") from cellulose began in the 1890s and cellophane was invented in 1912. Hermann Staudinger determined the polymer structure of cellulose in 1920. The compound was first chemically synthesized (without the use of any biologically derived enzymes) in 1992, by Kobayashi and Shoda. Structure and properties Cellulose has no taste, is odorless, is hydrophilic with a contact angle of 20–30 degrees, is insoluble in water and most organic solvents, is chiral and is biodegradable. It was shown to melt at 467 °C in pulse tests made by Dauenhauer et al. (2016). It can be broken down chemically into its glucose units by treating it with concentrated mineral acids at high temperature. Cellulose is derived from D-glucose units, which condense through β(1→4)-glycosidic bonds. This linkage motif contrasts with that for α(1→4)-glycosidic bonds present in starch and glycogen. Cellulose is a straight chain polymer.
Unlike starch, cellulose exhibits no coiling or branching; the molecule adopts an extended and rather stiff rod-like conformation, aided by the equatorial conformation of the glucose residues. The multiple hydroxyl groups on the glucose from one chain form hydrogen bonds with oxygen atoms on the same or on a neighbour chain, holding the chains firmly together side-by-side and forming microfibrils with high tensile strength. This confers tensile strength in cell walls where cellulose microfibrils are meshed into a polysaccharide matrix. The high tensile strength of plant stems and of tree wood also arises from the arrangement of cellulose fibers intimately distributed into the lignin matrix. The mechanical role of cellulose fibers in the wood matrix, responsible for its strong structural resistance, can be loosely compared to that of the reinforcement bars in concrete, with lignin playing the role of the hardened cement paste acting as the "glue" between the cellulose fibres. Mechanical properties of cellulose in the primary plant cell wall are correlated with growth and expansion of plant cells. Live fluorescence microscopy techniques are promising for investigating the role of cellulose in growing plant cells. Compared to starch, cellulose is also much more crystalline. Whereas starch undergoes a crystalline to amorphous transition when heated beyond 60–70 °C in water (as in cooking), cellulose requires a temperature of 320 °C and pressure of 25 MPa to become amorphous in water. Several types of cellulose are known. These forms are distinguished according to the location of hydrogen bonds between and within strands. Natural cellulose is cellulose I, with structures Iα and Iβ. Cellulose produced by bacteria and algae is enriched in Iα while cellulose of higher plants consists mainly of Iβ. Cellulose in regenerated cellulose fibers is cellulose II. The conversion of cellulose I to cellulose II is irreversible, suggesting that cellulose I is metastable and cellulose II is stable. With various chemical treatments it is possible to produce the structures cellulose III and cellulose IV. Many properties of cellulose depend on its chain length or degree of polymerization, the number of glucose units that make up one polymer molecule. Cellulose from wood pulp has typical chain lengths between 300 and 1700 units; cotton and other plant fibers as well as bacterial cellulose have chain lengths ranging from 800 to 10,000 units. Molecules with very small chain length resulting from the breakdown of cellulose are known as cellodextrins; in contrast to long-chain cellulose, cellodextrins are typically soluble in water and organic solvents. The chemical formula of cellulose is (C6H10O5)n where n is the degree of polymerization and represents the number of glucose groups. Plant-derived cellulose is usually found in a mixture with hemicellulose, lignin, pectin and other substances, while bacterial cellulose is quite pure, has a much higher water content and higher tensile strength due to higher chain lengths. Cellulose consists of fibrils with crystalline and amorphous regions. These cellulose fibrils may be individualized by mechanical treatment of cellulose pulp, often assisted by chemical oxidation or enzymatic treatment, yielding semi-flexible cellulose nanofibrils generally 200 nm to 1 μm in length depending on the treatment intensity. Cellulose pulp may also be treated with strong acid to hydrolyze the amorphous fibril regions, thereby producing short rigid cellulose nanocrystals a few hundred nanometres in length.
These nanocelluloses are of high technological interest due to their self-assembly into cholesteric liquid crystals, production of hydrogels or aerogels, use in nanocomposites with superior thermal and mechanical properties, and use as Pickering stabilizers for emulsions. Processing Biosynthesis In plants, cellulose is synthesized at the plasma membrane by rosette terminal complexes (RTCs). The RTCs are hexameric protein structures, approximately 25 nm in diameter, that contain the cellulose synthase enzymes that synthesise the individual cellulose chains. Each RTC floats in the cell's plasma membrane and "spins" a microfibril into the cell wall. RTCs contain at least three different cellulose synthases, encoded by CesA (Ces is short for "cellulose synthase") genes, in an unknown stoichiometry. Separate sets of CesA genes are involved in primary and secondary cell wall biosynthesis. There are known to be about seven subfamilies in the plant CesA superfamily, some of which include the more cryptic, tentatively-named Csl (cellulose synthase-like) enzymes. These cellulose synthases use UDP-glucose to form the β(1→4)-linked cellulose. Bacterial cellulose is produced using the same family of proteins, although the gene is called BcsA for "bacterial cellulose synthase" or CelA for "cellulose" in many instances. In fact, plants acquired CesA from the endosymbiosis event that produced the chloroplast. All known cellulose synthases belong to glucosyltransferase family 2 (GT2). Cellulose synthesis requires chain initiation and elongation, and the two processes are separate. Cellulose synthase (CesA) initiates cellulose polymerization using a steroid primer, sitosterol-beta-glucoside, and UDP-glucose. It then utilises UDP-D-glucose precursors to elongate the growing cellulose chain. A cellulase may function to cleave the primer from the mature chain. Cellulose is also synthesised by tunicate animals, particularly in the tests of ascidians (where the cellulose was historically termed "tunicine" (tunicin)). Breakdown (cellulolysis) Cellulolysis is the process of breaking down cellulose into smaller polysaccharides called cellodextrins or completely into glucose units; this is a hydrolysis reaction. Because cellulose molecules bind strongly to each other, cellulolysis is relatively difficult compared to the breakdown of other polysaccharides. However, this process can be significantly intensified in a proper solvent, e.g. in an ionic liquid. Most mammals have limited ability to digest dietary fibre such as cellulose. Some ruminants like cows and sheep contain certain symbiotic anaerobic bacteria (such as Cellulomonas and Ruminococcus spp.) in the flora of the rumen, and these bacteria produce enzymes called cellulases that hydrolyze cellulose. The breakdown products are then used by the bacteria for proliferation. The bacterial mass is later digested by the ruminant in its digestive system (stomach and small intestine). Horses use cellulose in their diet by fermentation in their hindgut. Some termites contain in their hindguts certain flagellate protozoa producing such enzymes, whereas others contain bacteria or may produce cellulase. The enzymes used to cleave the glycosidic linkage in cellulose are glycoside hydrolases including endo-acting cellulases and exo-acting glucosidases. Such enzymes are usually secreted as part of multienzyme complexes that may include dockerins and carbohydrate-binding modules.
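The overall stoichiometry of complete cellulolysis described above can be summarised as (C6H10O5)n + n H2O → n C6H12O6. The short sketch below is an illustrative back-of-envelope check, an editorial example using standard atomic masses rather than a calculation taken from the sources discussed here; it verifies the mass balance and estimates the theoretical glucose yield per gram of cellulose, neglecting the chain-end groups for large n.

```python
# Illustrative check of the cellulolysis stoichiometry (C6H10O5)n + n H2O -> n C6H12O6,
# using standard atomic masses; chain-end groups are neglected for large n.
M_C, M_H, M_O = 12.011, 1.008, 15.999

repeat_unit = 6 * M_C + 10 * M_H + 5 * M_O   # anhydroglucose repeat unit, ~162.1 g/mol
water = 2 * M_H + M_O                        # ~18.0 g/mol
glucose = 6 * M_C + 12 * M_H + 6 * M_O       # ~180.2 g/mol

assert abs((repeat_unit + water) - glucose) < 1e-9  # mass balance per repeat unit

yield_per_gram = glucose / repeat_unit       # theoretical glucose yield per gram of cellulose
print(f"Repeat unit: {repeat_unit:.1f} g/mol, glucose: {glucose:.1f} g/mol")
print(f"Theoretical yield: ~{yield_per_gram:.2f} g glucose per g cellulose")  # ~1.11
```

For large n the result is insensitive to the neglected end groups; the roughly 11% mass gain simply reflects the water added during hydrolysis.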
Breakdown (thermolysis) At temperatures above 350 °C, cellulose undergoes thermolysis (also called 'pyrolysis'), decomposing into solid char, vapors, aerosols, and gases such as carbon dioxide. The maximum yield of vapors, which condense to a liquid called bio-oil, is obtained at 500 °C. Semi-crystalline cellulose polymers react at pyrolysis temperatures (350–600 °C) in a few seconds; this transformation has been shown to occur via a solid-to-liquid-to-vapor transition, with the liquid (called intermediate liquid cellulose or molten cellulose) existing for only a fraction of a second. Glycosidic bond cleavage produces short cellulose chains of two-to-seven monomers comprising the melt. Vapor bubbling of intermediate liquid cellulose produces aerosols, which consist of short chain anhydro-oligomers derived from the melt. Continuing decomposition of molten cellulose produces volatile compounds including levoglucosan, furans, pyrans, light oxygenates, and gases via primary reactions. Within thick cellulose samples, volatile compounds such as levoglucosan undergo 'secondary reactions' to volatile products including pyrans and light oxygenates such as glycolaldehyde. Hemicellulose Hemicelluloses are polysaccharides related to cellulose that comprise about 20% of the biomass of land plants. In contrast to cellulose, hemicelluloses are derived from several sugars in addition to glucose, especially xylose but also including mannose, galactose, rhamnose, and arabinose. Hemicelluloses consist of shorter chains – between 500 and 3000 sugar units. Furthermore, hemicelluloses are branched, whereas cellulose is unbranched. Regenerated cellulose Cellulose is soluble in several kinds of media, several of which are the basis of commercial technologies. These dissolution processes are reversible and are used in the production of regenerated celluloses (such as viscose and cellophane) from dissolving pulp. The most important solubilizing agent is carbon disulfide in the presence of alkali. Other agents include Schweizer's reagent, N-methylmorpholine N-oxide, and lithium chloride in dimethylacetamide. In general, these agents modify the cellulose, rendering it soluble. The agents are then removed concomitant with the formation of fibers. Cellulose is also soluble in many kinds of ionic liquids. The history of regenerated cellulose is often cited as beginning with George Audemars, who first manufactured regenerated nitrocellulose fibers in 1855. Although these fibers were soft and strong, resembling silk, they had the drawback of being highly flammable. Hilaire de Chardonnet perfected production of nitrocellulose fibers, but manufacturing of these fibers by his process was relatively uneconomical. In 1890, L.H. Despeissis invented the cuprammonium process – which uses a cuprammonium solution to solubilize cellulose – a method still used today for production of artificial silk. In 1891, it was discovered that treatment of cellulose with alkali and carbon disulfide generated a soluble cellulose derivative known as viscose. This process, patented by the founders of the Viscose Development Company, is the most widely used method for manufacturing regenerated cellulose products. Courtaulds purchased the patents for this process in 1904, leading to significant growth of viscose fiber production. By 1931, expiration of patents for the viscose process led to its adoption worldwide. Global production of regenerated cellulose fiber peaked in 1973 at 3,856,000 tons.
Regenerated cellulose can be used to manufacture a wide variety of products. While the first application of regenerated cellulose was as a clothing textile, this class of materials is also used in the production of disposable medical devices as well as fabrication of artificial membranes. Cellulose esters and ethers The hydroxyl groups (−OH) of cellulose can be partially or fully reacted with various reagents to afford derivatives with useful properties, principally cellulose esters and cellulose ethers (−OR). In principle, although not always in current industrial practice, cellulosic polymers are renewable resources. Ester derivatives include: Cellulose acetate and cellulose triacetate are film- and fiber-forming materials that find a variety of uses. Nitrocellulose was initially used as an explosive and was an early film forming material. When plasticized with camphor, nitrocellulose gives celluloid. Cellulose ether derivatives include: Sodium carboxymethyl cellulose can be cross-linked to give croscarmellose sodium (E468) for use as a disintegrant in pharmaceutical formulations. Furthermore, by the covalent attachment of thiol groups to cellulose ethers such as sodium carboxymethyl cellulose, ethyl cellulose or hydroxyethyl cellulose, mucoadhesive and permeation-enhancing properties can be introduced. Thiolated cellulose derivatives (see thiomers) also exhibit high binding properties for metal ions. Commercial applications Cellulose for industrial use is mainly obtained from wood pulp and from cotton. Paper products: Cellulose is the major constituent of paper, paperboard, and card stock. Electrical insulation paper: Cellulose is used in diverse forms as insulation in transformers, cables, and other electrical equipment. Fibres: Cellulose is the main ingredient of textiles. Cotton and synthetics (nylons) each have about 40% market by volume. Other plant fibres (jute, sisal, hemp) represent about 20% of the market. Rayon, cellophane and other "regenerated cellulose fibres" are a small portion (5%). Consumables: Microcrystalline cellulose (E460i) and powdered cellulose (E460ii) are used as inactive fillers in drug tablets, and a wide range of soluble cellulose derivatives, E numbers E461 to E469, are used as emulsifiers, thickeners and stabilizers in processed foods. Cellulose powder is, for example, used in processed cheese to prevent caking inside the package. Cellulose occurs naturally in some foods and is an additive in manufactured foods, contributing an indigestible component used for texture and bulk, potentially aiding in defecation. Building material: Hydroxyl bonding of cellulose in water produces a sprayable, moldable material as an alternative to the use of plastics and resins. The recyclable material can be made water- and fire-resistant. It provides sufficient strength for use as a building material. Cellulose insulation made from recycled paper is becoming popular as an environmentally preferable material for building insulation. It can be treated with boric acid as a fire retardant. Miscellaneous: Cellulose can be converted into cellophane, a thin transparent film. It is the base material for the celluloid that was used for photographic and movie films until the mid-1930s. Cellulose is used to make water-soluble adhesives and binders such as methyl cellulose and carboxymethyl cellulose which are used in wallpaper paste. Cellulose is further used to make hydrophilic and highly absorbent sponges.
Cellulose is the raw material in the manufacture of nitrocellulose (cellulose nitrate) which is used in smokeless gunpowder. Pharmaceuticals: Cellulose derivatives, such as microcrystalline cellulose (MCC), have the advantages of retaining water, acting as a stabilizer and thickening agent, and reinforcing drug tablets. Aspirational Energy crops: The major combustible component of non-food energy crops is cellulose, with lignin second. Non-food energy crops produce more usable energy than edible energy crops (which have a large starch component), but still compete with food crops for agricultural land and water resources. Typical non-food energy crops include industrial hemp, switchgrass, Miscanthus, Salix (willow), and Populus (poplar) species. A strain of Clostridium bacteria found in zebra dung can convert nearly any form of cellulose into butanol fuel. Another possible application is as an insect repellent. See also Gluconic acid Isosaccharinic acid, a degradation product of cellulose Lignin Zeoform References External links Structure and morphology of cellulose by Serge Pérez and William Mackie, CERMAV-CNRS Cellulose, by Martin Chaplin, London South Bank University Clear description of a cellulose assay method at the Cotton Fiber Biosciences unit of the USDA. Cellulose films could provide flapping wings and cheap artificial muscles for robots – TechnologyReview.com Excipients Papermaking Polysaccharides E-number additives
Cellulose
[ "Chemistry" ]
4,450
[ "Carbohydrates", "Polysaccharides" ]
6,920
https://en.wikipedia.org/wiki/Column
A column or pillar in architecture and structural engineering is a structural element that transmits, through compression, the weight of the structure above to other structural elements below. In other words, a column is a compression member. The term column applies especially to a large round support (the shaft of the column) with a capital and a base or pedestal, which is made of stone, or appears to be so. A small wooden or metal support is typically called a post. Supports with a rectangular or other non-round section are usually called piers. For the purpose of wind or earthquake engineering, columns may be designed to resist lateral forces. Other compression members are often termed "columns" because of the similar stress conditions. Columns are frequently used to support beams or arches on which the upper parts of walls or ceilings rest. In architecture, "column" refers to such a structural element that also has certain proportional and decorative features. A column might also be a decorative element not needed for structural purposes; many columns are engaged, that is to say form part of a wall. A long sequence of columns joined by an entablature is known as a colonnade. History Antiquity All significant Iron Age civilizations of the Near East and Mediterranean made some use of columns. Egyptian In ancient Egyptian architecture as early as 2600 BC, the architect Imhotep made use of stone columns whose surface was carved to reflect the organic form of bundled reeds, like papyrus, lotus and palm. In later Egyptian architecture faceted cylinders were also common. Their form is thought to derive from archaic reed-built shrines. Carved from stone, the columns were highly decorated with carved and painted hieroglyphs, texts, ritual imagery and natural motifs. Egyptian columns are famously present in the Great Hypostyle Hall of Karnak, where 134 columns are lined up in sixteen rows, with some columns reaching heights of 24 metres. One of the most important types is the papyriform column. The origin of these columns goes back to the 5th Dynasty. They are composed of lotus (papyrus) stems which are drawn together into a bundle decorated with bands: the capital, instead of opening out into the shape of a bellflower, swells out and then narrows again like a flower in bud. The base, which tapers to take the shape of a half-sphere like the stem of the lotus, has a continuously recurring decoration of stipules. Greek and Roman The Minoans used whole tree-trunks, usually turned upside down in order to prevent re-growth, stood on a base set in the stylobate (floor base) and topped by a simple round capital. These were then painted as in the most famous Minoan palace of Knossos. The Minoans employed columns to create large open-plan spaces, light-wells and as a focal point for religious rituals. These traditions were continued by the later Mycenaean civilization, particularly in the megaron or hall at the heart of their palaces. The importance of columns and their reference to palaces and therefore authority is evidenced in their use in heraldic motifs such as the famous lion-gate of Mycenae where two lions stand each side of a column. Being made of wood these early columns have not survived, but their stone bases have and through these we may see their use and arrangement in these palace buildings.
The Egyptians, Persians and other civilizations mostly used columns for the practical purpose of holding up the roof inside a building, preferring outside walls to be decorated with reliefs or painting, but the Ancient Greeks, followed by the Romans, loved to use them on the outside as well, and the extensive use of columns on the interior and exterior of buildings is one of the most characteristic features of classical architecture, in buildings like the Parthenon. The Greeks developed the classical orders of architecture, which are most easily distinguished by the form of the column and its various elements. Their Doric, Ionic, and Corinthian orders were expanded by the Romans to include the Tuscan and Composite orders. Persian Some of the most elaborate columns in the ancient world were those of the Persians, especially the massive stone columns erected in Persepolis. They included double-bull structures in their capitals. The Hall of Hundred Columns at Persepolis, measuring 70 × 70 metres, was built by the Achaemenid king Darius I (524–486 BC). Many of the ancient Persian columns are standing, some being more than 30 metres tall. Tall columns with bull's head capitals were used for porticoes and to support the roofs of the hypostyle hall, partly inspired by the ancient Egyptian precedent. Since the columns carried timber beams rather than stone, they could be taller, slimmer and more widely spaced than Egyptian ones. Middle Ages Columns, or at least large structural exterior ones, became much less significant in the architecture of the Middle Ages. The classical forms were abandoned in both Byzantine and Romanesque architecture in favour of more flexible forms, with capitals often using various types of foliage decoration, and in the West scenes with figures carved in relief. During the Romanesque period, builders continued to reuse and imitate ancient Roman columns wherever possible; where new, the emphasis was on elegance and beauty, as illustrated by twisted columns. Often they were decorated with mosaics. Renaissance and later styles Renaissance architecture was keen to revive the classical vocabulary and styles, and the informed use and variation of the classical orders remained fundamental to the training of architects throughout Baroque, Rococo and Neo-classical architecture. Structure Early columns were constructed of stone, some out of a single piece of stone. Monolithic columns are among the heaviest stones used in architecture. Other stone columns are created out of multiple sections of stone, mortared or dry-fit together. In many classical sites, sectioned columns were carved with a centre hole or depression so that they could be pegged together, using stone or metal pins. The design of most classical columns incorporates entasis (the inclusion of a slight outward curve in the sides) plus a reduction in diameter along the height of the column, so that the top is as little as 83% of the bottom diameter. This reduction mimics the parallax effects which the eye expects to see, and tends to make columns look taller and straighter than they are, while entasis adds to that effect. There are flutes and fillets that run up the shaft of columns. The flute is the part of the column that is indented with a semicircular profile. The fillet of the column is the part between each of the flutes on the Ionic order columns. The flute width changes on all tapered columns as it goes up the shaft and stays the same on all non-tapered columns.
This was done to the columns to add visual interest to them. The Ionic and the Corinthian are the only orders that have fillets and flutes. The Doric style has flutes but not fillets. Doric flutes are connected at a sharp point where the fillets are located on Ionic and Corinthian order columns. Nomenclature Most classical columns arise from a basis, or base, that rests on the stylobate, or foundation, except for those of the Doric order, which usually rest directly on the stylobate. The basis may consist of several elements, beginning with a wide, square slab known as a plinth. The simplest bases consist of the plinth alone, sometimes separated from the column by a convex circular cushion known as a torus. More elaborate bases include two toruses, separated by a concave section or channel known as a scotia or trochilus. Scotiae could also occur in pairs, separated by a convex section called an astragal, or bead, narrower than a torus. Sometimes these sections were accompanied by still narrower convex sections, known as annulets or fillets. At the top of the shaft is a capital, upon which the roof or other architectural elements rest. In the case of Doric columns, the capital usually consists of a round, tapering cushion, or echinus, supporting a square slab, known as an abax or abacus. Ionic capitals feature a pair of volutes, or scrolls, while Corinthian capitals are decorated with reliefs in the form of acanthus leaves. Either type of capital could be accompanied by the same moldings as the base. In the case of free-standing columns, the decorative elements atop the shaft are known as a finial. Modern columns may be constructed out of steel, poured or precast concrete, or brick, left bare or clad in an architectural covering, or veneer. An impost, or pier, used to support an arch, is the topmost member of a column. The bottom-most part of the arch, called the springing, rests on the impost. Equilibrium, instability, and loads As the axial load on a perfectly straight slender column with elastic material properties is increased in magnitude, this ideal column passes through three states: stable equilibrium, neutral equilibrium, and instability. The straight column under load is in stable equilibrium if a lateral force, applied between the two ends of the column, produces a small lateral deflection which disappears and the column returns to its straight form when the lateral force is removed. If the column load is gradually increased, a condition is reached in which the straight form of equilibrium becomes so-called neutral equilibrium, and a small lateral force will produce a deflection that does not disappear and the column remains in this slightly bent form when the lateral force is removed. The load at which neutral equilibrium of a column is reached is called the critical or buckling load. The state of instability is reached when a slight increase of the column load causes uncontrollably growing lateral deflections leading to complete collapse. For an axially loaded straight column with any end support conditions, the equation of static equilibrium, in the form of a differential equation, can be solved for the deflected shape and critical load of the column.
With hinged, fixed or free end support conditions the deflected shape in neutral equilibrium of an initially straight column with uniform cross section throughout its length always follows a partial or composite sinusoidal curve shape, and the critical load is given by Pcr = π² E Imin / L² (1), where E = elastic modulus of the material, Imin = the minimal moment of inertia of the cross section, and L = actual length of the column between its two end supports. A variant of (1) is given by Pcr = π² Et A / (KL/r)² (2), where r = radius of gyration of column cross-section which is equal to the square root of (I/A), K = ratio of the longest half sine wave to the actual column length, Et = tangent modulus at the stress Fcr, and KL = effective length (length of an equivalent hinged-hinged column). From Equation (2) it can be noted that the buckling strength of a column is inversely proportional to the square of its length. When the critical stress, Fcr (Fcr = Pcr/A, where A = cross-sectional area of the column), is greater than the proportional limit of the material, the column is experiencing inelastic buckling. Since at this stress the slope of the material's stress-strain curve, Et (called the tangent modulus), is smaller than that below the proportional limit, the critical load at inelastic buckling is reduced. More complex formulas and procedures apply for such cases, but in its simplest form the critical buckling load formula is given as Equation (3), Fcr = π² Et / (KL/r)². A column with a cross section that lacks symmetry may suffer torsional buckling (sudden twisting) before, or in combination with, lateral buckling. The presence of the twisting deformations renders both theoretical analyses and practical designs rather complex. Eccentricity of the load, or imperfections such as initial crookedness, decreases column strength. If the axial load on the column is not concentric, that is, its line of action is not precisely coincident with the centroidal axis of the column, the column is characterized as eccentrically loaded. The eccentricity of the load, or an initial curvature, subjects the column to immediate bending. The increased stresses due to the combined axial-plus-flexural stresses result in a reduced load-carrying ability. Column elements are considered to be massive if their smallest side dimension is equal to or more than 400 mm. Massive columns have the ability to increase in carrying strength over long time periods (even during periods of heavy load). Taking into account the fact that possible structural loads may increase over time as well (and also the threat of progressive failure), massive columns have an advantage compared to non-massive ones. Extensions When a column is too long to be built or transported in one piece, it has to be extended or spliced at the construction site. A reinforced concrete column is extended by having the steel reinforcing bars protrude a few inches or feet above the top of the concrete, then placing the next level of reinforcing bars to overlap, and pouring the concrete of the next level. A steel column is extended by welding or bolting splice plates on the flanges and webs or walls of the columns to provide a few inches or feet of load transfer from the upper to the lower column section. A timber column is usually extended by the use of a steel tube or wrapped-around sheet-metal plate bolted onto the two connecting timber sections. Foundations A column that carries the load down to a foundation must have means to transfer the load without overstressing the foundation material.
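As a rough numerical illustration of the formulas above, the sketch below (an editorial example with assumed, illustrative values, not figures taken from this article) evaluates the Euler buckling load of Equation (1) for a pinned steel column and then sizes a square base plate so that an assumed working load does not overstress the concrete beneath it, the requirement just described.

```python
import math

# Assumed, illustrative values for a pinned-pinned steel column (not from the article).
E = 200e9        # elastic modulus of steel, Pa
I_min = 8.0e-6   # minimal second moment of area of the cross section, m^4
L = 4.0          # length between end supports, m

# Equation (1): Euler critical (buckling) load for a pinned column
P_cr = math.pi**2 * E * I_min / L**2
print(f"Euler buckling load: {P_cr / 1e3:.0f} kN")   # ~987 kN for these numbers

# Base-plate sizing: spread an assumed working load so the bearing pressure on the
# concrete foundation stays below an assumed allowable value.
P_work = 400e3    # working axial load, N (assumed)
q_allow = 10e6    # allowable bearing pressure on the concrete, Pa (assumed, illustrative)
A_req = P_work / q_allow
side = math.sqrt(A_req)
print(f"Required base-plate area: {A_req * 1e4:.0f} cm^2 (square side ≈ {side * 100:.0f} cm)")
```

For these assumed numbers the buckling load is roughly 990 kN and a plate about 20 cm on a side suffices; real designs apply code-specified effective-length factors, safety factors and additional checks omitted here.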
Reinforced concrete and masonry columns are generally built directly on top of concrete foundations. When seated on a concrete foundation, a steel column must have a base plate to spread the load over a larger area, and thereby reduce the bearing pressure. The base plate is a thick, rectangular steel plate usually welded to the bottom end of the column. Orders The Roman author Vitruvius, relying on the writings (now lost) of Greek authors, tells us that the ancient Greeks believed that their Doric order developed from techniques for building in wood. The earlier smoothed tree-trunk was replaced by a stone cylinder. Doric order The Doric order is the oldest and simplest of the classical orders. It is composed of a vertical cylinder that is wider at the bottom. It generally has neither a base nor a detailed capital. It is instead often topped with an inverted frustum of a shallow cone or a cylindrical band of carvings. It is often referred to as the masculine order because it is represented in the bottom level of the Colosseum and the Parthenon, and was therefore considered to be able to hold more weight. The height-to-thickness ratio is about 8:1. The shaft of a Doric column is almost always fluted. The Greek Doric, developed in the western Dorian region of Greece, is the heaviest and most massive of the orders. It rises from the stylobate without any base; it is from four to six times as tall as its diameter; it has twenty broad flutes; the capital consists simply of a banded necking swelling out into a smooth echinus, which carries a flat square abacus; the Doric entablature is also the heaviest, being about one-fourth the height of the column. The Greek Doric order was not used after c. 100 B.C. until its "rediscovery" in the mid-eighteenth century. Tuscan order The Tuscan order, also known as Roman Doric, is also a simple design, the base and capital both being series of cylindrical disks of alternating diameter. The shaft is almost never fluted. The proportions vary, but are generally similar to Doric columns. Height to width ratio is about 7:1. Ionic order The Ionic column is considerably more complex than the Doric or Tuscan. It usually has a base and the shaft is often fluted (it has grooves carved up its length). The capital features a volute, an ornament shaped like a scroll, at the four corners. The height-to-thickness ratio is around 9:1. Due to the more refined proportions and scroll capitals, the Ionic column is sometimes associated with academic buildings. Ionic style columns were used on the second level of the Colosseum. Corinthian order The Corinthian order is named for the Greek city-state of Corinth, to which it was connected in the period. However, according to the architectural historian Vitruvius, the column was created by the sculptor Callimachus, probably an Athenian, who drew acanthus leaves growing around a votive basket. In fact, the oldest known Corinthian capital was found in Bassae, dated at 427 BC. It is sometimes called the feminine order because it is on the top level of the Colosseum, holding up the least weight, and also has the slenderest ratio of thickness to height. Height to width ratio is about 10:1. Composite order The Composite order draws its name from the capital being a composite of the Ionic and Corinthian capitals. The acanthus of the Corinthian column already has a scroll-like element, so the distinction is sometimes subtle. Generally the Composite is similar to the Corinthian in proportion and employment, often in the upper tiers of colonnades.
Height to width ratio is about 11:1 or 12:1. Solomonic A Solomonic column, sometimes called "barley sugar", begins on a base and ends in a capital, which may be of any order, but the shaft twists in a tight spiral, producing a dramatic, serpentine effect of movement. Solomonic columns were developed in the ancient world, but remained rare there. A famous marble set, probably 2nd century, was brought to Old St. Peter's Basilica by Constantine I, and placed round the saint's shrine, and was thus familiar throughout the Middle Ages, by which time they were thought to have been removed from the Temple of Jerusalem. The style was used in bronze by Bernini for his spectacular St. Peter's baldachin, actually a ciborium (which displaced Constantine's columns), and thereafter became very popular with Baroque and Rococo church architects, above all in Latin America, where they were very often used, especially on a small scale, as they are easy to produce in wood by turning on a lathe (hence also the style's popularity for spindles on furniture and stairs). Caryatid A Caryatid is a sculpted female figure serving as an architectural support taking the place of a column or a pillar supporting an entablature on her head. The Greek term literally means "maidens of Karyai", an ancient town of Peloponnese. Engaged columns In architecture, an engaged column is a column embedded in a wall and partly projecting from the surface of the wall, sometimes defined as semi or three-quarter detached. Engaged columns are rarely found in classical Greek architecture, and then only in exceptional cases, but in Roman architecture they exist in abundance, most commonly embedded in the cella walls of pseudoperipteral buildings. Pillar tombs Pillar tombs are monumental graves, which typically feature a single, prominent pillar or column, often made of stone. A number of world cultures incorporated pillars into tomb structures. In the ancient Greek colony of Lycia in Anatolia, one of these edifices is located at the tomb of Xanthos. In the town of Hannassa in southern Somalia, ruins of houses with archways and courtyards have also been found along with other pillar tombs, including a rare octagonal tomb. Gallery See also Columnar jointing (geology) Core (architecture) Huabiao Linga Lingodbhava Load-bearing wall Marian and Holy Trinity columns Our Lady of the Pillar Post (structural) Pylon (architecture) Spur (architecture) Structural engineering References Sources Chisholm, Hugh, ed. (1911). "Engaged Column". Encyclopædia Britannica. 9 (11th ed.). Cambridge University Press. pp. 404–405. Stierlin, Henri. The Roman Empire: From the Etruscans to the Decline of the Roman Empire, TASCHEN, 2002 Alderman, Liz (7 July 2014). "Acropolis Maidens Glow Anew". The New York Times. Retrieved 9 July 2014. Stokstad, Marilyn; Cothren, Michael (2014). Art History (Volume 1 ed.). New Jersey: Pearson Education, Inc. p. 110. External links Architectural elements Structural system Earthquake engineering
Column
[ "Technology", "Engineering" ]
4,250
[ "Structural engineering", "Building engineering", "Structural system", "Architectural elements", "Civil engineering", "Columns and entablature", "Earthquake engineering", "Components", "Architecture" ]
6,933
https://en.wikipedia.org/wiki/Chromatin
Chromatin is a complex of DNA and protein found in eukaryotic cells. Its primary function is to package long DNA molecules into more compact, denser structures. This prevents the strands from becoming tangled and also plays important roles in reinforcing the DNA during cell division, preventing DNA damage, and regulating gene expression and DNA replication. During mitosis and meiosis, chromatin facilitates proper segregation of the chromosomes in anaphase; the characteristic shapes of chromosomes visible during this stage are the result of DNA being coiled into highly condensed chromatin. The primary protein components of chromatin are histones. An octamer of two sets of four histone cores (Histone H2A, Histone H2B, Histone H3, and Histone H4) binds to DNA and functions as an "anchor" around which the strands are wound. In general, there are three levels of chromatin organization: DNA wraps around histone proteins, forming nucleosomes and the so-called beads on a string structure (euchromatin). Multiple histones wrap into a 30-nanometer fiber consisting of nucleosome arrays in their most compact form (heterochromatin). Higher-level DNA supercoiling of the 30 nm fiber produces the metaphase chromosome (during mitosis and meiosis). Many organisms, however, do not follow this organization scheme. For example, spermatozoa and avian red blood cells have more tightly packed chromatin than most eukaryotic cells, and trypanosomatid protozoa do not condense their chromatin into visible chromosomes at all. Prokaryotic cells have entirely different structures for organizing their DNA (the prokaryotic chromosome equivalent is called a genophore and is localized within the nucleoid region). The overall structure of the chromatin network further depends on the stage of the cell cycle. During interphase, the chromatin is structurally loose to allow access to RNA and DNA polymerases that transcribe and replicate the DNA. The local structure of chromatin during interphase depends on the specific genes present in the DNA. Regions of DNA containing genes which are actively transcribed ("turned on") are less tightly compacted and closely associated with RNA polymerases in a structure known as euchromatin, while regions containing inactive genes ("turned off") are generally more condensed and associated with structural proteins in heterochromatin. Epigenetic modification of the structural proteins in chromatin via methylation and acetylation also alters local chromatin structure and therefore gene expression. Chromatin structure is still incompletely understood and remains an active area of research in molecular biology. Dynamic chromatin structure and hierarchy Chromatin undergoes various structural changes during the cell cycle. Histone proteins are the basic packers and arrangers of chromatin and can be modified by various post-translational modifications to alter chromatin packing (histone modification). Most modifications occur on histone tails. The positively charged histone cores only partially counteract the negative charge of the DNA phosphate backbone, resulting in a negative net charge of the overall structure. This imbalance of charge within the polymer causes electrostatic repulsion between neighboring chromatin regions and promotes interactions with positively charged proteins, molecules, and cations. As these modifications occur, the electrostatic environment surrounding the chromatin will flux and the level of chromatin compaction will alter. 
The consequences in terms of chromatin accessibility and compaction depend both on the modified amino acid and the type of modification. For example, histone acetylation results in loosening and increased accessibility of chromatin for replication and transcription. Lysine trimethylation can either lead to increased transcriptional activity (trimethylation of histone H3 lysine 4) or transcriptional repression and chromatin compaction (trimethylation of histone H3, lysine 9 or lysine 27). Several studies suggested that different modifications could occur simultaneously. For example, it was proposed that a bivalent structure (with trimethylation of both lysine 4 and 27 on histone H3) is involved in early mammalian development. Another study tested the role of acetylation of histone H4 at lysine 16 on chromatin structure and found that homogeneous acetylation inhibited 30 nm chromatin formation and blocked adenosine triphosphate remodeling. This single modification changed the dynamics of the chromatin, which shows that acetylation of H4 at K16 is vital for proper intra- and inter-functionality of chromatin structure. Polycomb-group proteins play a role in regulating genes through modulation of chromatin structure. For additional information, see Chromatin variant, Histone modifications in chromatin regulation and RNA polymerase control by chromatin structure. Structure of DNA In nature, DNA can form three structures, A-, B-, and Z-DNA. A- and B-DNA are very similar, forming right-handed helices, whereas Z-DNA is a left-handed helix with a zig-zag phosphate backbone. Z-DNA is thought to play a specific role in chromatin structure and transcription because of the properties of the junction between B- and Z-DNA. At the junction of B- and Z-DNA, one pair of bases is flipped out from normal bonding. These flipped bases play a dual role, serving as a site of recognition by many proteins and as a sink for torsional stress from RNA polymerase or nucleosome binding. DNA stores information using four chemical bases: adenine (A), guanine (G), cytosine (C), and thymine (T). The order, or sequence, of these bases constitutes the information used to build and regulate an organism. The bases pair specifically, A with T and C with G, to form base pairs; each base is also joined to a sugar and a phosphate molecule to form a nucleotide, and the nucleotides are arranged in two long strands that wind around each other in a spiral called the double helix, held together by hydrogen bonds between the nitrogenous bases of the two strands. In eukaryotes, most of this DNA is contained within the cell nucleus, where it serves as the material basis of heredity. Nucleosomes and beads-on-a-string The basic repeat element of chromatin is the nucleosome, interconnected by sections of linker DNA, a far shorter arrangement than pure DNA in solution. In addition to core histones, a linker histone H1 exists that contacts the exit/entry of the DNA strand on the nucleosome. The nucleosome core particle, together with histone H1, is known as a chromatosome. Nucleosomes, with about 20 to 60 base pairs of linker DNA, can form, under non-physiological conditions, an approximately 11 nm beads on a string fibre. The nucleosomes bind DNA non-specifically, as required by their function in general DNA packaging. There are, however, large DNA sequence preferences that govern nucleosome positioning. 
This is due primarily to the varying physical properties of different DNA sequences: For instance, adenine (A), and thymine (T) is more favorably compressed into the inner minor grooves. This means nucleosomes can bind preferentially at one position approximately every 10 base pairs (the helical repeat of DNA)- where the DNA is rotated to maximise the number of A and T bases that will lie in the inner minor groove. (See nucleic acid structure.) 30-nm chromatin fiber in mitosis With addition of H1, during mitosis the beads-on-a-string structure can coil into a 30 nm-diameter helical structure known as the 30 nm fibre or filament. The precise structure of the chromatin fiber in the cell is not known in detail. This level of chromatin structure is thought to be the form of heterochromatin, which contains mostly transcriptionally silent genes. Electron microscopy studies have demonstrated that the 30 nm fiber is highly dynamic such that it unfolds into a 10 nm fiber beads-on-a-string structure when transversed by an RNA polymerase engaged in transcription. The existing models commonly accept that the nucleosomes lie perpendicular to the axis of the fibre, with linker histones arranged internally. A stable 30 nm fibre relies on the regular positioning of nucleosomes along DNA. Linker DNA is relatively resistant to bending and rotation. This makes the length of linker DNA critical to the stability of the fibre, requiring nucleosomes to be separated by lengths that permit rotation and folding into the required orientation without excessive stress to the DNA. In this view, different lengths of the linker DNA should produce different folding topologies of the chromatin fiber. Recent theoretical work, based on electron-microscopy images of reconstituted fibers supports this view. DNA loops The beads-on-a-string chromatin structure has a tendency to form loops. These loops allow interactions between different regions of DNA by bringing them closer to each other, which increases the efficiency of gene interactions. This process is dynamic, with loops forming and disappearing. The loops are regulated by two main elements: Cohesins, protein complexes that generate loops by extrusion of the DNA fiber through the ring-like structure of the complex itself. CTCF, a transcription factor that limits the frontier of the DNA loop. To stop the growth of a loop, two CTCF molecules must be positioned in opposite directions to block the movement of the cohesin ring (see video). There are many other elements involved. For example, Jpx regulates the binding sites of CTCF molecules along the DNA fiber. Spatial organization of chromatin in the cell nucleus The spatial arrangement of the chromatin within the nucleus is not random - specific regions of the chromatin can be found in certain territories. Territories are, for example, the lamina-associated domains (LADs), and the topologically associating domains (TADs), which are bound together by protein complexes. Currently, polymer models such as the Strings & Binders Switch (SBS) model and the Dynamic Loop (DL) model are used to describe the folding of chromatin within the nucleus. The arrangement of chromatin within the nucleus may also play a role in nuclear stress and restoring nuclear membrane deformation by mechanical stress. When chromatin is condensed, the nucleus becomes more rigid. When chromatin is decondensed, the nucleus becomes more elastic with less force exerted on the inner nuclear membrane. 
This observation sheds light on other possible cellular functions of chromatin organization outside of genomic regulation. Cell-cycle dependent structural organization Interphase: The structure of chromatin during interphase of mitosis is optimized to allow simple access of transcription and DNA repair factors to the DNA while compacting the DNA into the nucleus. The structure varies depending on the access required to the DNA. Genes that require regular access by RNA polymerase require the looser structure provided by euchromatin. Metaphase: The metaphase structure of chromatin differs vastly to that of interphase. It is optimised for physical strength and manageability, forming the classic chromosome structure seen in karyotypes. The structure of the condensed chromatin is thought to be loops of 30 nm fibre to a central scaffold of proteins. It is, however, not well-characterised. Chromosome scaffolds play an important role to hold the chromatin into compact chromosomes. Loops of 30 nm structure further condense with scaffold, into higher order structures. Chromosome scaffolds are made of proteins including condensin, type IIA topoisomerase and kinesin family member 4 (KIF4). The physical strength of chromatin is vital for this stage of division to prevent shear damage to the DNA as the daughter chromosomes are separated. To maximise strength the composition of the chromatin changes as it approaches the centromere, primarily through alternative histone H1 analogues. During mitosis, although most of the chromatin is tightly compacted, there are small regions that are not as tightly compacted. These regions often correspond to promoter regions of genes that were active in that cell type prior to chromatin formation. The lack of compaction of these regions is called bookmarking, which is an epigenetic mechanism believed to be important for transmitting to daughter cells the "memory" of which genes were active prior to entry into mitosis. This bookmarking mechanism is needed to help transmit this memory because transcription ceases during mitosis. Chromatin and bursts of transcription Chromatin and its interaction with enzymes has been researched, and a conclusion being made is that it is relevant and an important factor in gene expression. Vincent G. Allfrey, a professor at Rockefeller University, stated that RNA synthesis is related to histone acetylation. The lysine amino acid attached to the end of the histones is positively charged. The acetylation of these tails would make the chromatin ends neutral, allowing for DNA access. When the chromatin decondenses, the DNA is open to entry of molecular machinery. Fluctuations between open and closed chromatin may contribute to the discontinuity of transcription, or transcriptional bursting. Other factors are probably involved, such as the association and dissociation of transcription factor complexes with chromatin. Specifically, RNA polymerase and transcriptional proteins have been shown to congregate into droplets via phase separation, and recent studies have suggested that 10 nm chromatin demonstrates liquid-like behavior increasing the targetability of genomic DNA. The interactions between linker histones and disordered tail regions act as an electrostatic glue organizing large-scale chromatin into a dynamic, liquid-like domain. Decreased chromatin compaction comes with increased chromatin mobility and easier transcriptional access to DNA. 
This phenomenon, as opposed to simple probabilistic models of transcription, can account for the high variability in gene expression occurring between cells in isogenic populations. Alternative chromatin organizations During metazoan spermiogenesis, the spermatid's chromatin is remodeled into a more spaced-packaged, widened, almost crystal-like structure. This process is associated with the cessation of transcription and involves nuclear protein exchange. The histones are mostly displaced, and replaced by protamines (small, arginine-rich proteins). It is proposed that in yeast, regions devoid of histones become very fragile after transcription; HMO1, an HMG-box protein, helps in stabilizing nucleosome-free chromatin. Chromatin and DNA repair A variety of internal and external agents can cause DNA damage in cells. Many factors influence how the repair route is selected, including the cell cycle phase and the chromatin segment where the break occurred. In terms of initiating 5’ end DNA repair, the p53 binding protein 1 (53BP1) and BRCA1 are important protein components that influence double-strand break repair pathway selection. The 53BP1 complex attaches to chromatin near DNA breaks and activates downstream factors such as Rap1-Interacting Factor 1 (RIF1) and shieldin, which protects DNA ends against nucleolytic destruction. DNA damage occurs in the context of chromatin, and the constantly changing chromatin environment has a large effect on how damage is detected and repaired. To give the repair machinery access to a damaged site, the chromatin around the lesion is reorganized, largely through modification of histone residues: chemical groups, namely phosphate, acetyl and one or more methyl groups, are added to or removed from the histones, and these modifications control which proteins can reach the DNA and how the surrounding genes are expressed. The damaged region is then restored by processing the damaged bases and resynthesizing the affected stretch of DNA. To maintain genomic integrity, double-strand breaks are repaired mainly by homologous recombination or by classical non-homologous end joining. The packaging of eukaryotic DNA into chromatin presents a barrier to all DNA-based processes that require recruitment of enzymes to their sites of action. To allow the critical cellular process of DNA repair, the chromatin must be remodeled. In eukaryotes, ATP-dependent chromatin remodeling complexes and histone-modifying enzymes are two predominant factors employed to accomplish this remodeling process. Chromatin relaxation occurs rapidly at the site of DNA damage. This process is initiated by the protein PARP1, which starts to appear at the DNA damage in less than a second, with half-maximum accumulation within 1.6 seconds after the damage occurs. Next, the chromatin remodeler Alc1 quickly attaches to the product of PARP1 and completes its arrival at the DNA damage within 10 seconds of the damage. About half of the maximum chromatin relaxation, presumably due to action of Alc1, occurs by 10 seconds. This then allows recruitment of the DNA repair enzyme MRE11 to initiate DNA repair within 13 seconds. γH2AX, the phosphorylated form of H2AX, is also involved in the early steps leading to chromatin decondensation after DNA damage occurrence. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. 
γH2AX (H2AX phosphorylated on serine 139) can be detected as soon as 20 seconds after irradiation of cells (with DNA double-strand break formation), and half-maximum accumulation of γH2AX occurs in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break. γH2AX does not, itself, cause chromatin decondensation, but within 30 seconds of irradiation, RNF8 protein can be detected in association with γH2AX. RNF8 mediates extensive chromatin decondensation, through its subsequent interaction with CHD4, a component of the nucleosome remodeling and deacetylase complex NuRD. After undergoing relaxation subsequent to DNA damage, followed by DNA repair, chromatin recovers to a compaction state close to its pre-damage level after about 20 min. Methods to investigate chromatin ChIP-seq (chromatin immunoprecipitation sequencing) is the most widely used method for characterizing chromatin. It relies on antibodies that selectively recognize and bind chromatin-associated proteins, including histones, histone modifications, transcription factors and cofactors; unbound DNA fragments (oligonucleotides) are removed, yielding data about the state of the chromatin and the transcription of the associated genes. Chromatin immunoprecipitation sequencing aimed against different histone modifications can be used to identify chromatin states throughout the genome. Different modifications have been linked to various states of chromatin. DNase-seq (DNase I hypersensitive sites sequencing) uses the sensitivity of accessible regions in the genome to the DNase I enzyme to map open or accessible regions in the genome. FAIRE-seq (Formaldehyde-Assisted Isolation of Regulatory Elements sequencing) uses the chemical properties of protein-bound DNA in a two-phase separation method to extract nucleosome depleted regions from the genome. ATAC-seq (Assay for Transposase-Accessible Chromatin sequencing) uses the Tn5 transposase to integrate (synthetic) transposons into accessible regions of the genome, consequently highlighting the localisation of nucleosomes and transcription factors across the genome. DNA footprinting is a method aimed at identifying protein-bound DNA. It uses labeling and fragmentation coupled to gel electrophoresis to identify areas of the genome that have been bound by proteins. MNase-seq (Micrococcal Nuclease sequencing) uses the micrococcal nuclease enzyme to identify nucleosome positioning throughout the genome. Chromosome conformation capture determines the spatial organization of chromatin in the nucleus, by inferring genomic locations that physically interact. MACC profiling (Micrococcal nuclease ACCessibility profiling) uses titration series of chromatin digests with micrococcal nuclease to identify chromatin accessibility as well as to map nucleosomes and non-histone DNA-binding proteins in both open and closed regions of the genome. Chromatin and knots It has been a puzzle how decondensed interphase chromosomes remain essentially unknotted. The natural expectation is that in the presence of type II DNA topoisomerases that permit passages of double-stranded DNA regions through each other, all chromosomes should reach the state of topological equilibrium. The topological equilibrium in highly crowded interphase chromosomes forming chromosome territories would result in formation of highly knotted chromatin fibres. 
However, Chromosome Conformation Capture (3C) methods revealed that the decay of contacts with the genomic distance in interphase chromosomes is practically the same as in the crumpled globule state that is formed when long polymers condense without formation of any knots. To remove knots from highly crowded chromatin, one would need an active process that should not only provide the energy to move the system from the state of topological equilibrium but also guide topoisomerase-mediated passages in such a way that knots would be efficiently unknotted instead of making the knots even more complex. It has been shown that the process of chromatin-loop extrusion is ideally suited to actively unknot chromatin fibres in interphase chromosomes. Chromatin: alternative definitions The term, introduced by Walther Flemming, has multiple meanings: Simple and concise definition: Chromatin is a macromolecular complex of a DNA macromolecule and protein macromolecules (and RNA). The proteins package and arrange the DNA and control its functions within the cell nucleus. A biochemists' operational definition: Chromatin is the DNA/protein/RNA complex extracted from eukaryotic lysed interphase nuclei. Just which of the multitudinous substances present in a nucleus will constitute a part of the extracted material partly depends on the technique each researcher uses. Furthermore, the composition and properties of chromatin vary from one cell type to another, during the development of a specific cell type, and at different stages in the cell cycle. The DNA + histone = chromatin definition: The DNA double helix in the cell nucleus is packaged by special proteins termed histones. The formed protein/DNA complex is called chromatin. The basic structural unit of chromatin is the nucleosome. The first definition allows for "chromatins" to be defined in other domains of life like bacteria and archaea, using any DNA-binding proteins that condenses the molecule. These proteins are usually referred to nucleoid-associated proteins (NAPs); examples include AsnC/LrpC with HU. In addition, some archaea do produce nucleosomes from proteins homologous to eukaryotic histones. Chromatin Remodeling: Chromatin remodeling can result from covalent modification of histones that physically remodel, move or remove nucleosomes. Studies of Sanosaka et al. 2022, says that Chromatin remodeler CHD7 regulate cell type-specific gene expression in human neural crest cells. See also Active chromatin sequence Chromatid DAnCER database (2010) Epigenetics Histone-modifying enzymes Position-effect variegation Transcriptional bursting Notes References Additional sources Cooper, Geoffrey M. 2000. The Cell, 2nd edition, A Molecular Approach. Chapter 4.2 Chromosomes and Chromatin. Cremer, T. 1985. Von der Zellenlehre zur Chromosomentheorie: Naturwissenschaftliche Erkenntnis und Theorienwechsel in der frühen Zell- und Vererbungsforschung, Veröffentlichungen aus der Forschungsstelle für Theoretische Pathologie der Heidelberger Akademie der Wissenschaften. Springer-Vlg., Berlin, Heidelberg. Elgin, S. C. R. (ed.). 1995. Chromatin Structure and Gene Expression, vol. 9. IRL Press, Oxford, New York, Tokyo. Pollard, T., and W. Earnshaw. 2002. Cell Biology. Saunders. Saumweber, H. 1987. Arrangement of Chromosomes in Interphase Cell Nuclei, p. 223-234. In W. Hennig (ed.), Structure and Function of Eucaryotic Chromosomes, vol. 14. Springer-Verlag, Berlin, Heidelberg. Van Holde KE. 1989. Chromatin. New York: Springer-Verlag. . Van Holde, K., J. Zlatanova, G. 
Arents, and E. Moudrianakis. 1995. Elements of chromatin structure: histones, nucleosomes, and fibres, p. 1-26. In S. C. R. Elgin (ed.), Chromatin structure and gene expression. IRL Press at Oxford University Press, Oxford. External links Chromatin, Histones & Cathepsin; PMAP The Proteolysis Map-animation Nature journal: recent chromatin publications and news Protocol for in vitro Chromatin Assembly ENCODE threads Explorer Chromatin patterns at transcription factor binding sites. Nature (journal) Molecular genetics Nuclear substructures
Chromatin
[ "Chemistry", "Biology" ]
5,411
[ "Molecular genetics", "Molecular biology" ]
6,934
https://en.wikipedia.org/wiki/Condition%20number
In numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: given f(x) = y, one is solving for x, and thus the condition number of the (local) inverse must be used. The condition number is derived from the theory of propagation of uncertainty, and is formally defined as the value of the asymptotic worst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem. The condition number is frequently applied to questions in linear algebra, in which case the derivative is straightforward but the error could be in many different directions, and is thus computed from the geometry of the matrix. More generally, condition numbers can be defined for non-linear functions in several variables. A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned. In non-mathematical terms, an ill-conditioned problem is one where, for a small change in the inputs (the independent variables), there is a large change in the answer or dependent variable. This means that the correct solution/answer to the equation becomes hard to find. The condition number is a property of the problem. Paired with the problem are any number of algorithms that can be used to solve the problem, that is, to calculate the solution. Some algorithms have a property called backward stability; in general, a backward stable algorithm can be expected to accurately solve well-conditioned problems. Numerical analysis textbooks give formulas for the condition numbers of problems and identify known backward stable algorithms. As a rule of thumb, if the condition number κ(A) = 10^k, then you may lose up to k digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods. However, the condition number does not give the exact value of the maximum inaccuracy that may occur in the algorithm. It generally just bounds it with an estimate (whose computed value depends on the choice of the norm to measure the inaccuracy). General definition in the context of error analysis Given a problem f and an algorithm that computes an approximation f* of it, with an input x and output f*(x), the error is δf(x) = f(x) − f*(x), the absolute error is ‖f(x) − f*(x)‖, and the relative error is ‖f(x) − f*(x)‖ / ‖f(x)‖. In this context, the absolute condition number of a problem f at x is the limit, as ε → 0, of the largest possible ratio ‖f(x + δx) − f(x)‖ / ‖δx‖ over all perturbations δx with ‖δx‖ ≤ ε, and the relative condition number is the corresponding limit of the ratio of the relative change in output, ‖f(x + δx) − f(x)‖ / ‖f(x)‖, to the relative change in input, ‖δx‖ / ‖x‖. Matrices For example, the condition number associated with the linear equation Ax = b gives a bound on how inaccurate the solution x will be after approximation. Note that this is before the effects of round-off error are taken into account; conditioning is a property of the matrix, not the algorithm or floating-point accuracy of the computer used to solve the corresponding system. In particular, one should think of the condition number as being (very roughly) the rate at which the solution x will change with respect to a change in b. Thus, if the condition number is large, even a small error in b may cause a large error in x. On the other hand, if the condition number is small, then the error in x will not be much bigger than the error in b. 
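A small numerical experiment makes this concrete. The sketch below assumes NumPy is available and uses the 6×6 Hilbert matrix, a classic ill-conditioned example; the matrix, the chosen solution and the size of the perturbation are illustrative assumptions rather than values taken from any particular application.

```python
import numpy as np

# Minimal sketch: the 6x6 Hilbert matrix is a classic ill-conditioned example.
n = 6
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

x_true = np.ones(n)        # pick a known solution ...
b = A @ x_true             # ... and build the matching right-hand side

kappa = np.linalg.cond(A)  # 2-norm condition number, sigma_max / sigma_min
print(f"condition number of A: {kappa:.2e}")

# Perturb b by a tiny relative amount and solve again.
rng = np.random.default_rng(0)
db = 1e-10 * np.linalg.norm(b) * rng.standard_normal(n)
x_pert = np.linalg.solve(A, b + db)

rel_err_b = np.linalg.norm(db) / np.linalg.norm(b)
rel_err_x = np.linalg.norm(x_pert - x_true) / np.linalg.norm(x_true)
print(f"relative error in b: {rel_err_b:.1e}")
print(f"relative error in x: {rel_err_x:.1e} (<= cond * error in b = {kappa * rel_err_b:.1e})")
```

Here the condition number is on the order of 10^7, so a perturbation of b in its tenth significant digit can already contaminate x in its third or fourth digit, consistent with the rule of thumb above.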
The condition number is defined more precisely to be the maximum ratio of the relative error in x to the relative error in b. Let e be the error in b. Assuming that A is a nonsingular matrix, the error in the solution A−1b is A−1e. The ratio of the relative error in the solution to the relative error in b is The maximum value (for nonzero b and e) is then seen to be the product of the two operator norms as follows: The same definition is used for any consistent norm, i.e. one that satisfies When the condition number is exactly one (which can only happen if A is a scalar multiple of a linear isometry), then a solution algorithm can find (in principle, meaning if the algorithm introduces no errors of its own) an approximation of the solution whose precision is no worse than that of the data. However, it does not mean that the algorithm will converge rapidly to this solution, just that it will not diverge arbitrarily because of inaccuracy on the source data (backward error), provided that the forward error introduced by the algorithm does not diverge as well because of accumulating intermediate rounding errors. The condition number may also be infinite, but this implies that the problem is ill-posed (does not possess a unique, well-defined solution for each choice of data; that is, the matrix is not invertible), and no algorithm can be expected to reliably find a solution. The definition of the condition number depends on the choice of norm, as can be illustrated by two examples. If is the matrix norm induced by the (vector) Euclidean norm (sometimes known as the L2 norm and typically denoted as ), then where and are maximal and minimal singular values of respectively. Hence: If is normal, then where and are maximal and minimal (by moduli) eigenvalues of respectively. If is unitary, then The condition number with respect to L2 arises so often in numerical linear algebra that it is given a name, the condition number of a matrix. If is the matrix norm induced by the (vector) norm and is lower triangular non-singular (i.e. for all ), then recalling that the eigenvalues of any triangular matrix are simply the diagonal entries. The condition number computed with this norm is generally larger than the condition number computed relative to the Euclidean norm, but it can be evaluated more easily (and this is often the only practicably computable condition number, when the problem to solve involves a non-linear algebra, for example when approximating irrational and transcendental functions or numbers with numerical methods). If the condition number is not significantly larger than one, the matrix is well-conditioned, which means that its inverse can be computed with good accuracy. If the condition number is very large, then the matrix is said to be ill-conditioned. Practically, such a matrix is almost singular, and the computation of its inverse, or solution of a linear system of equations is prone to large numerical errors. A matrix that is not invertible is often said to have a condition number equal to infinity. Alternatively, it can be defined as , where is the Moore-Penrose pseudoinverse. For square matrices, this unfortunately makes the condition number discontinuous, but it is a useful definition for rectangular matrices, which are never invertible but are still used to define systems of equations. Nonlinear Condition numbers can also be defined for nonlinear functions, and can be computed using calculus. 
The condition number varies with the point; in some cases one can use the maximum (or supremum) condition number over the domain of the function or domain of the question as an overall condition number, while in other cases the condition number at a particular point is of more interest. One variable The absolute condition number of a differentiable function in one variable is the absolute value of the derivative of the function: The relative condition number of as a function is . Evaluated at a point , this is Note that this is the absolute value of the elasticity of a function in economics. Most elegantly, this can be understood as (the absolute value of) the ratio of the logarithmic derivative of , which is , and the logarithmic derivative of , which is , yielding a ratio of . This is because the logarithmic derivative is the infinitesimal rate of relative change in a function: it is the derivative scaled by the value of . Note that if a function has a zero at a point, its condition number at the point is infinite, as infinitesimal changes in the input can change the output from zero to positive or negative, yielding a ratio with zero in the denominator, hence infinite relative change. More directly, given a small change in , the relative change in is , while the relative change in is . Taking the ratio yields The last term is the difference quotient (the slope of the secant line), and taking the limit yields the derivative. Condition numbers of common elementary functions are particularly important in computing significant figures and can be computed immediately from the derivative. A few important ones are given below: Several variables Condition numbers can be defined for any function mapping its data from some domain (e.g. an -tuple of real numbers ) into some codomain (e.g. an -tuple of real numbers ), where both the domain and codomain are Banach spaces. They express how sensitive that function is to small changes (or small errors) in its arguments. This is crucial in assessing the sensitivity and potential accuracy difficulties of numerous computational problems, for example, polynomial root finding or computing eigenvalues. The condition number of at a point (specifically, its relative condition number) is then defined to be the maximum ratio of the fractional change in to any fractional change in , in the limit where the change in becomes infinitesimally small: where is a norm on the domain/codomain of . If is differentiable, this is equivalent to: where denotes the Jacobian matrix of partial derivatives of at , and is the induced norm on the matrix. See also Numerical methods for linear least squares Numerical stability Hilbert matrix Ill-posed problem Singular value Wilson matrix References Further reading External links Condition Number of a Matrix at Holistic Numerical Methods Institute MATLAB library function to determine condition number Condition number – Encyclopedia of Mathematics Who Invented the Matrix Condition Number? by Nick Higham Numerical analysis Matrices
Condition number
[ "Mathematics" ]
2,006
[ "Mathematical objects", "Computational mathematics", "Matrices (mathematics)", "Mathematical relations", "Numerical analysis", "Approximations" ]
6,938
https://en.wikipedia.org/wiki/Classical%20order
An order in architecture is a certain assemblage of parts subject to uniform established proportions, regulated by the office that each part has to perform. Coming down to the present from Ancient Greek and Ancient Roman civilization, the architectural orders are the styles of classical architecture, each distinguished by its proportions and characteristic profiles and details, and most readily recognizable by the type of column employed. The three orders of architecture—the Doric, Ionic, and Corinthian—originated in Greece. To these the Romans added, in practice if not in name, the Tuscan, which they made simpler than Doric, and the Composite, which was more ornamental than the Corinthian. The architectural order of a classical building is akin to the mode or key of classical music; the grammar or rhetoric of a written composition. It is established by certain modules like the intervals of music, and it raises certain expectations in an audience attuned to its language. Whereas the orders were essentially structural in Ancient Greek architecture, which made little use of the arch until its late period, in Roman architecture where the arch was often dominant, the orders became increasingly decorative elements except in porticos and similar uses. Columns shrank into half-columns emerging from walls or turned into pilasters. This treatment continued after the conscious and "correct" use of the orders, initially following exclusively Roman models, returned in the Italian Renaissance. Greek Revival architecture, inspired by increasing knowledge of Greek originals, returned to more authentic models, including ones from relatively early periods. Elements Each style has distinctive capitals at the top of columns and horizontal entablatures which it supports, while the rest of the building does not in itself vary between the orders. The column shaft and base also varies with the order, and is sometimes articulated with vertical concave grooves known as fluting. The shaft is wider at the bottom than at the top, because its entasis, beginning a third of the way up, imperceptibly makes the column slightly more slender at the top, although some Doric columns, especially early Greek ones, are visibly "flared", with straight profiles that narrow going up the shaft. The capital rests on the shaft. It has a load-bearing function, which concentrates the weight of the entablature on the supportive column, but it primarily serves an aesthetic purpose. The necking is the continuation of the shaft, but is visually separated by one or many grooves. The echinus lies atop the necking. It is a circular block that bulges outwards towards the top to support the abacus, which is a square or shaped block that in turn supports the entablature. The entablature consists of three horizontal layers, all of which are visually separated from each other using moldings or bands. In Roman and post-Renaissance work, the entablature may be carried from column to column in the form of an arch that springs from the column that bears its weight, retaining its divisions and sculptural enrichment, if any. There are names for all the many parts of the orders. Measurement The heights of columns are calculated in terms of a ratio between the diameter of the shaft at its base and the height of the column. 
A Doric column can be described as seven diameters high, an Ionic column as eight diameters high, and a Corinthian column nine diameters high, although the actual ratios used vary considerably in both ancient and revived examples, but still keeping to the trend of increasing slimness between the orders. Sometimes this is phrased as "lower diameters high", to establish which part of the shaft has been measured. Greek orders There are three distinct orders in Ancient Greek architecture: Doric, Ionic, and Corinthian. These three were adopted by the Romans, who modified their capitals. The Roman adoption of the Greek orders took place in the 1st century BC. The three ancient Greek orders have since been consistently used in European Neoclassical architecture. Sometimes the Doric order is considered the earliest order, but there is no evidence to support this. Rather, the Doric and Ionic orders seem to have appeared at around the same time, the Ionic in eastern Greece and the Doric in the west and mainland. Both the Doric and the Ionic order appear to have originated in wood. The Temple of Hera in Olympia is the oldest well-preserved temple of Doric architecture. It was built just after 600 BC. The Doric order later spread across Greece and into Sicily, where it was the chief order for monumental architecture for 800 years. Early Greeks were no doubt aware of the use of stone columns with bases and capitals in ancient Egyptian architecture, and that of other Near Eastern cultures, although there they were mostly used in interiors, rather than as a dominant feature of all or part of exteriors, in the Greek style. Doric order The Doric order originated on the mainland and western Greece. It is the simplest of the orders, characterized by short, organized, heavy columns with plain, round capitals (tops) and no base. With a height that is only four to eight times its diameter, the columns are the most squat of all orders. The shaft of the Doric order is channeled with 20 flutes. The capital consists of a necking or annulet, which is a simple ring. The echinus is convex, or circular cushion like stone, and the abacus is a square slab of stone. Above the capital is a square abacus connecting the capital to the entablature. The entablature is divided into three horizontal registers, the lower part of which is either smooth or divided by horizontal lines. The upper half is distinctive for the Doric order. The frieze of the Doric entablature is divided into triglyphs and metopes. A triglyph is a unit consisting of three vertical bands which are separated by grooves. Metopes are the plain or carved reliefs between two triglyphs. The Greek forms of the Doric order come without an individual base. They instead are placed directly on the stylobate. Later forms, however, came with the conventional base consisting of a plinth and a torus. The Roman versions of the Doric order have smaller proportions. As a result, they appear lighter than the Greek orders. Ionic order The Ionic order came from eastern Greece, where its origins are entwined with the similar but little known Aeolic order. It is distinguished by slender, fluted pillars with a large base and two opposed volutes (also called "scrolls") in the echinus of the capital. The echinus itself is decorated with an egg-and-dart motif. The Ionic shaft comes with four more flutes than the Doric counterpart (totalling 24). The Ionic base has two convex moldings called tori, which are separated by a scotia. 
The Ionic order is also marked by an entasis, a curved tapering in the column shaft. A column of the Ionic order is nine times more tall than its lower diameter. The shaft itself is eight diameters high. The architrave of the entablature commonly consists of three stepped bands (fasciae). The frieze comes without the Doric triglyph and metope. The frieze sometimes comes with a continuous ornament such as carved figures instead. Corinthian order The Corinthian order is the most elaborated of the Greek orders, characterized by a slender fluted column having an ornate capital decorated with two rows of acanthus leaves and four scrolls. The shaft of the Corinthian order has 24 flutes. The column is commonly ten diameters high. The Roman writer Vitruvius credited the invention of the Corinthian order to Callimachus, a Greek sculptor of the 5th century BC. The oldest known building built according to this order is the Choragic Monument of Lysicrates in Athens, constructed from 335 to 334 BC. The Corinthian order was raised to rank by the writings of Vitruvius in the 1st century BC. Roman orders The Romans adapted all the Greek orders and also developed two orders of their own, basically modifications of Greek orders. However, it was not until the Renaissance that these were named and formalized as the Tuscan and Composite, respectively the plainest and most ornate of the orders. The Romans also invented the Superposed order. A superposed order is when successive stories of a building have different orders. The heaviest orders were at the bottom, whilst the lightest came at the top. This means that the Doric order was the order of the ground floor, the Ionic order was used for the middle story, while the Corinthian or the Composite order was used for the top story. The Giant order was invented by architects in the Renaissance. The Giant order is characterized by columns that extend the height of two or more stories. Tuscan order The Tuscan order has a very plain design, with a plain shaft, and a simple capital, base, and frieze. It is a simplified adaptation of the Greeks' Doric order. The Tuscan order is characterized by an unfluted shaft and a capital that consists of only an echinus and an abacus. In proportions it is similar to the Doric order, but overall it is significantly plainer. The column is normally seven diameters high. Compared to the other orders, the Tuscan order looks the most solid. Composite order The Composite order is a mixed order, combining the volutes of the Ionic with the leaves of the Corinthian order. Until the Renaissance it was not ranked as a separate order. Instead it was considered as a late Roman form of the Corinthian order. The column of the Composite order is typically ten diameters high. Historical development The Renaissance period saw renewed interest in the literary sources of the ancient cultures of Greece and Rome, and the fertile development of a new architecture based on classical principles. The treatise by Roman theoretician, architect and engineer Vitruvius, is the only architectural writing that survived from Antiquity. Effectively rediscovered in the 15th century, Vitruvius came to be regarded as the ultimate authority on architecture. However, in his text the word order is not to be found. To describe the four species of columns (he only mentions: Tuscan, Doric, Ionic and Corinthian) he uses, in fact, various words such as: genus (gender), mos (habit, fashion, manner), opera (work). 
The term order, as well as the idea of redefining the canon started circulating in Rome, at the beginning of the 16th century, probably during the studies of Vitruvius' text conducted and shared by Peruzzi, Raphael, and Sangallo. Ever since, the definition of the canon has been a collective endeavor that involved several generations of European architects, from Renaissance and Baroque periods, basing their theories both on the study of Vitruvius' writings and the observation of Roman ruins (the Greek ruins became available only after Greek Independence, 1821–1823). What was added were rules for the use of the Architectural Orders, and the exact proportions of them in minute detail. Commentary on the appropriateness of the orders for temples devoted to particular deities (Vitruvius I.2.5) were elaborated by Renaissance theorists, with Doric characterized as bold and manly, Ionic as matronly, and Corinthian as maidenly. Vignola defining the concept of "order" Following the examples of Vitruvius and the five books of the Regole generali di architettura sopra le cinque maniere de gli edifici by Sebastiano Serlio published from 1537 onwards, Giacomo Barozzi da Vignola produced an architecture rule book that was not only more practical than the previous two treatises, but also was systematically and consistently adopting, for the first time, the term 'order' to define each of the five different species of columns inherited from antiquity. A first publication of the various plates, as separate sheets, appeared in Rome in 1562, with the title: Regola delli cinque ordini d'architettura ("Canon of the Five Orders of Architecture"). As David Watkin has pointed out, Vignola's book "was to have an astonishing publishing history of over 500 editions in 400 years in ten languages, Italian, Dutch, English, Flemish, French, German, Portuguese, Russian, Spanish, Swedish, during which it became perhaps the most influential book of all times". The book consisted simply of an introduction followed by 32 annotated plates, highlighting the proportional system with all the minute details of the Five Architectural Orders. According to Christof Thoenes, the main expert of Renaissance architectural treatises, "in accordance with Vitruvius's example, Vignola chose a "module" equal to a half-diameter which is the base of the system. All the other measurements are expressed in fractions or in multiples of this module. The result is an arithmetical model, and with its help each order, harmoniously proportioned, can easily be adapted to any given height, of a façade or an interior. From this point of view, Vignola's Regola is a remarkable intellectual achievement". In America, The American Builder's Companion, written in the early 19th century by the architect Asher Benjamin, influenced many builders in the eastern states, particularly those who developed what became known as the Federal style. The last American re-interpretation of Vignola's Regola, was edited in 1904 by William Robert Ware. The break from the classical mode came first with the Gothic Revival architecture, then the development of modernism during the 19th century. The Bauhaus promoted pure functionalism, stripped of superfluous ornament, and that has become one of the defining characteristics of modern architecture. There are some exceptions. Postmodernism introduced an ironic use of the orders as a cultural reference, divorced from the strict rules of composition. 
On the other hand, a number of practitioners such as Quinlan Terry in England, and Michael Dwyer, Richard Sammons, and Duncan Stroik in the United States, continue the classical tradition, and use the classical orders in their work. Nonce orders Several orders, usually based upon the composite order and only varying in the design of the capitals, have been invented under the inspiration of specific occasions, but have not been used again. They are termed "nonce orders" by analogy to nonce words; several examples follow below. These nonce orders all express the "speaking architecture" (architecture parlante) that was taught in the Paris courses, most explicitly by Étienne-Louis Boullée, in which sculptural details of classical architecture could be enlisted to speak symbolically, the better to express the purpose of the structure and enrich its visual meaning with specific appropriateness. This idea was taken up strongly in the training of Beaux-Arts architecture, . French order The Hall of Mirrors in the Palace of Versailles contains pilasters with bronze capitals in the "French order". Designed by Charles Le Brun, the capitals display the national emblems of the Kingdom of France: the royal sun between two Gallic roosters above a fleur-de-lis. British orders Robert Adam's brother James was in Rome in 1762, drawing antiquities under the direction of Clérisseau; he invented a "British order" and published an engraving of it. Its capital the heraldic lion and unicorn take the place of the Composite's volutes, a Byzantine or Romanesque conception, but expressed in terms of neoclassical realism. Adam's ink-and-wash rendering with red highlighting is at the Avery Library, Columbia University. In 1789 George Dance invented an Ammonite order, a variant of Ionic, substituting volutes in the form of fossil ammonites for John Boydell's Shakespeare Gallery in Pall Mall, London. An adaptation of the Corinthian order by William Donthorne that used turnip leaves and mangelwurzel is termed the Agricultural order. Sir Edwin Lutyens, who from 1912 laid out New Delhi as the new seat of government for the British Empire in India, designed a Delhi order having a capital displaying a band of vertical ridges, and with bells hanging at each corner as a replacement for volutes. His design for the new city's central palace, Viceroy's House, now the Presidential residence Rashtrapati Bhavan, was a thorough integration of elements of Indian architecture into a building of classical forms and proportions, and made use of the order throughout. The Delhi Order reappears in some later Lutyens buildings including Campion Hall, Oxford. American orders In the United States Benjamin Latrobe, the architect of the Capitol building in Washington, DC, designed a series of botanical American orders. Most famous is the Corinthian order substituting ears of corn and their husks for the acanthus leaves, which was executed by Giuseppe Franzoni and used in the small domed vestibule of the Senate. Only this vestibule survived the Burning of Washington in 1814, nearly intact. With peace restored, Latrobe designed an American order that substituted tobacco leaves for the acanthus, of which he sent a sketch to Thomas Jefferson in a letter, 5 November 1816. He was encouraged to send a model of it, which remains at Monticello. In the 1830s Alexander Jackson Davis admired it enough to make a drawing of it. 
In 1809 Latrobe invented a second American order, employing magnolia flowers constrained within the profile of classical mouldings, as his drawing demonstrates. It was intended for "the Upper Columns in the Gallery of the Entrance of the Chamber of the Senate". See also Greek temple Roman temple Notes References Summerson, John, The Classical Language of Architecture, 1980 edition, Thames and Hudson World of Art series, Frédérique Lemerle et Yves Pauwels (dir.), Histoires d’ordres: le langage européen de l’architecture, Turhout, Brepols, 2021 Further reading Barletta, Barbara A., The Origins of the Greek Architectural Orders (Cambridge University Press) 2001 Barozzi da Vignola, Giacomo, Canon of the Five Orders, Translated into English, with an introduction and commentary by Branko Mitrovic, Acanthus Press, New York, 1999 Barozzi da Vignola, Giacomo, Canon of the Five Orders, Translated by John Leeke (1669), with an introduction by David Watkin, Dover Publications, New York, 2011 Classical orders and elements Ancient Roman architectural elements Ancient Greek architecture Classical architecture Neoclassical architecture Design history
Classical order
[ "Engineering" ]
3,856
[ "Design history", "Design" ]
6,943
https://en.wikipedia.org/wiki/Cathode%20ray
Cathode rays or electron beams (e-beam) are streams of electrons observed in discharge tubes. If an evacuated glass tube is equipped with two electrodes and a voltage is applied, glass behind the positive electrode is observed to glow, due to electrons emitted from the cathode (the electrode connected to the negative terminal of the voltage supply). They were first observed in 1859 by German physicist Julius Plücker and Johann Wilhelm Hittorf, and were named in 1876 by Eugen Goldstein Kathodenstrahlen, or cathode rays. In 1897, British physicist J. J. Thomson showed that cathode rays were composed of a previously unknown negatively charged particle, which was later named the electron. Cathode-ray tubes (CRTs) use a focused beam of electrons deflected by electric or magnetic fields to render an image on a screen. Description Cathode rays are so named because they are emitted by the negative electrode, or cathode, in a vacuum tube. To release electrons into the tube, they first must be detached from the atoms of the cathode. In the early experimental cold cathode vacuum tubes in which cathode rays were discovered, called Crookes tubes, this was done by using a high electrical potential of thousands of volts between the anode and the cathode to ionize the residual gas atoms in the tube. The positive ions were accelerated by the electric field toward the cathode, and when they collided with it they knocked electrons out of its surface; these were the cathode rays. Modern vacuum tubes use thermionic emission, in which the cathode is made of a thin wire filament which is heated by a separate electric current passing through it. The increased random heat motion of the filament knocks electrons out of the surface of the filament, into the evacuated space of the tube. Since the electrons have a negative charge, they are repelled by the negative cathode and attracted to the positive anode. They travel in parallel lines through the empty tube. The voltage applied between the electrodes accelerates these low mass particles to high velocities. Cathode rays are invisible, but their presence was first detected in these Crookes tubes when they struck the glass wall of the tube, exciting the atoms of the glass coating and causing them to emit light, a glow called fluorescence. Researchers noticed that objects placed in the tube in front of the cathode could cast a shadow on the glowing wall, and realized that something must be traveling in straight lines from the cathode. After the electrons strike the back of the tube they make their way to the anode, then travel through the anode wire through the power supply and back through the cathode wire to the cathode, so cathode rays carry electric current through the tube. The current in a beam of cathode rays through a vacuum tube can be controlled by passing it through a metal screen of wires (a grid) between cathode and anode, to which a small negative voltage is applied. The electric field of the wires deflects some of the electrons, preventing them from reaching the anode. The amount of current that gets through to the anode depends on the voltage on the grid. Thus, a small voltage on the grid can be made to control a much larger voltage on the anode. This is the principle used in vacuum tubes to amplify electrical signals. The triode vacuum tube developed between 1907 and 1914 was the first electronic device that could amplify, and is still used in some applications such as radio transmitters. 
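The speeds involved can be estimated directly from the accelerating voltage. The sketch below is a rough illustration with assumed voltages of 1 kV to 100 kV, comparing the simple classical formula with the relativistic one.

```python
import math

# Rough sketch (illustrative voltages): speed an electron reaches after being
# accelerated from rest through a potential difference V inside the tube.
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E      = 9.1093837015e-31  # electron rest mass, kg
C        = 2.99792458e8      # speed of light, m/s

def electron_speed(volts):
    """Return (classical, relativistic) speeds in m/s for an accelerating voltage."""
    v_classical = math.sqrt(2.0 * E_CHARGE * volts / M_E)
    gamma = 1.0 + E_CHARGE * volts / (M_E * C**2)   # kinetic energy eV = (gamma - 1) m c^2
    v_relativistic = C * math.sqrt(1.0 - 1.0 / gamma**2)
    return v_classical, v_relativistic

for v in (1e3, 1e4, 1e5):   # 1 kV, 10 kV, 100 kV
    vc, vr = electron_speed(v)
    print(f"{v:8.0f} V: classical {vc:.2e} m/s, relativistic {vr:.2e} m/s ({vr / C:.2f} c)")
```

At a few kilovolts the two estimates agree closely; around 100 kV the classical formula already overestimates the speed by more than ten percent, which is why high-voltage electron beams are treated relativistically.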
High speed beams of cathode rays can also be steered and manipulated by electric fields created by additional metal plates in the tube to which voltage is applied, or magnetic fields created by coils of wire (electromagnets). These are used in cathode-ray tubes, found in televisions and computer monitors, and in electron microscopes. History After the invention of the vacuum pump in 1654 by Otto von Guericke, physicists began to experiment with passing high voltage electricity through rarefied air. In 1705, it was noted that electrostatic generator sparks travel a longer distance through low pressure air than through atmospheric pressure air. Gas discharge tubes In 1838, Michael Faraday applied a high voltage between two metal electrodes at either end of a glass tube that had been partially evacuated of air, and noticed a strange light arc with its beginning at the cathode (negative electrode) and its end at the anode (positive electrode). In 1857, German physicist and glassblower Heinrich Geissler sucked even more air out with an improved pump, to a pressure of around 10−3 atm and found that, instead of an arc, a glow filled the tube. The voltage applied between the two electrodes of the tubes, generated by an induction coil, was anywhere between a few kilovolts and 100 kV. These were called Geissler tubes, similar to today's neon signs. The explanation of these effects was that the high voltage accelerated free electrons and electrically charged atoms (ions) naturally present in the air of the tube. At low pressure, there was enough space between the gas atoms that the electrons could accelerate to high enough speeds that when they struck an atom they knocked electrons off of it, creating more positive ions and free electrons, which went on to create more ions and electrons in a chain reaction, known as a glow discharge. The positive ions were attracted to the cathode and when they struck it knocked more electrons out of it, which were attracted toward the anode. Thus the ionized air was electrically conductive and an electric current flowed through the tube. Geissler tubes had enough air in them that the electrons could only travel a tiny distance before colliding with an atom. The electrons in these tubes moved in a slow diffusion process, never gaining much speed, so these tubes didn't produce cathode rays. Instead, they produced a colorful glow discharge (as in a modern neon light), caused when the electrons struck gas atoms, exciting their orbital electrons to higher energy levels. The electrons released this energy as light. This process is called fluorescence. Cathode rays By the 1870s, British physicist William Crookes and others were able to evacuate tubes to a lower pressure, below 10−6 atm. These were called Crookes tubes. Faraday had been the first to notice a dark space just in front of the cathode, where there was no luminescence. This came to be called the "cathode dark space", "Faraday dark space" or "Crookes dark space". Crookes found that as he pumped more air out of the tubes, the Faraday dark space spread down the tube from the cathode toward the anode, until the tube was totally dark. But at the anode (positive) end of the tube, the glass of the tube itself began to glow. What was happening was that as more air was pumped from the tube, the electrons knocked out of the cathode when positive ions struck it could travel farther, on average, before they struck a gas atom. 
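The account above of why lower pressure lets electrons travel farther can be made roughly quantitative with the kinetic-theory mean free path λ = kT/(√2 π d² p). The sketch below is an order-of-magnitude estimate only: the molecular diameter and temperature are assumed values, and the molecule–molecule formula is used as a proxy for the electron's free path, not a figure taken from the article.

# Rough mean-free-path estimate for collisions with residual gas at different pressures.
# Assumptions: room temperature, an effective molecular diameter of ~0.37 nm (air-like).
import math

BOLTZMANN = 1.381e-23   # J/K
DIAMETER = 3.7e-10      # m, assumed effective molecular diameter
TEMP = 293.0            # K

def mean_free_path(pressure_pa):
    """Kinetic-theory mean free path between molecular collisions, in metres."""
    return BOLTZMANN * TEMP / (math.sqrt(2.0) * math.pi * DIAMETER**2 * pressure_pa)

for atm in (1.0, 1e-3, 1e-6):   # atmospheric, Geissler-tube, Crookes-tube regimes
    print(f"{atm:>6g} atm -> {mean_free_path(atm * 101325.0):.2e} m")

At atmospheric pressure the estimate is tens of nanometres, while at the Crookes-tube pressures mentioned above it grows to centimetres, comparable to the length of the tube, which is consistent with the behaviour described in the text.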
By the time the tube was dark, most of the electrons could travel in straight lines from the cathode to the anode end of the tube without a collision. With no obstructions, these low mass particles were accelerated to high velocities by the voltage between the electrodes. These were the cathode rays. When they reached the anode end of the tube, they were traveling so fast that, although they were attracted to it, they often flew past the anode and struck the back wall of the tube. When they struck atoms in the glass wall, they excited their orbital electrons to higher energy levels. When the electrons returned to their original energy level, they released the energy as light, causing the glass to fluoresce, usually a greenish or bluish color. Later researchers painted the inside back wall with fluorescent chemicals such as zinc sulfide, to make the glow more visible. Cathode rays themselves are invisible, but this accidental fluorescence allowed researchers to notice that objects in the tube in front of the cathode, such as the anode, cast sharp-edged shadows on the glowing back wall. In 1869, German physicist Johann Hittorf was first to realize that something must be traveling in straight lines from the cathode to cast the shadows. Eugen Goldstein named them cathode rays (German Kathodenstrahlen). Discovery of the electron At this time, atoms were the smallest particles known, and were believed to be indivisible. What carried electric currents was a mystery. During the last quarter of the 19th century, many historic experiments were done with Crookes tubes to determine what cathode rays were. There were two theories. Crookes and Arthur Schuster believed they were particles of "radiant matter," that is, electrically charged atoms. German scientists Eilhard Wiedemann, Heinrich Hertz and Goldstein believed they were "aether waves", some new form of electromagnetic radiation, and were separate from what carried the electric current through the tube. The debate was resolved in 1897 when J. J. Thomson measured the mass of cathode rays, showing they were made of particles, but were around 1800 times lighter than the lightest atom, hydrogen. Therefore, they were not atoms, but a new particle, the first subatomic particle to be discovered, which he originally called "corpuscle" but was later named electron, after particles postulated by George Johnstone Stoney in 1874. He also showed they were identical with particles given off by photoelectric and radioactive materials. It was quickly recognized that they are the particles that carry electric currents in metal wires, and carry the negative electric charge of the atom. Thomson was given the 1906 Nobel Prize in Physics for this work. Philipp Lenard also contributed a great deal to cathode-ray theory, winning the Nobel Prize in 1905 for his research on cathode rays and their properties. Vacuum tubes The gas ionization (or cold cathode) method of producing cathode rays used in Crookes tubes was unreliable, because it depended on the pressure of the residual air in the tube. Over time, the air was absorbed by the walls of the tube, and it stopped working. A more reliable and controllable method of producing cathode rays was investigated by Hittorf and Goldstein, and rediscovered by Thomas Edison in 1880. A cathode made of a wire filament heated red hot by a separate current passing through it would release electrons into the tube by a process called thermionic emission. 
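Thermionic emission, introduced at the end of the passage above, is commonly modelled by the Richardson–Dushman law J = A·T²·exp(−W/kT). The sketch below is only a textbook-style estimate: the tungsten work function and emission constant are approximate assumed values, not figures from this article.

# Richardson-Dushman estimate of thermionic emission current density from a hot filament.
# Assumptions: bare tungsten, W ~ 4.5 eV, A ~ 6e5 A m^-2 K^-2 (approximate textbook values).
import math

RICHARDSON_A = 6.0e5                  # A m^-2 K^-2, assumed emission constant for tungsten
WORK_FUNCTION = 4.5 * 1.602e-19       # joules (4.5 eV)
BOLTZMANN = 1.381e-23                 # J/K

def emission_current_density(temperature_k):
    """Emitted current density in A/m^2 at the given filament temperature."""
    return RICHARDSON_A * temperature_k**2 * math.exp(
        -WORK_FUNCTION / (BOLTZMANN * temperature_k))

for t in (1500.0, 2000.0, 2500.0):
    print(f"{t:.0f} K -> {emission_current_density(t):.3g} A/m^2")

The steep exponential dependence on temperature is the reason early "bright emitter" filaments had to be run white-hot, and why coated cathodes with lower work functions can be run much cooler.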
The first true electronic vacuum tubes, invented in 1904 by John Ambrose Fleming, used this hot cathode technique, and they superseded Crookes tubes. These tubes didn't need gas in them to work, so they were evacuated to a lower pressure, around 10−9 atm (10−4 Pa). The ionization method of creating cathode rays used in Crookes tubes is today only used in a few specialized gas discharge tubes such as krytrons. In 1906, Lee De Forest found that a small voltage on a grid of metal wires between the cathode and anode could control a current in a beam of cathode rays passing through a vacuum tube. His invention, called the triode, was the first device that could amplify electric signals, and revolutionized electrical technology, creating the new field of electronics. Vacuum tubes made radio and television broadcasting possible, as well as radar, talking movies, audio recording, and long-distance telephone service, and were the foundation of consumer electronic devices until the 1960s, when the transistor brought the era of vacuum tubes to a close. Cathode rays are now usually called electron beams. The technology of manipulating electron beams pioneered in these early tubes was applied practically in the design of vacuum tubes, particularly in the invention of the cathode-ray tube (CRT) by Ferdinand Braun in 1897, which was used in television sets and oscilloscopes. Today, electron beams are employed in sophisticated devices such as electron microscopes, electron beam lithography and particle accelerators. Properties Like a wave, cathode rays travel in straight lines, and produce a shadow when obstructed by objects. Ernest Rutherford demonstrated that rays could pass through thin metal foils, behavior expected of a particle. These conflicting properties caused disruptions when trying to classify it as a wave or particle. Crookes insisted it was a particle, while Hertz maintained it was a wave. The debate was resolved when an electric field was used to deflect the rays by J. J. Thomson. This was evidence that the beams were composed of particles because scientists knew it was impossible to deflect electromagnetic waves with an electric field. These can also create mechanical effects, fluorescence, etc. Louis de Broglie later (1924) suggested in his doctoral dissertation that electrons are like photons and can act as waves. The wave-like behaviour of cathode rays was later directly demonstrated using reflection from a nickel surface by Davisson and Germer, and transmission through celluloid thin films and later metal films by George Paget Thomson and Alexander Reid in 1927. (Alexander Reid, who was Thomson's graduate student, performed the first experiments but he died soon after in a motorcycle accident and is rarely mentioned.) See also β (beta) particles Electron beam processing Electron diffraction Electron microscope Electron beam melting Electron beam welding Electron beam technology Electron gun Electron irradiation Ionizing radiation Particle accelerator Sterilisation (microbiology) References General Chemistry (structure and properties of matter) by Aruna Bandara (2010) External links The Cathode Ray Tube site Crookes tube with maltese cross operating Electromagnetism Electron beam Articles containing video clips
Cathode ray
[ "Physics", "Chemistry" ]
2,848
[ "Electron", "Electromagnetism", "Physical phenomena", "Electron beam", "Fundamental interactions" ]
6,944
https://en.wikipedia.org/wiki/Cathode
A cathode is the electrode from which a conventional current leaves a polarized electrical device such as a lead-acid battery. This definition can be recalled by using the mnemonic CCD for Cathode Current Departs. A conventional current describes the direction in which positive charges move. Electrons have a negative electrical charge, so the movement of electrons is opposite to that of the conventional current flow. Consequently, the mnemonic cathode current departs also means that electrons flow into the device's cathode from the external circuit. For example, the end of a household battery marked with a + (plus) is the cathode. The electrode through which conventional current flows the other way, into the device, is termed an anode. Charge flow Conventional current flows from cathode to anode outside the cell or device (with electrons moving in the opposite direction), regardless of the cell or device type and operating mode. Cathode polarity with respect to the anode can be positive or negative depending on how the device is being operated. Inside a device or a cell, positively charged cations always move towards the cathode and negatively charged anions move towards the anode, although cathode polarity depends on the device type, and can even vary according to the operating mode. Whether the cathode is negatively polarized (such as recharging a battery) or positively polarized (such as a battery in use), the cathode will draw electrons into it from outside, as well as attract positively charged cations from inside. A battery or galvanic cell in use has a cathode that is the positive terminal since that is where conventional current flows out of the device. This outward current is carried internally by positive ions moving from the electrolyte to the positive cathode (chemical energy is responsible for this "uphill" motion). It is continued externally by electrons moving into the battery which constitutes positive current flowing outwards. For example, the Daniell galvanic cell's copper electrode is the positive terminal and the cathode. A battery that is recharging or an electrolytic cell performing electrolysis has its cathode as the negative terminal, from which current exits the device and returns to the external generator as charge enters the battery/ cell. For example, reversing the current direction in a Daniell galvanic cell converts it into an electrolytic cell where the copper electrode is the positive terminal and also the anode. In a diode, the cathode is the negative terminal at the pointed end of the arrow symbol, where current flows out of the device. Note: electrode naming for diodes is always based on the direction of the forward current (that of the arrow, in which the current flows "most easily"), even for types such as Zener diodes or solar cells where the current of interest is the reverse current. In vacuum tubes (including cathode-ray tubes) it is the negative terminal where electrons enter the device from the external circuit and proceed into the tube's near-vacuum, constituting a positive current flowing out of the device. Etymology The word was coined in 1834 from the Greek κάθοδος (kathodos), 'descent' or 'way down', by William Whewell, who had been consulted by Michael Faraday over some new names needed to complete a paper on the recently discovered process of electrolysis. 
In that paper Faraday explained that when an electrolytic cell is oriented so that electric current traverses the "decomposing body" (electrolyte) in a direction "from East to West, or, which will strengthen this help to the memory, that in which the sun appears to move", the cathode is where the current leaves the electrolyte, on the West side: "kata downwards, 'odos a way; the way which the sun sets". The use of 'West' to mean the 'out' direction (actually 'out' → 'West' → 'sunset' → 'down', i.e. 'out of view') may appear unnecessarily contrived. Previously, as related in the first reference cited above, Faraday had used the more straightforward term "exode" (the doorway where the current exits). His motivation for changing it to something meaning 'the West electrode' (other candidates had been "westode", "occiode" and "dysiode") was to make it immune to a possible later change in the direction convention for current, whose exact nature was not known at the time. The reference he used to this effect was the Earth's magnetic field direction, which at that time was believed to be invariant. He fundamentally defined his arbitrary orientation for the cell as being that in which the internal current would run parallel to and in the same direction as a hypothetical magnetizing current loop around the local line of latitude which would induce a magnetic dipole field oriented like the Earth's. This made the internal current East to West as previously mentioned, but in the event of a later convention change it would have become West to East, so that the West electrode would not have been the 'way out' any more. Therefore, "exode" would have become inappropriate, whereas "cathode" meaning 'West electrode' would have remained correct with respect to the unchanged direction of the actual phenomenon underlying the current, then unknown but, he thought, unambiguously defined by the magnetic reference. In retrospect the name change was unfortunate, not only because the Greek roots alone do not reveal the cathode's function any more, but more importantly because, as we now know, the Earth's magnetic field direction on which the "cathode" term is based is subject to reversals whereas the current direction convention on which the "exode" term was based has no reason to change in the future. Since the later discovery of the electron, an easier to remember, and more durably technically correct (although historically false), etymology has been suggested: cathode, from the Greek kathodos, 'way down', 'the way (down) into the cell (or other device) for electrons'. In chemistry In chemistry, a cathode is the electrode of an electrochemical cell at which reduction occurs. The cathode can be negative like when the cell is electrolytic (where electrical energy provided to the cell is being used for decomposing chemical compounds); or positive as when the cell is galvanic (where chemical reactions are used for generating electrical energy). The cathode supplies electrons to the positively charged cations which flow to it from the electrolyte (even if the cell is galvanic, i.e., when the cathode is positive and therefore would be expected to repel the positively charged cations; this is due to electrode potential relative to the electrolyte solution being different for the anode and cathode metal/electrolyte systems in a galvanic cell). The cathodic current, in electrochemistry, is the flow of electrons from the cathode interface to a species in solution. 
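The cathodic (reduction) current just described can be related to the amount of material reduced at the cathode through Faraday's laws of electrolysis, m = (Q/F)·(M/z). The sketch below is an illustrative copper-plating example; the current, duration, and the assumption of 100% current efficiency are chosen for the example and are not figures from the article.

# Faraday's-law estimate of metal deposited on an electroplating cathode.
# Assumptions: a copper sulfate bath (Cu^2+, z = 2), 1.5 A passed for one hour.
FARADAY = 96485.0        # C/mol
MOLAR_MASS_CU = 63.55    # g/mol
Z_CU = 2                 # electrons transferred per Cu^2+ ion reduced

def deposited_mass_grams(current_a, seconds, molar_mass, z):
    """Mass plated onto the cathode, assuming 100% current efficiency."""
    charge = current_a * seconds            # coulombs
    moles = charge / (z * FARADAY)          # moles of metal reduced at the cathode
    return moles * molar_mass

print(f"{deposited_mass_grams(1.5, 3600.0, MOLAR_MASS_CU, Z_CU):.2f} g of copper")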
The anodic current is the flow of electrons into the anode from a species in solution. Electrolytic cell In an electrolytic cell, the cathode is where the negative polarity is applied to drive the cell. Common results of reduction at the cathode are hydrogen gas or pure metal from metal ions. When discussing the relative reducing power of two redox agents, the couple for generating the more reducing species is said to be more "cathodic" with respect to the more easily reduced reagent. Galvanic cell In a galvanic cell, the cathode is where the positive pole is connected to allow the circuit to be completed: as the anode of the galvanic cell gives off electrons, they return from the circuit into the cell through the cathode. Electroplating metal cathode (electrolysis) When metal ions are reduced from ionic solution, they form a pure metal surface on the cathode. Items to be plated with pure metal are attached to and become part of the cathode in the electrolytic solution. In electronics Vacuum tubes In a vacuum tube or electronic vacuum system, the cathode is a metal surface which emits free electrons into the evacuated space. Since the electrons are attracted to the positive nuclei of the metal atoms, they normally stay inside the metal and require energy to leave it; this is called the work function of the metal. Cathodes are induced to emit electrons by several mechanisms: Thermionic emission: The cathode can be heated. The increased thermal motion of the metal atoms "knocks" electrons out of the surface, an effect called thermionic emission. This technique is used in most vacuum tubes. Field electron emission: A strong electric field can be applied to the surface by placing an electrode with a high positive voltage near the cathode. The positively charged electrode attracts the electrons, causing some electrons to leave the cathode's surface. This process is used in cold cathodes in some electron microscopes, and in microelectronics fabrication, Secondary emission: An electron, atom or molecule colliding with the surface of the cathode with enough energy can knock electrons out of the surface. These electrons are called secondary electrons. This mechanism is used in gas-discharge lamps such as neon lamps. Photoelectric emission: Electrons can also be emitted from the electrodes of certain metals when light of frequency greater than the threshold frequency falls on it. This effect is called photoelectric emission, and the electrons produced are called photoelectrons. This effect is used in phototubes and image intensifier tubes. Cathodes can be divided into two types: Hot cathode A hot cathode is a cathode that is heated by a filament to produce electrons by thermionic emission. The filament is a thin wire of a refractory metal like tungsten heated red-hot by an electric current passing through it. Before the advent of transistors in the 1960s, virtually all electronic equipment used hot-cathode vacuum tubes. Today hot cathodes are used in vacuum tubes in radio transmitters and microwave ovens, to produce the electron beams in older cathode-ray tube (CRT) type televisions and computer monitors, in x-ray generators, electron microscopes, and fluorescent tubes. There are two types of hot cathodes: Directly heated cathode: In this type, the filament itself is the cathode and emits the electrons directly. Directly heated cathodes were used in the first vacuum tubes, but today they are only used in fluorescent tubes, some large transmitting vacuum tubes, and all X-ray tubes. 
Indirectly heated cathode: In this type, the filament is not the cathode but rather heats the cathode which then emits electrons. Indirectly heated cathodes are used in most devices today. For example, in most vacuum tubes the cathode is a nickel tube with the filament inside it, and the heat from the filament causes the outside surface of the tube to emit electrons. The filament of an indirectly heated cathode is usually called the heater. The main reason for using an indirectly heated cathode is to isolate the rest of the vacuum tube from the electric potential across the filament. Many vacuum tubes use alternating current to heat the filament. In a tube in which the filament itself was the cathode, the alternating electric field from the filament surface would affect the movement of the electrons and introduce hum into the tube output. It also allows the filaments in all the tubes in an electronic device to be tied together and supplied from the same current source, even though the cathodes they heat may be at different potentials. In order to improve electron emission, cathodes are treated with chemicals, usually compounds of metals with a low work function. Treated cathodes require less surface area, lower temperatures and less power to supply the same cathode current. The untreated tungsten filaments used in early tubes (called "bright emitters") had to be heated to , white-hot, to produce sufficient thermionic emission for use, while modern coated cathodes produce far more electrons at a given temperature so they only have to be heated to There are two main types of treated cathodes: Coated cathode – In these the cathode is covered with a coating of alkali metal oxides, often barium and strontium oxide. These are used in low-power tubes. Thoriated tungsten – In high-power tubes, ion bombardment can destroy the coating on a coated cathode. In these tubes a directly heated cathode consisting of a filament made of tungsten incorporating a small amount of thorium is used. The layer of thorium on the surface which reduces the work function of the cathode is continually replenished as it is lost by diffusion of thorium from the interior of the metal. Cold cathode This is a cathode that is not heated by a filament. They may emit electrons by field electron emission, and in gas-filled tubes by secondary emission. Some examples are electrodes in neon lights, cold-cathode fluorescent lamps (CCFLs) used as backlights in laptops, thyratron tubes, and Crookes tubes. They do not necessarily operate at room temperature; in some devices the cathode is heated by the electron current flowing through it to a temperature at which thermionic emission occurs. For example, in some fluorescent tubes a momentary high voltage is applied to the electrodes to start the current through the tube; after starting the electrodes are heated enough by the current to keep emitting electrons to sustain the discharge. Cold cathodes may also emit electrons by photoelectric emission. These are often called photocathodes and are used in phototubes used in scientific instruments and image intensifier tubes used in night vision goggles. Diodes In a semiconductor diode, the cathode is the N–doped layer of the p–n junction with a high density of free electrons due to doping, and an equal density of fixed positive charges, which are the dopants that have been thermally ionized. 
In the anode, the converse applies: It features a high density of free "holes" and consequently fixed negative dopants which have captured an electron (hence the origin of the holes). When P and N-doped layers are created adjacent to each other, diffusion ensures that electrons flow from high to low density areas: That is, from the N to the P side. They leave behind the fixed positively charged dopants near the junction. Similarly, holes diffuse from P to N leaving behind fixed negative ionised dopants near the junction. These layers of fixed positive and negative charges are collectively known as the depletion layer because they are depleted of free electrons and holes. The depletion layer at the junction is at the origin of the diode's rectifying properties. This is due to the resulting internal field and corresponding potential barrier which inhibit current flow in reverse applied bias which increases the internal depletion layer field. Conversely, they allow it in forwards applied bias where the applied bias reduces the built in potential barrier. Electrons which diffuse from the cathode into the P-doped layer, or anode, become what are termed "minority carriers" and tend to recombine there with the majority carriers, which are holes, on a timescale characteristic of the material which is the p-type minority carrier lifetime. Similarly, holes diffusing into the N-doped layer become minority carriers and tend to recombine with electrons. In equilibrium, with no applied bias, thermally assisted diffusion of electrons and holes in opposite directions across the depletion layer ensure a zero net current with electrons flowing from cathode to anode and recombining, and holes flowing from anode to cathode across the junction or depletion layer and recombining. Like a typical diode, there is a fixed anode and cathode in a Zener diode, but it will conduct current in the reverse direction (electrons flow from anode to cathode) if its breakdown voltage or "Zener voltage" is exceeded. See also Battery Cathode bias Cathodic protection Electrolysis Electrolytic cell Gas-filled tube Oxidation-reduction PEDOT Vacuum tube References External links The Cathode Ray Tube site How to define anode and cathode Electrodes
Cathode
[ "Chemistry" ]
3,488
[ "Electrochemistry", "Electrodes" ]
6,949
https://en.wikipedia.org/wiki/Carbamazepine
Carbamazepine, sold under the brand name Tegretol among others, is an anticonvulsant medication used in the treatment of epilepsy and neuropathic pain. It is used as an adjunctive treatment in schizophrenia along with other medications and as a second-line agent in bipolar disorder. Carbamazepine appears to work as well as phenytoin and valproate for focal and generalized seizures. It is not effective for absence or myoclonic seizures. Carbamazepine was discovered in 1953 by Swiss chemist Walter Schindler. It was first marketed in 1962. It is available as a generic medication. It is on the World Health Organization's List of Essential Medicines. In 2020, it was the 185th most commonly prescribed medication in the United States, with more than 2million prescriptions. Photoswitchable analogues of carbamazepine have been developed to control its pharmacological activity locally and on demand using light (photopharmacology), with the purpose of reducing the adverse systemic effects of the drug. One of these light-regulated compounds (carbadiazocine, based on a bridged azobenzene or diazocine) has been shown to produce analgesia with noninvasive illumination in vivo in a rat model of neuropathic pain. Medical uses Carbamazepine is typically used for the treatment of seizure disorders and neuropathic pain. It is used off-label as a second-line treatment for bipolar disorder and in combination with an antipsychotic in some cases of schizophrenia when treatment with a conventional antipsychotic alone has failed. However, evidence does not support its usage for schizophrenia. It is not effective for absence seizures or myoclonic seizures. Although carbamazepine may have a similar effectiveness (as measured by people continuing to use a medication) and efficacy (as measured by the medicine reducing seizure recurrence and improving remission) when compared to phenytoin and valproate, choice of medication should be evaluated on an individual basis as further research is needed to determine which medication is most helpful for people with newly-onset seizures. In the United States, carbamazepine is indicated for the treatment of epilepsy (including partial seizures, generalized tonic-clonic seizures and mixed seizures), and trigeminal neuralgia. Carbamazepine is the only medication that is approved by the Food and Drug Administration for the treatment of trigeminal neuralgia. As of 2014, a controlled release formulation was available for which there is tentative evidence showing fewer side effects and unclear evidence with regard to whether there is a difference in efficacy. It has also been shown to improve symptoms of "typewriter tinnitus", a type of tinnitus caused by the neurovascular compression of the cochleovestibular nerve. Adverse effects In the US, the label for carbamazepine contains warnings concerning: effects on the body's production of red blood cells, white blood cells, and platelets: rarely, there are major effects of aplastic anemia and agranulocytosis reported and more commonly, there are minor changes such as decreased white blood cell or platelet counts, but these do not progress to more serious problems. increased risks of suicide increased risks of hyponatremia and SIADH risk of seizures, if the person stops taking the drug abruptly risks to the fetus in women who are pregnant, specifically congenital malformations like spina bifida, and developmental disorders. 
Pancreatitis Hepatitis Dizziness Bone marrow suppression Stevens–Johnson syndrome Common adverse effects may include drowsiness, dizziness, headaches and migraines, ataxia, nausea, vomiting, and/or constipation. Alcohol use while taking carbamazepine may lead to enhanced depression of the central nervous system. Less common side effects may include increased risk of seizures in people with mixed seizure disorders, abnormal heart rhythms, blurry or double vision. Also, rare case reports of an auditory side effect have been made, whereby patients perceive sounds about a semitone lower than previously; this unusual side effect is usually not noticed by most people, and disappears after the person stops taking carbamazepine. Pharmacogenetics Serious skin reactions such as Stevens–Johnson syndrome (SJS) or toxic epidermal necrolysis (TEN) due to carbamazepine therapy are more common in people with a particular human leukocyte antigen gene-variant (allele), HLA-B*1502. Odds ratios for the development of SJS or TEN in people who carry the allele can be in the double, triple or even quadruple digits, depending on the population studied. HLA-B*1502 occurs almost exclusively in people with ancestry across broad areas of Asia, but has a very low or absent frequency in European, Japanese, Korean and African populations. However, the HLA-A*31:01 allele has been shown to be a strong predictor of both mild and severe adverse reactions, such as the DRESS form of severe cutaneous reactions, to carbamazepine among Japanese, Chinese, Korean, and Europeans. It is suggested that carbamazepine acts as a potent antigen that binds to the antigen-presenting area of HLA-B*1502 alike, triggering an everlasting activation signal on immature CD8-T cells, thus resulting in widespread cytotoxic reactions like SJS/TEN. Interactions Carbamazepine has a potential for drug interactions. Drugs that decrease breaking down of carbamazepine or otherwise increase its levels include erythromycin, cimetidine, propoxyphene, and calcium channel blockers. Grapefruit juice raises the bioavailability of carbamazepine by inhibiting the enzyme CYP3A4 in the gut wall and in the liver. Lower levels of carbamazepine are seen when administered with phenobarbital, phenytoin, or primidone, which can result in breakthrough seizure activity. Valproic acid and valnoctamide both inhibit microsomal epoxide hydrolase (mEH), the enzyme responsible for the breakdown of the active metabolite carbamazepine-10,11-epoxide into inactive metabolites. By inhibiting mEH, valproic acid and valnoctamide cause a build-up of the active metabolite, prolonging the effects of carbamazepine and delaying its excretion. Carbamazepine, as an inducer of cytochrome P450 enzymes, may increase clearance of many drugs, decreasing their concentration in the blood to subtherapeutic levels and reducing their desired effects. Drugs that are more rapidly metabolized with carbamazepine include warfarin, lamotrigine, phenytoin, theophylline, valproic acid, many benzodiazepines, and methadone. Carbamazepine also increases the metabolism of the hormones in birth control pills and can reduce their effectiveness, potentially leading to unexpected pregnancies. Pharmacology Mechanism of action Carbamazepine is a sodium channel blocker. It binds preferentially to voltage-gated sodium channels in their inactive conformation, which prevents repetitive and sustained firing of an action potential. 
Carbamazepine has effects on serotonin systems but the relevance to its antiseizure effects is uncertain. There is evidence that it is a serotonin releasing agent and possibly even a serotonin reuptake inhibitor. It has been suggested that carbamazepine can also block voltage-gated calcium channels, which will reduce neurotransmitter release. Pharmacokinetics Carbamazepine is relatively slowly but practically completely absorbed after administration by mouth. Highest concentrations in the blood plasma are reached after 4 to 24 hours depending on the dosage form. Slow release tablets result in about 15% lower absorption and 25% lower peak plasma concentrations than ordinary tablets, as well as in less fluctuation of the concentration, but not in significantly lower minimum concentrations. In the circulation, carbamazepine itself comprises 20 to 30% of total residues. The remainder is in the form of metabolites; 70 to 80% of residues is bound to plasma proteins. Concentrations in breast milk are 25 to 60% of those in the blood plasma. Carbamazepine itself is not pharmacologically active. It is activated, mainly by CYP3A4, to carbamazepine-10,11-epoxide, which is solely responsible for the drug's anticonvulsant effects. The epoxide is then inactivated by microsomal epoxide hydrolase (mEH) to carbamazepine-trans-10,11-diol and further to its glucuronides. Other metabolites include various hydroxyl derivatives and carbamazepine-N-glucuronide. The plasma half-life is about 35 to 40 hours when carbamazepine is given as single dose, but it is a strong inducer of liver enzymes, and the plasma half-life shortens to about 12 to 17 hours when it is given repeatedly. The half-life can be further shortened to 9–10 hours by other enzyme inducers such as phenytoin or phenobarbital. About 70% are excreted via the urine, almost exclusively in form of its metabolites, and 30% via the faeces. History Carbamazepine was discovered by chemist Walter Schindler at J.R. Geigy AG (now part of Novartis) in Basel, Switzerland, in 1953. It was first marketed as a drug to treat epilepsy in Switzerland in 1963 under the brand name Tegretol; its use for trigeminal neuralgia (formerly known as tic douloureux) was introduced at the same time. It has been used as an anticonvulsant and antiepileptic in the United Kingdom since 1965, and has been approved in the United States since 1968. Carbamazepine was studied for bipolar disorder throughout the 1970s. Society and culture Environmental impact Carbamazepine and its bio-transformation products have been detected in wastewater treatment plant effluent and in streams receiving treated wastewater. Field and laboratory studies have been conducted to understand the accumulation of carbamazepine in food plants grown in soil treated with sludge, which vary with respect to the concentrations of carbamazepine present in sludge and in the concentrations of sludge in the soil. Taking into account only studies that used concentrations commonly found in the environment, a 2014 review concluded that "the accumulation of carbamazepine into plants grown in soil amended with biosolids poses a de minimis risk to human health according to the approach." Brand names Carbamazepine is available worldwide under many brand names including Tegretol. Research References Further reading External links Carbamazepine. 
UK National Health Service Anticonvulsants Antidiuretics CYP3A4 inducers Dermatoxins Dibenzazepines Drugs developed by Novartis GABAA receptor positive allosteric modulators Hepatotoxins Mood stabilizers Prodrugs Ureas World Health Organization essential medicines
Carbamazepine
[ "Chemistry" ]
2,396
[ "Organic compounds", "Chemicals in medicine", "Prodrugs", "Ureas" ]
6,956
https://en.wikipedia.org/wiki/Conservation%20law
In physics, a conservation law states that a particular measurable property of an isolated physical system does not change as the system evolves over time. Exact conservation laws include conservation of mass-energy, conservation of linear momentum, conservation of angular momentum, and conservation of electric charge. There are also many approximate conservation laws, which apply to such quantities as mass, parity, lepton number, baryon number, strangeness, hypercharge, etc. These quantities are conserved in certain classes of physics processes, but not in all. A local conservation law is usually expressed mathematically as a continuity equation, a partial differential equation which gives a relation between the amount of the quantity and the "transport" of that quantity. It states that the amount of the conserved quantity at a point or within a volume can only change by the amount of the quantity which flows in or out of the volume. From Noether's theorem, every differentiable symmetry leads to a conservation law. Other conserved quantities can exist as well. Conservation laws as fundamental laws of nature Conservation laws are fundamental to our understanding of the physical world, in that they describe which processes can or cannot occur in nature. For example, the conservation law of energy states that the total quantity of energy in an isolated system does not change, though it may change form. In general, the total quantity of the property governed by that law remains unchanged during physical processes. With respect to classical physics, conservation laws include conservation of energy, mass (or matter), linear momentum, angular momentum, and electric charge. With respect to particle physics, particles cannot be created or destroyed except in pairs, where one is ordinary and the other is an antiparticle. With respect to symmetries and invariance principles, three special conservation laws have been described, associated with inversion or reversal of space, time, and charge. Conservation laws are considered to be fundamental laws of nature, with broad application in physics, as well as in other fields such as chemistry, biology, geology, and engineering. Most conservation laws are exact, or absolute, in the sense that they apply to all possible processes. Some conservation laws are partial, in that they hold for some processes but not for others. One particularly important result concerning conservation laws is Noether's theorem, which states that there is a one-to-one correspondence between each one of them and a differentiable symmetry of the Universe. For example, the conservation of energy follows from the uniformity of time and the conservation of angular momentum arises from the isotropy of space, i.e. because there is no preferred direction of space. Notably, there is no conservation law associated with time-reversal, although more complex conservation laws combining time-reversal with other symmetries are known. Exact laws A partial listing of physical conservation equations due to symmetry that are said to be exact laws, or more precisely have never been proven to be violated: Another exact symmetry is CPT symmetry, the simultaneous inversion of space and time coordinates, together with swapping all particles with their antiparticles; however being a discrete symmetry Noether's theorem does not apply to it. Accordingly, the conserved quantity, CPT parity, can usually not be meaningfully calculated or determined. Approximate laws There are also approximate conservation laws. 
These are approximately true in particular situations, such as low speeds, short time scales, or certain interactions. Conservation of mechanical energy Conservation of mass (approximately true for nonrelativistic speeds) Conservation of baryon number (See chiral anomaly and sphaleron) Conservation of lepton number (In the Standard Model) Conservation of flavor (violated by the weak interaction) Conservation of strangeness (violated by the weak interaction) Conservation of space-parity (violated by the weak interaction) Conservation of charge-parity (violated by the weak interaction) Conservation of time-parity (violated by the weak interaction) Conservation of CP parity (violated by the weak interaction); in the Standard Model, this is equivalent to conservation of time-parity. Global and local conservation laws The total amount of some conserved quantity in the universe could remain unchanged if an equal amount were to appear at one point A and simultaneously disappear from another separate point B. For example, an amount of energy could appear on Earth without changing the total amount in the Universe if the same amount of energy were to disappear from some other region of the Universe. This weak form of "global" conservation is really not a conservation law because it is not Lorentz invariant, so phenomena like the above do not occur in nature. Due to special relativity, if the appearance of the energy at A and disappearance of the energy at B are simultaneous in one inertial reference frame, they will not be simultaneous in other inertial reference frames moving with respect to the first. In a moving frame one will occur before the other; either the energy at A will appear before or after the energy at B disappears. In both cases, during the interval energy will not be conserved. A stronger form of conservation law requires that, for the amount of a conserved quantity at a point to change, there must be a flow, or flux of the quantity into or out of the point. For example, the amount of electric charge at a point is never found to change without an electric current into or out of the point that carries the difference in charge. Since it only involves continuous local changes, this stronger type of conservation law is Lorentz invariant; a quantity conserved in one reference frame is conserved in all moving reference frames. This is called a local conservation law. Local conservation also implies global conservation; that the total amount of the conserved quantity in the Universe remains constant. All of the conservation laws listed above are local conservation laws. A local conservation law is expressed mathematically by a continuity equation, which states that the change in the quantity in a volume is equal to the total net "flux" of the quantity through the surface of the volume. The following sections discuss continuity equations in general. Differential forms In continuum mechanics, the most general form of an exact conservation law is given by a continuity equation. For example, conservation of electric charge q is ∂ρ/∂t = −∇·j, where ∇· is the divergence operator, ρ is the density of q (amount per unit volume), j is the flux of q (amount crossing a unit area in unit time), and t is time.
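The statement above — that the amount in a volume changes only by the net flux through its surface — can also be illustrated numerically: a finite-volume scheme that updates each cell from interface fluxes conserves the discrete total exactly when nothing crosses the outer boundary. The sketch below is only an illustrative example with an assumed linear advection flux j(y) = c·y; it is not drawn from the article.

# Finite-volume check that flux-form updates conserve the discrete total.
# Assumption: periodic 1-D advection with flux j(y) = c*y, chosen purely for illustration.
import numpy as np

c, dx, dt, steps = 1.0, 0.01, 0.005, 200
y = np.exp(-((np.arange(0.0, 1.0, dx) - 0.5) / 0.1) ** 2)   # initial density profile

def flux(y_values):
    return c * y_values

total_before = y.sum() * dx
for _ in range(steps):
    # Upwind interface fluxes; np.roll gives periodic boundaries, so nothing leaves the domain.
    j = flux(y)
    y = y - (dt / dx) * (j - np.roll(j, 1))
total_after = y.sum() * dx

print(f"total before: {total_before:.12f}, after: {total_after:.12f}")  # equal up to round-off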
If we assume that the motion u of the charge is a continuous function of position and time, then the current density can be written j = ρu and the conservation of charge becomes ∂ρ/∂t = −∇·(ρu). In one space dimension this can be put into the form of a homogeneous first-order quasilinear hyperbolic equation y_t + A(y) y_x = 0, where the dependent variable y is called the density of a conserved quantity, A(y) is called the current Jacobian, and the subscript notation for partial derivatives has been employed. The more general inhomogeneous case y_t + A(y) y_x = s is not a conservation equation but the general kind of balance equation describing a dissipative system. The dependent variable y is called a nonconserved quantity, and the inhomogeneous term s is the source, or dissipation. For example, balance equations of this kind are the momentum and energy Navier-Stokes equations, or the entropy balance for a general isolated system. In the one-dimensional space a conservation equation is a first-order quasilinear hyperbolic equation that can be put into the advection form y_t + a(y) y_x = 0, where the dependent variable y(x,t) is called the density of the conserved (scalar) quantity, and a(y) is called the current coefficient, usually corresponding to the partial derivative in the conserved quantity of a current density of the conserved quantity j(y): a(y) = ∂j(y)/∂y. In this case, since the chain rule applies, j(y)_x = a(y) y_x, the conservation equation can be put into the current density form y_t + j(y)_x = 0. In a space with more than one dimension the former definition can be extended to an equation that can be put into the form y_t + a(y)·∇y = 0, where the conserved quantity is y(r,t), · denotes the scalar product, ∇ is the nabla operator, here indicating a gradient, and a(y) is a vector of current coefficients, analogously corresponding to the divergence of a vector current density j(y) associated to the conserved quantity: y_t + ∇·j(y) = 0. This is the case for the continuity equation ∂ρ/∂t + ∇·(ρu) = 0. Here the conserved quantity is the mass, with density ρ(r,t) and current density ρu, identical to the momentum density, while u(r,t) is the flow velocity. In the general case a conservation equation can be also a system of this kind of equations (a vector equation) in the form y_t + A(y)·∇y = 0, where y is called the conserved (vector) quantity, ∇y is its gradient, 0 is the zero vector, and A(y) is called the Jacobian of the current density. In fact, as in the former scalar case, also in the vector case A(y) usually corresponds to the Jacobian of a current density matrix J(y), A(y) = ∂J(y)/∂y, and the conservation equation can be put into the form y_t + ∇·J(y) = 0. For example, this is the case for Euler equations (fluid dynamics). In the simple incompressible case they are ∇·u = 0 and ∂u/∂t + (u·∇)u + ∇s = 0, where u is the flow velocity vector, with components u_1, …, u_N in an N-dimensional space, and s is the specific pressure (pressure per unit density) giving the source term. It can be shown that the conserved (vector) quantity and the current density matrix for these equations are respectively y = (1, u) and J = (u, u⊗u + sI), where I is the identity matrix and ⊗ denotes the outer product. Integral and weak forms Conservation equations can usually also be expressed in integral form: the advantage of the latter is substantially that it requires less smoothness of the solution, which paves the way to weak form, extending the class of admissible solutions to include discontinuous solutions. By integrating in any space-time domain the current density form in 1-D space y_t + j(y)_x = 0 and by using Green's theorem, the integral form is ∮ [y dx − j(y) dt] = 0. In a similar fashion, for the scalar multidimensional space, the integral form is ∮ [y d^N r − j(y) dt] = 0, where the line integration is performed along the boundary of the domain, in an anticlockwise manner. Moreover, by defining a test function φ(r,t) continuously differentiable both in time and space with compact support, the weak form can be obtained pivoting on the initial condition.
In 1-D space it is ∫_0^∞ ∫_{−∞}^{+∞} [φ_t y + φ_x j(y)] dx dt + ∫_{−∞}^{+∞} φ(x,0) y(x,0) dx = 0. In the weak form all the partial derivatives of the density and current density have been passed on to the test function, which with the former hypothesis is sufficiently smooth to admit these derivatives. See also Invariant (physics) Momentum Cauchy momentum equation Energy Conservation of energy and the First law of thermodynamics Conservative system Conserved quantity Some kinds of helicity are conserved in dissipationless limit: hydrodynamical helicity, magnetic helicity, cross-helicity. Principle of mutability Conservation law of the Stress–energy tensor Riemann invariant Philosophy of physics Totalitarian principle Convection–diffusion equation Uniformity of nature Examples and applications Advection Mass conservation, or Continuity equation Charge conservation Euler equations (fluid dynamics) inviscid Burgers equation Kinematic wave Conservation of energy Traffic flow Notes References Philipson, Schuster, Modeling by Nonlinear Differential Equations: Dissipative and Conservative Processes, World Scientific Publishing Company 2009. Victor J. Stenger, 2000. Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpt. 12 is a gentle introduction to symmetry, invariance, and conservation laws. E. Godlewski and P.A. Raviart, Hyperbolic systems of conservation laws, Ellipses, 1991. External links Conservation Laws – Ch. 11–15 in an online textbook Scientific laws Symmetry Thermodynamic systems
Conservation law
[ "Physics", "Chemistry", "Mathematics" ]
2,307
[ "Thermodynamic systems", "Equations of physics", "Conservation laws", "Mathematical objects", "Scientific laws", "Equations", "Physical systems", "Thermodynamics", "Geometry", "Dynamical systems", "Symmetry", "Physics theorems" ]
6,966
https://en.wikipedia.org/wiki/Chinese%20calendar
The traditional Chinese calendar, dating back to the Han dynasty, is a lunisolar calendar that blends solar, lunar, and other cycles for social and agricultural purposes. While modern China primarily uses the Gregorian calendar for official purposes, the traditional calendar remains culturally significant. It determines the timing of Chinese New Year with traditions like the twelve animals of the Chinese Zodiac still widely observed. The traditional Chinese calendar uses the sexagenary cycle, a repeating system of Heavenly Stems and Earthly Branches, to mark years, months, and days. This system, along with astronomical observations and mathematical calculations, was developed to align solar and lunar cycles, though some approximations are necessary due to the natural differences between these cycles. Over centuries, the calendar was refined through advancements in astronomy and horology, with dynasties introducing variations to improve accuracy and meet cultural or political needs. While the Gregorian calendar has become now standard for civic daily use in China, the traditional lunisolar calendar continues to influence festivals, cultural practices, and zodiac-based customs. Beyond China, it has shaped other East Asian calendars, including the Korean, Vietnamese, and Japanese lunar systems, each adapting the same lunisolar principles while integrating local customs and terminology. Epochs, or fixed starting points for year counting, have played an essential role in the Chinese calendar's structure. Some epochs are based on historical figures, such as the inauguration of the Yellow Emperor (Huangdi), while others marked the rise of dynasties or significant political shifts. This system allowed for the numbering of years based on regnal eras, with the start of a ruler's reign often resetting the count. The Chinese calendar also tracks time in smaller units, including months, days, and double-hour periods called shichen. These timekeeping methods have influenced broader fields of horology, with some principles, such as precise time subdivisions, still evident in modern scientific timekeeping. The continued use of the calendar today highlights its enduring cultural, historical, and scientific significance. Etymology The name of calendar is in , and was represented in earlier character forms variants (), and ultimately derived from an ancient form (秝). The ancient form of the character consists of two stalks of rice plant (), arranged in parallel. This character represents the order in space and also the order in time. As its meaning became complex, the modern dedicated character () was created to represent the meaning of calendar. Maintaining the correctness of calendars was an important task to maintain the authority of rulers, being perceived as a way to measure the ability of a ruler. For example, someone seen as a competent ruler would foresee the coming of seasons and prepare accordingly. This understanding was also relevant in predicting abnormalities of the Earth and celestial bodies, such as lunar and solar eclipses. The significant relationship between authority and timekeeping helps to explain why there are 102 calendars in Chinese history, trying to predict the correct courses of sun, moon and stars, and marking good time and bad time. Each calendar is named as and recorded in a dedicated calendar section in history books of different eras. The last one in imperial era was . A ruler would issue an almanac before the commencement of each year. 
There were private almanac issuers, usually illegal, when a ruler lost his control to some territories. Various modern Chinese calendar names resulted from the struggle between the introduction of Gregorian calendar by government and the preservation of customs by the public in the era of Republic of China. The government wanted to abolish the Chinese calendar to force everyone to use the Gregorian calendar, and even abolished the Lunar New Year, but faced great opposition. The public needed the astronomical Chinese calendar to do things at a proper time, for example farming and fishing; also, a wide spectrum of festivals and customs observations have been based on the calendar. The government finally compromised and rebranded it as the agricultural calendar in 1947, depreciating the calendar to merely agricultural use. Epochs An epoch is a point in time chosen as the origin of a particular calendar era, thus serving as a reference point from which subsequent time or dates are measured. The use of epochs in Chinese calendar system allow for a chronological starting point from whence to begin point continuously numbering subsequent dates. Various epochs have been used. Similarly, nomenclature similar to that of the Christian era has occasionally been used: No reference date is universally accepted. The most popular is the Gregorian calendar (). During the 17th century, the Jesuit missionaries tried to determine the epochal year of the Chinese calendar. In his Sinicae historiae decas prima (published in Munich in 1658), Martino Martini (1614–1661) dated the Yellow Emperor's ascension at 2697 BCE and began the Chinese calendar with the reign of Fuxi (which, according to Martini, began in 2952 BCE). Philippe Couplet's 1686 Chronological table of Chinese monarchs (Tabula chronologica monarchiae sinicae) gave the same date for the Yellow Emperor. The Jesuits' dates provoked interest in Europe, where they were used for comparison with Biblical chronology. Modern Chinese chronology has generally accepted Martini's dates, except that it usually places the reign of the Yellow Emperor at 2698 BCE and omits his predecessors Fuxi and Shennong as "too legendary to include". Publications began using the estimated birth date of the Yellow Emperor as the first year of the Han calendar in 1903, with newspapers and magazines proposing different dates. Jiangsu province counted 1905 as the year 4396 (using a year 1 of 2491 BCE, and implying that CE is ), and the newspaper Ming Pao () reckoned 1905 as 4603 (using a year 1 of 2698 BCE, and implying that CE is ). Liu Shipei (, 1884–1919) created the Yellow Emperor Calendar (), with year 1 as the birth of the emperor (which he determined as 2711 BCE, implying that CE is ). There is no evidence that this calendar was used before the 20th century. Liu calculated that the 1900 international expedition sent by the Eight-Nation Alliance to suppress the Boxer Rebellion entered Beijing in the 4611th year of the Yellow Emperor. Taoists later adopted Yellow Emperor Calendar and named it Tao Calendar (). On 2 January 1912, Sun Yat-sen announced changes to the official calendar and era. 1 January was 14 Shíyīyuè 4609 Huángdì year, assuming a year 1 of 2698 BCE, making CE year . Many overseas Chinese communities like San Francisco's Chinatown adopted the change. The modern Chinese standard calendar uses the epoch of the Gregorian calendar, which is on 1 January of the year 1 CE. 
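The competing epochs described above differ only in a fixed offset, so converting a Common Era year to a "Yellow Emperor" era year is simple addition. The sketch below is a hypothetical helper written for illustration; it treats years as whole Gregorian years and ignores the fact that the traditional lunisolar year does not begin on 1 January.

# Converting a CE year to a Yellow Emperor (Huangdi) era year under different assumed epochs.
# Simplification: whole Gregorian years only; the lunisolar new year offset is ignored.
EPOCHS_BCE = {
    "Jiangsu (year 1 = 2491 BCE)": 2491,
    "Ming Pao (year 1 = 2698 BCE)": 2698,
    "Liu Shipei (year 1 = 2711 BCE)": 2711,
}

def huangdi_year(ce_year, epoch_bce):
    """Era year for a CE year, given an epoch of `epoch_bce` BCE as year 1 (no year zero)."""
    return ce_year + epoch_bce

for label, epoch in EPOCHS_BCE.items():
    print(f"1905 CE under {label}: {huangdi_year(1905, epoch)}")
# The first two lines reproduce the 4396 and 4603 figures quoted above.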
Calendar types Lunisolar Lunisolar calendars involve correlations of the cycles of the sun (solar) and the moon (lunar). Solar and agricultural A solar calendar (also called the Tung Shing, the Yellow Calendar or Imperial Calendar, both alluding to Yellow Emperor) keeps track of the seasons as the earth and the sun move in the solar system relatively to each other. A purely solar calendar may be useful in planning times for agricultural activities such as planting and harvesting. Solar calendars tend to use astronomically observable points of reference such as equinoxes and solstices, events which may be approximately predicted using fundamental methods of observation and basic mathematical analysis. Modern Chinese calendar and horology The topic of the Chinese calendar also includes variations of the modern Chinese calendar, influenced by the Gregorian calendar. Variations include methodologies of the People's Republic of China and Taiwan. Modern calendars In China, the modern calendar is defined by the Chinese national standard GB/T 33661–2017, "Calculation and Promulgation of the Chinese Calendar", issued by the Standardization Administration of China on 12 May 2017. Influence of Gregorian calendar Although modern-day China uses the Gregorian calendar, the traditional Chinese calendar governs holidays, such as the Chinese New Year and Lantern Festival, in both China and overseas Chinese communities. It also provides the traditional Chinese nomenclature of dates within a year which people use to select auspicious days for weddings, funerals, moving or starting a business. The evening state-run news program Xinwen Lianbo in the People's Republic of China continues to announce the months and dates in both the Gregorian and the traditional lunisolar calendar. History The Chinese calendar system has a long history, which has traditionally been associated with specific dynastic periods. Various individual calendar types have been developed with different names. In terms of historical development, some of the calendar variations are associated with dynastic changes along a spectrum beginning with a prehistorical/mythological time to and through well attested historical dynastic periods. Many individuals have been associated with the development of the Chinese calendar, including researchers into underlying astronomy; and, furthermore, the development of instruments of observation are historically important. Influences from India, Islam, and Jesuits also became significant. Phenology Early calendar systems often were closely tied to natural phenomena. Phenology is the study of periodic events in biological life cycles and how these are influenced by seasonal and interannual variations in climate, as well as habitat factors (such as elevation). The plum-rains season (), the rainy season in late spring and early summer, begins on the first bǐng day after Mangzhong () and ends on the first wèi day after Xiaoshu (). The Three Fu () are three periods of hot weather, counted from the first gēng day after the summer solstice. The first fu () is 10 days long. The mid-fu () is 10 or 20 days long. The last fu () is 10 days from the first gēng day after the beginning of autumn. The Shujiu cold days () are the 81 days after the winter solstice (divided into nine sets of nine days), and are considered the coldest days of the year. Each nine-day unit is known by its order in the set, followed by "nine" (). In traditional Chinese culture, "nine" represents the infinity, which is also the number of "Yang". 
According to one belief nine times accumulation of "Yang" gradually reduces the "Yin", and finally the weather becomes warm. Names of months Lunar months were originally named according to natural phenomena. Current naming conventions use numbers as the month names. Every month is also associated with one of the twelve Earthly Branches. Gregorian dates are approximate and should be used with caution. Many years have intercalary months. Chinese astronomy The Chinese calendar has been a development involving much observation and calculation of the apparent movements of the Sun, Moon, planets, and stars, as observed from Earth. Chinese astronomers Many Chinese astronomers have contributed to the development of the Chinese calendar. Many were of the scholarly or shi class (), including writers of history, such as Sima Qian. Notable Chinese astronomers who have contributed to the development of the calendar include Gan De, Shi Shen, and Zu Chongzhi Technology Early technological developments aiding in calendar development include the development of the gnomon. Later technological developments useful to the calendar system include naming, numbering and mapping of the sky, the development of analog computational devices such as the armillary sphere and the water clock, and the establishment of observatories. Chinese calendar names Ancient six calendars From the Warring States period (ending in 221 BCE), six especially significant calendar systems are known to have begun to be developed. Later on, during their future course in history, the modern names for the ancient six calendars were also developed, and can be translated into English as Huangdi, Yin, Zhou, Xia, Zhuanxu, and Lu. Calendar variations There are various Chinese terms for calendar variations including: Nongli Calendar (traditional Chinese: 農曆; simplified Chinese: 农历; pinyin: nónglì; lit. 'agricultural calendar') Jiuli Calendar (traditional Chinese: 舊曆; simplified Chinese: 旧历; pinyin: jiùlì; Jyutping: Gau6 Lik6; lit.'former calendar') Laoli Calendar (traditional Chinese: 老曆; simplified Chinese: 老历; pinyin: lǎolì; lit. 'old calendar') Zhongli Calendar (traditional Chinese: 中曆; simplified Chinese: 中历; pinyin: zhōnglì; Jyutping: zung1 lik6; lit. 'Chinese calendar') Huali Calendar (traditional Chinese: 華曆; simplified Chinese: 华历; pinyin: huálì; Jyutping: waa4 lik6; lit. 'Chinese calendar') Solar calendars The traditional Chinese calendar was developed between 771 BCE and 476 BCE, during the Spring and Autumn period of the Eastern Zhou dynasty. Solar calendars were used before the Zhou dynasty period, along with the basic sexagenary system. Five-elements calendar One version of the solar calendar is the five-elements calendar (), which derives from the Wu Xing. A 365-day year was divided into five phases of 73 days, with each phase corresponding to a Day 1 Wu Xing element. A phase began with a governing-element day (), followed by six 12-day weeks. Each phase consisted of two three-week months, making each year ten months long. Years began on a jiǎzǐ () day (and a 72-day wood phase), followed by a bǐngzǐ day () and a 72-day fire phase; a wùzǐ () day and a 72-day earth phase; a gēngzǐ () day and a 72-day metal phase, and a rénzǐ day () followed by a water phase. Other days were tracked using the Yellow River Map (He Tu). Four-quarters calendar Another version is a four-quarters calendar (, or ). The weeks were ten days long, with one month consisting of three weeks. 
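The structure of the five-elements year lends itself to a short worked sketch. The function below maps a day number (1 to 365) onto the scheme described above: five 73-day phases, each opening with a governing-element day and containing two 36-day, three-week months. The element order and month numbering follow the description in the text; the function and variable names are invented for illustration.

```python
ELEMENTS = ["wood", "fire", "earth", "metal", "water"]   # phase order given above

def phase_of_day(day_of_year: int):
    """Map a day number (1..365) to its phase in the five-elements year.

    Each 73-day phase opens with a governing-element day and is followed by
    six 12-day weeks, i.e. two 36-day, three-week months.
    """
    if not 1 <= day_of_year <= 365:
        raise ValueError("day_of_year must be in 1..365")
    phase, offset = divmod(day_of_year - 1, 73)    # 0-based phase and position within it
    element = ELEMENTS[phase]
    if offset == 0:
        return element, "governing-element day"
    month_in_phase = 1 if offset <= 36 else 2      # two 36-day months per phase
    return element, f"month {2 * phase + month_in_phase} of 10"

for d in (1, 2, 74, 150, 365):
    print(d, phase_of_day(d))
```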
A year had 12 months, with a ten-day week intercalated in summer as needed to keep up with the tropical year. The 10 Heavenly Stems and 12 Earthly Branches were used to mark days. Balanced calendar A third version is the balanced calendar (). A year was 365.25 days, and a month was 29.5 days. After every 16th month, a half-month was intercalated. According to oracle bone records, the Shang dynasty calendar ( BCE) was a balanced calendar with 12 to 14 months in a year; the month after the winter solstice was Zhēngyuè. Lunisolar calendars by dynasty Six ancient calendars Modern historical knowledge and records are limited for the earlier calendars. These calendars are known as the six ancient calendars (), or quarter-remainder calendars (), since all calculate a year as days long. Months begin on the day of the new moon, and a year has 12 or 13 months. Intercalary months (a 13th month) are added to the end of the year. The Qiang and Dai calendars are modern versions of the Zhuanxu calendar, used by mountain peoples. Zhou dynasty The first lunisolar calendar was the Zhou calendar (), introduced under the Zhou dynasty (1046 BCE – 256 BCE). This calendar sets the beginning of the year at the day of the new moon before the winter solstice. Competing Warring states calendars Several competing lunisolar calendars were also introduced as Zhou devolved into the Warring States, especially by states fighting Zhou control during the Warring States period (perhaps 475 BCE – 221 BCE). The state of Lu issued its own Lu calendar (). Jin issued the Xia calendar () with a year beginning on the day of the new moon nearest the March equinox. Qin issued the Zhuanxu calendar (), with a year beginning on the day of the new moon nearest the winter solstice. Song's Yin calendar () began its year on the day of the new moon after the winter solstice. Qin and early Han dynasties After Qin Shi Huang unified China under the Qin dynasty in 221 BCE, the Qin calendar () was introduced. It followed most of the rules governing the Zhuanxu calendar, but the month order was that of the Xia calendar; the year began with month 10 and ended with month 9, analogous to a Gregorian calendar beginning in October and ending in September. The intercalary month, known as the second Jiǔyuè (), was placed at the end of the year. The Qin calendar was used going into the Han dynasty. Han dynasty Tàichū calendar Emperor Wu of Han introduced reforms in the seventh of the eleven named eras of his reign, Tàichū (), 104 BCE – 101 BCE. His Tàichū Calendar () defined a solar year as days (365;06:00:14.035), and the lunar month had days (29;12:44:44.444). The 19-year cycle used to place the 7 additional (intercalary) months was taken as exact, not as an approximation. This calendar introduced the 24 solar terms, dividing the year into 24 equal parts of 15° each. Solar terms were paired, with the 12 combined periods known as climate terms. The first solar term of the period was known as a pre-climate (节气), and the second was a mid-climate (中气). Months were named for the mid-climate to which they were closest, and a month without a mid-climate was an intercalary month. The Taichu calendar established a framework for traditional calendars, with later calendars adding to the basic formula. Northern and Southern Dynasties Dàmíng calendar The Dàmíng Calendar (), created in the Northern and Southern Dynasties by Zu Chongzhi (429 CE – 500 CE), introduced the precession of the equinoxes. 
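The remark that the 19-year cycle was treated as exact can be checked with simple fraction arithmetic. The year and month lengths below are the values usually cited for the Tàichū system (365 385/1539 days and 29 43/81 days); they are not stated explicitly above, so treat them as an assumption of this sketch.

```python
from fractions import Fraction

# Values usually cited for the Tàichū system (an assumption of this sketch).
TAICHU_YEAR = Fraction(365) + Fraction(385, 1539)   # ~365.25016 days
TAICHU_MONTH = Fraction(29) + Fraction(43, 81)      # ~29.53086 days

nineteen_years = 19 * TAICHU_YEAR
metonic_months = (19 * 12 + 7) * TAICHU_MONTH       # 235 months, 7 of them intercalary

print(nineteen_years, float(nineteen_years))        # 562120/81, about 6939.75 days
print(metonic_months, float(metonic_months))        # the very same fraction
print(nineteen_years == metonic_months)             # True: 19 years = 235 months exactly
```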
Tang dynasty Wùyín Yuán calendar The use of syzygy to determine the lunar month was first described in the Tang dynasty Wùyín Yuán Calendar (). Yuan dynasty Shòushí calendar The Yuan dynasty Shòushí calendar () used spherical trigonometry to find the length of the tropical year. The calendar had a 365.2425-day year, identical to the Gregorian calendar. Shíxiàn calendar From 1645 to 1913, the Shíxiàn or Chongzhen calendar was in use. During the late Ming dynasty, the emperor appointed Xu Guangqi in 1629 to lead the Shíxiàn calendar reform. Assisted by Jesuits, he translated Western astronomical works and introduced new concepts, such as those of Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, and Tycho Brahe; however, the new calendar was not released before the end of the dynasty. In the early Qing dynasty, Johann Adam Schall von Bell submitted the calendar, which had been compiled under Xu Guangqi's leadership, to the Shunzhi Emperor. The Qing government issued it as the Shíxiàn (seasonal) calendar. In this calendar, the solar terms are 15° each along the ecliptic and it can be used as a solar calendar. However, the length of the climate term near the perihelion is less than 30 days and there may be two mid-climate terms. The Shíxiàn calendar changed the mid-climate-term rule to "decide the month in sequence, except the intercalary month." The present traditional calendar follows the Shíxiàn calendar, except: The baseline is Chinese Standard Time, rather than Beijing local time. Modern astronomical data, rather than mathematical calculations, are used. Republic of China Although the Chinese calendar lost its place as the country's official calendar at the beginning of the 20th century, its use has continued. The Republic of China Calendar published by the Beiyang government of the Republic of China still listed the dates of the Chinese calendar in addition to the Gregorian calendar. In 1929, the Nationalist government tried to ban the traditional Chinese calendar. The Kuómín Calendar published by the government no longer listed the dates of the Chinese calendar. However, Chinese people were used to the traditional calendar and many traditional customs were based on the Chinese calendar. The ban failed and was lifted in 1934. The latest Chinese calendar was "New Edition of Wànniánlì, revised edition", edited by the Purple Mountain Observatory, People's Republic of China. To optimize the Chinese calendar, astronomers have proposed a number of changes. Kao Ping-tse (; 1888–1970), a Chinese astronomer who co-founded the Purple Mountain Observatory, proposed that month numbers be calculated before the new moon and that solar terms be rounded to the day. Since the intercalary month is determined by the first month without a mid-climate and the mid-climate time varies by time zone, countries that adopted the calendar but calculate with their own time could vary from the time in China. Horology Horology, or chronometry, refers to the measurement of time. In the context of the Chinese calendar, horology involves the definition and mathematical measurement of elements such as observable astronomical movements and the events associated with days, months, years, hours, and so on. These measurements are based upon objective, observable phenomena. Calendar accuracy is based upon the accuracy and precision of those measurements. The Chinese calendar is lunisolar, similar to the Hindu, Hebrew and ancient Babylonian calendars. 
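Since the Shòushí mean year of 365.2425 days is singled out above as identical to the Gregorian value, a quick comparison against the modern mean tropical year shows how little it drifts. The tropical-year figure of 365.24219 days is an external approximation, not a value taken from the text, and the Julian year is included only for contrast.

```python
TROPICAL_YEAR = 365.24219   # modern mean tropical year in days (approximation)

def drift_days(calendar_year_length: float, years: int = 1000) -> float:
    """Accumulated drift (in days) of a mean calendar year against the tropical
    year over the given number of years; positive means the calendar runs slow
    relative to the seasons."""
    return (calendar_year_length - TROPICAL_YEAR) * years

for name, length in [("Shòushí / Gregorian mean year", 365.2425),
                     ("Julian year", 365.25)]:
    print(f"{name}: {drift_days(length):+.2f} days per 1000 years")
```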
In this case the calendar is in part based in objective, observable phenomena and in part by mathematical analysis to correlate the observed phenomena. Lunisolar calendars especially attempt to correlate the solar and lunar cycles, but other considerations can be agricultural and seasonal or phenological, or religious, or even political. Basic horologic definitions include that days begin and end at midnight, and months begin on the day of the new moon. Years start on the second (or third) new moon after the winter solstice. Solar terms govern the beginning, middle, and end of each month. A sexagenary cycle, comprising the heavenly stems () and the earthly branches (), is used as identification alongside each year and month, including intercalary months or leap months. Months are also annotated as either long ( for months with 30 days) or short ( for months with 29 days). There are also other elements of the traditional Chinese calendar. Day Days are Sun oriented, based upon divisions of the solar year. A day () is considered both traditionally and currently to be the time from one midnight to the next. Traditionally days (including the night-time portion) were divided into 12 double-hours, and in modern times the 24 hour system has become more standard. Month Months are Moon oriented. Month (), the time from one new moon to the next. These synodic months are about days long. This includes the Date (), when a day occurs in the month. Days are numbered in sequence from 1 to 29 (or 30). And, a Calendar month (), is when a month occurs within a year. Some months may be repeated. Year A year () is based upon the time of one revolution of Earth around the Sun, rounded to whole days. Traditionally, the year is measured from the first day of spring (lunisolar year) or the winter solstice (solar year). A year is astronomically about days. This includes the calendar () year, when it is authoritatively determined on which day one year ends and another begins. The year usually begins on the new moon closest to Lichun, the first day of spring. This is typically the second and sometimes third new moon after the winter solstice. A calendar year is 353–355 or 383–385 days long. Also includes Zodiac, year, or 30° on the ecliptic. A zodiacal year is about days. Solar terms Solar term (), year, or 15° on the ecliptic. A solar term is about days. Planets The movements of the Sun, Moon, Mercury, Venus, Mars, Jupiter and Saturn (sometimes known as the seven luminaries) are the references for calendar calculations. The distance between Mercury and the sun is less than 30° (the sun's height at chénshí:, 8:00 to 10:00 am), so Mercury was sometimes called the "chen star" (); it is more commonly known as the "water star" (). Venus appears at dawn and dusk and is known as the "bright star" () or "long star" (). Mars looks like fire and occurs irregularly, and is known as the "fire star" ( or ). Mars is the punisher in Chinese mythology. When Mars is near Antares (), it is a bad omen and can forecast an emperor's death or a chancellor's removal (). Jupiter's revolution period is 11.86 years, so Jupiter is called the "age star" (); 30° of Jupiter's revolution is about a year on earth. Saturn's revolution period is about 28 years. Known as the "guard star" (), Saturn guards one of the 28 Mansions every year. Stars Big Dipper The Big Dipper is the celestial compass, and its handle's direction indicates or some said determines the season and month. 
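The stated ranges of 353 to 355 days for a common year and 383 to 385 days for a long year follow directly from the mean synodic month. The sketch below uses the modern mean value of about 29.5306 days, which is an outside approximation rather than a figure quoted in the text.

```python
SYNODIC_MONTH = 29.530588   # modern mean synodic month in days (approximation)

# Lunisolar years track whole lunar months, so their lengths cluster around
# these means and land on whole days within roughly one day of them.
for months in (12, 13):
    mean = months * SYNODIC_MONTH
    low, high = round(mean) - 1, round(mean) + 1
    print(f"{months} months ~= {mean:.2f} days (calendar years of {low}-{high} days)")
```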
3 Enclosures and 28 Mansions The stars are divided into Three Enclosures and 28 Mansions according to their location in the sky relative to Ursa Minor, at the center. Each mansion is named with a character describing the shape of its principal asterism. The Three Enclosures are Purple Forbidden, (), Supreme Palace (), and Heavenly Market. () The eastern mansions are , , , , , , . Southern mansions are , , , , , , . Western mansions are , , , , , , . Northern mansions are , , , , , , . The moon moves through about one lunar mansion per day, so the 28 mansions were also used to count days. In the Tang dynasty, Yuan Tiangang () matched the 28 mansions, seven luminaries and yearly animal signs to yield combinations such as "horn-wood-flood dragon" (). List of lunar mansions The names and determinative stars of the mansions are: Descriptive mathematics Several coding systems are used to avoid ambiguity. The Heavenly Stems is a decimal system. The Earthly Branches, a duodecimal system, mark dual hours ( or ) and climatic terms. The 12 characters progress from the first day with the same branch as the month (first Yín day () of Zhēngyuè; first Mǎo day () of Èryuè), and count the days of the month. The stem-branches is a sexagesimal system. The Heavenly Stems and Earthly Branches make up 60 stem-branches. The stem branches mark days and years. The five Wu Xing elements are assigned to each stem, branch, or stem branch. Sexagenary system Twelve branches Day China has used the Western hour-minute-second system to divide the day since the Qing dynasty. Several era-dependent systems had been in use; systems using multiples of twelve and ten were popular, since they could be easily counted and aligned with the Heavenly Stems and Earthly Branches. Week As early as the Bronze Age Xia dynasty, days were grouped into nine- or ten-day weeks known as xún (). Months consisted of three xún. The first 10 days were the early xún (), the middle 10 the mid xún (), and the last nine (or 10) days were the late xún (). Japan adopted this pattern, with 10-day-weeks known as . In Korea, they were known as sun (,). The structure of xún led to public holidays every five or ten days. Officials of the Han dynasty were legally required to rest every five days (twice a xún, or 5–6 times a month). The name of these breaks became huan (, "wash"). Grouping days into sets of ten is still used today in referring to specific natural events. "Three Fu" (), a 29–30-day period which is the hottest of the year, reflects its three-xún length. After the winter solstice, nine sets of nine days were counted to calculate the end of winter. The seven-day week was adopted from the Hellenistic system by the 4th century CE, although its method of transmission into China is unclear. It was again transmitted to China in the 8th century by Manichaeans via Kangju (a Central Asian kingdom near Samarkand), and is the most-used system in modern China. Month Months are defined by the time between new moons, which averages approximately days. There is no specified length of any particular Chinese month, so the first month could have 29 days (short month, ) in some years and 30 days (long month, ) in other years. A 12-month-year using this system has 354 days, which would drift significantly from the tropical year. To fix this, traditional Chinese years have a 13-month year approximately once every three years. The 13-month version has the same long and short months alternating, but adds a 30-day leap month (). 
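The sexagenary bookkeeping described here is easy to mechanize. The sketch below generates a stem-branch name from a cycle position using the conventional pinyin readings of the ten stems and twelve branches and the usual assignment of one element to each stem pair; the function name and output format are simply illustrative.

```python
STEMS = ["jiǎ", "yǐ", "bǐng", "dīng", "wù", "jǐ", "gēng", "xīn", "rén", "guǐ"]
BRANCHES = ["zǐ", "chǒu", "yín", "mǎo", "chén", "sì",
            "wǔ", "wèi", "shēn", "yǒu", "xū", "hài"]
ELEMENTS = ["wood", "fire", "earth", "metal", "water"]   # one element per stem pair

def stem_branch(n: int) -> str:
    """Name of position n (0-59) in the sexagenary cycle.

    Stem and branch advance together, so only 60 of the 120 possible
    pairings ever occur before the cycle repeats.
    """
    stem = STEMS[n % 10]
    branch = BRANCHES[n % 12]
    element = ELEMENTS[(n % 10) // 2]
    return f"{stem}{branch} ({element})"

# The cycle opens with jiǎzǐ and closes with guǐhài before repeating.
print(stem_branch(0), stem_branch(1), stem_branch(59))
```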
Years with 12 months are called common years, and 13-month years are known as long years. Although most of the above rules were used until the Tang dynasty, different eras used different systems to keep lunar and solar years aligned. The synodic month of the Taichu calendar was days long. The 7th-century, Tang-dynasty Wùyín Yuán Calendar was the first to determine month length by synodic month instead of the cycling method. Since then, month lengths have primarily been determined by observation and prediction. The days of the month are always written with two characters and numbered beginning with 1. Days one to 10 are written with the day's numeral, preceded by the character Chū (); Chūyī () is the first day of the month, and Chūshí () the 10th. Days 11 to 20 are written as regular Chinese numerals; Shíwǔ () is the 15th day of the month, and Èrshí () the 20th. Days 21 to 29 are written with the character Niàn () before the characters one through nine; Niànsān (), for example, is the 23rd day of the month. Day 30 (when applicable) is written as the numeral Sānshí (). History books use days of the month numbered with the 60 stem-branches: Because astronomical observation determines month length, dates on the calendar correspond to moon phases. The first day of each month is the new moon. On the seventh or eighth day of each month, the first-quarter moon is visible in the afternoon and early evening. On the 15th or 16th day of each month, the full moon is visible all night. On the 22nd or 23rd day of each month, the last-quarter moon is visible late at night and in the morning. Since the beginning of the month is determined by when the new moon occurs, other countries using this calendar use their own time standards to calculate it; this results in deviations. The first new moon in 1968 was at 16:29 UTC on 29 January. Since North Vietnam used UTC+07:00 to calculate their Vietnamese calendar and South Vietnam used UTC+08:00 (Beijing time) to calculate theirs, North Vietnam began the Tết holiday at 29 January at 23:29 while South Vietnam began it on 30 January at 00:15. The time difference allowed asynchronous attacks in the Tet Offensive. Names of months and lunar date conventions Current naming conventions use numbers as the month names, although Lunar months were originally named according to natural phenomena phenology. Each month is also associated with one of the twelve Earthly Branches. Correspondences with Gregorian dates are approximate and should be used with caution. Many years have intercalary months. Though the numbered month names are often used for the corresponding month number in the Gregorian calendar, it is important to realize that the numbered month names are not interchangeable with the Gregorian months when talking about lunar dates. Incorrect: The Dragon Boat Festival falls on 5 May in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival, and Qixi Festival fall on 9 September, 15 January, and 7 July in the Lunar Calendar, respectively. Correct: The Dragon Boat Festival falls on Wǔyuè 5th (or, 5th day of the fifth month) in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival and Qixi Festival fall on Jiǔyuè 9th (or, 9th day of the ninth month), Zhēngyuè 15th (or, 15th day of the first month) and Qīyuè 7th (or, 7th day of the seventh month) in the Lunar Calendar, respectively. 
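The day-naming convention just described (Chū- for days 1 to 10, plain numerals for 11 to 20, Niàn- for 21 to 29, and Sānshí for day 30) can be expressed as a short function. The pinyin digit spellings are standard; the function itself is only an illustration.

```python
DIGITS = ["yī", "èr", "sān", "sì", "wǔ", "liù", "qī", "bā", "jiǔ", "shí"]

def lunar_day_name(day: int) -> str:
    """Pinyin rendering of a lunar day number following the convention above."""
    if not 1 <= day <= 30:
        raise ValueError("lunar day must be 1..30")
    if day <= 10:
        return "chū" + DIGITS[day - 1]           # chūyī .. chūshí
    if day < 20:
        return "shí" + DIGITS[day - 11]          # shíyī .. shíjiǔ
    if day == 20:
        return "èrshí"
    if day < 30:
        return "niàn" + DIGITS[day - 21]         # niànyī .. niànjiǔ
    return "sānshí"

print(lunar_day_name(1), lunar_day_name(15), lunar_day_name(23), lunar_day_name(30))
# chūyī shíwǔ niànsān sānshí, matching the examples given above
```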
Alternate Chinese Zodiac correction: The Dragon Boat Festival falls on Horse Month 5th in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival and Qixi Festival fall on Dog Month 9th, Tiger Month 15th and Monkey Month 7th in the Lunar Calendar, respectively. One may identify the heavenly stem and earthly branch corresponding to a particular day, to its month, and to its year in order to determine the Four Pillars of Destiny associated with it. The most convenient publication to consult for this is the Tung Shing, also referred to as the Chinese Almanac of the year or the Huangli, which contains the essential information concerning Chinese astrology. Days rotate through a sexagenary cycle marked by the coordination of heavenly stems and earthly branches; hence the Four Pillars of Destiny are also referred to as "Bazi", or "Birth Time Eight Characters", with each pillar consisting of a character for its heavenly stem and another for its earthly branch. Since Huangli days are sexagenary, their order is quite independent of their numeric order in each month, and of their numeric order within a week (referred to as True Animals in relation to the Chinese zodiac). Arriving at the Four Pillars of Destiny for a given date therefore requires painstaking calculation, which is rarely more convenient than simply looking up the Gregorian date in the Huangli. Solar term The solar year (), the time between winter solstices, is divided into 24 solar terms known as jié qì (節氣). Each term is a 15° portion of the ecliptic. These solar terms mark both Western and Chinese seasons, as well as equinoxes, solstices, and other Chinese events. The even solar terms (marked with "Z", for , Zhongqi) are considered the major terms, while the odd solar terms (marked with "J", for , Jieqi) are deemed minor. The solar terms qīng míng (清明) on 5 April and dōng zhì (冬至) on 22 December are both celebrated events in China. Solar year The calendar solar year, known as the suì (), begins on the December solstice and proceeds through the 24 solar terms. Since the speed of the Sun's apparent motion along the ecliptic is variable, the time between major solar terms is not fixed. This variation in time between major solar terms results in different solar year lengths. There are generally 11 or 12 complete months, plus two incomplete months around the winter solstice, in a solar year. The complete months are numbered from 0 to 10, and the incomplete months are considered the 11th month. If there are 12 complete months in the solar year, it is known as a leap solar year, or leap suì. Due to the inconsistencies in the length of the solar year, different versions of the traditional calendar might have different average solar year lengths. For example, one solar year of the 1st century BCE Tàichū calendar is (365.25016) days. A solar year of the 13th-century Shòushí calendar is (365.2425) days, identical to the Gregorian calendar. The additional .00766 day from the Tàichū calendar leads to a one-day shift every 130.5 years. Pairs of solar terms are climate terms, or solar months. The first solar term is "pre-climate" (), and the second is "mid-climate" (). If there are 12 complete months within a solar year, the first month without a mid-climate is the leap, or intercalary, month. In other words, the first month that does not include a major solar term is the leap month. 
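The intercalation rule stated here, namely that the first month of a leap suì containing no mid-climate term becomes the leap month, is straightforward to express in code. The sketch below works on purely schematic input: the caller supplies, for each lunar month, the list of major solar terms falling within it. A real determination would of course require astronomical new-moon and solar-term times.

```python
def find_leap_month(months):
    """Return the 0-based index of the first month containing no major solar
    term (zhongqi), i.e. the intercalary month, or None if every month has one.

    `months` is a list whose entries are the major solar terms falling inside
    each lunar month; the "Z" labels below are placeholders, not real data.
    """
    for i, zhongqi in enumerate(months):
        if not zhongqi:
            return i
    return None

# Hypothetical leap suì with 12 complete months: the fourth month here
# contains no mid-climate term, so it becomes the intercalary (rùn) month.
example = [["Z2"], ["Z3"], ["Z4"], [], ["Z5"], ["Z6"],
           ["Z7"], ["Z8"], ["Z9"], ["Z10"], ["Z11"], ["Z12"]]
print(find_leap_month(example))   # -> 3 (0-based index of the leap month)
```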
Leap months are numbered with rùn, the character for "intercalary", plus the name of the month they follow. In 2017, the intercalary month after month six was called Rùn Liùyuè, or "intercalary sixth month" () and written as 6i or 6+. The next intercalary month (in 2020, after month four) will be called Rùn Sìyuè () and written 4i or 4+. Lunisolar year The lunisolar year begins with the first spring month, Zhēngyuè (), and ends with the last winter month, Làyuè (). All other months are named for their number in the month order. See below on the timing of the Chinese New Year. Years were traditionally numbered by the reign in ancient China, but this was abolished after the founding of the People's Republic of China in 1949. For example, the year from 12 February 2021 to 31 January 2022 was a Xīnchǒu year () of 12 months or 354 days. The Tang dynasty used the Earthly Branches to mark the months from December 761 to May 762. Over this period, the year began with the winter solstice. Age reckoning In modern China, a person's official age is based on the Gregorian calendar. For traditional use, age is based on the Chinese Sui calendar. A child is considered one year old at birth. After each Chinese New Year, one year is added to their traditional age. Their age therefore is the number of Chinese calendar years in which they have lived. Due to the potential for confusion, the age of infants is often given in months instead of years. After the Gregorian calendar was introduced in China, the Chinese traditional age was referred to as the "nominal age" () and the Gregorian age was known as the "real age" (). Year-numbering systems Eras Ancient China numbered years from an emperor's ascension to the throne or his declaration of a new era name. The first recorded reign title was Jiànyuán (), from 140 BCE; the last reign title was Xuāntǒng (), from 1908 CE. The era system was abolished in 1912, after which the current or Republican era was used. Stem-branches The 60 stem-branches have been used to mark the date since the Shang dynasty (1600 BCE – 1046 BCE). Astrologers knew that the orbital period of Jupiter is about 12 × 361 = 4332 days, which they divided into 12 years () of 361 days each. The stem-branches system solved the era system's problem of unequal reign lengths. Chinese New Year The date of the Chinese New Year accords with the patterns of the lunisolar calendar and hence is variable from year to year. The invariant between years is that the winter solstice, Dongzhi, is required to be in the eleventh month of the year. This means that Chinese New Year will be on the second new moon after the previous winter solstice, unless there is a leap month 11 or 12 in the previous year. This rule is accurate; however, there are two other mostly (but not completely) accurate rules that are commonly stated: the new year is on the new moon closest to Lichun (typically 4 February), or the new year is on the first new moon after Dahan (typically 20 January). It has been found that Chinese New Year moves back by either 10, 11, or 12 days in most years. If it falls on or before 31 January, then it moves forward in the next year by either 18, 19, or 20 days. Chinese lunar date conventions Though the numbered month names are often used for the corresponding month number in the Gregorian calendar, it is important to realize that the numbered month names are not interchangeable with the Gregorian months when talking about lunar dates. 
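The age-reckoning rule above (one year old at birth, plus one at every Chinese New Year lived through) is easy to sketch, provided the New Year dates are looked up rather than computed. The dates used in the example are the actual Chinese New Year dates for 2023 and 2024; the function name and the strict treatment of a birth falling on New Year's Day itself are illustrative choices.

```python
from datetime import date

def nominal_age(birth: date, today: date, new_year_dates) -> int:
    """Traditional ("nominal") age: one year old at birth, plus one year for
    each Chinese New Year lived through.

    `new_year_dates` must be supplied by the caller (e.g. from an almanac),
    since computing them requires the full lunisolar calendar.
    """
    if today < birth:
        raise ValueError("today precedes the birth date")
    passed = sum(1 for ny in new_year_dates if birth < ny <= today)
    return 1 + passed

# A child born on 20 January 2023 (Chinese New Year that year fell on 22 January)
# is 1 at birth, 2 two days later, and 3 after New Year on 10 February 2024.
cny = [date(2023, 1, 22), date(2024, 2, 10)]
print(nominal_age(date(2023, 1, 20), date(2023, 1, 21), cny))   # 1
print(nominal_age(date(2023, 1, 20), date(2023, 1, 23), cny))   # 2
print(nominal_age(date(2023, 1, 20), date(2024, 2, 11), cny))   # 3
```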
Holidays Various traditional and religious holidays shared by communities throughout the world use the Chinese (Lunisolar) calendar: Holidays with the same day and same month The Chinese New Year (known as the Spring Festival/春節 in China) is on the first day of the first month and was traditionally called the Yuan Dan (元旦) or Zheng Ri (正日). In Vietnam it is known as Tết Nguyên Đán (). Traditionally it was the most important holiday of the year. It is an official holiday in China, Hong Kong, Macau, Taiwan, Vietnam, Korea, the Philippines, Malaysia, Singapore, Indonesia, and Mauritius. It is also a public holiday in Thailand's Narathiwat, Pattani, Yala and Satun provinces, and is an official public school holiday in New York City. The Double Third Festival is on the third day of the third month. The Dragon Boat Festival, or the Duanwu Festival (端午節), is on the fifth day of the fifth month and is an official holiday in China, Hong Kong, Macau, and Taiwan. It is also celebrated in Vietnam where it is known as Tết Đoan Dương (節端陽) The Qixi Festival (七夕節) is celebrated in the evening of the seventh day of the seventh month. It is also celebrated in Vietnam where it is known as Tết Ngâu. The Double Ninth Festival (重陽節) is celebrated on the ninth day of the ninth month. It is also celebrated in Vietnam where it is known as Tết Trùng Cửu (節重九). Full moon holidays (holidays on the fifteenth day) The Lantern Festival is celebrated on the fifteenth day of the first month and was traditionally called the Yuan Xiao (元宵) or Shang Yuan Festival (上元節). In Vietnam, it is known as Rằm tháng giêng. The Zhong Yuan Festival is celebrated on the fifteenth day of the seventh month. In Vietnam, it is celebrated as Lễ Vu Lan (禮盂蘭). The Mid-Autumn Festival is celebrated on the fifteenth day of the eighth month. In Vietnam, it is celebrated as Tết Trung Thu (節中秋). The Xia Yuan Festival is celebrated on the fifteenth day of the tenth month. In Vietnam, it is celebrated as Lễ mừng lúa mới. Celebrations of the twelfth month The Laba Festival is on the eighth day of the twelfth month. It is the enlightenment day of Sakyamuni Buddha and in Vietnam is known as Lễ Vía Phật Thích Ca thành đạo. The Kitchen God Festival is celebrated on the twenty-third day of the twelfth month in northern regions of China and on the twenty-fourth day of the twelfth month in southern regions of China. Chinese New Year's Eve is also known as the Chuxi Festival and is celebrated on the evening of the last day of the lunar calendar. It is celebrated wherever the lunar calendar is observed. Celebrations of solar-term holidays The Qingming Festival (清明节) is celebrated on the fifteenth day after the Spring Equinox. The Dongzhi Festival (冬至) or the Winter Solstice is celebrated. Religious holidays based on the lunar calendar East Asian Mahayana, Daoist, and some Cao Dai holidays and/or vegetarian observances are based on the Lunar Calendar. Celebrations in Japan Many of the above holidays of the lunar calendar are also celebrated in Japan, but since the Meiji era on the similarly numbered dates of the Gregorian calendar. Double celebrations due to intercalary months In the case when there is a corresponding intercalary month, the holidays may be celebrated twice. For example, in the hypothetical situation in which there is an additional intercalary seventh month, the Zhong Yuan Festival will be celebrated in the seventh month followed by another celebration in the intercalary seventh month. 
(The next such occasion will be in 2033, the first since the calendar reform of 1645.) Similar calendars Like Chinese characters, variants of the Chinese calendar have been used in different parts of the Sinosphere throughout history: this includes Vietnam, Korea, Singapore, Japan and Ryukyu, Mongolia, and elsewhere. Outlying areas of China Calendars of ethnic groups in the mountains and plateaus of southwestern China and the grasslands of northern China are based on their phenology and on the algorithms of traditional calendars of different periods, particularly the Tang dynasty and pre-Qin periods. Non-Chinese areas Korea, Vietnam, and the Ryukyu Islands adopted the Chinese calendar. In the respective regions, the Chinese calendar has been adapted into the Korean, Vietnamese, and Ryukyuan calendars, with the main difference from the Chinese calendar being the use of different meridians due to geography, leading to some astronomical events, and calendar events based on them, falling on different dates. The traditional Japanese calendar was also derived from the Chinese calendar (based on a Japanese meridian), but Japan abolished its official use in 1873 after Meiji Restoration reforms. Calendars in Mongolia and Tibet have absorbed elements of the traditional Chinese calendar but are not direct descendants of it. See also Chinese calendar correspondence table Chinese numerals East Asian age reckoning Guo Shoujing, an astronomer tasked with calendar reform during the 13th century List of festivals in Asia Metonic cycle of 19 years is used to reckon leap years with intercalary months in the Hebrew and Babylonian calendars Notes References Sources Further reading External links Calendars Chinese months Gregorian-Lunar calendar years (1901–2100) Chinese calendar and holidays Chinese calendar with Auspicious Events Chinese Calendar Online Calendar conversion 2000-year Chinese-Western calendar converter From 1 CE to 2100 CE. Useful for historical studies. To use, put the western year 年 month 月 day 日 in the bottom row and click on 執行. Western-Chinese calendar converter Rules Mathematics of the Chinese Calendar The Structure of the Chinese Calendar Calendar Horology Lunisolar calendars Specific calendars
Chinese calendar
[ "Physics" ]
9,453
[ "Spacetime", "Horology", "Physical quantities", "Time" ]
6,968
https://en.wikipedia.org/wiki/Customer%20relationship%20management
Customer relationship management (CRM) is a process in which a business or another organization administers its interactions with customers, typically using data analysis to study large amounts of information. CRM systems compile data from a range of different communication channels, including a company's website, telephone (many CRM software packages include a softphone), email, live chat, marketing materials and, more recently, social media. They allow businesses to learn more about their target audiences and how to better cater to their needs, thus retaining customers and driving sales growth. CRM may be used with past, present or potential customers. The concepts, procedures, and rules that a corporation follows when communicating with its consumers are referred to as CRM. This complete connection covers direct contact with customers, such as sales and service-related operations, as well as forecasting and the analysis of consumer patterns and behaviours, from the perspective of the company. The global customer relationship management market size is projected to grow from $101.41 billion in 2024 to $262.74 billion by 2032, at a compound annual growth rate (CAGR) of 12.6%. History The concept of customer relationship management started in the early 1970s, when customer satisfaction was evaluated using annual surveys or by asking front-line staff. At that time, businesses had to rely on standalone mainframe systems to automate sales, but the extent of technology allowed them to categorize customers in spreadsheets and lists. One of the best-known precursors of modern-day CRM is the Farley File. Developed by Franklin Roosevelt's campaign manager, James Farley, the Farley File was a comprehensive set of records detailing political and personal facts about people FDR and Farley met or were supposed to meet. Using it, people that FDR met were impressed by his "recall" of facts about their family and what they were doing professionally and politically. In 1982, Kate and Robert D. Kestenbaum introduced the concept of database marketing, namely applying statistical methods to analyze and gather customer data. By 1986, Pat Sullivan and Mike Muhney had released a customer evaluation system called ACT! based on the principle of a digital Rolodex, which offered a contact management service for the first time. The trend was followed by numerous companies and independent developers trying to maximize lead potential, including Tom Siebel of Siebel Systems, who designed the first CRM product, Siebel Customer Relationship Management, in 1993. In order to compete with these new and quickly growing stand-alone CRM solutions, established enterprise resource planning (ERP) software companies like Oracle, Zoho Corporation, SAP, Peoplesoft (an Oracle subsidiary as of 2005) and Navision started extending their sales, distribution and customer service capabilities with embedded CRM modules. This included embedding sales force automation or extended customer service (e.g. inquiry, activity management) as CRM features in their ERP. Customer relationship management was popularized in 1997 due to the work of Siebel, Gartner, and IBM. Between 1997 and 2000, leading CRM products were enriched with shipping and marketing capabilities. Siebel introduced the first mobile CRM app called Siebel Sales Handheld in 1999. The idea of a stand-alone, cloud-hosted customer base was soon adopted by other leading providers at the time, including PeopleSoft (acquired by Oracle), Oracle, SAP and Salesforce.com. The first open-source CRM system was developed by SugarCRM in 2004. 
During this period, CRM was rapidly migrating to the cloud, as a result of which it became accessible to sole entrepreneurs and small teams. This increase in accessibility generated a huge wave of price reduction. Around 2009, developers began considering the options to profit from social media's momentum and designed tools to help companies become accessible on all users' favourite networks. Many startups at the time benefited from this trend to provide exclusively social CRM solutions, including Base and Nutshell. The same year, Gartner organized and held the first Customer Relationship Management Summit, and summarized the features systems should offer to be classified as CRM solutions. In 2013 and 2014, most of the popular CRM products were linked to business intelligence systems and communication software to improve corporate communication and end-users' experience. The leading trend is to replace standardized CRM solutions with industry-specific ones, or to make them customizable enough to meet the needs of every business. In November 2016, Forrester released a report where it "identified the nine most significant CRM suites from eight prominent vendors". Types Strategic Strategic CRM concentrates upon the development of a customer-centric business culture. The focus of a business on being customer-centric (in design and implementation of their CRM strategy) will translate into an improved CLV. Operational The primary goal of CRM systems is integration and automation of sales, marketing, and customer support. Therefore, these systems typically have a dashboard that gives an overall view of the three functions on a single customer view, a single page for each customer that a company may have. The dashboard may provide client information, past sales, previous marketing efforts, and more, summarizing all of the relationships between the customer and the firm. Operational CRM is made up of three main components: sales force automation, marketing automation, and service automation. Sales force automation works with all stages in the sales cycle, from initially entering contact information to converting a prospective client into an actual client. It implements sales promotion analysis, automates the tracking of a client's account history for repeated sales or future sales and coordinates sales, marketing, call centers, and retail outlets. It prevents duplicate efforts between a salesperson and a customer and also automatically tracks all contacts and follow-ups between both parties. Marketing automation focuses on easing the overall marketing process to make it more effective and efficient. CRM tools with marketing automation capabilities can automate repeated tasks, for example, sending out automated marketing emails at certain times to customers or posting marketing information on social media. The goal with marketing automation is to turn a sales lead into a full customer. CRM systems today also work on customer engagement through social media. Service automation is the part of the CRM system that focuses on direct customer service technology. Through service automation, customers are supported through multiple channels such as phone, email, knowledge bases, ticketing portals, FAQs, and more. Analytical The role of analytical CRM systems is to analyze customer data collected through multiple sources and present it so that business managers can make more informed decisions. 
Analytical CRM systems use techniques such as data mining, correlation, and pattern recognition to analyze customer data. These analytics help improve customer service by finding small problems which can be solved, perhaps by marketing to different parts of a consumer audience differently. For example, through the analysis of a customer base's buying behavior, a company might see that this customer base has not been buying a lot of products recently. After reviewing their data, the company might think to market to this subset of consumers differently to best communicate how this company's products might benefit this group specifically. Collaborative The third primary aim of CRM systems is to incorporate external stakeholders such as suppliers, vendors, and distributors, and share customer information across groups/departments and organizations. For example, feedback can be collected from technical support calls, which could help provide direction for marketing products and services to that particular customer in the future. Customer data platform A customer data platform (CDP) is a computer system used by marketing departments that assembles data about individual people from various sources into one database, with which other software systems can interact. , about twenty companies were selling such systems and revenue for them was around US$300 million. Components The main components of CRM are building and managing customer relationships through marketing, observing relationships as they mature through distinct phases, managing these relationships at each stage and recognizing that the distribution of the value of a relationship to the firm is not homogeneous. When building and managing customer relationships through marketing, firms might benefit from using a variety of tools to help organizational design, incentive schemes, customer structures, and more to optimize the reach of their marketing campaigns. Through the acknowledgment of the distinct phases of CRM, businesses will be able to benefit from seeing the interaction of multiple relationships as connected transactions. The final factor of CRM highlights the importance of CRM through accounting for the profitability of customer relationships. By studying the particular spending habits of customers, a firm may be able to dedicate different resources and amounts of attention to different types of consumers. Relational Intelligence, which is the awareness of the variety of relationships a customer can have with a firm and the ability of the firm to reinforce or change those connections, is an important component of the main phases of CRM. Companies may be good at capturing demographic data, such as gender, age, income, and education, and connecting them with purchasing information to categorize customers into profitability tiers, but this is only a firm's industrial view of customer relationships. A lack of relational intelligence is a sign that firms still see customers as resources that can be used for up-sell or cross-sell opportunities, rather than people looking for interesting and personalized interactions. CRM systems include: Data warehouse technology, which is used to aggregate transaction information, to merge the information with CRM products, and to provide key performance indicators. Opportunity management, which helps the company to manage unpredictable growth and demand and implement a good forecasting model to integrate sales history with sales projections. 
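As a concrete (and deliberately simplified) illustration of the kind of analysis described above, the sketch below flags customers whose most recent purchase is older than a chosen recency threshold, the sort of rule an analytical CRM might surface for a re-engagement campaign. The customer IDs, purchase dates, and the 180-day threshold are all invented for the example.

```python
from datetime import date

# Hypothetical order history: customer id -> list of purchase dates.
orders = {
    "C001": [date(2024, 1, 5), date(2024, 6, 2), date(2024, 11, 20)],
    "C002": [date(2023, 2, 14)],
    "C003": [date(2024, 10, 1), date(2024, 12, 15)],
}

def lapsed_customers(orders, as_of: date, days: int = 180):
    """Flag customers whose most recent purchase is older than `days` days,
    a simple recency rule for targeting a re-engagement campaign."""
    flagged = []
    for customer, dates in orders.items():
        recency = (as_of - max(dates)).days
        if recency > days:
            flagged.append((customer, recency))
    return flagged

print(lapsed_customers(orders, as_of=date(2025, 1, 1)))
# [('C002', 687)]: only C002 has gone more than 180 days without buying
```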
CRM systems that track and measure marketing campaigns over multiple networks, tracking customer analysis by customer clicks and sales. Some CRM software is available as a software as a service (SaaS), delivered via the internet and accessed via a web browser instead of being installed on a local computer. Businesses using the software do not purchase it but typically pay a recurring subscription fee to the software vendor. For small businesses, a CRM system may consist of a contact management system that integrates emails, documents, jobs, faxes, and scheduling for individual accounts. CRM systems available for specific markets (legal, finance) frequently focus on event management and relationship tracking as opposed to financial return on investment (ROI). CRM systems for eCommerce focus on marketing automation tasks such as cart rescue, re-engaging users with email, and personalization. Customer-centric relationship management (CCRM) is a nascent sub-discipline that focuses on customer preferences instead of customer leverage. CCRM aims to add value by engaging customers in individual, interactive relationships. Systems for non-profit and membership-based organizations help track constituents, fundraising, sponsors' demographics, membership levels, membership directories, volunteering and communication with individuals. CRM not only indicates technology and strategy but also indicates an integrated approach that includes employees knowledge and organizational culture to embrace the CRM philosophy. Effect on customer satisfaction Customer satisfaction has important implications for the economic performance of firms because it has the ability to increase customer loyalty and usage behavior and reduce customer complaints and the likelihood of customer defection. The implementation of a CRM approach is likely to affect customer satisfaction and customer knowledge for a variety of different reasons. Firstly, firms can customize their offerings for each customer. By accumulating information across customer interactions and processing this information to discover hidden patterns, CRM applications help firms customize their offerings to suit the individual tastes of their customers. This customization enhances the perceived quality of products and services from a customer's viewpoint, and because the perceived quality is a determinant of customer satisfaction, it follows that CRM applications indirectly affect customer satisfaction. CRM applications also enable firms to provide timely, accurate processing of customer orders and requests and the ongoing management of customer accounts. For example, Piccoli and Applegate discuss how Wyndham uses IT tools to deliver a consistent service experience across its various properties to a customer. Both an improved ability to customize and reduced variability of the consumption experience enhance perceived quality, which in turn positively affects customer satisfaction. CRM applications also help firms manage customer relationships more effectively across the stages of relationship initiation, maintenance, and termination. Customer benefits With CRM systems, customers are served on the day-to-day process. With more reliable information, their demand for self-service from companies will decrease. If there is less need to interact with the company for different problems, then the customer satisfaction level is expected to increase. 
These central benefits of CRM will be connected hypothetically to the three kinds of equity, which are relationship, value, and brand, and in the end to customer equity. Eight benefits were recognized to provide value drivers. Enhanced ability to target profitable customers. Integrated assistance across channels. Enhanced sales force efficiency and effectiveness. Improved pricing. Customized products and services. Improved customer service efficiency and effectiveness. Individualized marketing messages are also called campaigns. Connect customers and all channels on a single platform. Examples Research has found a 5% increase in customer retention boosts lifetime customer profits by 50% on average across multiple industries, as well as a boost of up to 90% within specific industries such as insurance. Companies that have mastered customer relationship strategies have the most successful CRM programs. For example, MBNA Europe has had a 75% annual profit growth since 1995. The firm heavily invests in screening potential cardholders. Once proper clients are identified, the firm retains 97% of its profitable customers. They implement CRM by marketing the right products to the right customers. The firm's customers' card usage is 52% above the industry norm, and the average expenditure is 30% more per transaction. Also 10% of their account holders ask for more information on cross-sale products. Amazon has also seen successes through its customer proposition. The firm implemented personal greetings, collaborative filtering, and more for the customer. They also used CRM training for the employees to see up to 80% of customers repeat. Customer profile A customer profile is a detailed description of any particular classification of customer which is created to represent the typical users of a product or service. Customer profiling is a method to understand your customers in terms of demographics, behaviour and lifestyle. It is used to help make customer-focused decisions without confusing the scope of the project with personal opinion. Overall profiling is gathering information that sums up consumption habits so far and projects them into the future so that they can be grouped for marketing and advertising purposes. Customer or consumer profiles are the essences of the data that is collected alongside core data (name, address, company) and processed through customer analytics methods, essentially a type of profiling. The three basic methods of customer profiling are the psychographic approach, the consumer typology approach, and the consumer characteristics approach. These customer profiling methods help you design your business around who your customers are and help you make better customer-centered decisions. Improving CRM Consultants hold that it is important for companies to establish strong CRM systems to improve their relational intelligence. According to this argument, a company must recognize that people have many different types of relationships with different brands. One research study analyzed relationships between consumers in China, Germany, Spain, and the United States, with over 200 brands in 11 industries including airlines, cars, and media. This information is valuable as it provides demographic, behavioral, and value-based customer segmentation. These types of relationships can be both positive and negative. Some customers view themselves as friends of the brands, while others as enemies, and some are mixed with a love-hate relationship with the brand. 
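The outsized effect of retention on lifetime profits cited earlier in this section can be illustrated (not reproduced) with a textbook steady-state customer-lifetime-value formula, in which a customer yielding a fixed margin per period is retained with some probability each period and future margins are discounted. All the numbers below are invented; the point is only that a modest retention gain compounds into a much larger lifetime-value gain.

```python
def simple_clv(margin: float, retention: float, discount: float) -> float:
    """Textbook steady-state customer lifetime value: expected discounted
    margin of a customer retained with probability `retention` each period,
    discounted at rate `discount` per period."""
    return margin * retention / (1 + discount - retention)

base = simple_clv(margin=100.0, retention=0.80, discount=0.10)
better = simple_clv(margin=100.0, retention=0.85, discount=0.10)
print(f"CLV at 80% retention: {base:.0f}")
print(f"CLV at 85% retention: {better:.0f} ({(better / base - 1) * 100:+.1f}%)")
```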
Some relationships are distant, intimate, or anything in between. Data analysis Managers must understand the different reasons for the types of relationships, and provide the customer with what they are looking for. Companies can collect this information by using surveys, interviews, and more, with current customers. Companies must also improve the relational intelligence of their CRM systems. Companies store and receive huge amounts of data through emails, online chat sessions, phone calls, and more. Many companies do not properly make use of this great amount of data, however. All of these are signs of what types of relationships the customer wants with the firm, and therefore companies may consider investing more time and effort in building out their relational intelligence. Companies can use data mining technologies and web searches to understand relational signals. Social media such as social networking sites, blogs, and forums can also be used to collect and analyze information. Understanding the customer and capturing this data allows companies to convert customers' signals into information and knowledge that the firm can use to understand a potential customer's desired relations with a brand. Employee training Many firms have also implemented training programs to teach employees how to recognize and create strong customer-brand relationships. Other employees have also been trained in social psychology and the social sciences to help bolster customer relationships. Customer service representatives must be trained to value customer relationships and trained to understand existing customer profiles. Even the finance and legal departments should understand how to manage and build relationships with customers. In practice Call centers Contact centre CRM providers are popular for small and mid-market businesses. These systems codify the interactions between the company and customers by using analytics and key performance indicators to give the users information on where to focus their marketing and customer service. This allows agents to have access to a caller's history to provide personalized customer communication. The intention is to maximize average revenue per user, decrease churn rate and decrease idle and unproductive contact with the customers. Growing in popularity is the idea of gamifying, or using game design elements and game principles in a non-game environment such as customer service environments. The gamification of customer service environments includes providing elements found in games like rewards and bonus points to customer service representatives as a method of feedback for a job well done. Gamification tools can motivate agents by tapping into their desire for rewards, recognition, achievements, and competition. Contact-center automation Contact-center automation, CCA, the practice of having an integrated system that coordinates contacts between an organization and the public, is designed to reduce the repetitive and tedious parts of a contact center agent's job. Automation prevents this by having pre-recorded audio messages that help customers solve their problems. For example, an automated contact center may be able to re-route a customer through a series of commands asking him or her to select a certain number to speak with a particular contact center agent who specializes in the field in which the customer has a question. Software tools can also integrate with the agent's desktop tools to handle customer questions and requests. 
This also saves time on behalf of the employees. Social media Social CRM involves the use of social media and technology to engage and learn from consumers. Because the public, especially young people, are increasingly using social networking sites, companies use these sites to draw attention to their products, services and brands, with the aim of building up customer relationships to increase demand. With the increase in the use of social media platforms, integrating CRM with the help of social media can potentially be a quicker and more cost-friendly process. Some CRM systems integrate social media sites like Twitter, LinkedIn, and Facebook to track and communicate with customers. These customers also share their own opinions and experiences with a company's products and services, giving these firms more insight. Therefore, these firms can both share their own opinions and also track the opinions of their customers. Enterprise feedback management software platforms combine internal survey data with trends identified through social media to allow businesses to make more accurate decisions on which products to supply. Location-based services CRM systems can also include technologies that create geographic marketing campaigns. The systems take in information based on a customer's physical location and sometimes integrates it with popular location-based GPS applications. It can be used for networking or contact management as well to help increase sales based on location. Business-to-business transactions Despite the general notion that CRM systems were created for customer-centric businesses, they can also be applied to B2B environments to streamline and improve customer management conditions. For the best level of CRM operation in a B2B environment, the software must be personalized and delivered at individual levels. The main differences between business-to-consumer (B2C) and business-to-business CRM systems concern aspects like sizing of contact databases and length of relationships. Market trends Social networking In the Gartner CRM Summit 2010 challenges like "system tries to capture data from social networking traffic like Twitter, handles Facebook page addresses or other online social networking sites" were discussed and solutions were provided that would help in bringing more clientele. The era of the "social customer" refers to the use of social media by customers. Mobile Some CRM systems are equipped with mobile capabilities, making information accessible to remote sales staff. Cloud computing and SaaS Many CRM vendors offer subscription-based web tools (cloud computing) and SaaS. Salesforce.com was the first company to provide enterprise applications through a web browser, and has maintained its leadership position. Traditional providers moved into the cloud-based market via acquisitions of smaller providers: Oracle purchased RightNow in October 2011, and Taleo and Eloqua in 2012; SAP acquired SuccessFactors in December 2011 and NetSuite acquired Verenia in 2022. Sales and sales force automation Sales forces also play an important role in CRM, as maximizing sales effectiveness and increasing sales productivity is a driving force behind the adoption of CRM software. Some of the top CRM trends identified in 2021 include focusing on customer service automation such as chatbots, hyper-personalization based on customer data and insights, and the use of unified CRM systems. 
CRM vendors support sales productivity with different products, such as tools that measure the effectiveness of ads that appear in 3D video games. Pharmaceutical companies were some of the first investors in sales force automation (SFA) and some are on their third- or fourth-generation implementations. However, until recently, the deployments did not extend beyond SFA—limiting their scope and interest to Gartner analysts. Vendor relationship management Another related development is vendor relationship management (VRM), which provide tools and services that allow customers to manage their individual relationship with vendors. VRM development has grown out of efforts by ProjectVRM at Harvard's Berkman Center for Internet & Society and Identity Commons' Internet Identity Workshops, as well as by a growing number of startups and established companies. VRM was the subject of a cover story in the May 2010 issue of CRM Magazine. Customer success Another trend worth noting is the rise of Customer Success as a discipline within companies. More and more companies establish Customer Success teams as separate from the traditional Sales team and task them with managing existing customer relations. This trend fuels demand for additional capabilities for a more holistic understanding of customer health, which is a limitation for many existing vendors in the space. As a result, a growing number of new entrants enter the market while existing vendors add capabilities in this area to their suites. AI and predictive analytics In 2017, artificial intelligence and predictive analytics were identified as the newest trends in CRM. Criticism Companies face large challenges when trying to implement CRM systems. Consumer companies frequently manage their customer relationships haphazardly and unprofitably. They may not effectively or adequately use their connections with their customers, due to misunderstandings or misinterpretations of a CRM system's analysis. Clients may be treated like an exchange party, rather than a unique individual, due to, occasionally, a lack of a bridge between the CRM data and the CRM analysis output. Many studies show that customers are frequently frustrated by a company's inability to meet their relationship expectations, and on the other side, companies do not always know how to translate the data they have gained from CRM software into a feasible action plan. In 2003, a Gartner report estimated that more than $2 billion had been spent on software that was not being used. According to CSO Insights, less than 40 percent of 1,275 participating companies had end-user adoption rates above 90 percent. Many corporations only use CRM systems on a partial or fragmented basis. In a 2007 survey from the UK, four-fifths of senior executives reported that their biggest challenge is getting their staff to use the systems they had installed. Forty-three percent of respondents said they use less than half the functionality of their existing systems. However, market research regarding consumers' preferences may increase the adoption of CRM among developing countries' consumers. Collection of customer data such as personally identifiable information must strictly obey customer privacy laws, which often requires extra expenditures on legal support. Part of the paradox with CRM stems from the challenge of determining exactly what CRM is and what it can do for a company. The CRM paradox, also referred to as the "dark side of CRM", may entail favoritism and differential treatment of some customers. 
This can happen because a business prioritizes customers who are more profitable, more relationship-orientated or tend to have increased loyalty to the company. Although focusing on such customers by itself is not a bad thing, it can leave other customers feeling left out and alienated potentially decreasing profits because of it. CRM technologies can easily become ineffective if there is no proper management, and they are not implemented correctly. The data sets must also be connected, distributed, and organized properly so that the users can access the information that they need quickly and easily. Research studies also show that customers are increasingly becoming dissatisfied with contact center experiences due to lags and wait times. They also request and demand multiple channels of communication with a company, and these channels must transfer information seamlessly. Therefore, it is increasingly important for companies to deliver a cross-channel customer experience that can be both consistent as well as reliable. See also References Business computing Office and administrative support occupations Marketing techniques Services marketing
Customer relationship management
[ "Technology" ]
5,267
[ "Computing and society", "Business computing" ]
6,979
https://en.wikipedia.org/wiki/Cell%20Cycle
Cell Cycle is a biweekly peer-reviewed scientific journal covering all aspects of cell biology. It was established in 2002. Originally published bimonthly, it is now published biweekly. Abstracting and indexing The journal is abstracted and indexed in major bibliographic databases. According to the Journal Citation Reports, the journal has a 5-year impact factor of 7.7. See also Autophagy Cell Biology International Cell and Tissue Research References External links Molecular and cellular biology journals Biweekly journals Academic journals established in 2002 English-language journals Taylor & Francis academic journals
Cell Cycle
[ "Chemistry" ]
114
[ "Molecular and cellular biology journals", "Molecular biology" ]
6,985
https://en.wikipedia.org/wiki/Chlorophyll
Chlorophyll is any of several related green pigments found in cyanobacteria and in the chloroplasts of algae and plants. Its name is derived from the Greek words for "pale green" and "leaf". Chlorophyll allows plants to absorb energy from light. Those pigments are involved in oxygenic photosynthesis, as opposed to bacteriochlorophylls, related molecules found only in bacteria and involved in anoxygenic photosynthesis. Chlorophylls absorb light most strongly in the blue portion of the electromagnetic spectrum as well as the red portion. Conversely, they are poor absorbers of green and near-green portions of the spectrum. Hence chlorophyll-containing tissues appear green because green light, diffusively reflected by structures like cell walls, is less absorbed. Two types of chlorophyll exist in the photosystems of green plants: chlorophyll a and b. History Chlorophyll was first isolated and named by Joseph Bienaimé Caventou and Pierre Joseph Pelletier in 1817. The presence of magnesium in chlorophyll was discovered in 1906, and was the first detection of that element in living tissue. After initial work done by German chemist Richard Willstätter spanning from 1905 to 1915, the general structure of chlorophyll a was elucidated by Hans Fischer in 1940. By 1960, when most of the stereochemistry of chlorophyll a was known, Robert Burns Woodward published a total synthesis of the molecule. In 1967, the last remaining stereochemical elucidation was completed by Ian Fleming, and in 1990 Woodward and co-authors published an updated synthesis. Chlorophyll f was announced to be present in cyanobacteria and other oxygenic microorganisms that form stromatolites in 2010; a molecular formula of C55H70O6N4Mg and a structure of (2-formyl)-chlorophyll a were deduced based on NMR, optical and mass spectra. Photosynthesis Chlorophyll is vital for photosynthesis, which allows plants to absorb energy from light. Chlorophyll molecules are arranged in and around photosystems that are embedded in the thylakoid membranes of chloroplasts. In these complexes, chlorophyll serves three functions: the function of the vast majority of chlorophyll (up to several hundred molecules per photosystem) is to absorb light. Having done so, these same centers execute their second function: the transfer of that energy by resonance energy transfer to a specific chlorophyll pair in the reaction center of the photosystems. This specific pair performs the final function of chlorophylls: charge separation, which produces the unbound protons (H+) and electrons (e−) that separately propel biosynthesis. The two currently accepted photosystem units are photosystem I and photosystem II, which have their own distinct reaction centres, named P700 and P680, respectively. These centres are named after the wavelength (in nanometers) of their red-peak absorption maximum. The identity, function and spectral properties of the types of chlorophyll in each photosystem are distinct and determined by each other and the protein structure surrounding them. The function of the reaction center of chlorophyll is to absorb light energy and transfer it to other parts of the photosystem. The absorbed energy of the photon is transferred to an electron in a process called charge separation. The removal of the electron from the chlorophyll is an oxidation reaction. The chlorophyll donates the high-energy electron to a series of molecular intermediates called an electron transport chain.
The charged reaction center of chlorophyll (P680+) is then reduced back to its ground state by accepting an electron stripped from water. The electron that reduces P680+ ultimately comes from the oxidation of water into O2 and H+ through several intermediates. This reaction is how photosynthetic organisms such as plants produce O2 gas, and is the source for practically all the O2 in Earth's atmosphere. Photosystem I typically works in series with Photosystem II; thus the P700+ of Photosystem I is usually reduced as it accepts the electron, via many intermediates in the thylakoid membrane, by electrons coming, ultimately, from Photosystem II. Electron transfer reactions in the thylakoid membranes are complex, however, and the source of electrons used to reduce P700+ can vary. The electron flow produced by the reaction center chlorophyll pigments is used to pump H+ ions across the thylakoid membrane, setting up a proton-motive force, a chemiosmotic potential used mainly in the production of ATP (stored chemical energy) or to reduce NADP+ to NADPH. NADPH is a universal agent used to reduce CO2 into sugars as well as in other biosynthetic reactions. Reaction center chlorophyll–protein complexes are capable of directly absorbing light and performing charge separation events without the assistance of other chlorophyll pigments, but the probability of that happening under a given light intensity is small. Thus, the other chlorophylls in the photosystem and antenna pigment proteins all cooperatively absorb and funnel light energy to the reaction center. Besides chlorophyll a, there are other pigments, called accessory pigments, which occur in these pigment–protein antenna complexes. Chemical structure Several chlorophylls are known. All are defined as derivatives of the parent chlorin by the presence of a fifth, ketone-containing ring beyond the four pyrrole-like rings. Most chlorophylls are classified as chlorins, which are reduced relatives of porphyrins (found in hemoglobin). They share a common biosynthetic pathway with porphyrins, including the precursor uroporphyrinogen III. Unlike hemes, which contain iron bound to the N4 center, most chlorophylls bind magnesium. The axial ligands attached to the Mg2+ center are often omitted for clarity. Appended to the chlorin ring are various side chains, usually including a long phytyl chain. The most widely distributed form in terrestrial plants is chlorophyll a. Chlorophyll a has a methyl group in place of the formyl group found in chlorophyll b. This difference affects the absorption spectrum, allowing plants to absorb a greater portion of visible light. The name chlorophyll e is reserved for a pigment that was extracted from algae in 1966 but has not been chemically described. Besides the lettered chlorophylls, a wide variety of sidechain modifications to the chlorophyll structures are known in the wild. For example, Prochlorococcus, a cyanobacterium, uses 8-vinyl Chl a and b. Measurement of chlorophyll content Chlorophylls can be extracted from the protein into organic solvents. In this way, the concentration of chlorophyll within a leaf can be estimated. Methods also exist to separate chlorophyll a and chlorophyll b. In diethyl ether, chlorophyll a has approximate absorbance maxima of 430 nm and 662 nm, while chlorophyll b has approximate maxima of 453 nm and 642 nm. The absorption peaks of chlorophyll a are at 465 nm and 665 nm. Chlorophyll a fluoresces at 673 nm (maximum) and 726 nm.
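Because chlorophyll a and chlorophyll b absorb at overlapping but distinct wavelengths, absorbance readings taken at two wavelengths can be converted into concentration estimates by solving a pair of simultaneous Beer–Lambert equations. The sketch below illustrates only the arithmetic of that step; the extinction coefficients, band positions, and absorbance readings are placeholder values for the demonstration and are not the published coefficients for any particular solvent.

import numpy as np

# Beer–Lambert model for a two-pigment mixture (1 cm path length assumed):
#   A(wavelength) = eps_a(wavelength) * C_a + eps_b(wavelength) * C_b
# The coefficients below (L g^-1 cm^-1) are illustrative placeholders; substitute
# literature values for the solvent actually used before trusting the numbers.
EPS = np.array([[85.0, 15.0],   # eps_a, eps_b at the "red" band near 664 nm
                [20.0, 55.0]])  # eps_a, eps_b at the second band near 647 nm

def chlorophyll_concentrations(a_red, a_second):
    """Return (C_a, C_b) in g/L from absorbance readings at the two bands."""
    return np.linalg.solve(EPS, np.array([a_red, a_second]))

c_a, c_b = chlorophyll_concentrations(a_red=0.52, a_second=0.24)
print(f"chlorophyll a ~ {c_a * 1000:.2f} mg/L, chlorophyll b ~ {c_b * 1000:.2f} mg/L")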
The peak molar absorption coefficient of chlorophyll a exceeds 10⁵ M⁻¹ cm⁻¹, which is among the highest for small-molecule organic compounds. In 90% acetone-water, the peak absorption wavelengths of chlorophyll a are 430 nm and 664 nm; peaks for chlorophyll b are 460 nm and 647 nm; peaks for chlorophyll c1 are 442 nm and 630 nm; peaks for chlorophyll c2 are 444 nm and 630 nm; peaks for chlorophyll d are 401 nm, 455 nm and 696 nm. Ratio fluorescence emission can be used to measure chlorophyll content. By exciting chlorophyll a fluorescence at a lower wavelength, the ratio of chlorophyll fluorescence emission at about 700 nm and 735 nm can provide a linear relationship of chlorophyll content when compared with chemical testing. The ratio F735/F700 provided a correlation value of r² = 0.96 compared with chemical testing in the range from 41 mg m⁻² up to 675 mg m⁻². Gitelson also developed a formula for direct readout of chlorophyll content in mg m⁻². The formula provided a reliable method of measuring chlorophyll content from 41 mg m⁻² up to 675 mg m⁻² with a correlation r² value of 0.95. Also, the chlorophyll concentration can be estimated by measuring the light transmittance through the plant leaves. The assessment of leaf chlorophyll content using optical sensors such as Dualex and SPAD allows researchers to perform real-time and non-destructive measurements. Research shows that these methods have a positive correlation with laboratory measurements of chlorophyll. Biosynthesis In some plants, chlorophyll is derived from glutamate and is synthesised along a branched biosynthetic pathway that is shared with heme and siroheme. Chlorophyll synthase is the enzyme that completes the biosynthesis of chlorophyll a: chlorophyllide a + phytyl diphosphate → chlorophyll a + diphosphate. This conversion forms an ester of the carboxylic acid group in chlorophyllide a with the 20-carbon diterpene alcohol phytol. Chlorophyll b is made by the same enzyme acting on chlorophyllide b. The same is known for chlorophyll d and f, both made from corresponding chlorophyllides ultimately made from chlorophyllide a. In angiosperm plants, the later steps in the biosynthetic pathway are light-dependent. Such plants are pale (etiolated) if grown in darkness. Non-vascular plants and green algae have an additional light-independent enzyme and grow green even in darkness. Chlorophyll is bound to proteins. Protochlorophyllide, one of the biosynthetic intermediates, occurs mostly in the free form and, under light conditions, acts as a photosensitizer, forming free radicals, which can be toxic to the plant. Hence, plants regulate the amount of this chlorophyll precursor. In angiosperms, this regulation is achieved at the step of aminolevulinic acid (ALA), one of the intermediate compounds in the biosynthesis pathway. Plants that are fed ALA accumulate high and toxic levels of protochlorophyllide; so do the mutants with a damaged regulatory system. Senescence and the chlorophyll cycle The process of plant senescence involves the degradation of chlorophyll: for example, the enzyme chlorophyllase hydrolyses the phytyl sidechain to reverse the reaction in which chlorophylls are biosynthesised from chlorophyllide a or b. Since chlorophyllide a can be converted to chlorophyllide b and the latter can be re-esterified to chlorophyll b, these processes allow cycling between chlorophylls a and b. Moreover, chlorophyll b can be directly reduced back to chlorophyll a, completing the cycle.
In later stages of senescence, chlorophyllides are converted to a group of colourless tetrapyrroles known as nonfluorescent chlorophyll catabolites (NCCs). These compounds have also been identified in ripening fruits and they give characteristic autumn colours to deciduous plants. Distribution Chlorophyll maps from 2002 to 2024, provided by NASA, show milligrams of chlorophyll per cubic meter of seawater each month. Places where chlorophyll amounts are very low, indicating very low numbers of phytoplankton, are blue. Places where chlorophyll concentrations are high, meaning many phytoplankton were growing, are yellow. The observations come from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Aqua satellite. Land is dark gray, and places where MODIS could not collect data because of sea ice, polar darkness, or clouds are light gray. The highest chlorophyll concentrations, marking where tiny surface-dwelling ocean plants thrive, are in cold polar waters or in places where ocean currents bring cold water to the surface, such as around the equator and along the shores of continents. It is not the cold water itself that stimulates the phytoplankton. Instead, the cool temperatures are often a sign that the water has welled up to the surface from deeper in the ocean, carrying nutrients that have built up over time. In polar waters, nutrients accumulate in surface waters during the dark winter months when plants cannot grow. When sunlight returns in the spring and summer, the plants flourish in high concentrations. Uses Culinary Synthetic chlorophyll is registered as a food additive colorant, and its E number is E140. Chefs use chlorophyll to color a variety of foods and beverages green, such as pasta and spirits. Absinthe gains its green color naturally from the chlorophyll introduced through the large variety of herbs used in its production. Chlorophyll is not soluble in water, and it is first mixed with a small quantity of vegetable oil to obtain the desired solution. In marketing In the years 1950–1953 in particular, chlorophyll was used as a marketing tool to promote toothpaste, sanitary towels, soap and other products. This was based on claims that it was an odor blocker, a finding from research by F. Howard Westcott in the 1940s, and the commercial value of this attribute in advertising led to many companies creating brands containing the compound. However, it was soon determined that the hype surrounding chlorophyll was not warranted and the underlying research may even have been a hoax. As a result, brands rapidly discontinued its use. In the 2020s, chlorophyll again became the subject of unsubstantiated medical claims, as social media influencers promoted its use in the form of "chlorophyll water", for example. See also Bacteriochlorophyll, related compounds in phototrophic bacteria Chlorophyllin, a semi-synthetic derivative of chlorophyll Deep chlorophyll maximum Chlorophyll fluorescence, to measure plant stress Purple Earth hypothesis, a scientific hypothesis that explains the evolution of red-blue spectral affinity of chlorophyll. References Tetrapyrroles Photosynthetic pigments Articles containing video clips E-number additives
Chlorophyll
[ "Chemistry" ]
3,257
[ "Photosynthetic pigments", "Photosynthesis" ]
6,986
https://en.wikipedia.org/wiki/Carotene
The term carotene (also carotin, from the Latin carota, "carrot") is used for many related unsaturated hydrocarbon substances having the formula C40Hx, which are synthesized by plants but in general cannot be made by animals (with the exception of some aphids and spider mites which acquired the synthesizing genes from fungi). Carotenes are photosynthetic pigments important for photosynthesis. Carotenes contain no oxygen atoms. They absorb ultraviolet, violet, and blue light and scatter orange or red light, and yellow light (in low concentrations). Carotenes are responsible for the orange colour of the carrot, after which this class of chemicals is named, and for the colours of many other fruits, vegetables and fungi (for example, sweet potatoes, chanterelle and orange cantaloupe melon). Carotenes are also responsible for the orange (but not all of the yellow) colours in dry foliage. They also (in lower concentrations) impart the yellow coloration to milk-fat and butter. Omnivorous animal species which are relatively poor converters of coloured dietary carotenoids to colourless retinoids, such as humans and chickens, have yellow-coloured body fat, as a result of the carotenoid retention from the vegetable portion of their diet. Carotenes contribute to photosynthesis by transmitting the light energy they absorb to chlorophyll. They also protect plant tissues by helping to absorb the energy from singlet oxygen, an excited form of the oxygen molecule O2, which is formed during photosynthesis. β-Carotene is composed of two retinyl groups, and is broken down in the mucosa of the human small intestine by β-carotene 15,15'-monooxygenase to retinal, a form of vitamin A. β-Carotene can be stored in the liver and body fat and converted to retinal as needed, thus making it a form of vitamin A for humans and some other mammals. The carotenes α-carotene and γ-carotene, due to their single retinyl group (β-ionone ring), also have some vitamin A activity (though less than β-carotene), as does the xanthophyll carotenoid β-cryptoxanthin. All other carotenoids, including lycopene, have no beta-ring and thus no vitamin A activity (although they may have antioxidant activity and thus biological activity in other ways). Animal species differ greatly in their ability to convert retinyl (beta-ionone) containing carotenoids to retinals. Carnivores in general are poor converters of dietary ionone-containing carotenoids. Pure carnivores such as ferrets lack β-carotene 15,15'-monooxygenase and cannot convert any carotenoids to retinals at all (resulting in carotenes not being a form of vitamin A for this species); while cats can convert a trace of β-carotene to retinol, although the amount is totally insufficient for meeting their daily retinol needs. Molecular structure Carotenes are polyunsaturated hydrocarbons containing 40 carbon atoms per molecule, variable numbers of hydrogen atoms, and no other elements. Some carotenes are terminated by rings, on one or both ends of the molecule. All are coloured, due to the presence of conjugated double bonds. Carotenes are tetraterpenes, meaning that they are derived from eight 5-carbon isoprene units (or four 10-carbon terpene units). Carotenes are found in plants in two primary forms designated by characters from the Greek alphabet: alpha-carotene (α-carotene) and beta-carotene (β-carotene). Gamma-, delta-, epsilon-, and zeta-carotene (γ, δ, ε, and ζ-carotene) also exist.
Since they are hydrocarbons, and therefore contain no oxygen, carotenes are fat-soluble and insoluble in water (in contrast with other carotenoids, the xanthophylls, which contain oxygen and thus are less chemically hydrophobic). History The discovery of carotene from carrot juice is credited to Heinrich Wilhelm Ferdinand Wackenroder, a finding made during a search for antihelminthics, which he published in 1831. He obtained it in small ruby-red flakes soluble in ether, which when dissolved in fats gave "a beautiful yellow colour". William Christopher Zeise recognised its hydrocarbon nature in 1847, but his analyses gave him a composition of C5H8. It was Léon-Albert Arnaud in 1886 who confirmed its hydrocarbon nature and gave the formula C26H38, which is close to the theoretical composition of C40H56. Adolf Lieben in studies, also published in 1886, of the colouring matter in corpora lutea, first came across carotenoids in animal tissue, but did not recognise the nature of the pigment. Johann Ludwig Wilhelm Thudichum, in 1868–1869, after stereoscopic spectral examination, applied the term 'luteine' (lutein) to this class of yellow crystallizable substances found in animals and plants. Richard Martin Willstätter, who gained the Nobel Prize in Chemistry in 1915, mainly for his work on chlorophyll, assigned the composition of C40H56, distinguishing it from the similar but oxygenated xanthophyll, C40H56O2. With Heinrich Escher, in 1910, lycopene was isolated from tomatoes and shown to be an isomer of carotene. Later work by Escher also differentiated the 'luteal' pigments in egg yolk from that of the carotenes in cow corpus luteum. Dietary sources The following foods contain carotenes in notable amounts: carrots wolfberries (goji) cantaloupe mangoes red bell pepper papaya spinach kale sweet potato tomato dandelion greens broccoli collard greens winter squash pumpkin cassava Absorption from these foods is enhanced if eaten with fats, as carotenes are fat soluble, and if the food is cooked for a few minutes until the plant cell wall splits and the color is released into any liquid. 12 μg of dietary β-carotene supplies the equivalent of 1 μg of retinol, and 24 μg of α-carotene or β-cryptoxanthin provides the equivalent of 1 μg of retinol. Forms of carotene The two primary isomers of carotene, α-carotene and β-carotene, differ in the position of a double bond (and thus a hydrogen) in the cyclic group at one end (the right end in the diagram at right). β-Carotene is the more common form and can be found in yellow, orange, and green leafy fruits and vegetables. As a rule of thumb, the greater the intensity of the orange colour of the fruit or vegetable, the more β-carotene it contains. Carotene protects plant cells against the destructive effects of ultraviolet light so β-carotene is an antioxidant. β-Carotene and physiology β-Carotene and cancer An article on the American Cancer Society says that The Cancer Research Campaign has called for warning labels on β-carotene supplements to caution smokers that such supplements may increase the risk of lung cancer. The New England Journal of Medicine published an article in 1994 about a trial which examined the relationship between daily supplementation of β-carotene and vitamin E (α-tocopherol) and the incidence of lung cancer. The study was done using supplements and researchers were aware of the epidemiological correlation between carotenoid-rich fruits and vegetables and lower lung cancer rates. 
The research concluded that no reduction in lung cancer was found in the participants using these supplements, and furthermore, these supplements may, in fact, have harmful effects. The Journal of the National Cancer Institute and The New England Journal of Medicine published articles in 1996 about a trial with a goal to determine if vitamin A (in the form of retinyl palmitate) and β-carotene (at about 30 mg/day, which is 10 times the Reference Daily Intake) supplements had any beneficial effects to prevent cancer. The results indicated an increased risk of lung and prostate cancers for the participants who consumed the β-carotene supplement and who had lung irritation from smoking or asbestos exposure, causing the trial to be stopped early. A review of all randomized controlled trials in the scientific literature by the Cochrane Collaboration published in JAMA in 2007 found that synthetic β-carotene increased mortality by 1–8% (Relative Risk 1.05, 95% confidence interval 1.01–1.08). However, this meta-analysis included two large studies of smokers, so it is not clear that the results apply to the general population. The review only studied the influence of synthetic antioxidants and the results should not be translated to potential effects of fruits and vegetables. β-Carotene and photosensitivity Oral β-carotene is prescribed to people suffering from erythropoietic protoporphyria. It provides them some relief from photosensitivity. Carotenemia Carotenemia or hypercarotenemia is excess carotene, but unlike excess vitamin A, carotene is non-toxic. Although hypercarotenemia is not particularly dangerous, it can lead to an oranging of the skin (carotenodermia), but not the conjunctiva of eyes (thus easily distinguishing it visually from jaundice). It is most commonly associated with consumption of an abundance of carrots, but it also can be a medical sign of more dangerous conditions. Production Carotenes are produced in a general manner for other terpenoids and terpenes, i.e. by coupling, cyclization, and oxygenation reactions of isoprene derivatives. Lycopene is the key precursor to carotenoids. It is formed by coupling of geranylgeranyl pyrophosphate and geranyllinally pyrophosphate. Most of the world's synthetic supply of carotene comes from a manufacturing complex located in Freeport, Texas and owned by DSM. The other major supplier BASF also uses a chemical process to produce β-carotene. Together these suppliers account for about 85% of the β-carotene on the market. In Spain Vitatene produces natural β-carotene from fungus Blakeslea trispora, as does DSM but at much lower amount when compared to its synthetic β-carotene operation. In Australia, organic β-carotene is produced by Aquacarotene Limited from dried marine algae Dunaliella salina grown in harvesting ponds situated in Karratha, Western Australia. BASF Australia is also producing β-carotene from microalgae grown in two sites in Australia that are the world's largest algae farms. In Portugal, the industrial biotechnology company Biotrend is producing natural all-trans-β-carotene from a non-genetically modified bacteria of the genus Sphingomonas isolated from soil. Carotenes are also found in palm oil, corn, and in the milk of dairy cows, causing cow's milk to be light yellow, depending on the feed of the cattle, and the amount of fat in the milk (high-fat milks, such as those produced by Guernsey cows, tend to be yellower because their fat content causes them to contain more carotene). 
Carotenes are also found in some species of termites, where they apparently have been picked up from the diet of the insects. Synthesis There are currently two commonly used methods of total synthesis of β-carotene. The first was developed by BASF and is based on the Wittig reaction, with Wittig himself as patent holder (Wittig G.; Pommer H.: DBP 954247, 1956). The second is a Grignard reaction, elaborated by Hoffman-La Roche from the original synthesis of Inhoffen et al. They are both symmetrical; the BASF synthesis is C20 + C20, and the Hoffman-La Roche synthesis is C19 + C2 + C19. Nomenclature Carotenes are carotenoids containing no oxygen. Carotenoids containing some oxygen are known as xanthophylls. The two ends of the β-carotene molecule are structurally identical, and are called β-rings. Specifically, the group of nine carbon atoms at each end forms a β-ring. The α-carotene molecule has a β-ring at one end; the other end is called an ε-ring. There is no such thing as an "α-ring". These and similar names for the ends of the carotenoid molecules form the basis of a systematic naming scheme, according to which: α-carotene is β,ε-carotene; β-carotene is β,β-carotene; γ-carotene (with one β ring and one uncyclized end that is labelled psi) is β,ψ-carotene; δ-carotene (with one ε ring and one uncyclized end) is ε,ψ-carotene; ε-carotene is ε,ε-carotene; lycopene is ψ,ψ-carotene. ζ-Carotene is the biosynthetic precursor of neurosporene, which is the precursor of lycopene, which, in turn, is the precursor of the carotenes α through ε. Food additive Carotene is used to colour products such as juice, cakes, desserts, butter and margarine. It is approved for use as a food additive in the EU (listed as additive E160a), Australia and New Zealand (listed as 160a) and the US. See also Antioxidant References External links Vitamin A Food colorings Carotenoids Hydrocarbons E-number additives
Carotene
[ "Chemistry", "Biology" ]
3,016
[ "Hydrocarbons", "Biomarkers", "Vitamin A", "Carotenoids", "Organic compounds", "Biomolecules" ]
6,988
https://en.wikipedia.org/wiki/Cyclic%20adenosine%20monophosphate
Cyclic adenosine monophosphate (cAMP, cyclic AMP, or 3',5'-cyclic adenosine monophosphate) is a second messenger, or cellular signal occurring within cells, that is important in many biological processes. cAMP is a derivative of adenosine triphosphate (ATP) and used for intracellular signal transduction in many different organisms, conveying the cAMP-dependent pathway. History Earl Sutherland of Vanderbilt University won a Nobel Prize in Physiology or Medicine in 1971 "for his discoveries concerning the mechanisms of the action of hormones", especially epinephrine, via second messengers (such as cyclic adenosine monophosphate, cyclic AMP). Synthesis The synthesis of cAMP is stimulated by trophic hormones that bind to receptors on the cell surface. cAMP levels reach a maximum within minutes and decrease gradually over an hour in cultured cells. Cyclic AMP is synthesized from ATP by adenylate cyclase located on the inner side of the plasma membrane and anchored at various locations in the interior of the cell. Adenylate cyclase is activated by a range of signaling molecules through the activation of adenylate cyclase stimulatory G (Gs)-protein-coupled receptors. Adenylate cyclase is inhibited by agonists of adenylate cyclase inhibitory G (Gi)-protein-coupled receptors. Liver adenylate cyclase responds more strongly to glucagon, and muscle adenylate cyclase responds more strongly to adrenaline. cAMP decomposition into AMP is catalyzed by the enzyme phosphodiesterase. Functions cAMP is a second messenger, used for intracellular signal transduction, such as transferring into cells the effects of hormones like glucagon and adrenaline, which cannot pass through the plasma membrane. It is also involved in the activation of protein kinases. In addition, cAMP binds to and regulates the function of ion channels such as the HCN channels and a few other cyclic nucleotide-binding proteins such as Epac1 and RAPGEF2. Role in eukaryotic cells cAMP and its associated kinases function in several biochemical processes, including the regulation of glycogen, sugar, and lipid metabolism. In eukaryotes, cyclic AMP works by activating protein kinase A (PKA, or cAMP-dependent protein kinase). PKA is normally inactive as a tetrameric holoenzyme, consisting of two catalytic and two regulatory units (C2R2), with the regulatory units blocking the catalytic centers of the catalytic units. Cyclic AMP binds to specific locations on the regulatory units of the protein kinase, and causes dissociation between the regulatory and catalytic subunits, thus enabling those catalytic units to phosphorylate substrate proteins. The active subunits catalyze the transfer of phosphate from ATP to specific serine or threonine residues of protein substrates. The phosphorylated proteins may act directly on the cell's ion channels, or may become activated or inhibited enzymes. Protein kinase A can also phosphorylate specific proteins that bind to promoter regions of DNA, causing increases in transcription. Not all protein kinases respond to cAMP. Several classes of protein kinases, including protein kinase C, are not cAMP-dependent. Further effects depend mainly on cAMP-dependent protein kinase and vary based on the type of cell. Still, there are some minor PKA-independent functions of cAMP, e.g., activation of calcium channels, providing a minor pathway by which growth hormone-releasing hormone causes a release of growth hormone. However, the view that the majority of the effects of cAMP are controlled by PKA is an outdated one.
In 1998 a family of cAMP-sensitive proteins with guanine nucleotide exchange factor (GEF) activity was discovered. These are termed exchange proteins activated by cAMP (Epac), and the family comprises Epac1 and Epac2. The mechanism of activation is similar to that of PKA: the GEF domain is usually masked by the N-terminal region containing the cAMP binding domain. When cAMP binds, the domain dissociates and exposes the now-active GEF domain, allowing Epac to activate small Ras-like GTPase proteins, such as Rap1. Additional role of secreted cAMP in social amoebae In the species Dictyostelium discoideum, cAMP acts outside the cell as a secreted signal. The chemotactic aggregation of cells is organized by periodic waves of cAMP that propagate between cells over distances as large as several centimetres. The waves are the result of a regulated production and secretion of extracellular cAMP and a spontaneous biological oscillator that initiates the waves at centers of territories. Role in bacteria In bacteria, the level of cAMP varies depending on the medium used for growth. In particular, cAMP is low when glucose is the carbon source. This occurs through inhibition of the cAMP-producing enzyme, adenylate cyclase, as a side-effect of glucose transport into the cell. The transcription factor cAMP receptor protein (CRP), also called CAP (catabolite gene activator protein), forms a complex with cAMP and thereby is activated to bind to DNA. CRP-cAMP increases expression of a large number of genes, including some encoding enzymes that can supply energy independent of glucose. cAMP, for example, is involved in the positive regulation of the lac operon. In an environment with a low glucose concentration, cAMP accumulates and binds to the allosteric site on CRP (cAMP receptor protein), a transcription activator protein. The protein assumes its active shape and binds to a specific site upstream of the lac promoter, making it easier for RNA polymerase to bind to the adjacent promoter to start transcription of the lac operon, increasing the rate of lac operon transcription. With a high glucose concentration, the cAMP concentration decreases, and the CRP disengages from the lac operon. Pathology Since cyclic AMP is a second messenger and plays a vital role in cell signalling, it has been implicated in various disorders, including but not restricted to those given below: Role in human carcinoma Some research has suggested that a deregulation of cAMP pathways and an aberrant activation of cAMP-controlled genes is linked to the growth of some cancers. Role in prefrontal cortex disorders Recent research suggests that cAMP affects the function of higher-order thinking in the prefrontal cortex through its regulation of ion channels called hyperpolarization-activated cyclic nucleotide-gated channels (HCN). When cAMP stimulates the HCN channels, they open. This research, especially as it relates to the cognitive deficits in age-related illnesses and ADHD, is of interest to researchers studying the brain. cAMP is involved in the activation of the trigeminocervical system, leading to neurogenic inflammation and causing migraine. Role in infectious disease agents' pathogenesis Disrupted functioning of cAMP has been noted as one of the mechanisms of several bacterial exotoxins. These can be subgrouped into two distinct categories: toxins that interfere with ADP-ribosyl-transferase enzymes, and invasive adenylate cyclases. ADP-ribosyl-transferases related toxins Cholera toxin is an AB toxin that has five B subunits and one A subunit.
The toxin acts by the following mechanism: First, the B subunit ring of the cholera toxin binds to GM1 gangliosides on the surface of target cells. If a cell lacks GM1, the toxin most likely binds to other types of glycans, such as Lewis Y and Lewis X, attached to proteins instead of lipids. Uses Forskolin is commonly used as a tool in biochemistry to raise levels of cAMP in the study and research of cell physiology. See also Cyclic guanosine monophosphate (cGMP) 8-Bromoadenosine 3',5'-cyclic monophosphate (8-Br-cAMP) Acrasin, specific to chemotactic use in Dictyostelium discoideum Phosphodiesterase 4 (PDE4), which degrades cAMP References Nucleotides Signal transduction Cell signaling Cyclic nucleotides
Cyclic adenosine monophosphate
[ "Chemistry", "Biology" ]
1,700
[ "Biochemistry", "Neurochemistry", "Second messenger system", "Signal transduction" ]
7,011
https://en.wikipedia.org/wiki/Control%20engineering
Control engineering, also known as control systems engineering and, in some European countries, automation engineering, is an engineering discipline that deals with control systems, applying control theory to design equipment and systems with desired behaviors in control environments. The discipline of controls overlaps and is usually taught along with electrical engineering, chemical engineering and mechanical engineering at many institutions around the world. The practice uses sensors and detectors to measure the output performance of the process being controlled; these measurements are used to provide corrective feedback helping to achieve the desired performance. Systems designed to perform without requiring human input are called automatic control systems (such as cruise control for regulating the speed of a car). Multi-disciplinary in nature, control systems engineering activities focus on implementation of control systems mainly derived by mathematical modeling of a diverse range of systems. Overview Modern day control engineering is a relatively new field of study that gained significant attention during the 20th century with the advancement of technology. It can be broadly defined or classified as practical application of control theory. Control engineering plays an essential role in a wide range of control systems, from simple household washing machines to high-performance fighter aircraft. It seeks to understand physical systems, using mathematical modelling, in terms of inputs, outputs and various components with different behaviors; to use control system design tools to develop controllers for those systems; and to implement controllers in physical systems employing available technology. A system can be mechanical, electrical, fluid, chemical, financial or biological, and its mathematical modelling, analysis and controller design uses control theory in one or many of the time, frequency and complex-s domains, depending on the nature of the design problem. Control engineering is the engineering discipline that focuses on the modeling of a diverse range of dynamic systems (e.g. mechanical systems) and the design of controllers that will cause these systems to behave in the desired manner. Although such controllers need not be electrical, many are and hence control engineering is often viewed as a subfield of electrical engineering. Electrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. In most cases, control engineers utilize feedback when designing control systems. This is often accomplished using a PID controller system. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. In practically all such systems stability is important and control theory can help ensure stability is achieved. Although feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open loop control. A classic example of open loop control is a washing machine that runs through a pre-determined cycle without the use of sensors. 
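To make the feedback loop described above concrete, the following minimal sketch simulates a discrete-time PID controller regulating the speed of a crude cruise-control model. It is illustrative only: the gains, the time step, and the first-order vehicle model are invented for the demonstration rather than taken from any real controller design.

# Minimal discrete-time PID loop for a toy cruise-control model (all values illustrative).
def pid_step(error, state, kp=0.8, ki=0.3, kd=0.05, dt=0.1):
    integral, prev_error = state
    integral += error * dt                      # accumulate error for the integral term
    derivative = (error - prev_error) / dt      # rate of change for the derivative term
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

setpoint, speed = 30.0, 20.0                    # target and initial speed in m/s (assumed)
state = (0.0, setpoint - speed)
for _ in range(300):                            # simulate 30 seconds
    throttle, state = pid_step(setpoint - speed, state)
    # crude first-order vehicle model: acceleration = throttle minus speed-proportional drag
    speed += (throttle - 0.05 * speed) * 0.1
print(round(speed, 2))                          # settles near the 30 m/s setpoint

The same structure applies to any single-input feedback loop: only the plant model and the three gains change, and the gains are normally tuned against the real system's measured response rather than chosen up front.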
History Automatic control systems were first developed over two thousand years ago. The first feedback control device on record is thought to be the ancient Ktesibios's water clock in Alexandria, Egypt, around the third century BCE. It kept time by regulating the water level in a vessel and, therefore, the water flow from that vessel. This certainly was a successful device as water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 CE. A variety of automatic devices have been used over the centuries to accomplish useful tasks or simply just to entertain. The latter includes the automata, popular in Europe in the 17th and 18th centuries, featuring dancing figures that would repeat the same task over and over again; these automata are examples of open-loop control. Milestones among feedback, or "closed-loop" automatic control devices, include the temperature regulator of a furnace attributed to Drebbel, circa 1620, and the centrifugal flyball governor used for regulating the speed of steam engines by James Watt in 1788. In his 1868 paper "On Governors", James Clerk Maxwell was able to explain instabilities exhibited by the flyball governor using differential equations to describe the control system. This demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena, and it signaled the beginning of mathematical control and systems theory. Elements of control theory had appeared earlier but not as dramatically and convincingly as in Maxwell's analysis. Control theory made significant strides over the next century. New mathematical techniques, as well as advances in electronic and computer technologies, made it possible to control significantly more complex dynamical systems than the original flyball governor could stabilize. New mathematical techniques included developments in optimal control in the 1950s and 1960s followed by progress in stochastic, robust, adaptive, nonlinear control methods in the 1970s and 1980s. Applications of control methodology have helped to make possible space travel and communication satellites, safer and more efficient aircraft, cleaner automobile engines, and cleaner and more efficient chemical processes. Before it emerged as a unique discipline, control engineering was practiced as a part of mechanical engineering and control theory was studied as a part of electrical engineering since electrical circuits can often be easily described using control theory techniques. In the first control relationships, a current output was represented by a voltage control input. However, not having adequate technology to implement electrical control systems, designers were left with the option of less efficient and slow responding mechanical systems. A very effective mechanical controller that is still widely used in some hydro plants is the governor. Later on, previous to modern power electronics, process control systems for industrial applications were devised by mechanical engineers using pneumatic and hydraulic control devices, many of which are still in use today. Mathematical modelling David Quinn Mayne, (1930–2024) was among the early developers of a rigorous mathematical method for analysing Model predictive control algorithms (MPC). It is currently used in tens of thousands of applications and is a core part of the advanced control technology by hundreds of process control producers. 
MPC's major strength is its capacity to deal with nonlinearities and hard constraints in a simple and intuitive fashion. His work underpins a class of algorithms that are provably correct, heuristically explainable, and yield control system designs which meet practically important objectives. Control systems Control theory Education At many universities around the world, control engineering courses are taught primarily in electrical engineering and mechanical engineering, but some courses are taught in mechatronics engineering and aerospace engineering. In others, control engineering is connected to computer science, as most control techniques today are implemented through computers, often as embedded systems (as in the automotive field). The field of control within chemical engineering is often known as process control. It deals primarily with the control of variables in a chemical process in a plant. It is taught as part of the undergraduate curriculum of any chemical engineering program and employs many of the same principles as control engineering. Other engineering disciplines also overlap with control engineering, as it can be applied to any system for which a suitable model can be derived. However, specialised control engineering departments and programmes do exist: for example, in Italy there are several master's programmes in Automation & Robotics that are fully specialised in control engineering, and there are dedicated departments such as the Department of Automatic Control and Systems Engineering at the University of Sheffield, the Department of Robotics and Control Engineering at the United States Naval Academy, and the Department of Control and Automation Engineering at the Istanbul Technical University. Control engineering has diversified applications that include science, finance management, and even human behavior. Students of control engineering may start with a linear control system course dealing with the time and complex-s domain, which requires a thorough background in elementary mathematics and the Laplace transform; this approach is known as classical control theory. In linear control, the student performs frequency- and time-domain analysis. Digital control and nonlinear control courses require the Z-transform and algebra respectively, and could be said to complete a basic control education. Careers A control engineer's career starts with a bachelor's degree and can continue through graduate study. Control engineer degrees are typically paired with an electrical or mechanical engineering degree, but can also be paired with a degree in chemical engineering. According to a Control Engineering survey, most of the people who answered were control engineers in various forms of their own career. There are not very many careers that are classified as "control engineer"; most of them are specific careers that bear some resemblance to the overarching career of control engineering. A majority of the control engineers that took the survey in 2019 are system or product designers, or even control or instrument engineers. Most of the jobs involve process engineering, production, or maintenance; they are some variation of control engineering. Because of this, there are many job opportunities in aerospace companies, manufacturing companies, automobile companies, power companies, chemical companies, petroleum companies, and government agencies. Some places that hire Control Engineers include companies such as Rockwell Automation, NASA, Ford, Phillips 66, Eastman, and Goodrich. Control Engineers can possibly earn $66k annually from Lockheed Martin Corp.
They can also earn up to $96k annually from General Motors Corporation. Process Control Engineers, typically found in Refineries and Specialty Chemical plants, can earn upwards of $90k annually. Recent advancement Originally, control engineering was all about continuous systems. Development of computer control tools posed a requirement of discrete control system engineering because the communications between the computer-based digital controller and the physical system are governed by a computer clock. The equivalent to Laplace transform in the discrete domain is the Z-transform. Today, many of the control systems are computer controlled and they consist of both digital and analog components. Therefore, at the design stage either digital components are mapped into the continuous domain and the design is carried out in the continuous domain, or analog components are mapped into discrete domain and design is carried out there. The first of these two methods is more commonly encountered in practice because many industrial systems have many continuous systems components, including mechanical, fluid, biological and analog electrical components, with a few digital controllers. Similarly, the design technique has progressed from paper-and-ruler based manual design to computer-aided design and now to computer-automated design or CAD which has been made possible by evolutionary computation. CAD can be applied not just to tuning a predefined control scheme, but also to controller structure optimisation, system identification and invention of novel control systems, based purely upon a performance requirement, independent of any specific control scheme. Resilient control systems extend the traditional focus of addressing only planned disturbances to frameworks and attempt to address multiple types of unexpected disturbance; in particular, adapting and transforming behaviors of the control system in response to malicious actors, abnormal failure modes, undesirable human action, etc. See also Artificial intelligence Automation Automation engineering Electrical engineering Communications engineering Satellite navigation Outline of control engineering Advanced process control Building automation Computer-automated design (CAutoD, CAutoCSD) Control reconfiguration Feedback H-infinity Lead–lag compensator List of control engineering topics Quantitative feedback theory Robotic unicycle State space Sliding mode control Systems engineering Testing controller VisSim Control Engineering (magazine) Time series Process control system Robotic control Mechatronics SCADA References Further reading External links Control Labs Worldwide The Michigan Chemical Engineering Process Dynamics and Controls Open Textbook Control System Integrators Association List of control systems integrators Institution of Mechanical Engineers - Mechatronics, Informatics and Control Group (MICG) Systems Science & Control Engineering: An Open Access Journal Electrical engineering Mechanical engineering Systems engineering Engineering disciplines Automation
Control engineering
[ "Physics", "Engineering" ]
2,329
[ "Systems engineering", "Applied and interdisciplinary physics", "Automation", "Control engineering", "Mechanical engineering", "nan", "Electrical engineering" ]
7,021
https://en.wikipedia.org/wiki/Crookes%20radiometer
The Crookes radiometer (also known as a light mill) consists of an airtight glass bulb containing a partial vacuum, with a set of vanes which are mounted on a spindle inside. The vanes rotate when exposed to light, with faster rotation for more intense light, providing a quantitative measurement of electromagnetic radiation intensity. The reason for the rotation was a cause of much scientific debate in the ten years following the invention of the device, but in 1879 the currently accepted explanation for the rotation was published. Today the device is mainly used in physics education as a demonstration of a heat engine run by light energy. It was invented in 1873 by the chemist Sir William Crookes as the by-product of some chemical research. In the course of very accurate quantitative chemical work, he was weighing samples in a partially evacuated chamber to reduce the effect of air currents, and noticed the weighings were disturbed when sunlight shone on the balance. Investigating this effect, he created the device named after him. It is still manufactured and sold as an educational aid or for curiosity. General description The radiometer is made from a glass bulb from which much of the air has been removed to form a partial vacuum. Inside the bulb, on a low-friction spindle, is a rotor with several (usually four) vertical lightweight vanes spaced equally around the axis. The vanes are polished or white on one side and black on the other. When exposed to sunlight, artificial light, or infrared radiation (even the heat of a hand nearby can be enough), the vanes turn with no apparent motive power, the dark sides retreating from the radiation source and the light sides advancing. Cooling the outside of the radiometer rapidly causes rotation in the opposite direction. Effect observations The effect begins to be observed at partial vacuum pressures of several hundred pascals (or several torrs), reaches a peak at around and has disappeared by the time the vacuum reaches (see explanations note 1). At these very high vacuums the effect of photon radiation pressure on the vanes can be observed in very sensitive apparatus (see Nichols radiometer), but this is insufficient to cause rotation. Origin of the name The prefix "radio-" in the title originates from the combining form of Latin radius, a ray: here it refers to electromagnetic radiation. A Crookes radiometer, consistent with the suffix "-meter" in its title, can provide a quantitative measurement of electromagnetic radiation intensity. This can be done, for example, by visual means (e.g., a spinning slotted disk, which functions as a simple stroboscope) without interfering with the measurement itself. Radiometers are now commonly sold worldwide as a novelty ornament; needing no batteries, but only light to get the vanes to turn. They come in various forms, such as the one pictured, and are often used in science museums to illustrate "radiation pressure" – a scientific principle that they do not in fact demonstrate. Thermodynamic explanation Movement with absorption When a radiant energy source is directed at a Crookes radiometer, the radiometer becomes a heat engine. The operation of a heat engine is based on a difference in temperature that is converted to a mechanical output. In this case, the black side of the vane becomes hotter than the other side, as radiant energy from a light source warms the black side by absorption faster than the silver or white side. 
The internal air molecules are heated up when they touch the black side of the vane. The warmer side of the vane is subjected to a force which moves it forward. The internal temperature rises as the black vanes impart heat to the air molecules, but the molecules are cooled again when they touch the bulb's glass surface, which is at ambient temperature. This heat loss through the glass keeps the internal bulb temperature steady with the result that the two sides of the vanes develop a temperature difference. The white or silver side of the vanes are slightly warmer than the internal air temperature but cooler than the black side, as some heat conducts through the vane from the black side. The two sides of each vane must be thermally insulated to some degree so that the polished or white side does not immediately reach the temperature of the black side. If the vanes are made of metal, then the black or white paint can be the insulation. The glass stays much closer to ambient temperature than the temperature reached by the black side of the vanes. The external air helps conduct heat away from the glass. The air pressure inside the bulb needs to strike a balance between too low and too high. A strong vacuum inside the bulb does not permit motion, because there are not enough air molecules to cause the air currents that propel the vanes and transfer heat to the outside before both sides of each vane reach thermal equilibrium by heat conduction through the vane material. High inside pressure inhibits motion because the temperature differences are not enough to push the vanes through the higher concentration of air: there is too much air resistance for "eddy currents" to occur, and any slight air movement caused by the temperature difference is damped by the higher pressure before the currents can "wrap around" to the other side. Movement with radiation When the radiometer is heated in the absence of a light source, it turns in the forward direction (i.e. black sides trailing). If a person's hands are placed around the glass without touching it, the vanes will turn slowly or not at all, but if the glass is touched to warm it quickly, they will turn more noticeably. Directly heated glass gives off enough infrared radiation to turn the vanes, but glass blocks much of the far-infrared radiation from a source of warmth not in contact with it. However, near-infrared and visible light more easily penetrate the glass. If the glass is cooled quickly in the absence of a strong light source by putting ice on the glass or placing it in the freezer with the door almost closed, it turns backwards (i.e. the silver sides trail). This demonstrates radiation from the black sides of the vanes rather than absorption. The wheel turns backwards because the net exchange of heat between the black sides and the environment initially cools the black sides faster than the white sides. Upon reaching equilibrium, typically after a minute or two, reverse rotation ceases. This contrasts with sunlight, with which forward rotation can be maintained all day. Explanations for the force on the vanes Over the years, there have been many attempts to explain how a Crookes radiometer works: Incorrect theories Crookes incorrectly suggested that the force was due to the pressure of light. This theory was originally supported by James Clerk Maxwell, who had predicted this force. This explanation is still often seen in leaflets packaged with the device. 
The first experiment to test this theory was done by Arthur Schuster in 1876, who observed that there was a force on the glass bulb of the Crookes radiometer that was in the opposite direction to the rotation of the vanes. This showed that the force turning the vanes was generated inside the radiometer. If light pressure were the cause of the rotation, then the better the vacuum in the bulb, the less air resistance to movement, and the faster the vanes should spin. In 1901, with a better vacuum pump, Pyotr Lebedev showed that in fact, the radiometer only works when there is low-pressure gas in the bulb, and the vanes stay motionless in a hard vacuum. Finally, if light pressure were the motive force, the radiometer would spin in the opposite direction, as the photons on the shiny side being reflected would deposit more momentum than on the black side, where the photons are absorbed. This results from conservation of momentum – the momentum of the reflected photon exiting on the light side must be matched by a reaction on the vane that reflected it. The actual pressure exerted by light is far too small to move these vanes, but can be measured with devices such as the Nichols radiometer. It is in fact possible to make the radiometer spin in the opposite direction by either heating it or putting it in a cold environment (like a freezer) in the absence of light, when the black sides become cooler than the white ones due to thermal radiation. Another incorrect theory was that the heat on the dark side was causing the material to outgas, which pushed the radiometer around. This was later effectively disproved by both Schuster's experiments (1876) and Lebedev's (1901). Partially correct theory A partial explanation is that gas molecules hitting the warmer side of the vane will pick up some of the heat, bouncing off the vane with increased speed. Giving the molecule this extra boost effectively means that a minute pressure is exerted on the vane. The imbalance of this effect between the warmer black side and the cooler silver side means the net pressure on the vane is equivalent to a push on the black side and as a result the vanes spin round with the black side trailing. The problem with this idea is that while the faster moving molecules produce more force, they also do a better job of stopping other molecules from reaching the vane, so the net force on the vane should be the same. The greater temperature causes a decrease in local density which results in the same force on both sides. Years after this explanation was dismissed, Albert Einstein showed that the two pressures do not cancel out exactly at the edges of the vanes because of the temperature difference there. The force predicted by Einstein would be enough to move the vanes, but not fast enough. Currently accepted theory The currently accepted theory was formulated by Osborne Reynolds, who theorized that thermal transpiration was the cause of the motion. Reynolds found that if a porous plate is kept hotter on one side than the other, the interactions between gas molecules and the plate are such that gas will flow through from the cooler to the hotter side. The vanes of a typical Crookes radiometer are not porous, but the space past their edges behaves like the pores in Reynolds's plate. As gas moves from the cooler to the hotter side, the pressure on the hotter side increases. 
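Written compactly, the steady state that this thermal transpiration drives toward across a fixed plate can be sketched as follows; the symbols are chosen here only for illustration and are not taken from Reynolds's paper.

% Thermal transpiration across a fixed porous plate (illustrative notation):
% p_h, T_h = pressure and absolute temperature on the hot side
% p_c, T_c = pressure and absolute temperature on the cold side
\[
  \frac{p_h}{p_c} = \sqrt{\frac{T_h}{T_c}}
\]
% For a vane that is free to move, this limiting ratio is never reached;
% the residual pressure difference near the vane edges pushes the vane
% along, with its cooler (white) side leading.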
When the plate is fixed, the pressure on the hotter side increases until the ratio of pressures between the sides equals the square root of the ratio of absolute temperatures. Because the plates in a radiometer are not fixed, the pressure difference from cooler to hotter side causes the vane to move. The cooler (white) side moves forward, pushed by the higher pressure behind it. From a molecular point of view, the vane moves due to the tangential force of the rarefied gas colliding differently with the edges of the vane between the hot and cold sides. The Reynolds paper went unpublished for a while because it was refereed by Maxwell, who then published a paper of his own, which contained a critique of the mathematics in Reynolds's unpublished paper. Maxwell died that same year (1879), and the Royal Society refused to publish Reynolds's critique of Maxwell's rebuttal to Reynolds's unpublished paper, as it was felt that this would be an inappropriate argument when one of the people involved had already died. All-black light mill To rotate, a light mill does not have to be coated with different colors across each vane. In 2009, researchers at the University of Texas, Austin created a monocolored light mill which has four curved vanes; each vane forms a convex and a concave surface. The light mill is uniformly coated by gold nanocrystals, which are a strong light absorber. Upon exposure, due to a geometric effect, the convex side of the vane receives more photon energy than the concave side does, and subsequently the gas molecules receive more heat from the convex side than from the concave side. At rough vacuum, this asymmetric heating effect generates a net gas movement across each vane, from the concave side to the convex side, as shown by the researchers' direct simulation Monte Carlo modeling. The gas movement causes the light mill to rotate with the concave side moving forward, due to Newton's third law. This monocolored design promotes the fabrication of micrometer- or nanometer-scaled light mills, as it is difficult to pattern materials of distinct optical properties within a very narrow, three-dimensional space. Horizontal vane light mill The thermal creep from the hot side of a vane to the cold side has been demonstrated in a mill with horizontal vanes that have a two-tone surface with a black half and a white half. This design is called a Hettner radiometer. This radiometer's angular speed was found to be limited by the behavior of the drag force due to the gas in the vessel more than by the behavior of the thermal creep force. This design does not experience the Einstein effect because the faces are parallel to the temperature gradient. Nanoscale light mill In 2010 researchers at the University of California, Berkeley succeeded in building a nanoscale light mill that works on an entirely different principle to the Crookes radiometer. A gold light mill, only 100 nanometers in diameter, was built and illuminated by laser light that had been tuned. The possibility of doing this had been suggested by the Princeton physicist Richard Beth in 1936. The torque was greatly enhanced by the resonant coupling of the incident light to plasmonic waves in the gold structure. Practical applications The radiometric effect has not been often used for practical applications. In 2001, Marcel Bétrisey made two different clocks (Le Chronolithe and Conti) powered by light. Their pendulums had bulb lamps located outside the glass dome and pointing at four mica vanes. 
The one-meter pendulum swings once per second; two lamps placed on either side light up alternately, thus "pushing" the 4-kilogram pendulum each time. As there was a vacuum inside, its accuracy was of the order of 2 seconds per month. See also Crookes tube Marangoni effect Nichols radiometer Photophoresis Solar energy Solar wind Thermophoresis References General information Loeb, Leonard B. (1934) The Kinetic Theory of Gases (2nd Edition); McGraw-Hill Book Company; pp 353–386 Kennard, Earle H. (1938) Kinetic Theory of Gases; McGraw-Hill Book Company; pp 327–337 External links Crooke's Radiometer applet How does a light-mill work? – Physics FAQ The Cathode Ray Tube site – 1933 Bell and Green experiment describing the effect of different gas pressures on the vanes. The Properties of the Force Exerted in a Radiometer (archived) Radiometric clocks made by Marcel Bétrisey: "Le Chronolithe" and "Conti" Hot air engines Electromagnetic radiation meters Radiometry External combustion engines Heat transfer Energy conversion Novelty items 19th-century inventions British inventions
Crookes radiometer
[ "Physics", "Chemistry", "Technology", "Engineering" ]
3,003
[ "Transport phenomena", "External combustion engines", "Physical phenomena", "Heat transfer", "Telecommunications engineering", "Spectrum (physical sciences)", "Engines", "Electromagnetic radiation meters", "Electromagnetic spectrum", "Measuring instruments", "Thermodynamics", "Radiometry" ]
7,030
https://en.wikipedia.org/wiki/Code%20coverage
In software engineering, code coverage, also called test coverage, is a percentage measure of the degree to which the source code of a program is executed when a particular test suite is run. A program with high code coverage has more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low code coverage. Many different metrics can be used to calculate test coverage. Some of the most basic are the percentage of program subroutines and the percentage of program statements called during execution of the test suite. Code coverage was among the first methods invented for systematic software testing. The first published reference was by Miller and Maloney in Communications of the ACM, in 1963. Coverage criteria To measure what percentage of code has been executed by a test suite, one or more coverage criteria are used. These are usually defined as rules or requirements, which a test suite must satisfy. Basic coverage criteria There are a number of coverage criteria, but the main ones are: Function coverage – has each function (or subroutine) in the program been called? Statement coverage – has each statement in the program been executed? Edge coverage – has every edge in the control-flow graph been executed? Branch coverage – has each branch (also called the DD-path) of each control structure (such as in if and case statements) been executed? For example, given an if statement, have both the true and false branches been executed? (This is a subset of edge coverage.) Condition coverage – has each Boolean sub-expression evaluated both to true and false? (Also called predicate coverage.) For example, consider the following C function: int foo (int x, int y) { int z = 0; if ((x > 0) && (y > 0)) { z = x; } return z; } Assume this function is a part of some bigger program and this program was run with some test suite. Function coverage will be satisfied if, during this execution, the function foo was called at least once. Statement coverage for this function will be satisfied if it was called for example as foo(1,1), because in this case, every line in the function would be executed—including z = x;. Branch coverage will be satisfied by tests calling foo(1,1) and foo(0,1) because, in the first case, both if conditions are met and z = x; is executed, while in the second case, the first condition, (x>0), is not satisfied, which prevents the execution of z = x;. Condition coverage will be satisfied with tests that call foo(1,0), foo(0,1), and foo(1,1). These are necessary because in the first case, (x>0) is evaluated to true, while in the second, it is evaluated to false. At the same time, the first case makes (y>0) false, the second case does not evaluate (y>0) (because of the lazy-evaluation of the Boolean operator), the third case makes it true. In programming languages that do not perform short-circuit evaluation, condition coverage does not necessarily imply branch coverage. For example, consider the following Pascal code fragment: if a and b then Condition coverage can be satisfied by two tests: a=true, b=false; and a=false, b=true. However, this set of tests does not satisfy branch coverage since neither case will meet the if condition. Fault injection may be necessary to ensure that all conditions and branches of exception-handling code have adequate coverage during testing. Modified condition/decision coverage A combination of function coverage and branch coverage is sometimes also called decision coverage. 
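To make the discussion above concrete, the following sketch collects the test cases named for foo into a small, self-contained C test suite; the assert-based harness is an illustrative assumption and is not part of the original example.

#include <assert.h>

/* The example function from the text. */
int foo(int x, int y) {
    int z = 0;
    if ((x > 0) && (y > 0)) {
        z = x;
    }
    return z;
}

/* A suite achieving condition coverage as described above:
 *  - foo(1, 0): (x > 0) is true, (y > 0) is false
 *  - foo(0, 1): (x > 0) is false; (y > 0) is skipped by short-circuiting
 *  - foo(1, 1): both sub-expressions are true, so z = x executes
 * Together these also give statement and branch coverage for foo.
 */
int main(void) {
    assert(foo(1, 0) == 0);
    assert(foo(0, 1) == 0);
    assert(foo(1, 1) == 1);
    return 0;
}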
This criterion requires that every point of entry and exit in the program has been invoked at least once, and every decision in the program has taken on all possible outcomes at least once. In this context, the decision is a Boolean expression comprising conditions and zero or more Boolean operators. This definition is not the same as branch coverage, however, the term decision coverage is sometimes used as a synonym for it. Condition/decision coverage requires that both decision and condition coverage be satisfied. However, for safety-critical applications (such as avionics software) it is often required that modified condition/decision coverage (MC/DC) be satisfied. This criterion extends condition/decision criteria with requirements that each condition should affect the decision outcome independently. For example, consider the following code: if (a or b) and c then The condition/decision criteria will be satisfied by the following set of tests: a=true, b=true, c=true; and a=false, b=false, c=false. However, this test set will not satisfy modified condition/decision coverage, since in the first test, the value of 'b' and in the second test the value of 'c' would not influence the output. So, a test set such as the following is needed to satisfy MC/DC: a=false, b=false, c=true; a=false, b=true, c=true; a=true, b=false, c=true; and a=true, b=false, c=false. Multiple condition coverage This criterion requires that all combinations of conditions inside each decision are tested. For example, the code fragment from the previous section will require eight tests, one for each of the eight combinations of truth values of a, b, and c. Parameter value coverage Parameter value coverage (PVC) requires that in a method taking parameters, all the common values for such parameters be considered. The idea is that all common possible values for a parameter are tested. For example, common values for a string are: 1) null, 2) empty, 3) whitespace (space, tabs, newline), 4) valid string, 5) invalid string, 6) single-byte string, 7) double-byte string. It may also be appropriate to use very long strings. Failure to test each possible parameter value may result in a bug. Testing only one of these could result in 100% code coverage as each line is covered, but as only one of seven options is tested, there is only about 14% PVC. Other coverage criteria There are further coverage criteria, which are used less often: Linear Code Sequence and Jump (LCSAJ) coverage, a.k.a. JJ-Path coverage – has every LCSAJ/JJ-path been executed? Path coverage – has every possible route through a given part of the code been executed? Entry/exit coverage – has every possible call and return of the function been executed? Loop coverage – has every possible loop been executed zero times, once, and more than once? State coverage – has each state in a finite-state machine been reached and explored? Data-flow coverage – has each variable definition and its usage been reached and explored? Safety-critical or dependable applications are often required to demonstrate 100% of some form of test coverage. For example, the ECSS-E-ST-40C standard demands 100% statement and decision coverage for two out of four different criticality levels; for the other ones, target coverage values are up to negotiation between supplier and customer. However, setting specific target values - and, in particular, 100% - has been criticized by practitioners for various reasons. Martin Fowler writes: "I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing". Some of the coverage criteria above are connected. For instance, path coverage implies decision, statement and entry/exit coverage. 
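As an illustration of the MC/DC discussion above, the decision (a or b) and c can be exercised from a small C driver; the particular vectors below are one valid MC/DC set among several, chosen here only for demonstration.

#include <assert.h>
#include <stdbool.h>

/* The decision from the text, (a or b) and c, written as a C function. */
static bool decision(bool a, bool b, bool c) {
    return (a || b) && c;
}

/* One possible MC/DC test set. Each condition is shown to independently
 * affect the outcome by a pair of tests that differ only in that condition:
 *  - a: (true,false,true) -> true   vs (false,false,true) -> false
 *  - b: (false,true,true) -> true   vs (false,false,true) -> false
 *  - c: (true,false,true) -> true   vs (true,false,false) -> false
 */
int main(void) {
    assert(decision(false, false, true)  == false);
    assert(decision(false, true,  true)  == true);
    assert(decision(true,  false, true)  == true);
    assert(decision(true,  false, false) == false);
    return 0;
}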
Decision coverage implies statement coverage, because every statement is part of a branch. Full path coverage, of the type described above, is usually impractical or impossible. Any module with a succession of n decisions in it can have up to 2^n paths within it; loop constructs can result in an infinite number of paths. Many paths may also be infeasible, in that there is no input to the program under test that can cause that particular path to be executed. However, a general-purpose algorithm for identifying infeasible paths has been proven to be impossible (such an algorithm could be used to solve the halting problem). Basis path testing is for instance a method of achieving complete branch coverage without achieving complete path coverage. Methods for practical path coverage testing instead attempt to identify classes of code paths that differ only in the number of loop executions, and to achieve "basis path" coverage the tester must cover all the path classes. In practice The target software is built with special options or libraries and run under a controlled environment, to map every executed function to the function points in the source code. This allows testing parts of the target software that are rarely or never accessed under normal conditions, and helps reassure that the most important conditions (function points) have been tested. The resulting output is then analyzed to see what areas of code have not been exercised and the tests are updated to include these areas as necessary. Combined with other test coverage methods, the aim is to develop a rigorous, yet manageable, set of regression tests. In implementing test coverage policies within a software development environment, one must consider the following: Are there coverage requirements for end-product certification and, if so, what level of test coverage is required? The typical level of rigor progression is as follows: Statement, Branch/Decision, Modified Condition/Decision Coverage (MC/DC), LCSAJ (Linear Code Sequence and Jump) Will coverage be measured against tests that verify requirements levied on the system under test (DO-178B)? Is the object code generated directly traceable to source code statements? Certain certifications (e.g., DO-178B Level A) require coverage at the assembly level if this is not the case: "Then, additional verification should be performed on the object code to establish the correctness of such generated code sequences" (DO-178B) para-6.4.4.2. Software authors can look at test coverage results to devise additional tests and input or configuration sets to increase the coverage over vital functions. Two common forms of test coverage are statement (or line) coverage and branch (or edge) coverage. Line coverage reports on the execution footprint of testing in terms of which lines of code were executed to complete the test. Edge coverage reports which branches or code decision points were executed to complete the test. They both report a coverage metric, measured as a percentage. The meaning of this depends on what form(s) of coverage have been used, as 67% branch coverage is more comprehensive than 67% statement coverage. Generally, test coverage tools incur computation and logging in addition to that of the actual program, thereby slowing down the application, so typically this analysis is not done in production. 
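As a sketch of the "built with special options" workflow described above, the GCC/gcov toolchain can instrument a program for statement and branch counting; the file name and the example inputs below are illustrative assumptions.

/* example.c - a tiny program for trying out line and branch coverage.
 * A typical gcov workflow (commands shown for illustration):
 *   gcc --coverage -O0 example.c -o example
 *   ./example 5
 *   ./example -3
 *   gcov example.c          # writes example.c.gcov with per-line counts
 */
#include <stdio.h>
#include <stdlib.h>

static const char *classify(int x) {
    if (x > 0) {
        return "positive";     /* counted only when a run passes a positive value */
    }
    return "non-positive";     /* counted only for zero or negative inputs */
}

int main(int argc, char **argv) {
    int x = (argc > 1) ? atoi(argv[1]) : 0;
    printf("%s\n", classify(x));
    return 0;
}

Lines that were never executed then stand out in the generated .gcov report (gcov marks them with a ##### count), pointing at candidates for additional tests.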
As one might expect, there are classes of software that cannot be feasibly subjected to these coverage tests, though a degree of coverage mapping can be approximated through analysis rather than direct testing. There are also some sorts of defects which are affected by such tools. In particular, some race conditions or similar real time sensitive operations can be masked when run under test environments; though conversely, some of these defects may become easier to find as a result of the additional overhead of the testing code. Most professional software developers use C1 and C2 coverage. C1 stands for statement coverage and C2 for branch or condition coverage. With a combination of C1 and C2, it is possible to cover most statements in a code base. Statement coverage would also cover function coverage with entry and exit, loop, path, state flow, control flow and data flow coverage. With these methods, it is possible to achieve nearly 100% code coverage in most software projects. Notable code coverage tools Hardware manufacturers Aldec Mentor Graphics Silvaco Synopsys Software LDRA Testbed Parasoft C / C++ Cantata++ Gcov Insure++ LDRA Testbed Tcov Trucov Squish (Froglogic) C# .NET DevPartner Studio JetBrains NCover Java Clover DevPartner Java EMMA Jtest LDRA Testbed PHP PHPUnit, also need Xdebug to make coverage reports Usage in industry Test coverage is one consideration in the safety certification of avionics equipment. The guidelines by which avionics gear is certified by the Federal Aviation Administration (FAA) is documented in DO-178B and DO-178C. Test coverage is also a requirement in part 6 of the automotive safety standard ISO 26262 Road Vehicles - Functional Safety. See also Cyclomatic complexity Intelligent verification Linear code sequence and jump Modified condition/decision coverage Mutation testing Regression testing Software metric Static program analysis White-box testing Java code coverage tools References Software metrics Software testing tools
Code coverage
[ "Mathematics", "Engineering" ]
2,529
[ "Software engineering", "Quantity", "Metrics", "Software metrics" ]
7,037
https://en.wikipedia.org/wiki/Chlamydia
Chlamydia, or more specifically a chlamydia infection, is a sexually transmitted infection caused by the bacterium Chlamydia trachomatis. Most people who are infected have no symptoms. When symptoms do appear, they may occur only several weeks after infection; the incubation period between exposure and being able to infect others is thought to be on the order of two to six weeks. Symptoms in women may include vaginal discharge or burning with urination. Symptoms in men may include discharge from the penis, burning with urination, or pain and swelling of one or both testicles. The infection can spread to the upper genital tract in women, causing pelvic inflammatory disease, which may result in future infertility or ectopic pregnancy. Chlamydia infections can occur in other areas besides the genitals, including the anus, eyes, throat, and lymph nodes. Repeated chlamydia infections of the eyes that go without treatment can result in trachoma, a common cause of blindness in the developing world. Chlamydia can be spread during vaginal, anal, oral, or manual sex and can be passed from an infected mother to her baby during childbirth. The eye infections may also be spread by personal contact, flies, and contaminated towels in areas with poor sanitation. Infection by the bacterium Chlamydia trachomatis only occurs in humans. Diagnosis is often by screening, which is recommended yearly in sexually active women under the age of 25, others at higher risk, and at the first prenatal visit. Testing can be done on the urine or a swab of the cervix, vagina, or urethra. Rectal or mouth swabs are required to diagnose infections in those areas. Prevention is by not having sex, the use of condoms, or having sex with only one other person, who is not infected. Chlamydia can be cured by antibiotics, with typically either azithromycin or doxycycline being used. Erythromycin or azithromycin is recommended in babies and during pregnancy. Sexual partners should also be treated, and infected people should be advised not to have sex for seven days and until symptom free. Gonorrhea, syphilis, and HIV should be tested for in those who have been infected. Following treatment, people should be tested again after three months. Chlamydia is one of the most common sexually transmitted infections, affecting about 4.2% of women and 2.7% of men worldwide. In 2015, about 61 million new cases occurred globally. In the United States, about 1.4 million cases were reported in 2014. Infections are most common among those between the ages of 15 and 25 and are more common in women than men. In 2015, infections resulted in about 200 deaths. The word chlamydia is from the Greek word for 'cloak'. Signs and symptoms Genital disease Women Chlamydial infection of the cervix (neck of the womb) is a sexually transmitted infection which has no symptoms for around 70% of women infected. The infection can be passed through vaginal, anal, oral, or manual sex. Of those who have an asymptomatic infection that is not detected by their doctor, approximately half will develop pelvic inflammatory disease (PID), a generic term for infection of the uterus, fallopian tubes, and/or ovaries. PID can cause scarring inside the reproductive organs, which can later cause serious complications, including chronic pelvic pain, difficulty becoming pregnant, ectopic (tubal) pregnancy, and other dangerous complications of pregnancy. Chlamydia is known as the "silent epidemic", as at least 70% of genital C. 
trachomatis infections in women (and 50% in men) are asymptomatic at the time of diagnosis, and can linger for months or years before being discovered. Signs and symptoms may include abnormal vaginal bleeding or discharge, abdominal pain, painful sexual intercourse, fever, painful urination or the urge to urinate more often than usual (urinary urgency). For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. Guidelines recommend that all women attending for emergency contraception are offered chlamydia testing, with studies showing up to 9% of women aged under 25 years had chlamydia. Men In men, those with a chlamydial infection show symptoms of infectious inflammation of the urethra in about 50% of cases. Symptoms that may occur include: a painful or burning sensation when urinating, an unusual discharge from the penis, testicular pain or swelling, or fever. If left untreated, chlamydia in men can spread to the testicles causing epididymitis, which in rare cases can lead to sterility. Chlamydia is also a potential cause of prostatic inflammation in men, although the exact relevance in prostatitis is difficult to ascertain due to possible contamination from urethritis. Eye disease Trachoma is a chronic conjunctivitis caused by Chlamydia trachomatis. It was once the leading cause of blindness worldwide, but its role diminished from causing 15% of blindness cases in 1995 to 3.6% in 2002. The infection can be spread from eye to eye by fingers, shared towels or cloths, coughing and sneezing, and eye-seeking flies. Symptoms include mucopurulent ocular discharge, irritation, redness, and lid swelling. Newborns can also develop chlamydia eye infection through childbirth (see below). Using the SAFE strategy (acronym for surgery for in-growing or in-turned lashes, antibiotics, facial cleanliness, and environmental improvements), the World Health Organization aimed (unsuccessfully) for the global elimination of trachoma by 2020 (GET 2020 initiative). The updated World Health Assembly neglected tropical diseases road map (2021–2030) sets 2030 as the new timeline for global elimination. Joints Chlamydia may also cause reactive arthritis—the triad of arthritis, conjunctivitis and urethral inflammation—especially in young men. About 15,000 men develop reactive arthritis due to chlamydia infection each year in the U.S., and about 5,000 are permanently affected by it. It can occur in both sexes, though is more common in men. Infants As many as half of all infants born to mothers with chlamydia will be born with the disease. Chlamydia can affect infants by causing spontaneous abortion; premature birth; conjunctivitis, which may lead to blindness; and pneumonia. Conjunctivitis due to chlamydia typically occurs one week after birth (compared with chemical causes (within hours) or gonorrhea (2–5 days)). Other conditions A different serovar of Chlamydia trachomatis is also the cause of lymphogranuloma venereum, an infection of the lymph nodes and lymphatics. It usually presents with genital ulceration and swollen lymph nodes in the groin, but it may also manifest as rectal inflammation, fever or swollen lymph nodes in other regions of the body. Transmission Chlamydia can be transmitted during vaginal, anal, oral, or manual sex or direct contact with infected tissue such as the conjunctiva. 
Chlamydia can also be passed from an infected mother to her baby during vaginal childbirth. It is assumed that the probability of becoming infected is proportionate to the number of bacteria one is exposed to. Pathophysiology Chlamydia bacteria have the ability to establish long-term associations with host cells. When an infected host cell is starved for various nutrients such as amino acids (for example, tryptophan), iron, or vitamins, this has a negative consequence for chlamydia bacteria since the organism is dependent on the host cell for these nutrients. Long-term cohort studies indicate that approximately 50% of those infected clear the infection within a year, 80% within two years, and 90% within three years. The starved chlamydia bacteria can enter a persistent growth state where they stop cell division and become morphologically aberrant by increasing in size. Persistent organisms remain viable as they are capable of returning to a normal growth state once conditions in the host cell improve. There is debate as to whether persistence has relevance: some believe that persistent chlamydia bacteria are the cause of chronic chlamydial diseases. Some antibiotics such as β-lactams have been found to induce a persistent-like growth state. Diagnosis The diagnosis of genital chlamydial infections evolved rapidly from the 1990s through 2006. Nucleic acid amplification tests (NAAT), such as polymerase chain reaction (PCR), transcription mediated amplification (TMA), and DNA strand displacement amplification (SDA), are now the mainstays. NAAT for chlamydia may be performed on swab specimens sampled from the cervix (women) or urethra (men), on self-collected vaginal swabs, or on voided urine. NAAT has been estimated to have a sensitivity of approximately 90% and a specificity of approximately 99%, regardless of sampling from a cervical swab or by urine specimen. In women seeking treatment in a sexually transmitted infection clinic where a urine test is negative, a subsequent cervical swab has been estimated to be positive approximately 2% of the time. At present, the NAATs have regulatory approval only for testing urogenital specimens, although rapidly evolving research indicates that they may give reliable results on rectal specimens. Because of improved test accuracy, ease and convenience of specimen management, and ease of screening sexually active men and women, the NAATs have largely replaced culture, the historic gold standard for chlamydia diagnosis, and the non-amplified probe tests. The latter test is relatively insensitive, successfully detecting only 60–80% of infections in asymptomatic women, and often giving falsely-positive results. Culture remains useful in selected circumstances and is currently the only assay approved for testing non-genital specimens. Other methods also exist including: ligase chain reaction (LCR), direct fluorescent antibody testing, enzyme immunoassay, and cell culture. Whether the swab sample is collected at home or in a clinic makes no difference in terms of the number of patients treated, but the implications for cure, reinfection, partner management, and safety are unknown. Rapid point-of-care tests are, as of 2020, not thought to be effective for diagnosing chlamydia in men of reproductive age and non-pregnant women because of high false-negative rates. Prevention Prevention is by not having sex, the use of condoms, or having sex with only one other person, who is not infected. 
Screening For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. For pregnant women, guidelines vary: screening women with age or other risk factors is recommended by the U.S. Preventive Services Task Force (USPSTF) (which recommends screening women under 25) and the American Academy of Family Physicians (which recommends screening women aged 25 or younger). The American College of Obstetricians and Gynecologists recommends screening all at risk, while the Centers for Disease Control and Prevention recommend universal screening of pregnant women. The USPSTF acknowledges that in some communities there may be other risk factors for infection, such as ethnicity. Evidence-based recommendations for screening initiation, intervals and termination are currently not possible. For men, the USPSTF concludes evidence is currently insufficient to determine if regular screening of men for chlamydia is beneficial. They recommend regular screening of men who are at increased risk for HIV or syphilis infection. A Cochrane review found that the effects of screening are uncertain in terms of chlamydia transmission but that screening probably reduces the risk of pelvic inflammatory disease in women. In the United Kingdom the National Health Service (NHS) aims to: Prevent and control chlamydia infection through early detection and treatment of asymptomatic infection; Reduce onward transmission to sexual partners; Prevent the consequences of untreated infection; Test at least 25 percent of the sexually active under 25 population annually. Retest after treatment. Treatment C. trachomatis infection can be effectively cured with antibiotics. Guidelines recommend azithromycin, doxycycline, erythromycin, levofloxacin or ofloxacin. In men, doxycycline (100 mg twice a day for 7 days) is probably more effective than azithromycin (1 g single dose) but evidence for the relative effectiveness of antibiotics in women is very uncertain. Agents recommended during pregnancy include erythromycin or amoxicillin. An option for treating sexual partners of those with chlamydia or gonorrhea includes patient-delivered partner therapy (PDT or PDPT), which is the practice of treating the sex partners of index cases by providing prescriptions or medications to the patient to take to his/her partner without the health care provider first examining the partner. Following treatment people should be tested again after three months to check for reinfection. Test of cure may be false-positive due to the limitations of NAAT in a bacterial (rather than a viral) context, since targeted genetic material may persist in the absence of viable organisms. Epidemiology Globally, as of 2015, sexually transmitted chlamydia affects approximately 61 million people. It is more common in women (3.8%) than men (2.5%). In 2015 it resulted in about 200 deaths. In the United States about 1.6 million cases were reported in 2016. The CDC estimates that if one includes unreported cases there are about 2.9 million each year. It affects around 2% of young people. Chlamydial infection is the most common bacterial sexually transmitted infection in the UK. Chlamydia causes more than 250,000 cases of epididymitis in the U.S. each year. Chlamydia causes 250,000 to 500,000 cases of PID every year in the United States. 
Women infected with chlamydia are up to five times more likely to become infected with HIV, if exposed. See also Chlamydia research References External links WHO fact sheet on chlamydia About Chlamydia from the CDC Sexually transmitted diseases and infections Chlamydiota Infectious causes of cancer Wikipedia medicine articles ready to translate Vaccine-preventable diseases
Chlamydia
[ "Biology" ]
3,138
[ "Vaccination", "Vaccine-preventable diseases" ]
7,038
https://en.wikipedia.org/wiki/Candidiasis
Candidiasis is a fungal infection due to any species of the genus Candida (a yeast). When it affects the mouth, in some countries it is commonly called thrush. Signs and symptoms include white patches on the tongue or other areas of the mouth and throat. Other symptoms may include soreness and problems swallowing. When it affects the vagina, it may be referred to as a yeast infection or thrush. Signs and symptoms include genital itching, burning, and sometimes a white "cottage cheese-like" discharge from the vagina. Yeast infections of the penis are less common and typically present with an itchy rash. Very rarely, yeast infections may become invasive, spreading to other parts of the body. This may result in fevers, among other symptoms. More than 20 types of Candida may cause infection with Candida albicans being the most common. Infections of the mouth are most common among children less than one month old, the elderly, and those with weak immune systems. Conditions that result in a weak immune system include HIV/AIDS, the medications used after organ transplantation, diabetes, and the use of corticosteroids. Other risk factors include during breastfeeding, following antibiotic therapy, and the wearing of dentures. Vaginal infections occur more commonly during pregnancy, in those with weak immune systems, and following antibiotic therapy. Individuals at risk for invasive candidiasis include low birth weight babies, people recovering from surgery, people admitted to intensive care units, and those with an otherwise compromised immune system. Efforts to prevent infections of the mouth include the use of chlorhexidine mouthwash in those with poor immune function and washing out the mouth following the use of inhaled steroids. Little evidence supports probiotics for either prevention or treatment, even among those with frequent vaginal infections. For infections of the mouth, treatment with topical clotrimazole or nystatin is usually effective. Oral or intravenous fluconazole, itraconazole, or amphotericin B may be used if these do not work. A number of topical antifungal medications may be used for vaginal infections, including clotrimazole. In those with widespread disease, an echinocandin such as caspofungin or micafungin is used. A number of weeks of intravenous amphotericin B may be used as an alternative. In certain groups at very high risk, antifungal medications may be used preventively, and concomitantly with medications known to precipitate infections. Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease. About three-quarters of women have at least one yeast infection at some time during their lives. Widespread disease is rare except in those who have risk factors. Signs and symptoms Signs and symptoms of candidiasis vary depending on the area affected. Most candidal infections result in minimal complications such as redness, itching, and discomfort, though complications may be severe or even fatal if left untreated in certain populations. In healthy (immunocompetent) persons, candidiasis is usually a localized infection of the skin, fingernails or toenails (onychomycosis), or mucosal membranes, including the oral cavity and pharynx (thrush), esophagus, and the sex organs (vagina, penis, etc.); less commonly in healthy individuals, the gastrointestinal tract, urinary tract, and respiratory tract are sites of candida infection. 
In immunocompromised individuals, Candida infections in the esophagus occur more frequently than in healthy individuals and have a higher potential of becoming systemic, causing a much more serious condition, a fungemia called candidemia. Symptoms of esophageal candidiasis include difficulty swallowing, painful swallowing, abdominal pain, nausea, and vomiting. Mouth Infection in the mouth is characterized by white discolorations in the tongue, around the mouth, and in the throat. Irritation may also occur, causing discomfort when swallowing. Thrush is commonly seen in infants. It is not considered abnormal in infants unless it lasts longer than a few weeks. Genitals Infection of the vagina or vulva may cause severe itching, burning, soreness, irritation, and a whitish or whitish-gray cottage cheese-like discharge. Symptoms of infection of the male genitalia (balanitis thrush) include red skin around the head of the penis, swelling, irritation, itchiness and soreness of the head of the penis, thick, lumpy discharge under the foreskin, unpleasant odour, difficulty retracting the foreskin (phimosis), and pain when passing urine or during sex. Skin Signs and symptoms of candidiasis in the skin include itching, irritation, and chafing or broken skin. Invasive infection Common symptoms of gastrointestinal candidiasis in healthy individuals are anal itching, belching, bloating, indigestion, nausea, diarrhea, gas, intestinal cramps, vomiting, and gastric ulcers. Perianal candidiasis can cause anal itching; the lesion can be red, papular, or ulcerative in appearance, and it is not considered to be a sexually transmitted infection. Abnormal proliferation of the candida in the gut may lead to dysbiosis. While it is not yet clear, this alteration may be the source of symptoms generally described as the irritable bowel syndrome, and other gastrointestinal diseases. Neurological symptoms Systemic candidiasis can affect the central nervous system causing a variety of neurological symptoms, with a presentation similar to meningitis. Causes Candida yeasts are generally present in healthy humans, frequently part of the human body's normal oral and intestinal flora, and particularly on the skin; however, their growth is normally limited by the human immune system and by competition of other microorganisms, such as bacteria occupying the same locations in the human body. Candida requires moisture for growth, notably on the skin. For example, wearing wet swimwear for long periods of time is believed to be a risk factor. Candida can also cause diaper rashes in babies. In extreme cases, superficial infections of the skin or mucous membranes may enter the bloodstream and cause systemic Candida infections. Factors that increase the risk of candidiasis include HIV/AIDS, mononucleosis, cancer treatments, steroids, stress, antibiotic therapy, diabetes, and nutrient deficiency. Hormone replacement therapy and infertility treatments may also be predisposing factors. Use of inhaled corticosteroids increases risk of candidiasis of the mouth. Inhaled corticosteroids with other risk factors such as antibiotics, oral glucocorticoids, not rinsing mouth after use of inhaled corticosteroids or high dose of inhaled corticosteroids put people at even higher risk. Treatment with antibiotics can lead to eliminating the yeast's natural competitors for resources in the oral and intestinal flora, thereby increasing the severity of the condition. 
A weakened or undeveloped immune system or metabolic illnesses are significant predisposing factors of candidiasis. Almost 15% of people with weakened immune systems develop a systemic illness caused by Candida species. Diets high in simple carbohydrates have been found to affect rates of oral candidiases. C. albicans was isolated from the vaginas of 19% of apparently healthy women, i.e., those who experienced few or no symptoms of infection. External use of detergents or douches or internal disturbances (hormonal or physiological) can perturb the normal vaginal flora, consisting of lactic acid bacteria, such as lactobacilli, and result in an overgrowth of Candida cells, causing symptoms of infection, such as local inflammation. Pregnancy and the use of oral contraceptives have been reported as risk factors. Diabetes mellitus and the use of antibiotics are also linked to increased rates of yeast infections. In penile candidiasis, the causes include sexual intercourse with an infected individual, low immunity, antibiotics, and diabetes. Male genital yeast infections are less common, but a yeast infection on the penis caused from direct contact via sexual intercourse with an infected partner is not uncommon. Breast-feeding mothers may also develop candidiasis on and around the nipple as a result of moisture created by excessive milk-production. Vaginal candidiasis can cause congenital candidiasis in newborns. Diagnosis In oral candidiasis, simply inspecting the person's mouth for white patches and irritation may make the diagnosis. A sample of the infected area may also be taken to determine what organism is causing the infection. Symptoms of vaginal candidiasis are also present in the more common bacterial vaginosis; aerobic vaginitis is distinct and should be excluded in the differential diagnosis. In a 2002 study, only 33% of women who were self-treating for a yeast infection were found to have such an infection, while most had either bacterial vaginosis or a mixed-type infection. Diagnosis of a yeast infection is confirmed either via microscopic examination or culturing. For identification by light microscopy, a scraping or swab of the affected area is placed on a microscope slide. A single drop of 10% potassium hydroxide (KOH) solution is then added to the specimen. The KOH dissolves the skin cells, but leaves the Candida cells intact, permitting visualization of pseudohyphae and budding yeast cells typical of many Candida species. For the culturing method, a sterile swab is rubbed on the infected skin surface. The swab is then streaked on a culture medium. The culture is incubated at 37 °C (98.6 °F) for several days, to allow development of yeast or bacterial colonies. The characteristics (such as morphology and colour) of the colonies may allow initial diagnosis of the organism causing disease symptoms. Respiratory, gastrointestinal, and esophageal candidiasis require an endoscopy to diagnose. For gastrointestinal candidiasis, it is necessary to obtain a 3–5 milliliter sample of fluid from the duodenum for fungal culture. The diagnosis of gastrointestinal candidiasis is based upon the culture containing in excess of 1,000 colony-forming units per milliliter. 
Classification Candidiasis may be divided into these types: Mucosal candidiasis Oral candidiasis (thrush, oropharyngeal candidiasis) Pseudomembranous candidiasis Erythematous candidiasis Hyperplastic candidiasis Denture-related stomatitis — Candida organisms are involved in about 90% of cases Angular cheilitis — Candida species are responsible for about 20% of cases, mixed infection of C. albicans and Staphylococcus aureus for about 60% of cases. Median rhomboid glossitis Candidal vulvovaginitis (vaginal yeast infection) Candidal balanitis — infection of the glans penis, almost exclusively occurring in uncircumcised males Esophageal candidiasis (candidal esophagitis) Gastrointestinal candidiasis Respiratory candidiasis Cutaneous candidiasis Candidal folliculitis Candidal intertrigo Candidal paronychia Perianal candidiasis, may present as pruritus ani Candidid Chronic mucocutaneous candidiasis Congenital cutaneous candidiasis Diaper candidiasis: an infection of a child's diaper area Erosio interdigitalis blastomycetica Candidal onychomycosis (nail infection) caused by Candida Systemic candidiasis Candidemia, a form of fungemia which may lead to sepsis Invasive candidiasis (disseminated candidiasis) — organ infection by Candida Chronic systemic candidiasis (hepatosplenic candidiasis) — sometimes arises during recovery from neutropenia Antibiotic candidiasis (iatrogenic candidiasis) Prevention A diet that supports the immune system and is not high in simple carbohydrates contributes to a healthy balance of the oral and intestinal flora. While yeast infections are associated with diabetes, the level of blood sugar control may not affect the risk. Wearing cotton underwear may help to reduce the risk of developing skin and vaginal yeast infections, along with not wearing wet clothes for long periods of time. For women who experience recurrent yeast infections, there is limited evidence that oral or intravaginal probiotics help to prevent future infections. This includes either as pills or as yogurt. Oral hygiene can help prevent oral candidiasis when people have a weakened immune system. For people undergoing cancer treatment, chlorhexidine mouthwash can prevent or reduce thrush. People who use inhaled corticosteroids can reduce the risk of developing oral candidiasis by rinsing the mouth with water or mouthwash after using the inhaler. People with dentures should also disinfect their dentures regularly to prevent oral candidiasis. Treatment Candidiasis is treated with antifungal medications; these include clotrimazole, nystatin, fluconazole, voriconazole, amphotericin B, and echinocandins. Intravenous fluconazole or an intravenous echinocandin such as caspofungin are commonly used to treat immunocompromised or critically ill individuals. The 2016 revision of the clinical practice guideline for the management of candidiasis lists a large number of specific treatment regimens for Candida infections that involve different Candida species, forms of antifungal drug resistance, immune statuses, and infection localization and severity. Gastrointestinal candidiasis in immunocompetent individuals is treated with 100–200 mg fluconazole per day for 2–3 weeks. Localized infection Mouth and throat candidiasis are treated with antifungal medication. Oral candidiasis usually responds to topical treatments; otherwise, systemic antifungal medication may be needed for oral infections. 
Candidal skin infections in the skin folds (candidal intertrigo) typically respond well to topical antifungal treatments (e.g., nystatin or miconazole). For breastfeeding mothers topical miconazole is the most effective treatment for treating candidiasis on the breasts. Gentian violet can be used for thrush in breastfeeding babies. Systemic treatment with antifungals by mouth is reserved for severe cases or if treatment with topical therapy is unsuccessful. Candida esophagitis may be treated orally or intravenously; for severe or azole-resistant esophageal candidiasis, treatment with amphotericin B may be necessary. Vaginal yeast infections are typically treated with topical antifungal agents. Penile yeast infections are also treated with antifungal agents, but while an internal treatment may be used (such as a pessary) for vaginal yeast infections, only external treatments – such as a cream – can be recommended for penile treatment. A one-time dose of fluconazole by mouth is 90% effective in treating a vaginal yeast infection. For severe nonrecurring cases, several doses of fluconazole is recommended. Local treatment may include vaginal suppositories or medicated douches. Other types of yeast infections require different dosing. C. albicans can develop resistance to fluconazole, this being more of an issue in those with HIV/AIDS who are often treated with multiple courses of fluconazole for recurrent oral infections. For vaginal yeast infection in pregnancy, topical imidazole or triazole antifungals are considered the therapy of choice owing to available safety data. Systemic absorption of these topical formulations is minimal, posing little risk of transplacental transfer. In vaginal yeast infection in pregnancy, treatment with topical azole antifungals is recommended for seven days instead of a shorter duration. For vaginal yeast infections, many complementary treatments are proposed, however a number have side effects. No benefit from probiotics has been found for active infections. Blood-borne infection Candidemia occurs when any Candida species infects the blood. Its treatment typically consists of oral or intravenous antifungal medications. Examples include intravenous fluconazole or an echinocandin such as caspofungin may be used. Amphotericin B is another option. Prognosis In hospitalized patients who develop candidemia, age is an important prognostic factor. Mortality following candidemia is 50% in patients aged ≥75 years and 24% in patients aged <75 years. Among individuals being treated in intensive care units, the mortality rate is about 30–50% when systemic candidiasis develops. Epidemiology Oral candidiasis is the most common fungal infection of the mouth, and it also represents the most common opportunistic oral infection in humans. Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease. It is estimated that 20% of women may be asymptomatically colonized by vaginal yeast. In the United States there are approximately 1.4 million doctor office visits every year for candidiasis. About three-quarters of women have at least one yeast infection at some time during their lives. Esophageal candidiasis is the most common esophageal infection in persons with AIDS and accounts for about 50% of all esophageal infections, often coexisting with other esophageal diseases. About two-thirds of people with AIDS and esophageal candidiasis also have oral candidiasis. 
Candidal sepsis is rare. Candida is the fourth most common cause of bloodstream infections among hospital patients in the United States. The incidence of bloodstream candida in intensive care units varies widely between countries. History Descriptions of what sounds like oral thrush go back to the time of Hippocrates circa 460–370 BCE. The first description of a fungus as the causative agent of an oropharyngeal and oesophageal candidosis was by Bernhard von Langenbeck in 1839. Vulvovaginal candidiasis was first described in 1849 by Wilkinson. In 1875, Haussmann demonstrated the causative organism in both vulvovaginal and oral candidiasis is the same. With the advent of antibiotics following World War II, the rates of candidiasis increased. The rates then decreased in the 1950s following the development of nystatin. The colloquial term "thrush" is of unknown origin but may stem from an unrecorded Old English word *þrusc or from a Scandinavian root. The term is not related to the bird of the same name. The term candidosis is largely used in British English, and candidiasis in American English. Candida is also pronounced differently; in American English, the stress is on the "i", whereas in British English the stress is on the first syllable. The genus Candida and species C. albicans were described by botanist Christine Marie Berkhout in her doctoral thesis at the University of Utrecht in 1923. Over the years, the classification of the genera and species has evolved. Obsolete names for this genus include Mycotorula and Torulopsis. The species has also been known in the past as Monilia albicans and Oidium albicans. The current classification is nomen conservandum, which means the name is authorized for use by the International Botanical Congress (IBC). The genus Candida includes about 150 different species. However, only a few are known to cause human infections. C. albicans is the most significant pathogenic species. Other species pathogenic in humans include C. auris, C. tropicalis, C. parapsilosis, C. dubliniensis, and C. lusitaniae. The name Candida was proposed by Berkhout. It is from the Latin word toga candida, referring to the white toga (robe) worn by candidates for the Senate of the ancient Roman republic. The specific epithet albicans also comes from Latin, albicare meaning "to whiten". These names refer to the generally white appearance of Candida species when cultured. Alternative medicine A 2005 publication noted that "a large pseudoscientific cult" has developed around the topic of Candida, with claims stating that up to one in three people are affected by yeast-related illness, particularly a condition called "Candidiasis hypersensitivity". Some practitioners of alternative medicine have promoted these purported conditions and sold dietary supplements as supposed cures; a number of them have been prosecuted. In 1990, alternative health vendor Nature's Way signed an FTC consent agreement not to misrepresent in advertising any self-diagnostic test concerning yeast conditions or to make any unsubstantiated representation concerning any food or supplement's ability to control yeast conditions, with a fine of $30,000 payable to the National Institutes of Health for research in genuine candidiasis. Research High level Candida colonization is linked to several diseases of the gastrointestinal tract including Crohn's disease. There has been an increase in resistance to antifungals worldwide over the past 30–40 years. 
References External links Animal fungal diseases Bird diseases Bovine diseases Horse diseases Mycosis-related cutaneous conditions Sheep and goat diseases Wikipedia medicine articles ready to translate Wikipedia emergency medicine articles ready to translate Fungal diseases
Candidiasis
[ "Biology" ]
4,574
[ "Fungi", "Fungal diseases" ]
7,039
https://en.wikipedia.org/wiki/Control%20theory
Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability, often with the aim of achieving a degree of optimality. To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between the actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation that has revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics. Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system. Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, by Charles Sturm, and, in 1895, by Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria, and from 1922 onwards by the development of PID control theory by Nicolas Minorsky. Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs; thus control theory also has applications in life sciences, computer engineering, sociology and operations research. History Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors. A centrifugal governor was already used to regulate the velocity of windmills. Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1895, resulting in what is now known as the Routh–Hurwitz theorem. A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds. 
By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics. Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship. The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant. Open-loop and closed-loop (feedback) control Classical control theory Linear and nonlinear control theory The field of control theory can be divided into two branches: Linear control theory – This applies to systems made of devices which obey the superposition principle, which means roughly that the output is proportional to the input. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems are amenable to powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion. These lead to a description of the system using terms like bandwidth, frequency response, eigenvalues, gain, resonant frequencies, zeros and poles, which give solutions for system response and design techniques for most systems of interest. Nonlinear control theory – This covers a wider class of systems that do not obey the superposition principle, and applies to more real-world systems because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The few mathematical techniques which have been developed to handle them are more difficult and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theorem, and describing functions. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. 
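To make the preceding point concrete, the following is a minimal illustrative sketch (not part of the original article) of how a nonlinear system can be analyzed by direct numerical simulation. It uses a damped pendulum, whose equation of motion theta'' = -(g/L)*sin(theta) - b*theta' is nonlinear because of the sine term; the model, the parameter values and the fixed-step Euler integrator are all assumptions chosen for brevity.

```python
# Illustrative sketch (not from the article): numerical simulation of a
# nonlinear system, here a damped pendulum
#     theta'' = -(g/L) * sin(theta) - b * theta'
# integrated with a simple fixed-step (semi-implicit Euler) scheme.
import math

def simulate_pendulum(theta0=1.0, omega0=0.0, g=9.81, length=1.0, damping=0.5,
                      dt=0.001, t_end=10.0):
    """Return the angle history of a damped pendulum released from theta0."""
    theta, omega = theta0, omega0
    angles = [theta]
    for _ in range(int(t_end / dt)):
        # The sin(theta) term is what makes the dynamics nonlinear,
        # so frequency-domain (LTI) tools do not apply directly.
        alpha = -(g / length) * math.sin(theta) - damping * omega
        omega += dt * alpha
        theta += dt * omega
        angles.append(theta)
    return angles

if __name__ == "__main__":
    history = simulate_pendulum()
    print(f"final angle after 10 s: {history[-1]:+.4f} rad")  # decays toward 0
```

For small angles, sin(theta) ≈ theta, and the same system can then be treated with the linearization discussed next.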
If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system using perturbation theory, and linear techniques can be used. Analysis techniques - frequency domain and time domain Mathematical techniques for analyzing and designing control systems fall into two different categories: Frequency domain – In this type the values of the state variables, the mathematical variables representing the system's input, output and feedback are represented as functions of frequency. The input signal and the system's transfer function are converted from time functions to functions of frequency by a transform such as the Fourier transform, Laplace transform, or Z transform. The advantage of this technique is that it results in a simplification of the mathematics; the differential equations that represent the system are replaced by algebraic equations in the frequency domain which is much simpler to solve. However, frequency domain techniques can only be used with linear systems, as mentioned above. Time-domain state space representation – In this type the values of the state variables are represented as functions of time. With this model, the system being analyzed is represented by one or more differential equations. Since frequency domain techniques are limited to linear systems, time domain is widely used to analyze real-world nonlinear systems. Although these are more difficult to solve, modern computer simulation techniques such as simulation languages have made their analysis routine. In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With p inputs and q outputs, we would otherwise have to write down q × p Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space. System interfacing - SISO & MIMO Control systems can be divided into different categories depending on the number of inputs and outputs. Single-input single-output (SISO) – This is the simplest and most common type, in which one output is controlled by one control signal. Examples are the cruise control example above, or an audio system, in which the control input is the input audio signal and the output is the sound waves from the speaker. Multiple-input multiple-output (MIMO) – These are found in more complicated systems. For example, modern large telescopes such as the Keck and MMT have mirrors composed of many separate segments each controlled by an actuator. 
The shape of the entire mirror is constantly adjusted by a MIMO active optics control system using input from multiple sensors at the focal plane, to compensate for changes in the mirror shape due to thermal expansion, contraction, stresses as it is rotated and distortion of the wavefront due to turbulence in the atmosphere. Complicated systems such as nuclear reactors and human cells are simulated by a computer as large MIMO control systems. Classical SISO system design The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second order and single variable system response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include either or both a Lead or Lag filter. The ultimate end goal is to meet requirements typically provided in the time-domain called the step response, or at times in the frequency domain called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically Gain and Phase margin and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model. Modern MIMO system design Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Matrix methods are significantly limited for MIMO systems where linear independence cannot be assured in the relationship between inputs and outputs. Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory. Topics in control theory Stability The stability of a general dynamical system with no input can be described with Lyapunov stability criteria. A linear system is called bounded-input bounded-output (BIBO) stable if its output will stay bounded for any bounded input. Stability for nonlinear systems that take an input is input-to-state stability (ISS), which combines Lyapunov stability and a notion similar to BIBO stability. For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems. 
Mathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative-real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function complex poles reside in the open left half of the complex plane for continuous time, when the Laplace transform is used to obtain the transfer function, or inside the unit circle for discrete time, when the Z-transform is used. The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates where the x axis is the real axis and the discrete Z-transform is in circular coordinates where the ρ axis is the real axis. When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero. If a system in question has an impulse response of x[n] = 0.5^n u[n], then the Z-transform (see this example) is given by X(z) = 1/(1 − 0.5z^−1), which has a pole at z = 0.5 (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle. However, if the impulse response was x[n] = 1.5^n u[n], then the Z-transform is X(z) = 1/(1 − 1.5z^−1), which has a pole at z = 1.5 and is not BIBO stable since the pole has a modulus strictly greater than one. Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots. Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll. Controllability and observability Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of observing, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable. 
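The controllability and observability notions just described can be tested numerically for a linear time-invariant model x' = Ax + Bu, y = Cx using the standard Kalman rank conditions: (A, B) is controllable when [B, AB, ..., A^(n−1)B] has full rank n, and (A, C) is observable when the corresponding observability matrix has full rank. The sketch below is illustrative and not part of the original article; the double-integrator example and the use of NumPy are assumptions.

```python
# Illustrative sketch (not from the article): Kalman rank tests for
# controllability and observability of x' = A x + B u,  y = C x.
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def observability_matrix(A, C):
    """Stack [C; CA; CA^2; ...; CA^(n-1)] row-wise."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

if __name__ == "__main__":
    # Double integrator: position/velocity state, force input, position output.
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    n = A.shape[0]
    print("controllable:", np.linalg.matrix_rank(controllability_matrix(A, B)) == n)
    print("observable:  ", np.linalg.matrix_rank(observability_matrix(A, C)) == n)
```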
From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis. Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors. Control specification Several different control strategies have been devised in the past years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially robotics or aircraft cruise control). A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it would be desired to obtain particular dynamics in the closed loop: i.e. that the poles have Re[λ] < −λ̄, where λ̄ is a fixed value strictly greater than zero, instead of simply asking that Re[λ] < 0. Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included. Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see after). Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI). Model identification and robustness A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations, otherwise, the true system dynamics can be so complicated that a complete model is impossible. System identification The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measures from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations, for example, in the case of a mass-spring-damper system we know that m·d²x(t)/dt² = −K·x(t) − B·dx(t)/dt. 
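As an illustration of building a model directly from physical equations, the mass-spring-damper relation above can be rewritten with the state (x, v) and stepped forward in time. The sketch below is not from the original article; the parameter values and the fixed-step Euler integration are assumptions made for brevity.

```python
# Illustrative sketch (not from the article): the mass-spring-damper model
#     m * x'' = -K * x - B * x'
# written as first-order state equations (x, v) and simulated with a
# fixed-step Euler scheme.  All parameter values are arbitrary assumptions.

def simulate_mass_spring_damper(m=1.0, K=4.0, B=0.8, x0=1.0, v0=0.0,
                                dt=0.001, t_end=10.0):
    """Return the displacement history of the mass released from x0 at rest."""
    x, v = x0, v0
    history = [x]
    for _ in range(int(t_end / dt)):
        a = (-K * x - B * v) / m    # acceleration from the model equation
        v += dt * a                 # v' = a
        x += dt * v                 # x' = v
        history.append(x)
    return history

if __name__ == "__main__":
    xs = simulate_mass_spring_damper()
    print(f"displacement after 10 s: {xs[-1]:+.5f}")  # decays toward zero
```

A system-identification procedure would instead estimate m, K and B from measured input and output data, which is one reason the nominal parameters discussed next are never known exactly.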
Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal. Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance. Analysis Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain and phase margin and amplitude margin. For MIMO (multi-input multi output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). I.e., if particular robustness qualities are needed, the engineer must shift their attention to a control technique by including these qualities in its properties. Constraints A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later), and anti-wind up systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold. System classifications Linear systems control For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are not in general measured and so observers must be included and incorporated in pole placement design. Nonlinear systems control Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g., feedback linearization, backstepping, sliding mode control, trajectory linearization control normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states. Decentralized systems control When the system is controlled by multiple controllers, the problem is one of decentralized control. 
Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions. Deterministic and stochastic systems control A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks. Main control strategies Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's Theory) to ensure stability without regard to the inner dynamics of the system. The possibility to fulfill different specifications varies from the model considered and the control strategy chosen. List of the main control techniques Optimal control is a particular control technique in which the control signal optimizes a certain "cost index": for example, in the case of a satellite, the jet thrusts needed to bring it to desired trajectory that consume the least amount of fuel. Two optimal control design methods have been widely used in industrial applications, as it has been shown they can guarantee closed-loop stability. These are Model Predictive Control (MPC) and linear-quadratic-Gaussian control (LQG). The first can more explicitly take into account constraints on the signals in the system, which is an important feature in many industrial processes. However, the "optimal control" structure in MPC is only a means to achieve such a result, as it does not optimize a true performance index of the closed-loop control system. Together with PID controllers, MPC systems are the most widely used control technique in process control. Robust control deals explicitly with uncertainty in its approach to controller design. Controllers designed using robust control methods tend to be able to cope with small differences between the true system and the nominal model used for design. The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness. Examples of modern robust control techniques include H-infinity loop-shaping developed by Duncan McFarlane and Keith Glover, Sliding mode control (SMC) developed by Vadim Utkin, and safe protocols designed for control of large heterogeneous populations of electric loads in Smart Power Grid applications. Robust methods aim to achieve robust performance and/or stability in the presence of small modeling errors. Stochastic control deals with control design with uncertainty in the model. In typical stochastic control problems, it is assumed that there exist random noise and disturbances in the model and the controller, and the control design must take into account these random deviations. Adaptive control uses on-line identification of the process parameters, or modification of controller gains, thereby obtaining strong robustness properties. Adaptive controls were applied for the first time in the aerospace industry in the 1950s, and have found particular success in that field. A hierarchical control system is a type of control system in which a set of devices and governing software is arranged in a hierarchical tree. 
When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system. Intelligent control uses various AI computing approaches like artificial neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms or a combination of these methods, such as neuro-fuzzy algorithms, to control a dynamic system. Self-organized criticality control may be defined as attempts to interfere in the processes by which the self-organized system dissipates energy. People in systems and control Many active and historical figures made significant contribution to control theory including Pierre-Simon Laplace invented the Z-transform in his work on probability theory, now used to solve discrete-time control theory problems. The Z-transform is a discrete-time equivalent of the Laplace transform which is named after him. Irmgard Flugge-Lotz developed the theory of discontinuous automatic control and applied it to automatic aircraft control systems. Alexander Lyapunov in the 1890s marks the beginning of stability theory. Harold S. Black invented the concept of negative feedback amplifiers in 1927. He managed to develop stable negative feedback amplifiers in the 1930s. Harry Nyquist developed the Nyquist stability criterion for feedback systems in the 1930s. Richard Bellman developed dynamic programming in the 1940s. Warren E. Dixon, control theorist and a professor Kyriakos G. Vamvoudakis, developed synchronous reinforcement learning algorithms to solve optimal control and game theoretic problems Andrey Kolmogorov co-developed the Wiener–Kolmogorov filter in 1941. Norbert Wiener co-developed the Wiener–Kolmogorov filter and coined the term cybernetics in the 1940s. John R. Ragazzini introduced digital control and the use of Z-transform in control theory (invented by Laplace) in the 1950s. Lev Pontryagin introduced the maximum principle and the bang-bang principle. Pierre-Louis Lions developed viscosity solutions into stochastic control and optimal control methods. Rudolf E. Kálmán pioneered the state-space approach to systems and control. Introduced the notions of controllability and observability. Developed the Kalman filter for linear estimation. Ali H. Nayfeh who was one of the main contributors to nonlinear control theory and published many books on perturbation methods Jan C. Willems Introduced the concept of dissipativity, as a generalization of Lyapunov function to input/state/output systems. The construction of the storage function, as the analogue of a Lyapunov function is called, led to the study of the linear matrix inequality (LMI) in control theory. He pioneered the behavioral approach to mathematical systems theory. 
See also Examples of control systems Automation Deadbeat controller Distributed parameter systems Fractional-order control H-infinity loop-shaping Hierarchical control system Model predictive control Optimal control Process control Robust control Servomechanism State space (controls) Vector control Topics in control theory Coefficient diagram method Control reconfiguration Feedback H infinity Hankel singular value Krener's theorem Lead-lag compensator Minor loop feedback Multi-loop feedback Positive systems Radial basis function Root locus Signal-flow graphs Stable polynomial State space representation Steady state Transient response Transient state Underactuation Youla–Kucera parametrization Markov chain approximation method Other related topics Adaptive system Automation and remote control Bond graph Control engineering Control–feedback–abort loop Controller (control theory) Cybernetics Intelligent control Mathematical system theory Negative feedback amplifier Outline of management People in systems and control Perceptual control theory Systems theory References Further reading For Chemical Engineering External links Control Tutorials for Matlab, a set of worked-through control examples solved by several different methods. Control Tuning and Best Practices Advanced control structures, free on-line simulators explaining the control theory Control engineering Computer engineering Management cybernetics
Control theory
[ "Mathematics", "Technology", "Engineering" ]
6,082
[ "Computer engineering", "Applied mathematics", "Control theory", "Control engineering", "Electrical engineering", "Dynamical systems" ]
7,042
https://en.wikipedia.org/wiki/Joint%20cracking
Joint cracking is the manipulation of joints to produce a sound and related "popping" sensation. It is sometimes performed by physical therapists, chiropractors, and osteopaths pursuing a variety of outcomes. The cracking of joints, especially knuckles, was long believed to lead to arthritis and other joint problems. However, this is not supported by medical research. The cracking mechanism and the resulting sound are caused by dissolved gas (nitrogen gas) cavitation bubbles suddenly collapsing inside the joints. This happens when the joint cavity is stretched beyond its normal size. The pressure inside the joint cavity drops and the dissolved gas suddenly comes out of solution and takes gaseous form, which makes a distinct popping noise. Cracking the same knuckle again requires waiting about 20 minutes for the gas to dissolve back into the synovial fluid, after which bubbles can form again. It is possible for voluntary joint cracking by an individual to be considered as part of the obsessive–compulsive disorders spectrum. Causes For many decades, the physical mechanism that causes the cracking sound as a result of bending, twisting, or compressing joints was uncertain. Suggested causes included: Cavitation within the joint—small cavities of partial vacuum form in the synovial fluid and then rapidly collapse, producing a sharp sound. Rapid stretching of ligaments. Intra-articular (within-joint) adhesions being broken. Formation of bubbles of joint air as the joint is expanded. There were several hypotheses to explain the cracking of joints. Synovial fluid cavitation has some evidence to support it. When a spinal manipulation is performed, the applied force separates the articular surfaces of a fully encapsulated synovial joint, which in turn creates a reduction in pressure within the joint cavity. In this low-pressure environment, some of the gases that are dissolved in the synovial fluid (which are naturally found in all bodily fluids) leave the solution, making a bubble, or cavity (tribonucleation), which rapidly collapses upon itself, resulting in a "clicking" sound. The contents of the resultant gas bubble are thought to be mainly carbon dioxide, oxygen and nitrogen. The effects of this process will remain for a period of time known as the "refractory period", during which the joint cannot be "re-cracked", which lasts about 20 minutes, while the gases are slowly reabsorbed into the synovial fluid. There is some evidence that ligament laxity may be associated with an increased tendency to cavitate. In 2015, research showed that bubbles remained in the fluid after cracking, suggesting that the cracking sound was produced when the bubble within the joint was formed, not when it collapsed. In 2018, a team in France created a mathematical simulation of what happens in a joint just before it cracks. The team concluded that the sound is caused by bubbles' collapse, and bubbles observed in the fluid are the result of a partial collapse. Due to the theoretical basis and lack of physical experimentation, the scientific community is still not fully convinced of this conclusion. The snapping of tendons or scar tissue over a prominence (as in snapping hip syndrome) can also generate a loud snapping or popping sound. Relation to arthritis The common old wives' tale that cracking one's knuckles causes arthritis is without scientific evidence. A study published in 2011 examined the hand radiographs of 215 people (aged 50 to 89). 
It compared the joints of those who regularly cracked their knuckles to those who did not. The study concluded that knuckle-cracking did not cause hand osteoarthritis, no matter how many years or how often a person cracked their knuckles. This early study has been criticized for not taking into consideration the possibility of confounding factors, such as whether the ability to crack one's knuckles is associated with impaired hand functioning rather than being a cause of it. The medical doctor Donald Unger cracked the knuckles of his left hand every day for more than sixty years, but he did not crack the knuckles of his right hand. No arthritis or other ailments formed in either hand, and for this, he was awarded 2009's satirical Ig Nobel Prize in Medicine. See also Crepitus—sounds made by joint References Articles containing video clips Habits Joints
Joint cracking
[ "Biology" ]
880
[ "Behavior", "Human behavior", "Habits" ]
7,043
https://en.wikipedia.org/wiki/Chemical%20formula
A chemical formula is a way of presenting information about the chemical proportions of atoms that constitute a particular chemical compound or molecule, using chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, commas and plus (+) and minus (−) signs. These are limited to a single typographic line of symbols, which may include subscripts and superscripts. A chemical formula is not a chemical name since it does not contain any words. Although a chemical formula may imply certain simple chemical structures, it is not the same as a full chemical structural formula. Chemical formulae can fully specify the structure of only the simplest of molecules and chemical substances, and are generally more limited in power than chemical names and structural formulae. The simplest types of chemical formulae are called empirical formulae, which use letters and numbers indicating the numerical proportions of atoms of each type. Molecular formulae indicate the simple numbers of each type of atom in a molecule, with no information on structure. For example, the empirical formula for glucose is CH2O (twice as many hydrogen atoms as carbon and oxygen), while its molecular formula is C6H12O6 (12 hydrogen atoms, six carbon and oxygen atoms). Sometimes a chemical formula is complicated by being written as a condensed formula (or condensed molecular formula, occasionally called a "semi-structural formula"), which conveys additional information about the particular ways in which the atoms are chemically bonded together, either in covalent bonds, ionic bonds, or various combinations of these types. This is possible if the relevant bonding is easy to show in one dimension. An example is the condensed molecular/chemical formula for ethanol, which is CH3CH2OH or CH3−CH2−OH. However, even a condensed chemical formula is necessarily limited in its ability to show complex bonding relationships between atoms, especially atoms that have bonds to four or more different substituents. Since a chemical formula must be expressed as a single line of chemical element symbols, it often cannot be as informative as a true structural formula, which is a graphical representation of the spatial relationship between atoms in chemical compounds (see for example the figure for butane structural and chemical formulae, at right). For reasons of structural complexity, a single condensed chemical formula (or semi-structural formula) may correspond to different molecules, known as isomers. For example, glucose shares its molecular formula C6H12O6 with a number of other sugars, including fructose, galactose and mannose. Linear equivalent chemical names exist that can and do specify uniquely any complex structural formula (see chemical nomenclature), but such names must use many terms (words), rather than the simple element symbols, numbers, and simple typographical symbols that define a chemical formula. Chemical formulae may be used in chemical equations to describe chemical reactions and other chemical transformations, such as the dissolving of ionic compounds into solution. While, as noted, chemical formulae do not have the full power of structural formulae to show chemical relationships between atoms, they are sufficient to keep track of numbers of atoms and numbers of electrical charges in chemical reactions, thus balancing chemical equations so that these equations can be used in chemical problems involving conservation of atoms, and conservation of electric charge. 
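The glucose example above (molecular formula C6H12O6, empirical formula CH2O) amounts to dividing all atom counts by their greatest common divisor. The following is a minimal illustrative sketch of that reduction, not part of the original article; the function name and the dictionary representation of a formula are assumptions.

```python
# Illustrative sketch (not from the article): reducing a molecular formula,
# given as an element -> atom-count mapping, to its empirical formula by
# dividing every count by the greatest common divisor of all counts.
from functools import reduce
from math import gcd

def empirical_formula(counts):
    """E.g. {'C': 6, 'H': 12, 'O': 6} (glucose) -> 'CH2O'."""
    divisor = reduce(gcd, counts.values())
    parts = []
    for element, n in counts.items():
        n //= divisor
        parts.append(element if n == 1 else f"{element}{n}")
    return "".join(parts)

if __name__ == "__main__":
    print(empirical_formula({"C": 6, "H": 12, "O": 6}))  # CH2O  (glucose)
    print(empirical_formula({"H": 2, "O": 2}))           # HO    (hydrogen peroxide)
    print(empirical_formula({"H": 2, "O": 1}))           # H2O   (water: already reduced)
```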
Overview A chemical formula identifies each constituent element by its chemical symbol and indicates the proportionate number of atoms of each element. In empirical formulae, these proportions begin with a key element and then assign numbers of atoms of the other elements in the compound, by ratios to the key element. For molecular compounds, these ratio numbers can all be expressed as whole numbers. For example, the empirical formula of ethanol may be written C2H6O because the molecules of ethanol all contain two carbon atoms, six hydrogen atoms, and one oxygen atom. Some types of ionic compounds, however, cannot be written with entirely whole-number empirical formulae. An example is boron carbide, whose formula of CBn is a variable non-whole number ratio with n ranging from over 4 to more than 6.5. When the chemical compound of the formula consists of simple molecules, chemical formulae often employ ways to suggest the structure of the molecule. These types of formulae are variously known as molecular formulae and condensed formulae. A molecular formula enumerates the number of atoms to reflect those in the molecule, so that the molecular formula for glucose is C6H12O6 rather than the glucose empirical formula, which is CH2O. However, except for very simple substances, molecular chemical formulae lack needed structural information, and are ambiguous. For simple molecules, a condensed (or semi-structural) formula is a type of chemical formula that may fully imply a correct structural formula. For example, ethanol may be represented by the condensed chemical formula CH3CH2OH, and dimethyl ether by the condensed formula CH3OCH3. These two molecules have the same empirical and molecular formulae (C2H6O), but may be differentiated by the condensed formulae shown, which are sufficient to represent the full structure of these simple organic compounds. Condensed chemical formulae may also be used to represent ionic compounds that do not exist as discrete molecules, but nonetheless do contain covalently bound clusters within them. These polyatomic ions are groups of atoms that are covalently bound together and have an overall ionic charge, such as the sulfate ion. Each polyatomic ion in a compound is written individually in order to illustrate the separate groupings. For example, the compound dichlorine hexoxide has an empirical formula ClO3, and molecular formula Cl2O6, but in liquid or solid forms, this compound is more correctly shown by an ionic condensed formula [ClO2]+[ClO4]−, which illustrates that this compound consists of ClO2+ ions and ClO4− ions. In such cases, the condensed formula only need be complex enough to show at least one of each ionic species. Chemical formulae as described here are distinct from the far more complex chemical systematic names that are used in various systems of chemical nomenclature. For example, one systematic name for glucose is (2R,3S,4R,5R)-2,3,4,5,6-pentahydroxyhexanal. This name, interpreted by the rules behind it, fully specifies glucose's structural formula, but the name is not a chemical formula as usually understood, and uses terms and words not used in chemical formulae. Such names, unlike basic formulae, may be able to represent full structural formulae without graphs. Types Empirical formula In chemistry, the empirical formula of a chemical is a simple expression of the relative number of each type of atom or ratio of the elements in the compound. Empirical formulae are the standard for ionic compounds, such as NaCl, and for macromolecules, such as SiO2. 
An empirical formula makes no reference to isomerism, structure, or absolute number of atoms. The term empirical refers to the process of elemental analysis, a technique of analytical chemistry used to determine the relative percent composition of a pure chemical substance by element. For example, hexane has a molecular formula of C6H14, and (for one of its isomers, n-hexane) a structural formula CH3CH2CH2CH2CH2CH3, implying that it has a chain structure of 6 carbon atoms, and 14 hydrogen atoms. However, the empirical formula for hexane is C3H7. Likewise the empirical formula for hydrogen peroxide, H2O2, is simply HO, expressing the 1:1 ratio of component elements. Formaldehyde and acetic acid have the same empirical formula, CH2O. This is also the molecular formula for formaldehyde, but acetic acid has double the number of atoms. Like the other formula types detailed below, an empirical formula shows the number of elements in a molecule, and determines whether it is a binary compound, ternary compound, quaternary compound, or has even more elements. Molecular formula Molecular formulae simply indicate the numbers of each type of atom in a molecule of a molecular substance. They are the same as empirical formulae for molecules that only have one atom of a particular type, but otherwise may have larger numbers. An example of the difference is the empirical formula for glucose, which is CH2O (ratio 1:2:1), while its molecular formula is C6H12O6 (number of atoms 6:12:6). For water, both formulae are H2O. A molecular formula provides more information about a molecule than its empirical formula, but is more difficult to establish. Structural formula In addition to indicating the number of atoms of each element in a molecule, a structural formula indicates how the atoms are organized, and shows (or implies) the chemical bonds between the atoms. There are multiple types of structural formulas focused on different aspects of the molecular structure. The two diagrams show two molecules which are structural isomers of each other, since they both have the same molecular formula, but they have different structural formulas as shown. Condensed formula The connectivity of a molecule often has a strong influence on its physical and chemical properties and behavior. Two molecules composed of the same numbers of the same types of atoms (i.e. a pair of isomers) might have completely different chemical and/or physical properties if the atoms are connected differently or in different positions. In such cases, a structural formula is useful, as it illustrates which atoms are bonded to which other ones. From the connectivity, it is often possible to deduce the approximate shape of the molecule. A condensed (or semi-structural) formula may represent the types and spatial arrangement of bonds in a simple chemical substance, though it does not necessarily specify isomers or complex structures. For example, ethane consists of two carbon atoms single-bonded to each other, with each carbon atom having three hydrogen atoms bonded to it. Its chemical formula can be rendered as CH3CH3. In ethylene there is a double bond between the carbon atoms (and thus each carbon only has two hydrogens), therefore the chemical formula may be written: CH2CH2, and the fact that there is a double bond between the carbons is implicit because carbon has a valence of four. However, a more explicit method is to write H2C=CH2 or less commonly H2C::CH2. The two lines (or two pairs of dots) indicate that a double bond connects the atoms on either side of them. 
A triple bond may be expressed with three lines (HC≡CH) or three pairs of dots (HC:::CH), and if there may be ambiguity, a single line or pair of dots may be used to indicate a single bond. Molecules with multiple functional groups that are the same may be expressed by enclosing the repeated group in round brackets. For example, isobutane may be written (CH3)3CH. This condensed structural formula implies a different connectivity from other molecules that can be formed using the same atoms in the same proportions (isomers). The formula (CH3)3CH implies a central carbon atom connected to one hydrogen atom and three methyl groups (CH3). The same number of atoms of each element (10 hydrogens and 4 carbons, or C4H10) may be used to make a straight chain molecule, n-butane: CH3CH2CH2CH3. Chemical names in answer to limitations of chemical formulae The alkene called but-2-ene has two isomers, which the chemical formula CH3CH=CHCH3 does not identify. The relative position of the two methyl groups must be indicated by additional notation denoting whether the methyl groups are on the same side of the double bond (cis or Z) or on the opposite sides from each other (trans or E). As noted above, in order to represent the full structural formulae of many complex organic and inorganic compounds, chemical nomenclature may be needed which goes well beyond the available resources used above in simple condensed formulae. See IUPAC nomenclature of organic chemistry and IUPAC nomenclature of inorganic chemistry 2005 for examples. In addition, linear naming systems such as International Chemical Identifier (InChI) allow a computer to construct a structural formula, and simplified molecular-input line-entry system (SMILES) allows a more human-readable ASCII input. However, all these nomenclature systems go beyond the standards of chemical formulae, and technically are chemical naming systems, not formula systems. Polymers in condensed formulae For polymers in condensed chemical formulae, parentheses are placed around the repeating unit. For example, a hydrocarbon molecule that is described as CH3(CH2)50CH3, is a molecule with fifty repeating units. If the number of repeating units is unknown or variable, the letter n may be used to indicate this formula: CH3(CH2)nCH3. Ions in condensed formulae For ions, the charge on a particular atom may be denoted with a right-hand superscript. For example, Na+, or Cu2+. The total charge on a charged molecule or a polyatomic ion may also be shown in this way, such as for hydronium, H3O+, or sulfate, SO42−. Here + and − are used in place of +1 and −1, respectively. For more complex ions, brackets [ ] are often used to enclose the ionic formula, as in [B12H12]2−, which is found in compounds such as caesium dodecaborate, Cs2[B12H12]. Parentheses ( ) can be nested inside brackets to indicate a repeating unit, as in Hexamminecobalt(III) chloride, [Co(NH3)6]Cl3. Here, (NH3)6 indicates that the ion contains six ammine groups (NH3) bonded to cobalt, and [ ] encloses the entire formula of the ion with charge +3. This is strictly optional; a chemical formula is valid with or without ionization information, and Hexamminecobalt(III) chloride may be written as [Co(NH3)6]Cl3 or Co(NH3)6Cl3. Brackets, like parentheses, behave in chemistry as they do in mathematics, grouping terms together; they are not specifically employed only for ionization states. In the latter case here, the parentheses indicate 6 groups all of the same shape, bonded to another group of size 1 (the cobalt atom), and then the entire bundle, as a group, is bonded to 3 chlorine atoms. In the former case, it is clearer that the bond connecting the chlorines is ionic, rather than covalent. 
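The grouping rules just described, with parentheses and brackets multiplying out a repeated unit, can be made concrete with a small parser that returns total atom counts. This is an illustrative sketch, not part of the original article; ionic charges, isotope prefixes and hydrates are deliberately ignored, and the tokenization approach is an implementation assumption.

```python
# Illustrative sketch (not from the article): counting atoms in a condensed
# formula, treating ( ) and [ ] as grouping with an optional repeat count,
# e.g. "Co(NH3)6Cl3" -> {'Co': 1, 'N': 6, 'H': 18, 'Cl': 3}.
# Ionic charges, isotope prefixes and hydrates are intentionally not handled.
import re
from collections import Counter

def count_atoms(formula):
    stack = [Counter()]           # one Counter per open group
    i = 0
    while i < len(formula):
        ch = formula[i]
        if ch in "([":
            stack.append(Counter())
            i += 1
        elif ch in ")]":
            group = stack.pop()
            i += 1
            digits = re.match(r"\d+", formula[i:])
            multiplier = int(digits.group()) if digits else 1
            i += digits.end() if digits else 0
            for element, n in group.items():
                stack[-1][element] += n * multiplier
        else:
            m = re.match(r"([A-Z][a-z]?)(\d*)", formula[i:])
            if not m:
                raise ValueError(f"unexpected character {ch!r} in {formula!r}")
            element, count = m.groups()
            stack[-1][element] += int(count) if count else 1
            i += m.end()
    return dict(stack[0])

if __name__ == "__main__":
    print(count_atoms("Co(NH3)6Cl3"))   # {'Co': 1, 'N': 6, 'H': 18, 'Cl': 3}
    print(count_atoms("Cs2[B12H12]"))   # {'Cs': 2, 'B': 12, 'H': 12}
```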
Isotopes Although isotopes are more relevant to nuclear chemistry or stable isotope chemistry than to conventional chemistry, different isotopes may be indicated with a prefixed superscript in a chemical formula. For example, the phosphate ion containing radioactive phosphorus-32 is 32PO43−. Also a study involving stable isotope ratios might include the molecule 18O16O. A left-hand subscript is sometimes used redundantly to indicate the atomic number. For example, 8O2 for dioxygen, and 168O2 for the most abundant isotopic species of dioxygen. This is convenient when writing equations for nuclear reactions, in order to show the balance of charge more clearly. Trapped atoms The @ symbol (at sign) indicates an atom or molecule trapped inside a cage but not chemically bound to it. For example, a buckminsterfullerene (C60) with an atom (M) would simply be represented as MC60 regardless of whether M was inside the fullerene without chemical bonding or outside, bound to one of the carbon atoms. Using the @ symbol, this would be denoted M@C60 if M was inside the carbon network. A non-fullerene example is [As@Ni12As20]3−, an ion in which one arsenic (As) atom is trapped in a cage formed by the other 32 atoms. This notation was proposed in 1991 with the discovery of fullerene cages (endohedral fullerenes), which can trap atoms such as La to form, for example, La@C60 or La@C82. The choice of the symbol has been explained by the authors as being concise, readily printed and transmitted electronically (the at sign is included in ASCII, which most modern character encoding schemes are based on), and the visual aspects suggesting the structure of an endohedral fullerene. Non-stoichiometric chemical formulae Chemical formulae most often use integers for each element. However, there is a class of compounds, called non-stoichiometric compounds, that cannot be represented by small integers. Such a formula might be written using decimal fractions, as in Fe0.95O, or it might include a variable part represented by a letter, as in Fe1−xO, where x is normally much less than 1. General forms for organic compounds A chemical formula used for a series of compounds that differ from each other by a constant unit is called a general formula. It generates a homologous series of chemical formulae. For example, alcohols may be represented by the formula CnH2n+1OH (n ≥ 1), giving the homologs methanol, ethanol, propanol for 1 ≤ n ≤ 3. Hill system The Hill system (or Hill notation) is a system of writing empirical chemical formulae, molecular chemical formulae and components of a condensed formula such that the number of carbon atoms in a molecule is indicated first, the number of hydrogen atoms next, and then the number of all other chemical elements subsequently, in alphabetical order of the chemical symbols. When the formula contains no carbon, all the elements, including hydrogen, are listed alphabetically. By sorting formulae according to the number of atoms of each element present in the formula according to these rules, with differences in earlier elements or numbers being treated as more significant than differences in any later element or number—like sorting text strings into lexicographical order—it is possible to collate chemical formulae into what is known as Hill system order. The Hill system was first published by Edwin A. Hill of the United States Patent and Trademark Office in 1900. It is the most commonly used system in chemical databases and printed indexes to sort lists of compounds. 
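The ordering rules of the Hill system can be expressed directly in code: carbon first, hydrogen second, then all remaining element symbols alphabetically, falling back to a purely alphabetical order when no carbon is present. The sketch below is illustrative and not part of the original article; it only builds a single formula string and does not attempt the full collation of a formula index.

```python
# Illustrative sketch (not from the article): writing an element -> count
# mapping as a Hill-notation formula string (C first, then H, then the other
# elements alphabetically; purely alphabetical when no carbon is present).

def hill_formula(counts):
    """E.g. {'C': 2, 'H': 5, 'Br': 1} -> 'C2H5Br'; {'H': 2, 'S': 1, 'O': 4} -> 'H2O4S'."""
    symbols = set(counts)
    if "C" in symbols:
        order = ["C"] + (["H"] if "H" in symbols else []) + sorted(symbols - {"C", "H"})
    else:
        order = sorted(symbols)
    return "".join(s if counts[s] == 1 else f"{s}{counts[s]}" for s in order)

if __name__ == "__main__":
    print(hill_formula({"C": 2, "H": 5, "Br": 1}))   # C2H5Br
    print(hill_formula({"H": 2, "S": 1, "O": 4}))    # H2O4S (no carbon: alphabetical)
    print(hill_formula({"C": 1, "Cl": 4}))           # CCl4
```

Sorting a list of such formula strings by the same (element, count) ordering then yields the Hill system order described in the following paragraph.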
A list of formulae in Hill system order is arranged alphabetically, as above, with single-letter elements coming before two-letter symbols when the symbols begin with the same letter (so "B" comes before "Be", which comes before "Br"). The following example formulae are written using the Hill system, and listed in Hill order: BrClH2Si BrI CCl4 CH3I C2H5Br H2O4S See also Formula unit Glossary of chemical formulae Nuclear notation Periodic table Skeletal formula Simplified molecular-input line-entry system Notes References External links Hill notation example, from the University of Massachusetts Lowell libraries, including how to sort into Hill system order Molecular formula calculation applying Hill notation. The library calculating Hill notation is available on npm. Chemical nomenclature Notation
Chemical formula
[ "Chemistry", "Mathematics" ]
3,658
[ "Symbols", "Chemical structures", "nan", "Notation", "Chemical formulas" ]
7,045
https://en.wikipedia.org/wiki/Concorde
Concorde () is a retired Anglo-French supersonic airliner jointly developed and manufactured by Sud Aviation (later Aérospatiale) and the British Aircraft Corporation (BAC). Studies started in 1954, and France and the United Kingdom signed a treaty establishing the development project on 29 November 1962, as the programme cost was estimated at £70 million (£ in ). Construction of the six prototypes began in February 1965, and the first flight took off from Toulouse on 2 March 1969. The market was predicted for 350 aircraft, and the manufacturers received up to 100 option orders from many major airlines. On 9 October 1975, it received its French Certificate of Airworthiness, and from the UK CAA on 5 December. Concorde is a tailless aircraft design with a narrow fuselage permitting 4-abreast seating for 92 to 128 passengers, an ogival delta wing and a droop nose for landing visibility. It is powered by four Rolls-Royce/Snecma Olympus 593 turbojets with variable engine intake ramps, and reheat for take-off and acceleration to supersonic speed. Constructed out of aluminium, it was the first airliner to have analogue fly-by-wire flight controls. The airliner had transatlantic range while supercruising at twice the speed of sound for 75% of the distance. Delays and cost overruns increased the programme cost to £1.5–2.1 billion in 1976, (£– in ). Concorde entered service on 21 January 1976 with Air France from Paris-Roissy and British Airways from London Heathrow. Transatlantic flights were the main market, to Washington Dulles from 24 May, and to New York JFK from 17 October 1977. Air France and British Airways remained the sole customers with seven airframes each, for a total production of twenty. Supersonic flight more than halved travel times, but sonic booms over the ground limited it to transoceanic flights only. Its only competitor was the Tupolev Tu-144, carrying passengers from November 1977 until a May 1978 crash, while a potential competitor, the Boeing 2707, was cancelled in 1971 before any prototypes were built. On 25 July 2000, Air France Flight 4590 crashed shortly after take-off with all 109 occupants and four on the ground killed. This was the only fatal incident involving Concorde; commercial service was suspended until November 2001. The surviving aircraft were retired in 2003, 27 years after commercial operations had begun. All but 2 of the 20 aircraft built have been preserved and are on display across Europe and North America. Development Early studies In the early 1950s, Arnold Hall, director of the Royal Aircraft Establishment (RAE), asked Morien Morgan to form a committee to study supersonic transport. The group met in February 1954 and delivered their first report in April 1955. Robert T. Jones' work at NACA had demonstrated that the drag at supersonic speeds was strongly related to the span of the wing. This led to the use of short-span, thin trapezoidal wings such as those seen on the control surfaces of many missiles, or aircraft such as the Lockheed F-104 Starfighter interceptor or the planned Avro 730 strategic bomber that the team studied. The team outlined a baseline configuration that resembled an enlarged Avro 730. This short wingspan produced little lift at low speed, resulting in long take-off runs and high landing speeds. In an SST design, this would have required enormous engine power to lift off from existing runways and, to provide the fuel needed, "some horribly large aeroplanes" resulted. 
Based on this, the group considered the concept of an SST infeasible, and instead suggested continued low-level studies into supersonic aerodynamics. Slender deltas Soon after, Johanna Weber and Dietrich Küchemann at the RAE published a series of reports on a new wing planform, known in the UK as the "slender delta". The team, including Eric Maskell whose report "Flow Separation in Three Dimensions" contributed to an understanding of separated flow, worked with the fact that delta wings can produce strong vortices on their upper surfaces at high angles of attack. The vortex will lower the air pressure and cause lift. This had been noticed by Chuck Yeager in the Convair XF-92, but its qualities had not been fully appreciated. Weber suggested that the effect could be used to improve low speed performance. Küchemann's and Weber's papers changed the entire nature of supersonic design. The delta had already been used on aircraft, but these designs used planforms that were not much different from a swept wing of the same span. Weber noted that the lift from the vortex was increased by the length of the wing it had to operate over, which suggested that the effect would be maximised by extending the wing along the fuselage as far as possible. Such a layout would still have good supersonic performance, but also have reasonable take-off and landing speeds using vortex generation. The aircraft would have to take off and land very "nose high" to generate the required vortex lift, which led to questions about the low speed handling qualities of such a design. Küchemann presented the idea at a meeting where Morgan was also present. Test pilot Eric Brown recalls Morgan's reaction to the presentation, saying that he immediately seized on it as the solution to the SST problem. Brown considers this moment as being the birth of the Concorde project. Supersonic Transport Aircraft Committee On 1 October 1956 the Ministry of Supply asked Morgan to form a new study group, the Supersonic Transport Aircraft Committee (STAC) (sometimes referred to as the Supersonic Transport Advisory Committee), to develop a practical SST design and find industry partners to build it. At the first meeting, on 5 November 1956, the decision was made to fund the development of a test-bed aircraft to examine the low-speed performance of the slender delta, a contract that eventually produced the Handley Page HP.115. This aircraft demonstrated safe control at speeds as low as , about one third that of the F-104 Starfighter. STAC stated that an SST would have economic performance similar to existing subsonic types. Lift is not generated the same way at supersonic and subsonic speeds, with the lift-to-drag ratio for supersonic designs being about half that of subsonic designs. The aircraft would need more thrust than a subsonic design of the same size. But although they would use more fuel in cruise, they would be able to fly more revenue-earning flights in a given time, so fewer aircraft would be needed to service a particular route. This would remain economically advantageous as long as fuel represented a small percentage of operational costs. STAC suggested that two designs naturally fell out of their work, a transatlantic model flying at about Mach 2, and a shorter-range version flying at Mach 1.2. Morgan suggested that a 150-passenger transatlantic SST would cost about £75 to £90 million to develop, and be in service in 1970. 
The smaller 100-passenger short-range version would cost perhaps £50 to £80 million, and be ready for service in 1968. To meet this schedule, development would need to begin in 1960, with production contracts let in 1962. Morgan suggested that the US was already involved in a similar project, and that if the UK failed to respond it would be locked out of an airliner market that he believed would be dominated by SST aircraft. In 1959, a study contract was awarded to Hawker Siddeley and Bristol for preliminary designs based on the slender delta, which developed as the HSA.1000 and Bristol 198. Armstrong Whitworth also responded with an internal design, the M-Wing, for the lower-speed shorter-range category. Both the STAC group and the government were looking for partners to develop the designs. In September 1959, Hawker approached Lockheed, and after the creation of British Aircraft Corporation in 1960, the former Bristol team immediately started talks with Boeing, General Dynamics, Douglas Aircraft, and Sud Aviation. Ogee planform selected Küchemann and others at the RAE continued their work on the slender delta throughout this period, considering three basic shapes; the classic straight-edge delta, the "gothic delta" that was rounded outward to appear like a gothic arch, and the "ogival wing" that was compound-rounded into the shape of an ogee. Each of these planforms had advantages and disadvantages. As they worked with these shapes, a practical concern grew to become so important that it forced selection of one of these designs. Generally the wing's centre of pressure (CP, or "lift point") should be close to the aircraft's centre of gravity (CG, or "balance point") to reduce the amount of control force required to pitch the aircraft. As the aircraft layout changes during the design phase, it is common for the CG to move fore or aft. With a normal wing design this can be addressed by moving the wing slightly fore or aft to account for this. With a delta wing running most of the length of the fuselage, this was no longer easy; moving the wing would leave it in front of the nose or behind the tail. Studying the various layouts in terms of CG changes, both during design and changes due to fuel use during flight, the ogee planform immediately came to the fore. To test the new wing, NASA assisted the team by modifying a Douglas F5D Skylancer to mimic the wing selection. In 1965 the NASA test aircraft successfully tested the wing, and found that it reduced landing speeds noticeably over the standard delta wing. NASA also ran simulations at Ames that showed the aircraft would exhibit a sudden change in pitch when entering ground effect. Ames test pilots later participated in a joint cooperative test with the French and British test pilots and found that the simulations had been correct, and this information was added to pilot training. Partnership with Sud Aviation France had its own SST plans. In the late 1950s, the government requested designs from the government-owned Sud Aviation and Nord Aviation, as well as Dassault. All three returned designs based on Küchemann and Weber's slender delta; Nord suggested a ramjet powered design flying at Mach 3, and the other two were jet-powered Mach 2 designs that were similar to each other. Of the three, the Sud Aviation Super-Caravelle won the design contest with a medium-range design deliberately sized to avoid competition with transatlantic US designs they assumed were already on the drawing board. 
As soon as the design was complete, in April 1960, Pierre Satre, the company's technical director, was sent to Bristol to discuss a partnership. Bristol was surprised to find that the Sud team had designed a similar aircraft after considering the SST problem and coming to the same conclusions as the Bristol and STAC teams in terms of economics. It was later revealed that the original STAC report, marked "For UK Eyes Only", had secretly been passed to France to win political favour. Sud made minor changes to the paper and presented it as their own work. France had no modern large jet engines and had already decided to buy a British design (as they had on the earlier subsonic Caravelle). As neither company had experience in the use of heat-resistant metals for airframes, a maximum speed of around Mach 2 was selected so aluminium could be used – above this speed, the friction with the air heats the metal so much that it begins to soften. This lower speed would also speed development and allow their design to fly before the Americans. Everyone involved agreed that Küchemann's ogee-shaped wing was the right one. The British team was still focused on a 150-passenger design serving transatlantic routes, while France was deliberately avoiding these. Common components could be used in both designs, with the shorter range version using a clipped fuselage and four engines, and the longer one a stretched fuselage and six engines, leaving only the wing to be extensively re-designed. The teams continued to meet in 1961, and by this time it was clear that the two aircraft would be very similar in spite of different ranges and seating arrangements. A single design emerged that differed mainly in fuel load. More powerful Bristol Siddeley Olympus engines, being developed for the TSR-2, allowed either design to be powered by only four engines. Cabinet response, treaty While the development teams met, the French Minister of Public Works and Transport Robert Buron was meeting with the UK Minister of Aviation Peter Thorneycroft, and Thorneycroft told the cabinet that France was much more serious about a partnership than any of the US companies. The various US companies had proved uninterested, likely due to the belief that the government would be funding development and would frown on any partnership with a European company, and the risk of "giving away" US technological leadership to a European partner. When the STAC plans were presented to the UK cabinet, the economic considerations were considered highly questionable, especially as these were based on development costs, now estimated to be , which were repeatedly overrun in the industry. The Treasury Ministry presented a negative view, suggesting that there was no way the project would have any positive financial returns for the government, especially in light that "the industry's past record of over-optimistic estimating (including the recent history of the TSR.2) suggests that it would be prudent to consider" the cost "to turn out much too low." This led to an independent review of the project by the Committee on Civil Scientific Research and Development, which met on the topic between July and September 1962. The committee rejected the economic arguments, including considerations of supporting the industry made by Thorneycroft. 
Their report in October stated that it was unlikely there would be any direct positive economic outcome, but that the project should still be considered because everyone else was going supersonic, and they were concerned they would be locked out of future markets. It appeared the project would not be likely to significantly affect other, more important, research efforts. At the time, the UK was pressing for admission to the European Economic Community, and this became the main rationale for moving ahead with the aircraft. The development project was negotiated as an international treaty between the two countries rather than a commercial agreement between companies and included a clause, originally asked for by the UK government, imposing heavy penalties for cancellation. This treaty was signed on 29 November 1962. Charles de Gaulle vetoed the UK's entry into the European Community in a speech on 25 January 1963. Naming At Charles de Gaulle's January 1963 press conference the aircraft was first called 'Concorde'. The name was suggested by the eighteen-year-old son of F.G. Clark, the publicity manager at BAC's Filton plant. Reflecting the treaty between the British and French governments that led to Concorde's construction, the name Concorde is from the French word concorde, which has an English equivalent, concord. Both words mean agreement, harmony, or union. The name was changed to Concord by Harold Macmillan in response to a perceived slight by de Gaulle. At the French roll-out in Toulouse in late 1967, the British Minister of Technology, Tony Benn, announced that he would change the spelling back to Concorde. This created a nationalist uproar that died down when Benn stated that the suffixed "e" represented "Excellence, England, Europe, and Entente (Cordiale)". In his memoirs, he recounted a letter from a Scotsman claiming, "you talk about 'E' for England, but part of it is made in Scotland." Given Scotland's contribution of providing the nose cone for the aircraft, Benn replied, "it was also 'E' for 'Écosse' (the French name for Scotland) – and I might have added 'e' for extravagance and 'e' for escalation as well!" In common usage in the United Kingdom, the type is known as "Concorde" without an article, rather than "the Concorde" or "a Concorde". Sales efforts Advertisements for Concorde during the late 1960s, placed in publications such as Aviation Week & Space Technology, predicted a market for 350 aircraft by 1980. The new consortium intended to produce one long-range and one short-range version, but prospective customers showed no interest in the short-range version, and it was later dropped. Concorde's costs spiralled during development to more than six times the original projections, arriving at a unit cost of £23 million in 1977 (equivalent to £ million in ). Its sonic boom made travelling supersonically over land impossible without causing complaints from citizens. World events also dampened Concorde sales prospects; the 1973–74 stock market crash and the 1973 oil crisis had made airlines cautious about aircraft with high fuel consumption, and new wide-body aircraft, such as the Boeing 747, had recently made subsonic aircraft significantly more efficient and presented a low-risk option for airlines. While carrying a full load, Concorde achieved 15.8 passenger miles per gallon of fuel, while the Boeing 707 reached 33.3 pm/g, the Boeing 747 46.4 pm/g, and the McDonnell Douglas DC-10 53.6 pm/g.
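To make that efficiency gap concrete, the following is a minimal back-of-the-envelope sketch in Python converting the quoted passenger-miles-per-gallon figures into fuel burned per passenger over one transatlantic sector. The 3,450-mile sector length is an assumed round figure for a London–New York flight, and the gallon type is not specified in the source, so the absolute numbers should be read as rough orders of magnitude rather than exact values.

```python
# Rough comparison of fuel burned per passenger on one transatlantic sector,
# using the passenger-miles-per-gallon figures quoted above.
SECTOR_MILES = 3_450  # assumed London-New York sector length (illustrative)

pm_per_gallon = {
    "Concorde": 15.8,
    "Boeing 707": 33.3,
    "Boeing 747": 46.4,
    "McDonnell Douglas DC-10": 53.6,
}

for aircraft, pmg in pm_per_gallon.items():
    gallons_per_passenger = SECTOR_MILES / pmg
    print(f"{aircraft:>24}: {gallons_per_passenger:6.0f} gallons per passenger")

# Concorde comes out near 220 gallons per passenger, roughly 3.4 times the
# DC-10 figure - the kind of gap oil-crisis-era airlines reacted to.
```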
A trend in favour of cheaper airline tickets also caused airlines such as Qantas to question Concorde's market suitability. During the early 2000s, Flight International described Concorde as being "one of aerospace's most ambitious but commercially flawed projects". The consortium received orders (non-binding options) for more than 100 of the long-range version from the major airlines of the day: Pan Am, BOAC, and Air France were the launch customers, with six aircraft each. Other airlines in the order book included Panair do Brasil, Continental Airlines, Japan Airlines, Lufthansa, American Airlines, United Airlines, Air India, Air Canada, Braniff, Singapore Airlines, Iran Air, Olympic Airways, Qantas, CAAC Airlines, Middle East Airlines, and TWA. At the time of the first flight, the options list contained 74 options from 16 airlines. Testing The design work was supported by a research programme studying the flight characteristics of low ratio delta wings. A supersonic Fairey Delta 2 was modified to carry the ogee planform, and, renamed as the BAC 221, was used for tests of the high-speed flight envelope; the Handley Page HP.115 also provided valuable information on low-speed performance. Construction of two prototypes began in February 1965: 001, built by Aérospatiale at Toulouse, and 002, by BAC at Filton, Bristol. 001 made its first test flight from Toulouse on 2 March 1969, piloted by André Turcat, and first went supersonic on 1 October. The first UK-built Concorde flew from Filton to RAF Fairford on 9 April 1969, piloted by Brian Trubshaw. Both prototypes were presented to the public on 7–8 June 1969 at the Paris Air Show. As the flight programme progressed, 001 embarked on a sales and demonstration tour on 4 September 1971, which was also the first transatlantic crossing of Concorde. Concorde 002 followed on 2 June 1972 with a tour of the Middle and Far East. Concorde 002 made the first visit to the United States in 1973, landing at Dallas/Fort Worth Regional Airport to mark the airport's opening. Concorde had initially held a great deal of customer interest, but the project was hit by order cancellations. The Paris Le Bourget air show crash of the competing Soviet Tupolev Tu-144 had shocked potential buyers, and public concern over the environmental issues of supersonic aircraft – the sonic boom, take-off noise and pollution – had produced a change in the public opinion of SSTs. By 1976 the remaining buyers were from four countries: Britain, France, China, and Iran. Only Air France and British Airways (the successor to BOAC) took up their orders, with the two governments taking a cut of any profits. The US government cut federal funding for the Boeing 2707, its supersonic transport programme, in 1971; Boeing did not complete its two 2707 prototypes. The US, India, and Malaysia all ruled out Concorde supersonic flights over the noise concern, although some of these restrictions were later relaxed. Professor Douglas Ross characterised restrictions placed upon Concorde operations by President Jimmy Carter's administration as having been an act of protectionism of American aircraft manufacturers. Programme cost The original programme cost estimate was £70 million in 1962, (£ in ). After cost overruns and delays the programme eventually cost between £1.5 and £2.1 billion in 1976, (£ – in ). This cost was the main reason the production run was much smaller than expected.
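As a quick sanity check on the scale of the overrun described above, here is a small sketch in Python comparing the nominal figures only. It deliberately ignores inflation between 1962 and 1976 (the inflation-adjusted values are not reproduced in this text), so the multiples overstate the real-terms growth and should be read as an upper bound.

```python
# Nominal comparison of the quoted programme-cost figures (no inflation adjustment).
original_estimate_gbp = 70e6           # 1962 estimate: £70 million
final_cost_range_gbp = (1.5e9, 2.1e9)  # 1976 outcome: £1.5-2.1 billion

low, high = (cost / original_estimate_gbp for cost in final_cost_range_gbp)
print(f"Nominal overrun factor: {low:.0f}x to {high:.0f}x the 1962 estimate")
# -> roughly 21x to 30x in nominal terms
```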
Design General features Concorde is an ogival delta winged aircraft with four Olympus engines based on those employed in the RAF's Avro Vulcan strategic bomber. It has an unusual tailless configuration for a commercial aircraft, as does the Tupolev Tu-144. Concorde was the first airliner to have a fly-by-wire flight-control system (in this case, analogue); the avionics system Concorde used was unique because it was the first commercial aircraft to employ hybrid circuits. The principal designer for the project was Pierre Satre, with Sir Archibald Russell as his deputy. Concorde pioneered the following technologies: For high speed and optimisation of flight: Double delta (ogee/ogival) shaped wings Variable engine air intake ramp system controlled by digital computers Supercruise capability For weight-saving and enhanced performance: Mach 2.02 (~) cruising speed for optimum fuel consumption (supersonic drag minimum and turbojet engines are more efficient at higher speed); fuel consumption at and at altitude of was . Mainly aluminium construction using a high-temperature alloy similar to that developed for aero-engine pistons. This material gave low weight and allowed conventional manufacture (higher speeds would have ruled out aluminium) Full-regime autopilot and autothrottle allowing "hands off" control of the aircraft from climb out to landing Fully electrically controlled analogue fly-by-wire flight controls systems High-pressure hydraulic system using for lighter hydraulic components. Air data computer (ADC) for the automated monitoring and transmission of aerodynamic measurements (total pressure, static pressure, angle of attack, side-slip). Fully electrically controlled analogue brake-by-wire system No auxiliary power unit, as Concorde would only visit large airports where ground air start carts were available. Powerplant A symposium titled "Supersonic-Transport Implications" was hosted by the Royal Aeronautical Society on 8 December 1960. Various views were put forward on the likely type of powerplant for a supersonic transport, such as podded or buried installation and turbojet or ducted-fan engines. Concorde needed to fly long distances to be economically viable; this required high efficiency from the powerplant. Turbofan engines were rejected due to their larger cross-section producing excessive drag (but would be studied for future SSTs). Olympus turbojet technology was already available for development to meet the design requirements. Rolls-Royce proposed developing the RB.169 to power Concorde during its initial design phase, but developing a wholly-new engine for a single aircraft would have been extremely costly, so the existing BSEL Olympus Mk 320 turbojet engine, which was already flying in the BAC TSR-2 supersonic strike bomber prototype, was chosen instead. Boundary layer management in the podded installation was put forward as simpler with only an inlet cone, however, Dr. Seddon of the RAE favoured a more integrated buried installation. One concern of placing two or more engines behind a single intake was that an intake failure could lead to a double or triple engine failure. While a ducted fan over the turbojet would reduce noise, its larger cross-section also incurred more drag. Acoustics specialists were confident that a turbojet's noise could be reduced and SNECMA made advances in silencer design during the programme. The Olympus Mk.622 with reduced jet velocity was proposed to reduce the noise but was not pursued. 
By 1974, the spade silencers which projected into the exhaust were reported to be ineffective but "entry-into-service aircraft are likely to meet their noise guarantees". The powerplant configuration selected for Concorde highlighted airfield noise, boundary layer management and interactions between adjacent engines and the requirement that the powerplant, at Mach 2, tolerate pushovers, sideslips, pull-ups and throttle slamming without surging. Extensive development testing with design changes and changes to intake and engine control laws addressed most of the issues except airfield noise and the interaction between adjacent powerplants at speeds above Mach 1.6 which meant Concorde "had to be certified aerodynamically as a twin-engined aircraft above Mach 1.6". Situated behind the wing leading edge, the engine intake had a wing boundary layer ahead of it. Two-thirds were diverted and the remaining third which entered the intake did not adversely affect the intake efficiency except during pushovers when the boundary layer thickened and caused surging. Wind tunnel testing helped define leading-edge modifications ahead of the intakes which solved the problem. Each engine had its own intake and the nacelles were paired with a splitter plate between them to minimise the chance of one powerplant influencing the other. Only above was an engine surge likely to affect the adjacent engine. The air intake design for Concorde's engines was especially critical. The intakes had to slow down supersonic inlet air to subsonic speeds with high-pressure recovery to ensure efficient operation at cruising speed while providing low distortion levels (to prevent engine surge) and maintaining high efficiency for all likely ambient temperatures in cruise. They had to provide adequate subsonic performance for diversion cruise and low engine-face distortion at take-off. They also had to provide an alternative path for excess intake of air during engine throttling or shutdowns. The variable intake features required to meet all these requirements consisted of front and rear ramps, a dump door, an auxiliary inlet and a ramp bleed to the exhaust nozzle. As well as supplying air to the engine, the intake also supplied air through the ramp bleed to the propelling nozzle. The nozzle ejector (or aerodynamic) design, with variable exit area and secondary flow from the intake, contributed to good expansion efficiency from take-off to cruise. Concorde's Air Intake Control Units (AICUs) made use of a digital processor for intake control. It was the first use of a digital processor with full authority control of an essential system in a passenger aircraft. It was developed by BAC's Electronics and Space Systems division after the analogue AICUs (developed by Ultra Electronics) fitted to the prototype aircraft were found to lack sufficient accuracy. Ultra Electronics also developed Concorde's thrust-by-wire engine control system. Engine failure causes problems on conventional subsonic aircraft; not only does the aircraft lose thrust on that side but the engine creates drag, causing the aircraft to yaw and bank in the direction of the failed engine. If this had happened to Concorde at supersonic speeds, it theoretically could have caused a catastrophic failure of the airframe. Although computer simulations predicted considerable problems, in practice Concorde could shut down both engines on the same side of the aircraft at Mach 2 without difficulties. During an engine failure the required air intake is virtually zero. 
So, on Concorde, engine failure was countered by the opening of the auxiliary spill door and the full extension of the ramps, which deflected the air downwards past the engine, gaining lift and minimising drag. Concorde pilots were routinely trained to handle double-engine failure. Concorde used reheat (afterburners) only at take-off and to pass through the transonic speed range, between Mach 0.95 and 1.7. Heating problems Kinetic heating from the high speed boundary layer caused the skin to heat up during supersonic flight. Every surface, such as windows and panels, was warm to the touch by the end of the flight. Apart from the engine bay, the hottest part of any supersonic aircraft's structure is the nose, due to aerodynamic heating. Hiduminium R.R. 58, an aluminium alloy, was used throughout the aircraft because it was relatively cheap and easy to work with. The highest temperature it could sustain over the life of the aircraft was , which limited the top speed to Mach 2.02. Concorde went through two cycles of cooling and heating during a flight, first cooling down as it gained altitude at subsonic speed, then heating up accelerating to cruise speed, finally cooling again when descending and slowing down before heating again in low altitude air before landing. This had to be factored into the metallurgical and fatigue modelling. A test rig was built that repeatedly heated up a full-size section of the wing, and then cooled it, and periodically samples of metal were taken for testing. The airframe was designed for a life of 45,000 flying hours. As the fuselage heated up it expanded by as much as . The most obvious manifestation of this was a gap that opened up on the flight deck between the flight engineer's console and the bulkhead. On some aircraft that conducted a retiring supersonic flight, the flight engineers placed their caps in this expanded gap, wedging the cap when the airframe shrank again. To keep the cabin cool, Concorde used the fuel as a heat sink for the heat from the air conditioning. The same method also cooled the hydraulics. During supersonic flight a visor was used to keep high temperature air from flowing over the cockpit skin. Concorde had livery restrictions; the majority of the surface had to be covered with a highly reflective white paint to avoid overheating the aluminium structure due to heating effects. The white finish reduced the skin temperature by . In 1996, Air France briefly painted F-BTSD in a predominantly blue livery, with the exception of the wings, in a promotional deal with Pepsi. In this paint scheme, Air France was advised to remain at for no more than 20 minutes at a time, but there was no restriction at speeds under Mach 1.7. F-BTSD was used because it was not scheduled for any long flights that required extended Mach 2 operations. Structural issues Due to its high speeds, large forces were applied to the aircraft during turns, causing distortion of the aircraft's structure. There were concerns over maintaining precise control at supersonic speeds. Both of these issues were resolved by ratio changes between the inboard and outboard elevon deflections, varying at differing speeds including supersonic. Only the innermost elevons, attached to the stiffest area of the wings, were used at higher speeds. The narrow fuselage flexed, which was apparent to rear passengers looking along the length of the cabin. When any aircraft passes the critical mach of its airframe, the centre of pressure shifts rearwards. 
This causes a pitch-down moment on the aircraft if the centre of gravity remains where it was. The wings were designed to reduce this, but there was still a shift of about . This could have been countered by the use of trim controls, but at such high speeds, this would have increased drag, which would have been unacceptable. Instead, the distribution of fuel along the aircraft was shifted during acceleration and deceleration to move the centre of gravity, effectively acting as an auxiliary trim control. Range To fly non-stop across the Atlantic Ocean, Concorde required the greatest supersonic range of any aircraft. This was achieved by a combination of powerplants which were efficient at twice the speed of sound, a slender fuselage with high fineness ratio, and a complex wing shape for a high lift-to-drag ratio. Only a modest payload could be carried and the aircraft was trimmed without using deflected control surfaces, to avoid the drag that this would incur. Nevertheless, soon after Concorde began flying, a Concorde "B" model was designed with slightly larger fuel capacity and slightly larger wings with leading edge slats to improve aerodynamic performance at all speeds, with the objective of expanding the range to reach markets in new regions. It would have higher-thrust engines with noise-reducing features and no environmentally-objectionable afterburner. Preliminary design studies showed that an engine with a 25% gain in efficiency over the Rolls-Royce/Snecma Olympus 593 could be produced. This would have given additional range and a greater payload, making new commercial routes possible. This was cancelled due in part to poor sales of Concorde, but also to the rising cost of aviation fuel in the 1970s. Radiation concerns Concorde's high cruising altitude meant people on board received almost twice the flux of extraterrestrial ionising radiation as those travelling on a conventional long-haul flight. Upon Concorde's introduction, it was speculated that this exposure during supersonic travels would increase the likelihood of skin cancer. Due to the proportionally reduced flight time, the overall equivalent dose would normally be less than that of a conventional flight over the same distance. Unusual solar activity might lead to an increase in incident radiation. To prevent incidents of excessive radiation exposure, the flight deck had a radiometer and an instrument to measure the rate of increase or decrease of radiation. If the radiation level became too high, Concorde would descend below . Cabin pressurisation Airliner cabins were usually maintained at a pressure equivalent to elevation. Concorde's pressurisation was set to an altitude at the lower end of this range, . Concorde's maximum cruising altitude was ; subsonic airliners typically cruise below . A sudden reduction in cabin pressure is hazardous to all passengers and crew. Above , a sudden cabin depressurisation would leave a "time of useful consciousness" of up to 10–15 seconds for a conditioned athlete. At Concorde's altitude, the air density is very low; a breach of cabin integrity would result in a loss of pressure severe enough that the plastic emergency oxygen masks installed on other passenger jets would not be effective and passengers would soon suffer from hypoxia despite quickly donning them. Concorde was equipped with smaller windows to reduce the rate of loss in the event of a breach, a reserve air supply system to augment cabin air pressure, and a rapid descent procedure to bring the aircraft to a safe altitude.
The FAA enforces minimum emergency descent rates for aircraft and noting Concorde's higher operating altitude, concluded that the best response to pressure loss would be a rapid descent. Continuous positive airway pressure would have delivered pressurised oxygen directly to the pilots through masks. Flight characteristics While subsonic commercial jets took eight hours to fly from Paris to New York (seven hours from New York to Paris), the average supersonic flight time on the transatlantic routes was just under 3.5 hours. Concorde had a maximum cruising altitude of and an average cruise speed of , more than twice the speed of conventional aircraft. With no other civil traffic operating at its cruising altitude of about , Concorde had exclusive use of dedicated oceanic airways, or "tracks", separate from the North Atlantic Tracks, the routes used by other aircraft to cross the Atlantic. Due to the significantly less variable nature of high altitude winds compared to those at standard cruising altitudes, these dedicated SST tracks had fixed co-ordinates, unlike the standard routes at lower altitudes, whose co-ordinates are replotted twice daily based on forecast weather patterns (jetstreams). Concorde would also be cleared in a block, allowing for a slow climb from during the oceanic crossing as the fuel load gradually decreased. In regular service, Concorde employed an efficient cruise-climb flight profile following take-off. The delta-shaped wings required Concorde to adopt a higher angle of attack at low speeds than conventional aircraft, but it allowed the formation of large low-pressure vortices over the entire upper wing surface, maintaining lift. The normal landing speed was . Because of this high angle, during a landing approach Concorde was on the backside of the drag force curve, where raising the nose would increase the rate of descent; the aircraft was thus largely flown on the throttle and was fitted with an autothrottle to reduce the pilot's workload. Brakes and undercarriage Because of the way Concorde's delta-wing generated lift, the undercarriage had to be unusually strong and tall to allow for the angle of attack at low speed. At rotation, Concorde would rise to a high angle of attack, about 18 degrees. Prior to rotation, the wing generated almost no lift, unlike typical aircraft wings. Combined with the high airspeed at rotation ( indicated airspeed), this increased the stresses on the main undercarriage in a way that was initially unexpected during the development and required a major redesign. Due to the high angle needed at rotation, a small set of wheels was added aft to prevent tailstrikes. The main undercarriage units swing towards each other to be stowed but due to their great height also needed to contract in length telescopically before swinging to clear each other when stowed. The four main wheel tyres on each bogie unit are inflated to . The twin-wheel nose undercarriage retracts forwards and its tyres are inflated to a pressure of , and the wheel assembly carries a spray deflector to prevent standing water from being thrown up into the engine intakes. The tyres are rated to a maximum speed on the runway of . The high take-off speed of required Concorde to have upgraded brakes. Like most airliners, Concorde has anti-skid braking  to prevent the tyres from losing traction when the brakes are applied. The brakes, developed by Dunlop, were the first carbon-based brakes used on an airliner. 
The use of carbon over equivalent steel brakes provided a weight-saving of . Each wheel has multiple discs which are cooled by electric fans. Wheel sensors include brake overload, brake temperature, and tyre deflation. After a typical landing at Heathrow, brake temperatures were around . Landing Concorde required a minimum of runway length; the shortest runway Concorde ever landed on carrying commercial passengers was Cardiff Airport. Concorde G-AXDN (101) made its final landing at Duxford Aerodrome on 20 August 1977, which had a runway length of just at the time. This was the last aircraft to land at Duxford before the runway was shortened later that year. Droop nose Concorde's drooping nose, developed by Marshall's of Cambridge, enabled the aircraft to switch from being streamlined to reduce drag and achieve optimal aerodynamic efficiency during flight, to not obstructing the pilot's view during taxi, take-off, and landing operations. Due to the high angle of attack, the long pointed nose obstructed the view and necessitated the ability to droop. The droop nose was accompanied by a moving visor that retracted into the nose prior to being lowered. When the nose was raised to horizontal, the visor would rise in front of the cockpit windscreen for aerodynamic streamlining. A controller in the cockpit allowed the visor to be retracted and the nose to be lowered to 5° below the standard horizontal position for taxiing and take-off. Following take-off and after clearing the airport, the nose and visor were raised. Prior to landing, the visor was again retracted and the nose lowered to 12.5° below horizontal for maximal visibility. Upon landing the nose was raised to the 5° position to avoid the possibility of damage due to collision with ground vehicles, and then raised fully before engine shutdown to prevent pooling of internal condensation within the radome seeping down into the aircraft's pitot/ADC system probes. The US Federal Aviation Administration had objected to the restrictive visibility of the visor used on the first two prototype Concordes, which had been designed before a suitable high-temperature window glass had become available, and thus requiring alteration before the FAA would permit Concorde to serve US airports. This led to the redesigned visor used in the production and the four pre-production aircraft (101, 102, 201, and 202). The nose window and visor glass, needed to endure temperatures in excess of at supersonic flight, were developed by Triplex. Operational history Concorde began scheduled flights with British Airways (BA) and Air France (AF) on 21 January 1976. AF flew its last commercial flight on 30 May 2003 with BA retiring its Concorde fleet on 24 October 2003. Operators Air France British Airways Braniff International Airways operated Concordes at subsonic speed between Dulles International Airport and Dallas Fort Worth International Airport, from January 1979 until May 1980, utilizing its own flight and cabin crew, under its own insurance and operator's license. Stickers containing a US registration were placed over the French and British registrations of the aircraft during each rotation, and a placard was temporarily placed behind the cockpit to signify the operator and operator's license in command. 
Singapore Airlines had its livery placed on the left side of Concorde G-BOAD, and held a joint marketing agreement which saw Singapore insignias on the cabin fittings, as well as the airline's "Singapore Girl" stewardesses jointly sharing cabin duty with British Airways flight attendants. All flight crew, operations, and insurances remained solely under British Airways however, and at no point did Singapore Airlines operate Concorde services under its own operator's certification, nor wet-lease an aircraft. This arrangement initially only lasted for three flights, conducted between 9–13 December 1977; it later resumed on 24 January 1979, and operated until 1 November 1980. The Singapore livery was used on G-BOAD from 1977 to 1980. Accidents and incidents Air France Flight 4590 On 25 July 2000, Air France Flight 4590, registration F-BTSC, crashed in Gonesse, France, after departing from Charles de Gaulle Airport en route to John F. Kennedy International Airport in New York City, killing all 100 passengers and nine crew members on board as well as four people on the ground. It was the only fatal accident involving Concorde. This crash also damaged Concorde's reputation and caused both British Airways and Air France to temporarily ground their fleets. According to the official investigation conducted by the Bureau of Enquiry and Analysis for Civil Aviation Safety (BEA), the crash was caused by a metallic strip that had fallen from a Continental Airlines DC-10 that had taken off minutes earlier. This fragment punctured a tyre on Concorde's left main wheel bogie during take-off. The tyre exploded, and a piece of rubber hit the fuel tank, which caused a fuel leak and led to a fire. The crew shut down engine number 2 in response to a fire warning, and with engine number 1 surging and producing little power, the aircraft was unable to gain altitude or speed. The aircraft entered a rapid pitch-up then a sudden descent, rolling left and crashing tail-low into the Hôtelissimo Les Relais Bleus Hotel in Gonesse. Before the accident, Concorde had been arguably the safest operational passenger airliner in the world with zero passenger deaths, but there had been two prior non-fatal accidents and a rate of tyre damage 30 times higher than subsonic airliners from 1995 to 2000. Safety improvements made after the crash included more secure electrical controls, Kevlar lining on the fuel tanks and specially developed burst-resistant tyres. The first flight with the modifications departed from London Heathrow on 17 July 2001, piloted by BA Chief Concorde Pilot Mike Bannister. In a flight of 3 hours 20 minutes over the mid-Atlantic towards Iceland, Bannister attained Mach 2.02 and then returned to RAF Brize Norton. The test flight, intended to resemble the London–New York route, was declared a success and was watched on live TV, and by crowds on the ground at both locations. The first flight with passengers after the 2000 grounding landed shortly before the World Trade Center attacks in the United States. This was not a commercial flight: all the passengers were BA employees. Normal commercial operations resumed on 7 November 2001 by BA and AF (aircraft G-BOAE and F-BTSD), with service to New York JFK, where Mayor Rudy Giuliani greeted the passengers. Other accidents and incidents On 12 April 1989, Concorde G-BOAF, on a chartered flight from Christchurch, New Zealand, to Sydney, Australia, suffered a structural failure at supersonic speed. 
As the aircraft was climbing and accelerating through Mach 1.7, a "thud" was heard. The crew did not notice any handling problems, and they assumed the thud they heard was a minor engine surge. No further difficulty was encountered until descent through at Mach 1.3, when a vibration was felt throughout the aircraft, lasting two to three minutes. Most of the upper rudder had separated from the aircraft at this point. Aircraft handling was unaffected, and the aircraft made a safe landing at Sydney. The UK's Air Accidents Investigation Branch (AAIB) concluded that the skin of the rudder had been separating from the rudder structure over a period before the accident due to moisture seepage past the rivets in the rudder. Production staff had not followed proper procedures during an earlier modification of the rudder; the procedures were difficult to adhere to. The aircraft was repaired and returned to service. On 21 March 1992, G-BOAB while flying British Airways Flight 001 from London to New York, also suffered a structural failure at supersonic speed. While cruising at Mach 2, at approximately , the crew heard a "thump". No difficulties in handling were noticed, and no instruments gave any irregular indications. This crew also suspected there had been a minor engine surge. One hour later, during descent and while decelerating below Mach 1.4, a sudden "severe" vibration began throughout the aircraft. The vibration worsened when power was added to the No 2 engine. The crew shut down the No 2 engine and made a successful landing in New York, noting that increased rudder control was needed to keep the aircraft on its intended approach course. Again, the skin had separated from the structure of the rudder, which led to most of the upper rudder detaching in flight. The AAIB concluded that repair materials had leaked into the structure of the rudder during a recent repair, weakening the bond between the skin and the structure of the rudder, leading to it breaking up in flight. The large size of the repair had made it difficult to keep repair materials out of the structure, and prior to this accident, the severity of the effect of these repair materials on the structure and skin of the rudder was not appreciated. The 2010 trial involving Continental Airlines over the crash of Flight 4590 established that from 1976 until Flight 4590 there had been 57 tyre failures involving Concordes during takeoffs, including a near-crash at Dulles International Airport on 14 June 1979 involving Air France Flight 54 where a tyre blowout pierced the plane's fuel tank and damaged a left engine and electrical cables, with the loss of two of the craft's hydraulic systems. Aircraft on display Twenty Concorde aircraft were built: two prototypes, two pre-production aircraft, two development aircraft and 14 production aircraft for commercial service. With the exception of two of the production aircraft, all are preserved, mostly in museums. One aircraft was scrapped in 1994, and another was destroyed in the Air France Flight 4590 crash in 2000. Comparable aircraft Tu-144 Concorde was one of only two supersonic jetliner models to operate commercially; the other was the Soviet-built Tupolev Tu-144, which operated in the late 1970s. The Tu-144 was nicknamed "Concordski" by Western European journalists for its outward similarity to Concorde. Soviet espionage efforts allegedly stole Concorde blueprints to assist in the design of the Tu-144. 
As a result of a rushed development programme, the first Tu-144 prototype was substantially different from the preproduction machines, but both were cruder than Concorde. The Tu-144S had a significantly shorter range than Concorde. Jean Rech, Sud Aviation, attributed this to two things, a very heavy powerplant with an intake twice as long as that on Concorde, and low-bypass turbofan engines with too high a bypass ratio which needed afterburning for cruise. The aircraft had poor control at low speeds because of a simpler wing design. The Tu-144 required braking parachutes to land. The Tu-144 had two crashes, one at the 1973 Paris Air Show, and another during a pre-delivery test flight in May 1978. Passenger service commenced in November 1977, but after the 1978 crash the aircraft was taken out of passenger service after only 55 flights, which carried an average of 58 passengers. The Tu-144 had an inherently unsafe structural design as a consequence of an automated production method chosen to simplify and speed up manufacturing. The Tu-144 program was cancelled by the Soviet government on 1 July 1983. SST and others The main competing designs for the US government-funded supersonic transport (SST) were the swing-wing Boeing 2707 and the compound delta wing Lockheed L-2000. These were to have been larger, with seating for up to 300 people. The Boeing 2707 was selected for development. Concorde first flew in 1969, the year Boeing began building 2707 mockups after changing the design to a cropped delta wing; the cost of this and other changes helped to kill the project. The operation of US military aircraft such as the Mach 3+ North American XB-70 Valkyrie prototypes and Convair B-58 Hustler strategic nuclear bomber had shown that sonic booms were capable of reaching the ground, and the experience from the Oklahoma City sonic boom tests led to the same environmental concerns that hindered the commercial success of Concorde. The American government cancelled its SST project in 1971 having spent more than $1 billion without any aircraft being built. Impact Environmental Before Concorde's flight trials, developments in the civil aviation industry were largely accepted by governments and their respective electorates. Opposition to Concorde's noise, particularly on the east coast of the United States, forged a new political agenda on both sides of the Atlantic, with scientists and technology experts across a multitude of industries beginning to take the environmental and social impact more seriously. Although Concorde led directly to the introduction of a general noise abatement programme for aircraft flying out of John F. Kennedy Airport, many found that Concorde was quieter than expected, partly due to the pilots temporarily throttling back their engines to reduce noise during overflight of residential areas. Even before commercial flights started, it had been claimed that Concorde was quieter than many other aircraft. In 1971, BAC's technical director stated, "It is certain on present evidence and calculations that in the airport context, production Concordes will be no worse than aircraft now in service and will in fact be better than many of them." Concorde produced nitrogen oxides in its exhaust, which, despite complicated interactions with other ozone-depleting chemicals, are understood to result in degradation to the ozone layer at the stratospheric altitudes it cruised. 
It has been pointed out that other, lower-flying, airliners produce ozone during their flights in the troposphere, but vertical transit of gases between the layers is restricted. The small fleet meant overall ozone-layer degradation caused by Concorde was negligible. In 1995, David Fahey, of the National Oceanic and Atmospheric Administration in the United States, warned that a fleet of 500 supersonic aircraft with exhausts similar to Concorde might produce a 2 per cent drop in global ozone levels, much higher than previously thought. Each 1 per cent drop in ozone is estimated to increase the incidence of non-melanoma skin cancer worldwide by 2 per cent. Dr Fahey said if these particles are produced by highly oxidised sulphur in the fuel, as he believed, then removing sulphur in the fuel will reduce the ozone-destroying impact of supersonic transport. Concorde's technical leap forward boosted the public's understanding of conflicts between technology and the environment as well as awareness of the complex decision analysis processes that surround such conflicts. In France, the use of acoustic fencing alongside TGV tracks might not have been achieved without the 1970s controversy over aircraft noise. In the UK, the CPRE has issued tranquillity maps since 1990. Public perception Concorde was normally perceived as a privilege of the rich, but special circular or one-way (with return by other flight or ship) charter flights were arranged to bring a trip within the means of moderately well-off enthusiasts. As a symbol of national pride, an example from the BA fleet made occasional flypasts at selected Royal events, major air shows and other special occasions, sometimes in formation with the Red Arrows. On the final day of commercial service, public interest was so great that grandstands were erected at Heathrow Airport. Significant numbers of people attended the final landings; the event received widespread media coverage. The aircraft was usually referred to by the British as simply "Concorde". In France it was known as "le Concorde" due to "le", the definite article, used in French grammar to introduce the name of a ship or aircraft, and the capital being used to distinguish a proper name from a common noun of the same spelling. In French, the common noun concorde means "agreement, harmony, or peace". Concorde's pilots and British Airways in official publications often refer to Concorde both in the singular and plural as "she" or "her". In 2006, 37 years after its first test flight, Concorde was announced the winner of the Great British Design Quest organised by the BBC (through The Culture Show) and the Design Museum. A total of 212,000 votes were cast with Concorde beating other British design icons such as the Mini, mini skirt, Jaguar E-Type car, the Tube map, the World Wide Web, the K2 red telephone box and the Supermarine Spitfire. Special missions The heads of France and the United Kingdom flew in Concorde many times. Presidents Georges Pompidou, Valéry Giscard d'Estaing and François Mitterrand regularly used Concorde as French flagship aircraft on foreign visits. Elizabeth II and Prime Ministers Edward Heath, Jim Callaghan, Margaret Thatcher, John Major and Tony Blair took Concorde in some charter flights such as the Queen's trips to Barbados on her Silver Jubilee in 1977, in 1987 and in 2003, to the Middle East in 1984 and to the United States in 1991. Pope John Paul II flew on Concorde in May 1989. 
Concorde sometimes made special flights for demonstrations, air shows (such as the Farnborough, Paris-Le Bourget, Oshkosh AirVenture and MAKS air shows) as well as parades and celebrations (for example, of Zurich Airport's anniversary in 1998). The aircraft were also used for private charters (including by the President of Zaire Mobutu Sese Seko on multiple occasions), for advertising companies (including for the firm OKI), for Olympic torch relays (1992 Winter Olympics in Albertville) and for observing solar eclipses, including the solar eclipse of 30 June 1973 and again for the total solar eclipse on 11 August 1999. Records The fastest transatlantic airliner flight was from New York JFK to London Heathrow on 7 February 1996 by the British Airways G-BOAD in 2 hours, 52 minutes, 59 seconds from take-off to touchdown aided by a 175 mph (282 km/h) tailwind. On 13 February 1985, a Concorde charter flight flew from London Heathrow to Sydney in a time of 17 hours, 3 minutes and 45 seconds, including refuelling stops. Concorde set the FAI "Westbound Around the World" and "Eastbound Around the World" world air speed records. On 12–13 October 1992, in commemoration of the 500th anniversary of Columbus' first voyage to the New World, Concorde Spirit Tours (US) chartered Air France Concorde F-BTSD and circumnavigated the world in 32 hours 49 minutes and 3 seconds, from Lisbon, Portugal, including six refuelling stops at Santo Domingo, Acapulco, Honolulu, Guam, Bangkok, and Bahrain. The eastbound record was set by the same Air France Concorde (F-BTSD) under charter to Concorde Spirit Tours in the US on 15–16 August 1995. This promotional flight circumnavigated the world from New York/JFK International Airport in 31 hours 27 minutes 49 seconds, including six refuelling stops at Toulouse, Dubai, Bangkok, Andersen AFB in Guam, Honolulu, and Acapulco. On its way to the Museum of Flight in November 2003, G-BOAG set a New York City-to-Seattle speed record of 3 hours, 55 minutes, and 12 seconds. Due to the restrictions on supersonic overflights within the US the flight was granted permission by the Canadian authorities for the majority of the journey to be flown supersonically over sparsely-populated Canadian territory. Specifications Notable appearances in media See also Barbara Harmer, the first qualified female Concorde pilot Museo del Concorde, a former museum in Mexico dedicated to the airliner Notes References Citations Bibliography . External links Legacy British Airways Concorde page BAC Concorde at BAE Systems site Design Museum (UK) Concorde page Heritage Concorde preservation group site Articles Videos "Video: Roll-out." British Movietone/Associated Press. 14 December 1967, posted online on 21 July 2015. "This plane could cross the Atlantic in 3.5 hours. Why did it fail?." Vox Media. 19 July 2016. Air France–KLM British Airways British Aircraft Corporation aircraft Tailless delta-wing aircraft France–United Kingdom relations 1960s international airliners Quadjets Supersonic transports History of science and technology in the United Kingdom Aircraft first flown in 1969 Aircraft with retractable tricycle landing gear
Concorde
[ "Physics" ]
11,858
[ "Physical systems", "Transport", "Supersonic transports" ]
7,056
https://en.wikipedia.org/wiki/Computer%20mouse
A computer mouse (plural mice, also mouses) is a hand-held pointing device that detects two-dimensional motion relative to a surface. This motion is typically translated into the motion of the pointer (called a cursor) on a display, which allows a smooth control of the graphical user interface of a computer. The first public demonstration of a mouse controlling a computer system was done by Doug Engelbart in 1968 as part of the Mother of All Demos. Mice originally used two separate wheels to directly track movement across a surface: one in the x-dimension and one in the Y. Later, the standard design shifted to use a ball rolling on a surface to detect motion, in turn connected to internal rollers. Most modern mice use optical movement detection with no moving parts. Though originally all mice were connected to a computer by a cable, many modern mice are cordless, relying on short-range radio communication with the connected system. In addition to moving a cursor, computer mice have one or more buttons to allow operations such as the selection of a menu item on a display. Mice often also feature other elements, such as touch surfaces and scroll wheels, which enable additional control and dimensional input. Etymology The earliest known written use of the term mouse or mice in reference to a computer pointing device is in Bill English's July 1965 publication, "Computer-Aided Display Control". This likely originated from its resemblance to the shape and size of a mouse, with the cord resembling its tail. The popularity of wireless mice without cords makes the resemblance less obvious. According to Roger Bates, a hardware designer under English, the term also came about because the cursor on the screen was, for an unknown reason, referred to as "CAT" and was seen by the team as if it would be chasing the new desktop device. The plural for the small rodent is always "mice" in modern usage. The plural for a computer mouse is either "mice" or "mouses" according to most dictionaries, with "mice" being more common. The first recorded plural usage is "mice"; the online Oxford Dictionaries cites a 1984 use, and earlier uses include J. C. R. Licklider's "The Computer as a Communication Device" of 1968. History Stationary trackballs The trackball, a related pointing device, was invented in 1946 by Ralph Benjamin as part of a post-World War II-era fire-control radar plotting system called the Comprehensive Display System (CDS). Benjamin was then working for the British Royal Navy Scientific Service. Benjamin's project used analog computers to calculate the future position of target aircraft based on several initial input points provided by a user with a joystick. Benjamin felt that a more elegant input device was needed and invented what they called a "roller ball" for this purpose. The device was patented in 1947, but only a prototype using a metal ball rolling on two rubber-coated wheels was ever built, and the device was kept as a military secret. Another early trackball was built by Kenyon Taylor, a British electrical engineer working in collaboration with Tom Cranston and Fred Longstaff. Taylor was part of the original Ferranti Canada, working on the Royal Canadian Navy's DATAR (Digital Automated Tracking and Resolving) system in 1952. DATAR was similar in concept to Benjamin's display. The trackball used four disks to pick up motion, two each for the X and Y directions. Several rollers provided mechanical support. 
When the ball was rolled, the pickup discs spun and contacts on their outer rim made periodic contact with wires, producing pulses of output with each movement of the ball. By counting the pulses, the physical movement of the ball could be determined. A digital computer calculated the tracks and sent the resulting data to other ships in a task force using pulse-code modulation radio signals. This trackball used a standard Canadian five-pin bowling ball. It was not patented, since it was a secret military project. Engelbart's first "mouse" Douglas Engelbart of the Stanford Research Institute (now SRI International) has been credited in published books by Thierry Bardini, Paul Ceruzzi, Howard Rheingold, and several others as the inventor of the computer mouse. Engelbart was also recognized as such in various obituary titles after his death in July 2013. By 1963, Engelbart had already established a research lab at SRI, the Augmentation Research Center (ARC), to pursue his objective of developing both hardware and software computer technology to "augment" human intelligence. That November, while attending a conference on computer graphics in Reno, Nevada, Engelbart began to ponder how to adapt the underlying principles of the planimeter to inputting X- and Y-coordinate data. On 14 November 1963, he first recorded his thoughts in his personal notebook about something he initially called a "bug", which in a "3-point" form could have a "drop point and 2 orthogonal wheels". He wrote that the "bug" would be "easier" and "more natural" to use, and unlike a stylus, it would stay still when let go, which meant it would be "much better for coordination with the keyboard". In 1964, Bill English joined ARC, where he helped Engelbart build the first mouse prototype. They christened the device the mouse, as early models had a cord attached to the rear part of the device which looked like a tail and, in turn, resembled the common mouse. According to Roger Bates, a hardware designer under English, another reason for choosing this name was because the cursor on the screen was also referred to as "CAT" at this time. As noted above, this "mouse" was first mentioned in print in a July 1965 report, on which English was the lead author. On 9 December 1968, Engelbart publicly demonstrated the mouse at what would come to be known as The Mother of All Demos. Engelbart never received any royalties for it, as his employer SRI held the patent, which expired before the mouse became widely used in personal computers. In any event, the invention of the mouse was just a small part of Engelbart's much larger project of augmenting human intellect. Several other experimental pointing devices developed for Engelbart's oN-Line System (NLS) exploited different body movements – for example, head-mounted devices attached to the chin or nose – but ultimately the mouse won out because of its speed and convenience. The first mouse, a bulky device, used two potentiometers perpendicular to each other and connected to wheels: the rotation of each wheel translated into motion along one axis. At the time of the "Mother of All Demos", Engelbart's group had been using their second-generation, 3-button mouse for about a year. 
First rolling-ball mouse On 2 October 1968, three years after Engelbart's prototype but more than two months before his public demo, a mouse device named Rollkugelsteuerung (German for "Trackball control") was shown in a sales brochure by the German company AEG-Telefunken as an optional input device for the SIG 100 vector graphics terminal, part of the system around their process computer TR 86 and the TR 440 main frame. Based on an even earlier trackball device, the mouse device had been developed by the company in 1966 in what had been a parallel and independent discovery. As the name suggests and unlike Engelbart's mouse, the Telefunken model already had a ball (diameter 40 mm, weight 40 g) and two mechanical 4-bit rotational position transducers with Gray code-like states, allowing easy movement in any direction. The bits remained stable for at least two successive states to relax debouncing requirements. This arrangement was chosen so that the data could also be transmitted to the TR 86 front-end process computer and over longer distance telex lines at 50 baud. The device came in a hemispherical injection-molded thermoplastic casing featuring one central push button. As noted above, the device was based on an earlier trackball-like device (also named Rollkugel) that was embedded into radar flight control desks. This trackball had originally been developed by a team led by Rainer Mallebrein at Telefunken for the German Bundesanstalt für Flugsicherung (Federal Air Traffic Control). It was part of the corresponding workstation system SAP 300 and the terminal SIG 3001, which had been designed and developed since 1963. Development for the TR 440 main frame began in 1965. This led to the development of the TR 86 process computer system with its SIG 100-86 terminal. Inspired by a discussion with a university customer, Mallebrein came up with the idea of "reversing" the existing trackball into a moveable mouse-like device in 1966, so that customers did not have to be bothered with mounting holes for the earlier trackball device. The device was finished in early 1968, and together with light pens and trackballs, it was commercially offered as an optional input device for their system starting later that year. Not all customers opted to buy the device, which added additional per-piece costs to the already up to 20-million-DM deal for the main frame, of which only a total of 46 systems were sold or leased. They were installed at more than 20 German universities including RWTH Aachen, Technische Universität Berlin, University of Stuttgart and Konstanz. Several mice installed at the Leibniz Supercomputing Centre in Munich in 1972 are well preserved in a museum, two others survived in a museum at Stuttgart University, two in Hamburg, the one from Aachen at the Computer History Museum in the US, and yet another sample was recently donated to the Heinz Nixdorf MuseumsForum (HNF) in Paderborn. Anecdotal reports claim that Telefunken's attempt to patent the device was rejected by the German Patent Office due to lack of inventiveness. For the air traffic control system, the Mallebrein team had already developed a precursor to touch screens in the form of an ultrasonic-curtain-based pointing device in front of the display. In 1970, they developed a device named "Touchinput-Einrichtung" ("touch input device") based on a conductively coated glass screen. First mice on personal computers and workstations The Xerox Alto was one of the first computers designed for individual use in 1973 and is regarded as the first modern computer to use a mouse. 
Alan Kay designed the 16-by-16 mouse cursor icon with its left edge vertical and its right edge at 45 degrees so that it displays well on the bitmapped display. Inspired by PARC's Alto, the Lilith, a computer which had been developed by a team around Niklaus Wirth at ETH Zürich between 1978 and 1980, provided a mouse as well. The third marketed version of an integrated mouse, shipped as a part of a computer and intended for personal computer navigation, came with the Xerox 8010 Star in 1981. By 1982, the Xerox 8010 was probably the best-known computer with a mouse. The Sun-1 also came with a mouse, and the forthcoming Apple Lisa was rumored to use one, but the peripheral remained obscure; Jack Hawley of The Mouse House reported that one buyer for a large organization believed at first that his company sold lab mice. Hawley, who manufactured mice for Xerox, stated that "Practically, I have the market all to myself right now"; a Hawley mouse cost $415. In 1982, Logitech introduced the P4 Mouse at the Comdex trade show in Las Vegas, its first hardware mouse. That same year Microsoft made the decision to make the MS-DOS program Microsoft Word mouse-compatible, and developed the first PC-compatible mouse. The Microsoft Mouse shipped in 1983, thus beginning the Microsoft Hardware division of the company. However, the mouse remained relatively obscure until the appearance of the Macintosh 128K (which included an updated version of the single-button Lisa Mouse) in 1984, and of the Amiga 1000 and the Atari ST in 1985. Operation A mouse typically controls the motion of a pointer in two dimensions in a graphical user interface (GUI). The mouse turns movements of the hand backward and forward, left and right into equivalent electronic signals that in turn are used to move the pointer. The relative movements of the mouse on the surface are applied to the position of the pointer on the screen, which signals the point where actions of the user take place, so hand movements are replicated by the pointer. Clicking or pointing (stopping movement while the cursor is within the bounds of an area) can select files, programs or actions from a list of names, or (in graphical interfaces) through small images called "icons" and other elements. For example, a text file might be represented by a picture of a paper notebook, and clicking while the cursor points at this icon might cause a text editing program to open the file in a window. Different ways of operating the mouse cause specific things to happen in the GUI: Point: stop the motion of the pointer while it is inside the boundaries of what the user wants to interact with. This act of pointing is what the "pointer" and "pointing device" are named after. In web design lingo, pointing is referred to as "hovering". This usage spread to web programming and Android programming, and is now found in many contexts. Click: pressing and releasing a button. (left) Single-click: clicking the main button. (left) Double-click: clicking the button two times in quick succession counts as a different gesture than two separate single clicks. (left) Triple-click: clicking the button three times in quick succession counts as a different gesture than three separate single clicks. Triple clicks are far less common in traditional navigation. Right-click: clicking the secondary button. In modern applications, this frequently opens a context menu. Middle-click: clicking the tertiary button. In most cases, this is also the scroll wheel. Clicking the fourth button. Clicking the fifth button. 
The USB standard defines up to 65535 distinct buttons for mice and other such devices, although in practice buttons above 3 are rarely implemented. Drag: pressing and holding a button, and moving the mouse before releasing the button. This is frequently used to move or copy files or other objects via drag and drop; other uses include selecting text and drawing in graphics applications. Mouse button chording or chord clicking: Clicking with more than one button simultaneously. Clicking while simultaneously typing a letter on the keyboard. Clicking and rolling the mouse wheel simultaneously. Clicking while holding down a modifier key. Moving the pointer a long distance: When a practical limit of mouse movement is reached, one lifts up the mouse, brings it to the opposite edge of the working area while it is held above the surface, and then lowers it back onto the working surface. This is often not necessary, because acceleration software detects fast movement, and moves the pointer significantly faster in proportion than for slow mouse motion. Multi-touch: this method is similar to a multi-touch touchpad on a laptop with support for tap input for multiple fingers, the most famous example being the Apple Magic Mouse. Gestures Gestural interfaces have become an integral part of modern computing, allowing users to interact with their devices in a more intuitive and natural way. In addition to traditional pointing-and-clicking actions, users can now employ gestural inputs to issue commands or perform specific actions. These stylized motions of the mouse cursor, known as "gestures", have the potential to enhance user experience and streamline workflow. To illustrate the concept, consider a drawing program in which a user can employ a gesture to delete a shape on the canvas. By rapidly moving the mouse cursor in an "x" motion over the shape, the user can trigger the command to delete the selected shape. This gesture-based interaction enables users to perform actions quickly and efficiently without relying solely on traditional input methods. While gestural interfaces offer a more immersive and interactive user experience, they also present challenges. One of the primary difficulties is that gestures require finer motor control from users. They demand precise movements, which can be more challenging for individuals with limited dexterity or those who are new to this mode of interaction. However, despite these challenges, gestural interfaces have gained popularity due to their ability to simplify complex tasks and improve efficiency. Several gestural conventions have become widely adopted, making them more accessible to users. One such convention is the drag and drop gesture, which has become pervasive across various applications and platforms. The drag and drop gesture is a fundamental gestural convention that enables users to manipulate objects on the screen seamlessly. It involves a series of actions performed by the user: Pressing the mouse button while the cursor hovers over an interface object. Moving the cursor to a different location while holding the button down. Releasing the mouse button to complete the action. This gesture allows users to transfer or rearrange objects effortlessly. For instance, a user can drag and drop a picture representing a file onto an image of a trash can, indicating the intention to delete the file. 
This intuitive and visual approach to interaction has become synonymous with organizing digital content and simplifying file management tasks. In addition to the drag and drop gesture, several other semantic gestures have emerged as standard conventions within the gestural interface paradigm. These gestures serve specific purposes and contribute to a more intuitive user experience. Some of the notable semantic gestures include: Crossing-based goal: This gesture involves crossing a specific boundary or threshold on the screen to trigger an action or complete a task. For example, swiping across the screen to unlock a device or confirm a selection. Menu traversal: Menu traversal gestures facilitate navigation through hierarchical menus or options. Users can perform gestures such as swiping or scrolling to explore different menu levels or activate specific commands. Pointing: Pointing gestures involve positioning the mouse cursor over an object or element to interact with it. This fundamental gesture enables users to select, click, or access contextual menus. Mouseover (pointing or hovering): Mouseover gestures occur when the cursor is positioned over an object without clicking. This action often triggers a visual change or displays additional information about the object, providing users with real-time feedback. These standard semantic gestures, along with the drag and drop convention, form the building blocks of gestural interfaces, allowing users to interact with digital content using intuitive and natural movements. Specific uses At the end of the 20th century, digitizer mice (pucks) with a magnifying glass were used with AutoCAD for the digitization of blueprints. Other uses of the mouse's input occur commonly in special application domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual objects' or camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's "head" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head. A related function makes an image of an object rotate so that all sides can be examined. 3D design and animation software often uses modal chording of many different combinations to allow objects and cameras to be rotated and moved through space with the few axes of movement mice can detect. When mice have more than one button, the software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button on the mouse will select items, and the secondary (rightmost in a right-handed configuration) button will bring up a menu of alternative actions applicable to that item. For example, on platforms with more than one button, the Mozilla web browser will follow a link in response to a primary button click, will bring up a contextual menu of alternative actions for that link in response to a secondary-button click, and will often open the link in a new tab or window in response to a click with the tertiary (middle) mouse button. Types Mechanical mice The German company Telefunken published information on their early ball mouse on 2 October 1968. Telefunken's mouse was sold as optional equipment for their computer systems. Bill English, builder of Engelbart's original mouse, created a ball mouse in 1972 while working for Xerox PARC. The ball mouse replaced the external wheels with a single ball that could rotate in any direction. 
It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thereby detecting the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required. The ball mouse has two freely rotating rollers. These are located 90 degrees apart. One roller detects the forward-backward motion of the mouse and the other the left-right motion. Opposite the two rollers is a third one (at 45 degrees) that is spring-loaded to push the ball against the other two rollers. Each roller is on the same shaft as an encoder wheel that has slotted edges; the slots interrupt infrared light beams to generate electrical pulses that represent wheel movement. Each wheel's disc has a pair of light beams, located so that a given beam becomes interrupted or again starts to pass light freely when the other beam of the pair is about halfway between changes. Simple logic circuits interpret the relative timing to indicate which direction the wheel is rotating. This incremental rotary encoder scheme is sometimes called quadrature encoding of the wheel rotation, as the two optical sensors produce signals that are in approximately quadrature phase. The mouse sends these signals to the computer system via the mouse cable, directly as logic signals in very old mice such as the Xerox mice, and via a data-formatting IC in modern mice. The driver software in the system converts the signals into motion of the mouse cursor along X and Y axes on the computer screen. The ball is mostly steel, with a precision spherical rubber surface. The weight of the ball, given an appropriate working surface under the mouse, provides a reliable grip so the mouse's movement is transmitted accurately. Ball mice and wheel mice were manufactured for Xerox by Jack Hawley, doing business as The Mouse House in Berkeley, California, starting in 1975. Based on another invention by Jack Hawley, proprietor of the Mouse House, Honeywell produced another type of mechanical mouse. Instead of a ball, it had two wheels rotating on offset axes. Key Tronic later produced a similar product. Modern computer mice took form at the École Polytechnique Fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard. This new design incorporated a single hard rubber mouseball and three buttons, and remained a common design until the mainstream adoption of the scroll-wheel mouse during the 1990s. In 1985, René Sommer added a microprocessor to Nicoud's and Guignard's design. Through this innovation, Sommer is credited with inventing a significant component of the mouse, which made it more "intelligent"; though optical mice from Mouse Systems had incorporated microprocessors by 1984. Another type of mechanical mouse, the "analog mouse" (now generally regarded as obsolete), uses potentiometers rather than encoder wheels, and is typically designed to be plug compatible with an analog joystick. The "Color Mouse", originally marketed by RadioShack for their Color Computer (but also usable on MS-DOS machines equipped with analog joystick ports, provided the software accepted joystick input) was the best-known example. 
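The quadrature encoding described above can be decoded in software or simple logic by tracking transitions between the four possible two-bit states. The following sketch is illustrative only; the channel names, the transition table and the sampling loop are assumptions made for this example rather than the circuitry of any particular mouse.

# Illustrative quadrature decoder for one axis of a ball mouse.
# The two optical channels (a, b) are roughly 90 degrees out of phase; the order
# in which they change reveals the direction of rotation.
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """Accumulate signed counts ("mickeys") from a sequence of (a, b) samples."""
    position = 0
    previous = (samples[0][0] << 1) | samples[0][1]
    for a, b in samples[1:]:
        current = (a << 1) | b
        position += TRANSITIONS.get((previous, current), 0)  # 0: no change or glitch
        previous = current
    return position

# One full forward cycle of the encoder wheel yields +4 counts:
print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # prints 4

A real driver would then scale such counts into cursor pixels, as discussed under Speed below.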
Optical and laser mice Early optical mice relied entirely on one or more light-emitting diodes (LEDs) and an imaging array of photodiodes to detect movement relative to the underlying surface, eschewing the internal moving parts a mechanical mouse uses in addition to its optics. A laser mouse is an optical mouse that uses coherent (laser) light. The earliest optical mice detected movement on pre-printed mousepad surfaces, whereas the modern LED optical mouse works on most opaque diffuse surfaces; it is usually unable to detect movement on specular surfaces like polished stone. Laser diodes provide good resolution and precision, improving performance on opaque specular surfaces. Later, more surface-independent optical mice use an optoelectronic sensor (essentially, a tiny low-resolution video camera) to take successive images of the surface on which the mouse operates. Battery-powered, wireless optical mice flash the LED intermittently to save power, and only glow steadily when movement is detected. Inertial and gyroscopic mice Often called "air mice" since they do not require a surface to operate, inertial mice use a tuning fork or other accelerometer (US Patent 4787051) to detect rotary movement for every axis supported. The most common models (manufactured by Logitech and Gyration) work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user requires only small wrist rotations to move the cursor, reducing user fatigue or "gorilla arm". Usually cordless, they often have a switch to deactivate the movement circuitry between use, allowing the user freedom of movement without affecting the cursor position. A patent for an inertial mouse claims that such mice consume less power than optically based mice, and offer increased sensitivity, reduced weight and increased ease-of-use. In combination with a wireless keyboard an inertial mouse can offer alternative ergonomic arrangements which do not require a flat work surface, potentially alleviating some types of repetitive motion injuries related to workstation posture. 3D mice A 3D mouse is a computer input device for viewport interaction with at least three degrees of freedom (DoF), e.g. in 3D computer graphics software for manipulating virtual objects, navigating in the viewport, defining camera paths, posing, and desktop motion capture. 3D mice can also be used as spatial controllers for video game interaction, e.g. SpaceOrb 360. For such different tasks, the transfer function used and the device stiffness are essential for efficient interaction. Transfer function The virtual motion is connected to the 3D mouse control handle via a transfer function. Position control means that the virtual position and orientation are proportional to the mouse handle's deflection, whereas velocity control means that the translational and rotational velocity of the controlled object is proportional to the handle deflection. A further essential property of a transfer function is its interaction metaphor: Object-in-hand metaphor: An exterocentrical metaphor whereby the scene moves in correspondence with the input device. If the handle of the input device is twisted clockwise the scene rotates clockwise. If the handle is moved left the scene shifts left, and so on. Camera-in-hand metaphor: An egocentrical metaphor whereby the user's view is controlled by direct movement of a virtual camera. If the handle is twisted clockwise the scene rotates counter-clockwise. If the handle is moved left the scene shifts right, and so on. 
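The difference between the two transfer functions can be illustrated with a one-axis sketch. All names, gains and the fixed time step below are assumptions made for this example, not parameters of any real 3D-mouse driver.

# Illustrative one-axis transfer functions for a 3D-mouse handle.
POSITION_GAIN = 10.0   # assumed: units of virtual space per unit of handle deflection
VELOCITY_GAIN = 5.0    # assumed: units per second per unit of handle deflection
DT = 0.02              # assumed update interval in seconds

def position_control(deflection, home):
    """The virtual coordinate is proportional to the current deflection."""
    return home + POSITION_GAIN * deflection

def velocity_control(deflection, current):
    """The virtual coordinate changes at a rate proportional to the deflection."""
    return current + VELOCITY_GAIN * deflection * DT

# Under position control the object returns to 'home' when the handle is released
# (deflection = 0); under velocity control it simply stops where it is.

Loosely speaking, flipping the sign of the gain on an axis turns an object-in-hand mapping into a camera-in-hand mapping for that axis, since the scene then appears to move opposite to the handle.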
Ware and Osborne performed an experiment investigating these metaphors which showed that there is no single best metaphor. For manipulation tasks, the object-in-hand metaphor was superior, whereas for navigation tasks the camera-in-hand metaphor was superior. Device stiffness Zhai used the following three categories for device stiffness: Isotonic Input: An input device with zero stiffness, that is, there is no self-centering effect. Elastic Input: A device with some stiffness, that is, the forces on the handle are proportional to the deflections. Isometric Input: An elastic input device with infinite stiffness, that is, the device handle does not allow any deflection but records force and torque. Isotonic 3D mice Logitech 3D Mouse (1990) was the first ultrasonic mouse and is an example of an isotonic 3D mouse having six degrees of freedom (6DoF). Isotonic devices have also been developed with less than 6DoF, e.g. the Inspector at Technical University of Denmark (5DoF input). Other examples of isotonic 3D mice are motion controllers, i.e. a type of game controller that typically uses accelerometers to track motion. Motion tracking systems are also used for motion capture, e.g. in the film industry, although these tracking systems are not 3D mice in a strict sense, because motion capture only means recording 3D motion and not 3D interaction. Isometric 3D mice Early 3D mice for velocity control were almost ideally isometric, e.g. SpaceBall 1003, 2003, 3003, and a device developed at Deutsches Zentrum für Luft und Raumfahrt (DLR), cf. US patent US4589810A. Elastic 3D mice At DLR an elastic 6DoF sensor was developed that was used in Logitech's SpaceMouse and in the products of 3DConnexion. The SpaceBall 4000 FLX and the SpaceMouse allow only small deflections before reaching their maximum force (approximately 10 N in the case of the SpaceBall 4000 FLX), that is, they are stiffly elastic devices. Taking this development further, the softly elastic Sundinlabs SpaceCat was developed. SpaceCat has a comparatively large maximum translational deflection and a maximum rotational deflection of approximately 30° at a maximum force of less than 2 N, that is, a much lower stiffness. With SpaceCat, Sundin and Fjeld reviewed five comparative experiments performed with different device stiffness and transfer functions and performed a further study comparing 6DoF softly elastic position control with 6DoF stiffly elastic velocity control in a positioning task. They concluded that for positioning tasks position control is to be preferred over velocity control. They further conjectured the following two preferred types of 3D mouse usage: Positioning, manipulation, and docking using isotonic or softly elastic position control and an object-in-hand metaphor. Navigation using softly or stiffly elastic rate control and a camera-in-hand metaphor. 3DConnexion's 3D mice have been commercially successful over decades. They are used in combination with the conventional mouse for CAD. The Space Mouse is used to orient the target object or change the viewpoint with the non-dominant hand, whereas the dominant hand operates the computer mouse for conventional CAD GUI operation. This is a kind of space-multiplexed input where the 6DoF input device acts as a graspable user interface that is always connected to the viewport. Force feedback With force feedback the device stiffness can dynamically be adapted to the task just performed by the user, e.g. 
performing positioning tasks with less stiffness than navigation tasks. Tactile mice In 2000, Logitech introduced a "tactile mouse" known as the "iFeel Mouse", developed by Immersion Corporation, that contained a small actuator enabling the mouse to generate simulated physical sensations. Such a mouse can augment user interfaces with haptic feedback, such as giving feedback when crossing a window boundary. Surfing the internet with a touch-enabled mouse was first developed in 1996 and first implemented commercially in the Wingman Force Feedback Mouse. It requires the user to be able to feel depth or hardness; this ability was realized with the first electrorheological tactile mice but never marketed. Pucks Tablet digitizers are sometimes used with accessories called pucks, devices which rely on absolute positioning, but can be configured for sufficiently mouse-like relative tracking that they are sometimes marketed as mice. Ergonomic mice As the name suggests, this type of mouse is intended to provide optimum comfort and avoid injuries such as carpal tunnel syndrome, arthritis, and other repetitive strain injuries. It is designed to fit natural hand position and movements, to reduce discomfort. When holding a typical mouse, the ulna and radius bones of the arm are crossed. Some designs attempt to place the palm more vertically, so the bones take a more natural parallel position. Increasing mouse height and angling the mouse topcase can improve wrist posture without negatively affecting performance. Some limit wrist movement, encouraging arm movement instead, which may be less precise but better from the health point of view. A mouse may be angled from the thumb downward to the opposite side – this is known to reduce wrist pronation. However, such optimizations make the mouse specific to the right or left hand, making it more problematic to switch to the other hand when one tires. Time has criticized manufacturers for offering few or no left-handed ergonomic mice: "Oftentimes I felt like I was dealing with someone who'd never actually met a left-handed person before." Another solution is a pointing bar device. The so-called roller bar mouse is positioned snugly in front of the keyboard, thus allowing bi-manual accessibility. Gaming mice These mice are specifically designed for use in computer games. They typically employ a wider array of controls and buttons and have designs that differ radically from traditional mice. They may also have decorative monochrome or programmable RGB LED lighting. The additional buttons can often be used for changing the sensitivity of the mouse or they can be assigned (programmed) to macros (i.e., for opening a program or for use instead of a key combination). It is also common for game mice, especially those designed for use in real-time strategy games such as StarCraft, or in multiplayer online battle arena games such as League of Legends, to have a relatively high sensitivity, measured in dots per inch (DPI), which can be as high as 25,600. DPI and CPI refer to the same measure of mouse sensitivity; DPI is a misnomer used in the gaming world, where many manufacturers use it to mean CPI, counts per inch. Some advanced mice from gaming manufacturers also allow users to adjust the weight of the mouse by adding or subtracting weights to allow for easier control. Ergonomic quality is also an important factor in gaming mice, as extended gameplay times may make further use of the mouse uncomfortable. 
Some mice have been designed to have adjustable features such as removable and/or elongated palm rests, horizontally adjustable thumb rests and pinky rests. Some mice may include several different rests with their products to ensure comfort for a wider range of target consumers. Gaming mice are held by gamers in three styles of grip: Palm Grip: the hand rests on the mouse, with extended fingers. Claw Grip: palm rests on the mouse, bent fingers. Finger-Tip Grip: bent fingers, palm does not touch the mouse. Connectivity and communication protocols To transmit their input, typical cabled mice use a thin electrical cord terminating in a standard connector, such as RS-232C, PS/2, ADB, or USB. Cordless mice instead transmit data via infrared radiation (see IrDA) or radio (including Bluetooth), although many such cordless interfaces are themselves connected through the aforementioned wired serial buses. While the electrical interface and the format of the data transmitted by commonly available mice are currently standardized on USB, in the past they varied between different manufacturers. A bus mouse used a dedicated interface card for connection to an IBM PC or compatible computer. Mouse use in DOS applications became more common after the introduction of the Microsoft Mouse, largely because Microsoft provided an open standard for communication between applications and mouse driver software. Thus, any application written to use the Microsoft standard could use a mouse with a driver that implements the same API, even if the mouse hardware itself was incompatible with Microsoft's. This driver provides the state of the buttons and the distance the mouse has moved in units that its documentation calls "mickeys". Early mice In the 1970s, the Xerox Alto mouse, and in the 1980s the Xerox optical mouse, used a quadrature-encoded X and Y interface. This two-bit encoding per dimension had the property that only one bit of the two would change at a time, like a Gray code or Johnson counter, so that the transitions would not be misinterpreted when asynchronously sampled. The earliest mass-market mice, such as the original Macintosh, Amiga, and Atari ST mice, used a D-subminiature 9-pin connector to send the quadrature-encoded X and Y axis signals directly, plus one pin per mouse button. The mouse was a simple optomechanical device, and the decoding circuitry was all in the main computer. The DE-9 connectors were designed to be electrically compatible with the joysticks popular on numerous 8-bit systems, such as the Commodore 64 and the Atari 2600. Although the ports could be used for both purposes, the signals had to be interpreted differently. As a result, plugging a mouse into a joystick port causes the "joystick" to continuously move in some direction, even if the mouse stays still, whereas plugging a joystick into a mouse port causes the "mouse" to only be able to move a single pixel in each direction. Serial interface and protocol Because the IBM PC did not have a quadrature decoder built in, early PC mice used the RS-232C serial port to communicate encoded mouse movements, as well as provide power to the mouse's circuits. The Mouse Systems Corporation (MSC) version used a five-byte protocol and supported three buttons. The Microsoft version used a three-byte protocol and supported two buttons. Due to the incompatibility between the two protocols, some manufacturers sold serial mice with a mode switch: "PC" for MSC mode, "MS" for Microsoft mode. 
Apple Desktop Bus In 1986 Apple first implemented the Apple Desktop Bus, allowing the daisy chaining of up to 16 devices, including mice and other devices on the same bus with no configuration whatsoever. Featuring only a single data pin, the bus used a purely polled approach to device communications and survived as the standard on mainstream models (including a number of non-Apple workstations) until 1998, when Apple's iMac line of computers joined the industry-wide switch to using USB. Beginning with the Bronze Keyboard PowerBook G3 in May 1999, Apple dropped the external ADB port in favor of USB, but retained an internal ADB connection in the PowerBook G4 for communication with its built-in keyboard and trackpad until early 2005. PS/2 interface and protocol With the arrival of the IBM PS/2 personal-computer series in 1987, IBM introduced the eponymous PS/2 port for mice and keyboards, which other manufacturers rapidly adopted. The most visible change was the use of a round 6-pin mini-DIN, in lieu of the former 5-pin MIDI-style full-sized DIN 41524 connector. In default mode (called stream mode) a PS/2 mouse communicates motion, and the state of each button, by means of 3-byte packets. For any motion, button press or button release event, a PS/2 mouse sends, over a bi-directional serial port, a sequence of three bytes: the first byte carries, from most significant to least significant bit, YV, XV, YS, XS, an always-set bit, MB, RB and LB; the second byte is the X movement and the third byte is the Y movement. Here, XS and YS represent the sign bits of the movement vectors, XV and YV indicate an overflow in the respective vector component, and LB, MB and RB indicate the status of the left, middle and right mouse buttons (1 = pressed). PS/2 mice also understand several commands for reset and self-test, switching between different operating modes, and changing the resolution of the reported motion vectors. A Microsoft IntelliMouse relies on an extension of the PS/2 protocol: the ImPS/2 or IMPS/2 protocol (the abbreviation combines the concepts of "IntelliMouse" and "PS/2"). It initially operates in standard PS/2 format, for backward compatibility. After the host sends a special command sequence, it switches to an extended format in which a fourth byte carries information about wheel movements. The IntelliMouse Explorer works analogously, with the difference that its 4-byte packets also allow for two additional buttons (for a total of five). Mouse vendors also use other extended formats, often without providing public documentation. The Typhoon mouse uses 6-byte packets which can appear as a sequence of two standard 3-byte packets, such that an ordinary PS/2 driver can handle them. For 3D (or 6-degree-of-freedom) input, vendors have made many extensions both to the hardware and to software. In the late 1990s, Logitech created ultrasound-based tracking which gave 3D input to a few millimeters' accuracy, which worked well as an input device but failed as a profitable product. In 2008, Motion4U introduced its "OptiBurst" system using IR tracking for use as a Maya (graphics software) plugin. USB Almost all wired mice today use USB and the USB human interface device class for communication. Cordless or wireless Cordless or wireless mice transmit data via radio. Some mice connect to the computer through Bluetooth or Wi-Fi, while others use a receiver that plugs into the computer, for example through a USB port. Many mice that use a USB receiver have a storage compartment for it inside the mouse. Some "nano receivers" are designed to be small enough to remain plugged into a laptop during transport, while still being large enough to easily remove. 
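As a sketch of how a host might decode the 3-byte PS/2 stream-mode packet described above (the function name and the output structure are this example's own choices, not part of any real driver's API):

def parse_ps2_packet(b1, b2, b3):
    """Decode one 3-byte PS/2 stream-mode packet into button states and a movement vector."""
    left   = bool(b1 & 0x01)      # LB
    right  = bool(b1 & 0x02)      # RB
    middle = bool(b1 & 0x04)      # MB
    # b2 and b3 hold 8-bit movement magnitudes; bits 4 and 5 of b1 (XS, YS) carry
    # the signs, giving 9-bit two's-complement movement values.
    dx = b2 - 256 if b1 & 0x10 else b2
    dy = b3 - 256 if b1 & 0x20 else b3
    overflow = bool(b1 & 0x40) or bool(b1 & 0x80)  # XV or YV
    return {"left": left, "middle": middle, "right": right,
            "dx": dx, "dy": dy, "overflow": overflow}

# Example: left button held, movement of -3 in X and +5 in Y.
print(parse_ps2_packet(0b00011001, 0b11111101, 0b00000101))

A real driver would also check the always-set bit in the first byte to stay synchronized with the packet stream.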
Operating system support MS-DOS and Windows 1.0 support connecting a mouse such as a Microsoft Mouse via multiple interfaces: BallPoint, Bus (InPort), Serial port or PS/2. Windows 98 added built-in support for USB Human Interface Device class (USB HID), with native vertical scrolling support. Windows 2000 and Windows Me expanded this built-in support to 5-button mice. Windows XP Service Pack 2 introduced a Bluetooth stack, allowing Bluetooth mice to be used without any USB receivers. Windows Vista added native support for horizontal scrolling and standardized wheel movement granularity for finer scrolling. Windows 8 introduced BLE (Bluetooth Low Energy) mouse/HID support. Multiple-mouse systems Some systems allow two or more mice to be used at once as input devices. Late-1980s era home computers such as the Amiga used this to allow computer games with two players interacting on the same computer (Lemmings and The Settlers for example). The same idea is sometimes used in collaborative software, e.g. to simulate a whiteboard that multiple users can draw on without passing a single mouse around. Microsoft Windows, since Windows 98, has supported multiple simultaneous pointing devices. Because Windows only provides a single screen cursor, using more than one device at the same time requires cooperation of users or applications designed for multiple input devices. Multiple mice are often used in multi-user gaming in addition to specially designed devices that provide several input interfaces. Windows also has full support for multiple input/mouse configurations for multi-user environments. Starting with Windows XP, Microsoft introduced an SDK for developing applications that allow multiple input devices to be used at the same time with independent cursors and independent input points. However, it no longer appears to be available. The introduction of Windows Vista and Microsoft Surface (now known as Microsoft PixelSense) introduced a new set of input APIs that were adopted into Windows 7, allowing for 50 points/cursors, all controlled by independent users. The new input points provide traditional mouse input; however, they were designed with other input technologies like touch and image in mind. They inherently offer 3D coordinates along with pressure, size, tilt, angle, mask, and even an image bitmap to see and recognize the input point/object on the screen. Linux distributions and other operating systems that use X.Org, such as OpenSolaris and FreeBSD, support 255 cursors/input points through Multi-Pointer X. However, currently no window managers support Multi-Pointer X, leaving it relegated to custom software usage. There have also been propositions of having a single operator use two mice simultaneously as a more sophisticated means of controlling various graphics and multimedia applications. Buttons Mouse buttons are microswitches which can be pressed to select or interact with an element of a graphical user interface, producing a distinctive clicking sound. Since around the late 1990s, the three-button scrollmouse has become the de facto standard. Users most commonly employ the second button to invoke a contextual menu in the computer's software user interface, which contains options specifically tailored to the interface element over which the mouse cursor currently sits. By default, the primary mouse button is located on the left-hand side of the mouse, for the benefit of right-handed users; left-handed users can usually reverse this configuration via software. 
Scrolling Nearly all mice now have an integrated input primarily intended for scrolling on top, usually a single-axis digital wheel or rocker switch which can also be depressed to act as a third button. Though less common, many mice instead have two-axis inputs such as a tiltable wheel, trackball, or touchpad. Those with a trackball may be designed to stay stationary, using the trackball instead of moving the mouse. Speed Mickeys per second is a unit of measurement for the speed and movement direction of a computer mouse, where direction is often expressed as "horizontal" versus "vertical" mickey count. However, speed can also refer to the ratio between how many pixels the cursor moves on the screen and how far the mouse moves on the mouse pad, which may be expressed as pixels per mickey, pixels per inch, or pixels per centimeter. The computer industry often measures mouse sensitivity in terms of counts per inch (CPI), commonly expressed as dots per inch (DPI): the number of steps the mouse will report when it moves one inch. In early mice, this specification was called pulses per inch (ppi). The mickey originally referred to one of these counts, or one resolvable step of motion. If the default mouse-tracking condition involves moving the cursor by one screen-pixel or dot on-screen per reported step, then the CPI does equate to DPI: dots of cursor motion per inch of mouse motion. The CPI or DPI as reported by manufacturers depends on how they make the mouse; the higher the CPI, the faster the cursor moves with mouse movement. However, software can adjust the mouse sensitivity, making the cursor move faster or slower than its CPI. Software can also change the speed of the cursor dynamically, taking into account the mouse's absolute speed and the movement from the last stop-point. For simple software, when the mouse starts to move, the software will count the number of "counts" or "mickeys" received from the mouse and will move the cursor across the screen by that number of pixels (or multiplied by a rate factor, typically less than 1). The cursor will move slowly on the screen, with good precision. When the movement of the mouse passes the value set for some threshold, the software will start to move the cursor faster, with a greater rate factor. Usually, the user can set the value of the second rate factor by changing the "acceleration" setting. Operating systems sometimes apply acceleration, referred to as "ballistics", to the motion reported by the mouse. For example, versions of Windows prior to Windows XP doubled reported values above a configurable threshold, and then optionally doubled them again above a second configurable threshold. These doublings applied separately in the X and Y directions, resulting in a very nonlinear response; a simple sketch of such a two-threshold scheme is given after the mousepad discussion below. Mousepads Engelbart's original mouse did not require a mousepad; the mouse had two large wheels which could roll on virtually any surface. However, most subsequent mechanical mice, starting with the steel roller ball mouse, have required a mousepad for optimal performance. The mousepad, the most common mouse accessory, appears most commonly in conjunction with mechanical mice, because to roll smoothly the ball requires more friction than common desk surfaces usually provide. So-called "hard mousepads" for gamers or optical/laser mice also exist. Most optical and laser mice do not require a pad, the notable exception being early optical mice which relied on a grid on the pad to detect movement (e.g. Mouse Systems). 
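Returning to the pointer-speed behaviour described under Speed above, the following sketch converts raw mouse counts ("mickeys") into cursor pixels using a two-threshold doubling scheme loosely modelled on the pre-Windows-XP ballistics mentioned there. All constants are invented for illustration and do not correspond to any particular operating system's defaults.

# Illustrative conversion of per-update mouse counts into cursor pixels.
BASE_RATE = 1.0      # pixels per count below the first threshold
THRESHOLD_1 = 6      # counts per update above which movement is doubled
THRESHOLD_2 = 10     # counts per update above which movement is doubled again

def counts_to_pixels(counts):
    """Apply per-axis threshold acceleration to one reported movement value."""
    magnitude = abs(counts)
    rate = BASE_RATE
    if magnitude > THRESHOLD_1:
        rate *= 2
    if magnitude > THRESHOLD_2:
        rate *= 2
    return int(counts * rate)

# A slow 3-count movement maps to 3 pixels; a fast 12-count movement maps to 48.
print(counts_to_pixels(3), counts_to_pixels(12))  # prints: 3 48

Because the doubling is applied to each axis independently, a fast diagonal movement can change direction as well as length, which is one reason such simple schemes feel nonlinear.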
Whether to use a hard or soft mousepad with an optical mouse is largely a matter of personal preference. One exception occurs when the desk surface creates problems for the optical or laser tracking, for example, a transparent or reflective surface, such as glass. Some mice also come with small "pads" attached to the bottom surface, also called mouse feet or mouse skates, that help the user slide the mouse smoothly across surfaces. In the marketplace Around 1981, Xerox included mice with its Xerox Star, based on the mouse used in the 1970s on the Alto computer at Xerox PARC. Sun Microsystems, Symbolics, Lisp Machines Inc., and Tektronix also shipped workstations with mice, starting in about 1981. Later, inspired by the Star, Apple Computer released the Apple Lisa, which also used a mouse. However, none of these products achieved large-scale success. Only with the release of the Apple Macintosh in 1984 did the mouse see widespread use. The Macintosh design, commercially successful and technically influential, led many other vendors to begin producing mice or including them with their other computer products (by 1986, Atari ST, Amiga, Windows 1.0, GEOS for the Commodore 64, and the Apple IIGS). The widespread adoption of graphical user interfaces in the software of the 1980s and 1990s made mice all but indispensable for controlling computers. In November 2008, Logitech built their billionth mouse. Use in games The device often functions as an interface for PC-based computer games and sometimes for video game consoles. The Classic Mac OS Desk Accessory Puzzle in 1984 was the first game designed specifically for a mouse. First-person shooters FPSs naturally lend themselves to separate and simultaneous control of the player's movement and aim, and on computers this has traditionally been achieved with a combination of keyboard and mouse. Players use the X-axis of the mouse for looking (or turning) left and right, and the Y-axis for looking up and down; the keyboard is used for movement and supplemental inputs. Many shooting genre players prefer a mouse over a gamepad analog stick because the wide range of motion offered by a mouse allows for faster and more varied control. Although an analog stick allows the player more granular control, it is poor for certain movements, as the player's input is relayed based on a vector of both the stick's direction and magnitude. Thus, a small but fast movement (known as "flick-shotting") using a gamepad requires the player to quickly move the stick from its rest position to the edge and back again in quick succession, a difficult maneuver. In addition, the stick also has a finite magnitude; if the player is currently using the stick to move at a non-zero velocity, their ability to increase the rate of movement of the camera is further limited based on the position their displaced stick was already at before executing the maneuver. The effect of this is that a mouse is well suited not only to small, precise movements but also to large, quick movements and immediate, responsive movements, all of which are important in shooter gaming. This advantage also extends in varying degrees to similar game styles such as third-person shooters. Some incorrectly ported games or game engines have acceleration and interpolation curves which unintentionally produce excessive, irregular, or even negative acceleration when used with a mouse instead of their native platform's non-mouse default input device. 
Depending on how deeply hardcoded this misbehavior is, internal user patches or external 3rd-party software may be able to fix it. Individual game engines will also have their own sensitivities. This often restricts one from taking a game's existing sensitivity, transferring it to another, and obtaining the same 360° rotation distance. A sensitivity converter is the preferred tool that FPS gamers use to translate rotational movements correctly between different mice and between different games. Calculating the conversion values manually is also possible, but it is more time-consuming and requires additional arithmetic, while using a sensitivity converter is faster and easier for gamers. Due to their similarity to the WIMP desktop metaphor interface for which mice were originally designed, and to their own tabletop game origins, computer strategy games are most commonly played with mice. In particular, real-time strategy and MOBA games usually require the use of a mouse. The left button usually controls primary fire. If the game supports multiple fire modes, the right button often provides secondary fire from the selected weapon. Games with only a single fire mode will generally map secondary fire to aim down the weapon sights. In some games, the right button may also invoke accessories for a particular weapon, such as allowing access to the scope of a sniper rifle or allowing the mounting of a bayonet or silencer. Players can use a scroll wheel for changing weapons (or for controlling scope-zoom magnification, in older games). On most first-person shooter games, programming may also assign more functions to additional buttons on mice with more than three controls. A keyboard usually controls movement (for example, WASD for moving forward, left, backward, and right, respectively) and other functions such as changing posture. Since the mouse serves for aiming, a mouse that tracks movement accurately and with less lag (latency) will give a player an advantage over players with less accurate or slower mice. In some cases the right mouse button may be used to move the player forward, either in lieu of, or in conjunction with, the typical WASD configuration. Many games provide players with the option of mapping their own choice of a key or button to a certain control. An early technique of players, circle strafing, saw a player continuously strafing while aiming and shooting at an opponent by walking in a circle around the opponent, with the opponent at the center of the circle. Players could achieve this by holding down a key for strafing while continuously aiming the mouse toward the opponent. Games using mice for input are so popular that many manufacturers make mice specifically for gaming. Such mice may feature adjustable weights, high-resolution optical or laser components, additional buttons, ergonomic shape, and other features such as adjustable CPI. Mouse bungees are typically used with gaming mice because they eliminate the drag of the cable. Many games, such as first- or third-person shooters, have a setting named "invert mouse" or similar (not to be confused with "button inversion", sometimes performed by left-handed users) which allows the user to look downward by moving the mouse forward and upward by moving the mouse backward (the opposite of non-inverted movement). 
This control system resembles that of aircraft control sticks, where pulling back causes pitch up and pushing forward causes pitch down; computer joysticks also typically emulate this control configuration. After id Software's commercial hit Doom, which did not support vertical aiming, competitor Bungie's Marathon became the first first-person shooter to support using the mouse to aim up and down. Games using the Build engine had an option to invert the Y-axis. The "invert" feature actually made the mouse behave in a manner that users regard as non-inverted (by default, moving the mouse forward resulted in looking down). Soon after, id Software released Quake, which introduced the invert feature as users know it. Home consoles In 1988, the VTech Socrates educational video game console featured a wireless mouse with an attached mouse pad as an optional controller used for some games. In the early 1990s, the Super Nintendo Entertainment System video game system featured a mouse in addition to its controllers. A mouse was also released for the Nintendo 64, although it was only released in Japan. The 1992 game Mario Paint in particular used the mouse's capabilities, as did its Japanese-only successor Mario Artist on the N64 for its 64DD disk drive peripheral in 1999. Sega released official mice for their Genesis/Mega Drive, Saturn and Dreamcast consoles. NEC sold official mice for its PC Engine and PC-FX consoles. Sony released an official mouse product for the PlayStation console, included one along with the Linux for PlayStation 2 kit, and allowed owners to use virtually any USB mouse with the PS2, PS3, and PS4. Nintendo's Wii also had this feature implemented in a later software update, and this support was retained on its successor, the Wii U. Microsoft's Xbox line of game consoles (which used operating systems based on modified versions of Windows NT) also had universal mouse support using USB. See also Computer accessibility Footmouse Graphics tablet Gesture recognition Human–computer interaction (HCI) Mouse keys Mouse tracking Optical trackpad Pointing stick Rotational mouse Trackball Notes References Further reading External links Doug Engelbart Institute mouse resources page includes stories and links The video segment of The Mother of All Demos with Doug Engelbart showing the device from 1968 American inventions Computing input devices History of human–computer interaction Pointing devices Video game control methods Computer-related introductions in 1964
Computer mouse
[ "Technology" ]
11,955
[ "History of human–computer interaction", "History of computing" ]
7,060
https://en.wikipedia.org/wiki/Chymotrypsin
Chymotrypsin (chymotrypsins A and B, alpha-chymar ophth, avazyme, chymar, chymotest, enzeon, quimar, quimotrase, alpha-chymar, alpha-chymotrypsin A, alpha-chymotrypsin) is a digestive enzyme component of pancreatic juice acting in the duodenum, where it performs proteolysis, the breakdown of proteins and polypeptides. Chymotrypsin preferentially cleaves peptide amide bonds where the side chain of the amino acid N-terminal to the scissile amide bond (the P1 position) is a large hydrophobic amino acid (tyrosine, tryptophan, and phenylalanine). These amino acids contain an aromatic ring in their side chain that fits into a hydrophobic pocket (the S1 position) of the enzyme. It is activated in the presence of trypsin. The hydrophobic and shape complementarity between the peptide substrate P1 side chain and the enzyme S1 binding cavity accounts for the substrate specificity of this enzyme. Chymotrypsin also hydrolyzes other amide bonds in peptides at slower rates, particularly those containing leucine at the P1 position. It is the archetypal structure for its superfamily, the PA clan of proteases. Activation Chymotrypsin is synthesized in the pancreas. Its precursor is chymotrypsinogen. Trypsin activates chymotrypsinogen by cleaving the peptide bond between positions Arg15 and Ile16, producing π-chymotrypsin. In turn, the amino group (-NH3+) of the Ile16 residue interacts with the side chain of Asp194, producing the "oxyanion hole" and the hydrophobic "S1 pocket". Moreover, chymotrypsin induces its own activation by cleaving at positions 14–15, 146–147, and 148–149, producing α-chymotrypsin (which is more active and stable than π-chymotrypsin). The resulting molecule consists of three polypeptide chains interconnected via disulfide bonds. Mechanism of action and kinetics In vivo, chymotrypsin is a proteolytic enzyme (serine protease) acting in the digestive systems of many organisms. It facilitates the cleavage of peptide bonds by a hydrolysis reaction, which, despite being thermodynamically favorable, occurs extremely slowly in the absence of a catalyst. The main substrates of chymotrypsin are peptide bonds in which the amino acid N-terminal to the bond is a tryptophan, tyrosine, phenylalanine, or leucine. Like many proteases, chymotrypsin also hydrolyses amide bonds in vitro, a virtue that enabled the use of substrate analogs such as N-acetyl-L-phenylalanine p-nitrophenyl amide for enzyme assays. Chymotrypsin cleaves peptide bonds by attacking the unreactive carbonyl group with a powerful nucleophile, the serine 195 residue located in the active site of the enzyme, which briefly becomes covalently bonded to the substrate, forming an enzyme-substrate intermediate. Along with histidine 57 and aspartic acid 102, this serine residue constitutes the catalytic triad of the active site. These findings rely on inhibition assays and the study of the kinetics of cleavage of the aforementioned substrate, exploiting the fact that the enzyme-substrate intermediate p-nitrophenolate has a yellow colour, enabling measurement of its concentration by measuring light absorbance at 410 nm. Chymotrypsin catalysis of the hydrolysis of a protein substrate is performed in two steps. First, the nucleophilicity of Ser-195 is enhanced by general-base catalysis in which the proton of the serine hydroxyl group is transferred to the imidazole moiety of His-57 during its attack on the electron-deficient carbonyl carbon of the protein-substrate main chain (the k1 step). 
This proton transfer occurs via the concerted action of the three residues of the catalytic triad. The buildup of negative charge on the resultant tetrahedral intermediate is stabilized in the oxyanion hole of the enzyme's active site by the formation of two hydrogen bonds to adjacent main-chain amide hydrogens. The His-57 imidazolium moiety formed in the k1 step is a general acid catalyst for the k-1 reaction. However, evidence for similar general-acid catalysis of the k2 reaction (Tet2) has been disputed; apparently water provides a proton to the amine leaving group. Breakdown of Tet1 (via k3) generates an acyl enzyme, which is hydrolyzed with His-57 acting as a general base (kH2O), forming a tetrahedral intermediate that breaks down to regenerate the serine hydroxyl moiety, as well as the protein fragment with the newly formed carboxyl terminus. Uses Medical uses Chymotrypsin has been used during cataract surgery. It was marketed under the brand name Zolyse. Isozymes See also Trypsin PA clan of proteases References Further reading External links The MEROPS online database for peptidases and their inhibitors: S01.001 EC 3.4.21 Proteases Withdrawn drugs
Chymotrypsin
[ "Chemistry" ]
1,207
[ "Drug safety", "Withdrawn drugs" ]
7,063
https://en.wikipedia.org/wiki/Catapult
A catapult is a ballistic device used to launch a projectile a great distance without the aid of gunpowder or other propellants – particularly various types of ancient and medieval siege engines. A catapult uses the sudden release of stored potential energy to propel its payload. Most convert tension or torsion energy that was more slowly and manually built up within the device before release, via springs, bows, twisted rope, elastic, or any of numerous other materials and mechanisms. In ancient warfare, the catapult was regarded as one of the most powerful pieces of heavy weaponry. In modern times the term can apply to devices ranging from a simple hand-held implement (also called a "slingshot") to a mechanism for launching aircraft from a ship. The earliest catapults date to at least the 7th century BC, with King Uzziah of Judah recorded as equipping the walls of Jerusalem with machines that shot "great stones". Catapults are mentioned in the Yajurveda under the name "Jyah" in chapter 30, verse 7. In the 5th century BC the mangonel, a type of traction trebuchet and catapult, appeared in ancient China. Early uses were also attributed to Ajatashatru of Magadha in his 5th century BC war against the Licchavis. Greek catapults were invented in the early 4th century BC, being attested by Diodorus Siculus as part of the equipment of a Greek army in 399 BC, and subsequently used at the siege of Motya in 397 BC. Etymology The word 'catapult' comes from the Latin 'catapulta', which in turn comes from the Greek καταπέλτης (katapeltēs), itself from κατά (kata), "downwards" and πάλλω (pallō), "to toss, to hurl". Catapults were invented by the ancient Greeks and in ancient India, where they were used by the Magadhan King Ajatashatru around the early to mid 5th century BC. Greek and Roman catapults The catapult and crossbow in Greece are closely intertwined. Primitive catapults were essentially "the product of relatively straightforward attempts to increase the range and penetrating power of missiles by strengthening the bow which propelled them". The historian Diodorus Siculus (fl. 1st century BC) described the invention of a mechanical arrow-firing catapult (katapeltikon) by a Greek task force in 399 BC. The weapon was soon after employed against Motya (397 BC), a key Carthaginian stronghold in Sicily. Diodorus is assumed to have drawn his description from the highly rated history of Philistus, a contemporary of the events. The introduction of crossbows, however, can be dated further back: according to the inventor Hero of Alexandria (fl. 1st century AD), who referred to the now lost works of the 3rd-century BC engineer Ctesibius, this weapon was inspired by an earlier foot-held crossbow, called the gastraphetes, which could store more energy than the Greek bows. A detailed description of the gastraphetes, or the "belly-bow", along with a watercolor drawing, is found in Heron's technical treatise Belopoeica. A third Greek author, Biton (fl. 2nd century BC), whose reliability has been positively reevaluated by recent scholarship, described two advanced forms of the gastraphetes, which he credits to Zopyros, an engineer from southern Italy. Zopyrus has been plausibly equated with a Pythagorean of that name who seems to have flourished in the late 5th century BC. He probably designed his bow-machines on the occasion of the sieges of Cumae and Miletus between 421 BC and 401 BC. The bows of these machines already featured a winched pull-back system and could apparently throw two missiles at once.
Philo of Byzantium provides probably the most detailed account on the establishment of a theory of belopoietics (belos = "projectile"; poietike = "(art) of making") circa 200 BC. The central principle to this theory was that "all parts of a catapult, including the weight or length of the projectile, were proportional to the size of the torsion springs". This kind of innovation is indicative of the increasing rate at which geometry and physics were being assimilated into military enterprises. From the mid-4th century BC onwards, evidence of the Greek use of arrow-shooting machines becomes more dense and varied: arrow firing machines (katapaltai) are briefly mentioned by Aeneas Tacticus in his treatise on siegecraft written around 350 BC. An extant inscription from the Athenian arsenal, dated between 338 and 326 BC, lists a number of stored catapults with shooting bolts of varying size and springs of sinews. The later entry is particularly noteworthy as it constitutes the first clear evidence for the switch to torsion catapults, which are more powerful than the more-flexible crossbows and which came to dominate Greek and Roman artillery design thereafter. This move to torsion springs was likely spurred by the engineers of Philip II of Macedonia. Another Athenian inventory from 330 to 329 BC includes catapult bolts with heads and flights. As the use of catapults became more commonplace, so did the training required to operate them. Many Greek children were instructed in catapult usage, as evidenced by "a 3rd Century B.C. inscription from the island of Ceos in the Cyclades [regulating] catapult shooting competitions for the young". Arrow firing machines in action are reported from Philip II's siege of Perinth (Thrace) in 340 BC. At the same time, Greek fortifications began to feature high towers with shuttered windows in the top, which could have been used to house anti-personnel arrow shooters, as in Aigosthena. Projectiles included both arrows and (later) stones that were sometimes lit on fire. Onomarchus of Phocis first used catapults on the battlefield against Philip II of Macedon. Philip's son, Alexander the Great, was the next commander in recorded history to make such use of catapults on the battlefield as well as to use them during sieges. The Romans started to use catapults as arms for their wars against Syracuse, Macedon, Sparta and Aetolia (3rd and 2nd centuries BC). The Roman machine known as an arcuballista was similar to a large crossbow. Later the Romans used ballista catapults on their warships. Other ancient catapults In chronological order: 19th century BC, Egypt, walls of the fortress of Buhen appear to contain platforms for siege weapons. c.750 BC, Judah, King Uzziah is documented as having overseen the construction of machines to "shoot great stones". between 484 and 468 BC, India, Ajatashatru is recorded in Jaina texts as having used catapults in his campaign against the Licchavis. between 500 and 300 BC, China, recorded use of mangonels. They were probably used by the Mohists as early as the 4th century BC, descriptions of which can be found in the Mojing (compiled in the 4th century BC). In Chapter 14 of the Mojing, the mangonel is described hurling hollowed out logs filled with burning charcoal at enemy troops. The mangonel was carried westward by the Avars and appeared next in the eastern Mediterranean by the late 6th century AD, where it replaced torsion powered siege engines such as the ballista and onager due to its simpler design and faster rate of fire. 
The Byzantines adopted the mangonel possibly as early as 587, the Persians in the early 7th century, and the Arabs in the second half of the 7th century. The Franks and Saxons adopted the weapon in the 8th century. Medieval catapults Castles and fortified walled cities were common during this period and catapults were used as siege weapons against them. As well as their use in attempts to breach walls, incendiary missiles, or diseased carcasses or garbage could be catapulted over the walls. Defensive techniques in the Middle Ages progressed to a point that rendered catapults largely ineffective. The Viking siege of Paris (AD 885–6) "saw the employment by both sides of virtually every instrument of siege craft known to the classical world, including a variety of catapults", to little effect, resulting in failure. The most widely used catapults throughout the Middle Ages were as follows: Ballista Ballistae were similar to giant crossbows and were designed to work through torsion. The projectiles were large arrows or darts made from wood with an iron tip. These arrows were then shot "along a flat trajectory" at a target. Ballistae were accurate, but lacked firepower compared with that of a mangonel or trebuchet. Because of their immobility, most ballistae were constructed on site following a siege assessment by the commanding military officer. Springald The springald's design resembles that of the ballista, being a crossbow powered by tension. The springald's frame was more compact, allowing for use inside tighter confines, such as the inside of a castle or tower, but compromising its power. Mangonel This machine was designed to throw heavy projectiles from a "bowl-shaped bucket at the end of its arm". Mangonels were mostly used for “firing various missiles at fortresses, castles, and cities,” with a range of up to . These missiles included anything from stones to excrement to rotting carcasses. Mangonels were relatively simple to construct, and eventually wheels were added to increase mobility. Onager Mangonels are also sometimes referred to as Onagers. Onager catapults initially launched projectiles from a sling, which was later changed to a "bowl-shaped bucket". The word Onager is derived from the Greek word onagros for "wild ass", referring to the "kicking motion and force" that were recreated in the Mangonel's design. Historical records regarding onagers are scarce. The most detailed account of Mangonel use is from "Eric Marsden's translation of a text written by Ammianus Marcellius in the 4th Century AD" describing its construction and combat usage. Trebuchet Trebuchets were probably the most powerful catapult employed in the Middle Ages. The most commonly used ammunition were stones, but "darts and sharp wooden poles" could be substituted if necessary. The most effective kind of ammunition though involved fire, such as "firebrands, and deadly Greek Fire". Trebuchets came in two different designs: Traction, which were powered by people, or Counterpoise, where the people were replaced with "a weight on the short end". The most famous historical account of trebuchet use dates back to the siege of Stirling Castle in 1304, when the army of Edward I constructed a giant trebuchet known as Warwolf, which then proceeded to "level a section of [castle] wall, successfully concluding the siege". Couillard A simplified trebuchet, where the trebuchet's single counterweight is split, swinging on either side of a central support post. 
Leonardo da Vinci's catapult Leonardo da Vinci sought to improve the efficiency and range of earlier designs. His design incorporated a large wooden leaf spring as an accumulator to power the catapult. Both ends of the bow are connected by a rope, similar to the design of a bow and arrow. The leaf spring was not used to pull the catapult armature directly, rather the rope was wound around a drum. The catapult armature was attached to this drum which would be turned until enough potential energy was stored in the deformation of the spring. The drum would then be disengaged from the winding mechanism, and the catapult arm would snap around. Though no records exist of this design being built during Leonardo's lifetime, contemporary enthusiasts have reconstructed it. Modern use Military The last large scale military use of catapults was during the trench warfare of World War I. During the early stages of the war, catapults were used to throw hand grenades across no man's land into enemy trenches. They were eventually replaced by small mortars. The SPBG (Silent Projector of Bottles and Grenades) was a Soviet proposal for an anti-tank weapon that launched grenades from a spring-loaded shuttle up to . Special variants called aircraft catapults are used to launch planes from land bases and sea carriers when the takeoff runway is too short for a powered takeoff or simply impractical to extend. Ships also use them to launch torpedoes and deploy bombs against submarines. In 2024, during the Israel-Hamas war, a trebuchet created by private initiative of an IDF reserve unit was used to throw firebrands over the border into Lebanon, in order to set on fire the undergrowth which offered camouflage to Hezbollah fighters. Toys, sports, entertainment In the 1840s, the invention of vulcanized rubber allowed the making of small hand-held catapults, either improvised from Y-shaped sticks or manufactured for sale; both were popular with children and teenagers. These devices were also known as slingshots in the United States. Small catapults, referred to as "traps", are still widely used to launch clay targets into the air in the sport of clay pigeon shooting. In the 1990s and early 2000s, a powerful catapult, a trebuchet, was used by thrill-seekers first on private property and in 2001–2002 at Middlemoor Water Park, Somerset, England, to experience being catapulted through the air for . The practice has been discontinued due to a fatality at the Water Park. There had been an injury when the trebuchet was in use on private property. Injury and death occurred when those two participants failed to land onto the safety net. The operators of the trebuchet were tried, but found not guilty of manslaughter, though the jury noted that the fatality might have been avoided had the operators "imposed stricter safety measures." Human cannonball circus acts use a catapult launch mechanism, rather than gunpowder, and are risky ventures for the human cannonballs. Early launched roller coasters used a catapult system powered by a diesel engine or a dropped weight to acquire their momentum, such as Shuttle Loop installations between 1977 and 1978. The catapult system for roller coasters has been replaced by flywheels and later linear motors. Pumpkin chunking is another widely popularized use, in which people compete to see who can launch a pumpkin the farthest by mechanical means (although the world record is held by a pneumatic air cannon). 
Smuggling In January 2011, a homemade catapult was discovered that was used to smuggle cannabis into the United States from Mexico. The machine was found from the border fence with bales of cannabis ready to launch. See also Aircraft catapult Centrifugal gun List of siege engines Mangonel Mass driver National Catapult Contest Sling (weapon) SPBG Trebuchet Notes References Bibliography . . . . . . . External links . . . . . Projectile weapons Siege engines Obsolete technologies
Catapult
[ "Engineering" ]
3,193
[ "Military engineering", "Siege engines" ]
7,080
https://en.wikipedia.org/wiki/Christian%20Doppler
Christian Andreas Doppler (29 November 1803 – 17 March 1853) was an Austrian mathematician and physicist. He formulated the principle – now known as the Doppler effect – that the observed frequency of a wave depends on the relative speed of the source and the observer. Biography Early life and education Doppler was born in Salzburg (today Austria) in 1803. Doppler was the second son born to Johann Evangelist Doppler and Theresia Seeleuthner (Doppler). Doppler's father, Johann Doppler, was a third-generation stone mason in Salzburg. As a young boy, Doppler showed promise for his family's trade. However, due to his weak health, Doppler's father encouraged him instead to pursue a career in business. Doppler started elementary education at the age of 13. After completion, he moved on to secondary education at a school in Linz. Doppler's proficiency in mathematics was discovered by Simon Stampfer, a mathematician in Salzburg. Upon his recommendation, Doppler took a break from high school to attend the Polytechnic Institute in Vienna in 1822. Doppler returned to Salzburg in 1825 to finish his secondary education. After completing high school, Doppler studied philosophy in Salzburg and mathematics and physics at the University of Vienna and the Imperial–Royal Polytechnic Institute (now TU Wien). In 1829, he was chosen for an assistant position to Professor Adam von Burg at the Polytechnic Institute of Vienna, where he continued his studies. In 1835, he decided to emigrate to the United States to pursue a position in academia. Before departing for the United States, Doppler was offered a teaching position at a state-operated high school in Prague, which convinced him to stay in Europe. Shortly after, in 1837, he was appointed as an associate professor of mathematics and geometry at the Prague Polytechnic Institute (now Czech Technical University in Prague). He received a full professorship in 1841. Family In 1836, Doppler married Mathilde Sturm, the daughter of goldsmith Franz Sturm. Doppler and Mathilde had five children together. Their first child, Mathilde Doppler, was born in 1837. Doppler's second child, Ludwig Doppler, was born in 1838. Two years later, in 1840, Adolf Doppler was born. Doppler's fourth child, Bertha Doppler, was born in 1843. Their last child, Hermann, was born in 1845. Development of the Doppler effect In 1842, at the age of 38, Doppler gave a lecture to the Royal Bohemian Society of Sciences and subsequently published Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels ("On the coloured light of the binary stars and some other stars of the heavens"). In this work, Doppler postulated his principle (later named the Doppler effect) that the observed frequency of a wave depends on the relative speed of the source and the observer, and he later tried to use this concept to explain the visible colours of binary stars (this hypothesis was later proven wrong). Doppler also incorrectly believed that if a star were to exceed 136,000 kilometers per second in radial velocity, then it would not be visible to the human eye. Later life Doppler continued working as a professor at the Prague Polytechnic, publishing over 50 articles on mathematics, physics and astronomy, but in 1847 he left Prague for the professorship of mathematics, physics, and mechanics at the Academy of Mines and Forests (its successor is the University of Miskolc) in Selmecbánya (then Kingdom of Hungary, now Banská Štiavnica, Slovakia).
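The principle stated in the 1842 lecture can be made concrete with the standard classical formula for waves in a medium, f_observed = f_source × (c + v_observer) / (c − v_source), where velocities are counted positive when source and observer approach each other. The short Python sketch below is purely illustrative and is not drawn from Doppler's paper; the numerical values (a 440 Hz source, the speed of sound taken as 343 m/s) are assumed example figures.

# Illustrative sketch of the classical Doppler formula for waves in a medium.
# The example numbers (440 Hz source, c = 343 m/s) are assumptions for demonstration.

def observed_frequency(f_source, c, v_source=0.0, v_observer=0.0):
    """Observed frequency; velocities are positive when moving toward the other party."""
    return f_source * (c + v_observer) / (c - v_source)

if __name__ == "__main__":
    c_sound = 343.0  # m/s, approximate speed of sound in air at room temperature (assumed)
    # A 440 Hz source approaching a stationary observer at 30 m/s:
    print(round(observed_frequency(440.0, c_sound, v_source=30.0), 1))   # about 482.2 Hz
    # The same source receding at 30 m/s:
    print(round(observed_frequency(440.0, c_sound, v_source=-30.0), 1))  # about 404.6 Hz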
Doppler's research was interrupted by the Hungarian Revolution of 1848. In 1849, he fled to Vienna and in 1850 was appointed head of the Institute for Experimental Physics at the University of Vienna. While there, Doppler, along with Franz Unger, influenced the development of young Gregor Mendel, the founding father of genetics, who was a student at the University of Vienna from 1851 to 1853. Death Doppler died on 17 March 1853 at age 49 from a pulmonary disease in Venice (at that time part of the Austrian Empire). His tomb is in the San Michele cemetery on the Venetian island of San Michele. Full name Some confusion exists about Doppler's full name. Doppler referred to himself as Christian Doppler. The records of his birth and baptism stated Christian Andreas Doppler. Doppler's middle name is shared by his great-great-grandfather Andreas Doppler. Forty years after Doppler's death the misnomer Johann Christian Doppler was introduced by the astronomer Julius Scheiner. Scheiner's mistake has since been copied by many. Works Christian Doppler (1803–1853). Wien: Böhlau, 1992. Bd. 1: 1. Teil: Helmuth Grössing (unter Mitarbeit von B. Reischl): Wissenschaft, Leben, Umwelt, Gesellschaft; 2. Teil: Karl Kadletz (unter Mitarbeit von Peter Schuster und Ildikó Cazan-Simányi) Quellenanhang. Bd. 2: 3. Teil: Peter Schuster: Das Werk. See also List of Austrian scientists List of Austrians List of minor planets named after people References Further reading Alec Eden: Christian Doppler: Leben und Werk. Salzburg: Landespressebureau, 1988. Hoffmann, Robert (2007). The Life of an (almost) Unknown Person. Christian Doppler's Youth in Salzburg and Vienna. In: Ewald Hiebl, Maurizio Musso (Eds.), Christian Doppler – Life and Work. Principle an Applications. Proceedings of the Commemorative Symposia in Salzburg, Salzburg, Prague, Vienna, Venice. Pöllauberg/Austria, Hainault/UK, Atascadero/US, pages 33 – 46. David Nolte (2020). The fall and rise of the Doppler effect. Physics Today, v. 73, pgs. 31 – 35. DOI: 10.1063/PT.3.4429 External links Christian Doppler Platform & Christian-Doppler-Fonds 1803 births 1853 deaths 19th-century Austrian physicists Austrian Roman Catholics Scientists from Salzburg Burials at Isola di San Michele Czech Technical University in Prague alumni Academic staff of Czech Technical University in Prague Doppler effects Mathematicians from the Austrian Empire People from the Duchy of Salzburg
Christian Doppler
[ "Physics" ]
1,370
[ "Doppler effects", "Physical phenomena", "Astrophysics" ]
7,088
https://en.wikipedia.org/wiki/List%20of%20cryptographers
This is a list of cryptographers. Cryptography is the practice and study of techniques for secure communication in the presence of third parties called adversaries. Pre-twentieth century Al-Khalil ibn Ahmad al-Farahidi: wrote a (now lost) book on cryptography titled the "Book of Cryptographic Messages". Al-Kindi, 9th-century Arabic polymath and originator of frequency analysis (illustrated in the sketch at the end of this list). Athanasius Kircher, attempted to decipher encrypted messages Augustus the Younger, Duke of Brunswick-Lüneburg, wrote a standard book on cryptography Ibn Wahshiyya: published several cipher alphabets that were used to encrypt magic formulas. John Dee, wrote an occult book that was in fact a cover for encrypted text Ibn 'Adlan: 13th-century cryptographer who made important contributions on the sample size required for frequency analysis. Francesco I Gonzaga, Duke of Mantua, used the earliest known example of a homophonic substitution cipher, in the early 1400s. Ibn al-Durayhim: gave detailed descriptions of eight cipher systems that discussed substitution ciphers, leading to the earliest suggestion of a "tableau" of the kind that two centuries later became known as the "Vigenère table". Ahmad al-Qalqashandi: Author of Subh al-a 'sha, a fourteen-volume encyclopedia in Arabic, which included a section on cryptology. The list of ciphers in this work included both substitution and transposition, and for the first time, a cipher with multiple substitutions for each plaintext letter. Charles Babbage, UK, 19th-century mathematician who, about the time of the Crimean War, secretly developed an effective attack against polyalphabetic substitution ciphers. Leone Battista Alberti, polymath/universal genius, inventor of polyalphabetic substitution (more specifically, the Alberti cipher), and what may have been the first mechanical encryption aid. Giovanni Battista della Porta, author of a seminal work on cryptanalysis. Étienne Bazeries, French, military, considered one of the greatest natural cryptanalysts. Best known for developing the "Bazeries Cylinder" and his influential 1901 text Les Chiffres secrets dévoilés ("Secret ciphers unveiled"). Giovan Battista Bellaso, Italian cryptologist Giovanni Fontana (engineer), wrote two encrypted books Hildegard of Bingen used her own alphabet to write letters. Julius Caesar, Roman general/politician, has the Caesar cipher named after him, and a lost work on cryptography by Probus (probably Valerius Probus) is claimed to have covered his use of military cryptography in some detail. It is likely that he did not invent the cipher named after him, as other substitution ciphers were in use well before his time. Friedrich Kasiski, author of the first published attack on the Vigenère cipher, now known as the Kasiski test. Auguste Kerckhoffs, known for contributing cipher design principles. Edgar Allan Poe, author of A Few Words on Secret Writing, an essay on cryptanalysis, and The Gold Bug, a short story featuring the use of letter frequencies in the solution of a cryptogram. Johannes Trithemius, mystic and first to describe tableaux (tables) for use in polyalphabetic substitution. Wrote an early work on steganography and cryptography generally. Philips van Marnix, lord of Sint-Aldegonde, deciphered Spanish messages for William the Silent during the Dutch revolt against the Spanish. John Wallis, codebreaker for Cromwell and Charles II Sir Charles Wheatstone, inventor of the so-called Playfair cipher and general polymath. World War I and World War II wartime cryptographers Richard J.
Hayes (1902–1976) Irish code breaker in World War II. Jean Argles (1925–2023), British code breaker in World War II Arne Beurling (1905–1986), Swedish mathematician and cryptographer. Lambros D. Callimahos, US, NSA, worked with William F. Friedman, taught NSA cryptanalysts. Ann Z. Caracristi, US, SIS, solved Japanese Army codes in World War II, later became deputy director of National Security Agency. Alec Naylor Dakin, UK, Hut 4, Bletchley Park during World War II. Ludomir Danilewicz, Poland, Biuro Szyfrow, helped to construct the Enigma machine copies to break the ciphers. Patricia Davies (born 1923), British code breaker in World War II Alastair Denniston, UK, director of GC&CS at Bletchley Park from 1919 to 1942. Agnes Meyer Driscoll, US, broke several Japanese ciphers. Genevieve Grotjan Feinstein, US, SIS, noticed the pattern that led to breaking Purple. Elizebeth Smith Friedman, US, Coast Guard and US Treasury Department cryptographer, co-invented modern cryptography. William F. Friedman, US, SIS, introduced statistical methods into cryptography. Cecilia Elspeth Giles, UK, Bletchley Park Jack Good UK, GC&CS, Bletchley Park worked with Alan Turing on the statistical approach to cryptanalysis. Nigel de Grey, UK, Room 40, played an important role in the decryption of the Zimmermann Telegram during World War I. Dillwyn Knox, UK, Room 40 and GC&CS, broke commercial Enigma cipher as used by the Abwehr (German military intelligence). Solomon Kullback US, SIS, helped break the Japanese Red cipher, later Chief Scientist at the National Security Agency. Frank W. Lewis US, worked with William F. Friedman, puzzle master William Hamilton Martin and Bernon F. Mitchell, U.S. National Security Agency cryptologists who defected to the Soviet Union in 1960 Leo Marks UK, SOE cryptography director, author and playwright. Donald Michie UK, GC&CS, Bletchley Park worked on Cryptanalysis of the Lorenz cipher and the Colossus computer. Consuelo Milner, US, cryptographer for the Naval Applied Science Lab Max Newman, UK, GC&CS, Bletchley Park headed the section that developed the Colossus computer for Cryptanalysis of the Lorenz cipher. Georges Painvin French, broke the ADFGVX cipher during the First World War. Marian Rejewski, Poland, Biuro Szyfrów, a Polish mathematician and cryptologist who, in 1932, solved the Enigma machine with plugboard, the main cipher device then in use by Germany. He was the first in history to break the cipher. John Joseph Rochefort US, made major contributions to the break into JN-25 after the attack on Pearl Harbor. Leo Rosen US, SIS, deduced that the Japanese Purple machine was built with stepping switches. Frank Rowlett US, SIS, leader of the team that broke Purple. Jerzy Różycki, Poland, Biuro Szyfrów, helped break German Enigma ciphers. Luigi Sacco, Italy, Italian General and author of the Manual of Cryptography. Laurance Safford US, chief cryptographer for the US Navy for over two decades, including World War II. Abraham Sinkov US, SIS. John Tiltman UK, Brigadier, Room 40, GC&CS, Bletchley Park, GCHQ, NSA. Extraordinary length and range of cryptographic service Alan Mathison Turing UK, GC&CS, Bletchley Park where he was chief cryptographer, inventor of the Bombe that was used in decrypting Enigma, mathematician, logician, and renowned pioneer of Computer Science. William Thomas Tutte UK, GC&CS, Bletchley Park, with John Tiltman, broke Lorenz SZ 40/42 encryption machine (codenamed Tunny) leading to the development of the Colossus computer.
Betty Webb (code breaker), British codebreaker during World War II William Stone Weedon, US, Gordon Welchman UK, GC&CS, Bletchley Park where he was head of Hut Six (German Army and Air Force Enigma cipher decryption), made an important contribution to the design of the Bombe. Herbert Yardley US, MI8 (US), author of "The American Black Chamber", worked in China as a cryptographer and briefly in Canada. Henryk Zygalski, Poland, Biuro Szyfrów, inventor of Zygalski sheets, broke German Enigma ciphers pre-1939. Karl Stein German, Head of the Division IVa (security of own processes) at Cipher Department of the High Command of the Wehrmacht. Discoverer of Stein manifold. Gisbert Hasenjaeger German, Tester of the Enigma. Discovered new proof of the completeness theorem of Kurt Gödel for predicate logic. Heinrich Scholz German, Worked in Division IVa at OKW. Logician and pen friend of Alan Turing. Gottfried Köthe German, Cryptanalyst at OKW. Mathematician who created the theory of topological vector spaces. Ernst Witt German, Mathematician at OKW. Several mathematical discoveries are named after him. Helmut Grunsky German, worked in complex analysis and geometric function theory. He introduced Grunsky's theorem and the Grunsky inequalities. Georg Hamel. Oswald Teichmüller German, Temporarily employed at OKW as cryptanalyst. Introduced quasiconformal mappings and differential geometric methods into complex analysis. Described by Friedrich L. Bauer as an extreme Nazi and a true genius. Hans Rohrbach German, Mathematician at AA/Pers Z, the German department of state, civilian diplomatic cryptological agency. Wolfgang Franz German, Mathematician who worked at OKW. Later made significant discoveries in topology. Werner Weber German, Mathematician at OKW. Georg Aumann German, Mathematician at OKW. His doctoral student was Friedrich L. Bauer. Otto Leiberich German, Mathematician who worked as a linguist at the Cipher Department of the High Command of the Wehrmacht. Alexander Aigner German, Mathematician who worked at OKW. Erich Hüttenhain German, Chief cryptanalyst who led Chi IV (section 4) of the Cipher Department of the High Command of the Wehrmacht. A German mathematician and cryptanalyst who tested a number of German cipher machines and found them to be breakable. Wilhelm Fenner German, Chief Cryptologist and Director of Cipher Department of the High Command of the Wehrmacht. Walther Fricke German, Worked alongside Dr Erich Hüttenhain at Cipher Department of the High Command of the Wehrmacht. Mathematician, logician, cryptanalyst and linguist. Fritz Menzer German. Inventor of SG39 and SG41. Other pre-computer Rosario Candela, US, Architect and notable amateur cryptologist who authored books and taught classes on the subject to civilians at Hunter College. Claude Elwood Shannon, US, founder of information theory, proved the one-time pad to be unbreakable. Modern See also: Category:Modern cryptographers for a more exhaustive list. Symmetric-key algorithm inventors Ross Anderson, UK, University of Cambridge, co-inventor of the Serpent cipher. Paulo S. L. M. Barreto, Brazilian, University of São Paulo, co-inventor of the Whirlpool hash function. George Blakley, US, independent inventor of secret sharing. Eli Biham, Israel, co-inventor of the Serpent cipher. Don Coppersmith, co-inventor of DES and MARS ciphers. Joan Daemen, Belgian, Radboud University, co-developer of Rijndael which became the Advanced Encryption Standard (AES), and Keccak which became SHA-3.
Horst Feistel, German, IBM, namesake of Feistel networks and Lucifer cipher. Lars Knudsen, Denmark, co-inventor of the Serpent cipher. Ralph Merkle, US, inventor of Merkle trees. Bart Preneel, Belgian, KU Leuven, co-inventor of RIPEMD-160. Vincent Rijmen, Belgian, KU Leuven, co-developer of Rijndael which became the Advanced Encryption Standard (AES). Ronald L. Rivest, US, MIT, inventor of RC cipher series and MD algorithm series. Bruce Schneier, US, inventor of Blowfish and co-inventor of Twofish and Threefish. Xuejia Lai, CH, co-inventor of International Data Encryption Algorithm (IDEA). Adi Shamir, Israel, Weizmann Institute, inventor of secret sharing. Walter Tuchman. US. led the Data Encryption Standard development team at IBM and inventor of Triple DES Asymmetric-key algorithm inventors Leonard Adleman, US, USC, the 'A' in RSA. David Chaum, US, inventor of blind signatures. Clifford Cocks, UK GCHQ first inventor of RSA, a fact that remained secret until 1997 and so was unknown to Rivest, Shamir, and Adleman. Whitfield Diffie, US, (public) co-inventor of the Diffie-Hellman key-exchange protocol. Taher Elgamal, US (born Egyptian), inventor of the Elgamal discrete log cryptosystem. Shafi Goldwasser, US and Israel, MIT and Weizmann Institute, co-discoverer of zero-knowledge proofs, and of Semantic security. Martin Hellman, US, (public) co-inventor of the Diffie-Hellman key-exchange protocol. Neal Koblitz, independent co-creator of elliptic curve cryptography. Alfred Menezes, co-inventor of MQV, an elliptic curve technique. Silvio Micali, US (born Italian), MIT, co-discoverer of zero-knowledge proofs, and of Semantic security. Victor Miller, independent co-creator of elliptic curve cryptography. David Naccache, inventor of the Naccache–Stern cryptosystem and of the Naccache–Stern knapsack cryptosystem. Moni Naor, co-inventor the Naor–Yung encryption paradigm for CCA security. Rafail Ostrovsky, co-inventor of Oblivious RAM, of single-server Private Information Retrieval, and proactive cryptosystems. Pascal Paillier, inventor of Paillier encryption. Michael O. Rabin, Israel, inventor of Rabin encryption. Ronald L. Rivest, US, MIT, the 'R' in RSA. Adi Shamir, Israel, Weizmann Institute, the 'S' in RSA. Victor Shoup, US, NYU Courant, co-inventor of the Cramer-Shoup cryptosystem. Moti Yung, co-inventor of the Naor–Yung encryption paradigm for CCA security, of threshold cryptosystems, and proactive cryptosystems. Cryptanalysts Joan Clarke, English cryptanalyst and numismatist best known for her work as a code-breaker at Bletchley Park during the Second World War. Ross Anderson, UK. Eli Biham, Israel, co-discoverer of differential cryptanalysis and Related-key attack. Matt Blaze, US. Dan Boneh, US, Stanford University. Niels Ferguson, Netherlands, co-inventor of Twofish and Fortuna. Ian Goldberg, Canada, University of Waterloo. Lars Knudsen, Denmark, DTU, discovered integral cryptanalysis. Paul Kocher, US, discovered differential power analysis. Mitsuru Matsui, Japan, discoverer of linear cryptanalysis. Kenny Paterson, UK, previously Royal Holloway, now ETH Zurich, known for several attacks on cryptosystems. David Wagner, US, UC Berkeley, co-discoverer of the slide and boomerang attacks. Xiaoyun Wang, the People's Republic of China, known for MD5 and SHA-1 hash function attacks. Alex Biryukov, University of Luxembourg, known for impossible differential cryptanalysis and slide attack. Moti Yung, Kleptography. Algorithmic number theorists Daniel J. 
Bernstein, US, developed several popular algorithms, fought US government restrictions in Bernstein v. United States. Don Coppersmith, US Dorian M. Goldfeld, US, Along with Michael Anshel and Iris Anshel invented the Anshel–Anshel–Goldfeld key exchange and the Algebraic Eraser. They also helped found Braid Group Cryptography. Victor Shoup, US, NYU Courant. Theoreticians Mihir Bellare, US, UCSD, co-proposer of the Random oracle model. Dan Boneh, US, Stanford. Gilles Brassard, Canada, Université de Montréal. Co-inventor of quantum cryptography. Claude Crépeau, Canada, McGill University. Oded Goldreich, Israel, Weizmann Institute, author of Foundations of Cryptography. Shafi Goldwasser, US and Israel. Silvio Micali, US, MIT. Rafail Ostrovsky, US, UCLA. Charles Rackoff, co-discoverer of zero-knowledge proofs. Oded Regev, inventor of learning with errors. Phillip Rogaway, US, UC Davis, co-proposer of the Random oracle model. Amit Sahai, US, UCLA. Victor Shoup, US, NYU Courant. Gustavus Simmons, US, Sandia, authentication theory. Moti Yung, US, Google. Government cryptographers Clifford Cocks, UK, GCHQ, secret inventor of the algorithm later known as RSA. James H. Ellis, UK, GCHQ, secretly proved the possibility of asymmetric encryption. Lowell Frazer, US, National Security Agency Laura Holmes, US, National Security Agency Julia Wetzel, US, National Security Agency Malcolm Williamson, UK, GCHQ, secret inventor of the protocol later known as the Diffie–Hellman key exchange. Cryptographer businesspeople Bruce Schneier, US, CTO and founder of Counterpane Internet Security, Inc. and cryptography author. Scott Vanstone, Canada, founder of Certicom and elliptic curve cryptography proponent. See also Cryptography References External links List of cryptographers' home pages Lists of people by occupation Cryptography lists and comparisons
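Several of the techniques credited above (Al-Kindi's frequency analysis, Poe's use of letter frequencies, and the attacks of Babbage and Kasiski on polyalphabetic ciphers) rest on counting how often each letter occurs in a ciphertext. The Python sketch below is only a minimal illustration of that counting step, not a reconstruction of any historical method; the sample ciphertext is an arbitrary Caesar-shifted English phrase chosen for demonstration.

# Minimal illustration of letter-frequency counting, the core of classical
# frequency analysis. The ciphertext is an arbitrary example (a Caesar shift
# of an English phrase), not a historical cryptogram.
from collections import Counter

def caesar_shift(text, shift):
    """Shift alphabetic characters by `shift` places (a simple substitution cipher)."""
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

def letter_frequencies(text):
    """Count occurrences of the letters A-Z, most common first."""
    return Counter(ch for ch in text.upper() if ch.isalpha()).most_common()

if __name__ == "__main__":
    ciphertext = caesar_shift("the quick brown fox jumps over the lazy dog", 3)
    print(ciphertext)
    print(letter_frequencies(ciphertext)[:5])
    # On longer English texts, the most frequent ciphertext letter usually maps to
    # plaintext 'E'; the unreliability of short samples is why sample size mattered
    # to writers such as Ibn 'Adlan.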
List of cryptographers
[ "Technology" ]
3,795
[ "Computing-related lists", "Cryptography lists and comparisons" ]
7,104
https://en.wikipedia.org/wiki/Cotton%20Mather
Cotton Mather (; February 12, 1663 – February 13, 1728) was a Puritan clergyman and author in colonial New England, who wrote extensively on theological, historical, and scientific subjects. After being educated at Harvard College, he joined his father Increase as minister of the Congregationalist Old North Meeting House in Boston, Massachusetts, where he preached for the rest of his life. He has been referred to as the "first American Evangelical". A major intellectual and public figure in English-speaking colonial America, Cotton Mather helped lead the successful revolt of 1689 against Sir Edmund Andros, the governor of New England appointed by King James II. Mather's subsequent involvement in the Salem witch trials of 1692–1693, which he defended in the book Wonders of the Invisible World (1693), attracted intense controversy in his own day and has negatively affected his historical reputation. As a historian of colonial New England, Mather is noted for his Magnalia Christi Americana (1702). Personally and intellectually committed to the waning social and religious orders in New England, Cotton Mather unsuccessfully sought the presidency of Harvard College. After 1702, Cotton Mather clashed with Joseph Dudley, the governor of the Province of Massachusetts Bay, whom Mather attempted unsuccessfully to drive out of power. Mather championed the new Yale College as an intellectual bulwark of Puritanism in New England. He corresponded extensively with European intellectuals and received an honorary Doctor of Divinity degree from the University of Glasgow in 1710. A promoter of the new experimental science in America, Cotton Mather carried out original research on plant hybridization. He also researched the variolation method of inoculation as a means of preventing smallpox contagion, which he learned about from an African-American slave who he owned, Onesimus. He dispatched many reports on scientific matters to the Royal Society of London, which elected him as a fellow in 1713. Mather's promotion of inoculation against smallpox caused violent controversy in Boston during the outbreak of 1721. Scientist and United States Founding Father Benjamin Franklin, who as a young Bostonian had opposed the old Puritan order represented by Mather and participated in the anti-inoculation campaign, later described Mather's book Bonifacius, or Essays to Do Good (1710) as a major influence on his life. Early life and education Cotton Mather was born in 1663 in the city of Boston, the capital of the Massachusetts Bay Colony, to the Rev. Increase Mather and his wife Maria née Cotton. His grandfathers were Richard Mather and John Cotton, both of them prominent Puritan ministers who had played major roles in the establishment and growth of the Massachusetts colony. Richard Mather was a graduate of the University of Oxford and John Cotton a graduate of the University of Cambridge. Increase Mather was a graduate of Harvard College and the Trinity College Dublin, and served as the minister of Boston's original North Church (not to be confused with the Anglican Old North Church of Paul Revere fame). This was one of the two principal Congregationalist churches in the city, the other being the First Church established by John Winthrop. Cotton Mather was therefore born into one of the most influential and intellectually distinguished families in colonial New England and seemed destined to follow his father and grandfathers into the Puritan clergy. 
Cotton entered Harvard College, in the neighboring town of Cambridge, in 1674. Aged only eleven and a half, he is the youngest student ever admitted to that institution. At around this time, Cotton began to be afflicted by stuttering, a speech disorder that he would struggle to overcome throughout the rest of his life. Bullied by the older students and fearing that his stutter would make him unsuitable as a preacher, Cotton withdrew temporarily from the college, continuing his education at home. He also took an interest in medicine and considered the possibility of pursuing a career as a physician rather than as a religious minister. Cotton eventually returned to Harvard and received his Bachelor of Arts degree in 1678, followed by a Master of Arts degree in 1681, the same year his father became Harvard President. At Harvard, Cotton studied Hebrew and the sciences. After completing his education, Cotton joined his father's church as assistant pastor. In 1685, Cotton was ordained and assumed full responsibilities as co-pastor of the church. Father and son continued to share responsibility for the care of the congregation until the death of Increase in 1723. Cotton would die less than five years after his father, and was therefore throughout most of his career in the shadow of the respected and formidable Increase. When Increase Mather became president of Harvard in 1692, he exercised considerable influence on the politics of the Massachusetts colony. Despite Cotton's efforts, he never became quite as influential as his father. One of the most public displays of their strained relationship emerged during the Salem witch trials, which Increase Mather reportedly did not support. Cotton did surpass his father's output as a writer, producing nearly 400 works. Personal life Cotton Mather married Abigail Phillips, daughter of Colonel John Phillips of Charlestown, on May 4, 1686, when Cotton was twenty-three and Abigail was not quite sixteen years old. They had eight children. Abigail died of smallpox in 1702, having previously suffered a miscarriage. He married widow Elizabeth Hubbard in 1703. Like his first marriage, he was happily married to a very religious and emotionally stable woman. They had six children. Elizabeth, the couple's newborn twins, and a two-year-old daughter, Jerusha, all succumbed to a measles epidemic in 1713. On July 5, 1715, Mather married widow Lydia Lee George. Her daughter Katherine, wife of Nathan Howell, became a widow shortly after Lydia married Mather and she came to live with the newly married couple. Also living in the Mather household at that time were Mather's children Abigal (21), Hannah (18), Elizabeth (11), and Samuel (9). Initially, Mather wrote in his journal how lovely he found his wife and how much he enjoyed their discussions about scripture. Within a few years of their marriage, Lydia was subject to rages which left Mather humiliated and depressed. They clashed over Mather's piety and his mishandling of Nathan Howell's estate. He began to call her deranged. She left him for ten days, returning when she learned that Mather's son Increase was lost at sea. Lydia nursed him through illnesses, the last of which lasted five weeks and ended with his death on February 15, 1728. Of the children that Mather had with Abigail and Elizabeth, the only children to survive him were Hannah and Samuel. He did not have any children with Lydia. 
Revolt of 1689 On May 14, 1686, ten days after Cotton Mather's marriage to Abigail Phillips, Edward Randolph disembarked in Boston bearing letters patent from King James II of England that revoked the Charter of the Massachusetts Bay Company and commissioned Randolph to reorganize the colonial government. James's intention was to curb Massachusetts's religious separatism by incorporating the colony into a larger Dominion of New England, without an elected legislature and under a governor who would serve at the pleasure of the Crown. Later that year, the King appointed Sir Edmund Andros as governor of that new Dominion. This was a direct attack upon the Puritan religious and social orders that the Mathers represented, as well as upon the local autonomy of Massachusetts. The colonists were particularly outraged when Andros declared that all grants of land made in the name of the old Massachusetts Bay Company were invalid, forcing them to apply and pay for new royal patents on land that they already occupied or face eviction. In April 1687, Increase Mather sailed to London, where he remained for the next four years, pleading with the Court for what he regarded as the interests of the Massachusetts colony. The birth of a male heir to King James in June 1688, which could have cemented a Roman Catholic dynasty on the English throne, triggered the so-called Glorious Revolution in which Parliament deposed James and gave the Crown jointly to his Protestant daughter Mary and her husband, the Dutch Prince William of Orange. News of the events in London greatly emboldened the opposition in Boston to Governor Andros, finally precipitating the 1689 Boston revolt. Cotton Mather, then aged twenty-six, was one of the Puritan ministers who guided resistance in Boston to Andros's regime. Early in 1689, Randolph had a warrant issued for Cotton Mather's arrest on a charge of "scandalous libel", but the warrant was overruled by Wait Winthrop. According to some sources, Cotton Mather escaped a second attempted arrest on April 18, 1689, the same day that the people of Boston took up arms against Andros. The young Mather may have authored, in whole or in part, the "Declaration of the Gentlemen, Merchants, and Inhabitants of Boston and the Country Adjacent", which justified that uprising by a list of grievances that the declaration attributed to the deposed officials. The authorship of that document is uncertain: it was not signed by Mather or any other clergyman, and Puritans frowned upon the clergy being seen to play too direct and personal a hand in political affairs. That day, Mather probably read the Declaration to a crowd gathered in front of the Boston Town House. In July, Andros, Randolph, Joseph Dudley, and other officials who had been deposed and arrested in the Boston revolt were summoned to London to answer the complaints against them. The administration of Massachusetts was temporarily assumed by Simon Bradstreet, whose rule proved weak and contentious. In 1691, the government of King William and Queen Mary issued a new Massachusetts Charter. This charter united the Massachusetts Bay Colony with Plymouth Colony into the new Province of Massachusetts Bay. Rather than restoring the old Puritan rule, the Charter of 1691 mandated religious toleration for all non-Catholics and established a government led by a Crown-appointed governor. The first governor under the new charter was Sir William Phips, who was a member of the Mathers' church in Boston.
Involvement with the Salem witch trials Cotton Mather's reputation, in his own day as well as in the historiography and popular culture of subsequent generations, has been very adversely affected by his association with the events surrounding the Salem witch trials of 1692–1693. As a consequence of those trials, nineteen people were executed by hanging for practicing witchcraft and one was pressed to death for refusing to enter a plea before the court. Although Mather had no official role in the legal proceedings, he wrote the book Wonders of the Invisible World, which appeared in 1693 with the endorsement of William Stoughton, the Lieutenant Governor of Massachusetts and chief judge of the Salem witch trials. Mather's book constitutes the most detailed written defense of the conduct of those trials. Mather's role in drumming up and sustaining the witch hysteria behind those proceedings was denounced by Robert Calef in his book More Wonders of the Invisible World, published in 1700. In the 19th century, Nathaniel Hawthorne called Mather "the chief agent of the mischief" at Salem. More recently historians have tended to downplay Mather's role in the events at Salem. According to Jan Stievermann, of the Heidelberg Center for American Studies, Prelude: The Goodwin case In 1689, Mather published Memorable Providences, Relating to Witchcrafts and Possessions, based on his study of events surrounding the affliction of the children of a Boston mason named John Goodwin. Those afflictions had begun after Goodwin's eldest daughter confronted a washerwoman whom she suspected of stealing some of the family's linen. In response to this, the washerwoman's mother, Ann Glover, verbally insulted the Goodwin girl, who soon began to suffer from hysterical fits that later began to afflict also the three other Goodwin children. Glover was an Irish Catholic widow who could understand English but spoke only Gaelic. Interrogated by the magistrates, she admitted that she tormented her enemies by stroking certain images or dolls with her finger wetted with spittle. After she was sentenced to death for witchcraft, Mather visited her in prison and interrogated her through an interpreter. Before her execution, Glover warned that her death would not bring relief to the Goodwin children, as she was not the one responsible for their torments. Indeed, after Glover was hanged the children's afflictions increased. Mather documented these events and attempted to de-possess the "Haunted Children" by prayer and fasting. He also took in the eldest Goodwin child, Martha, into his own home, where she lived for several weeks. Eventually, the afflictions ceased and Martha was admitted into Mather's church. The publication of Mather's Memorable Providences attracted attention on both sides of the Atlantic, including from the eminent English Puritan Richard Baxter. In his book, Mather argued that since there are witches and devils, there are "immortal souls". He also claimed that witches appear spectrally as themselves. He opposed any natural explanations for the fits, believed that people who confessed to using witchcraft were sane, and warned against all magical practices due to their diabolical connections. 
Mather's contemporary Robert Calef would later accuse Mather of laying the groundwork, with his Memorable Providences, for the witchcraft hysteria that gripped Salem three years later: Similar views, on Mather's responsibility for the climate of hysteria over witchcraft that led to the Salem trials, were repeated by later commentators, such as the politician and historian Charles W. Upham in the 19th century. Preparation for the Salem trials When the accusations of witchcraft arose in Salem Village in 1692, Cotton Mather was incapacitated by a serious illness, which he attributed to overwork. He suggested that the afflicted girls be separated and offered to take six of them into his home, as he had done previously with Martha Goodwin. That offer was not accepted. In May of that year, Sir William Phips, governor of the newly chartered Province of Massachusetts Bay, appointed a special "Court of Oyer and Terminer" to try the cases of witchcraft in Salem. The chief judge of that court was Phips's lieutenant governor, William Stoughton. Stoughton had close ties to the Mathers and had been recommended as Governor Phips's lieutenant by Increase Mather. Another of the judges in the new court, John Richards, requested that Cotton Mather accompany him to Salem, but Mather refused due to his ill health. Instead, Mather wrote a long letter to Richards in which he gave his advice on the impending trials. In that letter, Mather states that witches guilty of the most grievous crimes should be executed, but that witches convicted of lesser offenses deserve more lenient punishment. He also wrote that the identification and conviction of all witches should be undertaken with the greatest caution and warned against the use of spectral evidence (i.e., testimony that the specter of the accused had tormented a victim) on the grounds that devils could assume the form of innocent and even virtuous people. Under English law, spectral evidence had been admissible in witchcraft trials for a century before the events in Salem, and it would remain admissible until 1712. There was, however, debate among experts as to how much weight should be given to such testimonies. Response to the trials On June 10, 1692, Bridget Bishop, the thrice-married owner of an unlicensed tavern, was hanged after being convicted and sentenced by the Court of Oyer and Terminer, based largely on spectral evidence. A group of twelve Puritan ministers issued a statement, drawn up by Cotton Mather and presented to Governor Phips and his council a few days later, entitled The Return of Several Ministers. In that document, Mather criticized the court's reliance on spectral evidence and recommended that it adopt a more cautious procedure. However, he ended the document with a statement defending the continued prosecution of witchcraft according to the "Direction given by the Laws of God, and the wholesome Statues of the English Nation". Robert Calef would later criticize Mather's intervention in The Return of Several Ministers as "perfectly ambidexter, giving a great or greater encouragement to proceed in those dark methods, than cautions against them." On August 4, Cotton Mather preached a sermon before his North Church congregation on the text of Revelation 12:12: "Woe to the Inhabitants of the Earth, and of the Sea; for the Devil is come down unto you, having great Wrath; because he knoweth, that he hath but a short time." 
In the sermon, Mather claimed that the witches "have associated themselves to do no less a thing than to destroy the Kingdom of our Lord Jesus Christ, in these parts of the World." Although he did not intervene in any of the trials, there are some testimonies that Mather was present at the executions that were carried out in Salem on August 19. According to his Mather's contemporary critic Robert Calef, the crowd was disturbed by George Burroughs's eloquent declarations of innocence from the scaffold and by his recitation of the Lord's Prayer, of which witches were commonly believed to be incapable. Calef claimed that, after Burroughs had been hanged, As public discontent with the witch trials grew in the summer of 1692, threatening civil unrest, the conservative Cotton Mather felt compelled to defend the responsible authorities. On September 2, 1692, after eleven people had been executed as witches, Cotton Mather wrote a letter to Judge Stoughton congratulating him on "extinguishing of as wonderful a piece of devilism as has been seen in the world". As the opposition to the witch trials was bringing them to a halt, Mather wrote Wonders of the Invisible World, a defense of the trials that carried Stoughton's official approval. Post-trials Mather's Wonders did little to appease the growing clamor against the Salem witch trials. At around the same time that the book began to circulate in manuscript form, Governor Phips decided to restrict greatly the use of spectral evidence, thus raising a high barrier against further convictions. The Court of Oyer and Terminer was dismissed on October 29. A new court convened in January 1693 to hear the remaining cases, almost all of which ended in acquittal. In May, Governor Phips issued a general pardon, bringing the witch trials to an end. The last major events in Mather's involvement with witchcraft were his interactions with Mercy Short in December 1692 and Margaret Rule in September 1693. Mather appears to have remained convinced that genuine witches had been executed in Salem and he never publicly expressed regrets over his role in those events. Robert Calef, an otherwise obscure Boston merchant, published More Wonders of the Invisible World in 1700, bitterly attacking Cotton Mather over his role in the events of 1692. In the words of 20th-century historian Samuel Eliot Morison, "Robert Calef tied a tin can to Cotton Mather which has rattled and banged through the pages of superficial and popular historians". Intellectual historian Reiner Smolinski, an expert on the writings of Cotton Mather, found it "deplorable that Mather's reputation is still overshadowed by the specter of Salem witchcraft." Historical and theological writings Cotton Mather was an extremely prolific writer, producing 388 different books and pamphlets during his lifetime. His most widely distributed work was Magnalia Christi Americana (which may be translated as "The Glorious Works of Christ in America"), subtitled "The ecclesiastical history of New England, from its first planting in the year 1620 unto the year of Our Lord 1698. In seven books." Despite the Latin title, the work is written in English. Mather began working on it towards the end of 1693 and it was finally published in London in 1702. The work incorporates information that Mather put together from a variety of sources, such as letters, diaries, sermons, Harvard College records, personal conversations, and the manuscript histories composed by William Hubbard and William Bradford. 
The Magnalia includes about fifty biographies of eminent New Englanders (ranging from John Eliot, the first Puritan missionary to the Native Americans, to Sir William Phips, the incumbent governor of Massachusetts at the time that Mather began writing), plus dozens of brief biographical sketches, including those of Hannah Duston and Hannah Swarton. According to Kenneth Silverman, an expert on early American literature and Cotton Mather's biographer, Silverman argues that, although Mather glorifies New England's Puritan past, in the Magnalia he also attempts to transcend the religious separatism of the old Puritan settlers, reflecting Mather's more ecumenical and cosmopolitan embrace of a Transatlantic Protestant Christianity that included, in addition to Mather's own Congregationalists, also Presbyterians, Baptists, and low church Anglicans. In 1693 Mather also began work on a grand intellectual project that he titled Biblia Americana, which sought to provide a commentary and interpretation of the Christian Bible in light of "all of the Learning in the World". Mather, who continued to work on it for many years, sought to incorporate into his reading of Scripture the new scientific knowledge and theories, including geography, heliocentrism, atomism, and Newtonianism. According to Silverman, the project "looks forward to Mather's becoming probably the most influential spokesman in New England for a rationalized, scientized Christianity." Mather could not find a publisher for the Biblia Americana, which remained in manuscript form during his lifetime. It is currently being edited in ten volumes, published by Mohr Siebeck under the direction of Reiner Smolinski and Jan Stievermann. As of 2023, seven of the ten volumes have appeared in print. Conflict with Governor Dudley In Massachusetts at the start of the 18th century, Joseph Dudley was a highly controversial figure, as he had participated actively in the government of Sir Edmund Andros in 1686–1689. Dudley was among those arrested in the revolt of 1689, and was later called to London to answer the charges against him brought by a committee of the colonists. However, Dudley was able to pursue a successful political career in Britain. Upon the death in 1701 of acting governor William Stoughton, Dudley began enlisting support in London to procure appointment as the new governor of Massachusetts. Although the Mathers (to whom he was related by marriage), continued to resent Dudley's role in the Andros administration, they eventually came around to the view that Dudley would now be preferable as governor to the available alternatives, at a time when the English Parliament was threatening to repeal the Massachusetts Charter. With the Mathers' support, Dudley was appointed governor by the Crown and returned to Boston in 1702. Contrary to the promises that he had made to the Mathers, Governor Dudley proved a divisive and high-handed executive, reserving his patronage for a small circle composed of transatlantic merchants, Anglicans, and religious liberals such as Thomas Brattle, Benjamin Colman, and John Leverett. In the context of Queen Anne's War (1702–1713), Cotton Mather preached and published against Governor Dudley, whom Mather accused of corruption and misgovernment. Mather sought unsuccessfully to have Dudley replaced by Sir Charles Hobby. Outmaneuvered by Dudley, this political rivalry left Mather increasingly isolated at a time when Massachusetts society was steadily moving away from the Puritan tradition that Mather represented. 
Relationship with Harvard and Yale Cotton Mather was a fellow of Harvard College from 1690 to 1702, and at various times sat on its Board of Overseers. His father Increase had succeeded John Rogers as president of Harvard in 1684, first as acting president (1684–1686), later with the title of "rector" (1686–1692, during much of which period he was away from Massachusetts, pleading the Puritans' case before the Royal Court in London), and finally with the full title of president (1692–1701). Increase was unwilling to move permanently to the Harvard campus in Cambridge, Massachusetts, since his congregation in Boston was much larger than the Harvard student body, which at the time counted only a few dozen. Instructed by a committee of the Massachusetts General Assembly that the president of Harvard had to reside in Cambridge and preach to the students in person, Increase resigned in 1701 and was replaced by the Rev. Samuel Willard as acting president. Cotton Mather sought the presidency of Harvard, but in 1708 the fellows instead appointed a layman, John Leverett, who had the support of Governor Dudley. The Mathers disapproved of the increasing independence and liberalism of the Harvard faculty, which they regarded as laxity. Cotton Mather came to see the Collegiate School, which had moved in 1716 from Saybrook to New Haven, Connecticut, as a better vehicle for preserving the Puritan orthodoxy in New England. In 1718, Cotton convinced Boston-born British businessman Elihu Yale to make a charitable gift sufficient to ensure the school's survival. It was also Mather who suggested that the school change its name to Yale College after it accepted that donation. Cotton Mather sought the presidency of Harvard again after Leverett's death in 1724, but the fellows offered the position to the Rev. Joseph Sewall (son of Judge Samuel Sewall, who had repented publicly for his role in the Salem witch trials). When Sewall turned it down, Mather once again hoped that he might get the appointment. Instead, the fellows offered it to one of their own number, the Rev. Benjamin Colman, an old rival of Mather. When Colman refused it, the presidency went finally to the Rev. Benjamin Wadsworth. Advocacy for smallpox inoculation The practice of smallpox inoculation (as distinguished from the later practice of vaccination) was developed possibly in 8th-century India or 10th-century China and by the 17th century had reached Turkey. It was also practiced in western Africa, but it is not known when it started there. Inoculation, or rather variolation, involved infecting a person via a cut in the skin with exudate from a patient with a relatively mild case of smallpox (variola), to bring about a manageable and recoverable infection that would provide later immunity. By the beginning of the 18th century, the Royal Society in England was discussing the practice of inoculation, and the smallpox epidemic in 1713 spurred further interest. It was not until 1721, however, that England recorded its first case of inoculation. Early New England Smallpox was a serious threat in colonial America, most devastating to Native Americans, but also to Anglo-American settlers. New England suffered smallpox epidemics in 1677, 1689–90, and 1702. It was highly contagious, and mortality could reach as high as 30 percent. Boston had been plagued by smallpox outbreaks in 1690 and 1702. During this era, public authorities in Massachusetts dealt with the threat primarily by means of quarantine. 
Incoming ships were quarantined in Boston Harbor, and any smallpox patients in town were held under guard or in a "pesthouse". In 1716, Onesimus, one of Mather's slaves, explained to Mather how he had been inoculated as a child in Africa. Mather was fascinated by the idea. By July 1716, he had read an endorsement of inoculation by Dr Emanuel Timonius of Constantinople in the Philosophical Transactions. Mather then declared, in a letter to Dr John Woodward of Gresham College in London, that he planned to press Boston's doctors to adopt the practice of inoculation should smallpox reach the colony again. By 1721, a whole generation of young Bostonians was vulnerable and memories of the last epidemic's horrors had by and large disappeared. Smallpox returned on April 22 of that year, when HMS Seahorse arrived from the West Indies carrying smallpox on board. Despite attempts to protect the town through quarantine, nine known cases of smallpox appeared in Boston by May 27, and by mid-June, the disease was spreading at an alarming rate. As a new wave of smallpox hit the area and continued to spread, many residents fled to outlying rural settlements. The combination of exodus, quarantine, and outside traders' fears disrupted business in the capital of the Bay Colony for weeks. Guards were stationed at the House of Representatives to keep Bostonians from entering without special permission. The death toll reached 101 in September, and the Selectmen, powerless to stop it, "severely limited the length of time funeral bells could toll." As one response, legislators delegated a thousand pounds from the treasury to help the people who, under these conditions, could no longer support their families. On June 6, 1721, Mather sent an abstract of reports on inoculation by Timonius and Jacobus Pylarinus to local physicians, urging them to consult about the matter. He received no response. Next, Mather pleaded his case to Dr. Zabdiel Boylston, who tried the procedure on his youngest son and two slaves—one grown and one a boy. All recovered in about a week. Boylston inoculated seven more people by mid-July. The epidemic peaked in October 1721, with 411 deaths; by February 26, 1722, Boston was again free from smallpox. The total number of cases since April 1721 came to 5,889, with 844 deaths—more than three-quarters of all the deaths in Boston during 1721. Meanwhile, Boylston had inoculated 287 people, with six resulting deaths. Inoculation debate Boylston and Mather's inoculation crusade "raised a horrid Clamour" among the people of Boston. Both Boylston and Mather were "Object[s] of their Fury; their furious Obloquies and Invectives", which Mather acknowledges in his diary. Boston's Selectmen, consulting a doctor who claimed that the practice caused many deaths and only spread the infection, forbade Boylston from performing it again. The New-England Courant published writers who opposed the practice. The editorial stance was that the Boston populace feared that inoculation spread, rather than prevented, the disease; however, some historians, notably H. W. Brands, have argued that this position was a result of the contrarian positions of editor-in-chief James Franklin (a brother of Benjamin Franklin). Public discourse ranged in tone from organized arguments by John Williams from Boston, who posted that "several arguments proving that inoculating the smallpox is not contained in the law of Physick, either natural or divine, and therefore unlawful", to those put forth in a pamphlet by Dr. 
William Douglass of Boston, entitled The Abuses and Scandals of Some Late Pamphlets in Favour of Inoculation of the Small Pox (1721), on the qualifications of inoculation's proponents. (Douglass was exceptional at the time for holding a medical degree from Europe.) At the extreme, in November 1721, someone hurled a lighted grenade into Mather's home. Medical opposition Several opponents of smallpox inoculation, among them John Williams, stated that there were only two laws of physick (medicine): sympathy and antipathy. In his estimation, inoculation was neither a sympathy toward a wound or a disease, or an antipathy toward one, but the creation of one. For this reason, its practice violated the natural laws of medicine, transforming health care practitioners into those who harm rather than heal. As with most colonists, Williams' Puritan beliefs were enmeshed in every aspect of his life, and he used the Bible to state his case. He quoted Matthew 9:12, when Jesus said: "It is not the healthy who need a doctor, but the sick." William Douglass proposed a more secular argument against inoculation, stressing the importance of reason over passion and urging the public to be pragmatic in their choices. In addition, he demanded that ministers leave the practice of medicine to physicians, and not meddle in areas where they lacked expertise. According to Douglass, smallpox inoculation was "a medical experiment of consequence," one not to be undertaken lightly. He believed that not all learned individuals were qualified to doctor others, and while ministers took on several roles in the early years of the colony, including that of caring for the sick, they were now expected to stay out of state and civil affairs. Douglass felt that inoculation caused more deaths than it prevented. The only reason Mather had had success in it, he said, was because Mather had used it on children, who are naturally more resilient. Douglass vowed to always speak out against "the wickedness of spreading infection". Speak out he did: "The battle between these two prestigious adversaries [Douglass and Mather] lasted far longer than the epidemic itself, and the literature accompanying the controversy was both vast and venomous." Puritan resistance Generally, Puritan pastors favored the inoculation experiments. Increase Mather, Cotton's father, was joined by prominent pastors Benjamin Colman and William Cooper in openly propagating the use of inoculations. "One of the classic assumptions of the Puritan mind was that the will of God was to be discerned in nature as well as in revelation." Nevertheless, Williams questioned whether the smallpox "is not one of the strange works of God; and whether inoculation of it be not a fighting with the most High." He also asked his readers if the smallpox epidemic may have been given to them by God as "punishment for sin," and warned that attempting to shield themselves from God's fury (via inoculation), would only serve to "provoke him more". Puritans found meaning in affliction, and they did not yet know why God was showing them disfavor through smallpox. Not to address their errant ways before attempting a cure could set them back in their "errand". Many Puritans believed that creating a wound and inserting poison was doing violence and therefore was antithetical to the healing art. They grappled with adhering to the Ten Commandments, with being proper church members and good caring neighbors. 
The apparent contradiction between harming or murdering a neighbor through inoculation and the Sixth Commandment—"thou shalt not kill"—seemed insoluble and hence stood as one of the main objections against the procedure. Williams maintained that because the subject of inoculation could not be found in the Bible, it was not the will of God, and therefore "unlawful." He explained that inoculation violated The Golden Rule, because if one neighbor voluntarily infected another with disease, he was not doing unto others as he would have done to him. With the Bible as the Puritans' source for all decision-making, lack of scriptural evidence concerned many, and Williams vocally scorned Mather for not being able to reference an inoculation edict directly from the Bible. Inoculation defended With the smallpox epidemic catching speed and racking up a staggering death toll, a solution to the crisis was becoming more urgently needed by the day. The use of quarantine and various other efforts, such as balancing the body's humors, did not slow the spread of the disease. As news rolled in from town to town and correspondence arrived from overseas, reports of horrific stories of suffering and loss due to smallpox stirred mass panic among the people. "By circa 1700, smallpox had become among the most devastating of epidemic diseases circulating in the Atlantic world." Mather strongly challenged the perception that inoculation was against the will of God and argued the procedure was not outside of Puritan principles. He wrote that "whether a Christian may not employ this Medicine (let the matter of it be what it will) and humbly give Thanks to God's good Providence in discovering of it to a miserable World; and humbly look up to His Good Providence (as we do in the use of any other Medicine) It may seem strange, that any wise Christian cannot answer it. And how strangely do Men that call themselves Physicians betray their Anatomy, and their Philosophy, as well as their Divinity in their invectives against this Practice?" The Puritan minister began to embrace the sentiment that smallpox was an inevitability for anyone, both the good and the wicked, yet God had provided them with the means to save themselves. Mather reported that, from his view, "none that have used it ever died of the Small Pox, tho at the same time, it were so malignant, that at least half the People died, that were infected With it in the Common way." While Mather was experimenting with the procedure, prominent Puritan pastors Benjamin Colman and William Cooper expressed public and theological support for them. The practice of smallpox inoculation was eventually accepted by the general population due to first-hand experiences and personal relationships. Although many were initially wary of the concept, it was because people were able to witness the procedure's consistently positive results, within their own community of ordinary citizens, that it became widely utilized and supported. One important change in the practice after 1721 was regulated quarantine of inoculees. The aftermath Although Mather and Boylston were able to demonstrate the efficacy of the practice, the debate over inoculation would continue even beyond the epidemic of 1721–22. After overcoming considerable difficulty and achieving notable success, Boylston traveled to London in 1725, where he published his results and was elected to the Royal Society in 1726, with Mather formally receiving the honor two years prior. 
Other scientific work In 1716, Mather used different varieties of maize ("Indian corn") to conduct one of the first recorded experiments on plant hybridization. He described the results in a letter to his friend James Petiver: In his Curiosa Americana (1712–1724) collection, Mather also announced that flowering plants reproduce sexually, an observation that later became the basis of the Linnaean system of plant classification. Mather may also have been the first to develop the concept of genetic dominance, which later would underpin Mendelian genetics. In 1713, the Secretary of the Royal Society of London, naturalist Richard Waller, informed Mather that he had been elected as a fellow of the Society. Mather was the eighth colonial American to join that learned body, with the first having been John Winthrop the Younger in 1662. During the controversies surrounding Mather's smallpox inoculation campaign of 1721, his adversaries questioned that credential on the grounds that Mather's name did not figure in the published lists of the Society's members. At the time, the Society responded that those published lists included only members who had been inducted in person and who were therefore entitled to vote in the Society's yearly elections. In May 1723, Mather's correspondent John Woodward discovered that, although Mather had been duly nominated in 1713, approved by the council, and informed by Waller of his election at that time, due to an oversight the nomination had not in fact been voted upon by the full assembly of fellows or the vote had not been recorded. After Woodward informed the Society of the situation, the members proceeded to elect Mather by a formal vote. Mather's enthusiasm for experimental science was strongly influenced by his reading of Robert Boyle's work. Mather was a significant popularizer of the new scientific knowledge and promoted Copernican heliocentrism in some of his sermons. He also argued against the spontaneous generation of life and compiled a medical manual titled The Angel of Bethesda that he hoped would assist people who were unable to procure the services of a physician, but which went unpublished in Mather's lifetime. This was the only comprehensive medical work written in colonial English-speaking America. Although much of what Mather included in that manual were folk remedies now regarded as unscientific or superstitious, some of them are still valid, including smallpox inoculation and the use of citrus juice to treat scurvy. Mather also outlined an early form of germ theory and discussed psychogenic diseases, while recommending hygiene, physical exercise, temperate diet, and avoidance of tobacco smoking. In his later years, Mather also promoted the professionalization of scientific research in America. He presented a Boston tradesman named Grafton Feveryear with the barometer that Feveryear used to make the first quantitative meteorological observations in New England, which he communicated to the Royal Society in 1727. Mather also sponsored Isaac Greenwood, a Harvard graduate and member of Mather's church, who travelled to London and collaborated with the Royal Society's curator of experiments, John Theophilus Desaguliers. Greenwood later became the first Hollis professor of mathematics and natural philosophy at Harvard, and may well have been the first American to practice science professionally. Slavery and racial attitudes Cotton Mather's household included both free servants and a number of slaves who performed domestic chores. 
Surviving records indicate that, over the course of his lifetime, Mather owned at least three, and probably more, slaves. Like the vast majority of Christians at the time, but unlike his political rival Judge Samuel Sewall, Mather was never an abolitionist, although he did publicly denounce what he regarded as the illegal and inhuman aspects of the burgeoning Atlantic slave trade. Concerned about the New England sailors enslaved in Africa since the 1680s and 1690s, in 1698 Mather wrote them his Pastoral Letter to the Captives, consoling them, and expressing hope that “your slavery to the monsters of Africa will be but short.” On the return of some survivors of African slavery in 1703, Mather published The History of What the Goodness of God has done for the Captives, lately delivered out of Barbary, wherein he lamented the death of many of the American slaves, the length of their captivity — which he described as between 7 and 19 years, — the harsh conditions of their bondage, and celebrated their refusal to convert to Islam, unlike others who did. In his book The Negro Christianized (1706), Mather insisted that slaveholders should treat their black slaves humanely and instruct them in Christianity with a view to promoting their salvation. Mather received black members of his congregation in his home and he paid a schoolteacher to instruct local black people in reading. Mather consistently held that black Africans were "of one Blood" with the rest of mankind and that blacks and whites would meet as equals in Heaven. After a number of black people carried out arson attacks in Boston in 1723, Mather asked the outraged white Bostonians whether the black population had been "always treated according to the Rules of Humanity? Are they treated as those, that are of one Blood with us, and those who have Immortal Souls in them, and are not mere Beasts of Burden?" Mather advocated the Christianization of black slaves both on religious grounds and as tending to make them more patient and faithful servants of their masters. In The Negro Christianized, Mather argued against the opinion of Richard Baxter that a Christian could not enslave another baptized Christian. The African slave Onesimus, from whom Mather first learned about smallpox inoculation, had been purchased for him as a gift by his congregation in 1706. Despite his efforts, Mather was unable to convert Onesimus to Christianity and finally manumitted him in 1716. Sermons against pirates and piracy Throughout his career Mather was also keen to minister to convicted pirates. He produced a number of pamphlets and sermons concerning piracy, including Faithful Warnings to prevent Fearful Judgments; Instructions to the Living, from the Condition of the Dead; The Converted Sinner… A Sermon Preached in Boston, May 31, 1724, In the Hearing and at the Desire of certain Pirates; A Brief Discourse occasioned by a Tragical Spectacle of a Number of Miserables under Sentence of Death for Piracy; Useful Remarks. An Essay upon Remarkables in the Way of Wicked Men and The Vial Poured Out Upon the Sea. His father Increase had preached at the trial of Dutch pirate Peter Roderigo; Cotton Mather in turn preached at the trials and sometimes executions of pirate Captains (or the crews of) William Fly, John Quelch, Samuel Bellamy, William Kidd, Charles Harris, and John Phillips. 
He also ministered to Thomas Hawkins, Thomas Pound, and William Coward; having been convicted of piracy, they were jailed alongside "Mary Glover the Irish Catholic witch," daughter of witch "Goody" Ann Glover at whose trial Mather had also preached. In his conversations with William Fly and his crew Mather scolded them: "You have something within you, that will compell you to confess, That the Things which you have done, are most Unreasonable and Abominable. The Robberies and Piracies, you have committed, you can say nothing to Justify them. … It is a most hideous Article in the Heap of Guilt lying on you, that an Horrible Murder is charged upon you; There is a cry of Blood going up to Heaven against you." Death and place of burial Cotton Mather was twice widowed, and only two of his 15 children survived him. He died on the day after his 65th birthday and was buried on Copp's Hill Burying Ground, in Boston's North End. Works Mather was a prolific writer and industrious in having his works printed, including a vast number of his sermons. Major Memorable Providences (1689) his first full book, on the subject of witchcraft Wonders of the Invisible World (1692) his second major book, also on witchcraft, sent to London in October, 1692 Pillars of Salt (1699) Magnalia Christi Americana (1702) The Negro Christianized (1706) Corderius Americanus: A Discourse on the Good Education of Children (1708) Bonifacius (1710) Pillars of Salt Mather's first published sermon, printed in 1686, concerned the execution of James Morgan, convicted of murder. Thirteen years later, Mather published the sermon in a compilation, along with other similar works, called Pillars of Salt. Magnalia Christi Americana Magnalia Christi Americana, considered Mather's greatest work, was published in 1702, when he was 39. The book includes several biographies of saints and describes the process of the New England settlement. In this context "saints" does not refer to the canonized saints of the Catholic church, but to those Puritan divines about whom Mather is writing. It comprises seven total books, including Pietas in Patriam: The life of His Excellency Sir William Phips, originally published anonymously in London in 1697. Despite being one of Mather's best-known works, some have openly criticized it, labeling it as hard to follow and understand, and poorly paced and organized. However, other critics have praised Mather's work, citing it as one of the best efforts at properly documenting the establishment of America and growth of the people. The Christian Philosopher In 1721, Mather published The Christian Philosopher, the first systematic book on science published in America. Mather attempted to show how Newtonian science and religion were in harmony. It was in part based on Robert Boyle's The Christian Virtuoso (1690). Mather took inspiration from Hayy ibn Yaqdhan, by the 12th-century Islamic philosopher Abu Bakr Ibn Tufail. Mather's short treatise on the Lord's Supper was later translated by his cousin Josiah Cotton. In popular culture Comic books Marvel Comics features a supervillain character named Cotton Mather with alias name, 'Witch-Slayer', that is an enemy of Spider-Man. He first appears in the 1976 comic 'Marvel Team-Up' issue #41, and appears in the subsequent issues until issue #45. Music The rock band Cotton Mather is named after Mather. 
The Handsome Family's 2006 album Last Days of Wonder is named in reference to Mather's 1693 book Wonders of the Invisible World, which lyricist Rennie Sparks found intriguing because of what she called its "madness brimming under the surface of things." Radio Howard da Silva portrayed Mather in Burn, Witch, Burn, a December 15, 1975 episode of the CBS Radio Mystery Theater. Literature One of the stories in Richard Brautigan's collection Revenge of the Lawn is called "1692 Cotton Mather Newsreel". In Burned: A Daughters of Salem Novel, a 2023 young adult novel by Kellie O'Neill, the character "Vivienne Mathers" is a descendant of Cotton Mather. Mather is mentioned in several of the New England horror stories of H.P. Lovecraft, such as "The Picture in the House," "The Unnamable" and "Pickman's Model." Television Seth Gabel portrays Cotton Mather in the TV series Salem, which aired from 2014 to 2017. See also Charles Colcock Jones References Sources Further reading External links Salem Witchcraft and Cotton Mather by Charles Wentworth Upham at Project Gutenberg Cotton Mather's writings Mather's influential commentary, collegiateway.org The Wonders of the Invisible World (1693 edition) (PDF format) The Threefold Paradise of Cotton Mather: An Edition of "Triparadisus" (PDF format) Cotton Mather's "~Resolved~", A Puritan Father's Lesson Plan, neprimer.com Cotton Mather's "The Story of Margaret Rule", bartleby.com 1663 births 1728 deaths 17th-century American philosophers 17th-century American writers 17th-century apocalypticists 17th-century Calvinist and Reformed theologians 17th-century male writers 17th-century New England Puritan ministers 18th-century American male writers 18th-century American philosophers 18th-century apocalypticists 18th-century Calvinist and Reformed theologians 18th-century evangelicals 18th-century New England Puritan ministers Alumni of the University of Glasgow American Calvinist and Reformed theologians American Evangelical writers American people of English descent American religious writers American sermon writers Slave owners from the Thirteen Colonies Boston Latin School alumni Burials at Copp's Hill Burying Ground Calvinist and Reformed writers Christianity in the early modern period Clergy in the Salem witch trials Demonologists Early modern period Fellows of the Royal Society Harvard College alumni History of religion in the United States Mather family People from colonial Boston People from North End, Boston Philosophers of science Witch hunters Writers from Boston Yale University people Vaccination advocates
Cotton Mather
[ "Biology" ]
10,393
[ "Vaccination", "Vaccination advocates" ]
7,120
https://en.wikipedia.org/wiki/Calreticulin
Calreticulin, also known as calregulin, CRP55, CaBP3, calsequestrin-like protein, and endoplasmic reticulum resident protein 60 (ERp60), is a protein that in humans is encoded by the CALR gene. Calreticulin is a multifunctional soluble protein that binds Ca2+ ions (a second messenger in signal transduction), rendering them inactive. The Ca2+ is bound with low affinity, but high capacity, and can be released on a signal (see inositol trisphosphate). Calreticulin is located in storage compartments associated with the endoplasmic reticulum and is considered an ER resident protein. The term "Mobilferrin" is considered to be the same as calreticulin by some sources. Function Calreticulin binds to misfolded proteins and prevents them from being exported from the endoplasmic reticulum to the Golgi apparatus. A similar quality-control molecular chaperone, calnexin, performs the same service for soluble proteins as does calreticulin; unlike calreticulin, however, calnexin is a membrane-bound protein. Both proteins, calnexin and calreticulin, have the function of binding to oligosaccharides containing terminal glucose residues, thereby targeting them for degradation. The ability of calreticulin and calnexin to bind carbohydrates associates them with the lectin protein family. In normal cellular function, trimming of glucose residues off the core oligosaccharide added during N-linked glycosylation is a part of protein processing. If "overseer" enzymes note that residues are misfolded, proteins within the rER will re-add glucose residues so that other calreticulin/calnexin can bind to these proteins and prevent them from proceeding to the Golgi. This leads these aberrantly folded proteins down a path whereby they are targeted for degradation. Studies on transgenic mice reveal that calreticulin is a cardiac embryonic gene that is essential during development. Calreticulin and calnexin are also integral in the production of MHC class I proteins. As newly synthesized MHC class I α-chains enter the endoplasmic reticulum, calnexin binds to them, retaining them in a partly folded state. After the β2-microglobulin binds to the peptide-loading complex (PLC), calreticulin (along with ERp57) takes over the job of chaperoning the MHC class I protein while the tapasin links the complex to the transporter associated with antigen processing (TAP) complex. This association prepares the MHC class I to bind an antigen for presentation on the cell surface. Transcription regulation Calreticulin is also found in the nucleus, suggesting that it may have a role in transcription regulation. Calreticulin binds to the synthetic peptide KLGFFKR, which is almost identical to an amino acid sequence in the DNA-binding domain of the superfamily of nuclear receptors. The amino terminus of calreticulin interacts with the DNA-binding domain of the glucocorticoid receptor and prevents the receptor from binding to its specific glucocorticoid response element. Calreticulin can inhibit the binding of androgen receptor to its hormone-responsive DNA element and can inhibit androgen receptor and retinoic acid receptor transcriptional activities in vivo, as well as retinoic acid-induced neuronal differentiation. Thus, calreticulin can act as an important modulator of the regulation of gene transcription by nuclear hormone receptors. Clinical significance Calreticulin binds to antibodies in certain sera of systemic lupus and Sjögren patients that contain anti-Ro/SSA antibodies. 
Systemic lupus erythematosus is associated with increased autoantibody titers against calreticulin, but calreticulin is not a Ro/SS-A antigen. Earlier papers referred to calreticulin as a Ro/SS-A antigen, but this was later disproven. Increased autoantibody titers against human calreticulin, of both the IgG and IgM classes, are found in infants with complete congenital heart block. In 2013, two groups detected calreticulin mutations in a majority of JAK2-negative/MPL-negative patients with essential thrombocythemia and primary myelofibrosis, which makes CALR mutations the second most common in myeloproliferative neoplasms. All mutations (insertions or deletions) affected the last exon, generating a reading frame shift that creates a novel terminal peptide in the resulting protein and causes loss of the endoplasmic reticulum KDEL retention signal. Role in cancer Calreticulin (CRT) is expressed in many cancer cells and plays a role in promoting the engulfment of hazardous cancerous cells by macrophages. The reason most of these cells are not destroyed is the presence of another signaling molecule, CD47, which blocks CRT. Hence antibodies that block CD47 might be useful as a cancer treatment. In mouse models of myeloid leukemia and non-Hodgkin lymphoma, anti-CD47 antibodies were effective in clearing cancer cells while normal cells were unaffected. Interactions Calreticulin has been shown to interact with Perforin and NK2 homeobox 1. References Further reading External links C-type lectins Immune system Transcription coregulators Molecular chaperones
Calreticulin
[ "Biology" ]
1,151
[ "Immune system", "Organ systems" ]
7,123
https://en.wikipedia.org/wiki/Calendar%20date
A calendar date is a reference to a particular day represented within a calendar system. The calendar date allows the specific day to be identified. The number of days between two dates may be calculated. For example, "25 " is ten days after "15 ". The date of a particular event depends on the observed time zone. For example, the air attack on Pearl Harbor that began at 7:48 a.m. Hawaiian time on 7 December 1941 took place at 3:18 a.m. Japan Standard Time, 8 December in Japan. A particular day may be assigned a different nominal date according to the calendar used, so an identifying suffix may be needed where ambiguity may arise. The Gregorian calendar is the world's most widely used civil calendar, and is designated (in English) as AD or CE. Many cultures use religious or regnal calendars such as the Gregorian (Western Christendom, AD), Hebrew calendar (Judaism, AM), the Hijri calendars (Islam, AH), Julian calendar (Eastern Christendom, AD) or any other of the many calendars used around the world. In most calendar systems, the date consists of three parts: the (numbered) day of the month, the month, and the (numbered) year. There may also be additional parts, such as the day of the week. Years are usually counted from a particular starting point, usually called the epoch, with era referring to the span of time since that epoch. A date without the year may also be referred to as a date or calendar date (such as " " rather than " "). As such, it is either shorthand for the current year or it defines the day of an annual event, such as a birthday on 31 May, a holiday on 1 September, or Christmas on 25 December. Many computer systems internally store points in time in Unix time format or some other system time format. The date (Unix) command—internally using the C date and time functions—can be used to convert that internal representation of a point in time to most of the date representations shown here. Date format There is a large variety of formats for dates in use, which differ in the order of date components (e.g. 31/05/2006, 05/31/2006, 2006/05/31), component separators (e.g. 31.05.2006, 31/05/2006, 31-05-2006), whether leading zeros are included (e.g. 31/5/2006 vs. 31/05/2006), whether all four digits of the year are written (e.g. 31.05.2006 vs. 31.05.06), and whether the month is represented in Arabic or Roman numerals or by name (e.g. 31.05.2006, 31.V.2006 vs. 31 May 2006). These variations all use the sample date of 31 May 2006. Gregorian, day–month–year (DMY) This little-endian sequence is used by a majority of the world and is the form preferred by the United Nations when writing the full date format in official documents. This date format originates from the custom of writing the date as "the Nth day of [month] in the year of our Lord [year]" in Western religious and legal documents. The format has shortened over time but the order of the elements has remained constant. The following examples use the date of 9 November 2006. (With the years 2000–2009, care must be taken to ensure that two-digit years are not misread as 1900–1909 or other similar years.) In some of the examples, the dots act as ordinal indicators. "9 November 2006" or "9. November 2006" (the latter is common in German-speaking regions) 9/11/2006 or 09/11/2006 09.11.2006 or 9.11.2006 9. 11. 2006 9-11-2006 or 09-11-2006 09-Nov-2006 09Nov06 – Used, including in the U.S., where space needs to be saved by skipping punctuation (often seen on the dateline of Internet news articles). 
[The] 9th [of] November 2006 – 'The' and 'of' are often spoken but generally omitted in all but the most formal writing such as legal documents. 09/Nov/2006 – used in the Common Log Format Thursday, 9 November 2006 9/xi/06, 9.xi.06, 9-xi.06, 9/xi-06, 9.XI.2006, 9. XI. 2006 or 9 XI 2006 (using the Roman numeral for the month) – In the past, this was a common and typical way of distinguishing day from month and was widely used in many countries, but recently this practice has been affected by the general retreat from the use of Roman numerals. This is usually confined to handwriting only and is not put into any form of print. It is associated with a number of schools and universities. It has also been used by the Vatican as an alternative to using months named after Roman deities. It is used on Canadian postmarks as a bilingual form of the month. It was also commonly used in the Soviet Union, in both handwriting and print. 9 November 2006 CE or 9 November 2006 AD Gregorian, year–month–day (YMD) In this format, the most significant data item is written before lesser data items i.e. the year before the month before the day. It is consistent with the big-endianness of the Hindu–Arabic numeral system, which progresses from the highest to the lowest order magnitude. That is, using this format textual orderings and chronological orderings are identical. This form is standard in East Asia, Iran, Lithuania, Hungary, and Sweden; and some other countries to a limited extent. Examples for the 9th of November 2003: 2003-11-09: the standard Internet date/time format, a profile of the international standard ISO 8601, orders the components of a date like this, and additionally uses leading zeros, for example, 1996-05-01, to be easily read and sorted by computers. It is used with UTC in RFC 3339. This format is also favored in certain Asian countries, mainly East Asian countries, as well as in some European countries. The big-endian convention is also frequently used in Canada, but all three conventions are used there (both endians and the American MMDDYYYY format are allowed on Canadian bank cheques provided that the layout of the cheque makes it clear which style is to be used). 2003 November 9 2003Nov9 or 2003Nov09 2003-Nov-9 or 2003-Nov-09 2003-Nov-9, Sunday 2003. 9. – The official format in Hungary, point after year and day, month name with small initial. Following shorter formats also can be used: 2003. . 9., 2003. 11. 9., 2003. XI. 9. 2003.11.9 using dots and no leading zeros, common in China. 2003.11.09 2003/11/09 using slashes and leading zeros, common in Japan on the Internet. 2003/11/9 03/11/09 20031109 : the "basic format" profile of ISO 8601, an 8-digit number providing monotonic date codes, common in computing and increasingly used in dated computer file names. It is used in the standard iCalendar file format defined in RFC 5545. A big advantage of the ISO 8601 "basic format" is that a simple textual sort is equivalent to a sort by date. It is also extended through the universal big-endian format clock time: 9 November 2003, 18h 14m 12s, or 2003/11/9/18:14:12 or (ISO 8601) 2003-11-09T18:14:12. Gregorian, month–day–year (MDY) This sequence is used primarily in the Philippines and the United States. It is also used to varying extents in Canada (though never in Quebec). This date format was commonly used alongside the little-endian form in the United Kingdom until the mid-20th century and can be found in both defunct and modern print media such as the London Gazette and The Times, respectively. 
This format was also commonly used by several English-language print media in many former British colonies and was also one of two formats commonly used in India during the British Raj era until the mid-20th century. Thursday, November 9, 2006 November 9, 2006 Nov 9, 2006 Nov-9-2006 Nov-09-2006 11/9/2006 or 11/09/2006 11-09-2006 or 11-9-2006 11.09.2006 or 11.9.2006 11.09.06 11/09/06 Modern style guides recommend avoiding the use of the ordinal (e.g. 1st, 2nd, 3rd, 4th) form of numbers when the day follows the month (July 4 or July 4, 2024), and that format is not included in ISO standards. The ordinal was common in the past and is still sometimes used ([the] 4th [of] July or July 4th). Gregorian, year–day–month (YDM) This date format is used in Kazakhstan, Latvia, Nepal, and Turkmenistan. According to the official rules of documenting dates by governmental authorities, the long date format in Kazakh is written in the year–day–month order, e.g. 2006 5 April. Standards There are several standards that specify date formats: ISO 8601 Data elements and interchange formats – Information interchange – Representation of dates and times specifies YYYY-MM-DD (the separators are optional, but only hyphens are allowed to be used), where all values are fixed length numeric, but also allows YYYY-DDD, where DDD is the ordinal number of the day within the year, e.g. 2001-365. RFC 3339 Date and Time on the Internet: Timestamps specifies YYYY-MM-DD, i.e. a particular subset of the options allowed by ISO 8601. RFC 5322 Internet Message Format specifies day month year where day is one or two digits, month is a three letter month abbreviation, and year is four digits. Difficulties Many numerical forms can create confusion when used in international correspondence, particularly when abbreviating the year to its final two digits, with no context. For example, "07/08/06" could refer to either 7 August 2006 or July 8, 2006 (or 1906, or the sixth year of any century), or 2007 August 6. The YYYY-MM-DD date format of ISO 8601, as well as other international standards, has been adopted for many applications for reasons including reducing transnational ambiguity and simplifying machine processing. An early U.S. Federal Information Processing Standard recommended 2-digit years. This is now widely recognized as extremely problematic, because of the year 2000 problem. Some U.S. government agencies now use ISO 8601 with 4-digit years. When transitioning from one calendar or date notation to another, a format that includes both styles may be developed; for example Old Style and New Style dates in the transition from the Julian to the Gregorian calendar. Advantages for ordering in sequence One of the advantages of using the ISO 8601 date format is that the lexicographical order (ASCIIbetical) of the representations is equivalent to the chronological order of the dates, assuming that all dates are in the same time zone. Thus dates can be sorted using simple string comparison algorithms, and indeed by any left to right collation. For example: 2003-02-28 (28 February 2003) sorts before 2006-03-01 (1 March 2006) which sorts before 2015-01-30 (30 January 2015) The YYYY-MM-DD layout is the only common format that can provide this. Sorting other date representations involves some parsing of the date strings. This also works when a time in 24-hour format is included after the date, as long as all times are understood to be in the same time zone. 
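To make the formatting and sorting behaviour described above concrete, the following is a brief editorial sketch (not part of any cited standard) using Python's standard datetime module; the sample date of 9 November 2003 is reused from the YMD examples above.

```python
from datetime import date

# Hedged illustration: rendering one date in several of the formats discussed above.
d = date(2003, 11, 9)

print(d.strftime("%d/%m/%Y"))  # DMY with slashes:        09/11/2003
print(d.strftime("%m/%d/%Y"))  # MDY with slashes:        11/09/2003
print(d.isoformat())           # ISO 8601 extended (YMD): 2003-11-09
print(d.strftime("%Y%m%d"))    # ISO 8601 basic format:   20031109
print(d.strftime("%Y-%j"))     # ISO 8601 ordinal date:   2003-313

# Because YYYY-MM-DD strings order lexicographically in the same way the dates
# order chronologically, a plain string sort is also a chronological sort.
iso_dates = ["2006-03-01", "2003-02-28", "2015-01-30"]
assert sorted(iso_dates) == ["2003-02-28", "2006-03-01", "2015-01-30"]
```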
ISO 8601 is used widely where concise, human-readable yet easily computable and unambiguous dates are required, although many applications store dates internally as UNIX time and only convert to ISO 8601 for display. All modern computer operating systems retain date information of files outside of their titles, allowing the user to choose which format they prefer and have them sorted thus, irrespective of the files' names. Specialized usage Day and year only The U.S. military sometimes uses a system, known to them as the "Julian date format", which indicates the year and the actual day out of the 365 days of the year (and thus a designation of the month would not be needed). For example, "11 December 1999" can be written in some contexts as "1999345" or "99345", for the 345th day of 1999. This system is most often used in US military logistics since it simplifies the process of calculating estimated shipping and arrival dates. For example: say a tank engine takes an estimated 35 days to ship by sea from the US to South Korea. If the engine is sent on 06104 (Friday, 14 April 2006), it should arrive on 06139 (Friday, 19 May). Outside of the US military and some US government agencies, including the Internal Revenue Service, this format is usually referred to as "ordinal date", rather than "Julian date". Such ordinal date formats are also used by many computer programs (especially those for mainframe systems). Using a three-digit Julian day number saves one byte of computer storage over a two-digit month plus two-digit day, for example, "January 17" is 017 in Julian versus 0117 in month-day format. OS/390 or its successor, z/OS, display dates in yy.ddd format for most operations. UNIX time stores time as a number in seconds since the beginning of the UNIX Epoch (1970-01-01). Another "ordinal" date system ("ordinal" in the sense of advancing in value by one as the date advances by one day) is in common use in astronomical calculations and referencing and uses the same name as this "logistics" system. The continuity of representation of period regardless of the time of year being considered is highly useful to both groups of specialists. The astronomers describe their system as also being a "Julian date" system. Week number used Companies in Europe often use year, week number, and day for planning purposes. So, for example, an event in a project can happen in week 43, on the Monday of week 43, or, if the year needs to be indicated, in week 43 of 2006 (i.e., Monday 23 October to Sunday 29 October 2006). An ISO week-numbering year has 52 or 53 full weeks. That is 364 or 371 days instead of the conventional Gregorian year of 365 or 366 days. These 53-week years occur in all years that have Thursday as 1 January and in leap years that start on Wednesday 1 January. The extra week is sometimes referred to as a 'leap week', although ISO 8601 does not use this term. Expressing dates in spoken English In English-language usage outside North America (mostly in Anglophone Europe and some countries in Australasia), full dates are written as 7 December 1941 (or 7th December 1941) and spoken as "the seventh of December, nineteen forty-one" (exceedingly common usage of "the" and "of"), with the occasional usage of December 7, 1941 ("December the seventh, nineteen forty-one"). In common with most continental European usage, however, all-numeric dates are invariably ordered dd/mm/yyyy. 
In Canada and the United States, the usual written form is December 7, 1941, spoken as "December seventh, nineteen forty-one" or colloquially "December the seventh, nineteen forty-one". Ordinal numerals, however, are not always used when writing and pronouncing dates, and "December seven, nineteen forty-one" is also an accepted pronunciation of the date written December 7, 1941. A notable exception to this rule is the Fourth of July (U.S. Independence Day). See also Calendar algorithms Date and time representation by country Date and time notation in the United Kingdom Date and time notation in the United States Internationalization and localization ISO 8601 – an international standard covering the representation of dates and times List of calendars Time formatting and storage bugs Year 10,000 problem Notes References External links IETF: The ISO 8601 Date Format : Y10K and Beyond Today's date (Gregorian) in over 400 more-or-less obscure foreign languages Date
Calendar date
[ "Physics" ]
3,527
[ "Spacetime", "Calendars", "Physical quantities", "Time" ]
7,125
https://en.wikipedia.org/wiki/Center%20%28group%20theory%29
In abstract algebra, the center of a group G is the set of elements that commute with every element of G. It is denoted Z(G), from German Zentrum, meaning center. In set-builder notation, Z(G) = {z ∈ G : zg = gz for all g ∈ G}. The center is a normal subgroup, Z(G) ⊴ G, and also a characteristic subgroup, but is not necessarily fully characteristic. The quotient group, G / Z(G), is isomorphic to the inner automorphism group, Inn(G). A group G is abelian if and only if Z(G) = G. At the other extreme, a group is said to be centerless if Z(G) is trivial; i.e., consists only of the identity element. The elements of the center are central elements. As a subgroup The center of G is always a subgroup of G. In particular: Z(G) contains the identity element e of G, because it commutes with every element of G, by definition: eg = g = ge, where e is the identity; If x and y are in Z(G), then so is xy, by associativity: (xy)g = x(yg) = x(gy) = (xg)y = (gx)y = g(xy) for each g ∈ G; i.e., Z(G) is closed; If x is in Z(G), then so is x⁻¹ as, for all g in G, x⁻¹ commutes with g: gx = xg implies x⁻¹gxx⁻¹ = x⁻¹xgx⁻¹, i.e., x⁻¹g = gx⁻¹. Furthermore, the center of G is always an abelian and normal subgroup of G. Since all elements of Z(G) commute, it is closed under conjugation. A group homomorphism f : G → H might not restrict to a homomorphism between their centers. The image elements f(z) commute with the image f(G), but they need not commute with all of H unless f is surjective. Thus the center mapping is not a functor between categories Grp and Ab, since it does not induce a map of arrows. Conjugacy classes and centralizers By definition, an element g ∈ G is central whenever its conjugacy class contains only the element itself; i.e. Cl(g) = {g}. The center is the intersection of all the centralizers of elements of G: Z(G) = ⋂_{g ∈ G} C_G(g). As centralizers are subgroups, this again shows that the center is a subgroup. Conjugation Consider the map f : G → Aut(G), from G to the automorphism group of G, defined by f(g) = φ_g, where φ_g is the automorphism of G defined by φ_g(h) = ghg⁻¹. The function f is a group homomorphism, and its kernel is precisely the center of G, and its image is called the inner automorphism group of G, denoted Inn(G). By the first isomorphism theorem we get G / Z(G) ≅ Inn(G). The cokernel of this map is the group Out(G) of outer automorphisms, and these form the exact sequence 1 → Z(G) → G → Aut(G) → Out(G) → 1. Examples The center of an abelian group, G, is all of G. The center of the Heisenberg group (the group of 3 × 3 upper triangular matrices with 1s on the diagonal) is the set of matrices of the form: rows (1, 0, c), (0, 1, 0), (0, 0, 1), with c arbitrary. The center of a nonabelian simple group is trivial. The center of the dihedral group, D_n, is trivial for odd n ≥ 3. For even n ≥ 4, the center consists of the identity element together with the 180° rotation of the polygon. The center of the quaternion group, Q_8 = {1, −1, i, −i, j, −j, k, −k}, is {1, −1}. The center of the symmetric group, S_n, is trivial for n ≥ 3. The center of the alternating group, A_n, is trivial for n ≥ 4. The center of the general linear group over a field F, GL_n(F), is the collection of scalar matrices, {sI_n : s ∈ F, s ≠ 0}. The center of the orthogonal group, O_n(F), is {I_n, −I_n}. The center of the special orthogonal group, SO(n), is the whole group when n = 2, and otherwise {I_n, −I_n} when n is even, and trivial when n is odd. The center of the unitary group, U(n), is {e^(iθ)·I_n : θ ∈ [0, 2π)}. The center of the special unitary group, SU(n), is {e^(iθ)·I_n : θ = 2πk/n for k = 0, 1, ..., n − 1}, which is isomorphic to the cyclic group of order n. The center of the multiplicative group of non-zero quaternions is the multiplicative group of non-zero real numbers. Using the class equation, one can prove that the center of any non-trivial finite p-group is non-trivial. If the quotient group G / Z(G) is cyclic, G is abelian (and hence G = Z(G), so G / Z(G) is trivial). The center of the Rubik's Cube group consists of two elements – the identity (i.e. the solved state) and the superflip. The center of the Pocket Cube group is trivial. The center of the Megaminx group has order 2, and the center of the Kilominx group is trivial. 
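For small finite groups, the definition of the center and its characterization as an intersection of centralizers can be checked directly by brute force. The following is an editorial sketch in Python (not part of the original article): it encodes the dihedral group of the square as permutations of its four vertices (an assumed, illustrative encoding) and recovers the fact stated above that, for even n, the center of the dihedral group consists of the identity and the 180° rotation.

```python
from itertools import product

def compose(p, q):
    """Return the permutation p∘q (apply q first, then p); permutations are tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def generate(gens):
    """Return the finite group generated by `gens` under composition (naive closure)."""
    group, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(a, b) for a, b in product(group, frontier)} - group
        group |= new
        frontier = new
    return group

def center(group):
    """Z(G): the elements that commute with every element of the group."""
    return {z for z in group if all(compose(z, g) == compose(g, z) for g in group)}

# Dihedral group of the square, acting on vertices labelled 0..3 in cyclic order.
r = (1, 2, 3, 0)   # rotation by 90 degrees
s = (3, 2, 1, 0)   # a reflection
d4 = generate({r, s})

print(len(d4))             # 8 elements
print(sorted(center(d4)))  # [(0, 1, 2, 3), (2, 3, 0, 1)]: identity and the 180° rotation
```

Replacing the generators with those of the symmetric group on three or more letters would return only the identity, in line with the statement above that S_n is centerless for n ≥ 3.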
Higher centers Quotienting out by the center of a group yields a sequence of groups called the upper central series: G_0 = G, G_1 = G_0 / Z(G_0), G_2 = G_1 / Z(G_1), and so on. The kernel of the map G → G_i is the i-th center of G (second center, third center, etc.), denoted Z_i(G). Concretely, the (i + 1)-st center comprises the elements that commute with all elements up to an element of the i-th center. Following this definition, one can define the 0th center of a group to be the identity subgroup. This can be continued to transfinite ordinals by transfinite induction; the union of all the higher centers is called the hypercenter. The ascending chain of subgroups 1 = Z_0(G) ≤ Z_1(G) ≤ Z_2(G) ≤ ⋯ stabilizes at i (equivalently, Z_i(G) = Z_{i+1}(G)) if and only if the quotient G / Z_i(G) is centerless. Examples For a centerless group, all higher centers are zero, which is the case Z_0(G) = Z_1(G) of stabilization. By Grün's lemma, the quotient of a perfect group by its center is centerless, hence all higher centers equal the center. This is a case of stabilization at Z_1(G) = Z_2(G). See also Center (algebra) Center (ring theory) Centralizer and normalizer Conjugacy class Notes References External links Group theory Functional subgroups
Center (group theory)
[ "Mathematics" ]
1,042
[ "Group theory", "Fields of abstract algebra" ]
7,158
https://en.wikipedia.org/wiki/Carat%20%28mass%29
The carat (ct) is a unit of mass equal to , which is used for measuring gemstones and pearls. The current definition, sometimes known as the metric carat, was adopted in 1907 at the Fourth General Conference on Weights and Measures, and soon afterwards in many countries around the world. The carat is divisible into 100 points of 2 mg. Other subdivisions, and slightly different mass values, have been used in the past in different locations. In terms of diamonds, a paragon is a flawless stone of at least 100 carats (20 g). The ANSI X.12 EDI standard abbreviation for the carat is CD. Etymology First attested in English in the mid-15th century, the word carat comes from Italian carato, which comes from Arabic (qīrāṭ; قيراط), in turn borrowed from Greek kerátion κεράτιον 'carob seed', a diminutive of keras 'horn'. It was a unit of weight, equal to 1/1728 (1/12) of a pound (see Mina (unit)). History Carob seeds have been used throughout history to measure jewelry, because it was believed that there was little variance in their mass distribution. However, this was a factual inaccuracy, as their mass varies about as much as seeds of other species. In the past, each country had its own carat. It was often used for weighing gold. Beginning in the 1570s, it was used to measure weights of diamonds. Standardization An 'international carat' of 205 milligrams was proposed in 1871 by the Syndical Chamber of Jewellers, etc., in Paris, and accepted in 1877 by the Syndical Chamber of Diamond Merchants in Paris. A metric carat of 200 milligrams is exactly one-fifth of a gram and had often been suggested in various countries, and was finally proposed by the International Committee of Weights and Measures, and unanimously accepted at the fourth sexennial General Conference of the Metric Convention held in Paris in October 1907. It was soon made compulsory by law in France, but uptake of the new carat was slower in England, where its use was allowed by the Weights and Measures (Metric System) Act of 1897. Historical definitions UK Board of Trade In the United Kingdom the original Board of Trade carat was exactly grains (~3.170 grains = ~205 mg); in 1888, the Board of Trade carat was changed to exactly grains (~3.168 grains = ~205 mg). Despite it being a non-metric unit, a number of metric countries have used this unit for its limited range of application. The Board of Trade carat was divisible into four diamond grains, but measurements were typically made in multiples of carat. Refiners' carats There were also two varieties of refiners' carats once used in the United Kingdom—the pound carat and the ounce carat. The pound troy was divisible into 24 pound carats of 240 grains troy each; the pound carat was divisible into four pound grains of 60 grains troy each; and the pound grain was divisible into four pound quarters of 15 grains troy each. Likewise, the ounce troy was divisible into 24 ounce carats of 20 grains troy each; the ounce carat was divisible into four ounce grains of 5 grains troy each; and the ounce grain was divisible into four ounce quarters of grains troy each. Greco-Roman The solidus was also a Roman weight unit. There is literary evidence that the weight of 72 coins of the type called solidus was exactly 1 Roman pound, and that the weight of 1 solidus was 24 siliquae. The weight of a Roman pound is generally believed to have been 327.45 g or possibly up to 5 g less. Therefore, the metric equivalent of 1 siliqua was approximately 189 mg. The Greeks had a similar unit of the same value. 
Gold fineness in carats comes from carats and grains of gold in a solidus of coin. The conversion rates 1 solidus = 24 carats, 1 carat = 4 grains still stand. Woolhouse's Measures, Weights and Moneys of All Nations gives gold fineness in carats of 4 grains, and silver in troy pounds of 12 troy ounces of 20 pennyweight each. Notes References Units of mass Jewellery making Metricated units
Carat (mass)
[ "Physics", "Mathematics" ]
914
[ "Matter", "Metricated units", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
7,163
https://en.wikipedia.org/wiki/Catenary
In physics and geometry, a catenary ( , ) is the curve that an idealized hanging chain or cable assumes under its own weight when supported only at its ends in a uniform gravitational field. The catenary curve has a U-like shape, superficially similar in appearance to a parabola, which it is not. The curve appears in the design of certain types of arches and as a cross section of the catenoid—the shape assumed by a soap film bounded by two parallel circular rings. The catenary is also called the alysoid, chainette, or, particularly in the materials sciences, an example of a funicular. Rope statics describes catenaries in a classic statics problem involving a hanging rope. Mathematically, the catenary curve is the graph of the hyperbolic cosine function. The surface of revolution of the catenary curve, the catenoid, is a minimal surface, specifically a minimal surface of revolution. A hanging chain will assume a shape of least potential energy which is a catenary. Galileo Galilei in 1638 discussed the catenary in the book Two New Sciences recognizing that it was different from a parabola. The mathematical properties of the catenary curve were studied by Robert Hooke in the 1670s, and its equation was derived by Leibniz, Huygens and Johann Bernoulli in 1691. Catenaries and related curves are used in architecture and engineering (e.g., in the design of bridges and arches so that forces do not result in bending moments). In the offshore oil and gas industry, "catenary" refers to a steel catenary riser, a pipeline suspended between a production platform and the seabed that adopts an approximate catenary shape. In the rail industry it refers to the overhead wiring that transfers power to trains. (This often supports a contact wire, in which case it does not follow a true catenary curve.) In optics and electromagnetics, the hyperbolic cosine and sine functions are basic solutions to Maxwell's equations. The symmetric modes consisting of two evanescent waves would form a catenary shape. History The word "catenary" is derived from the Latin word catēna, which means "chain". The English word "catenary" is usually attributed to Thomas Jefferson, who wrote in a letter to Thomas Paine on the construction of an arch for a bridge: It is often said that Galileo thought the curve of a hanging chain was parabolic. However, in his Two New Sciences (1638), Galileo wrote that a hanging cord is only an approximate parabola, correctly observing that this approximation improves in accuracy as the curvature gets smaller and is almost exact when the elevation is less than 45°. The fact that the curve followed by a chain is not a parabola was proven by Joachim Jungius (1587–1657); this result was published posthumously in 1669. The application of the catenary to the construction of arches is attributed to Robert Hooke, whose "true mathematical and mechanical form" in the context of the rebuilding of St Paul's Cathedral alluded to a catenary. Some much older arches approximate catenaries, an example of which is the Arch of Taq-i Kisra in Ctesiphon. In 1671, Hooke announced to the Royal Society that he had solved the problem of the optimal shape of an arch, and in 1675 published an encrypted solution as a Latin anagram in an appendix to his Description of Helioscopes, where he wrote that he had found "a true mathematical and mechanical form of all manner of Arches for Building." 
He did not publish the solution to this anagram in his lifetime, but in 1705 his executor provided it as ut pendet continuum flexile, sic stabit contiguum rigidum inversum, meaning "As hangs a flexible cable so, inverted, stand the touching pieces of an arch." In 1691, Gottfried Leibniz, Christiaan Huygens, and Johann Bernoulli derived the equation in response to a challenge by Jakob Bernoulli; their solutions were published in the Acta Eruditorum for June 1691. David Gregory wrote a treatise on the catenary in 1697 in which he provided an incorrect derivation of the correct differential equation. Leonhard Euler proved in 1744 that the catenary is the curve which, when rotated about the -axis, gives the surface of minimum surface area (the catenoid) for the given bounding circles. Nicolas Fuss gave equations describing the equilibrium of a chain under any force in 1796. Inverted catenary arch Catenary arches are often used in the construction of kilns. To create the desired curve, the shape of a hanging chain of the desired dimensions is transferred to a form which is then used as a guide for the placement of bricks or other building material. The Gateway Arch in St. Louis, Missouri, United States is sometimes said to be an (inverted) catenary, but this is incorrect. It is close to a more general curve called a flattened catenary, with equation , which is a catenary if . While a catenary is the ideal shape for a freestanding arch of constant thickness, the Gateway Arch is narrower near the top. According to the U.S. National Historic Landmark nomination for the arch, it is a "weighted catenary" instead. Its shape corresponds to the shape that a weighted chain, having lighter links in the middle, would form. Catenary bridges In free-hanging chains, the force exerted is uniform with respect to length of the chain, and so the chain follows the catenary curve. The same is true of a simple suspension bridge or "catenary bridge," where the roadway follows the cable. A stressed ribbon bridge is a more sophisticated structure with the same catenary shape. However, in a suspension bridge with a suspended roadway, the chains or cables support the weight of the bridge, and so do not hang freely. In most cases the roadway is flat, so when the weight of the cable is negligible compared with the weight being supported, the force exerted is uniform with respect to horizontal distance, and the result is a parabola, as discussed below (although the term "catenary" is often still used, in an informal sense). If the cable is heavy then the resulting curve is between a catenary and a parabola. Anchoring of marine objects The catenary produced by gravity provides an advantage to heavy anchor rodes. An anchor rode (or anchor line) usually consists of chain or cable or both. Anchor rodes are used by ships, oil rigs, docks, floating wind turbines, and other marine equipment which must be anchored to the seabed. When the rope is slack, the catenary curve presents a lower angle of pull on the anchor or mooring device than would be the case if it were nearly straight. This enhances the performance of the anchor and raises the level of force it will resist before dragging. To maintain the catenary shape in the presence of wind, a heavy chain is needed, so that only larger ships in deeper water can rely on this effect. Smaller boats also rely on catenary to maintain maximum holding power. 
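As a numerical illustration of the catenary-versus-parabola distinction discussed above for suspended roadways, the following Python sketch compares the two curves when they are forced through the same vertex and endpoints. The standard form y = a·cosh(x/a) is assumed (it is derived in the next section), and the span and parameter values are arbitrary.

import math

a = 10.0          # catenary parameter (units of length), arbitrary
half_span = 15.0  # curves evaluated on x in [-half_span, half_span]

def catenary(x):
    # height above the vertex for y = a*cosh(x/a), shifted so the vertex is at 0
    return a * (math.cosh(x / a) - 1.0)

sag = catenary(half_span)   # endpoint height above the vertex

def parabola(x):
    # parabola through the same vertex and the same endpoints
    return sag * (x / half_span) ** 2

# The curves agree at the vertex and at the endpoints but differ in between:
# the catenary lies below the parabola, i.e. it is flatter near the vertex.
for x in (0.0, 5.0, 10.0, 15.0):
    print(f"x={x:5.1f}  catenary={catenary(x):8.3f}  parabola={parabola(x):8.3f}")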
Cable ferries and chain boats present a special case of marine vehicles moving although moored by the two catenaries each of one or more cables (wire ropes or chains) passing through the vehicle and moved along by motorized sheaves. The catenaries can be evaluated graphically. Mathematical description Equation The equation of a catenary in Cartesian coordinates has the form where is the hyperbolic cosine function, and where is the distance of the lowest point above the x axis. All catenary curves are similar to each other, since changing the parameter is equivalent to a uniform scaling of the curve. The Whewell equation for the catenary is where is the tangential angle and the arc length. Differentiating gives and eliminating gives the Cesàro equation where is the curvature. The radius of curvature is then which is the length of the normal between the curve and the -axis. Relation to other curves When a parabola is rolled along a straight line, the roulette curve traced by its focus is a catenary. The envelope of the directrix of the parabola is also a catenary. The involute from the vertex, that is the roulette traced by a point starting at the vertex when a line is rolled on a catenary, is the tractrix. Another roulette, formed by rolling a line on a catenary, is another line. This implies that square wheels can roll perfectly smoothly on a road made of a series of bumps in the shape of an inverted catenary curve. The wheels can be any regular polygon except a triangle, but the catenary must have parameters corresponding to the shape and dimensions of the wheels. Geometrical properties Over any horizontal interval, the ratio of the area under the catenary to its length equals , independent of the interval selected. The catenary is the only plane curve other than a horizontal line with this property. Also, the geometric centroid of the area under a stretch of catenary is the midpoint of the perpendicular segment connecting the centroid of the curve itself and the -axis. Science A moving charge in a uniform electric field travels along a catenary (which tends to a parabola if the charge velocity is much less than the speed of light ). The surface of revolution with fixed radii at either end that has minimum surface area is a catenary revolved about the -axis. Analysis Model of chains and arches In the mathematical model the chain (or cord, cable, rope, string, etc.) is idealized by assuming that it is so thin that it can be regarded as a curve and that it is so flexible any force of tension exerted by the chain is parallel to the chain. The analysis of the curve for an optimal arch is similar except that the forces of tension become forces of compression and everything is inverted. An underlying principle is that the chain may be considered a rigid body once it has attained equilibrium. Equations which define the shape of the curve and the tension of the chain at each point may be derived by a careful inspection of the various forces acting on a segment using the fact that these forces must be in balance if the chain is in static equilibrium. Let the path followed by the chain be given parametrically by where represents arc length and is the position vector. This is the natural parameterization and has the property that where is a unit tangent vector. A differential equation for the curve may be derived as follows. Let be the lowest point on the chain, called the vertex of the catenary. The slope of the curve is zero at since it is a minimum point. 
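In standard notation, the Cartesian equation referred to above is y = a·cosh(x/a), with a the parameter of the curve. The Python sketch below numerically checks the property stated above that, over any horizontal interval, the area under the catenary divided by its arc length equals a; the interval, the parameter value and the crude midpoint integration are arbitrary choices for illustration.

import math

a = 2.5
x0, x1 = -1.0, 4.0     # an arbitrary horizontal interval
n = 100_000            # midpoint-rule subintervals

def y(x):
    return a * math.cosh(x / a)

def dy(x):
    return math.sinh(x / a)

dx = (x1 - x0) / n
area = 0.0
length = 0.0
for i in range(n):
    xm = x0 + (i + 0.5) * dx
    area += y(xm) * dx                           # area under the curve
    length += math.sqrt(1.0 + dy(xm) ** 2) * dx  # arc length element

print(area / length)   # approximately equal to a
print(a)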
Assume is to the right of since the other case is implied by symmetry. The forces acting on the section of the chain from to are the tension of the chain at , the tension of the chain at , and the weight of the chain. The tension at is tangent to the curve at and is therefore horizontal without any vertical component and it pulls the section to the left so it may be written where is the magnitude of the force. The tension at is parallel to the curve at and pulls the section to the right. The tension at can be split into two components so it may be written , where is the magnitude of the force and is the angle between the curve at and the -axis (see tangential angle). Finally, the weight of the chain is represented by where is the weight per unit length and is the length of the segment of chain between and . The chain is in equilibrium so the sum of three forces is , therefore and and dividing these gives It is convenient to write which is the length of chain whose weight is equal in magnitude to the tension at . Then is an equation defining the curve. The horizontal component of the tension, is constant and the vertical component of the tension, is proportional to the length of chain between and the vertex. Derivation of equations for the curve The differential equation , given above, can be solved to produce equations for the curve. We will solve the equation using the boundary condition that the vertex is positioned at and . First, invoke the formula for arc length to get then separate variables to obtain A reasonably straightforward approach to integrate this is to use hyperbolic substitution, which gives (where is a constant of integration), and hence But , so which integrates as (with being the constant of integration satisfying the boundary condition). Since the primary interest here is simply the shape of the curve, the placement of the coordinate axes are arbitrary; so make the convenient choice of to simplify the result to For completeness, the relation can be derived by solving each of the and relations for , giving: so which can be rewritten as Alternative derivation The differential equation can be solved using a different approach. From it follows that and Integrating gives, and As before, the and -axes can be shifted so and can be taken to be 0. Then and taking the reciprocal of both sides Adding and subtracting the last two equations then gives the solution and Determining parameters In general the parameter is the position of the axis. The equation can be determined in this case as follows: Relabel if necessary so that is to the left of and let be the horizontal and be the vertical distance from to . Translate the axes so that the vertex of the catenary lies on the -axis and its height is adjusted so the catenary satisfies the standard equation of the curve and let the coordinates of and be and respectively. The curve passes through these points, so the difference of height is and the length of the curve from to is When is expanded using these expressions the result is so This is a transcendental equation in and must be solved numerically. Since is strictly monotonic on , there is at most one solution with and so there is at most one position of equilibrium. However, if both ends of the curve ( and ) are at the same level (), it can be shown that where L is the total length of the curve between and and is the sag (vertical distance between , and the vertex of the curve). 
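For the special case just mentioned, in which both supports are at the same level, the standard relations are L = 2a·sinh(H/(2a)) for the total length and s = a(cosh(H/(2a)) − 1) for the sag, where H is the horizontal distance between the supports; these are textbook results rather than quotations from this article. The transcendental equation for a can then be solved numerically, for example by bisection, as in this illustrative Python sketch with arbitrary numbers.

import math

H = 20.0   # horizontal span between supports at equal height
L = 25.0   # total cable length; must exceed H for a sagging cable

def length_for(a):
    # cable length predicted by the catenary parameter a
    return 2.0 * a * math.sinh(H / (2.0 * a))

# length_for is a decreasing function of a (a large a means a flat, taut curve),
# so the root of length_for(a) = L can be bracketed and found by bisection.
lo, hi = 0.1, 1.0e4    # brackets that enclose the root for these numbers
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if length_for(mid) > L:
        lo = mid       # predicted length too long: the parameter must be larger
    else:
        hi = mid
a = 0.5 * (lo + hi)

sag = a * (math.cosh(H / (2.0 * a)) - 1.0)
print(f"a   = {a:.4f}")
print(f"sag = {sag:.4f}")
print(f"length check = {length_for(a):.4f} (target {L})")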
It can also be shown that and where H is the horizontal distance between and which are located at the same level (). The horizontal traction force at and is , where is the weight per unit length of the chain or cable. Tension relations There is a simple relationship between the tension in the cable at a point and its - and/or - coordinate. Begin by combining the squares of the vector components of the tension: which (recalling that ) can be rewritten as But, as shown above, (assuming that ), so we get the simple relations Variational formulation Consider a chain of length suspended from two points of equal height and at distance . The curve has to minimize its potential energy (where is the weight per unit length) and is subject to the constraint The modified Lagrangian is therefore where is the Lagrange multiplier to be determined. As the independent variable does not appear in the Lagrangian, we can use the Beltrami identity where is an integration constant, in order to obtain a first integral This is an ordinary first order differential equation that can be solved by the method of separation of variables. Its solution is the usual hyperbolic cosine where the parameters are obtained from the constraints. Generalizations with vertical force Nonuniform chains If the density of the chain is variable then the analysis above can be adapted to produce equations for the curve given the density, or given the curve to find the density. Let denote the weight per unit length of the chain, then the weight of the chain has magnitude where the limits of integration are and . Balancing forces as in the uniform chain produces and and therefore Differentiation then gives In terms of and the radius of curvature this becomes Suspension bridge curve A similar analysis can be done to find the curve followed by the cable supporting a suspension bridge with a horizontal roadway. If the weight of the roadway per unit length is and the weight of the cable and the wire supporting the bridge is negligible in comparison, then the weight on the cable (see the figure in Catenary#Model of chains and arches) from to is where is the horizontal distance between and . Proceeding as before gives the differential equation This is solved by simple integration to get and so the cable follows a parabola. If the weight of the cable and supporting wires is not negligible then the analysis is more complex. Catenary of equal strength In a catenary of equal strength, the cable is strengthened according to the magnitude of the tension at each point, so its resistance to breaking is constant along its length. Assuming that the strength of the cable is proportional to its density per unit length, the weight, , per unit length of the chain can be written , where is constant, and the analysis for nonuniform chains can be applied. In this case the equations for tension are Combining gives and by differentiation where is the radius of curvature. The solution to this is In this case, the curve has vertical asymptotes and this limits the span to . Other relations are The curve was studied 1826 by Davies Gilbert and, apparently independently, by Gaspard-Gustave Coriolis in 1836. Recently, it was shown that this type of catenary could act as a building block of electromagnetic metasurface and was known as "catenary of equal phase gradient". Elastic catenary In an elastic catenary, the chain is replaced by a spring which can stretch in response to tension. The spring is assumed to stretch in accordance with Hooke's Law. 
Specifically, if is the natural length of a section of spring, then the length of the spring with tension applied has length where is a constant equal to , where is the stiffness of the spring. In the catenary the value of is variable, but ratio remains valid at a local level, so The curve followed by an elastic spring can now be derived following a similar method as for the inelastic spring. The equations for tension of the spring are and from which where is the natural length of the segment from to and is the weight per unit length of the spring with no tension. Write so Then from which Integrating gives the parametric equations Again, the and -axes can be shifted so and can be taken to be 0. So are parametric equations for the curve. At the rigid limit where is large, the shape of the curve reduces to that of a non-elastic chain. Other generalizations Chain under a general force With no assumptions being made regarding the force acting on the chain, the following analysis can be made. First, let be the force of tension as a function of . The chain is flexible so it can only exert a force parallel to itself. Since tension is defined as the force that the chain exerts on itself, must be parallel to the chain. In other words, where is the magnitude of and is the unit tangent vector. Second, let be the external force per unit length acting on a small segment of a chain as a function of . The forces acting on the segment of the chain between and are the force of tension at one end of the segment, the nearly opposite force at the other end, and the external force acting on the segment which is approximately . These forces must balance so Divide by and take the limit as to obtain These equations can be used as the starting point in the analysis of a flexible chain acting under any external force. In the case of the standard catenary, where the chain has weight per unit length. See also Catenary arch Chain fountain or self-siphoning beads Overhead catenary – power lines suspended over rail or tram vehicles Roulette (curve) – an elliptic/hyperbolic catenary Troposkein – the shape of a spun rope Weighted catenary Notes Bibliography Further reading External links Catenary curve calculator Catenary at The Geometry Center "Catenary" at Visual Dictionary of Special Plane Curves The Catenary - Chains, Arches, and Soap Films. Cable Sag Error Calculator – Calculates the deviation from a straight line of a catenary curve and provides derivation of the calculator and references. Dynamic as well as static cetenary curve equations derived – The equations governing the shape (static case) as well as dynamics (dynamic case) of a centenary is derived. Solution to the equations discussed. The straight line, the catenary, the brachistochrone, the circle, and Fermat Unified approach to some geodesics. Ira Freeman "A General Form of the Suspension Bridge Catenary" Bulletin of the AMS Roulettes (curve) Exponentials Analytic geometry
Catenary
[ "Mathematics" ]
4,230
[ "E (mathematical constant)", "Exponentials" ]
7,176
https://en.wikipedia.org/wiki/Cryogenics
In physics, cryogenics is the study of the production and behaviour of materials at very low temperatures. The 13th International Institute of Refrigeration's (IIR) International Congress of Refrigeration (held in Washington DC in 1971) endorsed a universal definition of "cryogenics" and "cryogenic" by accepting a threshold of 120 K (about −153 °C) to distinguish these terms from conventional refrigeration. This is a logical dividing line, since the normal boiling points of the so-called permanent gases (such as helium, hydrogen, neon, nitrogen, oxygen, and normal air) lie below 120 K, while the Freon refrigerants, hydrocarbons, and other common refrigerants have boiling points above 120 K. Discovery of superconducting materials with critical temperatures significantly above the boiling point of nitrogen has provided new interest in reliable, low-cost methods of producing high-temperature cryogenic refrigeration. The term "high temperature cryogenic" describes temperatures ranging from above the boiling point of liquid nitrogen, about 77 K (−196 °C), up to roughly −50 °C (223 K). Superconductivity was first observed by Heike Kamerlingh Onnes in 1911; the discovery was made possible by his liquefaction of helium on July 10, 1908, which made temperatures of only a few kelvins attainable. These first superconductive properties were observed in mercury at a temperature of 4.2 K. Cryogenicists use the Kelvin or Rankine temperature scale, both of which measure from absolute zero, rather than more usual scales such as Celsius, which measures from the freezing point of water at sea level, or Fahrenheit, which measures from the freezing point of a particular brine solution at sea level. Definitions and distinctions Cryogenics The branches of engineering that involve the study of very low temperatures (ultra low temperature i.e. below 123 K), how to produce them, and how materials behave at those temperatures. Cryobiology The branch of biology involving the study of the effects of low temperatures on organisms (most often for the purpose of achieving cryopreservation). Other applications include lyophilization (freeze-drying) of pharmaceutical components and medicines. Cryoconservation of animal genetic resources The conservation of genetic material with the intention of conserving a breed. The conservation of genetic material is not limited to non-humans. Many services provide genetic storage or the preservation of stem cells at birth. They may be used to study the generation of cell lines or for stem-cell therapy. Cryosurgery The branch of surgery applying cryogenic temperatures to destroy tissue, e.g. cancer cells. Commonly referred to as cryoablation. Cryoelectronics The study of electronic phenomena at cryogenic temperatures. Examples include superconductivity and variable-range hopping. Cryonics Cryopreserving humans and animals with the intention of future revival. "Cryogenics" is sometimes erroneously used to mean "cryonics" in popular culture and the press. Etymology The word cryogenics stems from Greek κρύος (cryos) – "cold" + γενής (genis) – "generating". Cryogenic fluids Cryogenic fluids with their boiling points in kelvins and degrees Celsius. Industrial applications Liquefied gases, such as liquid nitrogen and liquid helium, are used in many cryogenic applications. Liquid nitrogen is the most commonly used element in cryogenics and is legally purchasable around the world. Liquid helium is also commonly used and allows for the lowest attainable temperatures to be reached.
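Since the Kelvin, Rankine, Celsius and Fahrenheit scales all appear in this context, a small conversion sketch may be helpful. The 120 K threshold below is the IIR dividing line quoted above; the boiling points are approximate, and the Python function names are ad hoc rather than any standard API.

CRYOGENIC_THRESHOLD_K = 120.0      # IIR dividing line quoted above

def celsius_to_kelvin(t_c):
    return t_c + 273.15

def kelvin_to_celsius(t_k):
    return t_k - 273.15

def kelvin_to_rankine(t_k):
    return t_k * 9.0 / 5.0         # both scales start at absolute zero

def is_cryogenic(t_k):
    return t_k < CRYOGENIC_THRESHOLD_K

# Approximate normal boiling points of two common cryogens and one refrigerant
boiling_points_k = {"helium": 4.2, "nitrogen": 77.4, "ammonia": 239.8}
for substance, t_k in boiling_points_k.items():
    print(substance, round(kelvin_to_celsius(t_k), 1),
          round(kelvin_to_rankine(t_k), 1), is_cryogenic(t_k))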
These liquids may be stored in Dewar flasks, which are double-walled containers with a high vacuum between the walls to reduce heat transfer into the liquid. Typical laboratory Dewar flasks are spherical, made of glass and protected in a metal outer container. Dewar flasks for extremely cold liquids such as liquid helium have another double-walled container filled with liquid nitrogen. Dewar flasks are named after their inventor, James Dewar, the man who first liquefied hydrogen. Thermos bottles are smaller vacuum flasks fitted in a protective casing. Cryogenic barcode labels are used to mark Dewar flasks containing these liquids, and will not frost over down to −195 degrees Celsius. Cryogenic transfer pumps are the pumps used on LNG piers to transfer liquefied natural gas from LNG carriers to LNG storage tanks, as are cryogenic valves. Cryogenic processing The field of cryogenics advanced during World War II when scientists found that metals frozen to low temperatures showed more resistance to wear. Based on this theory of cryogenic hardening, the commercial cryogenic processing industry was founded in 1966 by Bill and Ed Busch. With a background in the heat treating industry, the Busch brothers founded a company in Detroit called CryoTech in 1966. Busch originally experimented with the possibility of increasing the life of metal tools to anywhere between 200% and 400% of the original life expectancy using cryogenic tempering instead of heat treating. This evolved in the late 1990s into the treatment of other parts. Cryogens, such as liquid nitrogen, are further used for specialty chilling and freezing applications. Some chemical reactions, like those used to produce the active ingredients for the popular statin drugs, must occur at low temperatures of approximately . Special cryogenic chemical reactors are used to remove reaction heat and provide a low temperature environment. The freezing of foods and biotechnology products, like vaccines, requires nitrogen in blast freezing or immersion freezing systems. Certain soft or elastic materials become hard and brittle at very low temperatures, which makes cryogenic milling (cryomilling) an option for some materials that cannot easily be milled at higher temperatures. Cryogenic processing is not a substitute for heat treatment, but rather an extension of the heating–quenching–tempering cycle. Normally, when an item is quenched, the final temperature is ambient. The only reason for this is that most heat treaters do not have cooling equipment. There is nothing metallurgically significant about ambient temperature. The cryogenic process continues this action from ambient temperature down to . In most instances the cryogenic cycle is followed by a heat tempering procedure. As all alloys do not have the same chemical constituents, the tempering procedure varies according to the material's chemical composition, thermal history and/or a tool's particular service application. The entire process takes 3–4 days. Fuels Another use of cryogenics is cryogenic fuels for rockets with liquid hydrogen as the most widely used example. Liquid oxygen (LOX) is even more widely used but as an oxidizer, not a fuel. NASA's workhorse Space Shuttle used cryogenic hydrogen/oxygen propellant as its primary means of getting into orbit. LOX is also widely used with RP-1 kerosene, a non-cryogenic hydrocarbon, such as in the rockets built for the Soviet space program by Sergei Korolev. 
Russian aircraft manufacturer Tupolev developed a version of its popular design Tu-154 with a cryogenic fuel system, known as the Tu-155. The plane uses a fuel referred to as liquefied natural gas or LNG, and made its first flight in 1989. Other applications Some applications of cryogenics: Nuclear magnetic resonance (NMR) is one of the most common methods to determine the physical and chemical properties of atoms by detecting the radio frequency absorbed and subsequent relaxation of nuclei in a magnetic field. This is one of the most commonly used characterization techniques and has applications in numerous fields. Primarily, the strong magnetic fields are generated by supercooling electromagnets, although there are spectrometers that do not require cryogens. In traditional superconducting solenoids, liquid helium is used to cool the inner coils because it has a boiling point of around 4 K at ambient pressure. Inexpensive metallic superconductors can be used for the coil wiring. So-called high-temperature superconducting compounds can be made to super conduct with the use of liquid nitrogen, which boils at around 77 K. Magnetic resonance imaging (MRI) is a complex application of NMR where the geometry of the resonances is deconvoluted and used to image objects by detecting the relaxation of protons that have been perturbed by a radio-frequency pulse in the strong magnetic field. This is most commonly used in health applications. Cryogenic electron microscopy (cryoEM) is a popular method in structural biology for elucidating the structures of proteins, cells, and other biological systems. Samples are plunge-frozen into a cryogen such as liquid ethane cooled by liquid nitrogen, and are then kept at liquid nitrogen temperature as they are inserted into an electron microscope for imaging. Electron microscopes are also themselves cooled by liquid nitrogen. In large cities, it is difficult to transmit power by overhead cables, so underground cables are used. But underground cables get heated and the resistance of the wire increases, leading to waste of power. Superconductors could be used to increase power throughput, although they would require cryogenic liquids such as nitrogen or helium to cool special alloy-containing cables to increase power transmission. Several feasibility studies have been performed and the field is the subject of an agreement within the International Energy Agency. Cryogenic gases are used in transportation and storage of large masses of frozen food. When very large quantities of food must be transported to regions like war zones, earthquake hit regions, etc., they must be stored for a long time, so cryogenic food freezing is used. Cryogenic food freezing is also helpful for large scale food processing industries. Many infrared (forward looking infrared) cameras require their detectors to be cryogenically cooled. Certain rare blood groups are stored at low temperatures, such as −165°C, at blood banks. Cryogenics technology using liquid nitrogen and CO2 has been built into nightclub effect systems to create a chilling effect and white fog that can be illuminated with colored lights. Cryogenic cooling is used to cool the tool tip at the time of machining in manufacturing process. It increases the tool life. Oxygen is used to perform several important functions in the steel manufacturing process. Many rockets and lunar landers use cryogenic gases as propellants. These include liquid oxygen, liquid hydrogen, and liquid methane. 
By freezing an automobile or truck tire in liquid nitrogen, the rubber is made brittle and can be crushed into small particles. These particles can be used again for other items. Experimental research on certain physics phenomena, such as spintronics and magnetotransport properties, requires cryogenic temperatures for the effects to be observable. Certain vaccines must be stored at cryogenic temperatures. For example, the Pfizer–BioNTech COVID-19 vaccine must be stored at ultra-cold temperatures of about −70 °C. (See cold chain.) Production Cryogenic cooling of devices and material is usually achieved via the use of liquid nitrogen, liquid helium, or a mechanical cryocooler (which uses high-pressure helium lines). Gifford-McMahon cryocoolers, pulse tube cryocoolers and Stirling cryocoolers are in wide use, with selection based on required base temperature and cooling capacity. A more recent development in cryogenics is the use of magnets as regenerators as well as refrigerators. These devices work on the principle known as the magnetocaloric effect. Detectors There are various cryogenic detectors which are used to detect particles. For cryogenic temperature measurement down to 30 K, Pt100 sensors, a type of resistance temperature detector (RTD), are used. For temperatures lower than 30 K, it is necessary to use a silicon diode for accuracy. See also Absolute zero Lowest temperature recorded on Earth Cryogenic grinding Flash freezing Frozen food References Further reading Haselden, G. G. (1971), Cryogenic fundamentals, Academic Press, New York. Cooling technology Industrial gases
Cryogenics
[ "Physics", "Chemistry" ]
2,422
[ "Chemical process engineering", "Applied and interdisciplinary physics", "Cryogenics", "Industrial gases" ]
7,184
https://en.wikipedia.org/wiki/C%2A-algebra
In mathematics, specifically in functional analysis, a C∗-algebra (pronounced "C-star") is a Banach algebra together with an involution satisfying the properties of the adjoint. A particular case is that of a complex algebra A of continuous linear operators on a complex Hilbert space with two additional properties: A is a topologically closed set in the norm topology of operators. A is closed under the operation of taking adjoints of operators. Another important class of non-Hilbert C*-algebras includes the algebra of complex-valued continuous functions on X that vanish at infinity, where X is a locally compact Hausdorff space. C*-algebras were first considered primarily for their use in quantum mechanics to model algebras of physical observables. This line of research began with Werner Heisenberg's matrix mechanics and in a more mathematically developed form with Pascual Jordan around 1933. Subsequently, John von Neumann attempted to establish a general framework for these algebras, which culminated in a series of papers on rings of operators. These papers considered a special class of C*-algebras that are now known as von Neumann algebras. Around 1943, the work of Israel Gelfand and Mark Naimark yielded an abstract characterisation of C*-algebras making no reference to operators on a Hilbert space. C*-algebras are now an important tool in the theory of unitary representations of locally compact groups, and are also used in algebraic formulations of quantum mechanics. Another active area of research is the program to obtain classification, or to determine the extent of which classification is possible, for separable simple nuclear C*-algebras. Abstract characterization We begin with the abstract characterization of C*-algebras given in the 1943 paper by Gelfand and Naimark. A C*-algebra, A, is a Banach algebra over the field of complex numbers, together with a map for with the following properties: It is an involution, for every x in A: For all x, y in A: For every complex number and every x in A: For all x in A: Remark. The first four identities say that A is a *-algebra. The last identity is called the C* identity and is equivalent to: which is sometimes called the B*-identity. For history behind the names C*- and B*-algebras, see the history section below. The C*-identity is a very strong requirement. For instance, together with the spectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure: A bounded linear map, π : A → B, between C*-algebras A and B is called a *-homomorphism if For x and y in A For x in A In the case of C*-algebras, any *-homomorphism π between C*-algebras is contractive, i.e. bounded with norm ≤ 1. Furthermore, an injective *-homomorphism between C*-algebras is isometric. These are consequences of the C*-identity. A bijective *-homomorphism π is called a C*-isomorphism, in which case A and B are said to be isomorphic. Some history: B*-algebras and C*-algebras The term B*-algebra was introduced by C. E. Rickart in 1946 to describe Banach *-algebras that satisfy the condition: for all x in the given B*-algebra. (B*-condition) This condition automatically implies that the *-involution is isometric, that is, . Hence, , and therefore, a B*-algebra is also a C*-algebra. Conversely, the C*-condition implies the B*-condition. This is nontrivial, and can be proved without using the condition . For these reasons, the term B*-algebra is rarely used in current terminology, and has been replaced by the term 'C*-algebra'. 
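The C*-identity discussed above, ‖x*x‖ = ‖x‖², can be checked numerically in the concrete algebra of n × n complex matrices (an example treated later in this article), where the involution is the conjugate transpose and the norm is the operator norm. The following NumPy sketch is purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def op_norm(m):
    # operator (spectral) norm: the largest singular value
    return np.linalg.norm(m, 2)

lhs = op_norm(x.conj().T @ x)   # ||x* x||
rhs = op_norm(x) ** 2           # ||x||^2
print(lhs, rhs)                 # agree up to floating-point error
assert np.isclose(lhs, rhs)

# The involution is isometric, as the B*-condition above requires:
assert np.isclose(op_norm(x.conj().T), op_norm(x))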
The term C*-algebra was introduced by I. E. Segal in 1947 to describe norm-closed subalgebras of B(H), namely, the space of bounded operators on some Hilbert space H. 'C' stood for 'closed'. In his paper Segal defines a C*-algebra as a "uniformly closed, self-adjoint algebra of bounded operators on a Hilbert space". Structure of C*-algebras C*-algebras have a large number of properties that are technically convenient. Some of these properties can be established by using the continuous functional calculus or by reduction to commutative C*-algebras. In the latter case, we can use the fact that the structure of these is completely determined by the Gelfand isomorphism. Self-adjoint elements Self-adjoint elements are those of the form . The set of elements of a C*-algebra A of the form forms a closed convex cone. This cone is identical to the elements of the form . Elements of this cone are called non-negative (or sometimes positive, even though this terminology conflicts with its use for elements of ) The set of self-adjoint elements of a C*-algebra A naturally has the structure of a partially ordered vector space; the ordering is usually denoted . In this ordering, a self-adjoint element satisfies if and only if the spectrum of is non-negative, if and only if for some . Two self-adjoint elements and of A satisfy if . This partially ordered subspace allows the definition of a positive linear functional on a C*-algebra, which in turn is used to define the states of a C*-algebra, which in turn can be used to construct the spectrum of a C*-algebra using the GNS construction. Quotients and approximate identities Any C*-algebra A has an approximate identity. In fact, there is a directed family {eλ}λ∈I of self-adjoint elements of A such that In case A is separable, A has a sequential approximate identity. More generally, A will have a sequential approximate identity if and only if A contains a strictly positive element, i.e. a positive element h such that hAh is dense in A. Using approximate identities, one can show that the algebraic quotient of a C*-algebra by a closed proper two-sided ideal, with the natural norm, is a C*-algebra. Similarly, a closed two-sided ideal of a C*-algebra is itself a C*-algebra. Examples Finite-dimensional C*-algebras The algebra M(n, C) of n × n matrices over C becomes a C*-algebra if we consider matrices as operators on the Euclidean space, Cn, and use the operator norm ||·|| on matrices. The involution is given by the conjugate transpose. More generally, one can consider finite direct sums of matrix algebras. In fact, all C*-algebras that are finite dimensional as vector spaces are of this form, up to isomorphism. The self-adjoint requirement means finite-dimensional C*-algebras are semisimple, from which fact one can deduce the following theorem of Artin–Wedderburn type: Theorem. A finite-dimensional C*-algebra, A, is canonically isomorphic to a finite direct sum where min A is the set of minimal nonzero self-adjoint central projections of A. Each C*-algebra, Ae, is isomorphic (in a noncanonical way) to the full matrix algebra M(dim(e), C). The finite family indexed on min A given by {dim(e)}e is called the dimension vector of A. This vector uniquely determines the isomorphism class of a finite-dimensional C*-algebra. In the language of K-theory, this vector is the positive cone of the K0 group of A. A †-algebra (or, more explicitly, a †-closed algebra) is the name occasionally used in physics for a finite-dimensional C*-algebra. 
The dagger, †, is used in the name because physicists typically use the symbol to denote a Hermitian adjoint, and are often not worried about the subtleties associated with an infinite number of dimensions. (Mathematicians usually use the asterisk, *, to denote the Hermitian adjoint.) †-algebras feature prominently in quantum mechanics, and especially quantum information science. An immediate generalization of finite dimensional C*-algebras are the approximately finite dimensional C*-algebras. C*-algebras of operators The prototypical example of a C*-algebra is the algebra B(H) of bounded (equivalently continuous) linear operators defined on a complex Hilbert space H; here x* denotes the adjoint operator of the operator x : H → H. In fact, every C*-algebra, A, is *-isomorphic to a norm-closed adjoint closed subalgebra of B(H) for a suitable Hilbert space, H; this is the content of the Gelfand–Naimark theorem. C*-algebras of compact operators Let H be a separable infinite-dimensional Hilbert space. The algebra K(H) of compact operators on H is a norm closed subalgebra of B(H). It is also closed under involution; hence it is a C*-algebra. Concrete C*-algebras of compact operators admit a characterization similar to Wedderburn's theorem for finite dimensional C*-algebras: Theorem. If A is a C*-subalgebra of K(H), then there exists Hilbert spaces {Hi}i∈I such that where the (C*-)direct sum consists of elements (Ti) of the Cartesian product Π K(Hi) with ||Ti|| → 0. Though K(H) does not have an identity element, a sequential approximate identity for K(H) can be developed. To be specific, H is isomorphic to the space of square summable sequences l2; we may assume that H = l2. For each natural number n let Hn be the subspace of sequences of l2 which vanish for indices k ≥ n and let en be the orthogonal projection onto Hn. The sequence {en}n is an approximate identity for K(H). K(H) is a two-sided closed ideal of B(H). For separable Hilbert spaces, it is the unique ideal. The quotient of B(H) by K(H) is the Calkin algebra. Commutative C*-algebras Let X be a locally compact Hausdorff space. The space of complex-valued continuous functions on X that vanish at infinity (defined in the article on local compactness) forms a commutative C*-algebra under pointwise multiplication and addition. The involution is pointwise conjugation. has a multiplicative unit element if and only if is compact. As does any C*-algebra, has an approximate identity. In the case of this is immediate: consider the directed set of compact subsets of , and for each compact let be a function of compact support which is identically 1 on . Such functions exist by the Tietze extension theorem, which applies to locally compact Hausdorff spaces. Any such sequence of functions is an approximate identity. The Gelfand representation states that every commutative C*-algebra is *-isomorphic to the algebra , where is the space of characters equipped with the weak* topology. Furthermore, if is isomorphic to as C*-algebras, it follows that and are homeomorphic. This characterization is one of the motivations for the noncommutative topology and noncommutative geometry programs. C*-enveloping algebra Given a Banach *-algebra A with an approximate identity, there is a unique (up to C*-isomorphism) C*-algebra E(A) and *-morphism π from A into E(A) that is universal, that is, every other continuous *-morphism factors uniquely through π. The algebra E(A) is called the C*-enveloping algebra of the Banach *-algebra A. 
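Returning to the commutative algebra of continuous functions vanishing at infinity described above: for X = R, the approximate identity built from compactly supported functions that are identically 1 on growing compact sets can be imitated numerically. The NumPy sketch below works on a finite sample grid, so it is only a cartoon of the construction; the particular function f and the taper of the bumps are arbitrary choices.

import numpy as np

x = np.linspace(-50.0, 50.0, 20001)  # finite stand-in for the real line
f = np.exp(-np.abs(x) / 5.0)         # a continuous function vanishing at infinity

def bump(n):
    # identically 1 on [-n, n], linear taper to 0 on [n, n + 1], zero beyond
    return np.clip(n + 1.0 - np.abs(x), 0.0, 1.0)

# As the compact sets [-n, n] grow, the sup-norm distance between bump(n)*f and f
# shrinks, which is the defining property of an approximate identity.
for n in (1, 5, 10, 20, 40):
    err = np.max(np.abs(bump(n) * f - f))
    print(n, err)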
Of particular importance is the C*-algebra of a locally compact group G. This is defined as the enveloping C*-algebra of the group algebra of G. The C*-algebra of G provides context for general harmonic analysis of G in the case G is non-abelian. In particular, the dual of a locally compact group is defined to be the primitive ideal space of the group C*-algebra. See spectrum of a C*-algebra. Von Neumann algebras Von Neumann algebras, known as W* algebras before the 1960s, are a special kind of C*-algebra. They are required to be closed in the weak operator topology, which is weaker than the norm topology. The Sherman–Takeda theorem implies that any C*-algebra has a universal enveloping W*-algebra, such that any homomorphism to a W*-algebra factors through it. Type for C*-algebras A C*-algebra A is of type I if and only if for all non-degenerate representations π of A the von Neumann algebra π(A) (that is, the bicommutant of π(A)) is a type I von Neumann algebra. In fact it is sufficient to consider only factor representations, i.e. representations π for which π(A) is a factor. A locally compact group is said to be of type I if and only if its group C*-algebra is type I. However, if a C*-algebra has non-type I representations, then by results of James Glimm it also has representations of type II and type III. Thus for C*-algebras and locally compact groups, it is only meaningful to speak of type I and non type I properties. C*-algebras and quantum field theory In quantum mechanics, one typically describes a physical system with a C*-algebra A with unit element; the self-adjoint elements of A (elements x with x* = x) are thought of as the observables, the measurable quantities, of the system. A state of the system is defined as a positive functional on A (a C-linear map φ : A → C with φ(u*u) ≥ 0 for all u ∈ A) such that φ(1) = 1. The expected value of the observable x, if the system is in state φ, is then φ(x). This C*-algebra approach is used in the Haag–Kastler axiomatization of local quantum field theory, where every open set of Minkowski spacetime is associated with a C*-algebra. See also Banach algebra Banach *-algebra *-algebra Hilbert C*-module Operator K-theory Operator system, a unital subspace of a C*-algebra that is *-closed. Gelfand–Naimark–Segal construction Jordan operator algebra Notes References . An excellent introduction to the subject, accessible for those with a knowledge of basic functional analysis. . This book is widely regarded as a source of new research material, providing much supporting intuition, but it is difficult. . This is a somewhat dated reference, but is still considered as a high-quality technical exposition. It is available in English from North Holland press. . . Mathematically rigorous reference which provides extensive physics background. . . Functional analysis
C*-algebra
[ "Mathematics" ]
3,258
[ "Functional analysis", "Mathematical objects", "Functions and mappings", "Mathematical relations" ]
7,193
https://en.wikipedia.org/wiki/Commutator
In mathematics, the commutator gives an indication of the extent to which a certain binary operation fails to be commutative. There are different definitions used in group theory and ring theory. Group theory The commutator of two elements, and , of a group , is the element . This element is equal to the group's identity if and only if and commute (that is, if and only if ). The set of all commutators of a group is not in general closed under the group operation, but the subgroup of G generated by all commutators is closed and is called the derived group or the commutator subgroup of G. Commutators are used to define nilpotent and solvable groups and the largest abelian quotient group. The definition of the commutator above is used throughout this article, but many group theorists define the commutator as . Using the first definition, this can be expressed as . Identities (group theory) Commutator identities are an important tool in group theory. The expression denotes the conjugate of by , defined as . and and and Identity (5) is also known as the Hall–Witt identity, after Philip Hall and Ernst Witt. It is a group-theoretic analogue of the Jacobi identity for the ring-theoretic commutator (see next section). N.B., the above definition of the conjugate of by is used by some group theorists. Many other group theorists define the conjugate of by as . This is often written . Similar identities hold for these conventions. Many identities that are true modulo certain subgroups are also used. These can be particularly useful in the study of solvable groups and nilpotent groups. For instance, in any group, second powers behave well: If the derived subgroup is central, then Ring theory Rings often do not support division. Thus, the commutator of two elements a and b of a ring (or any associative algebra) is defined differently by The commutator is zero if and only if a and b commute. In linear algebra, if two endomorphisms of a space are represented by commuting matrices in terms of one basis, then they are so represented in terms of every basis. By using the commutator as a Lie bracket, every associative algebra can be turned into a Lie algebra. The anticommutator of two elements and of a ring or associative algebra is defined by Sometimes is used to denote anticommutator, while is then used for commutator. The anticommutator is used less often, but can be used to define Clifford algebras and Jordan algebras and in the derivation of the Dirac equation in particle physics. The commutator of two operators acting on a Hilbert space is a central concept in quantum mechanics, since it quantifies how well the two observables described by these operators can be measured simultaneously. The uncertainty principle is ultimately a theorem about such commutators, by virtue of the Robertson–Schrödinger relation. In phase space, equivalent commutators of function star-products are called Moyal brackets and are completely isomorphic to the Hilbert space commutator structures mentioned. Identities (ring theory) The commutator has the following properties: Lie-algebra identities Relation (3) is called anticommutativity, while (4) is the Jacobi identity. Additional identities If is a fixed element of a ring R, identity (1) can be interpreted as a Leibniz rule for the map given by . In other words, the map adA defines a derivation on the ring R. Identities (2), (3) represent Leibniz rules for more than two factors, and are valid for any derivation. Identities (4)–(6) can also be interpreted as Leibniz rules. 
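In the usual notation, the group commutator used in this article is [g, h] = g⁻¹h⁻¹gh and the ring commutator is [a, b] = ab − ba. Both, together with anticommutativity and the Jacobi identity mentioned above, can be checked numerically for matrices; the NumPy sketch below uses random matrices and is illustrative only.

import numpy as np

rng = np.random.default_rng(1)
n = 4
a, b, c = (rng.standard_normal((n, n)) for _ in range(3))

def comm(x, y):
    # ring commutator [x, y] = xy - yx
    return x @ y - y @ x

# Anticommutativity: [a, b] = -[b, a]
assert np.allclose(comm(a, b), -comm(b, a))

# Jacobi identity: [a, [b, c]] + [b, [c, a]] + [c, [a, b]] = 0
jacobi = comm(a, comm(b, c)) + comm(b, comm(c, a)) + comm(c, comm(a, b))
assert np.allclose(jacobi, np.zeros((n, n)))

# Group commutator of two invertible matrices: g^-1 h^-1 g h.
# It equals the identity matrix exactly when g and h commute.
g = np.eye(n) + 0.1 * a
h = np.eye(n) + 0.1 * b
group_comm = np.linalg.inv(g) @ np.linalg.inv(h) @ g @ h
print(np.allclose(group_comm, np.eye(n)))   # generally False for random g and h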
Identities (7), (8) express Z-bilinearity. From identity (9), one finds that the commutator of integer powers of ring elements is: Some of the above identities can be extended to the anticommutator using the above ± subscript notation. For example: Exponential identities Consider a ring or algebra in which the exponential can be meaningfully defined, such as a Banach algebra or a ring of formal power series. In such a ring, Hadamard's lemma applied to nested commutators gives: (For the last expression, see Adjoint derivation below.) This formula underlies the Baker–Campbell–Hausdorff expansion of log(exp(A) exp(B)). A similar expansion expresses the group commutator of expressions (analogous to elements of a Lie group) in terms of a series of nested commutators (Lie brackets), Graded rings and algebras When dealing with graded algebras, the commutator is usually replaced by the graded commutator, defined in homogeneous components as Adjoint derivation Especially if one deals with multiple commutators in a ring R, another notation turns out to be useful. For an element , we define the adjoint mapping by: This mapping is a derivation on the ring R: By the Jacobi identity, it is also a derivation over the commutation operation: Composing such mappings, we get for example and We may consider itself as a mapping, , where is the ring of mappings from R to itself with composition as the multiplication operation. Then is a Lie algebra homomorphism, preserving the commutator: By contrast, it is not always a ring homomorphism: usually . General Leibniz rule The general Leibniz rule, expanding repeated derivatives of a product, can be written abstractly using the adjoint representation: Replacing by the differentiation operator , and by the multiplication operator , we get , and applying both sides to a function g, the identity becomes the usual Leibniz rule for the nth derivative . See also Anticommutativity Associator Baker–Campbell–Hausdorff formula Canonical commutation relation Centralizer a.k.a. commutant Derivation (abstract algebra) Moyal bracket Pincherle derivative Poisson bracket Ternary commutator Three subgroups lemma Notes References Further reading External links Abstract algebra Group theory Binary operations Mathematical identities
Commutator
[ "Mathematics" ]
1,303
[ "Mathematical theorems", "Binary operations", "Binary relations", "Group theory", "Fields of abstract algebra", "Mathematical relations", "Abstract algebra", "Mathematical identities", "Mathematical problems", "Algebra" ]
7,214
https://en.wikipedia.org/wiki/Callisto%20%28mythology%29
In Greek mythology, Callisto (; ) was a nymph, or the daughter of King Lycaon; the myth varies in such details. She was believed to be one of the followers of Artemis (Diana for the Romans) who attracted Zeus. Many versions of Callisto's story survive. According to some writers, Zeus transformed himself into the figure of Artemis to pursue Callisto, and she slept with him believing Zeus to be Artemis. She became pregnant and when this was eventually discovered, she was expelled from Artemis's group, after which a furious Hera, the wife of Zeus, transformed her into a bear, although in some versions, Artemis is the one to give her an ursine form. Later, just as she was about to be killed by her son when he was hunting, she was set among the stars as Ursa Major ("the Great Bear") by Zeus. She was the bear-mother of the Arcadians, through her son Arcas by Zeus. In other accounts, the birth mother of Arcas was called Megisto, daughter of Ceteus, son of Lycaon, or else Themisto, daughter of Inachus. The fourth Galilean moon of Jupiter and a main belt asteroid are named after Callisto. Mythology As a follower of Artemis, Callisto, who Hesiod said was the daughter of Lycaon, king of Arcadia, took a vow to remain a virgin, as did all the nymphs of Artemis. According to Hesiod, she was seduced by Zeus, and of the consequences that followed: [Callisto] chose to occupy herself with wild-beasts in the mountains together with Artemis, and, when she was seduced by Zeus, continued some time undetected by the goddess, but afterwards, when she was already with child, was seen by her bathing and so discovered. Upon this, the goddess was enraged and changed her into a beast. Thus she became a bear and gave birth to a son called Arcas. But while she was in the mountains, she was hunted by some goat-herds and given up with her babe to Lycaon. Some while after, she thought fit to go into the forbidden precinct of Zeus, not knowing the law, and being pursued by her own son and the Arcadians, was about to be killed because of the said law; but Zeus delivered her because of her connection with him and put her among the stars, giving her the name Bear because of the misfortune which had befallen her. Eratosthenes also mentions a variation in which the virginal companion of Artemis that was seduced by Zeus and eventually transformed into the constellation Ursa Minor was named Phoenice instead. According to Ovid, it was Jupiter who took the form of Diana so that he might evade his wife Juno's detection, forcing himself upon Callisto while she was separated from Diana and the other nymphs. Callisto recognized that something was wrong the moment Jupiter started giving her "non-virginal kisses", but by that point it was too late, and even though she fought him off, he overpowered her. The real Diana arrived in the scene soon after and called Callisto to her, only for the girl to run away in fear she was Jupiter, until she noticed the nymphs accompanying the goddess. Callisto's subsequent pregnancy was discovered several months later while she was bathing with Diana and her fellow nymphs. Diana became enraged when she saw that Callisto was pregnant and expelled her from the group. Callisto later gave birth to Arcas. Juno then took the opportunity to avenge her wounded pride and transformed the nymph into a bear. Sixteen years later Callisto, still a bear, encountered her son Arcas hunting in the forest. 
Just as Arcas was about to kill his own mother with his javelin, Jupiter averted the tragedy by placing mother and son amongst the stars as Ursa Major and Minor, respectively. Juno, enraged that her attempt at revenge had been frustrated, appealed to Tethys that the two might never meet her waters, thus providing a poetic explanation for the constellations' circumpolar positions in ancient times. According to Hyginus, the origin of the transformation of Zeus, with its lesbian overtones, was from a rendition of the tale in a comedy in a lost work by the Attic comedian Amphis where Zeus embraced Callisto as Artemis and she, after being questioned by Artemis for her pregnancy, blamed the goddess, thinking she had impregnated her; Artemis then changed her into a bear. She was caught by some Aetolians and brought to Lycaon, her father. Still a bear, she rushed with her son Arcas into a temple of Zeus as the Arcadians followed to kill them; Zeus turned mother and son into constellations. Hyginus also records a version where Hera changed Callisto for sleeping with Zeus, and Artemis later slew her while hunting, not recognizing her. In another of the versions Hyginus records, it was Zeus who turned Callisto into a bear, to conceal her from Juno, who had noticed what her husband was doing. Juno then pointed Callisto to Diana, who proceeded to shoot her with her arrows. According to the mythographer Apollodorus, Zeus forced himself on Callisto when he disguised himself as Artemis or Apollo, in order to lure the sworn maiden into his embrace. Apollodorus is the only author to mention Apollo, but implies that it is not a rarity. Callisto was then turned into a bear by Zeus trying to hide her from Hera, but Hera asked Artemis to shoot the animal, and Artemis complied. Zeus then took the child, named it Arcas, and gave it to Maia to bring up in Arcadia; and Callisto he turned into a star and called it the Bear. Alternatively, Artemis killed Callisto for not protecting her virginity. Nonnus also writes that a "female paramour entered a woman's bed." Either Artemis "slew Kallisto with a shot of her silver bow," according to Homer, in order to please Juno (Hera) as Pausanias and Pseudo-Apollodorus write or later Arcas, the eponym of Arcadia, nearly killed his bear-mother, when she had wandered into the forbidden precinct of Zeus. In every case, Zeus placed them both in the sky as the constellations Ursa Major, called Arktos (), the Bear, by Greeks, and Ursa Minor. According to John Tzetzes, Charon of Lampsacus wrote that Callisto's son Arcas had been fathered not by Zeus but rather by Apollo. As a constellation, Ursa Major (who was also known as Helice, from an alternative origin story of the constellation) told Demeter, when the goddess asked the stars whether they knew anything about her daughter Persephone's abduction, to ask Helios the sun god, for he knew the deeds of the day well, while the night was blameless. Origin of the myth The name Kalliste (), "most beautiful", may be recognized as an epithet of the goddess herself, though none of the inscriptions at Athens that record priests of Artemis Kalliste (), date before the third century BCE. Artemis Kalliste was worshiped in Athens in a shrine which lay outside the Dipylon gate, by the side of the road to the Academy. W. S. Ferguson suggested that Artemis Soteira and Artemis Kalliste were joined in a common cult administered by a single priest. The bearlike character of Artemis herself was a feature of the Brauronia. 
It has been suggested that the myths of Artemis' nymphs breaking their vows were originally about Artemis herself, before her characterization shifted to that of a sworn virgin who fiercely defends her chastity. The myth in Catasterismi may be derived from the fact that a set of constellations appear close together in the sky, in and near the Zodiac sign of Libra, namely Ursa Minor, Ursa Major, Boötes, and Virgo. The constellation Boötes, was explicitly identified in the Hesiodic Astronomia () as Arcas, the "Bear-warden" (Arktophylax; ): He is Arkas the son of Kallisto and Zeus, and he lived in the country about Lykaion. After Zeus had seduced Kallisto, Lykaon, pretending not to know of the matter, entertained Zeus, as Hesiod says, and set before him on the table the babe [Arkas] which he had cut up. The stars of Ursa Major were all circumpolar in Athens of 400 BCE, and all but the stars in the Great Bear's left foot were circumpolar in Ovid's Rome, in the first century CE. Now, however, due to the precession of the equinoxes, the feet of the Great Bear constellation do sink below the horizon from Rome and especially from Athens; however, Ursa Minor (Arcas) does remain completely above the horizon, even from latitudes as far south as Honolulu and Hong Kong. According to Julien d'Huy, who used phylogenetic and statistical tools, the story could be a recent transformation of a Palaeolithic myth. In art Callisto's story was sometimes depicted in classical art, where the moment of transformation into a bear was the most popular. From the Renaissance on a series of major history paintings as well as many smaller cabinet paintings and book illustrations, usually called "Diana and Callisto", depicted the traumatic moment of discovery of the pregnancy, as the goddess and her nymphs bathed in a pool, following Ovid's account. The subject's attraction was undoubtedly mainly the opportunity it offered for a group of several females to be shown largely nude. Titian's Diana and Callisto (1556–1559), was the greatest (though not the first) of these, quickly disseminated by a print by Cornelius Cort. Here, as in most subsequent depictions, Diana points angrily, as Callisto is held by two nymphs, who may be pulling off what little clothing remains on her. Other versions include one by Rubens, and Diana Bathing with her Nymphs with Actaeon and Callisto by Rembrandt, which unusually combines the moment with the arrival of Actaeon. The basic composition is rather unusually consistent. Carlo Ridolfi said there was a version by Giorgione, who died in 1510, though his many attributions to Giorgione of paintings that are now lost are treated with suspicion by scholars. Other, less dramatic, treatments before Titian established his composition are by Palma Vecchio and Dosso Dossi. Annibale Carracci's The Loves of the Gods includes an image of Juno urging Diana to shoot Callisto in ursine form. Although Ovid places the discovery in the ninth month of Callisto's pregnancy, in paintings she is generally shown with a rather modest bump for late pregnancy. With the Visitation in religious art, this was the leading recurring subject in history painting that required showing pregnancy in art, which Early Modern painters still approached with some caution. In any case, the narrative required that the rest of the group had not previously noticed the pregnancy. 
Callisto being seduced by Zeus/Jupiter in disguise was also a popular subject, usually called "Jupiter and Callisto"; it was the clearest common subject with lesbian lovers from classical mythology. The two lovers are usually shown happily embracing in a bower. The violence described by Ovid as following Callisto's realization of what is going on is rarely shown. In versions before about 1700 Callisto may show some doubt about what is going on, as in the versions by Rubens. It was especially popular in the 18th century, when depictions were increasingly erotic; François Boucher painted several versions. During the Nazi occupation of France, resistance poet Robert Desnos wrote a collection of poems entitled Calixto suivi de contrée, where he used the myth of Callisto as a symbol for beauty imprisoned beneath ugliness: a metaphor for France under the German occupation. Aeschylus' tragedy Callisto is lost. However, Callisto rejoined the dramatic tradition in the Baroque period when Francesco Cavalli composed La Calisto in 1651. See also Baucis and Philemon Lilaeus Rhodopis and Euthynicus Syceus Titanis Notes References Brigstocke, Hugh; Italian and Spanish Paintings in the National Gallery of Scotland, 2nd Edn, 1993, National Galleries of Scotland, "Gods": Aghion I., Barbillon C., Lissarrague, F., "Callisto", in Gods and Heroes of Classical Antiquity, Flammarion Iconographic Guides, pp. 77–78, 1996, Hall, James, "Diana: 5", in Hall's Dictionary of Subjects and Symbols in Art, pp. 102–103, 1996 (2nd edn.), John Murray, Maurus Servius Honoratus. In Vergilii carmina comentarii. Servii Grammatici qui feruntur in Vergilii carmina commentarii; recensuerunt Georgius Thilo et Hermannus Hagen. Georgius Thilo. Leipzig. B. G. Teubner. 1881. Further reading Pseudo-Apollodorus. Bibliotheke III.8.2. Hyginus, attrib., Poeticon astronomicon, II.1: the Great Bear. Scholia to Lycophron's Alexandra, marginal notes by Isaak and Ioannis Tzetzes and others from the Greek edition of Eduard Scheer (Weidmann 1881). Online version at the Topos Text Project. Greek text available on Archive.org External links Hesiod, Astronomy, quoted by the Pseudo-Eratosthenes, Catasterismi: e-text (English) Theoi Project – Kallisto Richard Wilson's 'Landscape with Diana and Callisto' at the Lady Lever Art Gallery Warburg Institute Iconographic Database (ca 220 images of Callisto) Nymphs Mythological rape victims Mythological bears Mortal women of Zeus Princesses in Greek mythology Metamorphoses into animals in Greek mythology Arcadian mythology Metamorphoses characters Deeds of Artemis LGBTQ themes in Greek mythology Greek feminine given names Epithets of Artemis Deeds of Hera Deeds of Zeus Retinue of Artemis Ursa Major
Callisto (mythology)
[ "Astronomy" ]
3,048
[ "Ursa Major", "Constellations" ]
7,225
https://en.wikipedia.org/wiki/Chemical%20affinity
In chemical physics and physical chemistry, chemical affinity is the electronic property by which dissimilar chemical species are capable of forming chemical compounds. Chemical affinity can also refer to the tendency of an atom or compound to combine by chemical reaction with atoms or compounds of unlike composition. History Early theories The idea of affinity is extremely old. Many attempts have been made at identifying its origins. The majority of such attempts, however, except in a general manner, end in futility since "affinities" lie at the basis of all magic, thereby pre-dating science. Physical chemistry, however, was one of the first branches of science to study and formulate a "theory of affinity". The name affinitas was first used in the sense of chemical relation by German philosopher Albertus Magnus near the year 1250. Later, figures such as Robert Boyle, John Mayow, Johann Glauber, Isaac Newton, and Georg Stahl put forward ideas on elective affinity in attempts to explain how heat is evolved during combustion reactions. The term affinity has been used figuratively since c. 1600 in discussions of structural relationships in chemistry, philology, etc., and reference to "natural attraction" is from 1616. "Chemical affinity", historically, has referred to the "force" that causes chemical reactions, as well as, more generally and earlier, the "tendency to combine" of any pair of substances. The broad definition, used generally throughout history, is that chemical affinity is that whereby substances enter into or resist decomposition. The modern term chemical affinity is a somewhat modified variation of its eighteenth-century precursor "elective affinity" or elective attractions, a term that was used by the 18th-century chemistry lecturer William Cullen. Whether Cullen coined the phrase is not clear, but his usage seems to predate most others, and it rapidly became widespread across Europe, being used in particular by the Swedish chemist Torbern Olof Bergman throughout his 1775 book. Affinity theories were used in one way or another by most chemists from around the middle of the 18th century into the 19th century to explain and organise the different combinations into which substances could enter and from which they could be retrieved. Antoine Lavoisier, in his famed 1789 Traité Élémentaire de Chimie (Elements of Chemistry), refers to Bergman's work and discusses the concept of elective affinities or attractions. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Reactions by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world. According to Prigogine, the term was introduced and developed by Théophile de Donder. Johann Wolfgang von Goethe used the concept in his novel Elective Affinities (1809). Visual representations The affinity concept was very closely linked to the visual representation of substances on a table. The first-ever affinity table, which was based on displacement reactions, was published in 1718 by the French chemist Étienne François Geoffroy. Geoffroy's name is best known in connection with these tables of "affinities" (tables des rapports), which were first presented to the French Academy of Sciences in 1718 and 1720. 
During the 18th century many versions of the table were proposed, with leading chemists like Torbern Bergman in Sweden and Joseph Black in Scotland adapting it to accommodate new chemical discoveries. All the tables were essentially lists, prepared by collating observations on the actions of substances one upon another, showing the varying degrees of affinity exhibited by analogous bodies for different reagents. Crucially, the table was the central graphic tool used to teach chemistry to students, and its visual arrangement was often combined with other kinds of diagrams. Joseph Black, for example, used the table in combination with chiastic and circlet diagrams to visualise the core principles of chemical affinity. Affinity tables were used throughout Europe until the early 19th century, when they were displaced by affinity concepts introduced by Claude Berthollet. Modern conceptions In chemical physics and physical chemistry, chemical affinity is the electronic property by which dissimilar chemical species are capable of forming chemical compounds. Chemical affinity can also refer to the tendency of an atom or compound to combine by chemical reaction with atoms or compounds of unlike composition. In modern terms, we relate affinity to the phenomenon whereby certain atoms or molecules have the tendency to aggregate or bond. For example, in the 1919 book Chemistry of Human Life physician George W. Carey states that, "Health depends on a proper amount of iron phosphate Fe3(PO4)2 in the blood, for the molecules of this salt have chemical affinity for oxygen and carry it to all parts of the organism." In this antiquated context, chemical affinity is sometimes found synonymous with the term "magnetic attraction". Many writings, up until about 1925, also refer to a "law of chemical affinity". Ilya Prigogine summarized the concept of affinity, saying, "All chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish." Thermodynamics The present IUPAC definition is that affinity A is the negative partial derivative of Gibbs free energy G with respect to extent of reaction ξ at constant pressure and temperature. That is, A = −(∂G/∂ξ)p,T. It follows that affinity is positive for spontaneous reactions. In 1923, the Belgian mathematician and physicist Théophile de Donder derived a relation between affinity and the Gibbs free energy of a chemical reaction. Through a series of derivations, de Donder showed that if we consider a mixture of chemical species with the possibility of chemical reaction, the affinity equals the negative of the Gibbs energy change per unit of reaction, A = −(∂G/∂ξ)p,T = −Σi νiμi, where νi and μi are the stoichiometric numbers and chemical potentials of the reacting species. With the writings of Théophile de Donder as precedent, Ilya Prigogine and Defay in Chemical Thermodynamics (1954) defined chemical affinity as the rate of change of the uncompensated heat of reaction Q′ as the reaction progress variable or reaction extent ξ grows infinitesimally: A = dQ′/dξ. This definition is useful for quantifying the factors responsible both for the state of equilibrium systems (where A = 0), and for changes of state of non-equilibrium systems (where A ≠ 0). See also Chemistry Chemical bond Electronegativity Electron affinity Étienne François Geoffroy — Geoffroy's 1718 Affinity Table Valency Affinity chromatography Affinity electrophoresis References Literature External links William Whewell. "Establishment and Development of the Idea of Chemical Affinity". History of Scientific Ideas. 2:15ff. 
Chemical Affinity and Absolute Zero - 1920 Nobel Prize in Chemistry Presentation Speech by Gerard De Geer Physical chemistry Jacobus Henricus van 't Hoff
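The relations mentioned in the Thermodynamics section above can be collected in one place. The LaTeX fragment below states them in standard modern textbook notation; this notation is an assumption of presentation, not a quotation of de Donder's or Prigogine's original symbols.

```latex
% Summary of the affinity relations discussed in the Thermodynamics section above,
% written in conventional modern notation (stoichiometric numbers nu_i, chemical
% potentials mu_i), not in the original 1923/1954 symbols.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  A \equiv -\left(\frac{\partial G}{\partial \xi}\right)_{p,T}
         = -\sum_i \nu_i \mu_i ,
  \qquad
  \mathrm{d}Q' = A\,\mathrm{d}\xi \ge 0 ,
\]
so that at constant $T$ and $p$ the Gibbs energy can only decrease,
$\mathrm{d}G = -A\,\mathrm{d}\xi \le 0$: the reaction runs forward while $A>0$,
backward while $A<0$, and stops at equilibrium, where $A=0$.
\end{document}
```

Read this way, the vanishing of the affinities at equilibrium quoted from Prigogine above is simply the statement that the Gibbs energy has reached its minimum with respect to the extent of reaction.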
Chemical affinity
[ "Physics", "Chemistry" ]
1,362
[ "Physical chemistry", "Applied and interdisciplinary physics", "nan" ]
7,237
https://en.wikipedia.org/wiki/Common%20Language%20Infrastructure
The Common Language Infrastructure (CLI) is an open specification and technical standard originally developed by Microsoft and standardized by ISO/IEC (ISO/IEC 23271) and Ecma International (ECMA 335) that describes executable code and a runtime environment that allows multiple high-level languages to be used on different computer platforms without being rewritten for specific architectures. This implies it is platform-agnostic. The .NET Framework, .NET and Mono are implementations of the CLI. The metadata format is also used to specify the API definitions exposed by the Windows Runtime. Overview Among other things, the CLI specification describes the following five aspects: The Common Type System (CTS) A set of data types and operations that are shared by all CTS-compliant programming languages. The Metadata Information about program structure is language-agnostic, so that it can be referenced between languages and tools, making it easy to work with code written in a language the developer is not using. The Common Language Specification (CLS) The CLS, a subset of the CTS, is a set of rules to which components developed with/for the supported languages must adhere. These rules apply to consumers (developers who are programmatically accessing a component that is CLS-compliant), frameworks (developers who are using a language compiler to create CLS-compliant libraries), and extenders (developers who are creating a tool such as a language compiler or a code parser that creates CLS-compliant components). The Virtual Execution System (VES) The VES loads and executes CLI-compatible programs, using the metadata to combine separately generated pieces of code at runtime. All compatible languages compile to Common Intermediate Language (CIL), which is an intermediate language that is abstracted from the platform hardware. When the code is executed, the platform-specific VES will compile the CIL to the machine language according to the specific hardware and operating system. In the CLI standard initially developed by Microsoft, the VES is implemented by the Common Language Runtime (CLR). The Standard Libraries A set of libraries providing many common functions, such as file reading and writing. Their core is the Base Class Library (BCL). Standardization and licensing In August 2000, Microsoft, Hewlett-Packard, Intel, and others worked to standardize CLI. By December 2001, it was ratified by Ecma, with ISO/IEC standardization following in April 2003. Microsoft and its partners hold patents for CLI. Ecma and ISO/IEC require that all patents essential to implementation be made available under "reasonable and non-discriminatory (RAND) terms." It is common for RAND licensing to require some royalty payment, which could be a cause for concern with Mono. However, neither Microsoft nor its partners have identified any patents essential to CLI implementations subject to RAND terms. Microsoft later added C# and CLI to the list of specifications that the Microsoft Community Promise applies to, so anyone can safely implement specified editions of the standards without fearing a patent lawsuit from Microsoft. To implement the CLI standard requires conformance to one of the supported and defined profiles of the standard, the minimum of which is the kernel profile. The kernel profile is actually a very small set of types to support in comparison to the well-known core library of default .NET installations. 
However, the conformance clause of the CLI allows for extending the supported profile by adding new methods and types to classes, as well as deriving from new namespaces. But it does not allow for adding new members to interfaces. This means that the features of the CLI can be used and extended, as long as the conforming profile implementation does not change the behavior of a program intended to run on that profile, while allowing for unspecified behavior from programs written specifically for that implementation. In 2012, Ecma and ISO/IEC published the new edition of the CLI standard. Implementations .NET Framework is Microsoft's original commercial implementation of the CLI. It only supports Windows. It was superseded by .NET in November 2020. .NET, previously known as .NET Core, is the free and open-source multi-platform successor to .NET Framework, released under the MIT License .NET Compact Framework is Microsoft's commercial implementation of the CLI for portable devices and Xbox 360. .NET Micro Framework is an open source implementation of the CLI for resource-constrained devices. Mono is an alternative open source implementation of CLI and accompanying technologies, mainly used for mobile and game development. DotGNU is a decommissioned part of the GNU Project started in January 2001 that aimed to provide a free and open source software alternative to Microsoft's .NET Framework. See also Standard Libraries (CLI) List of CLI languages .NET Standard Notes References External links Ecma standards IEC standards ISO standards
Common Language Infrastructure
[ "Technology" ]
988
[ "Computer standards", "Ecma standards", "IEC standards" ]
7,243
https://en.wikipedia.org/wiki/Call%20centre
A call centre (Commonwealth spelling) or call center (American spelling; see spelling differences) is a managed capability that can be centralised or remote that is used for receiving or transmitting a large volume of enquiries by telephone. An inbound call centre is operated by a company to administer incoming product or service support or information inquiries from consumers. Outbound call centres are usually operated for sales purposes such as telemarketing, for solicitation of charitable or political donations, debt collection, market research, emergency notifications, and urgent/critical needs blood banks. A contact centre is a further extension of call centres telephony based capabilities, administers centralised handling of individual communications, including letters, faxes, live support software, social media, instant message, and email. A call center was previously seen as an open workspace for call center agents, with workstations that included a computer and display for each agent and were connected to an inbound/outbound call management system, and one or more supervisor stations. It can be independently operated or networked with additional centers, often linked to a corporate computer network, including mainframes, microcomputer, servers and LANs. It is expected that artificial intelligence-based chatbots will significantly impact call centre jobs and will increase productivity substantially. Many organisations have already adopted AI-based chatbots to improve their customer service experience. The contact center is a central point from which all customer contacts are managed. Through contact centers, valuable information can be routed to the appropriate people or systems, contacts can be tracked, and data may be gathered. It is generally a part of the company's customer relationship management infrastructure. The majority of large companies use contact centers as a means of managing their customer interactions. These centers can be operated by either an in-house department responsible or outsourcing customer interaction to a third-party agency (known as Outsourcing Call Centres). History Answering services, as known in the 1960s through the 1980s, earlier and slightly later, involved a business that specifically provided the service. Primarily, by using an off-premises extension (OPX) for each subscribing business, connected at a switchboard at the answering service business, the answering service would answer the otherwise unattended phones of the subscribing businesses with a live operator. The live operator could take messages or relay information, doing so with greater human interactivity than a mechanical answering machine. Although undoubtedly more costly (the human service, the cost of setting up and paying the phone company for the OPX on a monthly basis), it had the advantage of being more ready to respond to the unique needs of after-hours callers. The answering service operators also had the option of calling the client and alerting them to particularly important calls. The origins of call centers date back to the 1960s with the UK-based Birmingham Press and Mail, which installed Private Automated Business Exchanges (PABX) to have rows of agents handling customer contacts. By 1973, call centers had received mainstream attention after Rockwell International patented its Galaxy Automatic Call Distributor (GACD) for a telephone booking system as well as the popularization of telephone headsets as seen on televised NASA Mission Control Center events. 
During the late 1970s, call center technology expanded to include telephone sales, airline reservations, and banking systems. The term "call center" was first published and recognised by the Oxford English Dictionary in 1983. The 1980s saw the development of toll-free telephone numbers to increase the efficiency of agents and overall call volume. Call centers increased with the deregulation of long-distance calling and growth in information-dependent industries. As call centres expanded, workers in North America began to join unions such as the Communications Workers of America and the United Steelworkers. In Australia, the National Union of Workers represents unionised workers; their activities form part of the Australian labour movement. In Europe, UNI Global Union of Switzerland is involved in assisting unionisation in the call center industry, and in Germany Vereinte Dienstleistungsgewerkschaft represents call centre workers. During the 1990s, call centres expanded internationally and developed into two additional subsets of communication: contact centres and outsourced bureau centres. A contact centre is a coordinated system of people, processes, technologies, and strategies that provides access to information, resources, and expertise, through appropriate channels of communication, enabling interactions that create value for the customer and organization. In contrast to in-house management, outsourced bureau contact centres are a model of contact centre that provide services on a "pay per use" model. The overheads of the contact centre are shared by many clients, thereby supporting a very cost effective model, especially for low volumes of calls. The modern contact centre includes automated call blending of inbound and outbound calls as well as predictive dialing capabilities, dramatically increasing agents' productivity. New implementations of more complex systems require highly skilled operational and management staff that can use multichannel online and offline tools to improve customer interactions. Technology Call centre technologies often include: speech recognition software which allowed Interactive Voice Response (IVR) systems to handle first levels of customer support, text mining, natural language processing to allow better customer handling, agent training via interactive scripting and automatic mining using best practices from past interactions, support automation and many other technologies to improve agent productivity and customer satisfaction. Automatic lead selection or lead steering is also intended to improve efficiencies, both for inbound and outbound campaigns. This allows inbound calls to be directly routed to the appropriate agent for the task, whilst minimising wait times and long lists of irrelevant options for people calling in. For outbound calls, lead selection allows management to designate what type of leads go to which agent based on factors including skill, socioeconomic factors, past performance, and percentage likelihood of closing a sale per lead. The universal queue standardises the processing of communications across multiple technologies such as fax, phone, and email. The virtual queue provides callers with an alternative to waiting on hold when no agents are available to handle inbound call demand. Premises-based technology Historically call centres have been built on Private branch exchange (PBX) equipment owned, hosted, and maintained by the call centre operator. 
The PBX can provide functions such as automatic call distribution, interactive voice response, and skills-based routing. Virtual call centre In a virtual call centre model, the call centre operator (business) pays a monthly or annual fee to a vendor that hosts the call centre telephony and data equipment in their own facility, cloud-based. In this model, the operator does not own, operate or host the equipment on which the call centre runs. Agents connect to the vendor's equipment through traditional PSTN telephone lines, or over voice over IP. Calls to and from prospects or contacts originate from or terminate at the vendor's data centre, rather than at the call centre operator's premises. The vendor's telephony equipment (at times data servers) then connects the calls to the call centre operator's agents. Virtual call centre technology allows people to work from home or any other location instead of in a traditional, centralised, call centre location, which increasingly allows people 'on the go' or with physical or other disabilities to work from desired locations – i.e. not leaving their house. The only required equipment is Internet access, a workstation, and a softphone. If the virtual call centre software utilizes webRTC, a softphone is not required to dial. The companies are preferring Virtual Call Centre services due to cost advantage. Companies can start their call centre business immediately without installing the basic infrastructure like Dialer, ACD and IVRS. Virtual call centres became increasingly used after the COVID-19 pandemic restricted businesses from operating with large groups of people working in close proximity. Cloud computing Through the use of application programming interfaces (APIs), hosted and on-demand call centres that are built on cloud-based software as a service (SaaS) platforms can integrate their functionality with cloud-based applications for customer relationship management (CRM), lead management and more. Developers use APIs to enhance cloud-based call centre platform functionality—including Computer telephony integration (CTI) APIs which provide basic telephony controls and sophisticated call handling from a separate application, and configuration APIs which enable graphical user interface (GUI) controls of administrative functions. Outsourcing Outsourced call centres are often located in developing countries, where wages are significantly lower than in western countries with higher minimum wages. These include the call centre industries in the Philippines, Bangladesh, and India. Companies that regularly utilise outsourced contact centre services include British Sky Broadcasting and Orange in the telecommunications industry, Adidas in the sports and leisure sector, Audi in car manufacturing and charities such as the RSPCA. Industries Healthcare The healthcare industry has and continues to use outbound call centre programmes for years to help manage billing, collections, and patient communication. The inbound call centre is a new and increasingly popular service for many types of healthcare facilities, including large hospitals. Inbound call centres can be outsourced or managed in-house. These healthcare call centres are designed to help streamline communications, enhance patient retention and satisfaction, reduce expenses and improve operational efficiencies. Hospitality Many large hospitality companies such as the Hilton Hotels Corporation and Marriott International make use of call centres to manage reservations. 
These are known in the industry as "central reservations offices". Staff members at these call centres take calls from clients wishing to make reservations or other inquiries via a public number, usually a 1-800 number. These centres may operate up to 24 hours per day, seven days a week, depending on the call volume the chain receives. Evaluation Mathematical theory Queueing theory is a branch of mathematics in which models of service systems have been developed. A call centre can be seen as a queueing network, and results from queueing theory, such as the probability that an arriving customer needs to wait before starting service, are useful for provisioning capacity. (Erlang's C formula is such a result for an M/M/c queue, and approximations exist for an M/G/k queue; a short computational sketch of the Erlang C formula is given after the reference list below.) Statistical analysis of call centre data has suggested arrivals are governed by an inhomogeneous Poisson process and jobs have a log-normal service time distribution. Simulation algorithms are increasingly being used to model call arrival, queueing and service levels. Call centre operations have been supported by mathematical models beyond queueing, with operations research, which considers a wide range of optimisation problems seeking to reduce waiting times while keeping server utilisation and therefore efficiency high. Criticism Call centres have received criticism for low rates of pay and restrictive working practices for employees, which have been deemed a dehumanising environment. Other research illustrates how call centre workers develop ways to counter or resist this environment by integrating local cultural sensibilities or embracing a vision of a new life. Most call centres provide electronic reports that outline performance metrics, quarterly highlights and other information about the calls made and received. This has the benefit of helping the company to plan the workload and time of its employees. However, it has also been argued that such close monitoring breaches the human right to privacy. Complaints are often logged by callers who find the staff do not have enough skill or authority to resolve problems, as well as appearing apathetic. These concerns are due to a business process that exhibits levels of variability, because the experience a customer gets and the results a company achieves on a given call are dependent upon the quality of the agent. Call centres are beginning to address this by using agent-assisted automation to standardise the process all agents use. However, more popular alternatives use personality- and skill-based approaches. The various challenges encountered by call operators are discussed by several authors. Media portrayals Call centres located in India have been the focus of several documentary films, the 2004 film Thomas L. Friedman Reporting: The Other Side of Outsourcing, the 2005 films John and Jane, Nalini by Day, Nancy by Night, and 1-800-India: Importing a White-Collar Economy, and the 2006 film Bombay Calling, among others. An Indian call centre is also the subject of the 2006 film Outsourced and a key location in the 2008 film Slumdog Millionaire. The 2014 BBC fly-on-the-wall documentary series The Call Centre gave an often distorted although humorous view of life in a Welsh call centre. Appointment Setting Appointment setting is a specialized function within call centres, where dedicated agents focus on facilitating and scheduling meetings between clients and businesses or sales representatives. 
This service is particularly prevalent in various industries such as financial services, healthcare, real estate, and B2B sales, where time-sensitive and personalized communications are essential for effective client engagement. Lead Generation Lead generation is a common operation for call centers, encompassing strategies and activities aimed at identifying potential customers or clients for businesses or sales representatives. It involves gathering information and generating interest among individuals or organizations who may have a potential interest in the products or services offered. See also Automatic call distributor Business process outsourcing Call management List of call centre companies Predictive dialling Operator messaging Queue management system Skills based routing Virtual queue The Call Centre, a BBC fly-on-the-wall documentary at a Welsh call centre References Further reading Cusack M., "Online Customer Care", American Society for Quality (ASQ) Press, 2000. Brad Cleveland, "Call Center Management on Fast Forward", ICMI Press, 2006. Kennedy I., Call centres, School of Electrical and Information Engineering, University of the Witwatersrand, 2003. Masi D.M.B., Fischer M.J., Harris C.M., Numerical Analysis of Routing Rules for Call centres, Telecommunications Review, 1998, noblis.org HSE website Psychosocial risk factors in call centres: An evaluation of work design and well-being. Reena Patel, Working the Night Shift: Women in India's Call Center Industry (Stanford University Press; 2010) 219 pages; traces changing views of "women's work" in India under globalization. Fluss, Donna, "The Real-Time Contact centre", 2005 AMACOM Wegge, J., van Dick, R., Fisher, G., Wecking, C., & Moltzen, K. (2006, January). Work motivation, organisational identification, and well-being in call centre work. Work & Stress, 20(1), 60–83. Legros, B. (2016). Unintended consequences of optimizing a queue discipline for a service level defined by a percentile of the waiting time. Operations Research Letters, 44(6), 839–845. Krishnan, C., Gupta, A., Gupta, A., Singh, G. (2022). Impact of Artificial Intelligence-Based Chatbots on Customer Engagement and Business Growth. In: Hong, TP., Serrano-Estrada, L., Saxena, A., Biswas, A. (eds) Deep Learning for Social Media Data Analytics. Studies in Big Data, vol 113. Springer, Cham. Adam, M., Wessel, M. & Benlian, A. AI-based chatbots in customer service and their effects on user compliance. Electron Markets 31, 427–445 (2021). Hardalov, M., Koychev, I., Nakov, P. (2018). Towards Automated Customer Support. In: Agre, G., van Genabith, J., Declerck, T. (eds) Artificial Intelligence: Methodology, Systems, and Applications. AIMSA 2018. Lecture Notes in Computer Science(), vol 11089. Springer, Cham. Roberts, C. and Maier, T. (2024), "The evolution of service toward automated customer assistance: there is a difference", International Journal of Contemporary Hospitality Management, Vol. 36 No. 6, pp. 1914-1925. Suendermann, D., Liscombe, J., Pieraccini, R., Evanini, K. (2010). “How am I Doing?”: A New Framework to Effectively Measure the Performance of Automated Customer Care Contact Centers. In: Neustein, A. (eds) Advances in Speech Recognition. Springer, Boston, MA. External links Mandelbaum, Avishai Call Centers (Centres) Research Bibliography with Abstracts . Faculty of Industrial Engineering and Management, Technion-Israel Institute of Technology. Computer telephony integration Telemarketing Outsourcing Telephony Customer service
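As a complement to the queueing-theory discussion in the Evaluation section above, the following is a minimal Python sketch of the Erlang C formula for an M/M/c queue, together with a derived service-level estimate. The staffing figures in the example (calls per minute, average handle time, agent count, 20-second answer target) are illustrative assumptions, not data from any particular call centre.

```python
from math import exp, factorial


def erlang_c(agents: int, offered_load: float) -> float:
    """Probability that an arriving call must wait (M/M/c queue, FIFO).

    offered_load is the arrival rate divided by the service rate, in Erlangs;
    the queue is only stable when offered_load < agents.
    """
    if offered_load >= agents:
        raise ValueError("unstable queue: offered load must be less than agent count")
    occupancy = offered_load / agents
    # Weight of the "all agents busy" states, summed over the geometric tail.
    busy_term = (offered_load ** agents / factorial(agents)) / (1.0 - occupancy)
    # Weights of the states with fewer callers in service than agents.
    idle_terms = sum(offered_load ** k / factorial(k) for k in range(agents))
    return busy_term / (idle_terms + busy_term)


def service_level(agents: int, calls_per_min: float, handle_min: float, target_min: float) -> float:
    """Fraction of calls answered within target_min minutes (exponential wait tail)."""
    offered_load = calls_per_min * handle_min
    wait_probability = erlang_c(agents, offered_load)
    spare_rate = agents / handle_min - calls_per_min  # spare service capacity per minute
    return 1.0 - wait_probability * exp(-spare_rate * target_min)


if __name__ == "__main__":
    # Illustrative assumptions: 100 calls per hour, 3-minute average handle time,
    # 8 agents, and a 20-second answer target.
    calls_per_min = 100 / 60
    print(f"P(wait): {erlang_c(8, calls_per_min * 3.0):.3f}")
    print(f"Answered within 20 s: {service_level(8, calls_per_min, 3.0, 20 / 60):.3f}")
```

Varying the agent count in this sketch shows the familiar planning trade-off: adding agents sharply reduces the probability that a caller must wait, at the cost of lower agent utilisation.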
Call centre
[ "Technology" ]
3,417
[ "Information technology", "Computer telephony integration" ]
7,249
https://en.wikipedia.org/wiki/Crankshaft
A crankshaft is a mechanical component used in a piston engine to convert the reciprocating motion into rotational motion. The crankshaft is a rotating shaft containing one or more crankpins that are driven by the pistons via the connecting rods. The crankpins are also called rod bearing journals, and they rotate within the "big end" of the connecting rods. Most modern crankshafts are located in the engine block. They are made from steel or cast iron, using either a forging, casting or machining process. Design The crankshaft is located within the engine block and held in place via main bearings which allow the crankshaft to rotate within the block. The up-down motion of each piston is transferred to the crankshaft via connecting rods. A flywheel is often attached to one end of the crankshaft, in order to smooth the power delivery and reduce vibration. A crankshaft is subjected to enormous stresses, in some cases equivalent to several tonnes of force per cylinder. Crankshafts for single-cylinder engines are usually a simpler design than for engines with multiple cylinders. Bearings The crankshaft is able to rotate in the engine block due to the 'main bearings'. Since the crankshaft is subject to large horizontal and torsional forces from each cylinder, these main bearings are located at various points along the crankshaft, rather than just one at each end. The number of main bearings is determined based on the overall load factor and the maximum engine speed. Crankshafts in diesel engines often use a main bearing between every cylinder and at both ends of the crankshaft, due to the high forces of combustion present. Flexing of the crankshaft was a factor in V8 engines replacing straight-eight engines in the 1950s; the long crankshafts of the latter suffered from an unacceptable amount of flex when engine designers began using higher compression ratios and higher engine speeds (RPM). Piston stroke The distance between the axis of the crankpins and the axis of the crankshaft determines the stroke length of the engine; the stroke is twice this crank radius, as illustrated in the short numerical sketch below. Most modern car engines are classified as "over square" or short-stroke, wherein the stroke is less than the diameter of the cylinder bore. A common way to increase the low-RPM torque of an engine is to increase the stroke, sometimes known as "stroking" the engine. Historically, the trade-off for a long-stroke engine was a lower rev limit and increased vibration at high RPM, due to the increased piston velocity. Cross-plane and flat-plane configurations When designing an engine, the crankshaft configuration is closely related to the engine's firing order. Most production V8 engines (such as the Ford Modular engine and the General Motors LS engine) use a cross-plane crank whereby the crank throws are spaced 90 degrees apart. However, some high-performance V8 engines (such as the Ferrari 488) instead use a flat-plane crank, whereby the throws are spaced 180° apart, which essentially results in two inline-four engines sharing a common crankcase. Flat-plane engines are usually able to operate at higher RPM; however, they have higher second-order vibrations, so they are better suited to racing car engines. Engine balance For some engines it is necessary to provide counterweights for the reciprocating mass of the piston, conrods and crankshaft, in order to improve the engine balance. These counterweights are typically cast as part of the crankshaft but, occasionally, are bolt-on pieces. 
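The stroke relationship mentioned under "Piston stroke" above can be checked numerically with the standard crank-slider geometry. The Python sketch below is a minimal illustration; the crank radius and connecting-rod length are arbitrary example values, not figures for any particular engine.

```python
from math import cos, radians, sin, sqrt


def piston_position(crank_angle_deg: float, crank_radius: float, rod_length: float) -> float:
    """Distance from the crankshaft axis to the piston pin at a given crank angle.

    crank_radius is the crankpin offset (half the stroke); rod_length is the
    centre-to-centre connecting-rod length. Units are arbitrary but consistent.
    """
    theta = radians(crank_angle_deg)
    return crank_radius * cos(theta) + sqrt(rod_length ** 2 - (crank_radius * sin(theta)) ** 2)


# Illustrative figures only: a 43.5 mm crank radius gives an 87 mm stroke.
crank_radius = 43.5
rod_length = 140.0

top_dead_centre = piston_position(0.0, crank_radius, rod_length)       # crankpin toward the piston
bottom_dead_centre = piston_position(180.0, crank_radius, rod_length)  # crankpin away from the piston
print(top_dead_centre - bottom_dead_centre)  # stroke = 2 * crank_radius = 87.0
```

Sweeping the crank angle from 0° to 360° with this function gives the piston-position curve from which piston velocity and acceleration, and hence the high-RPM vibration trade-offs mentioned above, are usually derived.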
Flying arms In some engines, the crankshaft contains direct links between adjacent crankpins, without the usual intermediate main bearing. These links are called flying arms. This arrangement is sometimes used in V6 and V8 engines, in order to maintain an even firing interval while using different V angles, and to reduce the number of main bearings required. The downside of flying arms is that the rigidity of the crankshaft is reduced, which can cause problems at high RPM or high power outputs. Counter-rotating crankshafts In most engines, each connecting rod is attached to a single crankshaft, which results in the angle of the connecting rod varying as the piston moves through its stroke. This variation in angle pushes the pistons against the cylinder wall, which causes friction between the piston and cylinder wall. To prevent this, some early engines – such as the 1900–1904 Lanchester Engine Company flat-twin engines – connected each piston to two crankshafts rotating in opposite directions. This arrangement cancels out the lateral forces and reduces the requirement for counterweights. This design is rarely used; however, a similar principle applies to balance shafts, which are occasionally used. Construction Forged crankshafts Crankshafts can be created from a steel bar using roll forging. Today, manufacturers tend to favour the use of forged crankshafts due to their lighter weight, more compact dimensions and better inherent damping. With forged crankshafts, vanadium micro-alloyed steels are mainly used, as these steels reach high strengths after air-cooling without additional heat treatment, except for the surface hardening of the bearing surfaces. The low alloy content also makes the material cheaper than high-alloy steels. Carbon steels, by contrast, require additional heat treatment to reach the desired properties. Cast crankshafts Another construction method is to cast the crankshaft from ductile iron. Cast iron crankshafts are today mostly found in cheaper production engines where the loads are lower. Machined crankshafts Crankshafts can also be machined from billet, often a bar of high-quality vacuum-remelted steel. Though the fiber flow (local inhomogeneities of the material's chemical composition generated during casting) does not follow the shape of the crankshaft (which is undesirable), this is usually not a problem, since higher-quality steels, which normally are difficult to forge, can be used. Per unit, these crankshafts tend to be expensive due to the large amount of material that must be removed with lathes and milling machines, the high material cost, and the additional heat treatment required. However, since no expensive tooling is needed, this production method allows small production runs without high up-front costs. History Crankshaft In 9th-century Abbasid Baghdad, automatically operated cranks appear in several of the hydraulic devices described by the Banū Mūsā brothers in the Book of Ingenious Devices. Two of these devices contain an action which approximates to that of a crankshaft, five centuries before the earliest known European description of a crankshaft. However, the automatic crank mechanism described by the Banū Mūsā would not have allowed a full rotation, but only a small modification was required to convert it to a crankshaft. 
In the Artuqid Sultanate, Arab engineer Ismail al-Jazari (1136–1206) described a crank and connecting rod system in a rotating machine for two of his water-raising machines, which include both crank and shaft mechanisms. The Italian physician Guido da Vigevano (), planning for a new Crusade, made illustrations for a paddle boat and war carriages that were propelled by manually turned compound cranks and gear wheels, identified as an early crankshaft prototype by Lynn Townsend White. Crankshafts were described by Leonardo da Vinci (1452–1519) and a Dutch farmer and windmill owner by the name Cornelis Corneliszoon van Uitgeest in 1592. His wind-powered sawmill used a crankshaft to convert a windmill's circular motion into a back-and-forward motion powering the saw. Corneliszoon was granted a patent for his crankshaft in 1597. From the 16th century onwards, evidence of cranks and connecting rods integrated into machine design becomes abundant in the technological treatises of the period: Agostino Ramelli's The Diverse and Artifactitious Machines of 1588 depicts eighteen examples, a number that rises in the Theatrum Machinarum Novum by Georg Andreas Böckler to 45 different machines. Cranks were formerly common on some machines in the early 20th century; for example almost all phonographs before the 1930s were powered by clockwork motors wound with cranks. Reciprocating piston engines use cranks to convert the linear piston motion into rotational motion. Internal combustion engines of early 20th century automobiles were usually started with hand cranks, before electric starters came into general use. See also Bicycle crankset Brace (tool) Cam (mechanism) Cam engine Camshaft Crank (mechanism) Crankcase Crankshaft torsional vibration List of auto parts Piston motion equations Tunnel crankshaft Scotch yoke Swashplate References Sources External links Interactive crank animation https://www.desmos.com/calculator/8l2kvyivqo D & T Mechanisms – Interactive Tools for Teachers (applets) https://web.archive.org/web/20140714155346/http://www.content.networcs.net/tft/mechanisms.htm Engine components Engine technology Linkages (mechanical) Piston engines
Crankshaft
[ "Technology" ]
1,842
[ "Engine components", "Piston engines", "Engine technology", "Engines" ]
7,251
https://en.wikipedia.org/wiki/Central%20nervous%20system
The central nervous system (CNS) is the part of the nervous system consisting primarily of the brain and spinal cord. The CNS is so named because the brain integrates the received information and coordinates and influences the activity of all parts of the bodies of bilaterally symmetric and triploblastic animals—that is, all multicellular animals except sponges and diploblasts. It is a structure composed of nervous tissue positioned along the rostral (nose end) to caudal (tail end) axis of the body and may have an enlarged section at the rostral end which is a brain. Only arthropods, cephalopods and vertebrates have a true brain, though precursor structures exist in onychophorans, gastropods and lancelets. The rest of this article exclusively discusses the vertebrate central nervous system, which is radically distinct from all other animals. Overview In vertebrates, the brain and spinal cord are both enclosed in the meninges. The meninges provide a barrier to chemicals dissolved in the blood, protecting the brain from most neurotoxins commonly found in food. Within the meninges the brain and spinal cord are bathed in cerebral spinal fluid which replaces the body fluid found outside the cells of all bilateral animals. In vertebrates, the CNS is contained within the dorsal body cavity, while the brain is housed in the cranial cavity within the skull. The spinal cord is housed in the spinal canal within the vertebrae. Within the CNS, the interneuronal space is filled with a large amount of supporting non-nervous cells called neuroglia or glia from the Greek for "glue". In vertebrates, the CNS also includes the retina and the optic nerve (cranial nerve II), as well as the olfactory nerves and olfactory epithelium. As parts of the CNS, they connect directly to brain neurons without intermediate ganglia. The olfactory epithelium is the only central nervous tissue outside the meninges in direct contact with the environment, which opens up a pathway for therapeutic agents which cannot otherwise cross the meninges barrier. Structure The CNS consists of two major structures: the brain and spinal cord. The brain is encased in the skull, and protected by the cranium. The spinal cord is continuous with the brain and lies caudally to the brain. It is protected by the vertebrae. The spinal cord reaches from the base of the skull, and continues through or starting below the foramen magnum, and terminates roughly level with the first or second lumbar vertebra, occupying the upper sections of the vertebral canal. White and gray matter Microscopically, there are differences between the neurons and tissue of the CNS and the peripheral nervous system (PNS). The CNS is composed of white and gray matter. This can also be seen macroscopically on brain tissue. The white matter consists of axons and oligodendrocytes, while the gray matter consists of neurons and unmyelinated fibers. Both tissues include a number of glial cells (although the white matter contains more), which are often referred to as supporting cells of the CNS. Different forms of glial cells have different functions, some acting almost as scaffolding for neuroblasts to climb during neurogenesis such as bergmann glia, while others such as microglia are a specialized form of macrophage, involved in the immune system of the brain as well as the clearance of various metabolites from the brain tissue. 
Astrocytes may be involved both in the clearance of metabolites and in the transport of fuel and various beneficial substances to neurons from the capillaries of the brain. Upon CNS injury astrocytes will proliferate, causing gliosis, a form of neuronal scar tissue lacking functional neurons. The brain (cerebrum as well as midbrain and hindbrain) consists of a cortex, composed of neuron-bodies constituting gray matter, while internally there is more white matter that forms tracts and commissures. Apart from cortical gray matter there is also subcortical gray matter making up a large number of different nuclei. Spinal cord From and to the spinal cord are projections of the peripheral nervous system in the form of spinal nerves (sometimes segmental nerves). The nerves connect the spinal cord to skin, joints, muscles etc. and allow for the transmission of efferent motor as well as afferent sensory signals and stimuli. This allows for voluntary and involuntary motions of muscles, as well as the perception of senses. In all, 31 spinal nerves project from the spinal cord, some forming plexuses as they branch out, such as the brachial plexus, the sacral plexus, etc. Each spinal nerve will carry both sensory and motor signals, but the nerves synapse at different regions of the spinal cord, either from the periphery to sensory relay neurons that relay the information to the CNS or from the CNS to motor neurons, which relay the information out. The spinal cord relays information up to the brain through spinal tracts, through the final common pathway to the thalamus, and ultimately to the cortex. Cranial nerves Apart from the spinal cord, there are also peripheral nerves of the PNS that synapse through intermediaries or ganglia directly on the CNS. These 12 nerves exist in the head and neck region and are called cranial nerves. Cranial nerves bring information to the CNS to and from the face, as well as to certain muscles (such as the trapezius muscle, which is innervated by accessory nerves as well as certain cervical spinal nerves). Two pairs of cranial nerves, the olfactory nerves and the optic nerves, are often considered structures of the CNS. This is because they do not synapse first on peripheral ganglia, but directly on CNS neurons. The olfactory epithelium is significant in that it consists of CNS tissue in direct contact with the environment, allowing for administration of certain pharmaceuticals and drugs. Brain At the anterior end of the spinal cord lies the brain. The brain makes up the largest portion of the CNS. It is often the main structure referred to when speaking of the nervous system in general. The brain is the major functional unit of the CNS. While the spinal cord has certain processing ability such as that of spinal locomotion and can process reflexes, the brain is the major processing unit of the nervous system. Brainstem The brainstem consists of the medulla, the pons and the midbrain. The medulla can be referred to as an extension of the spinal cord, as the two have similar organization and functional properties. The tracts passing from the spinal cord to the brain pass through here. Regulatory functions of the medulla nuclei include control of blood pressure and breathing. Other nuclei are involved in balance, taste, hearing, and control of muscles of the face and neck. The next structure rostral to the medulla is the pons, which lies on the ventral anterior side of the brainstem. 
Nuclei in the pons include pontine nuclei which work with the cerebellum and transmit information between the cerebellum and the cerebral cortex. In the dorsal posterior pons lie nuclei that are involved in the functions of breathing, sleep, and taste. The midbrain, or mesencephalon, is situated above and rostral to the pons. It includes nuclei linking distinct parts of the motor system, including the cerebellum, the basal ganglia and both cerebral hemispheres, among others. Additionally, parts of the visual and auditory systems are located in the midbrain, including control of automatic eye movements. The brainstem at large provides entry and exit to the brain for a number of pathways for motor and autonomic control of the face and neck through cranial nerves. Autonomic control of the organs is mediated by the tenth cranial nerve. A large portion of the brainstem is involved in such autonomic control of the body. Such functions may engage the heart, blood vessels, and pupils, among others. The brainstem also holds the reticular formation, a group of nuclei involved in both arousal and alertness. Cerebellum The cerebellum lies behind the pons. The cerebellum is divided by several fissures into lobes. Its function includes the control of posture and the coordination of movements of parts of the body, including the eyes and head, as well as the limbs. Further, it is involved in motion that has been learned and perfected through practice, and it will adapt to new learned movements. Despite its previous classification as a motor structure, the cerebellum also displays connections to areas of the cerebral cortex involved in language and cognition. These connections have been shown by the use of medical imaging techniques, such as functional MRI and positron emission tomography. The body of the cerebellum holds more neurons than any other structure of the brain, including that of the larger cerebrum, but is also more extensively understood than other structures of the brain, as it includes fewer types of different neurons. It handles and processes sensory stimuli, motor information, and balance information from the vestibular organ. Diencephalon The two structures of the diencephalon worth noting are the thalamus and the hypothalamus. The thalamus acts as a linkage between incoming pathways from the peripheral nervous system as well as the optic nerve (though it does not receive input from the olfactory nerve) to the cerebral hemispheres. Previously it was considered only a "relay station", but it is engaged in the sorting of information that will reach the cerebral hemispheres (neocortex). Apart from its function of sorting information from the periphery, the thalamus also connects the cerebellum and basal ganglia with the cerebrum. In common with the aforementioned reticular system the thalamus is involved in wakefulness and consciousness, such as through the SCN. The hypothalamus engages in functions of a number of primitive emotions or feelings such as hunger, thirst and maternal bonding. This is regulated partly through control of secretion of hormones from the pituitary gland. Additionally the hypothalamus plays a role in motivation and many other behaviors of the individual. Cerebrum The cerebrum, or cerebral hemispheres, makes up the largest visible portion of the human brain. Various structures combine to form the cerebral hemispheres, among others: the cortex, basal ganglia, amygdala and hippocampus. 
The hemispheres together control a large portion of the functions of the human brain such as emotion, memory, perception and motor functions. Apart from this the cerebral hemispheres stand for the cognitive capabilities of the brain. Connecting each of the hemispheres is the corpus callosum as well as several additional commissures. One of the most important parts of the cerebral hemispheres is the cortex, made up of gray matter covering the surface of the brain. Functionally, the cerebral cortex is involved in planning and carrying out of everyday tasks. The hippocampus is involved in storage of memories, the amygdala plays a role in perception and communication of emotion, while the basal ganglia play a major role in the coordination of voluntary movement. Difference from the peripheral nervous system The PNS consists of neurons, axons, and Schwann cells. Oligodendrocytes and Schwann cells have similar functions in the CNS and PNS, respectively. Both act to add myelin sheaths to the axons, which acts as a form of insulation allowing for better and faster proliferation of electrical signals along the nerves. Axons in the CNS are often very short, barely a few millimeters, and do not need the same degree of isolation as peripheral nerves. Some peripheral nerves can be over 1 meter in length, such as the nerves to the big toe. To ensure signals move at sufficient speed, myelination is needed. The way in which the Schwann cells and oligodendrocytes myelinate nerves differ. A Schwann cell usually myelinates a single axon, completely surrounding it. Sometimes, they may myelinate many axons, especially when in areas of short axons. Oligodendrocytes usually myelinate several axons. They do this by sending out thin projections of their cell membrane, which envelop and enclose the axon. Development During early development of the vertebrate embryo, a longitudinal groove on the neural plate gradually deepens and the ridges on either side of the groove (the neural folds) become elevated, and ultimately meet, transforming the groove into a closed tube called the neural tube. The formation of the neural tube is called neurulation. At this stage, the walls of the neural tube contain proliferating neural stem cells in a region called the ventricular zone. The neural stem cells, principally radial glial cells, multiply and generate neurons through the process of neurogenesis, forming the rudiment of the CNS. The neural tube gives rise to both brain and spinal cord. The anterior (or 'rostral') portion of the neural tube initially differentiates into three brain vesicles (pockets): the prosencephalon at the front, the mesencephalon, and, between the mesencephalon and the spinal cord, the rhombencephalon. (By six weeks in the human embryo) the prosencephalon then divides further into the telencephalon and diencephalon; and the rhombencephalon divides into the metencephalon and myelencephalon. The spinal cord is derived from the posterior or 'caudal' portion of the neural tube. As a vertebrate grows, these vesicles differentiate further still. The telencephalon differentiates into, among other things, the striatum, the hippocampus and the neocortex, and its cavity becomes the first and second ventricles (lateral ventricles). Diencephalon elaborations include the subthalamus, hypothalamus, thalamus and epithalamus, and its cavity forms the third ventricle. 
The tectum, pretectum, cerebral peduncle and other structures develop out of the mesencephalon, and its cavity grows into the mesencephalic duct (cerebral aqueduct). The metencephalon becomes, among other things, the pons and the cerebellum, the myelencephalon forms the medulla oblongata, and their cavities develop into the fourth ventricle.

Evolution

Planaria
Planarians, members of the phylum Platyhelminthes (flatworms), have the simplest, clearly defined delineation of a nervous system into a CNS and a PNS. Their primitive brains, consisting of two fused anterior ganglia, and longitudinal nerve cords form the CNS. Like vertebrates, planarians have a distinct CNS and PNS. The nerves projecting laterally from the CNS form their PNS. A molecular study found that more than 95% of the 116 genes involved in the nervous system of planarians, which includes genes related to the CNS, also exist in humans.

Arthropoda
In arthropods, the ventral nerve cord, the subesophageal ganglia and the supraesophageal ganglia are usually seen as making up the CNS. Arthropoda, unlike vertebrates, have inhibitory motor neurons due to their small size.

Chordata
The CNS of chordates differs from that of other animals in being placed dorsally in the body, above the gut and notochord/spine. The basic pattern of the CNS is highly conserved throughout the different species of vertebrates and during evolution. The major trend that can be observed is towards a progressive telencephalisation: the telencephalon of reptiles is only an appendix to the large olfactory bulb, while in mammals it makes up most of the volume of the CNS. In the human brain, the telencephalon covers most of the diencephalon and the entire mesencephalon. Indeed, the allometric study of brain size among different species shows a striking continuity from rats to whales (an illustrative power-law form of this relationship is sketched below), and allows us to complete the knowledge about the evolution of the CNS obtained through cranial endocasts.

Mammals
Mammals – which appear in the fossil record after the first fishes, amphibians, and reptiles – are the only vertebrates to possess the evolutionarily recent, outermost part of the cerebral cortex (the main part of the telencephalon excluding the olfactory bulb) known as the neocortex. In mammals, this part of the brain is involved in higher thinking and in further processing of all senses in the sensory cortices (processing of smell was previously handled only by the olfactory bulb, while the non-smell senses were handled only by the tectum). The neocortex of monotremes (the duck-billed platypus and several species of spiny anteaters) and of marsupials (such as kangaroos, koalas, opossums, wombats, and Tasmanian devils) lacks the convolutions – gyri and sulci – found in the neocortex of most placental mammals (eutherians). Within placental mammals, the size and complexity of the neocortex increased over time. The area of the neocortex of mice is only about 1/100 that of monkeys, and that of monkeys is only about 1/10 that of humans. In addition, rats lack convolutions in their neocortex (possibly also because rats are small mammals), whereas cats have a moderate degree of convolutions, and humans have quite extensive convolutions. Extreme convolution of the neocortex is found in dolphins, possibly related to their complex echolocation.
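As a rough illustrative sketch of the allometric relationship mentioned in the Chordata subsection (the exact exponent is approximate and varies between taxonomic groups and studies), brain size is commonly related to body size by a power law:

E = c \cdot P^{\alpha}, \qquad \alpha \approx 2/3\text{–}3/4

where E is brain mass, P is body mass, and c is a taxon-dependent constant sometimes called the cephalization factor. On log–log axes this relationship appears as a straight line, which is why brain sizes from rats to whales fall along a strikingly continuous trend.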
Clinical significance

Diseases
There are many CNS diseases and conditions, including infections such as encephalitis and poliomyelitis, early-onset neurological disorders including ADHD and autism, seizure disorders such as epilepsy, headache disorders such as migraine, late-onset neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, and essential tremor, autoimmune and inflammatory diseases such as multiple sclerosis and acute disseminated encephalomyelitis, genetic disorders such as Krabbe's disease and Huntington's disease, as well as amyotrophic lateral sclerosis and adrenoleukodystrophy. Lastly, cancers of the central nervous system can cause severe illness and, when malignant, can have very high mortality rates. Symptoms depend on the size, growth rate, location and malignancy of tumors and can include alterations in motor control, hearing loss, headaches, and changes in cognitive ability and autonomic functioning.
Specialty professional organizations recommend that neurological imaging of the brain be done only to answer a specific clinical question and not as routine screening.

References

External links
High-Resolution Cytoarchitectural Primate Brain Atlases
Explaining the human nervous system. The Department of Neuroscience at Wikiversity
Central nervous system histology

Neuroscience
Central nervous system
[ "Biology" ]
4,033
[ "Neuroscience" ]