https://en.wikipedia.org/wiki/Amplifier
An amplifier, electronic amplifier or (informally) amp is an electronic device that can increase the magnitude of a signal (a time-varying voltage or current). It is a two-port electronic circuit that uses electric power from a power supply to increase the amplitude (magnitude of the voltage or current) of a signal applied to its input terminals, producing a proportionally greater amplitude signal at its output. The amount of amplification provided by an amplifier is measured by its gain: the ratio of output voltage, current, or power to input. An amplifier is defined as a circuit that has a power gain greater than one. An amplifier can be either a separate piece of equipment or an electrical circuit contained within another device. Amplification is fundamental to modern electronics, and amplifiers are widely used in almost all electronic equipment. Amplifiers can be categorized in different ways. One is by the frequency of the electronic signal being amplified. For example, audio amplifiers amplify signals in the audio (sound) range of less than 20 kHz, RF amplifiers amplify frequencies in the radio frequency range between 20 kHz and 300 GHz, and servo amplifiers and instrumentation amplifiers may work with very low frequencies down to direct current. Amplifiers can also be categorized by their physical placement in the signal chain; a preamplifier may precede other signal processing stages, for example, while a power amplifier is usually used after other amplifier stages to provide enough output power for the final use of the signal. The first practical electrical device which could amplify was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Today most amplifiers use transistors. History Vacuum tubes The first practical device that could amplify was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Vacuum tubes were used in almost all amplifiers until the 1960s–1970s when transistors replaced them. Today, most amplifiers use transistors, but vacuum tubes continue to be used in some applications. The development of audio communication technology in the form of the telephone, first patented in 1876, created the need to increase the amplitude of electrical signals to extend the transmission of signals over increasingly long distances. In telegraphy, this problem had been solved with intermediate devices at stations that replenished the dissipated energy by operating a signal recorder and transmitter back-to-back, forming a relay, so that a local energy source at each intermediate station powered the next leg of transmission. For duplex transmission, i.e. sending and receiving in both directions, bi-directional relay repeaters were developed starting with the work of C. F. Varley for telegraphic transmission. Duplex transmission was essential for telephony and the problem was not satisfactorily solved until 1904, when H. E. Shreeve of the American Telephone and Telegraph Company improved existing attempts at constructing a telephone repeater consisting of back-to-back carbon-granule transmitter and electrodynamic receiver pairs. The Shreeve repeater was first tested on a line between Boston and Amesbury, MA, and more refined devices remained in service for some time. After the turn of the century it was found that negative resistance mercury lamps could amplify, and were also tried in repeaters, with little success.
The development of thermionic valves which began around 1902, provided an entirely electronic method of amplifying signals. The first practical version of such devices was the Audion triode, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Since the only previous device which was widely used to strengthen a signal was the relay used in telegraph systems, the amplifying vacuum tube was first called an electron relay. The terms amplifier and amplification, derived from the Latin amplificare, (to enlarge or expand), were first used for this new capability around 1915 when triodes became widespread. The amplifying vacuum tube revolutionized electrical technology. It made possible long-distance telephone lines, public address systems, radio broadcasting, talking motion pictures, practical audio recording, radar, television, and the first computers. For 50 years virtually all consumer electronic devices used vacuum tubes. Early tube amplifiers often had positive feedback (regeneration), which could increase gain but also make the amplifier unstable and prone to oscillation. Much of the mathematical theory of amplifiers was developed at Bell Telephone Laboratories during the 1920s to 1940s. Distortion levels in early amplifiers were high, usually around 5%, until 1934, when Harold Black developed negative feedback; this allowed the distortion levels to be greatly reduced, at the cost of lower gain. Other advances in the theory of amplification were made by Harry Nyquist and Hendrik Wade Bode. The vacuum tube was virtually the only amplifying device, other than specialized power devices such as the magnetic amplifier and amplidyne, for 40 years. Power control circuitry used magnetic amplifiers until the latter half of the twentieth century when power semiconductor devices became more economical, with higher operating speeds. The old Shreeve electroacoustic carbon repeaters were used in adjustable amplifiers in telephone subscriber sets for the hearing impaired until the transistor provided smaller and higher quality amplifiers in the 1950s. Transistors The first working transistor was a point-contact transistor invented by John Bardeen and Walter Brattain in 1947 at Bell Labs, where William Shockley later invented the bipolar junction transistor (BJT) in 1948. They were followed by the invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. Due to MOSFET scaling, the ability to scale down to increasingly small sizes, the MOSFET has since become the most widely used amplifier. The replacement of bulky electron tubes with transistors during the 1960s and 1970s created a revolution in electronics, making possible a large class of portable electronic devices, such as the transistor radio developed in 1954. Today, use of vacuum tubes is limited to some high power applications, such as radio transmitters, as well as some musical instrument and high-end audiophile amplifiers. Beginning in the 1970s, more and more transistors were connected on a single chip thereby creating higher scales of integration (such as small-scale, medium-scale and large-scale integration) in integrated circuits. Many amplifiers commercially available today are based on integrated circuits. For special purposes, other active elements have been used. For example, in the early days of the satellite communication, parametric amplifiers were used. 
The core circuit was a diode whose capacitance was changed by an RF signal created locally. Under certain conditions, this RF signal provided energy that was modulated by the extremely weak satellite signal received at the earth station. Advances in digital electronics since the late 20th century provided new alternatives to the conventional linear-gain amplifiers by using digital switching to vary the pulse-shape of fixed amplitude signals, resulting in devices such as the Class-D amplifier. Ideal In principle, an amplifier is an electrical two-port network that produces a signal at the output port that is a replica of the signal applied to the input port, but increased in magnitude. The input port can be idealized as either being a voltage input, which takes no current, with the output proportional to the voltage across the port; or a current input, with no voltage across it, in which the output is proportional to the current through the port. The output port can be idealized as being either a dependent voltage source, with zero source resistance and its output voltage dependent on the input; or a dependent current source, with infinite source resistance and the output current dependent on the input. Combinations of these choices lead to four types of ideal amplifiers. In idealized form they are represented by the four types of dependent source used in linear analysis: the voltage-controlled voltage source (voltage amplifier), the current-controlled current source (current amplifier), the voltage-controlled current source (transconductance amplifier), and the current-controlled voltage source (transresistance amplifier). Each type of amplifier in its ideal form has an ideal input and output resistance that is the same as that of the corresponding dependent source. In real amplifiers the ideal impedances are not possible to achieve, but these ideal elements can be used to construct equivalent circuits of real amplifiers by adding impedances (resistance, capacitance and inductance) to the input and output. For any particular circuit, a small-signal analysis is often used to find the actual impedance. A small-signal AC test current Ix is applied to the input or output node, all external sources are set to AC zero, and the corresponding alternating voltage Vx across the test current source determines the impedance seen at that node as R = Vx / Ix. Amplifiers designed to attach to a transmission line at input and output, especially RF amplifiers, do not fit into this classification approach. Rather than dealing with voltage or current individually, they ideally couple with an input or output impedance matched to the transmission line impedance, that is, match ratios of voltage to current. Many real RF amplifiers come close to this ideal. Although, for a given appropriate source and load impedance, RF amplifiers can be characterized as amplifying voltage or current, they fundamentally amplify power.
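As a minimal illustration of the equivalent-circuit idea above, the following sketch models a voltage amplifier as an ideal dependent source with added input and output resistances and computes how source and load impedances reduce the overall gain. All component values are hypothetical, chosen only for the example.

```python
# Sketch: loading effects on a non-ideal voltage amplifier, modeled as an
# ideal voltage-controlled voltage source plus added input and output
# resistances. Values below are hypothetical, for illustration only.

def loaded_voltage_gain(a_open, r_in, r_out, r_source, r_load):
    """Overall source-to-load voltage gain of the simple amplifier model.

    The input divider (r_in against r_source) and the output divider
    (r_load against r_out) both reduce the gain below the ideal value a_open.
    """
    input_divider = r_in / (r_source + r_in)
    output_divider = r_load / (r_out + r_load)
    return a_open * input_divider * output_divider

if __name__ == "__main__":
    # Assumed figures: open-circuit gain 100, 1 Mohm input resistance,
    # 100 ohm output resistance, 10 kohm source, 1 kohm load.
    g = loaded_voltage_gain(a_open=100.0, r_in=1e6, r_out=100.0,
                            r_source=10e3, r_load=1e3)
    print(f"overall gain ~ {g:.1f}")  # the ideal dependent source alone would give 100
```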
Properties Amplifier properties are given by parameters that include:
- Gain, the ratio between the magnitude of output and input signals
- Bandwidth, the width of the useful frequency range
- Efficiency, the ratio between the power of the output and total power consumption
- Linearity, the extent to which the proportion between input and output amplitude is the same for high amplitude and low amplitude input
- Noise, a measure of undesired noise mixed into the output
- Output dynamic range, the ratio of the largest and the smallest useful output levels
- Slew rate, the maximum rate of change of the output
- Rise time, settling time, ringing and overshoot that characterize the step response
- Stability, the ability to avoid self-oscillation
Amplifiers are described according to the properties of their inputs, their outputs, and how they relate. All amplifiers have gain, a multiplication factor that relates the magnitude of some property of the output signal to a property of the input signal. The gain may be specified as the ratio of output voltage to input voltage (voltage gain), output power to input power (power gain), or some combination of current, voltage, and power. In many cases the property of the output that varies is dependent on the same property of the input, making the gain unitless (though often expressed in decibels (dB)). Most amplifiers are designed to be linear. That is, they provide constant gain for any normal input level and output signal. If an amplifier's gain is not linear, the output signal can become distorted. There are, however, cases where variable gain is useful. Certain signal processing applications use exponential gain amplifiers. Amplifiers are usually designed to function well in a specific application, for example: radio and television transmitters and receivers, high-fidelity ("hi-fi") stereo equipment, microcomputers and other digital equipment, and guitar and other instrument amplifiers. Every amplifier includes at least one active device, such as a vacuum tube or transistor. Negative feedback Negative feedback is a technique used in most modern amplifiers to increase bandwidth, reduce distortion, and control gain. In a negative feedback amplifier part of the output is fed back and added to the input in the opposite phase, subtracting from the input. The main effect is to reduce the overall gain of the system. However, any unwanted signals introduced by the amplifier, such as distortion, are also fed back. Since they are not part of the original input, they are added to the input in opposite phase, subtracting them from the input. In this way, negative feedback also reduces nonlinearity, distortion and other errors introduced by the amplifier. Large amounts of negative feedback can reduce errors to the point that the response of the amplifier itself becomes almost irrelevant as long as it has a large gain, and the output performance of the system (the "closed loop performance") is defined entirely by the components in the feedback loop. This technique is used particularly with operational amplifiers (op-amps). Non-feedback amplifiers can achieve only about 1% distortion for audio-frequency signals. With negative feedback, distortion can typically be reduced to 0.001%. Noise, even crossover distortion, can be practically eliminated. Negative feedback also compensates for changing temperatures, and degrading or nonlinear components in the gain stage, but any change or nonlinearity in the components in the feedback loop will affect the output.
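This trade can be made concrete with the standard single-loop feedback relation, closed-loop gain = A / (1 + A·β). The sketch below uses illustrative numbers (not taken from any particular amplifier) to show that the closed-loop gain is set almost entirely by the feedback fraction β, and that errors arising inside the forward path are reduced by roughly the loop-gain factor 1 + A·β.

```python
# Sketch: how negative feedback trades raw (open-loop) gain for accuracy.
# Standard single-loop relation: A_closed = A_open / (1 + A_open * beta).
# All numbers are illustrative.

def closed_loop_gain(a_open, beta):
    return a_open / (1.0 + a_open * beta)

a_open = 100_000.0   # hypothetical open-loop gain
beta = 0.01          # feedback fraction -> ideal closed-loop gain of about 100

print(closed_loop_gain(a_open, beta))        # ~99.9, set almost entirely by beta
print(closed_loop_gain(a_open * 0.5, beta))  # halving A_open barely changes the result

# Distortion and other errors generated inside the forward path are reduced
# by roughly the same factor (1 + A_open * beta), here about 1000x.
print(f"error reduction factor ~ {1.0 + a_open * beta:.0f}")
```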
Indeed, the ability of the feedback loop to define the output is used to make active filter circuits. Another advantage of negative feedback is that it extends the bandwidth of the amplifier. The concept of feedback is used in operational amplifiers to precisely define gain, bandwidth, and other parameters entirely based on the components in the feedback loop. Negative feedback can be applied at each stage of an amplifier to stabilize the operating point of active devices against minor changes in power-supply voltage or device characteristics. Some feedback, positive or negative, is unavoidable and often undesirable—introduced, for example, by parasitic elements, such as inherent capacitance between input and output of devices such as transistors, and capacitive coupling of external wiring. Excessive frequency-dependent positive feedback can produce parasitic oscillation and turn an amplifier into an oscillator. Categories Active devices All amplifiers include some form of active device: this is the device that does the actual amplification. The active device can be a vacuum tube, discrete solid state component, such as a single transistor, or part of an integrated circuit, as in an op-amp. Transistor amplifiers (or solid state amplifiers) are the most common type of amplifier in use today. A transistor is used as the active element. The gain of the amplifier is determined by the properties of the transistor itself as well as the circuit it is contained within. Common active devices in transistor amplifiers include bipolar junction transistors (BJTs) and metal oxide semiconductor field-effect transistors (MOSFETs). Applications are numerous. Some common examples are audio amplifiers in a home stereo or public address system, RF high power generation for semiconductor equipment, to RF and microwave applications such as radio transmitters. Transistor-based amplification can be realized using various configurations: for example a bipolar junction transistor can realize common base, common collector or common emitter amplification; a MOSFET can realize common gate, common source or common drain amplification. Each configuration has different characteristics. Vacuum-tube amplifiers (also known as tube amplifiers or valve amplifiers) use a vacuum tube as the active device. While semiconductor amplifiers have largely displaced valve amplifiers for low-power applications, valve amplifiers can be much more cost effective in high power applications such as radar, countermeasures equipment, and communications equipment. Many microwave amplifiers are specially designed valve amplifiers, such as the klystron, gyrotron, traveling wave tube, and crossed-field amplifier, and these microwave valves provide much greater single-device power output at microwave frequencies than solid-state devices. Vacuum tubes remain in use in some high end audio equipment, as well as in musical instrument amplifiers, due to a preference for "tube sound". Magnetic amplifiers are devices somewhat similar to a transformer where one winding is used to control the saturation of a magnetic core and hence alter the impedance of the other winding. They have largely fallen out of use due to development in semiconductor amplifiers but are still useful in HVDC control, and in nuclear power control circuitry due to not being affected by radioactivity. Negative resistances can be used as amplifiers, such as the tunnel diode amplifier. 
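As a rough illustration of how the gain of a transistor stage depends on both the device and the circuit around it (as noted above for transistor amplifiers), the sketch below estimates the small-signal voltage gain of a resistively loaded common-emitter stage using the usual approximations gm = Ic / VT and |Av| ≈ gm·RC. The bias current and collector resistor are assumed values, and second-order effects (Early effect, external load, emitter degeneration) are ignored.

```python
# Sketch: rough small-signal voltage gain of a common-emitter BJT stage,
# using gm = Ic / VT and |Av| ~ gm * Rc. Values are hypothetical.

V_T = 0.025   # thermal voltage at room temperature, ~25 mV
I_C = 1e-3    # collector bias current, 1 mA (assumed)
R_C = 4.7e3   # collector resistor, 4.7 kohm (assumed)

g_m = I_C / V_T            # transconductance, ~40 mA/V
voltage_gain = -g_m * R_C  # output is inverted, hence the minus sign

print(f"gm ~ {g_m * 1e3:.0f} mA/V, Av ~ {voltage_gain:.0f}")  # about -188
```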
Power amplifiers A power amplifier is an amplifier designed primarily to increase the power available to a load. In practice, amplifier power gain depends on the source and load impedances, as well as the inherent voltage and current gain. A radio frequency (RF) amplifier design typically optimizes impedances for power transfer, while audio and instrumentation amplifier designs normally optimize input and output impedance for least loading and highest signal integrity. An amplifier that is said to have a gain of 20 dB might have a voltage gain of 20 dB and an available power gain of much more than 20 dB (power ratio of 100)—yet actually deliver a much lower power gain if, for example, the input is from a 600 Ω microphone and the output connects to a 47 kΩ input socket for a power amplifier. In general, the power amplifier is the last 'amplifier' or actual circuit in a signal chain (the output stage) and is the amplifier stage that requires attention to power efficiency. Efficiency considerations lead to the various classes of power amplifiers based on the biasing of the output transistors or tubes: see power amplifier classes below. Audio power amplifiers are typically used to drive loudspeakers. They will often have two output channels and deliver equal power to each. An RF power amplifier is found in radio transmitter final stages. A servo motor controller amplifies a control voltage to adjust the speed of a motor, or the position of a motorized system. Operational amplifiers (op-amps) An operational amplifier is an amplifier circuit which typically has very high open loop gain and differential inputs. Op amps have become very widely used as standardized "gain blocks" in circuits due to their versatility; their gain, bandwidth and other characteristics can be controlled by feedback through an external circuit. Though the term today commonly applies to integrated circuits, the original operational amplifier design used valves, and later designs used discrete transistor circuits. A fully differential amplifier is similar to the operational amplifier, but also has differential outputs. These are usually constructed using BJTs or FETs. Distributed amplifiers These use balanced transmission lines to separate individual single stage amplifiers, the outputs of which are summed by the same transmission line. The transmission line is a balanced type with the input at one end and on one side only of the balanced transmission line and the output at the opposite end is also the opposite side of the balanced transmission line. The gain of each stage adds linearly to the output rather than multiplies one on the other as in a cascade configuration. This allows a higher bandwidth to be achieved than could otherwise be realised even with the same gain stage elements. Switched mode amplifiers These nonlinear amplifiers have much higher efficiencies than linear amps, and are used where the power saving justifies the extra complexity. Class-D amplifiers are the main example of this type of amplification. Negative resistance amplifier A negative resistance amplifier is a type of regenerative amplifier that can use the feedback between the transistor's source and gate to transform a capacitive impedance on the transistor's source to a negative resistance on its gate. Compared to other types of amplifiers, a negative resistance amplifier will require only a tiny amount of power to achieve very high gain, maintaining a good noise figure at the same time. 
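The 20 dB example given above can be put in numbers. The sketch below takes the 600 Ω source and 47 kΩ load from the text, adds an arbitrary input level, and compares the nominal voltage gain in decibels with the gain in power actually delivered to the load; the comparison is deliberately rough and only meant to show how far apart the two figures can be.

```python
import math

# Sketch: why a "20 dB" voltage gain can correspond to a very different
# delivered power gain when source and load impedances differ widely.
# The 600 ohm source and 47 kohm load come from the example in the text;
# the input voltage level is an arbitrary assumption.

def db_voltage(ratio):
    return 20 * math.log10(ratio)

def db_power(ratio):
    return 10 * math.log10(ratio)

v_in = 0.01          # 10 mV input signal (assumed)
voltage_gain = 10.0  # a factor of 10, i.e. 20 dB of voltage gain
v_out = v_in * voltage_gain

r_source = 600.0     # microphone source impedance
r_load = 47_000.0    # power-amplifier input impedance

p_in = v_in ** 2 / r_source   # rough proxy for signal power at the source impedance
p_out = v_out ** 2 / r_load   # power actually delivered to the 47 k load

print(f"voltage gain: {db_voltage(voltage_gain):.1f} dB")
print(f"delivered power gain: {db_power(p_out / p_in):.1f} dB")  # only about 1 dB here
```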
Applications Video amplifiers Video amplifiers are designed to process video signals and have varying bandwidths depending on whether the video signal is for SDTV, EDTV, HDTV 720p or 1080i/p etc. The specification of the bandwidth itself depends on what kind of filter is used—and at which point (the −1 dB or −3 dB point, for example) the bandwidth is measured. Certain requirements for step response and overshoot are necessary for an acceptable TV image. Microwave amplifiers Traveling wave tube amplifiers (TWTAs) are used for high power amplification at low microwave frequencies. They typically can amplify across a broad spectrum of frequencies; however, they are usually not as tunable as klystrons. Klystrons are specialized linear-beam vacuum devices, designed to provide high power, widely tunable amplification of millimetre and sub-millimetre waves. Klystrons are designed for large scale operations and, despite having a narrower bandwidth than TWTAs, they have the advantage of coherently amplifying a reference signal so its output may be precisely controlled in amplitude, frequency and phase. Solid-state devices such as silicon short channel MOSFETs like double-diffused metal–oxide–semiconductor (DMOS) FETs, GaAs FETs, SiGe and GaAs heterojunction bipolar transistors/HBTs, HEMTs, IMPATT diodes, and others, are used especially at lower microwave frequencies and power levels on the order of watts, specifically in applications like portable RF terminals/cell phones and access points, where size and efficiency are the drivers. New materials like gallium nitride (GaN), or GaN on silicon or on silicon carbide/SiC, are emerging in HEMT transistors and in applications where improved efficiency, wide bandwidth, operation roughly from a few GHz to a few tens of GHz, and output power from a few watts to a few hundred watts are needed. Depending on the amplifier specifications and size requirements, microwave amplifiers can be realised as monolithically integrated, integrated as modules, or based on discrete parts, or any combination of those. The maser is a non-electronic microwave amplifier. Musical instrument amplifiers Instrument amplifiers are a range of audio power amplifiers used to increase the sound level of musical instruments, for example guitars, during performances. An amplifier's tone mainly comes from the order and amount in which it applies EQ and distortion. Classification of amplifier stages and systems Common terminal One set of classifications for amplifiers is based on which device terminal is common to both the input and the output circuit. In the case of bipolar junction transistors, the three classes are common emitter, common base, and common collector. For field-effect transistors, the corresponding configurations are common source, common gate, and common drain; for vacuum tubes, common cathode, common grid, and common plate. The common emitter (or common source, common cathode, etc.) is most often configured to provide amplification of a voltage applied between base and emitter, and the output signal taken between collector and emitter is inverted, relative to the input. The common collector arrangement applies the input voltage between base and collector, and takes the output voltage between emitter and collector. This causes negative feedback, and the output voltage tends to follow the input voltage. This arrangement is also used because the input presents a high impedance and does not load the signal source, though the voltage amplification is less than one.
The common-collector circuit is, therefore, better known as an emitter follower, source follower, or cathode follower. Unilateral or bilateral An amplifier whose output exhibits no feedback to its input side is described as 'unilateral'. The input impedance of a unilateral amplifier is independent of load, and output impedance is independent of signal source impedance. An amplifier that uses feedback to connect part of the output back to the input is a bilateral amplifier. Bilateral amplifier input impedance depends on the load, and output impedance on the signal source impedance. All amplifiers are bilateral to some degree; however, they may often be modeled as unilateral under operating conditions where feedback is small enough to neglect for most purposes, simplifying analysis (see the common base article for an example). Inverting or non-inverting Another way to classify amplifiers is by the phase relationship of the input signal to the output signal. An 'inverting' amplifier produces an output 180 degrees out of phase with the input signal (that is, a polarity inversion or mirror image of the input as seen on an oscilloscope). A 'non-inverting' amplifier maintains the phase of the input signal waveforms. An emitter follower is a type of non-inverting amplifier, indicating that the signal at the emitter of a transistor is following (that is, matching with unity gain but perhaps an offset) the input signal. A voltage follower is also a non-inverting amplifier, with unity gain. This description can apply to a single stage of an amplifier, or to a complete amplifier system. Function Other amplifiers may be classified by their function or output characteristics. These functional descriptions usually apply to complete amplifier systems or sub-systems and rarely to individual stages. A servo amplifier indicates an integrated feedback loop to actively control the output at some desired level. A DC servo indicates use at frequencies down to DC levels, where the rapid fluctuations of an audio or RF signal do not occur. These are often used in mechanical actuators, or devices such as DC motors that must maintain a constant speed or torque. An AC servo amplifier can do this for some AC motors. A linear amplifier responds to different frequency components independently, and does not generate harmonic distortion or intermodulation distortion. No amplifier can provide perfect linearity (even the most linear amplifier has some nonlinearities, since the amplifying devices—transistors or vacuum tubes—follow nonlinear power laws such as square-laws and rely on circuitry techniques to reduce those effects). A nonlinear amplifier generates significant distortion and so changes the harmonic content; there are situations where this is useful. Amplifier circuits intentionally providing a non-linear transfer function include: A device like a silicon controlled rectifier or a transistor used as a switch may be employed to turn either fully on or off a load such as a lamp based on a threshold in a continuously variable input. A non-linear amplifier in an analog computer or true RMS converter, for example, can provide a special transfer function, such as logarithmic or square-law. A Class C RF amplifier may be chosen because it can be very efficient—but is non-linear.
Following such an amplifier with a so-called tank tuned circuit can reduce unwanted harmonics (distortion) sufficiently to make it useful in transmitters, or some desired harmonic may be selected by setting the resonant frequency of the tuned circuit to a higher frequency rather than the fundamental frequency in frequency multiplier circuits. Automatic gain control circuits require that an amplifier's gain be controlled by the time-averaged amplitude so that the output amplitude varies little when weak stations are being received. The non-linearities are assumed arranged so the relatively small signal amplitude suffers from little distortion (cross-channel interference or intermodulation) yet is still modulated by the relatively large gain-control DC voltage. AM detector circuits that use amplification, such as anode-bend detectors, precision rectifiers and infinite impedance detectors (so excluding unamplified detectors such as cat's-whisker detectors), as well as peak detector circuits, rely on changes in amplification based on the signal's instantaneous amplitude to derive a direct current from an alternating current input. Operational amplifier comparator and detector circuits are further examples. A wideband amplifier has a precise amplification factor over a wide frequency range, and is often used to boost signals for relay in communications systems. A narrowband amp amplifies a specific narrow range of frequencies, to the exclusion of other frequencies. An RF amplifier amplifies signals in the radio frequency range of the electromagnetic spectrum, and is often used to increase the sensitivity of a receiver or the output power of a transmitter. An audio amplifier amplifies audio frequencies. This category subdivides into small signal amplification, and power amps that are optimised for driving speakers, sometimes with multiple amps grouped together as separate or bridgeable channels to accommodate different audio reproduction requirements. Frequently used terms within audio amplifiers include:
- Preamplifier (preamp), which may include a phono preamp with RIAA equalization, or tape head preamps with CCIR equalisation filters. They may include filters or tone control circuitry.
- Power amplifier (normally drives loudspeakers), headphone amplifiers, and public address amplifiers.
- Stereo amplifiers imply two channels of output (left and right), though the term simply means "solid" sound (referring to three-dimensional)—so quadraphonic stereo was used for amplifiers with four channels. 5.1 and 7.1 systems refer to home theatre systems with 5 or 7 normal spatial channels, plus a subwoofer channel.
- Buffer amplifiers, which may include emitter followers, provide a high impedance input for a device (perhaps another amplifier, or perhaps an energy-hungry load such as lights) that would otherwise draw too much current from the source. Line drivers are a type of buffer that feeds long or interference-prone interconnect cables, possibly with differential outputs through twisted pair cables.
Interstage coupling method Amplifiers are sometimes classified by the coupling method of the signal at the input, output, or between stages. Different types of these include: Resistive-capacitive (RC) coupled amplifiers, using a network of resistors and capacitors. By design these amplifiers cannot amplify DC signals, as the capacitors block the DC component of the input signal. RC-coupled amplifiers were used very often in circuits with vacuum tubes or discrete transistors. In the days of the integrated circuit, a few more transistors on a chip are much cheaper and smaller than a capacitor.
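The low-frequency limit that RC coupling imposes follows from the high-pass corner formed by the coupling capacitor and the next stage's input resistance, f_c = 1/(2πRC). A small sketch with hypothetical component values:

```python
import math

# Sketch: low-frequency limit of an RC interstage coupling network.
# The coupling capacitor and the next stage's input resistance form a
# high-pass filter with corner frequency f_c = 1 / (2 * pi * R * C);
# DC is blocked entirely. Component values are hypothetical.

def coupling_corner_hz(r_next_stage_ohms, c_coupling_farads):
    return 1.0 / (2.0 * math.pi * r_next_stage_ohms * c_coupling_farads)

print(coupling_corner_hz(10e3, 1e-6))    # 10 kohm into 1 uF   -> ~16 Hz
print(coupling_corner_hz(10e3, 100e-9))  # 10 kohm into 100 nF -> ~160 Hz
```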
Inductive-capacitive (LC) coupled amplifiers, using a network of inductors and capacitors. This kind of amplifier is most often used in selective radio-frequency circuits. Transformer coupled amplifiers, using a transformer to match impedances or to decouple parts of the circuits. Quite often LC-coupled and transformer-coupled amplifiers cannot be distinguished, as a transformer is some kind of inductor. Direct coupled amplifiers, using no impedance and bias matching components. This class of amplifier was very uncommon in the vacuum tube days, when the anode (output) voltage was at greater than several hundred volts and the grid (input) voltage at a few volts negative. So they were used only if the gain was specified down to DC (e.g., in an oscilloscope). In the context of modern electronics, developers are encouraged to use directly coupled amplifiers whenever possible. In FET and CMOS technologies direct coupling is dominant since gates of MOSFETs theoretically pass no current through themselves. Therefore, the DC component of the input signals is automatically filtered. Frequency range Depending on the frequency range and other properties, amplifiers are designed according to different principles. Frequency ranges down to DC are used only when this property is needed. Amplifiers for direct current signals are vulnerable to minor variations in the properties of components with time. Special methods, such as chopper stabilized amplifiers, are used to prevent objectionable drift in the amplifier's properties for DC. "DC-blocking" capacitors can be added to remove DC and sub-sonic frequencies from audio amplifiers. Depending on the frequency range specified, different design principles must be used. Up to the MHz range only "discrete" properties need be considered; e.g., a terminal has an input impedance. As soon as any connection within the circuit gets longer than perhaps 1% of the wavelength of the highest specified frequency (e.g., at 100 MHz the wavelength is 3 m, so the critical connection length is approx. 3 cm) design properties radically change. For example, a specified length and width of a PCB trace can be used as a selective or impedance-matching entity. Above a few hundred MHz, it gets difficult to use discrete elements, especially inductors. In most cases, PCB traces of very closely defined shapes are used instead (stripline techniques). The frequency range handled by an amplifier might be specified in terms of bandwidth (normally implying a response that is 3 dB down when the frequency reaches the specified bandwidth), or by specifying a frequency response that is within a certain number of decibels between a lower and an upper frequency (e.g. "20 Hz to 20 kHz plus or minus 1 dB"). Power amplifier classes Power amplifier circuits (output stages) are classified as A, B, AB and C for analog designs—and class D and E for switching designs. The power amplifier classes are based on the proportion of each input cycle (conduction angle) during which an amplifying device passes current. The image of the conduction angle derives from amplifying a sinusoidal signal. If the device is always on, the conducting angle is 360°. If it is on for only half of each cycle, the angle is 180°. The angle of flow is closely related to the amplifier power efficiency.
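The relationship between bias, conduction angle, and class can be sketched numerically. The toy model below assumes an idealized device that conducts whenever the bias plus a sinusoidal signal exceeds a turn-on threshold; the bias levels are hypothetical and the class boundaries follow the conventional 360° and 180° definitions given above.

```python
import math

# Sketch: conduction angle for an idealized amplifying device that conducts
# whenever the instantaneous input (bias + signal) is above its turn-on
# threshold. Simplified model, only to illustrate how class A/AB/B/C
# boundaries relate to the conduction angle.

def conduction_angle_deg(bias, amplitude, threshold=0.0):
    """Portion of a sine cycle the device conducts, in degrees (0 to 360)."""
    if bias - amplitude >= threshold:
        return 360.0                     # always on: class A
    if bias + amplitude <= threshold:
        return 0.0                       # never on
    # Conducts while bias + amplitude*sin(theta) > threshold.
    x = (threshold - bias) / amplitude
    return 2.0 * math.degrees(math.acos(x))

def classify(angle_deg):
    if angle_deg >= 360.0:
        return "A"
    if angle_deg > 180.0:
        return "AB"
    if angle_deg == 180.0:
        return "B"
    return "C"

for bias in (2.0, 0.3, 0.0, -0.5):       # hypothetical bias levels, 1 V signal
    a = conduction_angle_deg(bias, amplitude=1.0)
    print(f"bias {bias:+.1f} V -> {a:5.1f} deg, class {classify(a)}")
```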
Example amplifier circuit The practical amplifier circuit described here could be the basis for a moderate-power audio amplifier. It features a typical (though substantially simplified) design as found in modern amplifiers, with a class-AB push–pull output stage, and uses some overall negative feedback. Bipolar transistors are used in this description, but the design would also be realizable with FETs or valves. The input signal is coupled through capacitor C1 to the base of transistor Q1. The capacitor allows the AC signal to pass, but blocks the DC bias voltage established by resistors R1 and R2 so that any preceding circuit is not affected by it. Q1 and Q2 form a differential amplifier (an amplifier that multiplies the difference between two inputs by some constant), in an arrangement known as a long-tailed pair. This arrangement is used to conveniently allow the use of negative feedback, which is fed from the output to Q2 via R7 and R8. The negative feedback into the difference amplifier allows the amplifier to compare the input to the actual output. The amplified signal from Q1 is directly fed to the second stage, Q3, which is a common emitter stage that provides further amplification of the signal and the DC bias for the output stages, Q4 and Q5. R6 provides the load for Q3 (a better design would probably use some form of active load here, such as a constant-current sink). So far, all of the amplifier is operating in class A. The output pair are arranged in class-AB push–pull, also called a complementary pair. They provide the majority of the current amplification (while consuming low quiescent current) and directly drive the load, connected via DC-blocking capacitor C2. The diodes D1 and D2 provide a small amount of constant voltage bias for the output pair, just biasing them into the conducting state so that crossover distortion is minimized. That is, the diodes push the output stage firmly into class-AB mode (assuming that the base-emitter drop of the output transistors is reduced by heat dissipation). This design is simple, but a good basis for a practical design because it automatically stabilises its operating point, since feedback internally operates from DC up through the audio range and beyond. Further circuit elements would probably be found in a real design that would roll off the frequency response above the needed range to prevent the possibility of unwanted oscillation. Also, the use of fixed diode bias as described here can cause problems if the diodes are not both electrically and thermally matched to the output transistors; if the output transistors turn on too much, they can easily overheat and destroy themselves, as the full current from the power supply is not limited at this stage. A common solution to help stabilise the output devices is to include some emitter resistors, typically one ohm or so. Calculating the values of the circuit's resistors and capacitors is done based on the components employed and the intended use of the amp.
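In that spirit, here is a hypothetical back-of-the-envelope calculation for the input network described above (the R1/R2 bias divider and coupling capacitor C1). The designators follow the description, but every numeric value is an assumption chosen for illustration, not a worked design.

```python
import math

# Sketch: back-of-the-envelope sizing for the input network described above
# (R1/R2 bias divider and coupling capacitor C1). All numbers are assumed;
# a real design would start from the chosen transistors, supply voltage and
# intended load.

V_SUPPLY = 30.0   # assumed single supply rail, volts
R1 = 47e3         # upper divider resistor (assumed)
R2 = 47e3         # lower divider resistor (assumed)
C1 = 2.2e-6       # input coupling capacitor (assumed), farads
F_LOW = 20.0      # lowest audio frequency of interest, Hz

# DC bias voltage presented to Q1's base by the R1/R2 divider
# (ignoring base current and the transistor's own input resistance).
v_bias = V_SUPPLY * R2 / (R1 + R2)

# C1 and the divider's Thevenin resistance form a high-pass filter;
# its corner should sit comfortably below F_LOW.
r_thevenin = (R1 * R2) / (R1 + R2)
f_corner = 1.0 / (2.0 * math.pi * r_thevenin * C1)

print(f"bias ~ {v_bias:.1f} V, input corner ~ {f_corner:.1f} Hz "
      f"({'ok' if f_corner < F_LOW else 'raise C1'})")
```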
Notes on implementation Any real amplifier is an imperfect realization of an ideal amplifier. An important limitation of a real amplifier is that the output it generates is ultimately limited by the power available from the power supply. An amplifier saturates and clips the output if the input signal becomes too large for the amplifier to reproduce or exceeds operational limits for the device. The power supply may influence the output, so must be considered in the design. The power output from an amplifier cannot exceed the total power it draws from the input signal and the power supply. The amplifier circuit has an "open loop" performance. This is described by various parameters (gain, slew rate, output impedance, distortion, bandwidth, signal-to-noise ratio, etc.). Many modern amplifiers use negative feedback techniques to hold the gain at the desired value and reduce distortion. Negative loop feedback has the intended effect of lowering the output impedance and thereby increasing electrical damping of loudspeaker motion at and near the resonance frequency of the speaker. When assessing rated amplifier power output, it is useful to consider the applied load, the signal type (e.g., speech or music), required power output duration (i.e., short-time or continuous), and required dynamic range (e.g., recorded or live audio). In high-powered audio applications that require long cables to the load (e.g., cinemas and shopping centres) it may be more efficient to connect to the load at line output voltage, with matching transformers at source and loads. This avoids long runs of heavy speaker cables. To prevent instability or overheating, care is required to ensure that solid-state amplifiers are adequately loaded. Most have a rated minimum load impedance. All amplifiers generate heat through electrical losses. The amplifier must dissipate this heat via convection or forced air cooling. Heat can damage or reduce electronic component service life. Designers and installers must also consider heating effects on adjacent equipment. Different power supply types result in many different methods of bias. Bias is a technique by which active devices are set to operate in a particular region, or by which the DC component of the output signal is set to the midpoint between the maximum voltages available from the power supply. Most amplifiers use several devices at each stage; they are typically matched in specifications except for polarity. Matched inverted polarity devices are called complementary pairs. Class-A amplifiers generally use only one device, unless the power supply is set to provide both positive and negative voltages, in which case a dual device symmetrical design may be used. Class-C amplifiers, by definition, use a single polarity supply. Amplifiers often have multiple stages in cascade to increase gain. Each stage of these designs may be a different type of amp to suit the needs of that stage. For instance, the first stage might be a class-A stage, feeding a class-AB push–pull second stage, which then drives a class-G final output stage, taking advantage of the strengths of each type, while minimizing their weaknesses. Special types include:
- Charge transfer amplifier
- CMOS amplifiers
- Current sense amplifier
- Distributed amplifier
- Doherty amplifier
- Double-tuned amplifier
- Faithful amplifier
- Intermediate power amplifier
- Low-noise amplifier
- Negative feedback amplifier
- Optical amplifier
- Programmable-gain amplifier
- Tuned amplifier
- Valve amplifier
See also: Power added efficiency. External links: AES guide to amplifier classes contains an explanation of different amplifier classes.
https://en.wikipedia.org/wiki/Erwin%20Schr%C3%B6dinger
Erwin Rudolf Josef Alexander Schrödinger (12 August 1887 – 4 January 1961), sometimes written as Schroedinger or Schrodinger, was a Nobel Prize–winning Austrian and naturalized Irish physicist who developed fundamental results in quantum theory. In particular, he is recognized for postulating the Schrödinger equation, an equation that provides a way to calculate the wave function of a system and how it changes dynamically in time. Schrödinger coined the term "quantum entanglement", and was the earliest to discuss it, doing so in 1932. He also anticipated the many-worlds interpretation of quantum mechanics. In addition, he wrote many works on various aspects of physics: statistical mechanics and thermodynamics, physics of dielectrics, colour theory, electrodynamics, general relativity, and cosmology, and he made several attempts to construct a unified field theory. In his book What Is Life? Schrödinger addressed the problems of genetics, looking at the phenomenon of life from the point of view of physics. He also paid great attention to the philosophical aspects of science, ancient and oriental philosophical concepts, ethics, and religion. He also wrote on philosophy and theoretical biology. In popular culture, he is best known for his "Schrödinger's cat" thought experiment. Spending most of his life as an academic with positions at various universities, Schrödinger, along with Paul Dirac, won the Nobel Prize in Physics in 1933 for his work on quantum mechanics, the same year he left Germany due to his opposition to Nazism. In his personal life, he lived with both his wife and his mistress, which may have led to problems causing him to leave his position at Oxford. Subsequently, until 1938, he had a position in Graz, Austria, until the Nazi takeover, when he fled, finally finding a long-term arrangement in Dublin, Ireland, where he remained until retirement in 1955, and where he sexually abused several minors. Biography Early years Schrödinger was born in Vienna, Austria, on 12 August 1887, to Rudolf Schrödinger (producer, botanist) and Georgine Emilia Brenda Schrödinger (née Bauer), the daughter of a professor of chemistry at TU Wien. He was their only child. His mother was of half Austrian and half English descent; his father was Catholic and his mother was Lutheran. He himself was an atheist. However, he had strong interests in Eastern religions and pantheism, and he used religious symbolism in his works. He also believed his scientific work was an approach to divinity in an intellectual sense. He was also able to learn English outside school, as his maternal grandmother was British. Between 1906 and 1910 (the year he earned his doctorate) Schrödinger studied at the University of Vienna under the physicists Franz S. Exner (1849–1926) and Friedrich Hasenöhrl (1874–1915). He received his doctorate at Vienna under Hasenöhrl. He also conducted experimental work with Karl Wilhelm Friedrich "Fritz" Kohlrausch. In 1911, Schrödinger became an assistant to Exner. Middle years In 1914 Schrödinger achieved habilitation (venia legendi). Between 1914 and 1918 he participated in war work as a commissioned officer in the Austrian fortress artillery (Gorizia, Duino, Sistiana, Prosecco, Vienna). In 1920 he became the assistant to Max Wien, in Jena, and in September 1920 he attained the position of ao. Prof. (ausserordentlicher Professor), roughly equivalent to Reader (UK) or associate professor (US), in Stuttgart. In 1921, he became o. Prof. (ordentlicher Professor, i.e. full professor), in Breslau (now Wrocław, Poland).
In 1921, he moved to the University of Zürich. In 1927, he succeeded Max Planck at the Friedrich Wilhelm University in Berlin. In 1933, Schrödinger decided to leave Germany because he strongly disapproved of the Nazis' antisemitism. He became a Fellow of Magdalen College at the University of Oxford. Soon after he arrived, he received the Nobel Prize in Physics together with Paul Dirac. His position at Oxford did not work out well; his unconventional domestic arrangements, sharing living quarters with two women, were not met with acceptance. In 1934, Schrödinger lectured at Princeton University; he was offered a permanent position there, but did not accept it. Again, his wish to set up house with his wife and his mistress may have created a problem. He had the prospect of a position at the University of Edinburgh but visa delays occurred, and in the end he took up a position at the University of Graz in Austria in 1936. He had also accepted the offer of chair position at Department of Physics, Allahabad University in India. In the midst of these tenure issues in 1935, after extensive correspondence with Albert Einstein, he proposed what is now called the "Schrödinger's cat" thought experiment. Later years In 1938, after the Anschluss, Schrödinger had problems in Graz because of his flight from Germany in 1933 and his known opposition to Nazism. He issued a statement recanting this opposition. He later regretted doing so and explained the reason to Einstein: "I wanted to remain free – and could not do so without great duplicity". However, this did not fully appease the new dispensation and the University of Graz dismissed him from his post for political unreliability. He suffered harassment and was instructed not to leave the country. He and his wife, however, fled to Italy. From there, he went to visiting positions in Oxford and Ghent University. In the same year he received a personal invitation from Ireland's Taoiseach, Éamon de Valera – a mathematician himself – to reside in Ireland and agreed to help establish an Institute for Advanced Studies in Dublin. He moved to Kincora Road, Clontarf, Dublin, and lived modestly. A plaque has been erected at his Clontarf residence and at the address of his workplace in Merrion Square. Schrödinger believed that as an Austrian he had a unique relationship to Ireland. In October 1940, a writer from the Irish Press interviewed Schrödinger who spoke of Celtic heritage of Austrians, saying: "I believe there is a deeper connection between us Austrians and the Celts. Names of places in the Austrian Alps are said to be of Celtic origin." He became the Director of the School for Theoretical Physics in 1940 and remained there for 17 years. He became a naturalized Irish citizen in 1948, but also retained his Austrian citizenship. He wrote around 50 further publications on various topics, including his explorations of unified field theory. In 1944, he wrote What Is Life?, which contains a discussion of negentropy and the concept of a complex molecule with the genetic code for living organisms. According to James D. Watson's memoir, DNA, the Secret of Life, Schrödinger's book gave Watson the inspiration to research the gene, which led to the discovery of the DNA double helix structure in 1953. Similarly, Francis Crick, in his autobiographical book What Mad Pursuit, described how he was influenced by Schrödinger's speculations about how genetic information might be stored in molecules. Schrödinger stayed in Dublin until retiring in 1955. 
A manuscript "Fragment from an unpublished dialogue of Galileo" from this time resurfaced at The King's Hospital boarding school, Dublin after it was written for the School's 1955 edition of their Blue Coat to celebrate his leaving of Dublin to take up his appointment as Chair of Physics at the University of Vienna. In 1956, he returned to Vienna (chair ad personam). At an important lecture during the World Energy Conference he refused to speak on nuclear energy because of his scepticism about it and gave a philosophical lecture instead. During this period, Schrödinger turned from mainstream quantum mechanics' definition of wave–particle duality and promoted the wave idea alone, causing much controversy. Tuberculosis and death Schrödinger suffered from tuberculosis and several times in the 1920s stayed at a sanatorium in Arosa in Switzerland. It was there that he formulated his wave equation. On 4 January 1961, Schrödinger died of tuberculosis, aged 73, in Vienna. He left Anny a widow, and was buried in Alpbach, Austria, in a Catholic cemetery. Although he was not Catholic, the priest in charge of the cemetery permitted the burial after learning Schrödinger was a member of the Pontifical Academy of Sciences. Personal life On April 6, 1920, Schrödinger married Annemarie (Anny) Bertel. When he migrated to Ireland in 1938, he obtained visas for himself, his wife and also another woman, Hilde March. March was the wife of an Austrian colleague and Schrödinger had fathered a daughter with her in 1934. Schrödinger wrote to the Taoiseach, Éamon de Valera personally, so as to obtain a visa for March. In October 1939 the ménage à trois duly took up residence in Dublin. His wife, Anny (born 3 December 1896), died on 3 October 1965. One of Schrödinger's grandchildren, Terry Rudolph, has followed in his footsteps as a quantum physicist, and teaches at Imperial College London. Sexual abuse allegations At the age of 39, Schrödinger tutored a 14-year-old girl named "Ithi" Junger. Walter Moore relates in his 1989 biography of Schrödinger that the lessons "included 'a fair amount of petting and cuddling and Schrödinger "had fallen in love with his pupil". Moore further relates that "not long after her seventeenth birthday, they became lovers". The relationship continued and in 1932 she became pregnant (then aged 20). "Erwin tried to persuade her to have the child; he said he would take care of it, but he did not offer to divorce [wife] Anny... in desperation, Ithi arranged for an abortion." Moore describes Schrödinger having a 'Lolita complex'. He quotes from Schrödinger's diary from the time where he said that "men of strong, genuine intellectuality are immensely attracted only by women who, forming the very beginning of the intellectual series, are as nearly connected to the preferred springs of nature as they". A 2021 Irish Times article summarized this as a "predilection for teenage girls", and denounced Schrödinger as "a serial abuser whose behaviour fitted the profile of a paedophile in the widely understood sense of that term". Schrödinger's grandson and his mother were unhappy with the accusation made by Moore, and once the biography was published, their family broke off contact with him. Carlo Rovelli notes in his book Helgoland that Schrödinger "always kept a number of relationships going at once – and made no secret of his fascination with preadolescent girls". 
In Ireland, Rovelli writes, he fathered children with two students identified in a Der Standard article as being a 26-year-old and a married political activist of unknown age. Moore's book described both of these episodes, giving the name Kate Nolan as a pseudonym for the first and naming the other as Sheila May, though neither were students. The book also described an episode of Schrödinger being "infatuated" with a twelve-year-old girl, Barbara MacEntee, while in Ireland. He desisted from attentions after a "serious word" from someone, and later "listed her among the unrequited loves of his life." This episode from the book was highlighted by the Irish Times article and others. Walter Moore stated that Schrödinger's attitude towards women was "that of a male supremacist", but that he disliked the "official misogyny" at Oxford which socially excluded women. Helge Kragh, in his review of Moore's biography, said the "conquest of women, especially very young women, was the salt of life for this sincere romantic and male chauvinist". The physics department of Trinity College Dublin announced in January 2022 that they would recommend that a lecture theatre that had been named for Schrödinger since the 1990s be renamed in light of his history of sexual abuse, while a picture of the scientist would be removed, and the renaming of an eponymous lecture series would be considered. Academic interests and life of the mind Early in his life, Schrödinger experimented in the fields of electrical engineering, atmospheric electricity, and atmospheric radioactivity, but he usually worked with his former teacher Franz Exner. He also studied vibrational theory, the theory of Brownian motion, and mathematical statistics. In 1912, at the request of the editors of the Handbook of Electricity and Magnetism, Schrödinger wrote an article titled Dielectrism. That same year, Schrödinger gave a theoretical estimate of the probable height distribution of radioactive substances, which is required to explain the observed radioactivity of the atmosphere, and in August 1913 executed several experiments in Seeham that confirmed his theoretical estimate and those of Victor Franz Hess. For this work, Schrödinger was awarded the 1920 Haitinger Prize (Haitinger-Preis) of the Austrian Academy of Sciences. Other experimental studies conducted by the young researcher in 1914 included checking formulas for capillary pressure in gas bubbles and studying the properties of soft beta radiation produced by gamma rays striking a metal surface. The last of these he performed together with his friend Fritz Kohlrausch. In 1919, Schrödinger performed his last physical experiment, on coherent light, and subsequently focused on theoretical studies. Quantum mechanics New quantum theory In the first years of his career, Schrödinger became acquainted with the ideas of the old quantum theory, developed in the works of Einstein, Max Planck, Niels Bohr, Arnold Sommerfeld, and others. This knowledge helped him work on some problems in theoretical physics, but the Austrian scientist at the time was not yet ready to part with the traditional methods of classical physics. Schrödinger's first publications about atomic theory and the theory of spectra began to emerge only from the beginning of the 1920s, after his personal acquaintance with Sommerfeld and Wolfgang Pauli and his move to Germany.
In January 1921, Schrödinger finished his first article on this subject, treating, within the framework of the Bohr–Sommerfeld quantization, the effect of the interaction of electrons on some features of the spectra of the alkali metals. Of particular interest to him was the introduction of relativistic considerations in quantum theory. In autumn 1922, he analyzed the electron orbits in an atom from a geometric point of view, using methods developed by his friend Hermann Weyl. This work, in which it was shown that quantum orbits are associated with certain geometric properties, was an important step in predicting some of the features of wave mechanics. Earlier in the same year, he developed the theory of the relativistic Doppler effect for spectral lines, based on the hypothesis of light quanta and considerations of energy and momentum. He liked the idea of his teacher Exner on the statistical nature of the conservation laws, so he enthusiastically embraced the BKS theory of Bohr, Hans Kramers, and John C. Slater, which suggested the possibility of violation of these laws in individual atomic processes (for example, in the process of emission of radiation). Although the Bothe–Geiger coincidence experiment soon cast doubt on this, the idea of energy as a statistical concept was a lifelong attraction for Schrödinger, and he discussed it in some reports and publications. Creation of wave mechanics In January 1926, Schrödinger published in Annalen der Physik the paper "Quantisierung als Eigenwertproblem" (Quantization as an Eigenvalue Problem) on wave mechanics and presented what is now known as the Schrödinger equation. In this paper, he gave a "derivation" of the wave equation for time-independent systems and showed that it gave the correct energy eigenvalues for a hydrogen-like atom. This paper has been universally celebrated as one of the most important achievements of the twentieth century and created a revolution in most areas of quantum mechanics and indeed of all physics and chemistry. A second paper was submitted just four weeks later that solved the quantum harmonic oscillator, rigid rotor, and diatomic molecule problems and gave a new derivation of the Schrödinger equation. A third paper, published in May, showed the equivalence of his approach to that of Werner Heisenberg's matrix mechanics and gave the treatment of the Stark effect. A fourth paper in this series showed how to treat problems in which the system changes with time, as in scattering problems. In this paper, he introduced a complex solution to the wave equation in order to prevent the occurrence of fourth- and sixth-order differential equations. Schrödinger ultimately reduced the order of the equation to one. Schrödinger was not entirely comfortable with the implications of quantum theory, referring to his theory as "wave mechanics". He wrote about the probability interpretation of quantum mechanics, saying, "I don't like it, and I'm sorry I ever had anything to do with it." (Just in order to ridicule the Copenhagen interpretation of quantum mechanics, he contrived the famous thought experiment called Schrödinger's cat paradox and was said to have angrily complained to his students that "now the damned Göttingen physicists use my beautiful wave mechanics for calculating their shitty matrix elements.")
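For reference, the equation introduced in these 1926 papers is usually written today, in modern notation rather than Schrödinger's original, in a time-independent (eigenvalue) form and a time-dependent form:

\[ -\frac{\hbar^{2}}{2m}\nabla^{2}\psi(\mathbf{r}) + V(\mathbf{r})\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r}) \]

\[ i\hbar\,\frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})\right]\Psi(\mathbf{r},t) \]

Here \(\psi\) (or \(\Psi\)) is the wave function, \(V\) the potential energy, \(m\) the particle's mass, and \(E\) an energy eigenvalue; the first form is the eigenvalue problem of the first paper, the second the general time-dependent form treated in the fourth.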
In order to ridicule the Copenhagen interpretation of quantum mechanics, he contrived the famous thought experiment called Schrödinger's cat paradox, and he was said to have angrily complained to his students that "now the damned Göttingen physicists use my beautiful wave mechanics for calculating their shitty matrix elements." Work on a unified field theory Following his work on quantum mechanics, Schrödinger devoted considerable effort to working on a unified field theory that would unite gravity, electromagnetism, and nuclear forces within the basic framework of general relativity, pursuing the work through an extended correspondence with Albert Einstein. In 1947, he announced a result, "Affine Field Theory", in a talk at the Royal Irish Academy, but the announcement was criticized by Einstein as "preliminary" and failed to lead to the desired unified theory. Following this failure, Schrödinger gave up his work on unification and turned to other topics. Additionally, Schrödinger reportedly never collaborated with a major physicist for the remainder of his career. Color Schrödinger had a strong interest in psychology, in particular color perception and colorimetry (German: Farbenmetrik). He spent quite a few years of his life working on these questions and published a series of papers in this area: "Theorie der Pigmente von größter Leuchtkraft", Annalen der Physik, (4), 62, (1920), 603–22 (Theory of Pigments with Highest Luminosity) "Grundlinien einer Theorie der Farbenmetrik im Tagessehen", Annalen der Physik, (4), 63, (1920), 397–456; 481–520 (Outline of a theory of colour measurement for daylight vision) "Farbenmetrik", Zeitschrift für Physik, 1, (1920), 459–66 (Colour measurement). "Über das Verhältnis der Vierfarben- zur Dreifarben-Theorie", Mathematisch-Naturwissenschaftliche Klasse, Akademie der Wissenschaften, Wien, 134, 471, (On The Relationship of Four-Color Theory to Three-Color Theory). "Lehre von der strahlenden Energie", Müller-Pouillets Lehrbuch der Physik und Meteorologie, Vol 2, Part 1 (1926) (Theory of Radiant Energy). His work on the psychology of color perception follows in the footsteps of Isaac Newton, James Clerk Maxwell and Hermann von Helmholtz in the same area. Some of these papers have been translated into English and can be found in: Sources of Colour Science, Ed. David L. MacAdam, MIT Press (1970) and in Erwin Schrödinger’s Color Theory, Translated with Modern Commentary, Ed. Keith K. Niall, Springer (2017). Interest in philosophy Schrödinger had a deep interest in philosophy, and was influenced by the works of Arthur Schopenhauer and Baruch Spinoza. In his 1956 lecture "Mind and Matter", he said that "The world extended in space and time is but our representation." This is a repetition of the first words of Schopenhauer's main work. Schopenhauer's works also introduced him to Indian philosophy, more specifically to the Upanishads and their Advaita Vedanta interpretation. He once took on a particular line of thought: "If the world is indeed created by our act of observation, there should be billions of such worlds, one for each of us. How come your world and my world are the same? If something happens in my world, does it happen in your world, too? What causes all these worlds to synchronize with each other?" His answer was that there is obviously only one alternative, namely the unification of minds or consciousnesses: their multiplicity is only apparent, and in truth there is only one mind. 
This, he maintained, is the doctrine of the Upanishads. Schrödinger discussed topics such as consciousness, the mind–body problem, sense perception, free will, and objective reality in his lectures and writings. Schrödinger's attitude towards the relations between Eastern and Western thought was one of prudence, expressing appreciation for Eastern philosophy while also admitting that some of the ideas did not fit with empirical approaches to natural philosophy. Some commentators have suggested that Schrödinger was so deeply immersed in a non-dualist Vedântic-like view that it may have served as a broad framework or subliminal inspiration for much of his work, including that in theoretical physics. Schrödinger expressed sympathy for the idea of Tat Tvam Asi, stating "you can throw yourself flat on the ground, stretched out upon Mother Earth, with the certain conviction that you are one with her and she with you." Schrödinger said that "Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else." Legacy The philosophical issues raised by Schrödinger's cat are still debated today and remain his most enduring legacy in popular science, while Schrödinger's equation is his most enduring legacy at a more technical level. Schrödinger is one of several individuals who have been called "the father of quantum mechanics". The large crater Schrödinger, on the far side of the Moon, is named after him. The Erwin Schrödinger International Institute for Mathematical Physics was founded in Vienna in 1992. Schrödinger's portrait was the main feature of the design of the 1983–97 Austrian 1000-schilling banknote, the second-highest denomination. A building is named after him at the University of Limerick, in Limerick, Ireland, as is the 'Erwin Schrödinger Zentrum' at Adlershof in Berlin and the Route Schrödinger at CERN, Prévessin, France. The 126th anniversary of Schrödinger's birth in 2013 was celebrated with a Google Doodle. Awards and honors 1920: Haitinger Prize of the Austrian Academy of Sciences 1927: Matteucci Medal of the Accademia Nazionale delle Scienze 1931: Honorary membership of the Royal Irish Academy 1933: Nobel Prize in Physics for the formulation of the Schrödinger equation – shared with Paul Dirac 1937: Max Planck Medal of the German Physical Society 1949: Foreign membership of the Royal Society 1956: Erwin Schrödinger Prize of the Austrian Academy of Sciences See also List of things named after Erwin Schrödinger. Published works Science and the human temperament, Allen & Unwin (1935), translated and introduced by James Murphy, with a foreword by Ernest Rutherford. Nature and the Greeks and Science and Humanism, Cambridge University Press (1996). The Interpretation of Quantum Mechanics, Ox Bow Press (1995). Statistical Thermodynamics, Dover Publications (1989). Collected papers, Friedr. Vieweg & Sohn (1984). My View of the World, Ox Bow Press (1983). Expanding Universes, Cambridge University Press (1956). Space-Time Structure, Cambridge University Press (1950). What Is Life?, Macmillan (1944). What Is Life? & Mind and Matter, Cambridge University Press (1974). See also the list of Erwin Schrödinger's publications, compiled by Auguste Dick, Gabriele Kerber, Wolfgang Kerber and Karl von Meyenn. 
References Sources External links Erwin Schrödinger and others on Austrian banknotes "biographie" (in German) or "Biography from the Austrian Central Library for Physics" (in English) Encyclopædia Britannica article on Erwin Schrödinger with his Nobel Lecture, 12 December 1933 The Fundamental Idea of Wave Mechanics Vallabhan, C. P. Girija, "Indian influences on Quantum Dynamics" [ed. Schrödinger's interest in Vedanta] Schrödinger Medal of the World Association of Theoretically Oriented Chemists (WATOC) The Discovery of New Productive Forms of Atomic Theory Nobel Banquet speech (in German) Annotated bibliography for Erwin Schrödinger from the Alsos Digital Library for Nuclear Issues Critical interdisciplinary review of Schrödinger's "What Is life?" Schrödinger in Oxford by Sir David C Clary , World Scientific, 2022 Academics of the Dublin Institute for Advanced Studies Austrian atheists Austrian emigrants to Ireland Austrian Nobel laureates Austrian people of English descent Nobel laureates from Austria-Hungary Austrian people of British descent Austro-Hungarian military personnel of World War I Corresponding Members of the USSR Academy of Sciences Emigrants from Austria after the Anschluss Fellows of Magdalen College, Oxford Fellows of the American Physical Society Foreign members of the Royal Society Honorary members of the USSR Academy of Sciences Academic staff of the Humboldt University of Berlin Irish physicists Mathematical physicists Members of the German Academy of Sciences at Berlin Members of the Pontifical Academy of Sciences Members of the Royal Irish Academy 20th-century mystics Nobel laureates in Physics Optical physicists People associated with the University of Zurich People from Clontarf, Dublin People from Landstraße Philosophers of science Quantum physicists Recipients of the Austrian Decoration for Science and Art Recipients of the Matteucci Medal Recipients of the Pour le Mérite (civil class) Theoretical physicists Thermodynamicists Tuberculosis deaths in Austria Academic staff of the University of Breslau Academic staff of the University of Graz Academic staff of the University of Stuttgart Academic staff of the University of Vienna Winners of the Max Planck Medal 1887 births 1961 deaths 20th-century atheists 20th-century Austrian mathematicians 20th-century Austrian physicists 20th-century deaths from tuberculosis
Erwin Schrödinger
[ "Physics", "Chemistry" ]
5,601
[ "Theoretical physics", "Quantum physicists", "Quantum mechanics", "Thermodynamics", "Thermodynamicists", "Theoretical physicists" ]
9,944
https://en.wikipedia.org/wiki/Episome
An episome is a special type of plasmid, which remains as a part of the eukaryotic genome without integration. Episomes manage this by replicating together with the rest of the genome and subsequently associating with metaphase chromosomes during mitosis. Episomes do not degrade, unlike standard plasmids, and can be designed so that they are not epigenetically silenced inside the eukaryotic cell nucleus. Episomes can be observed in nature in certain types of long-term infection by adeno-associated virus or Epstein-Barr virus. In 2004, it was proposed that non-viral episomes might be used in genetic therapy for long-term change in gene expression. As of 1999, there were many known sequences of DNA (deoxyribonucleic acid) that allow a standard plasmid to become episomally retained. One example is the S/MAR sequence. The length of episomal retention is fairly variable between different genetic constructs and there are many known features in the sequence of an episome which will affect the length and stability of genetic expression of the carried transgene. Among these features is the number of CpG sites which contribute to epigenetic silencing of the transgene carried by the episome. Mechanism of episomal retention The mechanism behind episomal retention in the case of S/MAR episomes is generally still uncertain. As of 1985, in the case of latent Epstein-Barr virus infection, episomes seemed to be associated with nuclear proteins of the host cell through a set of viral proteins. Episomes in prokaryotes Episomes in prokaryotes are special sequences which can divide either separate from or integrated into the prokaryotic chromosome. References Molecular biology
Episome
[ "Chemistry", "Biology" ]
381
[ "Biochemistry", "Molecular biology" ]
9,971
https://en.wikipedia.org/wiki/Eden%20Project
The Eden Project () is a visitor attraction in Cornwall, England. The project is located in a reclaimed china clay pit. The complex is dominated by two huge enclosures consisting of adjoining domes that house thousands of plant species, and each enclosure emulates a natural biome. The biomes consist of hundreds of hexagonal and pentagonal ethylene tetrafluoroethylene (ETFE) inflated cells supported by geodesic tubular steel domes. The larger of the two biomes simulates a rainforest environment (and is the largest indoor rainforest in the world) and the second, a Mediterranean environment. The attraction also has an outside botanical garden which is home to many plants and wildlife native to Cornwall and the UK in general; it also has many plants that provide an important and interesting backstory, for example, those with a prehistoric heritage. There are plans to build an Eden Project North in the seaside town of Morecambe, Lancashire, with a focus on the marine environment. History The clay pit in which the project is sited was in use for over 160 years. In 1981, the pit was used by the BBC as the planet surface of Magrathea in the TV series the Hitchhiker's Guide to the Galaxy. By the mid-1990s the pit was all but exhausted. The initial idea for the project dates back to 1996, with construction beginning in 1998. The work was hampered by torrential rain in the first few months of the project, and parts of the pit flooded as it sits below the water table. The first part of the Eden Project, the visitor centre, opened to the public in May 2000. The first plants began arriving in September of that year, and the full site opened on 17 March 2001. To counter criticism from environmental groups, the Eden Project committed to investigate a rail link to the site. The rail link was never built, and car parking on the site is still funded from revenue generated from general admission ticket sales. A bus service links the site to St Austell railway station, on the Cornish Main Line. The Eden Project was used as a filming location for the 2002 James Bond film Die Another Day. On 2 July 2005 The Eden Project hosted the "Africa Calling" concert of the Live 8 concert series. It has also provided some plants for the British Museum's Africa garden. In 2005, the Project launched "A Time of Gifts" for the winter months, November to February. This features an ice rink covering the lake, with a small café-bar attached, as well as a Christmas market. Cornish choirs regularly perform in the biomes. In 2007, the Eden Project campaigned unsuccessfully for £50 million in Big Lottery Fund money for a proposed desert biome. It received just 12.07% of the votes, the lowest for the four projects being considered. As part of the campaign, the Eden Project invited people all over Cornwall to try to break the world record for the biggest ever pub quiz as part of its campaign to bring £50 million of lottery funds to Cornwall. In December 2009, much of the project, including both greenhouses, became available to navigate through Google Street View. The Eden Trust revealed a trading loss of £1.3 million for 2012–13, on a turnover of £25.4 million. The Eden Project had posted a surplus of £136,000 for the previous year. In 2014 Eden accounts showed a surplus of £2 million. The World Pasty Championships, an international competition to find the best Cornish pasties and other pasty-type savoury snacks, have been held at the Eden Project since 2012. 
The Eden Project is said to have contributed over £1 billion to the Cornish economy. In 2016, Eden became home to Europe's second-largest redwood forest (after the Giants Grove at Birr Castle, Ireland) when forty saplings of coast redwoods, Sequoia sempervirens, which could live for 4,000 years and reach 115 metres in height, were planted there. The Eden Project received 1,010,095 visitors in 2019. In December 2020 the project was closed after heavy rain caused several landslips at the site. Managers at the site assessed the damage, with a reopening date to be announced on the company's website. Reopening became irrelevant as Covid lockdown measures in the UK indefinitely closed the venue from early 2021, though it had reopened by May 2021 after remedial works had taken place. The site was used for an event during the 2021 G7 Summit, hosted by the United Kingdom. Design and construction The project was conceived by Tim Smit and Jonathan Ball, and designed by Grimshaw Architects and structural engineering firm Anthony Hunt Associates (now part of Sinclair Knight Merz). Davis Langdon carried out the project management, Sir Robert McAlpine and Alfred McAlpine did the construction, MERO jointly designed and built the biome steel structures, the ETFE pillows that form the façade were realized by Vector Foiltec, and Arup was the services engineer, economic consultant, environmental engineer and transportation engineer. Land Use Consultants led the masterplan and landscape design. The project took 2½ years to construct and opened to the public on 17 March 2001. Site Layout Once inside the attraction, a meandering path offers views of the two biomes, planted landscapes, including vegetable gardens, and sculptures that include a giant bee and, previously, The WEEE Man (removed in 2016), a towering figure made from old electrical appliances that was meant to represent the average electrical waste used by one person in a lifetime. Biomes At the bottom of the pit are two covered biomes: The Rainforest Biome covers and measures high, wide, and long. It is used for tropical plants, such as fruiting banana plants, coffee, rubber, and giant bamboo, and is kept at a tropical temperature and moisture level. The Mediterranean Biome covers and measures high, wide, and long. It houses familiar warm temperate and arid plants such as olives and grape vines and various sculptures. The Outdoor Gardens represent the temperate regions of the world with plants such as tea, lavender, hops, hemp, and sunflowers, as well as local plant species. The covered biomes are constructed from a tubular steel space frame (hex-tri-hex) with mostly hexagonal external cladding panels made from the thermoplastic ETFE. Glass was avoided due to its weight and potential dangers. The cladding panels themselves are created from several layers of thin UV-transparent ETFE film, which are sealed around their perimeter and inflated to create a large cushion. The resulting cushion acts as a thermal blanket to the structure. The ETFE material is resistant to most stains, which simply wash off in the rain. If required, cleaning can be performed by abseilers. Although the ETFE is susceptible to punctures, these can be easily fixed with ETFE tape. The structure is completely self-supporting, with no internal supports, and takes the form of a geodesic structure. The panels vary in size up to across, with the largest at the top of the structure. 
The ETFE technology was supplied and installed by the firm Vector Foiltec, which is also responsible for ongoing maintenance of the cladding. The steel spaceframe and cladding package (with Vector Foiltec as ETFE subcontractor) was designed, supplied and installed by MERO (UK) PLC, who also jointly developed the overall scheme geometry with the architect, Nicholas Grimshaw & Partners. The entire build project was managed by McAlpine Joint Venture. The Core The Core is the latest addition to the site and opened in September 2005. It provides the Eden Project with an education facility, incorporating classrooms and exhibition spaces designed to help communicate Eden's central message about the relationship between people and plants. Accordingly, the building has taken its inspiration from plants, most noticeable in the form of the soaring timber roof, which gives the building its distinctive shape. Grimshaw developed the geometry of the copper-clad roof in collaboration with a sculptor, Peter Randall-Page, and Mike Purvis of structural engineers SKM Anthony Hunts. It is derived from phyllotaxis, which is the mathematical basis for nearly all plant growth; the "opposing spirals" found in many plants such as the seeds in a sunflower's head, pine cones, and pineapples. The copper was obtained from traceable sources, and the Eden Project is working with Rio Tinto to explore the possibility of encouraging further traceable supply routes for metals, which would enable users to avoid metals mined unethically. The services and acoustic, mechanical, and electrical engineering design was carried out by Buro Happold. Art at The Core The Core is also home to art exhibitions throughout the year. A permanent installation entitled Seed, by Peter Randall-Page, occupies the anteroom. Seed is a large, 70 tonne egg-shaped installation, carved from a single block of granite from De Lank Quarry on Bodmin Moor, standing some tall and displaying a complex pattern of protrusions that are based upon the geometric and mathematical principles that underlie plant growth. Environmental aspects The biomes provide diverse growing conditions, and many plants are on display. The Eden Project includes environmental education focusing on the interdependence of plants and people; plants are labelled with their medicinal uses. The massive amounts of water required to create the humid conditions of the Tropical Biome, and to serve the toilet facilities, are all sanitised rain water that would otherwise collect at the bottom of the quarry. The only mains water used is for hand washing and for cooking. The complex also uses Green Tariff Electricity – some of the energy comes from one of the many wind turbines in Cornwall, which were among the first in Europe. In December 2010 the Eden Project received permission to build a geothermal electricity plant which will generate approx 4MWe, enough to supply Eden and about 5000 households. The project will involve geothermal heating as well as geothermal electricity. Cornwall Council and the European Union came up with the greater part of £16.8m required to start the project. First a well will be sunk nearly 3 miles (4.5 km) into the granite crust underneath Eden. Eden co-founder, Sir Tim Smit said, "Since we began, Eden has had a dream that the world should be powered by renewable energy. 
The sun can provide massive solar power and the wind has been harnessed by humankind for thousands of years, but because both are intermittent and battery technology cannot yet store all we need there is a gap. We believe the answer lies beneath our feet in the heat underground that can be accessed by drilling technology that pumps water towards the centre of the Earth and brings it back up superheated to provide us with heat and electricity". Drilling began in May 2021, and heating of the biomes began in 2023, using 85 degrees Centigrade. Other projects Eden Project Morecambe In 2018, the Eden Project revealed its design for a new version of the project, located on the seafront in Morecambe, Lancashire. There will be biomes shaped like mussels and a focus on the marine environment. There will also be reimagined lidos, gardens, performance spaces, immersive experiences, and observatories. Grimshaw are the architects for the project, which is expected to cost £80 million. The project is a partnership with the Lancashire Enterprise Partnership, Lancaster University, Lancashire County Council, and Lancaster City Council. In December 2018, the four local partners agreed to provide £1 million to develop the idea, which allowed the development of an outline planning application for the project. It is expected that there will be 500 jobs created and 8,000 visitors a day to the site. Having been granted planning permission in January 2022 and with £50 million of levelling-up funding granted in January 2023, it is due to open in 2026 and predicted to benefit the North West economy by £200 million per year. In July 2024, Lancaster City Council received the first £2.5m of a promised £50m in UK government funding for the scheme. The grant would be used to appoint a main contractor to develop the designs for Eden Project Morecambe. Eden Project Dundee In May 2020, the Eden Project revealed plans to establish their first attraction in Scotland, and named Dundee as the proposed site of the location. The city's Camperdown Park was widely touted to be the proposed location of the new attraction however in May 2021, it was announced that the Eden Project had chosen the site of the former gasworks in Dundee as the location. It was planned that the new development would result in 200 new jobs and "contribute £27m a year to the regional economy". The project is in partnership with Dundee City Council, the University of Dundee and the Northwood Charitable Trust. In 2021, Eden Project announced that they would establish fourteen hectares of new wildflower habitat in areas across Dundee, including Morgan Academy and Caird Park. In July 2023, new images were released depicting what the Dundee attraction would look which accompanied the planning permission documents for the new attraction which would be submitted by autumn 2023. Planning permission for the project was approved by Dundee City Council in June 2024. South Downs In 2020, Eastbourne Borough Council and the Eden Project announced a joint project to explore the viability of a new Eden site in the South Downs National Park. Qingdao, China In 2015, the Eden Project announced that it had reached an agreement to construct an Eden site in Qingdao, China. While the site had originally been slated to open by 2020, construction fell behind schedule due to the COVID-19 pandemic and the opening date was delayed to 2023. The new site is expected to focus on "water" and its central role in civilization and nature. 
Eden Project New Zealand A planned Eden Project for the New Zealand city of Christchurch, to be called Eden Project New Zealand/Eden Project Aotearoa, was expected to be inaugurated in 2025. It was to be centred close to the Avon River, on a site largely razed as a result of the 2011 Christchurch Earthquake. The project has since been cancelled. Eden Sessions Since 2002, the Project has hosted a series of musical performances, called the Eden Sessions, usually held during the summer. The 2020 sessions were postponed due to the COVID-19 pandemic and were rescheduled as the 2022 sessions lineup. The 2024 sessions were headlined by Fatboy Slim, Suede, Manic Street Preachers, The National, JLS, Crowded House, Rick Astley, Tom Grennan and Paolo Nutini. Lineup history In the media The Eden Project has appeared in various television shows and films such as the James Bond film Die Another Day, The Bad Education Movie, in the Netflix series The Last Bus, in the CBeebies show Andy's Aquatic Adventure and in Armenia’s postcard in the Eurovision Song Contest 2023. A weekly radio show called The Eden Radio Project is held every Thursday afternoon on CHAOS Radio, formerly known as Radio St Austell Bay. On 18 November 2019, on the Trees A Crowd podcast, David Oakes interviewed Eden Project's Head of Interpretation, Dr Jo Elworthy, about the site. See also BIOS-3 Biosphere 2 Closed ecological system IBTS Greenhouse Montreal Biodome Montreal Biosphère Mitchell Park Horticultural Conservatory ("The Domes" of Milwaukee) Ecosystem Vivarium The Lost Gardens of Heligan List of topics related to Cornwall Earthpark Sir Richard Carew Pole Thin-shell structure List of thin shell structures References Further reading Philip McMillan Browse, Louise Frost, Alistair Griffiths: Plants of Eden (Eden Project). Penzance 2001: Alison Hodge. Richard Mabey: Fencing Paradise: Exploring the Gardens of Eden London 2005: Eden Project Books. Hugh Pearman, Andrew Whalley: The Architecture of Eden. With a foreword by Sir Nicholas Grimshaw. London 2003: Eden Project Books. Eden Team (Ed.): Eden Project: The Guide 2008/9. London 2008: Eden Project Books. Tim Smit: Eden. London 2001: Bantam Press. Paul Spooner: The Revenge of the Green Planet: The Eden Project Book of Amazing Facts About Plants. London 2003: Eden Project Books. Alan Titchmarsh: The Eden Project. United Kingdom: Acorn Media, 2006. . External links Eden Sessions Website—Official site for live gigs Buildings and structures completed in 2000 Nicholas Grimshaw buildings High-tech architecture Botanical gardens in Cornwall Building engineering Buildings and structures in Cornwall Ecological experiments Environmental design Geodesic domes Greenhouses in the United Kingdom Tourist attractions in Cornwall 2000 establishments in England St Blazey
Eden Project
[ "Engineering" ]
3,418
[ "Environmental design", "Building engineering", "Civil engineering", "Design", "Architecture" ]
9,975
https://en.wikipedia.org/wiki/Linear%20filter
Linear filters process time-varying input signals to produce output signals, subject to the constraint of linearity. In most cases these linear filters are also time invariant (or shift invariant), in which case they can be analyzed exactly using LTI ("linear time-invariant") system theory, revealing their transfer functions in the frequency domain and their impulse responses in the time domain. Real-time implementations of such linear signal processing filters in the time domain are inevitably causal, an additional constraint on their transfer functions. An analog electronic circuit consisting only of linear components (resistors, capacitors, inductors, and linear amplifiers) will necessarily fall in this category, as will comparable mechanical systems or digital signal processing systems containing only linear elements. Since linear time-invariant filters can be completely characterized by their response to sinusoids of different frequencies (their frequency response), they are sometimes known as frequency filters. Non real-time implementations of linear time-invariant filters need not be causal. Filters of more than one dimension are also used, such as in image processing. The general concept of linear filtering also extends into other fields and technologies such as statistics, data analysis, and mechanical engineering. Impulse response and transfer function A linear time-invariant (LTI) filter can be uniquely specified by its impulse response h, and the output of any filter is mathematically expressed as the convolution of the input with that impulse response. The frequency response, given by the filter's transfer function H(ω), is an alternative characterization of the filter. Typical filter design goals are to realize a particular frequency response, that is, the magnitude of the transfer function |H(ω)|; the importance of the phase of the transfer function varies according to the application, inasmuch as the shape of a waveform can be distorted to a greater or lesser extent in the process of achieving a desired (amplitude) response in the frequency domain. The frequency response may be tailored to, for instance, eliminate unwanted frequency components from an input signal, or to limit an amplifier to signals within a particular band of frequencies. The impulse response h of a linear time-invariant causal filter specifies the output that the filter would produce if it were to receive an input consisting of a single impulse at time 0. An "impulse" in a continuous time filter means a Dirac delta function; in a discrete time filter the Kronecker delta function would apply. The impulse response completely characterizes the response of any such filter, inasmuch as any possible input signal can be expressed as a (possibly infinite) combination of weighted delta functions. Multiplying the impulse response shifted in time according to the arrival of each of these delta functions by the amplitude of each delta function, and summing these responses together (according to the superposition principle, applicable to all linear systems) yields the output waveform. Mathematically this is described as the convolution of a time-varying input signal x(t) with the filter's impulse response h, defined as: y(t) = ∫ h(τ) x(t − τ) dτ, with the integral taken over the duration T of the impulse response, or y_k = Σ h_i x_(k−i), with the sum taken over i = 0 … N. The first form is the continuous-time form, which describes mechanical and analog electronic systems, for instance. The second equation is a discrete-time version used, for example, by digital filters implemented in software, so-called digital signal processing. 
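As an illustration of the discrete-time form just given, the following minimal sketch (in Python; the three smoothing taps and the short test signal are purely illustrative, not taken from any particular reference design) filters a signal by direct evaluation of the convolution sum.

def fir_filter(x, h):
    """Apply an FIR filter by direct convolution: y[k] = sum_i h[i] * x[k - i]."""
    y = []
    for k in range(len(x)):
        acc = 0.0
        for i, h_i in enumerate(h):
            if 0 <= k - i < len(x):   # samples before the start of x are treated as zero
                acc += h_i * x[k - i]
        y.append(acc)
    return y

h = [0.25, 0.5, 0.25]        # illustrative low-pass-like smoothing taps
x = [0, 0, 1, 0, 0, 0]       # a unit impulse embedded in a short test signal
print(fir_filter(x, h))      # the impulse response reappears, delayed, in the output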
The impulse response h completely characterizes any linear time-invariant (or shift-invariant in the discrete-time case) filter. The input x is said to be "convolved" with the impulse response h having a (possibly infinite) duration of time T (or of N sampling periods). Filter design consists of finding a possible transfer function that can be implemented within certain practical constraints dictated by the technology or desired complexity of the system, followed by a practical design that realizes that transfer function using the chosen technology. The complexity of a filter may be specified according to the order of the filter. Among the time-domain filters we here consider, there are two general classes of filter transfer functions that can approximate a desired frequency response. Very different mathematical treatments apply to the design of filters termed infinite impulse response (IIR) filters, characteristic of mechanical and analog electronics systems, and finite impulse response (FIR) filters, which can be implemented by discrete time systems such as computers (then termed digital signal processing). Implementation issues Classical analog filters are IIR filters, and classical filter theory centers on the determination of transfer functions given by low order rational functions, which can be synthesized using the same small number of reactive components. Using digital computers, on the other hand, both FIR and IIR filters are straightforward to implement in software. A digital IIR filter can generally approximate a desired filter response using less computing power than a FIR filter, however this advantage is more often unneeded given the increasing power of digital processors. The ease of designing and characterizing FIR filters makes them preferable to the filter designer (programmer) when ample computing power is available. Another advantage of FIR filters is that their impulse response can be made symmetric, which implies a response in the frequency domain that has zero phase at all frequencies (not considering a finite delay), which is absolutely impossible with any IIR filter. Frequency response The frequency response or transfer function of a filter can be obtained if the impulse response is known, or directly through analysis using Laplace transforms, or in discrete-time systems the Z-transform. The frequency response also includes the phase as a function of frequency, however in many cases the phase response is of little or no interest. FIR filters can be made to have zero phase, but with IIR filters that is generally impossible. With most IIR transfer functions there are related transfer functions having a frequency response with the same magnitude but a different phase; in most cases the so-called minimum phase transfer function is preferred. Filters in the time domain are most often requested to follow a specified frequency response. Then, a mathematical procedure finds a filter transfer function that can be realized (within some constraints), and approximates the desired response to within some criterion. Common filter response specifications are described as follows: A low-pass filter passes low frequencies while blocking higher frequencies. A high-pass filter passes high frequencies. A band-pass filter passes a band (range) of frequencies. A band-stop filter passes high and low frequencies outside of a specified band. A notch filter has a null response at a particular frequency. This function may be combined with one of the above responses. 
An all-pass filter passes all frequencies equally well, but alters the group delay and phase relationship among them. An equalization filter is not designed to fully pass or block any frequency, but instead to gradually vary the amplitude response as a function of frequency: filters used as pre-emphasis filters, equalizers, or tone controls are good examples. FIR transfer functions Meeting a frequency response requirement with an FIR filter uses relatively straightforward procedures. In the most basic form, the desired frequency response itself can be sampled with a resolution of Δf and Fourier transformed to the time domain. This obtains the filter coefficients h_i, which implement a zero phase FIR filter that matches the frequency response at the sampled frequencies used (a short sketch of this procedure is given below, after the discussion of IIR transfer functions). To better match a desired response, Δf must be reduced. However, the duration of the filter's impulse response is then 1/Δf, and the number of terms that must be summed for each output value (according to the above discrete time convolution) is given by N = 1/(Δf·T), where T is the sampling period of the discrete time system (N − 1 is also termed the order of an FIR filter). Thus the complexity of a digital filter and the computing time involved grow inversely with Δf, placing a higher cost on filter functions that better approximate the desired behavior. For the same reason, filter functions whose critical response is at lower frequencies (compared to the sampling frequency 1/T) require a higher order, more computationally intensive FIR filter. An IIR filter can thus be much more efficient in such cases. Elsewhere the reader may find further discussion of design methods for practical FIR filter design. IIR transfer functions Since classical analog filters are IIR filters, there has been a long history of studying the range of possible transfer functions implementing various of the above desired filter responses in continuous time systems. Using transforms it is possible to convert these continuous time frequency responses to ones that are implemented in discrete time, for use in digital IIR filters. The complexity of any such filter is given by the order N, which describes the order of the rational function describing the frequency response. The order N is of particular importance in analog filters, because an Nth order electronic filter requires N reactive elements (capacitors and/or inductors) to implement. If a filter is implemented using, for instance, biquad stages using op-amps, N/2 stages are needed. In a digital implementation, the number of computations performed per sample is proportional to N. Thus the mathematical problem is to obtain the best approximation (in some sense) to the desired response using a smaller N, as we shall now illustrate. Below are the frequency responses of several standard filter functions that approximate a desired response, optimized according to some criterion. These are all fifth-order low-pass filters, designed for a cutoff frequency of .5 in normalized units. Frequency responses are shown for the Butterworth, Chebyshev, inverse Chebyshev, and elliptic filters. In this comparison, the elliptic filter is sharper than the others, but at the expense of ripples in both its passband and stopband. The Butterworth filter has the poorest transition but has a more even response, avoiding ripples in either the passband or stopband. A Bessel filter (not shown) has an even poorer transition in the frequency domain, but maintains the best phase fidelity of a waveform. 
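The following is a minimal sketch of the frequency-sampling FIR procedure referred to above (Python with NumPy; the number of taps, the target low-pass response, and the Hamming window are illustrative choices rather than anything prescribed by the text).

import numpy as np

N = 33                                    # number of taps (filter order N - 1)
k = np.arange(N)
# Desired magnitude response sampled at N equally spaced frequencies between 0 and the
# sampling rate: pass frequencies below about a quarter of the sampling rate, block the rest.
# The samples are chosen symmetric (H[k] == H[N-k]) so that the inverse DFT is real.
H_desired = np.where((k < N // 4) | (k > N - N // 4), 1.0, 0.0)

h = np.real(np.fft.ifft(H_desired))       # zero-phase impulse response matching the samples
h = np.roll(h, N // 2)                    # centre it, turning zero phase into a pure delay
h *= np.hamming(N)                        # window to reduce ripple between the sample points

print(np.round(h, 4))                     # coefficients usable in the convolution shown earlier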
Different applications emphasize different design requirements, leading to different choices among these (and other) optimizations, or requiring a filter of a higher order. Example implementations A popular circuit implementing a second order active R-C filter is the Sallen-Key design, whose schematic diagram is shown here. This topology can be adapted to produce low-pass, band-pass, and high pass filters. An Nth order FIR filter can be implemented in a discrete time system using a computer program or specialized hardware in which the input signal is subject to N delay stages. The output of the filter is formed as the weighted sum of those delayed signals, as is depicted in the accompanying signal flow diagram. The response of the filter depends on the weighting coefficients denoted b0, b1, .... bN. For instance, if all of the coefficients were equal to unity, a so-called boxcar function, then it would implement a low-pass filter with a low frequency gain of N+1 and a frequency response given by the sinc function. Superior shapes for the frequency response can be obtained using coefficients derived from a more sophisticated design procedure. Mathematics of filter design LTI system theory describes linear time-invariant (LTI) filters of all types. LTI filters can be completely described by their frequency response and phase response, the specification of which uniquely defines their impulse response, and vice versa. From a mathematical viewpoint, continuous-time IIR LTI filters may be described in terms of linear differential equations, and their impulse responses considered as Green's functions of the equation. Continuous-time LTI filters may also be described in terms of the Laplace transform of their impulse response, which allows all of the characteristics of the filter to be analyzed by considering the pattern of zeros and poles of their Laplace transform in the complex plane. Similarly, discrete-time LTI filters may be analyzed via the Z-transform of their impulse response. Before the advent of computer filter synthesis tools, graphical tools such as Bode plots and Nyquist plots were extensively used as design tools. Even today, they are invaluable tools to understanding filter behavior. Reference books had extensive plots of frequency response, phase response, group delay, and impulse response for various types of filters, of various orders. They also contained tables of values showing how to implement such filters as RLC ladders - very useful when amplifying elements were expensive compared to passive components. Such a ladder can also be designed to have minimal sensitivity to component variation a property hard to evaluate without computer tools. Many different analog filter designs have been developed, each trying to optimise some feature of the system response. For practical filters, a custom design is sometimes desirable, that can offer the best tradeoff between different design criteria, which may include component count and cost, as well as filter response characteristics. These descriptions refer to the mathematical properties of the filter (that is, the frequency and phase response). These can be implemented as analog circuits (for instance, using a Sallen Key filter topology, a type of active filter), or as algorithms in digital signal processing systems. Digital filters are much more flexible to synthesize and use than analog filters, where the constraints of the design permits their use. 
Notably, there is no need to consider component tolerances, and very high Q levels may be obtained. FIR digital filters may be implemented by the direct convolution of the desired impulse response with the input signal. They can easily be designed to give a matched filter for any arbitrary pulse shape. IIR digital filters are often more difficult to design, due to problems including dynamic range issues, quantization noise and instability. Typically digital IIR filters are designed as a series of digital biquad filters. All low-pass second-order continuous-time filters have a transfer function given by H(s) = K·ω₀² / (s² + (ω₀/Q)·s + ω₀²). All band-pass second-order continuous-time filters have a transfer function given by H(s) = K·(ω₀/Q)·s / (s² + (ω₀/Q)·s + ω₀²), where K is the gain (low-pass DC gain, or band-pass mid-band gain) (K is 1 for passive filters), Q is the Q factor, ω₀ is the center frequency, and s is the complex frequency. See also Filter design Laplace transform Green's function Prototype filter Z-transform System theory LTI system theory Nonlinear filter Wiener filter Gabor filter Leapfrog filter Notes and references Further reading National Semiconductor AN-779 application note describing analog filter theory Lattice AN6017 application note comparing and contrasting filters (in order of damping coefficient, from lower to higher values): Gaussian, Bessel, linear phase, Butterworth, Chebyshev, Legendre, elliptic. (with graphs). USING THE ANALOG DEVICES ACTIVE FILTER DESIGN TOOL: a similar application note from Analog Devices with extensive graphs, active RC filter topologies, and tables for practical design. "Design and Analysis of Analog Filters: A Signal Processing Perspective" by L. D. Paarmann Filter theory
Linear filter
[ "Engineering" ]
2,972
[ "Telecommunications engineering", "Filter theory" ]
9,994
https://en.wikipedia.org/wiki/Ephemeris%20time
The term ephemeris time (often abbreviated ET) can in principle refer to time in association with any ephemeris (itinerary of the trajectory of an astronomical object). In practice it has been used more specifically to refer to: a former standard astronomical time scale adopted in 1952 by the IAU, and superseded during the 1970s. This time scale was proposed in 1948, to overcome the disadvantages of irregularly fluctuating mean solar time. The intent was to define a uniform time (as far as was then feasible) based on Newtonian theory (see below: Definition of ephemeris time (1952)). Ephemeris time was a first application of the concept of a dynamical time scale, in which the time and time scale are defined implicitly, inferred from the observed position of an astronomical object via the dynamical theory of its motion. a modern relativistic coordinate time scale, implemented by the JPL ephemeris time argument Teph, in a series of numerically integrated Development Ephemerides. Among them is the DE405 ephemeris in widespread current use. The time scale represented by Teph is closely related to, but distinct (by an offset and constant rate) from, the TCB time scale currently adopted as a standard by the IAU (see below: JPL ephemeris time argument Teph). Most of the following sections relate to the ephemeris time of the 1952 standard. An impression has sometimes arisen that ephemeris time was in use from 1900: this probably arose because ET, though proposed and adopted in the period 1948–1952, was defined in detail using formulae that made retrospective use of the epoch date of 1900 January 0 and of Newcomb's Tables of the Sun. The ephemeris time of the 1952 standard leaves a continuing legacy, through its historical unit ephemeris second which became closely duplicated in the length of the current standard SI second (see below: Redefinition of the second). History (1952 standard) Ephemeris time (ET), adopted as standard in 1952, was originally designed as an approach to a uniform time scale, to be freed from the effects of irregularity in the rotation of the Earth, "for the convenience of astronomers and other scientists", for example for use in ephemerides of the Sun (as observed from the Earth), the Moon, and the planets. It was proposed in 1948 by G M Clemence. From the time of John Flamsteed (1646–1719) it had been believed that the Earth's daily rotation was uniform. But in the later nineteenth and early twentieth centuries, with increasing precision of astronomical measurements, it began to be suspected, and was eventually established, that the rotation of the Earth (i.e. the length of the day) showed irregularities on short time scales, and was slowing down on longer time scales. The evidence was compiled by W de Sitter (1927) who wrote "If we accept this hypothesis, then the 'astronomical time', given by the Earth's rotation, and used in all practical astronomical computations, differs from the 'uniform' or 'Newtonian' time, which is defined as the independent variable of the equations of celestial mechanics". De Sitter offered a correction to be applied to the mean solar time given by the Earth's rotation to get uniform time. Other astronomers of the period also made suggestions for obtaining uniform time, including A Danjon (1929), who suggested in effect that observed positions of the Moon, Sun and planets, when compared with their well-established gravitational ephemerides, could better and more uniformly define and determine time. 
Thus the aim developed, to provide a new time scale for astronomical and scientific purposes, to avoid the unpredictable irregularities of the mean solar time scale, and to replace for these purposes Universal Time (UT) and any other time scale based on the rotation of the Earth around its axis, such as sidereal time. The American astronomer G M Clemence (1948) made a detailed proposal of this type based on the results of the English Astronomer Royal H Spencer Jones (1939). Clemence (1948) made it clear that his proposal was intended "for the convenience of astronomers and other scientists only" and that it was "logical to continue the use of mean solar time for civil purposes". De Sitter and Clemence both referred to the proposal as 'Newtonian' or 'uniform' time. D Brouwer suggested the name 'ephemeris time'. Following this, an astronomical conference held in Paris in 1950 recommended "that in all cases where the mean solar second is unsatisfactory as a unit of time by reason of its variability, the unit adopted should be the sidereal year at 1900.0, that the time reckoned in this unit be designated ephemeris time", and gave Clemence's formula (see Definition of ephemeris time (1952)) for translating mean solar time to ephemeris time. The International Astronomical Union approved this recommendation at its 1952 general assembly. Practical introduction took some time (see Use of ephemeris time in official almanacs and ephemerides); ephemeris time (ET) remained a standard until superseded in the 1970s by further time scales (see Revision). During the currency of ephemeris time as a standard, the details were revised a little. The unit was redefined in terms of the tropical year at 1900.0 instead of the sidereal year; and the standard second was defined first as 1/31556925.975 of the tropical year at 1900.0, and then as the slightly modified fraction 1/31556925.9747 instead, finally being redefined in 1967/8 in terms of the cesium atomic clock standard (see below). Although ET is no longer directly in use, it leaves a continuing legacy. Its successor time scales, such as TDT, as well as the atomic time scale IAT (TAI), were designed with a relationship that "provides continuity with ephemeris time". ET was used for the calibration of atomic clocks in the 1950s. Close equality between the ET second with the later SI second (as defined with reference to the cesium atomic clock) has been verified to within 1 part in 1010. In this way, decisions made by the original designers of ephemeris time influenced the length of today's standard SI second, and in turn, this has a continuing influence on the number of leap seconds which have been needed for insertion into current broadcast time scales, to keep them approximately in step with mean solar time. Definition (1952) Ephemeris time was defined in principle by the orbital motion of the Earth around the Sun (but its practical implementation was usually achieved in another way, see below). Its detailed definition was based on Simon Newcomb's Tables of the Sun (1895), implemented in a new way to accommodate certain observed discrepancies: In the introduction to Tables of the Sun, the basis of the tables (p. 9) includes a formula for the Sun's mean longitude at a time, indicated by interval T (in units of Julian centuries of 36525 mean solar days), reckoned from Greenwich Mean Noon on 0 January 1900: Ls = 279° 41' 48".04 + 129,602,768".13T +1".089T2 . . . . . 
(1) Spencer Jones' work of 1939 showed that differences between the observed positions of the Sun and the predicted positions given by Newcomb's formula demonstrated the need for the following correction to the formula: ΔLs = +1".00 + 2".97T + 1".23T² + 0.0748B, where "the times of observation are in Universal time, not corrected to Newtonian time," and 0.0748B represents an irregular fluctuation calculated from lunar observations. Thus, a conventionally corrected form of Newcomb's formula, incorporating the corrections on the basis of mean solar time, would be the sum of the two preceding expressions: Ls = 279° 41' 49".04 + 129,602,771".10T + 2".32T² + 0.0748B . . . . . (2) Clemence's 1948 proposal, however, did not adopt such a correction of mean solar time. Instead, the same numbers were used as in Newcomb's original uncorrected formula (1), but now applied somewhat prescriptively, to define a new time and time scale implicitly, based on the real position of the Sun: Ls = 279° 41' 48".04 + 129,602,768".13E + 1".089E² . . . . . (3) With this reapplication, the time variable, now given as E, represents time in ephemeris centuries of 36525 ephemeris days of 86400 ephemeris seconds each. The 1961 official reference summarized the concept as such: "The origin and rate of ephemeris time are defined to make the Sun's mean longitude agree with Newcomb's expression". From the comparison of formulae (2) and (3), both of which express the same real solar motion in the same real time but defined on separate time scales, Clemence arrived at an explicit expression, estimating the difference in seconds of time between ephemeris time and mean solar time, in the sense (ET − UT): ΔT = ET − UT = +24.349 + 72.318T + 29.950T² + 1.821B (in seconds of time) . . . . . (4) with the 24.349 seconds of time corresponding to the 1.00" in ΔLs (a short numerical check of this conversion is given below). Clemence's formula (today superseded by more modern estimations) was included in the original conference decision on ephemeris time. In view of the fluctuation term, practical determination of the difference between ephemeris time and UT depended on observation. Inspection of the formulae above shows that the (ideally constant) units of ephemeris time have been, for the whole of the twentieth century, very slightly shorter than the corresponding (but not precisely constant) units of mean solar time (which, besides their irregular fluctuations, tend to lengthen gradually). This finding is consistent with the modern results of Morrison and Stephenson (see article ΔT). Implementations Secondary realizations by lunar observations Although ephemeris time was defined in principle by the orbital motion of the Earth around the Sun, it was usually measured in practice by the orbital motion of the Moon around the Earth. These measurements can be considered as secondary realizations (in a metrological sense) of the primary definition of ET in terms of the solar motion, after a calibration of the mean motion of the Moon with respect to the mean motion of the Sun. Reasons for the use of lunar measurements were practically based: the Moon moves against the background of stars about 13 times as fast as the Sun's corresponding rate of motion, and the accuracy of time determinations from lunar measurements is correspondingly greater. When ephemeris time was first adopted, time scales were still based on astronomical observation, as they always had been. The accuracy was limited by the accuracy of optical observation, and corrections of clocks and time signals were published in arrear. 
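As noted above, the conversion behind formula (4) can be checked with a short illustrative calculation (a Python sketch; the input numbers are those quoted in the text, and the fluctuation term B is scaled in the same way as the polynomial terms).

# Illustrative check of the conversion behind Clemence's formula (4): one arcsecond of
# the correction ΔLs corresponds to the time the Sun's mean longitude takes to move 1".
century_seconds = 36525 * 86400           # one Julian century, in seconds
newcomb_rate = 129602768.13               # Newcomb's mean-longitude motion per century, in arcseconds
sec_per_arcsec = century_seconds / newcomb_rate
print(round(sec_per_arcsec, 3))           # about 24.349 s, the figure quoted in the text

# Scaling Spencer Jones's correction terms (1".00, 2".97, 1".23, 0.0748) by this factor
# gives the coefficients of formula (4): roughly 24.349, 72.318, 29.95 and 1.821.
for coeff in (1.00, 2.97, 1.23, 0.0748):
    print(round(coeff * sec_per_arcsec, 3))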
Secondary realizations by atomic clocks A few years later, with the invention of the cesium atomic clock, an alternative offered itself. Increasingly, after the calibration in 1958 of the cesium atomic clock by reference to ephemeris time, cesium atomic clocks running on the basis of ephemeris seconds began to be used and kept in step with ephemeris time. The atomic clocks offered a further secondary realization of ET, on a quasi-real time basis that soon proved to be more useful than the primary ET standard: not only more convenient, but also more precisely uniform than the primary standard itself. Such secondary realizations were used and described as 'ET', with an awareness that the time scales based on the atomic clocks were not identical to that defined by the primary ephemeris time standard, but rather, an improvement over it on account of their closer approximation to uniformity. The atomic clocks gave rise to the atomic time scale, and to what was first called Terrestrial Dynamical Time and is now Terrestrial Time, defined to provide continuity with ET. The availability of atomic clocks, together with the increasing accuracy of astronomical observations (which meant that relativistic corrections were at least in the foreseeable future no longer going to be small enough to be neglected), led to the eventual replacement of the ephemeris time standard by more refined time scales including terrestrial time and barycentric dynamical time, to which ET can be seen as an approximation. Revision of time scales In 1976, the IAU resolved that the theoretical basis for its then-current (since 1952) standard of Ephemeris Time was non-relativistic, and that therefore, beginning in 1984, Ephemeris Time would be replaced by two relativistic timescales intended to constitute dynamical timescales: Terrestrial Dynamical Time (TDT) and Barycentric Dynamical Time (TDB). Difficulties were recognized, which led to these, in turn, being superseded in the 1990s by time scales Terrestrial Time (TT), Geocentric Coordinate Time GCT (TCG) and Barycentric Coordinate Time BCT (TCB). JPL ephemeris time argument Teph High-precision ephemerides of sun, moon and planets were developed and calculated at the Jet Propulsion Laboratory (JPL) over a long period, and the latest available were adopted for the ephemerides in the Astronomical Almanac starting in 1984. Although not an IAU standard, the ephemeris time argument Teph has been in use at that institution since the 1960s. The time scale represented by Teph has been characterized as a relativistic coordinate time that differs from Terrestrial Time only by small periodic terms with an amplitude not exceeding 2 milliseconds of time: it is linearly related to, but distinct (by an offset and constant rate which is of the order of 0.5 s/a) from the TCB time scale adopted in 1991 as a standard by the IAU. Thus for clocks on or near the geoid, Teph (within 2 milliseconds), but not so closely TCB, can be used as approximations to Terrestrial Time, and via the standard ephemerides Teph is in widespread use. Partly in acknowledgement of the widespread use of Teph via the JPL ephemerides, IAU resolution 3 of 2006 (re-)defined Barycentric Dynamical Time (TDB) as a current standard. As re-defined in 2006, TDB is a linear transformation of TCB. The same IAU resolution also stated (in note 4) that the "independent time argument of the JPL ephemeris DE405, which is called Teph" (here the IAU source cites), "is for practical purposes the same as TDB defined in this Resolution". 
Thus the new TDB, like Teph, is essentially a more refined continuation of the older ephemeris time ET and (apart from the periodic fluctuations) has the same mean rate as that established for ET in the 1950s. Use in official almanacs and ephemerides Ephemeris time based on the standard adopted in 1952 was introduced into the Astronomical Ephemeris (UK) and the American Ephemeris and Nautical Almanac, replacing UT in the main ephemerides in the issues for 1960 and after. (But the ephemerides in the Nautical Almanac, by then a separate publication for the use of navigators, continued to be expressed in terms of UT.) The ephemerides continued on this basis through 1983 (with some changes due to adoption of improved values of astronomical constants), after which, for 1984 onwards, they adopted the JPL ephemerides. Previous to the 1960 change, the 'Improved Lunar Ephemeris' had already been made available in terms of ephemeris time for the years 1952—1959 (computed by W J Eckert from Brown's theory with modifications recommended by Clemence (1948)). Redefinition of the second Successive definitions of the unit of ephemeris time are mentioned above (History). The value adopted for the 1956/1960 standard second: the fraction 1/31 556 925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time. was obtained from the linear time-coefficient in Newcomb's expression for the solar mean longitude (above), taken and applied with the same meaning for the time as in formula (3) above. The relation with Newcomb's coefficient can be seen from: 1/31 556 925.9747 = 129 602 768.13 / (360×60×60×36 525×86 400). Caesium atomic clocks became operational in 1955, and quickly confirmed the evidence that the rotation of the Earth fluctuated irregularly. This confirmed the unsuitability of the mean solar second of Universal Time as a measure of time interval for the most precise purposes. After three years of comparisons with lunar observations, Markowitz et al. (1958) determined that the ephemeris second corresponded to 9 192 631 770 ± 20 cycles of the chosen cesium resonance. Following this, in 1967/68, the General Conference on Weights and Measures (CGPM) replaced the definition of the SI second by the following: The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom. Although this is an independent definition that does not refer to the older basis of ephemeris time, it uses the same quantity as the value of the ephemeris second measured by the cesium clock in 1958. This SI second referred to atomic time was later verified by Markowitz (1988) to be in agreement, within 1 part in 1010, with the second of ephemeris time as determined from lunar observations. For practical purposes the length of the ephemeris second can be taken as equal to the length of the second of Barycentric Dynamical Time (TDB) or Terrestrial Time (TT) or its predecessor TDT. The difference between ET and UT is called ΔT; it changes irregularly, but the long-term trend is parabolic, decreasing from ancient times until the nineteenth century, and increasing since then at a rate corresponding to an increase in the solar day length of 1.7 ms per century (see leap seconds). International Atomic Time (TAI) was set equal to UT2 at 1 January 1958 0:00:00. At that time, ΔT was already about 32.18 seconds. 
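The relation quoted above between the adopted fraction of the tropical year and Newcomb's linear coefficient can be checked numerically. A minimal Python sketch (illustrative only; it uses only figures quoted in this article):

```python
# Numerical check of the quoted relation
#   1 / 31 556 925.9747 = 129 602 768.13 / (360*60*60 * 36 525 * 86 400)

ARCSEC_PER_CIRCLE = 360 * 60 * 60      # 1,296,000 arcseconds in a full circle
CENTURY_SECONDS = 36_525 * 86_400      # ephemeris seconds in one century
MEAN_MOTION = 129_602_768.13           # Newcomb: arcsec of mean longitude per century

# Ephemeris seconds needed for the mean Sun to complete one full circle,
# i.e. the tropical year for 1900 January 0:
tropical_year = ARCSEC_PER_CIRCLE * CENTURY_SECONDS / MEAN_MOTION
print(f"{tropical_year:.4f}")          # -> 31556925.9747 (ephemeris seconds)
```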
The difference between Terrestrial Time (TT) (the successor to ephemeris time) and atomic time was later defined as follows: 1977 January 1.000 3725 TT = 1977 January 1.000 0000 TAI, i.e. TT − TAI = 32.184 seconds This difference may be assumed constant—the rates of TT and TAI are designed to be identical. Notes and references Bibliography G M Clemence, "On the System of Astronomical Constants", Astronomical Journal, vol. 53(6) (1948), issue #1170, pp. 169–179. G M Clemence (1971), "The Concept of Ephemeris Time", Journal for the History of Astronomy, vol. 2 (1971), pp. 73–79. B Guinot and P K Seidelmann (1988), "Time scales – Their history, definition and interpretation", Astronomy and Astrophysics, vol. 194 (nos. 1–2) (April 1988), pp. 304–308. 'ESAA (1992)': P K Seidelmann (ed.), "Explanatory Supplement to the Astronomical Almanac", University Science Books, CA, 1992; . 'ESAE 1961': "Explanatory Supplement to the Astronomical Ephemeris and the American Ephemeris and Nautical Almanac" ('prepared jointly by the Nautical Almanac Offices of the United Kingdom and the United States of America', HMSO, London, 1961). IAU resolutions (1976): Resolutions adopted by the IAU in 1976 at Grenoble. "Improved Lunar Ephemeris", US Government Printing Office, 1954. W Markowitz, R G Hall, S Edelson (1955), "Ephemeris time from photographic positions of the moon", Astronomical Journal, vol. 60 (1955), p. 171. W Markowitz, R G Hall, L Essen, J V L Parry (1958), "Frequency of cesium in terms of ephemeris time", Physical Review Letters, vol. 1 (1958), 105–107. W Markowitz (1959), "Variations in the Rotation of the Earth, Results Obtained with the Dual-Rate Moon Camera and Photographic Zenith Tubes", Astronomical Journal, vol. 64 (1959), pp. 106–113. Wm Markowitz (1988), "Comparisons of ET(Solar), ET(Lunar), UT and TDT", in A K Babcock & G A Wilkins (eds.), The Earth's Rotation and Reference Frames for Geodesy and Geophysics, IAU Symposia #128 (1988), pp. 413–418. Dennis McCarthy & P. Kenneth Seidelmann (2009), TIME From Earth Rotation to Atomic Physics, Wiley-VCH, Weinheim, . W G Melbourne, J D Mulholland, W L Sjogren, F M Sturms (1968), "Constants and Related Information for Astrodynamic Calculations", NASA Technical Report 32-1306, Jet Propulsion Laboratory, July 15, 1968. L V Morrison, F R Stephenson (2004), "Historical values of the Earth's clock error ΔT and the calculation of eclipses", Journal for the History of Astronomy (), vol. 35(3) (2004), #120, pp. 327–336 (with addendum at vol. 36, p. 339). Simon Newcomb (1895), Tables of the Sun ("Tables of the Motion of the Earth on its Axis and Around the Sun", in "Tables of the Four Inner Planets", vol. 6, part 1, of Astronomical Papers prepared for the use of the American Ephemeris and Nautical Almanac (1895), at pages 1–169). W de Sitter (1927), "On the secular accelerations and the fluctuations of the longitudes of the moon, the sun, Mercury and Venus", Bull. Astron. Inst. Netherlands, vol. 4 (1927), pages 21–38. H Spencer Jones, "The Rotation of the Earth, and the Secular Accelerations of the Sun, Moon and Planets", in Monthly Notes of the Royal Astronomical Society, vol. 99 (1939), pp. 541–558. E M Standish, "Time scales in the JPL and CfA ephemerides", Astronomy & Astrophysics, vol. 336 (1998), 381–384. F R Stephenson, L V Morrison (1984), "Long-term changes in the rotation of the earth – 700 B.C. to A.D. 1980", (Royal Society, Discussion on Rotation in the Solar System, London, England, Mar. 
8, 9, 1984) Royal Society (London), Philosophical Transactions, Series A, vol. 313 (1984), #1524, pp. 47–70. F R Stephenson, L V Morrison (1995), "Long-Term Fluctuations in the Earth's Rotation: 700 BC to AD 1990", Royal Society (London), Philosophical Transactions, Series A, vol. 351 (1995), #1695, pp. 165–202. G M R Winkler and T C van Flandern (1977), "Ephemeris Time, relativity, and the problem of uniform time in astronomy", Astronomical Journal, vol. 82 (Jan. 1977), pp. 84–92. Time scales Time in astronomy
Ephemeris time
[ "Physics", "Astronomy" ]
5,127
[ "Time in astronomy", "Physical quantities", "Time", "Astronomical coordinate systems", "Spacetime", "Time scales" ]
10,006
https://en.wikipedia.org/wiki/Electronic%20musical%20instrument
An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument produces its sound by outputting an electrical, electronic or digital audio signal that ultimately is plugged into a power amplifier which drives a loudspeaker, creating the sound heard by the performer and listener. An electronic instrument might include a user interface for controlling its sound, often by adjusting the pitch, frequency, or duration of each note. A common user interface is the musical keyboard, which functions similarly to the keyboard on an acoustic piano, where the keys are each linked mechanically to swinging string hammers, whereas with an electronic keyboard, the keyboard interface is linked to a synth module, computer or other electronic or digital sound generator, which then creates a sound. However, it is increasingly common to separate user interface and sound-generating functions into a music controller (input device) and a music synthesizer, respectively, with the two devices communicating through a musical performance description language such as MIDI or Open Sound Control. The solid state nature of electronic keyboards also offers a differing "feel" and "response", providing a novel playing experience relative to operating a mechanically linked piano keyboard. All electronic musical instruments can be viewed as a subset of audio signal processing applications. Simple electronic musical instruments are sometimes called sound effects; the border between sound effects and actual musical instruments is often unclear. In the 21st century, electronic musical instruments are widely used in most styles of music. In popular music styles such as electronic dance music, almost all of the instrument sounds used in recordings are electronic instruments (e.g., bass synth, synthesizer, drum machine). Development of new electronic musical instruments, controllers, and synthesizers continues to be a highly active and interdisciplinary field of research. Specialized conferences, such as the International Conference on New Interfaces for Musical Expression, have been organized to report cutting-edge work, as well as to provide a showcase for artists who perform or create music with new electronic music instruments, controllers, and synthesizers. Classification In musicology, electronic musical instruments are known as electrophones. Electrophones are the fifth category of musical instrument under the Hornbostel-Sachs system. Musicologists typically only classify instruments as electrophones if the sound is initially produced by electricity, excluding electronically controlled acoustic instruments such as pipe organs and amplified instruments such as electric guitars. The category was added to the Hornbostel-Sachs musical instrument classification system by Sachs in his 1940 book The History of Musical Instruments; the original 1914 version of the system did not include it. Sachs divided electrophones into three subcategories: 51=electrically actuated acoustic instruments (e.g., pipe organ with electronic tracker action) 52=electrically amplified acoustic instruments (e.g., acoustic guitar with pickup) 53=instruments which make sound primarily by way of electrically driven oscillators The last category included instruments such as theremins or synthesizers, which he called radioelectric instruments. Francis William Galpin provided such a group in his own classification system, which is closer to Mahillon than Sachs-Hornbostel. 
For example, in Galpin's 1937 book A Textbook of European Musical Instruments, he lists electrophones with three second-level divisions for sound generation ("by oscillation", "electro-magnetic", and "electro-static"), as well as third-level and fourth-level categories based on the control method. Present-day ethnomusicologists, such as Margaret Kartomi and Terry Ellingson, suggest that, in keeping with the spirit of the original Hornbostel Sachs classification scheme, if one categorizes instruments by what first produces the initial sound in the instrument, that only subcategory 53 should remain in the electrophones category. Thus, it has been more recently proposed, for example, that the pipe organ (even if it uses electric key action to control solenoid valves) remain in the aerophones category, and that the electric guitar remain in the chordophones category, and so on. Early examples In the 18th-century, musicians and composers adapted a number of acoustic instruments to exploit the novelty of electricity. Thus, in the broadest sense, the first electrified musical instrument was the Denis d'or keyboard, dating from 1753, followed shortly by the clavecin électrique by the Frenchman Jean-Baptiste de Laborde in 1761. The Denis d'or consisted of a keyboard instrument of over 700 strings, electrified temporarily to enhance sonic qualities. The clavecin électrique was a keyboard instrument with plectra (picks) activated electrically. However, neither instrument used electricity as a sound source. The first electric synthesizer was invented in 1876 by Elisha Gray. The "Musical Telegraph" was a chance by-product of his telephone technology when Gray discovered that he could control sound from a self-vibrating electromagnetic circuit and so invented a basic oscillator. The Musical Telegraph used steel reeds oscillated by electromagnets and transmitted over a telephone line. Gray also built a simple loudspeaker device into later models, which consisted of a diaphragm vibrating in a magnetic field. A significant invention, which later had a profound effect on electronic music, was the audion in 1906. This was the first thermionic valve, or vacuum tube and which led to the generation and amplification of electrical signals, radio broadcasting, and electronic computation, among other things. Other early synthesizers included the Telharmonium (1897), the Theremin (1919), Jörg Mager's Spharophon (1924) and Partiturophone, Taubmann's similar Electronde (1933), Maurice Martenot's ondes Martenot ("Martenot waves", 1928), Trautwein's Trautonium (1930). The Mellertion (1933) used a non-standard scale, Bertrand's Dynaphone could produce octaves and perfect fifths, while the Emicon was an American, keyboard-controlled instrument constructed in 1930 and the German Hellertion combined four instruments to produce chords. Three Russian instruments also appeared, Oubouhof's Croix Sonore (1934), Ivor Darreg's microtonal 'Electronic Keyboard Oboe' (1937) and the ANS synthesizer, constructed by the Russian scientist Evgeny Murzin from 1937 to 1958. Only two models of this latter were built and the only surviving example is currently stored at the Lomonosov University in Moscow. It has been used in many Russian movies—like Solaris—to produce unusual, "cosmic" sounds. Hugh Le Caine, John Hanert, Raymond Scott, composer Percy Grainger (with Burnett Cross), and others built a variety of automated electronic-music controllers during the late 1940s and 1950s. 
In 1959 Daphne Oram produced a novel method of synthesis, her "Oramics" technique, driven by drawings on a 35 mm film strip; it was used for a number of years at the BBC Radiophonic Workshop. This workshop was also responsible for the theme to the TV series Doctor Who a piece, largely created by Delia Derbyshire, that more than any other ensured the popularity of electronic music in the UK. Telharmonium In 1897 Thaddeus Cahill patented an instrument called the Telharmonium (or Teleharmonium, also known as the Dynamaphone). Using tonewheels to generate musical sounds as electrical signals by additive synthesis, it was capable of producing any combination of notes and overtones, at any dynamic level. This technology was later used to design the Hammond organ. Between 1901 and 1910 Cahill had three progressively larger and more complex versions made, the first weighing seven tons, the last in excess of 200 tons. Portability was managed only by rail and with the use of thirty boxcars. By 1912, public interest had waned, and Cahill's enterprise was bankrupt. Theremin Another development, which aroused the interest of many composers, occurred in 1919–1920. In Leningrad, Leon Theremin built and demonstrated his Etherophone, which was later renamed the Theremin. This led to the first compositions for electronic instruments, as opposed to noisemakers and re-purposed machines. The Theremin was notable for being the first musical instrument played without touching it. In 1929, Joseph Schillinger composed First Airphonic Suite for Theremin and Orchestra, premièred with the Cleveland Orchestra with Leon Theremin as soloist. The next year Henry Cowell commissioned Theremin to create the first electronic rhythm machine, called the Rhythmicon. Cowell wrote some compositions for it, which he and Schillinger premiered in 1932. Ondes Martenot The ondes Martenot is played with a keyboard or by moving a ring along a wire, creating "wavering" sounds similar to a theremin. It was invented in 1928 by the French cellist Maurice Martenot, who was inspired by the accidental overlaps of tones between military radio oscillators, and wanted to create an instrument with the expressiveness of the cello. The French composer Olivier Messiaen used the ondes Martenot in pieces such as his 1949 symphony Turangalîla-Symphonie, and his sister-in-law Jeanne Loriod was a celebrated player. It appears in numerous film and television soundtracks, particularly science fiction and horror films. Contemporary users of the ondes Martenot include Tom Waits, Daft Punk and the Radiohead guitarist Jonny Greenwood. Trautonium The Trautonium was invented in 1928. It was based on the subharmonic scale, and the resulting sounds were often used to emulate bell or gong sounds, as in the 1950s Bayreuth productions of Parsifal. In 1942, Richard Strauss used it for the bell- and gong-part in the Dresden première of his Japanese Festival Music. This new class of instruments, microtonal by nature, was only adopted slowly by composers at first, but by the early 1930s there was a burst of new works incorporating these and other electronic instruments. Hammond organ and Novachord In 1929 Laurens Hammond established his company for the manufacture of electronic instruments. He went on to produce the Hammond organ, which was based on the principles of the Telharmonium, along with other developments including early reverberation units. The Hammond organ is an electromechanical instrument, as it used both mechanical elements and electronic parts. 
A Hammond organ used spinning metal tonewheels to produce different sounds. A magnetic pickup similar in design to the pickups in an electric guitar is used to transmit the pitches in the tonewheels to an amplifier and speaker enclosure. While the Hammond organ was designed to be a lower-cost alternative to a pipe organ for church music, musicians soon discovered that the Hammond was an excellent instrument for blues and jazz; indeed, an entire genre of music developed around this instrument, known as the organ trio (typically Hammond organ, drums, and a third instrument, either saxophone or guitar). The first commercially manufactured synthesizer was the Novachord, built by the Hammond Organ Company from 1938 to 1942, which offered 72-note polyphony using 12 oscillators driving monostable-based divide-down circuits, basic envelope control and resonant low-pass filters. The instrument featured 163 vacuum tubes and weighed 500 pounds. The instrument's use of envelope control is notable, since this is perhaps the most significant distinction between the modern synthesizer and other electronic instruments. Analogue synthesis 1950–1980 The most commonly used electronic instruments are synthesizers, so-called because they artificially generate sound using a variety of techniques. All early circuit-based synthesis involved the use of analogue circuitry, particularly voltage controlled amplifiers, oscillators and filters. An important technological development was the invention of the Clavivox synthesizer in 1956 by Raymond Scott with subassembly by Robert Moog. French composer and engineer Edgard Varèse created a variety of compositions using electronic horns, whistles, and tape. Most notably, he wrote Poème électronique for the Philips pavilion at the Brussels World Fair in 1958. Modular synthesizers RCA produced experimental devices to synthesize voice and music in the 1950s. The Mark II Music Synthesizer was housed at the Columbia-Princeton Electronic Music Center in New York City. Designed by Herbert Belar and Harry Olson at RCA, with contributions from Vladimir Ussachevsky and Peter Mauzey, it was installed at Columbia University in 1957. Consisting of a room-sized array of interconnected sound synthesis components, it was only capable of producing music by programming, using a paper tape sequencer punched with holes to control pitch sources and filters, similar to a mechanical player piano but capable of generating a wide variety of sounds. The vacuum tube system had to be patched to create timbres. In the 1960s synthesizers were still usually confined to studios due to their size. They were usually modular in design, their stand-alone signal sources and processors connected with patch cords or by other means and controlled by a common controlling device. Harald Bode, Don Buchla, Hugh Le Caine, Raymond Scott and Paul Ketoff were among the first to build such instruments, in the late 1950s and early 1960s. Buchla later produced a commercial modular synthesizer, the Buchla Music Easel. Robert Moog, who had been a student of Peter Mauzey and one of the RCA Mark II engineers, created a synthesizer that could reasonably be used by musicians, designing the circuits while he was at Columbia-Princeton. The Moog synthesizer was first displayed at the Audio Engineering Society convention in 1964. It required experience to set up sounds but was smaller and more intuitive than what had come before, less like a machine and more like a musical instrument. 
Moog established standards for control interfacing, using a logarithmic 1-volt-per-octave standard for pitch control and a separate triggering signal. This standardization allowed synthesizers from different manufacturers to operate simultaneously. Pitch control was usually performed either with an organ-style keyboard or a music sequencer producing a timed series of control voltages. During the late 1960s hundreds of popular recordings used Moog synthesizers. Other early commercial synthesizer manufacturers included ARP, who also started with modular synthesizers before producing all-in-one instruments, and British firm EMS. Integrated synthesizers In 1970, Moog designed the Minimoog, a non-modular synthesizer with a built-in keyboard. The analogue circuits were interconnected with switches in a simplified arrangement called "normalization." Though less flexible than a modular design, normalization made the instrument more portable and easier to use. The Minimoog sold 12,000 units and further standardized the design of subsequent synthesizers with its integrated keyboard, pitch and modulation wheels and VCO->VCF->VCA signal flow. It has become celebrated for its "fat" sound—and its tuning problems. Miniaturized solid-state components allowed synthesizers to become self-contained, portable instruments that soon appeared in live performance and quickly became widely used in popular music and electronic art music. Polyphony Many early analog synthesizers were monophonic, producing only one tone at a time. Popular monophonic synthesizers include the Moog Minimoog. A few, such as the Moog Sonic Six, ARP Odyssey and EML 101, could produce two different pitches at a time when two keys were pressed. Polyphony (multiple simultaneous tones, which enables chords) was only obtainable with electronic organ designs at first. Popular electronic keyboards combining organ circuits with synthesizer processing included the ARP Omni and Moog's Polymoog and Opus 3. By 1976 affordable polyphonic synthesizers began to appear, such as the Yamaha CS-50, CS-60 and CS-80, the Sequential Circuits Prophet-5 and the Oberheim Four-Voice. These remained complex, heavy and relatively costly. The recording of settings in digital memory allowed storage and recall of sounds. The first practical polyphonic synth, and the first to use a microprocessor as a controller, was the Sequential Circuits Prophet-5 introduced in late 1977. For the first time, musicians had a practical polyphonic synthesizer that could save all knob settings in computer memory and recall them at the touch of a button. The Prophet-5's design paradigm became a new standard, slowly pushing out more complex and recondite modular designs. Tape recording In 1935, another significant development was made in Germany. Allgemeine Elektricitäts Gesellschaft (AEG) demonstrated the first commercially produced magnetic tape recorder, called the Magnetophon. Audio tape, which had the advantage of being fairly light as well as having good audio fidelity, ultimately replaced the bulkier wire recorders. The term "electronic music" (which first came into use during the 1930s) came to include the tape recorder as an essential element: "electronically produced sounds recorded on tape and arranged by the composer to form a musical composition". It was also indispensable to Musique concrète. 
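The 1-volt-per-octave convention described earlier maps control voltage to pitch exponentially: each additional volt doubles the frequency. A minimal illustrative sketch of that mapping follows; the 440 Hz reference pitch and the example voltages are arbitrary choices for demonstration, not the specification of any particular instrument:

```python
# Exponential pitch scaling under the 1-volt-per-octave convention.
# The reference pitch and voltages below are arbitrary illustrative values.

def cv_to_frequency(cv_volts: float, reference_hz: float = 440.0) -> float:
    """Frequency of an oscillator tuned to `reference_hz` at 0 V."""
    return reference_hz * 2.0 ** cv_volts

for cv in (0.0, 1.0, 2.0, 1.0 / 12.0):   # one octave per volt, one semitone per 1/12 V
    print(f"{cv:.4f} V -> {cv_to_frequency(cv):.2f} Hz")
# 0.0000 V -> 440.00 Hz, 1.0000 V -> 880.00 Hz, 2.0000 V -> 1760.00 Hz, 0.0833 V -> 466.16 Hz
```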
Tape also gave rise to the first, analogue, sample-playback keyboards, the Chamberlin and its more famous successor the Mellotron, an electro-mechanical, polyphonic keyboard originally developed and built in Birmingham, England in the early 1960s. Sound sequencer During the 1940s–1960s, Raymond Scott, an American composer of electronic music, invented various kinds of music sequencers for his electric compositions. Step sequencers played rigid patterns of notes using a grid of (usually) 16 buttons, or steps, each step being 1/16 of a measure. These patterns of notes were then chained together to form longer compositions. Software sequencers have been in continuous use since the 1950s in the context of computer music, including computer-played music (software sequencer), computer-composed music (music synthesis), and computer sound generation (sound synthesis). Digital era 1980–2000 Digital synthesis The first digital synthesizers were academic experiments in sound synthesis using digital computers. FM synthesis was developed for this purpose, as a way of generating complex sounds digitally with the smallest number of computational operations per sound sample. In 1983 Yamaha introduced the first stand-alone digital synthesizer, the DX7. It used frequency modulation synthesis (FM synthesis), first developed by John Chowning at Stanford University during the late 1960s. Chowning exclusively licensed his FM synthesis patent to Yamaha in 1975. Yamaha subsequently released their first FM synthesizers, the GS-1 and GS-2, which were costly and heavy. There followed a pair of smaller, preset versions, the CE20 and CE25 Combo Ensembles, targeted primarily at the home organ market and featuring four-octave keyboards. Yamaha's third generation of digital synthesizers was a commercial success; it consisted of the DX7 and DX9 (1983). Both models were compact, reasonably priced, and dependent on custom digital integrated circuits to produce FM tonalities. The DX7 was the first mass market all-digital synthesizer. It became indispensable to many music artists of the 1980s, and demand soon exceeded supply. The DX7 sold over 200,000 units within three years. The DX series was not easy to program but offered a detailed, percussive sound that led to the demise of the electro-mechanical Rhodes piano, which was heavier and larger than a DX synth. Following the success of FM synthesis Yamaha signed a contract with Stanford University in 1989 to develop digital waveguide synthesis, leading to the first commercial physical modeling synthesizer, Yamaha's VL-1, in 1994. The DX7 was affordable enough for amateurs and young bands to buy, unlike the costly synthesizers of previous generations, which were mainly used by top professionals. Sampling The Fairlight CMI (Computer Musical Instrument), the first polyphonic digital sampler, was the harbinger of sample-based synthesizers. Designed in 1978 by Peter Vogel and Kim Ryrie and based on a dual microprocessor computer designed by Tony Furse in Sydney, Australia, the Fairlight CMI gave musicians the ability to modify volume, attack, decay, and use special effects like vibrato. Sample waveforms could be displayed on-screen and modified using a light pen. The Synclavier from New England Digital was a similar system. Jon Appleton (with Jones and Alonso) invented the Dartmouth Digital Synthesizer, later to become the New England Digital Corp's Synclavier. 
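Frequency modulation synthesis, mentioned above as the basis of the DX7, builds complex spectra from very few operations per sample: a carrier sine wave whose phase is modulated by another sine wave. A minimal two-operator sketch is given below; all parameter values are arbitrary illustrative choices, not the settings of any real instrument:

```python
import math

# Minimal two-operator FM voice: a sine carrier whose phase is modulated
# by a sine modulator.  Parameter values are arbitrary examples.

def fm_sample(t, carrier_hz=220.0, modulator_hz=440.0, mod_index=2.0, amplitude=0.8):
    """One output sample of simple FM at time t (seconds)."""
    modulator = math.sin(2 * math.pi * modulator_hz * t)
    return amplitude * math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator)

SAMPLE_RATE = 44_100
samples = [fm_sample(n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]  # one second of audio
print(len(samples), min(samples), max(samples))
```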
The Kurzweil K250, first produced in 1983, was also a successful polyphonic digital music synthesizer, noted for its ability to reproduce several instruments synchronously and for having a velocity-sensitive keyboard. Computer music An important new development was the advent of computers for the purpose of composing music, as opposed to manipulating or creating sounds. Iannis Xenakis began what is called musique stochastique, or stochastic music, which is a method of composing that employs mathematical probability systems. Different probability algorithms were used to create a piece under a set of parameters. Xenakis used graph paper and a ruler to aid in calculating the velocity trajectories of glissando for his orchestral composition Metastasis (1953–54), but later turned to the use of computers to compose pieces like ST/4 for string quartet and ST/48 for orchestra (both 1962). The impact of computers continued in 1956. Lejaren Hiller and Leonard Isaacson composed Illiac Suite for string quartet, the first complete work of computer-assisted composition using algorithmic composition. In 1957, Max Mathews at Bell Labs wrote MUSIC, the first of the MUSIC-N series of computer programs for generating digital audio waveforms through direct synthesis. Then Barry Vercoe wrote MUSIC 11 based on MUSIC IV-BF, a next-generation music synthesis program (later evolving into csound, which is still widely used). In the mid-1980s, Miller Puckette at IRCAM developed graphic signal-processing software for 4X called Max (after Max Mathews), and later ported it to Macintosh (with Dave Zicarelli extending it for Opcode) for real-time MIDI control, making algorithmic composition available to most composers with a modest computer programming background. MIDI In 1980, a group of musicians and music merchants met to standardize an interface by which new instruments could communicate control instructions with other instruments and the prevalent microcomputer. This standard was dubbed MIDI (Musical Instrument Digital Interface). A paper was authored by Dave Smith of Sequential Circuits and proposed to the Audio Engineering Society in 1981. Then, in August 1983, the MIDI Specification 1.0 was finalized. The advent of MIDI technology allows a single keystroke, control wheel motion, pedal movement, or command from a microcomputer to activate every device in the studio remotely and in synchrony, with each device responding according to conditions predetermined by the composer. MIDI instruments and software made powerful control of sophisticated instruments easily affordable by many studios and individuals. Acoustic sounds became reintegrated into studios via sampling and sampled-ROM-based instruments. Modern electronic musical instruments The increasing power and decreasing cost of sound-generating electronics (and especially of the personal computer), combined with the standardization of the MIDI and Open Sound Control musical performance description languages, has facilitated the separation of musical instruments into music controllers and music synthesizers. By far the most common musical controller is the musical keyboard. Other controllers include the radiodrum, Akai's EWI and Yamaha's WX wind controllers, the guitar-like SynthAxe, the BodySynth, the Buchla Thunder, the Continuum Fingerboard, the Roland Octapad, various isomorphic keyboards including the Thummer, and Kaossilator Pro, and kits like I-CubeX. Reactable The Reactable is a round translucent table with a backlit interactive display. 
By placing and manipulating blocks called tangibles on the table surface, while interacting with the visual display via finger gestures, a virtual modular synthesizer is operated, creating music or sound effects. Percussa AudioCubes AudioCubes are autonomous wireless cubes powered by an internal computer system and rechargeable battery. They have internal RGB lighting, and are capable of detecting each other's location, orientation and distance. The cubes can also detect distances to the user's hands and fingers. Through interaction with the cubes, a variety of music and sound software can be operated. AudioCubes have applications in sound design, music production, DJing and live performance. Kaossilator The Kaossilator and Kaossilator Pro are compact instruments where the position of a finger on the touch pad controls two note-characteristics; usually the pitch is changed with a left-right motion and the tonal property, filter or other parameter changes with an up-down motion. The touch pad can be set to different musical scales and keys. The instrument can record a repeating loop of adjustable length, set to any tempo, and new loops of sound can be layered on top of existing ones. This lends itself to electronic dance music but is more limited for controlled sequences of notes, as the pad on a regular Kaossilator is featureless. Eigenharp The Eigenharp is a large instrument resembling a bassoon, which can be interacted with through big buttons, a drum sequencer and a mouthpiece. The sound processing is done on a separate computer. AlphaSphere The AlphaSphere is a spherical instrument that consists of 48 tactile pads that respond to pressure as well as touch. Custom software allows the pads to be indefinitely programmed individually or by groups in terms of function, note, and pressure parameter among many other settings. The primary concept of the AlphaSphere is to increase the level of expression available to electronic musicians, by allowing for the playing style of a musical instrument. Chip music Chiptune, chipmusic, or chip music is music written in sound formats where many of the sound textures are synthesized or sequenced in real time by a computer or video game console sound chip, sometimes including sample-based synthesis and low bit sample playback. Many chip music devices featured synthesizers in tandem with low rate sample playback. DIY culture During the late 1970s and early 1980s, do-it-yourself designs were published in hobby electronics magazines (such as the Formant modular synth, a DIY clone of the Moog system, published by Elektor) and kits were supplied by companies such as Paia in the US, and Maplin Electronics in the UK. Circuit bending In 1966, Reed Ghazala discovered and began to teach "circuit bending"—the application of the creative short circuit, a process of chance short-circuiting, creating experimental electronic instruments, exploring sonic elements mainly of timbre and with less regard to pitch or rhythm, and influenced by John Cage's aleatoric music concept. Much of this manipulation of circuits directly, especially to the point of destruction, was pioneered by Louis and Bebe Barron in the early 1950s, such as their work with John Cage on the Williams Mix and especially in the soundtrack to Forbidden Planet. 
Modern circuit bending is the creative customization of the circuits within electronic devices such as low voltage, battery-powered guitar effects, children's toys and small digital synthesizers to create new musical or visual instruments and sound generators. Emphasizing spontaneity and randomness, the techniques of circuit bending have been commonly associated with noise music, though many more conventional contemporary musicians and musical groups have been known to experiment with "bent" instruments. Circuit bending usually involves dismantling the machine and adding components such as switches and potentiometers that alter the circuit. With the revived interest in analogue synthesizers, circuit bending became a cheap solution for many experimental musicians to create their own individual analogue sound generators. Nowadays many schematics can be found to build noise generators such as the Atari Punk Console or the Dub Siren, as well as simple modifications for children's toys such as the Speak & Spell that are often modified by circuit benders. Modular synthesizers The modular synthesizer is a type of synthesizer consisting of separate interchangeable modules. These are also available as kits for hobbyist DIY constructors. Many hobbyist designers also make available bare PCBs and front panels for sale to other hobbyists. See also Experimental musical instrument Live electronic music Visual music STEIM Technologies Oscilloscope Stereophonic sound Instrument families Vocoder Individual instruments (historical) Electronic sackbut Individual instruments (modern) Kraakdoos Metronome Razer Hydra In Indian and Asian traditional music Electronic tanpura Shruti box References Works cited External links 120 Years of Electronic Music A chronology of computer and electronic music (including instruments) History of Electronic Music (French) Tons of Tones!!: Site with technical data on Electronic Modelling of Musical Tones DIY DIY Hardware and Software Discussion forum at Electro-music.com The Synth-DIY email list Music From Outer Space Information and parts to self-build a synthesizer. Synthesizer do it yourself, a wiki about DIY electronic musical instruments Museums and collections Horniman Museum's music gallery, London, UK. Has one or two synths behind glass. Moogseum, Asheville, North Carolina, US Musical Museum, Brentford, London, UK. Mostly electro-mechanical instruments. Musical Instrument Museum, Phoenix, Arizona, US Staatliches Institut für Musikforschung, Berlin, Germany Swiss Museum & Center for Electronic Music Instruments The National Music Centre Collection, Canada Vintage Synthesizer Museum, California, US Washington and Lee University Synthesizer Museum, Washington, US Popular music Electronic dance music Audio engineering
Electronic musical instrument
[ "Engineering" ]
6,019
[ "Electrical engineering", "Audio engineering" ]
10,008
https://en.wikipedia.org/wiki/Electrode
An electrode is an electrical conductor used to make contact with a nonmetallic part of a circuit (e.g. a semiconductor, an electrolyte, a vacuum or air). Electrodes are essential parts of batteries and can consist of a variety of materials (chemicals) depending on the type of battery. Michael Faraday coined the term "electrode" in 1833; the word recalls the Greek ἤλεκτρον ("amber") and ὁδός (hodós, "path, way"). The electrophore, invented by Johan Wilcke in 1762, was an early version of an electrode used to study static electricity. Anode and cathode in electrochemical cells Electrodes are an essential part of any battery. The first electrochemical battery was devised by Alessandro Volta and was aptly named the Voltaic cell. This battery consisted of a stack of copper and zinc electrodes separated by brine-soaked paper disks. Due to fluctuation in the voltage provided by the voltaic cell, it was not very practical. The first practical battery was invented in 1836 and named the Daniell cell after John Frederic Daniell. It still made use of the zinc–copper electrode combination. Since then, many more batteries have been developed using various materials. The basis of all these is still using two electrodes, anodes and cathodes. Anode (-) 'Anode' was coined by William Whewell at Michael Faraday's request, derived from the Greek words ἄνω (ano), 'upwards' and ὁδός (hodós), 'a way'. The anode is the electrode through which the conventional current enters from the electrical circuit of an electrochemical cell (battery) into the non-metallic cell. The electrons then flow to the other side of the battery. Benjamin Franklin surmised that the electrical flow moved from positive to negative. The electrons flow away from the anode and the conventional current towards it. From both it can be concluded that the charge of the anode is negative. The electron entering the anode comes from the oxidation reaction that takes place next to it. Cathode (+) The cathode is in many ways the opposite of the anode. The name (also coined by Whewell) comes from the Greek words κάτω (kato), 'downwards' and ὁδός (hodós), 'a way'. It is the positive electrode, meaning the electrons flow from the electrical circuit through the cathode into the non-metallic part of the electrochemical cell. At the cathode, the reduction reaction takes place, with the electrons arriving from the wire connected to the cathode being absorbed by the oxidizing agent. Primary cell A primary cell is a battery designed to be used once and then discarded. This is due to the electrochemical reactions taking place at the electrodes in the cell not being reversible. An example of a primary cell is the disposable alkaline battery commonly used in flashlights. It consists of a zinc anode and a manganese dioxide cathode in which ZnO is formed. The half-reactions are: Zn(s) + 2OH−(aq) → ZnO(s) + H2O(l) + 2e− [E°oxidation = −1.28 V] 2MnO2(s) + H2O(l) + 2e− → Mn2O3(s) + 2OH−(aq) [E°reduction = +0.15 V] Overall reaction: Zn(s) + 2MnO2(s) → ZnO(s) + Mn2O3(s) [E°total = +1.43 V] The ZnO is prone to clumping and will give less efficient discharge if the cell is recharged. It is possible to recharge these batteries, but manufacturers advise against it due to safety concerns. Other primary cells include zinc–carbon, zinc–chloride, and lithium iron disulfide. Secondary cell Contrary to the primary cell, a secondary cell can be recharged. The first was the lead–acid battery, invented in 1859 by French physicist Gaston Planté. 
This type of battery is still the most widely used in automobiles, among others. The cathode consists of lead dioxide (PbO2) and the anode of solid lead. Other commonly used rechargeable batteries are nickel–cadmium, nickel–metal hydride, and lithium-ion, the last of which will be explained more thoroughly in this article due to its importance. Marcus' theory of electron transfer Marcus theory, originally developed by Nobel laureate Rudolph A. Marcus, explains the rate at which an electron can move from one chemical species to another; for this article, this can be seen as 'jumping' from the electrode to a species in the solvent or vice versa. We can represent the problem as calculating the transfer rate for the transfer of an electron from a donor D to an acceptor A: D + A → D+ + A−. The potential energy of the system is a function of the translational, rotational, and vibrational coordinates of the reacting species and the molecules of the surrounding medium, collectively called the reaction coordinates; in the usual potential energy diagram, the abscissa represents these reaction coordinates. From the classical electron transfer theory, the expression of the reaction rate constant (probability of reaction) can be calculated, if a non-adiabatic process and parabolic potential energy are assumed, by finding the point of intersection (Qx). One important point, noted by Marcus when he came up with the theory, is that the electron transfer must abide by the law of conservation of energy and the Franck–Condon principle. Doing this and then rearranging leads to the expression of the free energy of activation (ΔG‡) in terms of the overall free energy of the reaction (ΔG°): ΔG‡ = (λ + ΔG°)² / 4λ, in which λ is the reorganisation energy. Substituting this result into the classically derived Arrhenius equation leads to k = A·exp(−ΔG‡/kBT), with A being the pre-exponential factor, which is usually experimentally determined, although a semi-classical derivation provides more information, as will be explained below. This classically derived result qualitatively reproduced observations of a maximum electron transfer rate under the condition −ΔG° = λ. For a more extensive mathematical treatment one could read the paper by Newton; for an interpretation of this result and a closer look at the physical meaning of λ, one can read the paper by Marcus. The situation at hand can be more accurately described by using the displaced harmonic oscillator model, in which quantum tunneling is allowed. This is needed in order to explain why even at near-zero Kelvin there still are electron transfers, in contradiction to the classical theory. Without going into too much detail on how the derivation is done, it rests on using Fermi's golden rule from time-dependent perturbation theory with the full Hamiltonian of the system. It is possible to look at the overlap in the wavefunctions of both the reactants and the products (the right and the left side of the chemical reaction) and therefore to determine when their energies are the same and electron transfer is allowed. As touched on before, this must happen because only then is conservation of energy obeyed. Skipping over a few mathematical steps, the probability of electron transfer can be calculated (albeit with some difficulty) using the following formula: k = (2π/ħ)·|V|²·F, with V being the electronic coupling constant describing the interaction between the two states (reactants and products) and F being the line shape function. 
Taking the classical limit of this expression, meaning ħω ≪ kBT (the high-temperature limit), and making some substitutions, an expression is obtained very similar to the classically derived formula, as expected. The main difference is that the pre-exponential factor is now described by more physical parameters instead of the experimental factor A. One is once again referred to the sources as listed below for a more in-depth and rigorous mathematical derivation and interpretation. Efficiency The physical properties of electrodes are mainly determined by the material of the electrode and the topology of the electrode. The properties required depend on the application and therefore there are many kinds of electrodes in circulation. The defining property for a material to be used as an electrode is that it be conductive. Any conducting material such as metals, semiconductors, graphite or conductive polymers can therefore be used as an electrode. Often electrodes consist of a combination of materials, each with a specific task. Typical constituents are the active materials, which serve as the particles that are oxidized or reduced, conductive agents, which improve the conductivity of the electrode, and binders, which are used to contain the active particles within the electrode. The efficiency of electrochemical cells is judged by a number of properties; important quantities are the self-discharge time, the discharge voltage and the cycle performance. The physical properties of the electrodes play an important role in determining these quantities. Important properties of the electrodes are: the electrical resistivity, the specific heat capacity (c_p), the electrode potential and the hardness. Of course, for technological applications, the cost of the material is also an important factor. The values of these properties at room temperature (T = 293 K) for some commonly used materials are listed in the table below. Surface effects The surface topology of the electrode plays an important role in determining the efficiency of an electrode. The efficiency of the electrode can be reduced due to contact resistance. To create an efficient electrode it is therefore important to design it such that it minimizes the contact resistance. Manufacturing The production of electrodes for Li-ion batteries is done in various steps as follows: The various constituents of the electrode are mixed into a solvent. This mixture is designed such that it improves the performance of the electrodes. Common components of this mixture are: The active electrode particles. A binder used to contain the active electrode particles. A conductive agent used to improve the conductivity of the electrode. The mixture created is known as an 'electrode slurry'. The electrode slurry above is coated onto a conductor which acts as the current collector in the electrochemical cell. Typical current collectors are copper for the anode and aluminum for the cathode. After the slurry has been applied to the conductor it is dried and then pressed to the required thickness. Structure of the electrode For a given selection of constituents of the electrode, the final efficiency is determined by the internal structure of the electrode. The important factors in the internal structure in determining the performance of the electrode are: Clustering of the active material and the conductive agent. In order for all the components of the slurry to perform their task, they should all be spread out evenly within the electrode. An even distribution of the conductive agent over the active material. 
This makes sure that the conductivity of the electrode is optimal. The adherence of the electrode to the current collectors. The adherence makes sure that the electrode does not dissolve into the electrolyte. The density of the active material. A balance should be found between the amount of active material, the conductive agent and the binder. Since the active material is the important factor in the electrode, the slurry should be designed such that the density of the active material is as high as possible, without preventing the conductive agent and the binder from functioning properly. These properties can be influenced in the production of the electrodes in a number of ways. The most important step in the manufacturing of the electrodes is creating the electrode slurry. As can be seen above, the important properties of the electrode all have to do with the even distribution of the components of the electrode. Therefore, it is very important that the electrode slurry be as homogeneous as possible. Multiple procedures have been developed to improve this mixing stage and current research is still being done. Electrodes in lithium ion batteries A modern application of electrodes is in lithium-ion batteries (Li-ion batteries). A Li-ion battery is a kind of electrochemical cell. Furthermore, a Li-ion battery is an example of a secondary cell since it is rechargeable. It can act as either a galvanic or an electrolytic cell. Li-ion batteries use lithium ions as the solute in the electrolyte, dissolved in an organic solvent. Lithium electrodes were first studied by Gilbert N. Lewis and Frederick G. Keyes in 1913. In the following century these electrodes were used to create and study the first Li-ion batteries. Li-ion batteries are very popular due to their great performance. Applications include mobile phones and electric cars. Due to their popularity, much research is being done to reduce the cost and increase the safety of Li-ion batteries. An integral part of Li-ion batteries are their anodes and cathodes; therefore, much research is being done into increasing the efficiency and safety and reducing the cost of these electrodes specifically. Cathodes In Li-ion batteries, the cathode consists of an intercalated lithium compound (a layered material consisting of layers of molecules composed of lithium and other elements). A common element which makes up part of the molecules in the compound is cobalt. Another frequently used element is manganese. The best choice of compound usually depends on the application of the battery. Advantages of cobalt-based compounds over manganese-based compounds are their high specific heat capacity, high volumetric heat capacity, low self-discharge rate, high discharge voltage and high cycle durability. There are, however, also drawbacks to using cobalt-based compounds, such as their high cost and their low thermostability. Manganese has similar advantages and a lower cost; however, there are some problems associated with using manganese. The main problem is that manganese tends to dissolve into the electrolyte over time. For this reason, cobalt is still the most common element which is used in the lithium compounds. There is much research being done into finding new materials which can be used to create cheaper and longer-lasting Li-ion batteries. For example, Chinese and American researchers have demonstrated that ultralong single wall carbon nanotubes significantly enhance lithium iron phosphate cathodes. 
By creating a highly efficient conductive network that securely binds lithium iron phosphate particles, adding carbon nanotubes as a conductive additive at a dosage of just 0.5 wt.% helps cathodes to achieve a remarkable rate capacity of 161.5 mAh g−1 at 0.5 C and 130.2 mAh g−1 at 5 C, while maintaining 87.4% capacity retention after 200 cycles at 2 C. Anodes The anodes used in mass-produced Li-ion batteries are either carbon based (usually graphite) or made out of spinel lithium titanate (Li4Ti5O12). Graphite anodes have been successfully implemented in many modern commercially available batteries due to their low cost, longevity and high energy density. However, they present issues of dendrite growth, with risks of shorting the battery and posing a safety issue. Li4Ti5O12 has the second largest market share of anodes, due to its stability and good rate capability, but with challenges such as low capacity. During the early 2000s, silicon anode research began picking up pace, becoming one of the decade's most promising candidates for future lithium-ion battery anodes. Silicon has one of the highest gravimetric capacities when compared to graphite and Li4Ti5O12 as well as a high volumetric one. Furthermore, silicon has the advantage of operating under a reasonable open circuit voltage without parasitic lithium reactions. However, silicon anodes have a major issue of volumetric expansion during lithiation of around 360%. This expansion may pulverize the anode, resulting in poor performance. To fix this problem, scientists looked into varying the dimensionality of the Si. Many studies have investigated Si nanowires, Si tubes as well as Si sheets. As a result, composite hierarchical Si anodes have become the major technology for future applications in lithium-ion batteries. In the early 2020s, the technology has been reaching commercial levels, with factories being built for mass production of anodes in the United States. Furthermore, metallic lithium is another possible candidate for the anode. It boasts a higher specific capacity than silicon; however, it comes with the drawback of working with highly unstable metallic lithium. As with graphite anodes, dendrite formation is another major limitation of metallic lithium, with the solid electrolyte interphase being a major design challenge. In the end, if stabilized, metallic lithium would be able to produce batteries that hold the most charge, while being the lightest. In recent years, researchers have conducted several studies on the use of single wall carbon nanotubes (SWCNTs) as conductive additives. These SWCNTs help to preserve electron conduction, ensure stable electrochemical reactions, and maintain uniform volume changes during cycling, effectively reducing anode pulverization. Mechanical properties A common failure mechanism of batteries is mechanical shock, which breaks either the electrode or the system's container, leading to poor conductivity and electrolyte leakage. However, the relevance of the mechanical properties of electrodes goes beyond resistance to collisions from their environment. During standard operation, the incorporation of ions into electrodes leads to a change in volume. This is well exemplified by Si electrodes in lithium-ion batteries expanding around 300% during lithiation. Such a change may lead to deformations in the lattice and, therefore, stresses in the material. The origin of stresses may be due to geometric constraints in the electrode or inhomogeneous plating of the ion. 
This phenomenon is very concerning as it may lead to electrode fracture and performance loss. Thus, mechanical properties are crucial to enable the development of new electrodes for long-lasting batteries. A possible strategy for measuring the mechanical behavior of electrodes during operation is to use nanoindentation. The method is able to analyze how the stresses evolve during the electrochemical reactions, being a valuable tool in evaluating possible pathways for coupling mechanical behavior and electrochemistry. More than just affecting the electrode's morphology, stresses are also able to impact electrochemical reactions. While the chemical driving forces are usually higher in magnitude than the mechanical energies, this is not true for Li-ion batteries. A study by Dr. Larché established a direct relation between the applied stress and the chemical potential of the electrode. Though it neglects multiple variables such as the variation of elastic constraints, it subtracts from the total chemical potential the elastic energy induced by the stress: μ = μ° + kT·ln(γx) − Ωσ. In this equation, μ represents the chemical potential, with μ° being its reference value. T stands for the temperature and k the Boltzmann constant. The term γ inside the logarithm is the activity and x is the ratio of the ion to the total composition of the electrode. The novel term Ω is the partial molar volume of the ion in the host and σ corresponds to the mean stress felt by the system. The result of this equation is that diffusion, which is dependent on chemical potential, is affected by the added stress and this therefore changes the battery's performance. Furthermore, mechanical stresses may also impact the electrode's solid-electrolyte-interphase layer. This interface, which regulates the ion and charge transfer, can be degraded by stress. Thus, more ions in the solution will be consumed to reform it, diminishing the overall efficiency of the system. Other anodes and cathodes In a vacuum tube or a semiconductor having polarity (diodes, electrolytic capacitors) the anode is the positive (+) electrode and the cathode the negative (−). The electrons enter the device through the cathode and exit the device through the anode. Many devices have other electrodes to control operation, e.g., base, gate, control grid. In a three-electrode cell, a counter electrode, also called an auxiliary electrode, is used only to make a connection to the electrolyte so that a current can be applied to the working electrode. The counter electrode is usually made of an inert material, such as a noble metal or graphite, to keep it from dissolving. Welding electrodes In arc welding, an electrode is used to conduct current through a workpiece to fuse two pieces together. Depending upon the process, the electrode is either consumable, in the case of gas metal arc welding or shielded metal arc welding, or non-consumable, such as in gas tungsten arc welding. For a direct current system, the weld rod or stick may be a cathode for a filling type weld or an anode for other welding processes. For an alternating current arc welder, the welding electrode would not be considered an anode or cathode. Alternating current electrodes For electrical systems which use alternating current, the electrodes are the connections from the circuitry to the object to be acted upon by the electric current but are not designated anode or cathode because the direction of flow of the electrons changes periodically, usually many times per second. 
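Returning to the stress-modified chemical potential discussed in the mechanical properties section above, the size of the stress term Ωσ can be compared with the thermal energy kT. The sketch below is a rough order-of-magnitude illustration only: the partial molar volume and stress values are arbitrary example numbers, not measured data for any particular electrode.

```python
# Order-of-magnitude comparison of the stress term (Omega * sigma) in the
# stress-modified chemical potential with the thermal energy kT.
# Omega and sigma below are arbitrary illustrative values, not measurements.

K_BOLTZMANN = 1.380649e-23      # J/K
T = 298.0                       # K, room temperature

omega = 1.0e-29                 # m^3, example partial molar volume of the ion in the host
sigma = 1.0e8                   # Pa (100 MPa), example mean stress in the electrode

stress_term = omega * sigma     # J, elastic energy subtracted from the chemical potential
thermal = K_BOLTZMANN * T       # J

print(f"Omega*sigma = {stress_term:.2e} J, kT = {thermal:.2e} J, "
      f"ratio = {stress_term / thermal:.2f}")   # ratio ~ 0.24 for these example values
```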
Chemically modified electrodes Chemically modified electrodes are electrodes that have their surfaces chemically modified to change the electrode's physical, chemical, electrochemical, optical, electrical, and transportive properties. These electrodes are used for advanced purposes in research and investigation. Uses Electrodes are used to provide current through nonmetal objects to alter them in numerous ways and to measure conductivity for numerous purposes. Examples include: Electrodes for fuel cells Electrodes for medical purposes, such as EEG (for recording brain activity), ECG (recording heart beats), ECT (electrical brain stimulation), defibrillator (recording and delivering cardiac stimulation) Electrodes for electrophysiology techniques in biomedical research Electrodes for execution by the electric chair Electrodes for electroplating Electrodes for arc welding Electrodes for cathodic protection Electrodes for grounding Electrodes for chemical analysis using electrochemical methods Nanoelectrodes for high-precision measurements in nanoelectrochemistry Inert electrodes for electrolysis (made of platinum) Membrane electrode assembly Electrodes for Taser electroshock weapon See also Reference electrode Gas diffusion electrode Cellulose electrode Anion vs. Cation Electron versus electron hole Electron microscope Tafel equation Hot cathode Cold cathode Reversible charge injection limit References Further reading Electricity
Electrode
[ "Chemistry" ]
4,605
[ "Electrochemistry", "Electrodes" ]
10,013
https://en.wikipedia.org/wiki/Evidence-based%20medicine
Evidence-based medicine (EBM) is "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. ... [It] means integrating individual clinical expertise with the best available external clinical evidence from systematic research." The aim of EBM is to integrate the experience of the clinician, the values of the patient, and the best available scientific information to guide decision-making about clinical management. The term was originally used to describe an approach to teaching the practice of medicine and improving decisions by individual physicians about individual patients. The EBM Pyramid is a tool that helps in visualizing the hierarchy of evidence in medicine, from least authoritative, like expert opinions, to most authoritative, like systematic reviews. Background, history, and definition Medicine has a long history of scientific inquiry about the prevention, diagnosis, and treatment of human disease. In the 11th century AD, Avicenna, a Persian physician and philosopher, developed an approach to EBM that was mostly similar to current ideas and practices. The concept of a controlled clinical trial was first described in 1662 by Jan Baptist van Helmont in reference to the practice of bloodletting. Wrote Van Helmont: The first published report describing the conduct and results of a controlled clinical trial was by James Lind, a Scottish naval surgeon who conducted research on scurvy during his time aboard HMS Salisbury in the Channel Fleet, while patrolling the Bay of Biscay. Lind divided the sailors participating in his experiment into six groups, so that the effects of various treatments could be fairly compared. Lind found improvement in symptoms and signs of scurvy among the group of men treated with lemons or oranges. He published a treatise describing the results of this experiment in 1753. An early critique of statistical methods in medicine was published in 1835, in Comptes Rendus de l’Académie des Sciences, Paris, by a man referred to as "Mr Civiale". In 1990, Gordon Guyatt, then a young internal medicine residency coordinator at McMaster University, introduced a teaching method he initially termed "Scientific Medicine." This approach emphasized applying critical appraisal techniques directly to bedside clinical decision-making, building on the work of his mentor, David Sackett. However, the concept met resistance from colleagues, as it implied that existing clinical practices lacked scientific rigor, even though this was likely true. To address this, Guyatt rebranded the approach as "Evidence-Based Medicine", a term first formally introduced in a 1991 editorial in the ACP Journal Club. Although the name was coined in 1991, it took several more years and the concerted efforts of many other teams to define the foundations of this method. Although more popular in medicine, the concept of "evidence-based" is spreading to other disciplines, such as the humanities, and to languages other than English, albeit at a slower pace. Clinical decision-making Alvan Feinstein's publication of Clinical Judgment in 1967 focused attention on the role of clinical reasoning and identified biases that can affect it. In 1972, Archie Cochrane published Effectiveness and Efficiency, which described the lack of controlled trials supporting many practices that had previously been assumed to be effective. In 1973, John Wennberg began to document wide variations in how physicians practiced. Through the 1980s, David M. 
Eddy described errors in clinical reasoning and gaps in evidence. In the mid-1980s, Alvan Feinstein, David Sackett and others published textbooks on clinical epidemiology, which translated epidemiological methods to physician decision-making. Toward the end of the 1980s, a group at RAND showed that large proportions of procedures performed by physicians were considered inappropriate even by the standards of their own experts. Evidence-based guidelines and policies David M. Eddy first began to use the term 'evidence-based' in 1987 in workshops and a manual commissioned by the Council of Medical Specialty Societies to teach formal methods for designing clinical practice guidelines. The manual was eventually published by the American College of Physicians. Eddy first published the term 'evidence-based' in March 1990, in an article in the Journal of the American Medical Association (JAMA) that laid out the principles of evidence-based guidelines and population-level policies, which Eddy described as "explicitly describing the available evidence that pertains to a policy and tying the policy to evidence instead of standard-of-care practices or the beliefs of experts. The pertinent evidence must be identified, described, and analyzed. The policymakers must determine whether the policy is justified by the evidence. A rationale must be written." He discussed evidence-based policies in several other papers published in JAMA in the spring of 1990. Those papers were part of a series of 28 published in JAMA between 1990 and 1997 on formal methods for designing population-level guidelines and policies. Medical education The term 'evidence-based medicine' was introduced slightly later, in the context of medical education. In the autumn of 1990, Gordon Guyatt used it in an unpublished description of a program at McMaster University for prospective or new medical students. Guyatt and others first published the term two years later (1992) to describe a new approach to teaching the practice of medicine. In 1996, David Sackett and colleagues clarified the definition of this tributary of evidence-based medicine as "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. ... [It] means integrating individual clinical expertise with the best available external clinical evidence from systematic research." This branch of evidence-based medicine aims to make individual decision making more structured and objective by better reflecting the evidence from research. Population-based data are applied to the care of an individual patient, while respecting the fact that practitioners have clinical expertise reflected in effective and efficient diagnosis and thoughtful identification and compassionate use of individual patients' predicaments, rights, and preferences. Between 1993 and 2000, the Evidence-Based Medicine Working Group at McMaster University published the methods to a broad physician audience in a series of 25 "Users' Guides to the Medical Literature" in JAMA. In 1995, Rosenberg and Donald defined individual-level evidence-based medicine as "the process of finding, appraising, and using contemporaneous research findings as the basis for medical decisions." 
In 2010, Greenhalgh used a definition that emphasized quantitative methods: "the use of mathematical estimates of the risk of benefit and harm, derived from high-quality research on population samples, to inform clinical decision-making in the diagnosis, investigation or management of individual patients." The two original definitions highlight important differences in how evidence-based medicine is applied to populations versus individuals. When designing guidelines applied to large groups of people in settings with relatively little opportunity for modification by individual physicians, evidence-based policymaking emphasizes that good evidence should exist to document a test's or treatment's effectiveness. In the setting of individual decision-making, practitioners can be given greater latitude in how they interpret research and combine it with their clinical judgment. In 2005, Eddy offered an umbrella definition for the two branches of EBM: "Evidence-based medicine is a set of principles and methods intended to ensure that to the greatest extent possible, medical decisions, guidelines, and other types of policies are based on and consistent with good evidence of effectiveness and benefit." Progress In the area of evidence-based guidelines and policies, the explicit insistence on evidence of effectiveness was introduced by the American Cancer Society in 1980. The U.S. Preventive Services Task Force (USPSTF) began issuing guidelines for preventive interventions based on evidence-based principles in 1984. In 1985, the Blue Cross Blue Shield Association applied strict evidence-based criteria for covering new technologies. Beginning in 1987, specialty societies such as the American College of Physicians, and voluntary health organizations such as the American Heart Association, wrote many evidence-based guidelines. In 1991, Kaiser Permanente, a managed care organization in the US, began an evidence-based guidelines program. In 1991, Richard Smith wrote an editorial in the British Medical Journal and introduced the ideas of evidence-based policies in the UK. In 1993, the Cochrane Collaboration created a network of 13 countries to produce systematic reviews and guidelines. In 1997, the US Agency for Healthcare Research and Quality (AHRQ, then known as the Agency for Health Care Policy and Research, or AHCPR) established Evidence-based Practice Centers (EPCs) to produce evidence reports and technology assessments to support the development of guidelines. In the same year, a National Guideline Clearinghouse that followed the principles of evidence-based policies was created by AHRQ, the AMA, and the American Association of Health Plans (now America's Health Insurance Plans). In 1999, the National Institute for Clinical Excellence (NICE) was created in the UK. In the area of medical education, medical schools in Canada, the US, the UK, Australia, and other countries now offer programs that teach evidence-based medicine. A 2009 study of UK programs found that more than half of UK medical schools offered some training in evidence-based medicine, although the methods and content varied considerably, and EBM teaching was restricted by lack of curriculum time, trained tutors and teaching materials. Many programs have been developed to help individual physicians gain better access to evidence. For example, UpToDate was created in the early 1990s. The Cochrane Collaboration began publishing evidence reviews in 1993. 
In 1995, BMJ Publishing Group launched Clinical Evidence, a 6-monthly periodical that provided brief summaries of the current state of evidence about important clinical questions for clinicians. Current practice By 2000, use of the term evidence-based had extended to other levels of the health care system. An example is evidence-based health services, which seek to increase the competence of health service decision makers and the practice of evidence-based medicine at the organizational or institutional level. The multiple tributaries of evidence-based medicine share an emphasis on the importance of incorporating evidence from formal research in medical policies and decisions. However, because they differ on the extent to which they require good evidence of effectiveness before promoting a guideline or payment policy, a distinction is sometimes made between evidence-based medicine and science-based medicine, which also takes into account factors such as prior plausibility and compatibility with established science (as when medical organizations promote controversial treatments such as acupuncture). Differences also exist regarding the extent to which it is feasible to incorporate individual-level information in decisions. Thus, evidence-based guidelines and policies may not readily "hybridise" with experience-based practices orientated towards ethical clinical judgement, and can lead to contradictions, contest, and unintended crises. The most effective "knowledge leaders" (managers and clinical leaders) use a broad range of management knowledge in their decision making, rather than just formal evidence. Evidence-based guidelines may provide the basis for governmentality in health care, and consequently play a central role in the governance of contemporary health care systems. Methods Steps The steps for designing explicit, evidence-based guidelines were described in the late 1980s: formulate the question (population, intervention, comparison intervention, outcomes, time horizon, setting); search the literature to identify studies that inform the question; interpret each study to determine precisely what it says about the question; if several studies address the question, synthesize their results (meta-analysis); summarize the evidence in evidence tables; compare the benefits, harms and costs in a balance sheet; draw a conclusion about the preferred practice; write the guideline; write the rationale for the guideline; have others review each of the previous steps; implement the guideline. For the purposes of medical education and individual-level decision making, five steps of EBM in practice were described in 1992 and the experience of delegates attending the 2003 Conference of Evidence-Based Health Care Teachers and Developers was summarized into five steps and published in 2005. 
This five-step process can broadly be categorized as follows: (1) translation of uncertainty to an answerable question, which includes critical questioning, study design and levels of evidence; (2) systematic retrieval of the best evidence available; (3) critical appraisal of evidence for internal validity, which can be broken down into aspects regarding systematic errors (as a result of selection bias, information bias and confounding), quantitative aspects of diagnosis and treatment, the effect size and aspects regarding its precision, clinical importance of results, and external validity or generalizability; (4) application of results in practice; (5) evaluation of performance. Evidence reviews Systematic reviews of published research studies are a major part of the evaluation of particular treatments. The Cochrane Collaboration is one of the best-known organisations that conducts systematic reviews. Like other producers of systematic reviews, it requires authors to provide a detailed study protocol as well as a reproducible plan of their literature search and evaluations of the evidence. After the best evidence is assessed, treatment is categorized as (1) likely to be beneficial, (2) likely to be harmful, or (3) without evidence to support either benefit or harm. A 2007 analysis of 1,016 systematic reviews from all 50 Cochrane Collaboration Review Groups found that 44% of the reviews concluded that the intervention was likely to be beneficial, 7% concluded that the intervention was likely to be harmful, and 49% concluded that evidence did not support either benefit or harm. 96% recommended further research. In 2017, a study assessed the role of systematic reviews produced by the Cochrane Collaboration to inform US private payers' policymaking; it showed that although the medical policy documents of major US private payers were informed by Cochrane systematic reviews, there was still scope to encourage their further use. Assessing the quality of evidence Evidence-based medicine categorizes different types of clinical evidence and rates or grades them according to the strength of their freedom from the various biases that beset medical research. For example, the strongest evidence for therapeutic interventions is provided by systematic review of randomized, well-blinded, placebo-controlled trials with allocation concealment and complete follow-up involving a homogeneous patient population and medical condition. In contrast, patient testimonials, case reports, and even expert opinion have little value as proof because of the placebo effect, the biases inherent in observation and reporting of cases, and difficulties in ascertaining who is an expert (however, some critics have argued that expert opinion "does not belong in the rankings of the quality of empirical evidence because it does not represent a form of empirical evidence" and continue that "expert opinion would seem to be a separate, complex type of knowledge that would not fit into hierarchies otherwise limited to empirical evidence alone."). Several organizations have developed grading systems for assessing the quality of evidence. For example, in 1989 the U.S. Preventive Services Task Force (USPSTF) put forth the following system: Level I: Evidence obtained from at least one properly designed randomized controlled trial. Level II-1: Evidence obtained from well-designed controlled trials without randomization. Level II-2: Evidence obtained from well-designed cohort studies or case-control studies, preferably from more than one center or research group. 
Level II-3: Evidence obtained from multiple time series designs with or without the intervention. Dramatic results in uncontrolled trials might also be regarded as this type of evidence. Level III: Opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees. Another example is the Oxford CEBM Levels of Evidence published by the Centre for Evidence-Based Medicine. First released in September 2000, the Levels of Evidence provide a way to rank evidence for claims about prognosis, diagnosis, treatment benefits, treatment harms, and screening, which most grading schemes do not address. The original CEBM Levels were developed for Evidence-Based On Call, to make the process of finding evidence feasible and its results explicit. In 2011, an international team redesigned the Oxford CEBM Levels to make them more understandable and to take into account recent developments in evidence ranking schemes. The Oxford CEBM Levels of Evidence have been used by patients and clinicians, as well as by experts to develop clinical guidelines, such as recommendations for the optimal use of phototherapy and topical therapy in psoriasis and guidelines for the use of the BCLC staging system for diagnosing and monitoring hepatocellular carcinoma in Canada. In 2000, a system was developed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group. The GRADE system takes into account more dimensions than just the quality of medical research. It requires users who are performing an assessment of the quality of evidence, usually as part of a systematic review, to consider the impact of different factors on their confidence in the results. Authors of GRADE tables assign one of four levels to evaluate the quality of evidence, on the basis of their confidence that the observed effect (a numeric value) is close to the true effect. The confidence value is based on judgments assigned in five different domains in a structured manner. The GRADE working group defines 'quality of evidence' and 'strength of recommendations' as two different concepts that are commonly confused with each other. Systematic reviews may include randomized controlled trials that have low risk of bias, or observational studies that have high risk of bias. In the case of randomized controlled trials, the quality of evidence is high but can be downgraded in five different domains. Risk of bias: A judgment made on the basis of the chance that bias in included studies has influenced the estimate of effect. Imprecision: A judgment made on the basis of the chance that the observed estimate of effect could change completely. Indirectness: A judgment made on the basis of the differences in characteristics of how the study was conducted and how the results are actually going to be applied. Inconsistency: A judgment made on the basis of the variability of results across the included studies. Publication bias: A judgment made on the basis of the question whether all the research evidence has been taken into account. In the case of observational studies per GRADE, the quality of evidence starts off lower and may be upgraded in three domains in addition to being subject to downgrading. Large effect: Methodologically strong studies show that the observed effect is so large that the probability of it changing completely is low. 
Plausible confounding would change the effect: Despite the presence of a possible confounding factor that is expected to reduce the observed effect, the effect estimate still shows a significant effect. Dose–response gradient: The intervention used becomes more effective with increasing dose. This suggests that a further increase will likely bring about a greater effect. Meaning of the levels of quality of evidence as per GRADE: High Quality Evidence: The authors are very confident that the presented estimate lies very close to the true value. In other words, the probability is very low that further research will completely change the presented conclusions. Moderate Quality Evidence: The authors are confident that the presented estimate lies close to the true value, but it is also possible that it may be substantially different. In other words, further research may completely change the conclusions. Low Quality Evidence: The authors are not confident in the effect estimate, and the true value may be substantially different. In other words, further research is likely to change the presented conclusions completely. Very Low Quality Evidence: The authors do not have any confidence in the estimate and it is likely that the true value is substantially different from it. In other words, new research will probably change the presented conclusions completely. Categories of recommendations In guidelines and other publications, a recommendation for a clinical service is classified by the balance of risk versus benefit and the level of evidence on which this information is based. The U.S. Preventive Services Task Force uses the following system: Level A: Good scientific evidence suggests that the benefits of the clinical service substantially outweigh the potential risks. Clinicians should discuss the service with eligible patients. Level B: At least fair scientific evidence suggests that the benefits of the clinical service outweigh the potential risks. Clinicians should discuss the service with eligible patients. Level C: At least fair scientific evidence suggests that the clinical service provides benefits, but the balance between benefits and risks is too close for general recommendations. Clinicians need not offer it unless individual considerations apply. Level D: At least fair scientific evidence suggests that the risks of the clinical service outweigh potential benefits. Clinicians should not routinely offer the service to asymptomatic patients. Level I: Scientific evidence is lacking, of poor quality, or conflicting, such that the risk versus benefit balance cannot be assessed. Clinicians should help patients understand the uncertainty surrounding the clinical service. GRADE guideline panelists may make strong or weak recommendations on the basis of further criteria. Some of the important criteria are the balance between desirable and undesirable effects (not considering cost), the quality of the evidence, values and preferences, and costs (resource utilization). Despite the differences between systems, the purposes are the same: to guide users of clinical research information on which studies are likely to be most valid. However, the individual studies still require careful critical appraisal. Statistical measures Evidence-based medicine attempts to express clinical benefits of tests and treatments using mathematical methods. 
Tools used by practitioners of evidence-based medicine include: Likelihood ratio The pre-test odds of a particular diagnosis, multiplied by the likelihood ratio, determine the post-test odds. (Odds can be calculated from, and then converted to, the [more familiar] probability.) This reflects Bayes' theorem. The differences in likelihood ratio between clinical tests can be used to prioritize clinical tests according to their usefulness in a given clinical situation. AUC-ROC The area under the receiver operating characteristic curve (AUC-ROC) reflects the relationship between sensitivity and specificity for a given test. High-quality tests will have an AUC-ROC approaching 1, and high-quality publications about clinical tests will provide information about the AUC-ROC. Cutoff values for positive and negative tests can influence specificity and sensitivity, but they do not affect AUC-ROC. Number needed to treat (NNT)/Number needed to harm (NNH). NNT and NNH are ways of expressing the effectiveness and safety, respectively, of interventions in a way that is clinically meaningful. NNT is the number of people who need to be treated in order to achieve the desired outcome (e.g. survival from cancer) in one patient. For example, if a treatment increases the chance of survival by 5%, then 20 people need to be treated in order for 1 additional patient to survive because of the treatment. The concept can also be applied to diagnostic tests. For example, if 1,339 women age 50–59 need to be invited for breast cancer screening over a ten-year period in order to prevent one woman from dying of breast cancer, then the NNT for being invited to breast cancer screening is 1,339. Quality of clinical trials Evidence-based medicine attempts to objectively evaluate the quality of clinical research by critically assessing techniques reported by researchers in their publications. Trial design considerations: High-quality studies have clearly defined eligibility criteria and have minimal missing data. Generalizability considerations: Studies may only be applicable to narrowly defined patient populations and may not be generalizable to other clinical contexts. Follow-up: Sufficient time for defined outcomes to occur can influence the prospective study outcomes and the statistical power of a study to detect differences between a treatment and control arm. Power: A mathematical calculation can determine whether the number of patients is sufficient to detect a difference between treatment arms. A negative study may reflect a lack of benefit, or simply a lack of sufficient numbers of patients to detect a difference. Limitations and criticism There are a number of limitations and criticisms of evidence-based medicine. Two widely cited categorization schemes for the various published critiques of EBM include the three-fold division of Straus and McAlister ("limitations universal to the practice of medicine, limitations unique to evidence-based medicine and misperceptions of evidence-based medicine") and the five-point categorization of Cohen, Stavri and Hersh (EBM is a poor philosophic basis for medicine, defines evidence too narrowly, is not evidence-based, is limited in usefulness when applied to individual patients, or reduces the autonomy of the doctor/patient relationship). In no particular order, some published objections include: Research produced by EBM, such as from randomized controlled trials (RCTs), may not be relevant for all treatment situations. 
Research tends to focus on specific populations, but individual persons can vary substantially from population norms. Because certain population segments have been historically under-researched (due to reasons such as race, gender, age, and co-morbid diseases), evidence from RCTs may not be generalizable to those populations. Thus, EBM applies to groups of people, but this should not preclude clinicians from using their personal experience in deciding how to treat each patient. One author advises that "the knowledge gained from clinical research does not directly answer the primary clinical question of what is best for the patient at hand" and suggests that evidence-based medicine should not discount the value of clinical experience. Another author stated that "the practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research." Use of evidence-based guidelines often fits complex, multimorbid patients poorly. This is because the guidelines are usually based on clinical studies focused on single diseases. In reality, the recommended treatments in such circumstances may interact unfavorably with each other and often lead to polypharmacy. The theoretical ideal of EBM (that every narrow clinical question, of which hundreds of thousands can exist, would be answered by meta-analysis and systematic reviews of multiple RCTs) faces the limitation that research (especially the RCTs themselves) is expensive; thus, in reality, for the foreseeable future, the demand for EBM will always be much higher than the supply, and the best humanity can do is to triage the application of scarce resources. Research can be influenced by biases such as political or belief bias, publication bias and conflict of interest in academic publishing. For example, studies with conflicts due to industry funding are more likely to favor their product. It has been argued that contemporary evidence-based medicine is an illusion, since evidence-based medicine has been corrupted by corporate interests, failed regulation, and commercialisation of academia. Systematic review methodologies are susceptible to bias and abuse with respect to (i) the choice of inclusion criteria, (ii) the choice of outcome measures, comparisons and analyses, and (iii) the subjectivity inevitable in risk-of-bias assessments, even when codified procedures and criteria are observed. An example of all these problems can be seen in a Cochrane Review, as analyzed by Edmund J. Fordham et al. in their relevant review. A lag exists between when the RCT is conducted and when its results are published. A lag exists between when results are published and when they are properly applied. Hypocognition (the absence of a simple, consolidated mental framework into which new information can be placed) can hinder the application of EBM. Values: while patient values are considered in the original definition of EBM, the importance of values is not commonly emphasized in EBM training, a potential problem under current study. A 2018 study, "Why all randomised controlled trials produce biased results", assessed the 10 most cited RCTs and argued that trials face a wide range of biases and constraints, from trials only being able to study a small set of questions amenable to randomisation and generally only being able to assess the average treatment effect of a sample, to limitations in extrapolating results to another context, among many others outlined in the study. 
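The likelihood ratio and number needed to treat described in the Statistical measures section above lend themselves to a short worked example. The following is a minimal sketch, not part of the original article; the pre-test probability and likelihood ratio are hypothetical illustrative values, and only the 5% absolute survival improvement mirrors the article's own NNT = 20 example.

```python
import math

# Two of the statistical measures described above: likelihood-ratio updating of
# pre-test odds (Bayes' theorem in odds form) and the number needed to treat (NNT).

def probability_to_odds(p):
    return p / (1.0 - p)

def odds_to_probability(odds):
    return odds / (1.0 + odds)

def post_test_probability(pre_test_probability, likelihood_ratio):
    """Post-test probability via: post-test odds = pre-test odds x likelihood ratio."""
    return odds_to_probability(probability_to_odds(pre_test_probability) * likelihood_ratio)

def number_needed_to_treat(absolute_benefit):
    """NNT = 1 / absolute improvement in outcome, rounded up to a whole patient."""
    return math.ceil(1.0 / absolute_benefit)

# Hypothetical diagnostic test: 20% pre-test probability, positive likelihood ratio of 8.
print(post_test_probability(0.20, 8.0))   # ~0.67, i.e. about a 67% post-test probability

# The article's example: a 5% absolute increase in survival gives an NNT of 20.
print(number_needed_to_treat(0.05))       # 20
```

The same framing covers the screening example quoted above: inviting 1,339 women to prevent one breast cancer death corresponds to an absolute benefit of 1/1,339 per woman invited, so the NNT for the invitation is 1,339.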
Application of evidence in clinical settings Despite the emphasis on evidence-based medicine, unsafe or ineffective medical practices continue to be applied, because of patient demand for tests or treatments, because of failure to access information about the evidence, or because of the rapid pace of change in the scientific evidence. For example, between 2003 and 2017, the evidence shifted on hundreds of medical practices, including whether hormone replacement therapy was safe, whether babies should be given certain vitamins, and whether antidepressant drugs are effective in people with Alzheimer's disease. Even when the evidence unequivocally shows that a treatment is either not safe or not effective, it may take many years for other treatments to be adopted. There are many factors that contribute to lack of uptake or implementation of evidence-based recommendations. These include lack of awareness at the individual clinician or patient (micro) level, lack of institutional support at the organisation (meso) level, or lack of support higher up at the policy (macro) level. In other cases, significant change can require a generation of physicians to retire or die and be replaced by physicians who were trained with more recent evidence. Physicians may also reject evidence because it conflicts with their anecdotal experience or because of cognitive biases – for example, a vivid memory of a rare but shocking outcome (the availability heuristic), such as a patient dying after refusing treatment. They may overtreat to "do something" or to address a patient's emotional needs. They may worry about malpractice charges based on a discrepancy between what the patient expects and what the evidence recommends. They may also overtreat or provide ineffective treatments because the treatment feels biologically plausible. It is the responsibility of those developing clinical guidelines to include an implementation plan to facilitate uptake. The implementation process will include an implementation plan, analysis of the context, identification of barriers and facilitators, and design of strategies to address them. Education Training in evidence-based medicine is offered across the continuum of medical education. Educational competencies have been created for the education of health care professionals. The Berlin questionnaire and the Fresno Test are validated instruments for assessing the effectiveness of education in evidence-based medicine. These questionnaires have been used in diverse settings. A Campbell systematic review that included 24 trials examined the effectiveness of e-learning in improving evidence-based health care knowledge and practice. It was found that e-learning, compared to no learning, improves evidence-based health care knowledge and skills but not attitudes and behaviour. No difference in outcomes is present when comparing e-learning with face-to-face learning. Combining e-learning and face-to-face learning (blended learning) has a positive impact on evidence-based knowledge, skills, attitude and behavior. As a form of e-learning, some medical school students engage in editing Wikipedia to increase their EBM skills, and some students construct EBM materials to develop their skills in communicating medical knowledge. See also References Bibliography External links Evidence-Based Medicine – An Oral History, JAMA and the BMJ, 2014. Centre for Evidence-based Medicine at the University of Oxford. Evidence Health informatics Health care quality Clinical research
Evidence-based medicine
[ "Biology" ]
6,312
[ "Health informatics", "Medical technology" ]
10,029
https://en.wikipedia.org/wiki/Timeline%20of%20the%20evolutionary%20history%20of%20life
The timeline of the evolutionary history of life represents the current scientific theory outlining the major events during the development of life on planet Earth. Dates in this article are consensus estimates based on scientific evidence, mainly fossils. In biology, evolution is any change across successive generations in the heritable characteristics of biological populations. Evolutionary processes give rise to diversity at every level of biological organization, from kingdoms to species, and individual organisms and molecules, such as DNA and proteins. The similarities between all present day organisms imply a common ancestor from which all known species, living and extinct, have diverged. More than 99 percent of all species that ever lived (over five billion) are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, with about 1.2 million or 14% documented, the rest not yet described. However, a 2016 report estimates an additional 1 trillion microbial species, with only 0.001% described. There has been controversy between more traditional views of steadily increasing biodiversity, and a newer view of cycles of annihilation and diversification, so that certain past times, such as the Cambrian explosion, experienced maximums of diversity followed by sharp winnowing. Extinction Species go extinct constantly as environments change, as organisms compete for environmental niches, and as genetic mutation leads to the rise of new species from older ones. At long irregular intervals, Earth's biosphere suffers a catastrophic die-off, a mass extinction, often comprising an accumulation of smaller extinction events over a relatively brief period. The first known mass extinction was the Great Oxidation Event 2.4 billion years ago, which killed most of the planet's obligate anaerobes. Researchers have identified five other major extinction events in Earth's history, with estimated losses below: End Ordovician: 440 million years ago, 86% of all species lost, including graptolites Late Devonian: 375 million years ago, 75% of species lost, including most trilobites End Permian, The Great Dying: 251 million years ago, 96% of species lost, including tabulate corals, and most trees and synapsids End Triassic: 200 million years ago, 80% of species lost, including all conodonts End Cretaceous: 66 million years ago, 76% of species lost, including all ammonites, mosasaurs, plesiosaurs, pterosaurs, and nonavian dinosaurs Smaller extinction events have occurred in the periods between, with some dividing geologic time periods and epochs. The Holocene extinction event is currently under way. Factors in mass extinctions include continental drift, changes in atmospheric and marine chemistry, volcanism and other aspects of mountain formation, changes in glaciation, changes in sea level, and impact events. Detailed timeline In this timeline, Ma (for megaannum) means "million years ago," ka (for kiloannum) means "thousand years ago," and ya means "years ago." Hadean Eon 4540 Ma – 4031 Ma Archean Eon 4031 Ma – 2500 Ma Proterozoic Eon 2500 Ma – 539 Ma. Contains the Palaeoproterozoic, Mesoproterozoic and Neoproterozoic eras. Phanerozoic Eon 539 Ma – present The Phanerozoic Eon (Greek: period of well-displayed life) marks the appearance in the fossil record of abundant, shell-forming and/or trace-making organisms. It is subdivided into three eras, the Paleozoic, Mesozoic and Cenozoic, with major mass extinctions at division points. 
Palaeozoic Era 538.8 Ma – 251.9 Ma and contains the Cambrian, Ordovician, Silurian, Devonian, Carboniferous and Permian periods. Mesozoic Era From 251.9 Ma to 66 Ma and containing the Triassic, Jurassic and Cretaceous periods. Cenozoic Era 66 Ma – present See also Evolutionary history of plants (timeline) Geologic time scale History of Earth Sociocultural evolution Timeline of human evolution References Bibliography Further reading External links Explore complete phylogenetic tree interactively Interactive timeline from Big Bang to present Sequence of Plant Evolution Sequence of Animal Evolution Art of the Nature Timelines on Wikipedia Evolution-related timelines Origin of life evolution
Timeline of the evolutionary history of life
[ "Biology" ]
895
[ "Biological hypotheses", "Origin of life" ]
10,046
https://en.wikipedia.org/wiki/Erie%20Canal
The Erie Canal is a historic canal in upstate New York that runs east–west between the Hudson River and Lake Erie. Completed in 1825, the canal was the first navigable waterway connecting the Atlantic Ocean to the Great Lakes, vastly reducing the costs of transporting people and goods across the Appalachians. The Erie Canal accelerated the settlement of the Great Lakes region, the westward expansion of the United States, and the economic ascendancy of New York state. It has been called "The Nation's First Superhighway". A canal from the Hudson River to the Great Lakes was first proposed in the 1780s, but a formal survey was not conducted until 1808. The New York State Legislature authorized construction in 1817. Political opponents of the canal (referencing its lead supporter New York Governor DeWitt Clinton) denigrated the project as "Clinton's Folly" and "Clinton's Big Ditch". Nonetheless, the canal saw quick success upon opening on October 26, 1825, with toll revenue covering the state's construction debt within the first year of operation. The westward connection gave New York City a strong advantage over all other U.S. ports and brought major growth to canal cities such as Albany, Utica, Syracuse, Rochester, and Buffalo. The construction of the Erie Canal was a landmark civil engineering achievement in the early history of the United States. When built, the canal was the second-longest in the world after the Grand Canal in China. Initially wide and deep, the canal was expanded several times, most notably from 1905 to 1918 when the "Barge Canal" was built and over half the original route was abandoned. The modern Barge Canal measures long, wide, and deep. It has 34 locks, including the Waterford Flight, the steepest locks in the United States. When leaving the canal, boats must also traverse the Black Rock Lock to reach Lake Erie or the Troy Federal Lock to reach the tidal Hudson. The overall elevation difference is about . The Erie's peak year was 1855, when 33,000 commercial shipments took place. It continued to be competitive with railroads until about 1902, when tolls were abolished. Commercial traffic declined heavily in the latter half of the 20th century due to competition from trucking and the 1959 opening of the larger St. Lawrence Seaway. The canal's last regularly scheduled hauler, the Day Peckinpaugh, ended service in 1994. Today, the Erie Canal is mainly used by recreational watercraft. It connects the three other canals in the New York State Canal System: the Champlain, Oswego, and Cayuga–Seneca. Some long-distance boaters take the Erie as part of the Great Loop. The canal has also become a tourist attraction in its own right—several parks and museums are dedicated to its history. The New York State Canalway Trail is a popular cycling path that follows the canal across the state. In 2000, Congress designated the Erie Canalway National Heritage Corridor to protect and promote the system. Ambiguity in name The waterway today referred to as the Erie Canal is quite different from the nineteenth-century Erie Canal. More than half of the original Erie Canal was destroyed or abandoned during construction of the New York State Barge Canal in the early 20th century. The sections of the original route remaining in use were widened significantly, mostly west of Syracuse, with bridges rebuilt and locks replaced. 
It was called the Barge Canal at the time, but that name fell into disuse with the disappearance of commercial traffic and the increase of recreational travel in the later 20th century. History Background Before railroads, water transport was the most cost-effective way to ship bulk goods. A mule can only carry about but can draw a barge weighing as much as along a towpath. In total, a canal could cut transport costs by about 95 percent. In the early years of the United States, transportation of goods between the coastal ports and the interior was slow and difficult. Close to the seacoast, rivers provided easy inland transport up to the fall line, since floating vessels encounter much less friction than land vehicles. However, the Appalachian Mountains were a great obstacle to further transportation or settlement, stretching from Maine to Alabama, with just five places where mule trains or wagon roads could be routed. Passengers and freight bound for the western parts of the country had to travel overland, a journey made more difficult by the rough condition of the roads. In 1800, it typically took 2½ weeks to travel overland from New York to Cleveland, Ohio, and 4 weeks to Detroit. The principal exportable product of the Ohio Valley was grain, which was a high-volume, low-priced commodity, bolstered by supplies from the coast. Frequently it was not worth the cost of transporting it to far-away population centers. This was a factor leading to farmers in the west turning their grains into whiskey for easier transport and higher sales, and later the Whiskey Rebellion. In the 18th and early 19th centuries, it became clear to coastal residents that the city or state that succeeded in developing a cheap, reliable route to the West would enjoy economic success, and the port at the seaward end of such a route would see business increase greatly. In time, projects were devised in Virginia, Maryland, Pennsylvania, and relatively deep into the coastal states. Topography The Mohawk River (a tributary of the Hudson River) rises near Lake Ontario and runs in a glacial meltwater channel just north of the Catskill range of the Appalachian Mountains, separating them from the geologically distinct Adirondacks to the north. The Mohawk and Hudson valleys form the only cut across the Appalachians north of Alabama. A navigable canal through the Mohawk Valley would allow an almost complete water route from New York City in the south to Lake Ontario and Lake Erie in the west. Via the canal and these lakes, other Great Lakes, and to a lesser degree, related rivers, a large part of the continent's interior (and many settlements) would be made well connected to the Eastern seaboard. Conception Among the first attempts made by European colonists to improve upon the future state's navigable waterways was the construction in 1702 of the Wood Creek Carry, or Oneida Carry, a short portage road connecting Wood Creek to the Mohawk River near modern-day Rome, New York. However, the first documented discussion of the idea of a canal to tie the East Coast to the new western settlements via New York's waterways came as early as 1724: New York provincial official Cadwallader Colden made a passing reference (in a report on fur trading) to improving the natural waterways of western New York. Colden and subsequent figures in the history of the Erie Canal and its development would draw inspiration from other great works of the so-called "canal age," including France's Canal du Midi and the Bridgewater Canal in England. 
The attempt in the 1780s by George Washington to build a canal from the tidewaters of the Potomac into the fledgling nation's interior was also well known to the planners of the Erie Canal. Gouverneur Morris and Elkanah Watson were early proponents of a canal along the Mohawk River. Their efforts led to the creation of the "Western and Northern Inland Lock Navigation Companies" in 1792, which took the first steps to improve navigation on the Mohawk and construct a canal between the Mohawk and Lake Ontario, but it was soon discovered that private financing was insufficient. Christopher Colles, who was familiar with the Bridgewater Canal, surveyed the Mohawk Valley, and made a presentation to the New York state legislature in 1784, proposing a shorter canal from Lake Ontario. The proposal drew attention and some action but was never implemented. Jesse Hawley had envisioned encouraging the growing of large quantities of grain on the western New York plains (then largely unsettled) for sale on the Eastern seaboard. However, he went bankrupt trying to ship grain to the coast. While in Canandaigua debtors' prison, Hawley began pressing for the construction of a canal along the Mohawk River valley with support from Joseph Ellicott (agent for the Holland Land Company in Batavia). Ellicott realized that a canal would add value to the land he was selling in the western part of the state. He later became the first canal commissioner. New York legislators became interested in the possibility of building a canal across New York in the first decade of the 19th century. Shipping goods west from Albany was a costly and tedious affair; there was no railroad yet, and to cover the distance from Buffalo to New York City by stagecoach took two weeks. The problem was that the land rises about from the Hudson to Lake Erie. Locks at the time could handle up to of lift, so even with the heftiest cuttings and viaducts, fifty locks would be required along the canal. Such a canal would be expensive to build even with modern technology; in 1800, the expense was barely imaginable. President Thomas Jefferson called it "little short of madness" and rejected it. Eventually, Hawley interested New York Governor DeWitt Clinton in the project. There was much opposition, and the project was ridiculed as "Clinton's folly" and "Clinton's ditch". In 1817, though, Clinton received approval from the legislature for $7 million for construction. Construction The original canal was long, from Albany on the Hudson to Buffalo on Lake Erie. The channel was cut wide and deep, with removed soil piled on the downhill side to form a walkway known as a towpath. Its construction, through limestone and mountains, proved a daunting task. To move earth, animals pulled a "slip scraper" (similar to a bulldozer). The sides of the canal were lined with stone set in clay, and the bottom was also lined with clay. The Canal was built by Irish laborers and German stonemasons. All labor on the canal depended upon human and animal power or the force of water. Engineering techniques developed during its construction included the building of aqueducts to redirect water; one aqueduct was long to span of river. As the canal progressed, the crews and engineers working on the project developed expertise and became a skilled labor force. The men who planned and oversaw construction were novices as surveyors and as engineers. There were no civil engineers in the United States. 
James Geddes and Benjamin Wright, who laid out the route, were judges whose experience in surveying was in settling boundary disputes. Geddes had only used a surveying instrument for a few hours before his work on the Canal. Canvass White was a 27-year-old amateur engineer who persuaded Clinton to let him go to Britain at his own expense to study the canal system there. Nathan Roberts was a mathematics teacher and land speculator. Yet these men "carried the Erie Canal up the Niagara escarpment at Lockport, maneuvered it onto a towering embankment to cross over Irondequoit Creek, spanned the Genesee River on an awesome aqueduct, and carved a route for it out of the solid rock between Little Falls and Schenectady—and all of those venturesome designs worked precisely as planned". Construction began on July 4, 1817, at Rome, New York. The first , from Rome to Utica, opened in 1819. At that rate, the canal would not be finished for 30 years. The main delays were caused by felling trees to clear a path through virgin forest and moving excavated soil, which took longer than expected, but the builders devised ways to solve these problems. To fell a tree, they threw rope over the top branches and winched it down. They pulled out the stumps with an innovative stump puller. Two huge wheels were mounted loose on the ends of an axle. A third wheel, slightly smaller than the others, was fixed to the center of the axle. A chain was wrapped around the axle and hooked to the stump. A rope was wrapped around the center wheel and hooked to a team of oxen. The mechanical advantage (torque) obtained ripped the stumps out of the soil. Soil to be moved was shoveled into large wheelbarrows that were dumped into mule-pulled carts. Using a scraper and a plow, a three-man team with oxen, horses and mules could build a mile in a year. The remaining problem was finding labor; increased immigration helped fill the need. Many of the laborers working on the canal were Irish, who had recently come to the United States as a group of about 5,000. Most of them were Roman Catholic, a religion that raised much suspicion in early America because of its hierarchic structure, and many laborers on the canal suffered violent assault as the result of misjudgment and xenophobia. Construction continued at an increased rate as new workers arrived. When the canal reached Montezuma Marsh (at the outlet of Cayuga Lake west of Syracuse), it was rumored that over 1,000 workers died of "swamp fever" (malaria), and construction was temporarily stopped. However, recent research has revealed that the death toll was likely much lower, as no contemporary reports mention significant worker mortality, and mass graves from the period have never been found in the area. Work continued on the downhill side towards the Hudson, and the crews worked on the section across the swampland when it froze in winter. The middle section from Utica to Salina (Syracuse) was completed in 1820, and traffic on that section started up immediately. Expansion to the east and west proceeded simultaneously, and the whole eastern section, from Brockport to Albany, opened on September 10, 1823, to great fanfare. The Champlain Canal, a separate but connected north–south route from Watervliet on the Hudson to Lake Champlain, opened on the same date. After Montezuma Marsh, the next difficulties were crossing Irondequoit Creek and the Genesee River near Rochester. 
The former ultimately required building the long "Great Embankment" to carry the canal at a height of above the level of the creek, which ran through a culvert underneath. The canal crossed the river on a stone aqueduct, long and wide, supported by 11 arches. In 1823 construction reached the Niagara Escarpment, an -high wall of hard dolomitic limestone. The route followed the channel of a creek that had cut a ravine steeply down the escarpment. The construction and operation of two sets of five locks along a corridor soon gave rise to the community of Lockport. The lift-locks had a total lift of , exiting into a deeply cut channel. The final leg had to be cut deep through another limestone mass, the Onondaga ridge. Much of that section was blasted with black powder, and the inexperience of the crews often led to accidents, and sometimes to rocks falling on nearby homes. Two villages competed to be the terminus: Black Rock, on the Niagara River, and Buffalo, at the eastern tip of Lake Erie. Buffalo expended great energy to widen and deepen Buffalo Creek to make it navigable and to create a harbor at its mouth. Buffalo won over Black Rock, and grew into a large city, eventually encompassing its former rival. Completion In 1824, before the canal was completed, a detailed Pocket Guide for the Tourist and Traveler, Along the Line of the Canals, and the Interior Commerce of the State of New York, was published for the benefit of travelers and land speculators. The entire canal was officially completed on October 26, 1825. The event was marked by a statewide "Grand Celebration", culminating in a series of cannon shots along the length of the canal and the Hudson, a 90-minute cannonade from Buffalo to New York City. A flotilla of boats, led by Governor DeWitt Clinton aboard Seneca Chief, sailed from Buffalo to New York City over ten days. Clinton then ceremonially poured Lake Erie water into New York Harbor to mark the "Wedding of the Waters". On its return trip, Seneca Chief brought back a keg of Atlantic Ocean water, which was poured into Lake Erie by Buffalo's Judge Samuel Wilkeson, who would later become mayor. The Erie Canal was thus completed in eight years at a total length of and cost $7.143 million. It was acclaimed as an engineering marvel that united the country and helped New York City develop as an international trade center. Problems developed but were quickly solved. Leaks developed along the entire length of the canal, but these were sealed using cement that hardened underwater (hydraulic cement). Erosion on the clay bottom proved to be a problem and the speed was limited to . Branch canals Additional feeder canals soon extended the Erie Canal into a system. These included the Cayuga-Seneca Canal south to the Finger Lakes, the Oswego Canal from Three Rivers north to Lake Ontario at Oswego, and the Champlain Canal from Troy north to Lake Champlain. From 1833 to 1877, the short Crooked Lake Canal connected Keuka Lake and Seneca Lake. The Chemung Canal connected the south end of Seneca Lake to Elmira in 1833, and was an important route for Pennsylvania coal and timber into the canal system. The Chenango Canal in 1836 connected the Erie Canal at Utica to Binghamton and caused a business boom in the Chenango River valley. The Chenango and Chemung canals linked the Erie with the Susquehanna River system. The Black River Canal connected the Black River to the Erie Canal at Rome and remained in operation until the 1920s. 
The Genesee Valley Canal was run along the Genesee River to connect with the Allegheny River at Olean, but the Allegheny section, which would have connected to the Ohio and Mississippi rivers, was never built. The Genesee Valley Canal was later abandoned and became the route of the Genesee Valley Canal Railroad. First enlargement The original design planned for an annual tonnage of 1.5 million tons (1.36 million metric tons), but this was exceeded immediately. An ambitious program to improve the canal began in 1834. During this massive series of construction projects, known as the First Enlargement, the canal was widened from and deepened from . Locks were widened and/or rebuilt in new locations, and many new navigable aqueducts were constructed. The canal was straightened and slightly re-routed in some stretches, resulting in the abandonment of short segments of the original 1825 canal. The First Enlargement was completed in 1862, with further minor enlargements in later decades. Railroad competition The Mohawk and Hudson Railroad opened in 1837, providing a bypass to the slowest part of the canal between Albany and Schenectady. Other railroads were soon chartered and built to continue the line west to Buffalo, and in 1842 a continuous line (which later became the New York Central Railroad and its Auburn Road in 1853) was open the whole way to Buffalo. As the railroad served the same general route as the canal, but provided for faster travel, passengers soon switched to it. However, as late as 1852, the canal carried thirteen times more freight tonnage than all the railroads in New York State combined. The New York, West Shore and Buffalo Railway was completed in 1884, as a route running closely parallel to both the canal and the New York Central Railroad. However, it went bankrupt and was acquired the next year by the New York Central. The canal continued to compete well with the railroads through 1902, when tolls were abolished. Barge Canal In a November 3, 1903 referendum, a majority of New Yorkers authorized an expansion of the canal at a cost of $101,000,000. In 1905, construction of the New York State Barge Canal began, which was completed in 1918, at a cost of $96.7 million. This new canal replaced much of the original route, leaving many abandoned sections (most notably between Syracuse and Rome). New digging and flood control technologies allowed engineers to canalize rivers that the original canal had sought to avoid, such as the Mohawk, Seneca, and Clyde rivers, and Oneida Lake. In sections that did not consist of canalized rivers (particularly between Rochester and Buffalo), the original Erie Canal channel was enlarged to wide and deep. The expansion allowed barges up to to use the Canal. This expensive project was politically unpopular in parts of the state not served by the canal, and failed to save it from becoming obsolete for commercial shipping. Commercial decline Freight traffic reached a total of 5.2 million short tons (4.7 million metric tons) by 1951. The growth of railroads and highways across the state, and the opening of the St. Lawrence Seaway in 1959, caused commercial traffic on the canal to decline dramatically during the second half of the 20th century. Since the 1990s, the canal system has been used primarily by recreational traffic. 
New York State Canal System In 1992, the New York State Barge Canal was renamed the New York State Canal System (including the Erie, Cayuga-Seneca, Oswego, and Champlain canals) and placed under the newly created New York State Canal Corporation, a subsidiary of the New York State Thruway Authority. While part of the Thruway, the canal system was operated using money generated by Thruway tolls. In 2017, the New York State Canal Corporation was transferred from the New York State Thruway to the New York Power Authority. In 2000, Congress designated the Erie Canalway National Heritage Corridor, covering of navigable water from Lake Champlain to the Capital Region and west to Buffalo. The area has a population of 2.7 million; about 75% of Central and Western New York's population lives within of the Erie Canal. There were some 42 commercial shipments on the canal in 2008, compared to 15 such shipments in 2007 and more than 33,000 shipments in 1855, the canal's peak year. The new growth in commercial traffic is due to the rising cost of diesel fuel. Canal barges can carry a short ton of cargo on one gallon of diesel fuel, while a gallon allows a train to haul the same amount of cargo and a truck . Canal barges can carry loads up to , and are used to transport objects that would be too large for road or rail shipment. In 2012, the New York State Canal System as a whole was used to ship 42,000 tons of cargo. Travel on the canal's middle section (particularly in the Mohawk Valley) was severely hampered by flooding in late June and early July 2006. Flood damage to the canal and its facilities was estimated as at least $15 million. Route Original Canal The Erie made use of the favorable conditions of New York's unique topography, which provided that area with the only break in the Appalachians south of the St. Lawrence River. The Hudson is tidal to Troy, and Albany is west of the Appalachians. It allowed for east–west navigation from the coast to the Great Lakes within US territory. The canal began on the west side of the Hudson River at Albany, and ran north to Watervliet, where the Champlain Canal branched off. At Cohoes, it climbed the escarpment on the west side of the Hudson River—16 locks rising —and then turned west along the south shore of the Mohawk River, crossing to the north side at Crescent and again to the south at Rexford. The canal continued west near the south shore of the Mohawk River all the way to Rome, where the Mohawk turns north. At Rome, the canal continued west parallel to Wood Creek, which flows westward into Oneida Lake, and turned southwest and west cross-country to avoid the lake. From Canastota west, it ran roughly along the north (lower) edge of the Onondaga Escarpment, passing through Syracuse, Onondaga Lake, and Rochester. Before reaching Rochester, the canal uses a series of natural ridges to cross the deep valley of Irondequoit Creek. At Lockport the canal turned southwest to rise to the top of the Niagara Escarpment, using the ravine of Eighteen Mile Creek. The canal continued south-southwest to Pendleton, where it turned west and southwest, mainly using the channel of Tonawanda Creek. From the Tonawanda south toward Buffalo, it ran just east of the Niagara River, where it reached its "Western Terminus" at Little Buffalo Creek (later it became the Commercial Slip), which discharged into the Buffalo River just above its confluence with Lake Erie. 
With Buffalo's re-excavation of the Commercial Slip, completed in 2008, the Canal's original terminus is now re-watered and again accessible by boats. With several miles of the Canal inland of this location still lying under 20th-century fill and urban construction, the effective western navigable terminus of the Erie Canal is found at Tonawanda. Barge Canal The new alignment began on the Hudson River at the border between Cohoes and Waterford, where it ran northwest with five locks (the so-called "Waterford Flight"), running into the Mohawk River east of Crescent. The Waterford Flight is claimed to be one of the steepest series of locks in the world. While the old Canal ran next to the Mohawk all the way to Rome, the new canal ran through the river, which was straightened or widened where necessary. At Ilion, the new canal left the river for good, but continued to run on a new alignment parallel to both the river and the old canal to Rome. From Rome, the new route continued almost due west, merging with Fish Creek just east of its entry into Oneida Lake. From Oneida Lake, the new canal ran west along the Oneida River, with cutoffs to shorten the route. At Three Rivers, the Oneida River turns northwest, and was deepened for the Oswego Canal to Lake Ontario. The new Erie Canal turned south there along the Seneca River, which turns west near Syracuse and continues west to a point in the Montezuma Marsh. There the Cayuga and Seneca Canal continued south with the Seneca River, and the new Erie Canal again ran parallel to the old canal along the bottom of the Niagara Escarpment, in some places running along the Clyde River, and in some places replacing the old canal. At Pittsford, southeast of Rochester, the canal turned west to run around the south side of Rochester, rather than through downtown. The canal crosses the Genesee River at the Genesee Valley Park, then rejoins the old path near North Gates. From there it was again roughly an upgrade to the original canal, running west to Lockport. This reach of from Henrietta to Lockport is called "the 60‑mile level" since there are no locks and the water level rises only over the entire segment. Diversions from and to adjacent natural streams along the way are used to maintain the canal's level. It runs southwest to Tonawanda, where the new alignment discharges into the Niagara River, which is navigable upstream to the New York Barge Canal's Black Rock Lock and thence to the Canal's original "Western Terminus" at Buffalo's Inner Harbor. Operations Freight boats Canal boats up to in draft were pulled by horses and mules walking on the towpath. The canal had one towpath, generally on the north side. When canal boats met, the boat with the right of way remained on the towpath side of the canal. The other boat steered toward the berm (or heelpath) side of the canal. The driver (or "hoggee", pronounced HO-gee) of the privileged boat kept his towpath team by the canalside edge of the towpath, while the hoggee of the other boat moved to the outside of the towpath and stopped his team. His towline would be unhitched from the horses, go slack, fall into the water and sink to the bottom, while his boat coasted with its remaining momentum. The privileged boat's team would step over the other boat's towline, with its horses pulling the boat over the sunken towline without stopping. Once clear, the other boat's team would continue on its way. Pulled by teams of horses, canal boats moved slowly, but methodically, shrinking time and distance. 
Efficiently, the smooth, nonstop method of transportation cut the travel time between Albany and Buffalo nearly in half, moving by day and by night. Migrants took passage on freight boats, camping on deck or on top of crates. Passenger boats Packet boats, serving passengers exclusively, reached speeds of up to and ran at much more frequent intervals than the cramped, bumpy stagecoach wagons. These boats, measuring up to long and wide, made ingenious use of space, accommodating up to 40 passengers at night and up to three times as many in the daytime. The best examples, furnished with carpeted floors, stuffed chairs, and mahogany tables stocked with books and current newspapers, served as sitting rooms during the days. At mealtimes, crews transformed the cabin into a dining room. Drawing a curtain across the width of the room divided the cabin into ladies' and gentlemen's sleeping quarters at night. Pull-down tiered beds folded from the walls, and additional cots could be hung from hooks in the ceiling. Some captains hired musicians and held dances. Sunday closing debate In 1858, the New York State Legislature debated closing the locks of the Erie Canal on Sundays. However, George Jeremiah and Dwight Bacheller, two of the bill's opponents, argued that the state had no right to stop canal traffic on the grounds that the Erie Canal and its tributaries had ceased to be wards of the state. The canal at its inception had been imagined as an extension of nature, an artificial river where there had been none. The canal succeeded by sharing more in common with lakes and seas than it had with public roads. Jeremiah and Bacheller argued, successfully, that just as it was unthinkable to halt oceangoing navigation on Sunday, so it was with the canal. Impact Economic impact The Erie Canal greatly lowered the cost of shipping between the Midwest and the Northeast, bringing much lower food costs to Eastern cities and allowing the East to ship machinery and manufactured goods to the Midwest more economically. To give an example, the cost to transport a barrel of flour from Rochester to Albany dropped from $3 (before the canal) to 75¢ on the canal. The canal also made an immense contribution to the wealth and importance of New York City, Buffalo and New York State. Its impact went much further, increasing trade throughout the nation by opening eastern and overseas markets to Midwestern farm products and by enabling migration to the West. The port of New York became essentially the Atlantic home port for all of the Midwest. Because of this vital connection and others to follow, such as the railroads, New York would become known as the "Empire State" or "the great Empire State". The Erie Canal was an immediate success. Tolls collected on freight had already exceeded the state's construction debt in its first year of official operation. By 1828, import duties collected at the New York Customs House supported federal government operations and provided funds for all the expenses in Washington except the interest on the national debt. Additionally, New York State's initial loan for the original canal had been paid by 1837. Although it had been envisioned as primarily a commercial channel for freight boats, passengers also traveled on the canal's packet boats. In 1825 more than 40,000 passengers took advantage of the convenience and beauty of canal travel. The canal's steady flow of tourists, businessmen and settlers lent it to uses never imagined by its initial sponsors. 
Evangelical preachers made their circuits of the upstate region, and the canal served as the last leg of the Underground Railroad ferrying freedom seekers to Buffalo near the Canada–US border. Aspiring merchants found that tourists were reliable customers. Vendors moved from boat to boat peddling items such as books, watches and fruit, while less scrupulous "confidence men" sold remedies for foot corns or passed off counterfeit bills. Tourists were carried along the "northern tour," which ultimately led to the popular honeymoon destination Niagara Falls, just north of Buffalo. As the canal brought travelers to New York City, it took business away from other ports such as Philadelphia and Baltimore. Those cities and their states started projects to compete with the Erie Canal. In Pennsylvania, the Main Line of Public Works was a combined canal and railroad running west from Philadelphia to Pittsburgh on the Ohio River, opened in 1834. In Maryland, the Baltimore and Ohio Railroad ran west to Wheeling, West Virginia, then a part of Virginia, also on the Ohio River, and was completed in 1853. The canal played a major role in the growth of Standard Oil, as founder John D. Rockefeller used the canal as a cheaper form of transportation – in the summer months when it was not frozen – to get his refined oil from Cleveland to New York City. In the winter months his only options were the three trunk lines: the Erie Railroad, the New York Central Railroad, or the Pennsylvania Railroad. Migratory impact New ethnic Irish communities formed in some towns along its route after completion, as Irish immigrants were a large portion of the construction labor force. A plaque honoring the canal's construction is located in Battery Park in southern Manhattan. Because so many immigrants traveled on the canal, many genealogists have sought copies of canal passenger lists. Apart from the years 1827–1829, canal boat operators were not required to record passenger names or report them to the New York government. Some passenger lists survive today in the New York State Archives, and other sources of traveler information are sometimes available. The canal allowed Buffalo to grow from just 200 settlers in 1820 to more than 18,000 people by 1840. Cultural impact The Canal also helped bind the still-new nation closer to Britain and Europe. Repeal of Britain's Corn Law resulted in a huge increase in exports of Midwestern wheat to Britain. Trade between the United States and Canada also increased as a result of the repeal and a reciprocity (free-trade) agreement signed in 1854. Much of this trade flowed along the Erie. Its success also prompted imitation: a rash of canal-building followed. Also, the many technical hurdles that had to be overcome made heroes of those whose innovations made the canal possible. This led to an increased public esteem for practical education. Chicago, among other Great Lakes cities, recognized the importance of the canal to its economy, and two West Loop streets are named "Canal" and "Clinton" (for canal proponent DeWitt Clinton). Concern that erosion caused by logging in the Adirondacks could silt up the canal contributed to the creation in 1885 of another New York National Historic Landmark, the Adirondack Park. Many notable authors wrote about the canal, including Herman Melville, Frances Trollope, Nathaniel Hawthorne, Harriet Beecher Stowe, Mark Twain, Samuel Hopkins Adams and the Marquis de Lafayette, and many tales and songs were written about life on the canal. 
The popular song "Low Bridge, Everybody Down" by Thomas S. Allen was written in 1905 to memorialize the canal's early heyday, when barges were pulled by mules rather than engines. With a massive stone aqueduct that carried boats over incredible cascades, Little Falls was one of the most popular stops for American and foreign tourists. This is shown in Scene 4 of William Dunlap's play A Trip to Niagara, where he depicts the general preference of tourists to travel by canal so that they could experience a combination of artificial and natural sights. Canal travel was, for many, an opportunity to take in the sublime and commune with nature. The play also reflects the less enthusiastic view of some who saw movement on the canal as tedious. The Erie Canal changed property law in New York. Most importantly, it expanded the government's right to take private property. Cases surrounding the newly built Erie Canal expanded condemnation theory to permit canal builders to appropriate private land and broadened the meaning of "public use" in the 5th Amendment to the U.S. Constitution. The canal also had an impact on water access jurisprudence as well as nuisance law. The canal today Today, the Erie Canal is used primarily by recreational vessels, though it remains served by several commercial barge-towing companies. The canal is open to small craft and some larger vessels from May through November each year. During winter, water is drained from parts of the canal for maintenance. The Champlain Canal, Lake Champlain, the Chambly Canal, and the Richelieu River in Canada form the Lakes to Locks Passage, making a tourist attraction of the former waterway linking eastern Canada to the Erie Canal. In 2006 recreational boating fees were suspended to attract more visitors. The Erie Canal is a destination for tourists from all over the world, and has inspired guidebooks dedicated to exploration of the waterway. An Erie Canal Cruise company, based in Herkimer, operates from mid-May until mid-October with daily cruises. The cruise goes through the history of the canal and also takes passengers through Lock 18. Aside from transportation, numerous businesses, farms, factories and communities alongside its banks still utilize the canal's waters for other purposes such as irrigation for farmland, hydroelectricity, research, industry, and even drinking. Use of the canal system has an estimated total economic impact of $6.2 billion annually. Old Erie Canal Today, the reconfiguration of the canal created during the First Enlargement is commonly referred to as the "Improved Erie Canal" or the "Old Erie Canal", to distinguish it from the canal's modern-day course. Existing remains of the 1825 canal abandoned during the Enlargement are officially referred to today as "Clinton's Ditch" (which was also the popular nickname for the entire Erie Canal project during its original 1817–1825 construction). Sections of the Old Erie Canal not used after 1918 are owned by New York State, or have been ceded to or purchased by counties or municipalities. Many stretches of the old canal have been filled in to create roads such as Erie Boulevard in Syracuse and Schenectady, and Broad Street and the Rochester Subway in Rochester. A 36‑mile (58 km) stretch of the old canal from the town of DeWitt, New York, east of Syracuse, to just outside Rome, New York, is preserved as the Old Erie Canal State Historic Park.
In 1960 the Schoharie Crossing State Historic Site, a section of the canal in Montgomery County, was one of the first sites recognized as a National Historic Landmark. Some municipalities have preserved sections as town or county canal parks, or have plans to do so. Camillus Erie Canal Park preserves a stretch and has restored Nine Mile Creek Aqueduct, built in 1841 as part of the First Enlargement of the canal. In some communities, the old canal has refilled with overgrowth and debris. Proposals have been made to rehydrate the old canal through downtown Rochester or Syracuse as a tourist attraction. In Syracuse, the location of the old canal is represented by a reflecting pool in downtown's Clinton Square and the downtown hosts a canal barge and weigh lock structure, now dry. Buffalo's Commercial Slip is the restored and re-watered segment of the canal which formed its "Western Terminus". In 2004, the administration of New York Governor George Pataki was criticized when officials of New York State Canal Corporation attempted to sell private development rights to large stretches of the Old Erie Canal to a single developer for $30,000, far less than the land was worth on the open market. After an investigation by the Syracuse Post-Standard newspaper, the Pataki administration nullified the deal. Parks and museums Parks and museums related to the Old Erie Canal include (listed from east to west): Day Peckinpaugh ship; restoration and conversion to a floating museum was planned for completion in 2012 by the New York State Museum Watervliet Side Cut Locks, located at Watervliet and listed on the National Register of Historic Places in 1971 Enlarged Erie Canal Historic District (Discontiguous), a national historic district located at Cohoes, New York listed on the National Register of Historic Places in 2004 Cohoes Falls Park, 231 N. Mohawk St., Cohoes, New York, offers, looking away from the river, a dramatic view of abandoned and dry Erie Canal lock 18, high above. Enlarged Double Lock No. 23, Old Erie Canal, Rotterdam Schoharie Crossing State Historic Site at Fort Hunter Old Erie Canal State Historic Park, 36-mile linear park from Rome to DeWitt Erie Canal Village, near Rome Canastota Canal Town Museum, Canastota Chittenango Landing Canal Boat Museum, near Chittenango Erie Canal Museum in downtown Syracuse Camillus Erie Canal Park in Camillus Jordan Canal Park in Jordan, town of Elbridge Enlarged Double Lock No. 33 Old Erie Canal, St. 
Johnsville Erie Canal Lock 52 Complex, a national historic district located within the Old Erie Canal Heritage Park at Port Byron and Mentz in Cayuga County; listed on the National Register of Historic Places in 1998 Seneca River Crossing Canals Historic District, a national historic district located at Montezuma and Tyre in Cayuga County; listed on the National Register of Historic Places in 2005 Centerport Aqueduct Park near Weedsport; listed on the National Register of Historic Places in 2000 Lock Berlin Park near Clyde Macedon Aqueduct Park near Palmyra Old Erie Canal Lock 60 Park in Macedon Perinton Park in Perinton near Fairport Genesee Valley Park in the city of Rochester Spencerport Depot & Canal Museum, Spencerport Niagara Escarpment "Flight of Five" locks at Lockport Erie Canal Discovery Center, 24 Church Street, Lockport (Locks 34 and 35) Canalside Buffalo at the Canal's "Western Terminus" Erie Canalway Trail Records and research Records of the planning, funding, design, construction, and administration of the Erie Canal are vast and can be found in the New York State Archives. Except for two years (1827–1829), the State of New York did not require canal boat operators to maintain or submit passenger lists. Locks The following list of locks is provided for the current canal, from east to west. There are a total of 36 (35 numbered) locks on the Erie Canal. All locks on the New York State Canal System are single-chamber; the dimensions are long and wide with a minimum depth of water over the miter sills at the upstream gates upon lift. They can accommodate a vessel up to long and wide. Overall sidewall height will vary by lock, ranging between depending on the lift and navigable stages. Lock E17 at Little Falls has the tallest sidewall height at . Distance is based on position markers from an interactive canal map provided online by the New York State Canal Corporation and may not exactly match specifications on signs posted along the canal. Mean surface elevations are comprised from a combination of older canal profiles and history books as well as specifications on signs posted along the canal. The margin of error should normally be within . The Waterford Flight series of locks (comprising Locks E2 through E6) is one of the steepest in the world, lifting boats in less than . All surface elevations are approximate. Denotes federally managed locks. There is a natural rise between locks E33 and E34 as well as a natural rise between Lock E35 and the Niagara River. There is no Lock E1 or Lock E31 on the Erie Canal. The place of "Lock E1" on the passage from the lower Hudson River to Lake Erie is taken by the Troy Federal Lock, located just north of Troy, New York, and is not part of the Erie Canal System proper. It is operated by the United States Army Corps of Engineers. The Erie Canal officially begins at the confluence of the Hudson and Mohawk rivers at Waterford, New York. Although the original alignment of the Erie Canal through Buffalo has been filled in, travel by water is still possible from Buffalo via the Black Rock Lock in the Niagara River to the canal's modern western terminus in Tonawanda, and eastward to Albany. The Black Rock Lock is operated by the United States Army Corps of Engineers. Oneida Lake lies between locks E22 and E23, and has a mean surface elevation of . Lake Erie has a mean surface elevation of . See also Robert C. Dorn List of canals in New York List of canals in the United States "Low Bridge", a song written by Thomas S. 
Allen, also known as "The Erie Canal Song" John C. Mather (New York politician) Ohio and Erie Canal, connecting Lake Erie with the Ohio River Welland Canal, opened in 1829, bypasses the Niagara River between Lake Erie and Lake Ontario References Further reading Online review. External links Erie & Barge Canal: A bibliography by the Buffalo History Museum. Listing and index of maps, plans, profiles, pictures, and photographs of canals of New York State in annual reports of State Engineer and Surveyor through 1905 Erie Canal case study in Transition Times. Archived at Ghostarchive. Information and Boater's Guide to the Erie Canal Canalway Trail Information Historical information (with photos) of the Erie Canal Video showing the operations of Lock 22E in 2016. New York State Canal Corporation Site The Opening of the Erie Canal – An Online Exhibition by CUNY New York Heritage online exhibit - Two Hundred Years on the Erie Canal The Canal Society of New York State Digging Clinton's Ditch: The Impact of the Erie Canal on America 1807–1860 Multimedia A Glimpse at Clinton's Ditch, 1819–1820 by Richard F. Palmer Guide to Canal Records in the New York State Archives The Erie Canal Mapping Project New York Heritage – Working on the Erie Canal Photographs of the Erie Canal Relating to Fort Hunter, N.Y. Ca. 1910 (finding aid) at the New York State Library, accessed May 18, 2016. William Jaeger's photography of the Canal remains. Archived at the Wayback Machine American Society of Civil Engineers site- The Erie Canal was the world's longest canal and one of America's great engineering feats. Newspaper articles and clippings about the Building of the Erie Canal at Newspapers.com 1821 establishments in New York (state) Canals in New York (state) Geography of Buffalo, New York Historic American Buildings Survey in New York (state) Historic American Engineering Record in New York (state) Historic Civil Engineering Landmarks Canals opened in 1825
Erie Canal
[ "Engineering" ]
9,401
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
10,048
https://en.wikipedia.org/wiki/Ethanol
Ethanol (also called ethyl alcohol, grain alcohol, drinking alcohol, or simply alcohol) is an organic compound with the chemical formula . It is an alcohol, with its formula also written as , or EtOH, where Et stands for ethyl. Ethanol is a volatile, flammable, colorless liquid with a characteristic wine-like odor and pungent taste. As a psychoactive depressant, it is the active ingredient in alcoholic beverages, and the second most consumed drug globally behind caffeine. Ethanol is naturally produced by the fermentation process of sugars by yeasts or via petrochemical processes such as ethylene hydration. Historically it was used as a general anesthetic, and has modern medical applications as an antiseptic, disinfectant, solvent for some medications, and antidote for methanol poisoning and ethylene glycol poisoning. It is used as a chemical solvent and in the synthesis of organic compounds, and as a fuel source for lamps, stoves, and internal combustion engines. Ethanol also can be dehydrated to make ethylene, an important chemical feedstock. As of 2023, world production of ethanol fuel was , coming mostly from the U.S. (51%) and Brazil (26%). Name Ethanol is the systematic name defined by the International Union of Pure and Applied Chemistry for a compound consisting of an alkyl group with two carbon atoms (prefix "eth-"), having a single bond between them (infix "-an-") and an attached −OH functional group (suffix "-ol"). The "eth-" prefix and the qualifier "ethyl" in "ethyl alcohol" originally came from the name "ethyl" assigned in 1834 to the group − by Justus Liebig. He coined the word from the German name Aether of the compound −O− (commonly called "ether" in English, more specifically called "diethyl ether"). According to the Oxford English Dictionary, Ethyl is a contraction of the Ancient Greek αἰθήρ (, "upper air") and the Greek word ὕλη (, "wood, raw material", hence "matter, substance"). Ethanol was coined as a result of a resolution on naming alcohols and phenols that was adopted at the International Conference on Chemical Nomenclature that was held in April 1892 in Geneva, Switzerland. The term alcohol now refers to a wider class of substances in chemistry nomenclature, but in common parlance it remains the name of ethanol. It is a medieval loan from Arabic , a powdered ore of antimony used since antiquity as a cosmetic, and retained that meaning in Middle Latin. The use of 'alcohol' for ethanol (in full, "alcohol of wine") was first recorded in 1753. Before the late 18th century the term alcohol generally referred to any sublimated substance. Uses Recreational drug As a central nervous system depressant, ethanol is one of the most commonly consumed psychoactive drugs. Despite alcohol's psychoactive, addictive, and carcinogenic properties, it is readily available and legal for sale in many countries. There are laws regulating the sale, exportation/importation, taxation, manufacturing, consumption, and possession of alcoholic beverages. The most common regulation is prohibition for minors. In mammals, ethanol is primarily metabolized in the liver and stomach by ADH enzymes. These enzymes catalyze the oxidation of ethanol into acetaldehyde (ethanal): CH3CH2OH + NAD+ → CH3CHO + NADH + H+ When present in significant concentrations, this metabolism of ethanol is additionally aided by the cytochrome P450 enzyme CYP2E1 in humans, while trace amounts are also metabolized by catalase. 
The resulting intermediate, acetaldehyde, is a known carcinogen and is significantly more toxic in humans than ethanol itself. Many of the symptoms typically associated with alcohol intoxication—as well as many of the health hazards typically associated with the long-term consumption of ethanol—can be attributed to acetaldehyde toxicity in humans. The subsequent oxidation of acetaldehyde into acetate is performed by aldehyde dehydrogenase (ALDH) enzymes. A mutation in the ALDH2 gene that encodes for an inactive or dysfunctional form of this enzyme affects roughly 50% of East Asian populations, contributing to the characteristic alcohol flush reaction that can cause temporary reddening of the skin as well as a number of related, and often unpleasant, symptoms of acetaldehyde toxicity. This mutation is typically accompanied by another mutation in the ADH enzyme ADH1B in roughly 80% of East Asians, which improves the catalytic efficiency of converting ethanol into acetaldehyde. Medical Ethanol is the oldest known sedative, used as an oral general anesthetic during surgery in ancient Mesopotamia and in medieval times. Mild intoxication starts at a blood alcohol concentration of 0.03–0.05%, and anesthetic coma is induced at about 0.4%. This use carries the high risk of deadly alcohol intoxication, pulmonary aspiration and vomiting, which led to use of alternatives in antiquity, such as opium and cannabis, and later diethyl ether, starting in the 1840s. Ethanol is used as an antiseptic in medical wipes and hand sanitizer gels for its bactericidal and anti-fungal effects. Ethanol kills microorganisms by dissolving their membrane lipid bilayer and denaturing their proteins, and is effective against most bacteria, fungi and viruses. It is ineffective against bacterial spores, which can be treated with hydrogen peroxide. A solution of 70% ethanol is more effective than pure ethanol because ethanol relies on water molecules for optimal antimicrobial activity. Absolute ethanol may inactivate microbes without destroying them because the alcohol is unable to fully permeate the microbe's membrane. Ethanol can also be used as a disinfectant and antiseptic by inducing cell dehydration through disruption of the osmotic balance across the cell membrane, causing water to leave the cell, leading to cell death. Ethanol may be administered as an antidote to ethylene glycol poisoning and methanol poisoning. It does so by acting as a competitive inhibitor against methanol and ethylene glycol for alcohol dehydrogenase (ADH). Though it has more side effects, ethanol is less expensive and more readily available than fomepizole in this role. Ethanol is used to dissolve many water-insoluble medications and related compounds. Liquid preparations of pain medications, cough and cold medicines, and mouth washes, for example, may contain up to 25% ethanol and may need to be avoided in individuals with adverse reactions to ethanol such as alcohol-induced respiratory reactions. Ethanol is present mainly as an antimicrobial preservative in over 700 liquid preparations of medicine including acetaminophen, iron supplements, ranitidine, furosemide, mannitol, phenobarbital, trimethoprim/sulfamethoxazole and over-the-counter cough medicine. Some medicinal solutions of ethanol are also known as tinctures. Energy source The largest single use of ethanol is as an engine fuel and fuel additive.
Brazil in particular relies heavily upon the use of ethanol as an engine fuel, due in part to its role as one of the world's leading producers of ethanol. Gasoline sold in Brazil contains at least 25% anhydrous ethanol. Hydrous ethanol (about 95% ethanol and 5% water) can be used as fuel in more than 90% of new gasoline-fueled cars sold in the country. The US and many other countries primarily use E10 (10% ethanol, sometimes known as gasohol) and E85 (85% ethanol) ethanol/gasoline mixtures. Over time, it is believed that a material portion of the ≈ per year market for gasoline will begin to be replaced with fuel ethanol. Australian law limits the use of pure ethanol from sugarcane waste to 10 % in automobiles. Older cars (and vintage cars designed to use a slower burning fuel) should have the engine valves upgraded or replaced. According to an industry advocacy group, ethanol as a fuel reduces harmful tailpipe emissions of carbon monoxide, particulate matter, oxides of nitrogen, and other ozone-forming pollutants. Argonne National Laboratory analyzed greenhouse gas emissions of many different engine and fuel combinations, and found that biodiesel/petrodiesel blend (B20) showed a reduction of 8%, conventional E85 ethanol blend a reduction of 17% and cellulosic ethanol 64%, compared with pure gasoline. Ethanol has a much greater research octane number (RON) than gasoline, meaning it is less prone to pre-ignition, allowing for better ignition advance which means more torque, and efficiency in addition to the lower carbon emissions. Ethanol combustion in an internal combustion engine yields many of the products of incomplete combustion produced by gasoline and significantly larger amounts of formaldehyde and related species such as acetaldehyde. This leads to a significantly larger photochemical reactivity and more ground level ozone. This data has been assembled into The Clean Fuels Report comparison of fuel emissions and show that ethanol exhaust generates 2.14 times as much ozone as gasoline exhaust. When this is added into the custom Localized Pollution Index of The Clean Fuels Report, the local pollution of ethanol (pollution that contributes to smog) is rated 1.7, where gasoline is 1.0 and higher numbers signify greater pollution. The California Air Resources Board formalized this issue in 2008 by recognizing control standards for formaldehydes as an emissions control group, much like the conventional NOx and reactive organic gases (ROGs). More than 20% of Brazilian cars are able to use 100% ethanol as fuel, which includes ethanol-only engines and flex-fuel engines. Flex-fuel engines in Brazil are able to work with all ethanol, all gasoline or any mixture of both. In the United States, flex-fuel vehicles can run on 0% to 85% ethanol (15% gasoline) since higher ethanol blends are not yet allowed or efficient. Brazil supports this fleet of ethanol-burning automobiles with large national infrastructure that produces ethanol from domestically grown sugarcane. Ethanol's high miscibility with water makes it unsuitable for shipping through modern pipelines like liquid hydrocarbons. Mechanics have seen increased cases of damage to small engines (in particular, the carburetor) and attribute the damage to the increased water retention by ethanol in fuel. Ethanol was commonly used as fuel in early bipropellant rocket (liquid-propelled) vehicles, in conjunction with an oxidizer such as liquid oxygen. 
The German A-4 ballistic rocket of World War II (better known by its propaganda name ), which is credited as having begun the space age, used ethanol as the main constituent of . Under such nomenclature, the ethanol was mixed with 25% water to reduce the combustion chamber temperature. The design team helped develop U.S. rockets following World War II, including the ethanol-fueled Redstone rocket, which launched the first U.S. astronaut on suborbital spaceflight. Alcohols fell into general disuse as more energy-dense rocket fuels were developed, although ethanol was used in recent experimental lightweight rocket-powered racing aircraft. Commercial fuel cells operate on reformed natural gas, hydrogen or methanol. Ethanol is an attractive alternative due to its wide availability, low cost, high purity and low toxicity. There is a wide range of fuel cell concepts that have entered trials including direct-ethanol fuel cells, auto-thermal reforming systems and thermally integrated systems. The majority of work is being conducted at a research level although there are a number of organizations at the beginning of the commercialization of ethanol fuel cells. Ethanol fireplaces can be used for home heating or for decoration. Ethanol can also be used as stove fuel for cooking. Other uses Ethanol is an important industrial ingredient. It has widespread use as a precursor for other organic compounds such as ethyl halides, ethyl esters, diethyl ether, acetic acid, and ethyl amines. It is considered a universal solvent, as its molecular structure allows for the dissolving of both polar, hydrophilic and nonpolar, hydrophobic compounds. As ethanol also has a low boiling point, it is easy to remove from a solution that has been used to dissolve other compounds, making it a popular extracting agent for botanical oils. Cannabis oil extraction methods often use ethanol as an extraction solvent, and also as a post-processing solvent to remove oils, waxes, and chlorophyll from solution in a process known as winterization. Ethanol is found in paints, tinctures, markers, and personal care products such as mouthwashes, perfumes and deodorants. Polysaccharides precipitate from aqueous solution in the presence of alcohol, and ethanol precipitation is used for this reason in the purification of DNA and RNA. Because of its low freezing point of and low toxicity, ethanol is sometimes used in laboratories (with dry ice or other coolants) as a cooling bath to keep vessels at temperatures below the freezing point of water. For the same reason, it is also used as the active fluid in alcohol thermometers. Chemistry Ethanol is a 2-carbon alcohol. Its molecular formula is CH3CH2OH. The structure of the molecule of ethanol is (an ethyl group linked to a hydroxyl group), which indicates that the carbon of a methyl group (CH3−) is attached to the carbon of a methylene group (−CH2–), which is attached to the oxygen of a hydroxyl group (−OH). It is a constitutional isomer of dimethyl ether. Ethanol is sometimes abbreviated as EtOH, using the common organic chemistry notation of representing the ethyl group (C2H5−) with Et. Physical properties Ethanol is a volatile, colorless liquid that has a slight odor. It burns with a smokeless blue flame that is not always visible in normal light. The physical properties of ethanol stem primarily from the presence of its hydroxyl group and the shortness of its carbon chain. 
Ethanol's hydroxyl group is able to participate in hydrogen bonding, rendering it more viscous and less volatile than less polar organic compounds of similar molecular weight, such as propane. Ethanol's adiabatic flame temperature for combustion in air is 2082 °C or 3779 °F. Ethanol is slightly more refractive than water, having a refractive index of 1.36242 (at λ=589.3 nm and ). The triple point for ethanol is . Solvent properties Ethanol is a versatile solvent, miscible with water and with many organic solvents, including acetic acid, acetone, benzene, carbon tetrachloride, chloroform, diethyl ether, ethylene glycol, glycerol, nitromethane, pyridine, and toluene. Its main use as a solvent is in making tincture of iodine, cough syrups, etc. It is also miscible with light aliphatic hydrocarbons, such as pentane and hexane, and with aliphatic chlorides such as trichloroethane and tetrachloroethylene. Ethanol's miscibility with water contrasts with the immiscibility of longer-chain alcohols (five or more carbon atoms), whose water miscibility decreases sharply as the number of carbons increases. The miscibility of ethanol with alkanes is limited to alkanes up to undecane: mixtures with dodecane and higher alkanes show a miscibility gap below a certain temperature (about 13 °C for dodecane). The miscibility gap tends to get wider with higher alkanes, and the temperature for complete miscibility increases. Ethanol-water mixtures have less volume than the sum of their individual components at the given fractions. Mixing equal volumes of ethanol and water results in only 1.92 volumes of mixture. Mixing ethanol and water is exothermic, with up to 777 J/mol being released at 298 K. Hydrogen bonding causes pure ethanol to be hygroscopic to the extent that it readily absorbs water from the air. The polar nature of the hydroxyl group causes ethanol to dissolve many ionic compounds, notably sodium and potassium hydroxides, magnesium chloride, calcium chloride, ammonium chloride, ammonium bromide, and sodium bromide. Sodium and potassium chlorides are slightly soluble in ethanol. Because the ethanol molecule also has a nonpolar end, it will also dissolve nonpolar substances, including most essential oils and numerous flavoring, coloring, and medicinal agents. The addition of even a few percent of ethanol to water sharply reduces the surface tension of water. This property partially explains the "tears of wine" phenomenon. When wine is swirled in a glass, ethanol evaporates quickly from the thin film of wine on the wall of the glass. As the wine's ethanol content decreases, its surface tension increases and the thin film "beads up" and runs down the glass in channels rather than as a smooth sheet. Azeotrope with water At atmospheric pressure, mixtures of ethanol and water form an azeotrope at about 89.4 mol% ethanol (95.6% ethanol by mass, 97% alcohol by volume), with a boiling point of 351.3 K (78.1 °C). At lower pressure, the composition of the ethanol-water azeotrope shifts to more ethanol-rich mixtures. The minimum-pressure azeotrope has an ethanol fraction of 100% and a boiling point of 306 K (33 °C), corresponding to a pressure of roughly 70 torr (9.333 kPa). Below this pressure, there is no azeotrope, and it is possible to distill absolute ethanol from an ethanol-water mixture. Flammability An ethanol–water solution will catch fire if heated above a temperature called its flash point and an ignition source is then applied to it. 
For 20% alcohol by mass (about 25% by volume), this will occur at about . The flash point of pure ethanol is , but may be influenced very slightly by atmospheric composition such as pressure and humidity. Ethanol mixtures can ignite below average room temperature. Ethanol is considered a flammable liquid (Class 3 Hazardous Material) in concentrations above 2.35% by mass (3.0% by volume; 6 proof). Dishes using burning alcohol for culinary effects are called flambé. Natural occurrence Ethanol is a byproduct of the metabolic process of yeast. As such, ethanol will be present in any yeast habitat. Ethanol can commonly be found in overripe fruit. Ethanol produced by symbiotic yeast can be found in bertam palm blossoms. Although some animal species, such as the pentailed treeshrew, exhibit ethanol-seeking behaviors, most show no interest or avoidance of food sources containing ethanol. Ethanol is also produced during the germination of many plants as a result of natural anaerobiosis. Ethanol has been detected in outer space, forming an icy coating around dust grains in interstellar clouds. Minute quantity amounts (average 196 ppb) of endogenous ethanol and acetaldehyde were found in the exhaled breath of healthy volunteers. Auto-brewery syndrome, also known as gut fermentation syndrome, is a rare medical condition in which intoxicating quantities of ethanol are produced through endogenous fermentation within the digestive system. Production Ethanol is produced both as a petrochemical, through the hydration of ethylene and, via biological processes, by fermenting sugars with yeast. Which process is more economical depends on prevailing prices of petroleum and grain feed stocks. Sources World production of ethanol in 2006 was , with 69% of the world supply coming from Brazil and the U.S. Brazilian ethanol is produced from sugarcane, which has relatively high yields (830% more fuel than the fossil fuels used to produce it) compared to some other energy crops. Sugarcane not only has a greater concentration of sucrose than corn (by about 30%), but is also much easier to extract. The bagasse generated by the process is not discarded, but burned by power plants to produce electricity. Bagasse burning accounts for around 9% of the electricity produced in Brazil. In the 1970s most industrial ethanol in the U.S. was made as a petrochemical, but in the 1980s the U.S. introduced subsidies for corn-based ethanol. According to the Renewable Fuels Association, as of 30 October 2007, 131 grain ethanol bio-refineries in the U.S. have the capacity to produce of ethanol per year. An additional 72 construction projects underway (in the U.S.) can add of new capacity in the next 18 months. In India ethanol is made from sugarcane. Sweet sorghum is another potential source of ethanol, and is suitable for growing in dryland conditions. The International Crops Research Institute for the Semi-Arid Tropics is investigating the possibility of growing sorghum as a source of fuel, food, and animal feed in arid parts of Asia and Africa. Sweet sorghum has one-third the water requirement of sugarcane over the same time period. It also requires about 22% less water than corn. The world's first sweet sorghum ethanol distillery began commercial production in 2007 in Andhra Pradesh, India. Ethanol has been produced in the laboratory by converting carbon dioxide via biological and electrochemical reactions. Hydration Ethanol can be produced from petrochemical feed stocks, primarily by the acid-catalyzed hydration of ethylene. 
It is often referred to as synthetic ethanol. The catalyst is most commonly phosphoric acid, adsorbed onto a porous support such as silica gel or diatomaceous earth. This catalyst was first used for large-scale ethanol production by the Shell Oil Company in 1947. The reaction is carried out in the presence of high pressure steam at where a 5:3 ethylene to steam ratio is maintained. This process was used on an industrial scale by Union Carbide Corporation and others. It is no longer practiced in the US as fermentation ethanol produced from corn is more economical. In an older process, first practiced on the industrial scale in 1930 by Union Carbide but now almost entirely obsolete, ethylene was hydrated indirectly by reacting it with concentrated sulfuric acid to produce ethyl sulfate, which was hydrolyzed to yield ethanol and regenerate the sulfuric acid: C2H4 + H2SO4 → C2H5OSO3H, followed by C2H5OSO3H + H2O → C2H5OH + H2SO4. Fermentation Ethanol in alcoholic beverages and fuel is produced by fermentation. Certain species of yeast (e.g., Saccharomyces cerevisiae) metabolize sugars (such as glucose and sucrose), producing ethanol and carbon dioxide. The following chemical equation summarizes the conversion of glucose: C6H12O6 → 2 C2H5OH + 2 CO2. Fermentation is the process of culturing yeast under favorable thermal conditions to produce alcohol. This process is carried out at around . Toxicity of ethanol to yeast limits the ethanol concentration obtainable by brewing; higher concentrations, therefore, are obtained by fortification or distillation. The most ethanol-tolerant yeast strains can survive up to approximately 18% ethanol by volume. To produce ethanol from starchy materials such as cereals, the starch must first be converted into sugars. In brewing beer, this has traditionally been accomplished by allowing the grain to germinate, or malt, which produces the enzyme amylase. When the malted grain is mashed, the amylase converts the remaining starches into sugars. Sugars for ethanol fermentation can be obtained from cellulose. Deployment of this technology could turn a number of cellulose-containing agricultural by-products, such as corncobs, straw, and sawdust, into renewable energy resources. Other agricultural residues such as sugarcane bagasse and energy crops such as switchgrass may also be fermentable sugar sources. Testing Breweries and biofuel plants employ two methods for measuring ethanol concentration. Infrared ethanol sensors measure the vibrational frequency of dissolved ethanol using the C−H band at 2900 cm−1. This method uses a relatively inexpensive solid-state sensor that compares the C−H band with a reference band to calculate the ethanol content. The calculation makes use of the Beer–Lambert law. Alternatively, by measuring the density of the starting material and the density of the product, using a hydrometer, the change in specific gravity during fermentation indicates the alcohol content. This inexpensive and indirect method has a long history in the beer brewing industry. Purification Ethylene hydration or brewing produces an ethanol–water mixture. For most industrial and fuel uses, the ethanol must be purified. Fractional distillation at atmospheric pressure can concentrate ethanol to 95.6% by weight (89.5 mole%). This mixture is an azeotrope with a boiling point of , and cannot be further purified by distillation. Addition of an entraining agent, such as benzene, cyclohexane, or heptane, allows a new ternary azeotrope comprising the ethanol, water, and the entraining agent to be formed. This lower-boiling ternary azeotrope is removed preferentially, leading to water-free ethanol.
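The stoichiometry behind the fermentation and distillation figures above lends itself to a quick numerical check. The following Python sketch is illustrative only and not part of the source text; the molar masses are standard values and the function names are invented for this example.

# Quick check of two figures quoted above:
#  - fermentation of glucose: C6H12O6 -> 2 C2H5OH + 2 CO2
#  - the 95.6% (by weight) ethanol-water distillation limit, quoted as 89.5 mole%
M_GLUCOSE = 180.16  # g/mol
M_ETHANOL = 46.07   # g/mol
M_WATER = 18.02     # g/mol

def theoretical_ethanol_yield(glucose_g):
    """Mass of ethanol (g) from complete fermentation of a given mass of glucose (g)."""
    return glucose_g / M_GLUCOSE * 2 * M_ETHANOL

def ethanol_mole_fraction(mass_fraction):
    """Mole fraction of ethanol in an ethanol-water mixture with the given ethanol mass fraction."""
    n_ethanol = mass_fraction / M_ETHANOL
    n_water = (1 - mass_fraction) / M_WATER
    return n_ethanol / (n_ethanol + n_water)

print(round(theoretical_ethanol_yield(1000), 1))  # about 511 g of ethanol per kg of glucose
print(round(ethanol_mole_fraction(0.956), 3))     # about 0.895, matching the quoted 89.5 mole%

The roughly 51% mass yield also shows why nearly half of the fermented sugar leaves the process as carbon dioxide rather than ethanol.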
Apart from distillation, ethanol may be dried by addition of a desiccant, such as molecular sieves, cellulose, or cornmeal. The desiccants can be dried and reused. Molecular sieves can be used to selectively absorb the water from the 95.6% ethanol solution. Molecular sieves of pore-size 3 Ångstrom, a type of zeolite, effectively sequester water molecules while excluding ethanol molecules. Heating the wet sieves drives out the water, allowing regeneration of their desiccant capability. Membranes can also be used to separate ethanol and water. Membrane-based separations are not subject to the limitations of the water-ethanol azeotrope because the separations are not based on vapor-liquid equilibria. Membranes are often used in the so-called hybrid membrane distillation process. This process uses a pre-concentration distillation column as the first separating step. The further separation is then accomplished with a membrane operated either in vapor permeation or pervaporation mode. Vapor permeation uses a vapor membrane feed and pervaporation uses a liquid membrane feed. A variety of other techniques have been discussed, including the following: Salting using potassium carbonate to exploit its insolubility will cause a phase separation with ethanol and water. This offers a very small potassium carbonate impurity to the alcohol that can be removed by distillation. This method is very useful in purification of ethanol by distillation, as ethanol forms an azeotrope with water. Direct electrochemical reduction of carbon dioxide to ethanol under ambient conditions using copper nanoparticles on a carbon nanospike film as the catalyst; Extraction of ethanol from grain mash by supercritical carbon dioxide; Pervaporation; Fractional freezing is also used to concentrate fermented alcoholic solutions, such as traditionally made Applejack (beverage); Pressure swing adsorption. Grades of ethanol Pure ethanol and alcoholic beverages are heavily taxed as psychoactive drugs, but ethanol has many uses that do not involve its consumption. To relieve the tax burden on these uses, most jurisdictions waive the tax when an agent has been added to the ethanol to render it unfit to drink. These include bittering agents such as denatonium benzoate and toxins such as methanol, naphtha, and pyridine. Products of this kind are called denatured alcohol. Absolute or anhydrous alcohol refers to ethanol with a low water content. There are various grades with maximum water contents ranging from 1% to a few parts per million (ppm). If azeotropic distillation is used to remove water, it will contain trace amounts of the material separation agent (e.g. benzene). Absolute alcohol is not intended for human consumption. Absolute ethanol is used as a solvent for laboratory and industrial applications, where water will react with other chemicals, and as fuel alcohol. Spectroscopic ethanol is an absolute ethanol with a low absorbance in ultraviolet and visible light, fit for use as a solvent in ultraviolet-visible spectroscopy. Pure ethanol is classed as 200 proof in the US, equivalent to 175 degrees proof in the UK system. Rectified spirit, an azeotropic composition of 96% ethanol containing 4% water, is used instead of anhydrous ethanol for various purposes. Spirits of wine are about 94% ethanol (188 proof). The impurities are different from those in 95% (190 proof) laboratory ethanol. 
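As a rough cross-check of the proof figures above: US proof is defined as twice the percentage of alcohol by volume, and the older UK degrees-proof scale works out to roughly 1.75 times the ABV (so 100% ABV corresponds to about 175 degrees proof, as stated above). The short sketch below is illustrative only and not part of the source text.

# Convert alcohol-by-volume percentages to US proof and approximate UK degrees proof.
def us_proof(abv_percent):
    return 2.0 * abv_percent    # exact by definition of US proof

def uk_proof(abv_percent):
    return 1.75 * abv_percent   # approximation: 100% ABV is about 175 degrees proof

for abv in (94, 95, 96, 100):
    print(f"{abv}% ABV -> {us_proof(abv):.0f} US proof, {uk_proof(abv):.0f} UK proof (approx.)")
# 94% -> 188 US proof (spirits of wine); 100% -> 200 US proof, about 175 UK proof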
Reactions Ethanol is classified as a primary alcohol, meaning that the carbon that its hydroxyl group attaches to has at least two hydrogen atoms attached to it as well. Many ethanol reactions occur at its hydroxyl group. Ester formation In the presence of acid catalysts, ethanol reacts with carboxylic acids to produce ethyl esters and water: RCOOH + HOCH2CH3 → RCOOCH2CH3 + H2O This reaction, which is conducted on a large scale industrially, requires the removal of the water from the reaction mixture as it is formed. Esters react in the presence of an acid or base to give back the alcohol and a salt. This reaction is known as saponification because it is used in the preparation of soap. Ethanol can also form esters with inorganic acids. Diethyl sulfate and triethyl phosphate are prepared by treating ethanol with sulfur trioxide and phosphorus pentoxide respectively. Diethyl sulfate is a useful ethylating agent in organic synthesis. Ethyl nitrite, prepared from the reaction of ethanol with sodium nitrite and sulfuric acid, was formerly used as a diuretic. Dehydration In the presence of acid catalysts, alcohols can be converted to alkenes; ethanol, for example, is converted to ethylene. Typically solid acids such as alumina are used. CH3CH2OH → H2C=CH2 + H2O Since water is removed from the same molecule, the reaction is known as intramolecular dehydration. Intramolecular dehydration of an alcohol requires a high temperature and the presence of an acid catalyst such as sulfuric acid. Ethylene produced from sugar-derived ethanol (primarily in Brazil) competes with ethylene produced from petrochemical feedstocks such as naphtha and ethane. At a lower temperature than that of intramolecular dehydration, intermolecular alcohol dehydration may occur, producing a symmetrical ether. This is a condensation reaction. In the following example, diethyl ether is produced from ethanol: 2 CH3CH2OH → CH3CH2OCH2CH3 + H2O Combustion Complete combustion of ethanol forms carbon dioxide and water: C2H5OH (l) + 3 O2 (g) → 2 CO2 (g) + 3 H2O (l); −ΔcH = 1371 kJ/mol = 29.8 kJ/g = 327 kcal/mol = 7.1 kcal/g C2H5OH (l) + 3 O2 (g) → 2 CO2 (g) + 3 H2O (g); −ΔcH = 1236 kJ/mol = 26.8 kJ/g = 295.4 kcal/mol = 6.41 kcal/g Specific heat = 2.44 kJ/(kg·K) Acid-base chemistry Ethanol is a neutral molecule and the pH of a solution of ethanol in water is nearly 7.00. Ethanol can be quantitatively converted to its conjugate base, the ethoxide ion (CH3CH2O−), by reaction with an alkali metal such as sodium: 2 CH3CH2OH + 2 Na → 2 CH3CH2ONa + H2 or a very strong base such as sodium hydride: CH3CH2OH + NaH → CH3CH2ONa + H2 The acidities of water and ethanol are nearly the same, as indicated by their pKa values of 15.7 and 16, respectively. Thus, sodium ethoxide and sodium hydroxide exist in an equilibrium that is closely balanced: CH3CH2OH + NaOH ⇌ CH3CH2ONa + H2O Halogenation Ethanol is not used industrially as a precursor to ethyl halides, but the reactions are illustrative. Ethanol reacts with hydrogen halides to produce ethyl halides such as ethyl chloride and ethyl bromide via an SN2 reaction: CH3CH2OH + HCl → CH3CH2Cl + H2O HCl requires a catalyst such as zinc chloride. HBr requires refluxing with a sulfuric acid catalyst. Ethyl halides can, in principle, also be produced by treating ethanol with more specialized halogenating agents, such as thionyl chloride or phosphorus tribromide. CH3CH2OH + SOCl2 → CH3CH2Cl + SO2 + HCl Upon treatment with halogens in the presence of base, ethanol gives the corresponding haloform (CHX3, where X = Cl, Br, I).
An intermediate in the haloform reaction with chlorine is the aldehyde chloral, which forms chloral hydrate upon reaction with water: 4 Cl2 + CH3CH2OH → CCl3CHO + 5 HCl; CCl3CHO + H2O → CCl3C(OH)2H Oxidation Ethanol can be oxidized to acetaldehyde and further oxidized to acetic acid, depending on the reagents and conditions. This oxidation is of no importance industrially, but in the human body, these oxidation reactions are catalyzed in the liver by the enzyme alcohol dehydrogenase (ADH). The oxidation product of ethanol, acetic acid, is a nutrient for humans, being a precursor to acetyl CoA, whose acetyl group can be used for energy or for biosynthesis. Metabolism Ethanol is similar to macronutrients such as proteins, fats, and carbohydrates in that it provides calories. When consumed and metabolized, it contributes about 7 kilocalories per gram. Safety Ethanol is very flammable and should not be used around an open flame. Pure ethanol will irritate the skin and eyes. Nausea, vomiting, and intoxication are symptoms of ingestion. Long-term use by ingestion can result in serious liver damage. Atmospheric concentrations above one part per thousand exceed the European Union occupational exposure limits. History The fermentation of sugar into ethanol is one of the earliest biotechnologies employed by humans. Ethanol has historically been identified variously as spirit of wine or ardent spirits, and as aqua vitae or aqua vita. The intoxicating effects of its consumption have been known since ancient times. Ethanol has been used by humans since prehistory as the intoxicating ingredient of alcoholic beverages. Dried residue on 9,000-year-old pottery found in China suggests that Neolithic people consumed alcoholic beverages. The inflammable nature of the exhalations of wine was already known to ancient natural philosophers such as Aristotle (384–322 BCE), Theophrastus (–287 BCE), and Pliny the Elder (23/24–79 CE). However, this did not immediately lead to the isolation of ethanol, despite the development of more advanced distillation techniques in second- and third-century Roman Egypt. An important recognition, first found in one of the writings attributed to Jābir ibn Ḥayyān (ninth century CE), was that by adding salt to boiling wine, which increases the wine's relative volatility, the flammability of the resulting vapors may be enhanced. The distillation of wine is attested in Arabic works attributed to al-Kindī (–873 CE) and to al-Fārābī (–950), and in the 28th book of al-Zahrāwī's (Latin: Abulcasis, 936–1013) Kitāb al-Taṣrīf (later translated into Latin as Liber servatoris). In the twelfth century, recipes for the production of aqua ardens ("burning water", i.e., ethanol) by distilling wine with salt started to appear in a number of Latin works, and by the end of the thirteenth century it had become a widely known substance among Western European chemists. The works of Taddeo Alderotti (1223–1296) describe a method for concentrating ethanol involving repeated fractional distillation through a water-cooled still, by which an ethanol purity of 90% could be obtained. The medicinal properties of ethanol were studied by Arnald of Villanova (1240–1311 CE) and John of Rupescissa (–1366), the latter of whom regarded it as a life-preserving substance able to prevent all diseases (the aqua vitae or "water of life", also called by John the quintessence of wine). 
In China, archaeological evidence indicates that the true distillation of alcohol began during the Jin (1115–1234) or Southern Song (1127–1279) dynasties. A still has been found at an archaeological site in Qinglong, Hebei, dating to the 12th century. In India, the true distillation of alcohol was introduced from the Middle East, and was in wide use in the Delhi Sultanate by the 14th century. In 1796, German-Russian chemist Johann Tobias Lowitz obtained pure ethanol by mixing partially purified ethanol (the alcohol-water azeotrope) with an excess of anhydrous alkali and then distilling the mixture over low heat. French chemist Antoine Lavoisier described ethanol as a compound of carbon, hydrogen, and oxygen, and in 1807 Nicolas-Théodore de Saussure determined ethanol's chemical formula. Fifty years later, Archibald Scott Couper published the structural formula of ethanol, one of the first structural formulas determined. Ethanol was first prepared synthetically in 1825 by Michael Faraday. He found that sulfuric acid could absorb large volumes of coal gas. He gave the resulting solution to Henry Hennell, a British chemist, who found in 1826 that it contained "sulphovinic acid" (ethyl hydrogen sulfate). In 1828, Hennell and the French chemist Georges-Simon Serullas independently discovered that sulphovinic acid could be decomposed into ethanol. Thus, in 1825 Faraday had unwittingly discovered that ethanol could be produced from ethylene (a component of coal gas) by acid-catalyzed hydration, a process similar to current industrial ethanol synthesis. Ethanol was used as lamp fuel in the U.S. as early as 1840, but a tax levied on industrial alcohol during the Civil War made this use uneconomical. The tax was repealed in 1906. Use as an automotive fuel dates back to 1908, with the Ford Model T able to run on petrol (gasoline) or ethanol. It fuels some spirit lamps. Ethanol intended for industrial use is often produced from ethylene. Ethanol has widespread use as a solvent of substances intended for human contact or consumption, including scents, flavorings, colorings, and medicines. In chemistry, it is both a solvent and a feedstock for the synthesis of other products. It has a long history as a fuel for heat and light, and more recently as a fuel for internal combustion engines. See also Ethanol-induced non-lamellar phases in phospholipids Methanol 1-Propanol 2-Propanol Rubbing alcohol tert-Butyl alcohol Butanol fuel Timeline of alcohol fuel References Further reading External links Alcohol (Ethanol) at The Periodic Table of Videos (University of Nottingham) International Labour Organization ethanol safety information National Pollutant Inventory – Ethanol Fact Sheet CDC – NIOSH Pocket Guide to Chemical Hazards – Ethyl Alcohol National Institute of Standards and Technology chemical data on ethanol Chicago Board of Trade news and market data on ethanol futures Calculation of vapor pressure, liquid density, dynamic liquid viscosity, surface tension of ethanol Ethanol History A look into the history of ethanol ChemSub Online: Ethyl alcohol Industrial ethanol production process flow diagram using ethylene and sulphuric acid Alcohol solvents Alkanols Anatomical preservation Chemical hazards Commodity chemicals Disinfectants Hepatotoxins Household chemicals Human metabolites IARC Group 1 carcinogens Oxygenates Primary alcohols Rocket fuels Teratogens Alcohol chemistry
Ethanol
[ "Chemistry" ]
8,491
[ "Products of chemical industry", "Chemical hazards", "Alcohol chemistry", "Teratogens", "Commodity chemicals", "Food chemistry" ]
10,065
https://en.wikipedia.org/wiki/Empirical%20formula
In chemistry, the empirical formula of a chemical compound is the simplest whole number ratio of atoms present in a compound. A simple example of this concept is that the empirical formula of sulfur monoxide, or SO, is simply SO, as is the empirical formula of disulfur dioxide, S2O2. Thus, sulfur monoxide and disulfur dioxide, both compounds of sulfur and oxygen, have the same empirical formula. However, their molecular formulas, which express the number of atoms in each molecule of a chemical compound, are not the same. An empirical formula makes no mention of the arrangement or number of atoms. It is standard for many ionic compounds, like calcium chloride (CaCl2), and for macromolecules, such as silicon dioxide (SiO2). The molecular formula, on the other hand, shows the number of each type of atom in a molecule. The structural formula shows the arrangement of the molecule. It is also possible for different types of compounds to have equal empirical formulas. In the early days of chemistry, information regarding the composition of compounds came from elemental analysis, which gives information about the relative amounts of elements present in a compound, which can be written as percentages or mole ratios. However, chemists were not able to determine the exact amounts of these elements and were only able to know their ratios, hence the name "empirical formula". Since ionic compounds are extended networks of anions and cations, all formulas of ionic compounds are empirical. Examples Glucose (C6H12O6), ribose (C5H10O5), acetic acid (C2H4O2), and formaldehyde (CH2O) all have different molecular formulas but the same empirical formula: CH2O. This is the actual molecular formula for formaldehyde, but acetic acid has double the number of atoms, ribose has five times the number of atoms, and glucose has six times the number of atoms. Calculation example A chemical analysis of a sample of methyl acetate provides the following elemental data: 48.64% carbon (C), 8.16% hydrogen (H), and 43.20% oxygen (O). For the purposes of determining empirical formulas, it is assumed that we have 100 grams of the compound. If this is the case, the percentages will be equal to the mass of each element in grams. Step 1: Change each percentage to an expression of the mass of each element in grams. That is, 48.64% C becomes 48.64 g C, 8.16% H becomes 8.16 g H, and 43.20% O becomes 43.20 g O. Step 2: Convert the amount of each element in grams to its amount in moles: 48.64 g C ÷ 12.011 g/mol = 4.05 mol C, 8.16 g H ÷ 1.008 g/mol = 8.10 mol H, and 43.20 g O ÷ 16.00 g/mol = 2.70 mol O. Step 3: Divide each of the resulting values by the smallest of these values (2.70), giving 1.5 for C, 3.0 for H, and 1.0 for O. Step 4: If necessary, multiply these numbers by integers in order to get whole numbers; if an operation is done to one of the numbers, it must be done to all of them. Multiplying each ratio by 2 gives 3 for C, 6 for H, and 2 for O. Thus, the empirical formula of methyl acetate is C3H6O2. This formula also happens to be methyl acetate's molecular formula. References Chemical formulas Analytical chemistry
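The methyl acetate calculation above can also be reproduced programmatically. The sketch below assumes approximate standard atomic masses and uses illustrative function and variable names; it follows the same four steps: treat percentages as grams per 100 g of sample, convert to moles, divide by the smallest molar amount, and scale to small whole numbers.

```python
from math import isclose

# Reproduces the methyl acetate worked example: convert mass percentages to
# moles, divide by the smallest value, then scale to small whole numbers.
# Atomic masses are approximate; names are illustrative, not from a library.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def empirical_formula(mass_percent: dict[str, float]) -> dict[str, int]:
    # Steps 1-2: treat percentages as grams (per 100 g sample) and convert to moles.
    moles = {el: grams / ATOMIC_MASS[el] for el, grams in mass_percent.items()}
    # Step 3: divide by the smallest molar amount.
    smallest = min(moles.values())
    ratios = {el: n / smallest for el, n in moles.items()}
    # Step 4: multiply by small integers until every ratio is (nearly) a whole number.
    for factor in range(1, 11):
        scaled = {el: r * factor for el, r in ratios.items()}
        if all(isclose(v, round(v), abs_tol=0.05) for v in scaled.values()):
            return {el: round(v) for el, v in scaled.items()}
    raise ValueError("no small whole-number ratio found")

print(empirical_formula({"C": 48.64, "H": 8.16, "O": 43.20}))
# -> {'C': 3, 'H': 6, 'O': 2}, i.e. C3H6O2, the empirical (and molecular)
#    formula of methyl acetate.
```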
Empirical formula
[ "Chemistry" ]
632
[ "Chemical formulas", "Chemical structures", "nan" ]
10,073
https://en.wikipedia.org/wiki/Epicurus
Epicurus (341–270 BC) was an ancient Greek philosopher and sage who founded Epicureanism, a highly influential school of philosophy. He was born on the Greek island of Samos to Athenian parents. Influenced by Democritus, Aristippus, Pyrrho, and possibly the Cynics, he turned against the Platonism of his day and established his own school, known as "the Garden", in Athens. Epicurus and his followers were known for eating simple meals and discussing a wide range of philosophical subjects. He openly allowed women and slaves to join the school as a matter of policy. Of the over 300 works said to have been written by Epicurus about various subjects, the vast majority have been lost. Only three letters written by him—the letters to Menoeceus, Pythocles, and Herodotus—and two collections of quotes—the Principal Doctrines and the Vatican Sayings—have survived intact, along with a few fragments of his other writings. As a result of his work's destruction, most knowledge about his philosophy is due to later authors, particularly the biographer Diogenes Laërtius, the Epicurean Roman poet Lucretius and the Epicurean philosopher Philodemus, as well as the hostile but largely accurate accounts by the Pyrrhonist philosopher Sextus Empiricus, and the Academic Skeptic and statesman Cicero. Epicurus asserted that philosophy's purpose is to attain as well as to help others attain happy (eudaimonic), tranquil lives characterized by ataraxia (peace and freedom from fear) and aponia (the absence of pain). He advocated that people were best able to pursue philosophy by living a self-sufficient life surrounded by friends. He taught that the root of all human neuroses is denial of death and the tendency for human beings to assume that death will be horrific and painful, which he claimed causes unnecessary anxiety, selfish self-protective behaviors, and hypocrisy. According to Epicurus, death is the end of both the body and the soul and therefore should not be feared. Epicurus taught that although the gods exist, they have no involvement in human affairs. He taught that people should act ethically not because the gods punish or reward them for their actions but because, due to the power of guilt, amoral behavior would inevitably lead to remorse weighing on their consciences and as a result, they would be prevented from attaining ataraxia. Epicurus derived much of his physics and cosmology from the earlier philosopher Democritus (c. 460 – c. 370 BC). Like Democritus, Epicurus taught that the universe is infinite and eternal and that all matter is made up of extremely tiny, invisible particles known as atoms. All occurrences in the natural world are ultimately the result of atoms moving and interacting in empty space. Epicurus deviated from Democritus by proposing the idea of atomic "swerve", which holds that atoms may deviate from their expected course, thus permitting humans to possess free will in an otherwise deterministic universe. Though popular, Epicurean teachings were controversial from the beginning. Epicureanism reached the height of its popularity during the late years of the Roman Republic. It died out in late antiquity, subject to hostility from early Christianity. Throughout the Middle Ages, Epicurus was popularly, though inaccurately, remembered as a patron of drunkards, whoremongers, and gluttons. 
His teachings gradually became more widely known in the fifteenth century with the rediscovery of important texts, but his ideas did not become acceptable until the seventeenth century, when the French Catholic priest Pierre Gassendi revived a modified version of them, which was promoted by other writers, including Walter Charleton and Robert Boyle. His influence grew considerably during and after the Enlightenment, profoundly impacting the ideas of major thinkers, including John Locke, Thomas Jefferson, Jeremy Bentham, and Karl Marx. Life Upbringing and influences Epicurus was born in the Athenian settlement on the Aegean island of Samos in February 341 BC. His parents, Neocles and Chaerestrate, were both Athenian-born, and his father was an Athenian citizen. Epicurus grew up during the final years of the Greek Classical Period. Plato had died seven years before Epicurus was born and Epicurus was seven years old when Alexander the Great crossed the Hellespont into Persia. As a child, Epicurus would have received a typical ancient Greek education. As such, according to Norman Wentworth DeWitt, "it is inconceivable that he would have escaped the Platonic training in geometry, dialectic, and rhetoric." Epicurus is known to have studied under the instruction of a Samian Platonist named Pamphilus, probably for about four years. His Letter of Menoeceus and surviving fragments of his other writings strongly suggest that he had extensive training in rhetoric. After the death of Alexander the Great, Perdiccas expelled the Athenian settlers on Samos to Colophon, on the coast of what is now Turkey. Epicurus joined his family there after the completion of his military service. He studied under Nausiphanes, who followed the teachings of Democritus, and later those of Pyrrho, whose way of life Epicurus greatly admired. Epicurus's teachings were heavily influenced by those of earlier philosophers, particularly Democritus. Nonetheless, Epicurus differed from his predecessors on several key points of determinism and vehemently denied having been influenced by any previous philosophers, whom he denounced as "confused". Instead, he insisted that he had been "self-taught". According to DeWitt, Epicurus's teachings also show influences from the contemporary philosophical school of Cynicism. The Cynic philosopher Diogenes of Sinope was still alive when Epicurus would have been in Athens for his required military training and it is possible they may have met. Diogenes's pupil Crates of Thebes ( 365 – 285 BC) was a close contemporary of Epicurus. Epicurus agreed with the Cynics' quest for honesty, but rejected their "insolence and vulgarity", instead teaching that honesty must be coupled with courtesy and kindness. Epicurus shared this view with his contemporary, the comic playwright Menander. Epicurus's Letter to Menoeceus, possibly an early work of his, is written in an eloquent style similar to that of the Athenian rhetorician Isocrates (436–338 BC), but, for his later works, he seems to have adopted the bald, intellectual style of the mathematician Euclid. Epicurus's epistemology also bears an unacknowledged debt to the later writings of Aristotle (384–322 BC), who rejected the Platonic idea of hypostatic Reason and instead relied on nature and empirical evidence for knowledge about the universe. During Epicurus's formative years, Greek knowledge about the rest of the world was rapidly expanding due to the Hellenization of the Near East and the rise of Hellenistic kingdoms. 
Epicurus's philosophy was consequently more universal in its outlook than those of his predecessors, since it took cognizance of non-Greek peoples as well as Greeks. He may have had access to the now-lost writings of the historian and ethnographer Megasthenes, who wrote during the reign of Seleucus I Nicator (ruled 305–281 BC). Teaching career During Epicurus's lifetime, Platonism was the dominant philosophy in higher education. Epicurus's opposition to Platonism formed a large part of his thought. Over half of the forty Principal Doctrines of Epicureanism are flat contradictions of Platonism. In around 311 BC, Epicurus, when he was around thirty years old, began teaching in Mytilene. Around this time, Zeno of Citium, the founder of Stoicism, arrived in Athens, at the age of about twenty-one, but Zeno did not begin teaching what would become Stoicism for another twenty years. Although later texts, such as the writings of the first-century BC Roman orator Cicero, portray Epicureanism and Stoicism as rivals, this rivalry seems to have only emerged after Epicurus's death. Epicurus's teachings caused strife in Mytilene and he was forced to leave. He then founded a school in Lampsacus before returning to Athens in 306 BC, where he remained until his death. There he founded The Garden (κῆπος), a school named for the garden he owned that served as the school's meeting place, about halfway between the locations of two other schools of philosophy, the Stoa and the Academy. The Garden was more than just a school; it was "a community of like-minded and aspiring practitioners of a particular way of life." The primary members were Hermarchus, the financier Idomeneus, Leonteus and his wife Themista, the satirist Colotes, the mathematician Polyaenus of Lampsacus, and Metrodorus of Lampsacus, the most famous popularizer of Epicureanism. His school was the first of the ancient Greek philosophical schools to admit women as a rule rather than an exception, and the biography of Epicurus by Diogenes Laërtius lists female students such as Leontion and Nikidion. An inscription on the gate to The Garden is recorded by Seneca the Younger in epistle XXI of : "Stranger, here you will do well to tarry; here our highest good is pleasure." According to Diskin Clay, Epicurus himself established a custom of celebrating his birthday annually with common meals, befitting his stature as heros ktistes ("founding hero") of the Garden. He ordained in his will annual memorial feasts for himself on the same date (10th of Gamelion month). Epicurean communities continued this tradition, referring to Epicurus as their "saviour" (soter) and celebrating him as hero. The hero cult of Epicurus may have operated as a Garden variety civic religion. However, clear evidence of an Epicurean hero cult, as well as the cult itself, seems buried by the weight of posthumous philosophical interpretation. Epicurus never married and had no known children. He was most likely a vegetarian. Death Diogenes Laërtius records that, according to Epicurus's successor Hermarchus, Epicurus died a slow and painful death in 270 BC at the age of seventy-two from a stone blockage of his urinary tract. Despite being in immense pain, Epicurus is said to have remained cheerful and to have continued to teach until the very end. Possible insights into Epicurus's death may be offered by the extremely brief Epistle to Idomeneus, included by Diogenes Laërtius in Book X of his Lives and Opinions of Eminent Philosophers. 
The authenticity of this letter is uncertain and it may be a later pro-Epicurean forgery intended to paint an admirable portrait of the philosopher to counter the large number of forged epistles in Epicurus's name portraying him unfavorably. I have written this letter to you on a happy day to me, which is also the last day of my life. For I have been attacked by a painful inability to urinate, and also dysentery, so violent that nothing can be added to the violence of my sufferings. But the cheerfulness of my mind, which comes from the recollection of all my philosophical contemplation, counterbalances all these afflictions. And I beg you to take care of the children of Metrodorus, in a manner worthy of the devotion shown by the young man to me, and to philosophy. If authentic, this letter would support the tradition that Epicurus was able to remain joyful to the end, even in the midst of his suffering. It would also indicate that he maintained a special concern for the wellbeing of children. Philosophy Epistemology Epicurus and his followers had a well-developed epistemology, which developed as a result of their rivalry with other philosophical schools. Epicurus wrote a treatise entitled the Kanōn, or Rule, in which he explained his methods of investigation and theory of knowledge. This book, however, has not survived, nor does any other text that fully and clearly explains Epicurean epistemology, leaving only mentions of this epistemology by several authors to reconstruct it. Epicurus rejected the Platonic idea of "Reason" as a reliable source of knowledge about the world apart from the senses and was bitterly opposed to the Pyrrhonists and Academic Skeptics, who not only questioned the ability of the senses to provide accurate knowledge about the world, but also whether it is even possible to know anything about the world at all. Epicurus maintained that the senses never deceive humans, but that the senses can be misinterpreted. Epicurus held that the purpose of all knowledge is to aid humans in attaining ataraxia. He taught that knowledge is learned through experiences rather than innate and that the acceptance of the fundamental truth of the things a person perceives is essential to a person's moral and spiritual health. In the Letter to Pythocles, he states, "If a person fights the clear evidence of his senses he will never be able to share in genuine tranquility." Epicurus regarded gut feelings as the ultimate authority on matters of morality and held that whether a person feels an action is right or wrong is a far more cogent guide to whether that act really is right or wrong than abstract maxims, strict codified rules of ethics, or even reason itself. Epicurus believed that any statement that is not directly contrary to human perception can be considered possibly true. On the other hand, anything contrary to experience can be ruled out as false. Epicureans often used analogies to everyday experience to support their argument of so-called "imperceptibles", which included anything that a human being cannot perceive, such as the motion of atoms. In line with this principle of non-contradiction, the Epicureans believed that events in the natural world may have multiple causes that are all equally possible and probable. 
Lucretius writes in On the Nature of Things, as translated by William Ellery Leonard: There be, besides, some thing Of which 'tis not enough one only cause To state—but rather several, whereof one Will be the true: lo, if thou shouldst espy Lying afar some fellow's lifeless corse, 'Twere meet to name all causes of a death, That cause of his death might thereby be named: For prove thou mayst he perished not by steel, By cold, nor even by poison nor disease, Yet somewhat of this sort hath come to him We know—And thus we have to say the same In divers cases. Epicurus strongly favored naturalistic explanations over theological ones. In his Letter to Pythocles, he offers four different possible natural explanations for thunder, six different possible natural explanations for lightning, three for snow, three for comets, two for rainbows, two for earthquakes, and so on. Although all of these explanations are now known to be false, they were an important step in the history of science, because Epicurus was trying to explain natural phenomena using natural explanations, rather than resorting to inventing elaborate stories about gods and mythic heroes. Ethics Epicurus was a hedonist, meaning he taught that what is pleasurable is morally good and what is painful is morally evil. He idiosyncratically defined "pleasure" as the absence of suffering and taught that all humans should seek to attain the state of ataraxia, meaning "untroubledness", a state in which the person is completely free from all pain or suffering. He argued that most of the suffering which human beings experience is caused by the irrational fears of death, divine retribution, and punishment in the afterlife. In his Letter to Menoeceus, Epicurus explains that people seek wealth and power on account of these fears, believing that having more money, prestige, or political clout will save them from death. He, however, maintains that death is the end of existence, that the terrifying stories of punishment in the afterlife are ridiculous superstitions, and that death is therefore nothing to be feared. He writes in his Letter to Menoeceus: "Accustom thyself to believe that death is nothing to us, for good and evil imply sentience, and death is the privation of all sentience;... Death, therefore, the most awful of evils, is nothing to us, seeing that, when we are, death is not come, and, when death is come, we are not." From this doctrine arose the Epicurean epitaph: Non fui, fui, non-sum, non-curo ("I was not; I was; I am not; I do not care"), which is inscribed on the gravestones of his followers and seen on many ancient gravestones of the Roman Empire. This quotation is often used today at humanist funerals. The Tetrapharmakos presents a summary of the key points of Epicurean ethics: Don't fear god Don't worry about death What is good is easy to get What is terrible is easy to endure Although Epicurus has been commonly misunderstood as an advocate of the rampant pursuit of pleasure, he, in fact, maintained that a person can only be happy and free from suffering by living wisely, soberly, and morally. He strongly disapproved of raw, excessive sensuality and warned that a person must take into account whether the consequences of his actions will result in suffering, writing, "the pleasant life is produced not by a string of drinking bouts and revelries, nor by the enjoyment of boys and women, nor by fish and the other items on an expensive menu, but by sober reasoning." 
He also wrote that a single good piece of cheese could be equally pleasing as an entire feast. Furthermore, Epicurus taught that "it is not possible to live pleasurably without living sensibly and nobly and justly", because a person who engages in acts of dishonesty or injustice will be "loaded with troubles" on account of his own guilty conscience and will live in constant fear that his wrongdoings will be discovered by others. A person who is kind and just to others, however, will have no fear and will be more likely to attain ataraxia. Epicurus distinguished between two different types of pleasure: "moving" pleasures (κατὰ κίνησιν ἡδοναί) and "static" pleasures (καταστηματικαὶ ἡδοναί). "Moving" pleasures occur when one is in the process of satisfying a desire and involve an active titillation of the senses. After one's desires have been satisfied (e.g. when one is full after eating), the pleasure quickly goes away and the suffering of wanting to fulfill the desire again returns. For Epicurus, static pleasures are the best pleasures because moving pleasures are always bound up with pain. Epicurus had a low opinion of sex and marriage, regarding both as having dubious value. Instead, he maintained that platonic friendships are essential to living a happy life. One of the Principal Doctrines states, "Of the things wisdom acquires for the blessedness of life as a whole, far the greatest is the possession of friendship." He also taught that philosophy is itself a pleasure to engage in. One of the quotes from Epicurus recorded in the Vatican Sayings declares, "In other pursuits, the hard-won fruit comes at the end. But in philosophy, delight keeps pace with knowledge. It is not after the lesson that enjoyment comes: learning and enjoyment happen at the same time." Epicurus distinguishes between three types of desires: natural and necessary, natural but unnecessary, and vain and empty. Natural and necessary desires include the desires for food and shelter. These are easy to satisfy, difficult to eliminate, bring pleasure when satisfied, and are naturally limited. Going beyond these limits produces unnecessary desires, such as the desire for luxury foods. Although food is necessary, luxury food is not necessary. Correspondingly, Epicurus advocates a life of hedonistic moderation by reducing desire, thus eliminating the unhappiness caused by unfulfilled desires. Vain desires include desires for power, wealth, and fame. These are difficult to satisfy because no matter how much one gets, one can always want more. These desires are inculcated by society and by false beliefs about what we need. They are not natural and are to be shunned. Epicurus' teachings were introduced into medical philosophy and practice by the Epicurean doctor Asclepiades of Bithynia, who was the first physician who introduced Greek medicine in Rome. Asclepiades introduced the friendly, sympathetic, pleasing and painless treatment of patients. He advocated humane treatment of mental disorders, had insane persons freed from confinement and treated them with natural therapy, such as diet and massages. His teachings are surprisingly modern; therefore Asclepiades is considered to be a pioneer physician in psychotherapy, physical therapy and molecular medicine. Physics Epicurus writes in his Letter to Herodotus (not the historian) that "nothing ever arises from the nonexistent", indicating that all events therefore have causes, regardless of whether those causes are known or unknown. 
Similarly, he also writes that nothing ever passes away into nothingness, because, "if an object that passes from our view were completely annihilated, everything in the world would have perished, since that into which things were dissipated would be nonexistent." He therefore states: "The totality of things was always just as it is at present and will always remain the same because there is nothing into which it can change, inasmuch as there is nothing outside the totality that could intrude and effect change." Like Democritus before him, Epicurus taught that all matter is entirely made of extremely tiny particles called "atoms" (Greek átomos, meaning "indivisible"). For Epicurus and his followers, the existence of atoms was a matter of empirical observation; Epicurus's devoted follower, the Roman poet Lucretius, cites the gradual wearing down of rings from being worn, statues from being kissed, stones from being dripped on by water, and roads from being walked on in On the Nature of Things as evidence for the existence of atoms as tiny, imperceptible particles. Also like Democritus, Epicurus was a materialist who taught that the only things that exist are atoms and void. Void occurs in any place where there are no atoms. Epicurus and his followers believed that atoms and void are both infinite and that the universe is therefore boundless. In On the Nature of Things, Lucretius argues this point using the example of a man throwing a javelin at the theoretical boundary of a finite universe. He states that the javelin must either go past the edge of the universe, in which case it is not really a boundary, or it must be blocked by something and prevented from continuing its path, but, if that happens, then the object blocking it must be outside the confines of the universe. As a result of this belief that the universe and the number of atoms in it are infinite, Epicurus and the Epicureans believed that there must also be infinitely many worlds within the universe. Epicurus taught that the motion of atoms is constant, eternal, and without beginning or end. He held that there are two kinds of motion: the motion of atoms and the motion of visible objects. Both kinds of motion are real and not illusory. Democritus had described atoms as not only eternally moving, but also eternally flying through space, colliding, coalescing, and separating from each other as necessary. In a rare departure from Democritus's physics, Epicurus posited the idea of atomic "swerve" (Latin: clinamen), one of his best-known original ideas. According to this idea, atoms, as they are travelling through space, may deviate slightly from the course they would ordinarily be expected to follow. Epicurus's reason for introducing this doctrine was that he wanted to preserve the concepts of free will and ethical responsibility while still maintaining the deterministic physical model of atomism. Lucretius describes it, saying, "It is this slight deviation of primal bodies, at indeterminate times and places, which keeps the mind as such from experiencing an inner compulsion in doing everything it does and from being forced to endure and suffer like a captive in chains." Epicurus was the first to assert human freedom as a result of the fundamental indeterminism in the motion of atoms. This has led some philosophers to think that, for Epicurus, free will was caused directly by chance. In his On the Nature of Things, Lucretius appears to suggest this in the best-known passage on Epicurus' position. 
In his Letter to Menoeceus, however, Epicurus follows Aristotle and clearly identifies three possible causes: "some things happen of necessity, others by chance, others through our own agency." Aristotle said some things "depend on us" (eph'hemin). Epicurus agreed, and said it is to these last things that praise and blame naturally attach. For Epicurus, the "swerve" of the atoms simply defeated determinism to leave room for autonomous agency. Theology In his Letter to Menoeceus, a summary of his own moral and theological teachings, the first piece of advice Epicurus himself gives to his student is: "First, believe that a god is an indestructible and blessed animal, in accordance with the general conception of god commonly held, and do not ascribe to god anything foreign to his indestructibility or repugnant to his blessedness." Epicurus maintained that he and his followers knew that the gods exist because "our knowledge of them is a matter of clear and distinct perception", meaning that people can empirically sense their presences. He did not mean that people can see the gods as physical objects, but rather that they can see visions of the gods sent from the remote regions of interstellar space in which they actually reside. According to George K. Strodach, Epicurus could have easily dispensed of the gods entirely without greatly altering his materialist worldview, but the gods still play one important function in Epicurus's theology as the paragons of moral virtue to be emulated and admired. Epicurus rejected the conventional Greek view of the gods as anthropomorphic beings who walked the earth like ordinary people, fathered illegitimate offspring with mortals, and pursued personal feuds. Instead, he taught that the gods are morally perfect, but detached and immobile beings who live in the remote regions of interstellar space. In line with these teachings, Epicurus adamantly rejected the idea that deities were involved in human affairs in any way. Epicurus maintained that the gods are so utterly perfect and removed from the world that they are incapable of listening to prayers or supplications or doing virtually anything aside from contemplating their own perfections. In his Letter to Herodotus, he specifically denies that the gods have any control over natural phenomena, arguing that this would contradict their fundamental nature, which is perfect, because any kind of worldly involvement would tarnish their perfection. He further warned that believing that the gods control natural phenomena would only mislead people into believing the superstitious view that the gods punish humans for wrongdoing, which only instills fear and prevents people from attaining ataraxia. Epicurus himself criticizes popular religion in both his Letter to Menoeceus and his Letter to Herodotus, but in a restrained and moderate tone. Later Epicureans mainly followed the same ideas as Epicurus, believing in the existence of the gods, but emphatically rejecting the idea of divine providence. Their criticisms of popular religion, however, are often less gentle than those of Epicurus himself. The Letter to Pythocles, written by a later Epicurean, is dismissive and contemptuous towards popular religion and Epicurus's devoted follower, the Roman poet Lucretius ( 99 BC – 55 BC), passionately assailed popular religion in his philosophical poem On the Nature of Things. 
In this poem, Lucretius declares that popular religious practices not only do not instill virtue, but rather result in "misdeeds both wicked and ungodly", citing the mythical sacrifice of Iphigenia as an example. Lucretius argues that divine creation and providence are illogical, not because the gods do not exist, but rather because these notions are incompatible with the Epicurean principles of the gods' indestructibility and blessedness. The later Pyrrhonist philosopher Sextus Empiricus ( 160 – 210 AD) rejected the teachings of the Epicureans specifically because he regarded them as theological "Dogmaticists". Epicurean paradox The Epicurean paradox or riddle of Epicurus or Epicurus' trilemma is a version of the problem of evil. Lactantius attributes this trilemma to Epicurus in De Ira Dei, 13, 20-21: God, he says, either wishes to take away evils, and is unable; or He is able, and is unwilling; or He is neither willing nor able, or He is both willing and able. If He is willing and is unable, He is feeble, which is not in accordance with the character of God; if He is able and unwilling, He is envious, which is equally at variance with God; if He is neither willing nor able, He is both envious and feeble, and therefore not God; if He is both willing and able, which alone is suitable to God, from what source then are evils? Or why does He not remove them? In Dialogues concerning Natural Religion (1779), David Hume also attributes the argument to Epicurus: Epicurus’s old questions are yet unanswered. Is he willing to prevent evil, but not able? then is he impotent. Is he able, but not willing? then is he malevolent. Is he both able and willing? whence then is evil? No extant writings of Epicurus contain this argument. However, the vast majority of Epicurus's writings have been lost and it is possible that some form of this argument may have been found in his lost treatise On the Gods, which Diogenes Laërtius describes as one of his greatest works. If Epicurus really did make some form of this argument, it would not have been an argument against the existence of deities, but rather an argument against divine providence. Epicurus's extant writings demonstrate that he did believe in the existence of deities. Furthermore, religion was such an integral part of daily life in Greece during the early Hellenistic Period that it is doubtful anyone during that period could have been an atheist in the modern sense of the word. Instead, the Greek word (átheos), meaning "without a god", was used as a term of abuse, not as an attempt to describe a person's beliefs. Politics Epicurus promoted an innovative theory of justice as a social contract. Justice, Epicurus said, is an agreement neither to harm nor be harmed, and we need to have such a contract in order to enjoy fully the benefits of living together in a well-ordered society. Laws and punishments are needed to keep misguided fools in line who would otherwise break the contract. But the wise person sees the usefulness of justice, and because of his limited desires, he has no need to engage in the conduct prohibited by the laws in any case. Laws that are useful for promoting happiness are just, but those that are not useful are not just. (Principal Doctrines 31–40) Epicurus discouraged participation in politics, as doing so leads to perturbation and status seeking. He instead advocated not drawing attention to oneself. 
This principle is epitomised by the phrase lathe biōsas (), meaning "live in obscurity", "get through life without drawing attention to yourself", i.e., live without pursuing glory or wealth or power, but anonymously, enjoying little things like food, the company of friends, etc. Plutarch elaborated on this theme in his essay Is the Saying "Live in Obscurity" Right? (, An recte dictum sit latenter esse vivendum) 1128c; cf. Flavius Philostratus, Vita Apollonii 8.28.12. Works Epicurus was an extremely prolific writer. According to Diogenes Laërtius, he wrote around 300 treatises on a variety of subjects. Although more original writings of Epicurus have survived to the present day than of any other Hellenistic Greek philosopher, the vast majority of everything he wrote has still been lost, and most of what is known about Epicurus's teachings come from the writings of his later followers, particularly the Roman poet Lucretius. The only surviving complete works by Epicurus are three relatively lengthy letters, which are quoted in their entirety in Book X of Diogenes Laërtius's Lives and Opinions of Eminent Philosophers, and two groups of quotes: the Principal Doctrines (Κύριαι Δόξαι), which are likewise preserved through quotation by Diogenes Laërtius, and the Vatican Sayings, preserved in a manuscript from the Vatican Library that was first discovered in 1888. In the Letter to Herodotus and the Letter to Pythocles, Epicurus summarizes his philosophy on nature and, in the Letter to Menoeceus, he summarizes his moral teachings. Numerous fragments of Epicurus's lost thirty-seven volume treatise On Nature have been found among the charred papyrus fragments at the Villa of the Papyri at Herculaneum. Scholars first began attempting to unravel and decipher these scrolls in 1800, but the efforts are painstaking and are still ongoing. According to Diogenes Laertius (10.27-9), the major works of Epicurus include: On Nature, in 37 books On Atoms and the Void On Love Abridgment of the Arguments employed against the Natural Philosophers Against the Megarians Problems Fundamental Propositions (Kyriai Doxai) On Choice and Avoidance On the Chief Good On the Criterion (the Canon) Chaeridemus, On the Gods On Piety Hegesianax Four essays on Lives Essay on Just Dealing Neocles Essay addressed to Themista The Banquet (Symposium) Eurylochus Essay addressed to Metrodorus Essay on Seeing Essay on the Angle in an Atom Essay on Touch Essay on Fate Opinions on the Passions Treatise addressed to Timocrates Prognostics Exhortations On Images On Perceptions Aristobulus Essay on Music (i.e., on music, poetry, and dance) On Justice and the other Virtues On Gifts and Gratitude Polymedes Timocrates (three books) Metrodorus (five books) Antidorus (two books) Opinions about Diseases and Death, addressed to Mithras Callistolas Essay on Kingly Power Anaximenes Letters Legacy Ancient Epicureanism Epicureanism was extremely popular from the very beginning. Diogenes Laërtius records that the number of Epicureans throughout the world exceeded the populations of entire cities. Nonetheless, Epicurus was not universally admired and, within his own lifetime, he was vilified as an ignorant buffoon and egoistic sybarite. He remained the most simultaneously admired and despised philosopher in the Mediterranean for the next nearly five centuries. Epicureanism rapidly spread beyond the Greek mainland all across the Mediterranean world. By the first century BC, it had established a strong foothold in Italy. 
The Roman orator Cicero (106 – 43 BC), who deplored Epicurean ethics, lamented, "the Epicureans have taken Italy by storm." The overwhelming majority of surviving Greek and Roman sources are vehemently negative towards Epicureanism and, according to Pamela Gordon, they routinely depict Epicurus himself as "monstrous or laughable". Many Romans in particular took a negative view of Epicureanism, seeing its advocacy of the pursuit of voluptas ("pleasure") as contrary to the Roman ideal of virtus ("manly virtue"). The Romans therefore often stereotyped Epicurus and his followers as weak and effeminate. Prominent critics of his philosophy include prominent authors such as the Roman Stoic Seneca the Younger ( 4 BC – AD 65) and the Greek Middle Platonist Plutarch ( 46 – 120), who both derided these stereotypes as immoral and disreputable. Gordon characterizes anti-Epicurean rhetoric as so "heavy-handed" and misrepresentative of Epicurus's actual teachings that they sometimes come across as "comical". In his De vita beata, Seneca states that the "sect of Epicurus... has a bad reputation, and yet it does not deserve it." and compares it to "a man in a dress: your chastity remains, your virility is unimpaired, your body has not submitted sexually, but in your hand is a tympanum." Epicureanism was a notoriously conservative philosophical school; although Epicurus's later followers did expand on his philosophy, they dogmatically retained what he himself had originally taught without modifying it. Epicureans and admirers of Epicureanism revered Epicurus himself as a great teacher of ethics, a savior, and even a god. His image was worn on finger rings, portraits of him were displayed in living rooms, and wealthy followers venerated likenesses of him in marble sculpture. His admirers revered his sayings as divine oracles, carried around copies of his writings, and cherished copies of his letters like the letters of an apostle. On the twentieth day of every month, admirers of his teachings would perform a solemn ritual to honor his memory. At the same time, opponents of his teachings denounced him with vehemence and persistence. However, in the first and second centuries AD, Epicureanism gradually began to decline as it failed to compete with Stoicism, which had an ethical system more in line with traditional Roman values. Epicureanism also suffered decay in the wake of Christianity, which was also rapidly expanding throughout the Roman Empire. Of all the Greek philosophical schools, Epicureanism was the one most at odds with the new Christian teachings, since Epicureans believed that the soul was mortal, denied the existence of an afterlife, denied that the divine had any active role in human life, and advocated pleasure as the foremost goal of human existence. As such, Christian writers such as Justin Martyr ( 100– 165 AD), Athenagoras of Athens ( 133– 190), Tertullian ( 155– 240), and Clement of Alexandria ( 150– 215), Arnobius (died 330), and Lactantius (c. 250-c.325) all singled it out for the most vitriolic criticism. In spite of this, DeWitt argues that Epicureanism and Christianity share much common language, calling Epicureanism "the first missionary philosophy" and "the first world philosophy". Both Epicureanism and Christianity placed strong emphasis on the importance of love and forgiveness and early Christian portrayals of Jesus are often similar to Epicurean portrayals of Epicurus. 
DeWitt argues that Epicureanism, in many ways, helped pave the way for the spread of Christianity by "helping to bridge the gap between Greek intellectualism and a religious way of life" and "shunt[ing] the emphasis from the political to the social virtues and offer[ing] what may be called a religion of humanity." Middle Ages By the early fifth century AD, Epicureanism was virtually extinct. The Christian Church Father Augustine of Hippo (354–430 AD) declared, "its ashes are so cold that not a single spark can be struck from them." While the ideas of Plato and Aristotle could easily be adapted to suit a Christian worldview, the ideas of Epicurus were not nearly as easily amenable. As such, while Plato and Aristotle enjoyed a privileged place in Christian philosophy throughout the Middle Ages, Epicurus was not held in such esteem. Information about Epicurus's teachings was available, through Lucretius's On the Nature of Things, quotations of it found in medieval Latin grammars and florilegia, and encyclopedias, such as Isidore of Seville's Etymologiae (seventh century) and Hrabanus Maurus's De universo (ninth century), but there is little evidence that these teachings were systematically studied or comprehended. During the Middle Ages, Epicurus was remembered by the educated as a philosopher, but he frequently appeared in popular culture as a gatekeeper to the Garden of Delights, the "proprietor of the kitchen, the tavern, and the brothel." He appears in this guise in Martianus Capella's Marriage of Mercury and Philology (fifth century), John of Salisbury's Policraticus (1159), John Gower's Mirour de l'Omme, and Geoffrey Chaucer's Canterbury Tales. Epicurus and his followers appear in Dante Alighieri's Inferno in the Sixth Circle of Hell, where they are imprisoned in flaming coffins for having believed that the soul dies with the body. Renaissance In 1417, a manuscript-hunter named Poggio Bracciolini discovered a copy of Lucretius's On the Nature of Things in a monastery near Lake Constance. The discovery of this manuscript was met with immense excitement, because scholars were eager to analyze and study the teachings of classical philosophers and this previously forgotten text contained the most comprehensive account of Epicurus's teachings known in Latin. The first scholarly dissertation on Epicurus, De voluptate (On Pleasure) by the Italian Humanist and Catholic priest Lorenzo Valla was published in 1431. Valla made no mention of Lucretius or his poem. Instead, he presented the treatise as a discussion on the nature of the highest good between an Epicurean, a Stoic, and a Christian. Valla's dialogue ultimately rejects Epicureanism, but, by presenting an Epicurean as a member of the dispute, Valla lent Epicureanism credibility as a philosophy that deserved to be taken seriously. None of the Quattrocento Humanists ever clearly endorsed Epicureanism, but scholars such as Francesco Zabarella (1360–1417), Francesco Filelfo (1398–1481), Cristoforo Landino (1424–1498), and Leonardo Bruni ( 1370–1444) did give Epicureanism a fairer analysis than it had traditionally received and provided a less overtly hostile assessment of Epicurus himself. Nonetheless, "Epicureanism" remained a pejorative, synonymous with extreme egoistic pleasure-seeking, rather than a name of a philosophical school. This reputation discouraged orthodox Christian scholars from taking what others might regard as an inappropriately keen interest in Epicurean teachings. 
Epicureanism did not take hold in Italy, France, or England until the seventeenth century. Even the liberal religious skeptics who might have been expected to take an interest in Epicureanism evidently did not; Étienne Dolet (1509–1546) only mentions Epicurus once in all his writings and François Rabelais (between 1483 and 1494–1553) never mentions him at all. Michel de Montaigne (1533–1592) is the exception to this trend, quoting a full 450 lines of Lucretius's On the Nature of Things in his Essays. His interest in Lucretius, however, seems to have been primarily literary and he is ambiguous about his feelings on Lucretius's Epicurean worldview. During the Protestant Reformation, the label "Epicurean" was bandied back and forth as an insult between Protestants and Catholics. Revival In the seventeenth century, the French Catholic priest and scholar Pierre Gassendi (1592–1655) sought to dislodge Aristotelianism from its position of the highest dogma by presenting Epicureanism as a better and more rational alternative. In 1647, Gassendi published his book De vita et moribus Epicuri (The Life and Morals of Epicurus), a passionate defense of Epicureanism. In 1649, he published a commentary on Diogenes Laërtius's Life of Epicurus. He left Syntagma philosophicum (Philosophical Compendium), a synthesis of Epicurean doctrines, unfinished at the time of his death in 1655. It was finally published in 1658, after undergoing revision by his editors. Gassendi modified Epicurus's teachings to make them palatable for a Christian audience. For instance, he argued that atoms were not eternal, uncreated, and infinite in number, instead contending that an extremely large but finite number of atoms were created by God at creation. As a result of Gassendi's modifications, his books were never censored by the Catholic Church. They came to exert profound influence on later writings about Epicurus. Gassendi's version of Epicurus's teachings became popular among some members of English scientific circles. For these scholars, however, Epicurean atomism was merely a starting point for their own idiosyncratic adaptations of it. To orthodox thinkers, Epicureanism was still regarded as immoral and heretical. For instance, Lucy Hutchinson (1620–1681), the first translator of Lucretius's On the Nature of Things into English, railed against Epicurus as "a lunatic dog" who formulated "ridiculous, impious, execrable doctrines". Epicurus's teachings were made respectable in England by the natural philosopher Walter Charleton (1619–1707), whose first Epicurean work, The Darkness of Atheism Dispelled by the Light of Nature (1652), advanced Epicureanism as a "new" atomism. His next work Physiologia Epicuro-Gassendo-Charletoniana, or a Fabrick of Science Natural, upon a Hypothesis of Atoms, Founded by Epicurus, Repaired by Petrus Gassendus, and Augmented by Walter Charleton (1654) emphasized this idea. These works, together with Charleton's Epicurus's Morals (1658), provided the English public with readily available descriptions of Epicurus's philosophy and assured orthodox Christians that Epicureanism was no threat to their beliefs. The Royal Society, chartered in 1662, advanced Epicurean atomism. One of the most prolific defenders of atomism was the chemist Robert Boyle (1627–1691), who argued for it in publications such as The Origins of Forms and Qualities (1666), Experiments, Notes, etc. 
about the Mechanical Origin and Production of Divers Particular Qualities (1675), and Of the Excellency and Grounds of the Mechanical Hypothesis (1674). By the end of the seventeenth century, Epicurean atomism was widely accepted by members of the English scientific community as the best model for explaining the physical world, but it had been modified so greatly that Epicurus was no longer seen as its original parent. Enlightenment and after The Anglican bishop Joseph Butler's anti-Epicurean polemics in his Fifteen Sermons Preached at the Rolls Chapel (1726) and Analogy of Religion (1736) set the tune for what most orthodox Christians believed about Epicureanism for the remainder of the eighteenth and nineteenth centuries. Nonetheless, there are a few indications from this time period of Epicurus's improving reputation. Epicureanism was beginning to lose its associations with indiscriminate and insatiable gluttony, which had been characteristic of its reputation ever since antiquity. Instead, the word "epicure" began to refer to a person with extremely refined taste in food. Examples of this usage include "Epicurean cooks / sharpen with cloyless sauce his appetite" from William Shakespeare's Antony and Cleopatra (Act II. scene i; 1607) and "such an epicure was Potiphar—to please his tooth and pamper his flesh with delicacies" from William Whately's Prototypes (1646). Around the same time, the Epicurean injunction to "live in obscurity" was beginning to gain popularity as well. In 1685, Sir William Temple (1628–1699) abandoned a promising career as a diplomat and instead retired to his garden, devoting himself to writing essays on Epicurus's moral teachings. That same year, John Dryden translated the celebrated lines from Book II of Lucretius's On the Nature of Things: "'Tis pleasant, safely to behold from shore / The rowling ship, and hear the Tempest roar." Meanwhile, John Locke (1632–1704) adapted Gassendi's modified version of Epicurus's epistemology, which became highly influential on English empiricism. Many thinkers with sympathies towards the Enlightenment endorsed Epicureanism as an admirable moral philosophy. Thomas Jefferson (1743–1826), one of the Founding Fathers of the United States, declared in 1819, "I too am an Epicurean. I consider the genuine (not imputed) doctrines of Epicurus as containing everything rational in moral philosophy which Greece and Rome have left us." The German philosopher Karl Marx (1818–1883), whose ideas are the basis of Marxism, was profoundly influenced as a young man by the teachings of Epicurus and his doctoral thesis was a Hegelian dialectical analysis of the differences between the natural philosophies of Democritus and Epicurus. Marx viewed Democritus as a rationalist skeptic, whose epistemology was inherently contradictory, but saw Epicurus as a dogmatic empiricist, whose worldview is internally consistent and practically applicable. The British poet Alfred Tennyson (1809–1892) praised "the sober majesties / of settled, sweet, Epicurean life" in his 1868 poem "Lucretius". Epicurus's ethical teachings also had an indirect impact on the philosophy of Utilitarianism in England during the nineteenth century. Soviet politician Joseph Stalin (1878–1953) lauded Epicurus by stating: "He was the greatest philosopher of all time. He was the one who recommended practicing virtue to derive the greatest joy from life". 
Friedrich Nietzsche once noted: "Even today many educated people think that the victory of Christianity over Greek philosophy is a proof of the superior truth of the former – although in this case it was only the coarser and more violent that conquered the more spiritual and delicate. So far as superior truth is concerned, it is enough to observe that the awakening sciences have allied themselves point by point with the philosophy of Epicurus, but point by point rejected Christianity." Academic interest in Epicurus and other Hellenistic philosophers increased over the course of the late twentieth and early twenty-first centuries, with an unprecedented number of monographs, articles, abstracts, and conference papers being published on the subject. The texts from the library of Philodemus of Gadara in the Villa of the Papyri in Herculaneum, first discovered between 1750 and 1765, are being deciphered, translated, and published by scholars part of the Philodemus Translation Project, funded by the United States National Endowment for the Humanities, and part of the Centro per lo Studio dei Papiri Ercolanesi in Naples. Epicurus's popular appeal among non-scholars is difficult to gauge, but it seems to be relatively comparable to the appeal of more traditionally popular ancient Greek philosophical subjects such as Stoicism, Aristotle, and Plato. See also Eikas Epikoros Philosophy of happiness Separation of church and state Notes References Bibliography Further reading Texts Oates, Whitney J. (1940). The Stoic and Epicurean philosophers, The Complete Extant Writings of Epicurus, Epictetus, Lucretius and Marcus Aurelius. New York: Modern Library. Studies Bailey C. (1928). The Greek Atomists and Epicurus, Oxford: Clarendon Press. William Wallace. Epicureanism. SPCK (1880) External links Stoic And Epicurean by Robert Drew Hicks (1910) (Internet Archive) Epicurea, Hermann Usener - full text . Society of Friends of Epicurus Discussion Forum for Epicurus and Epicurean philosophy - EpicureanFriends.com 4th-century BC Greek philosophers 4th-century BC philosophers 4th-century BC writers 3rd-century BC Greek philosophers 3rd-century BC writers 341 BC births 270 BC deaths Ancient Greek epistemologists Ancient Greek ethicists Ancient Greek metaphysicians Ancient Greek philosophers of mind Ancient Greek physicists Ancient Samians Empiricists Epicurean philosophers Greek male writers Materialists Philosophers of death Hellenistic-era philosophers in Athens
Epicurus
[ "Physics" ]
11,057
[ "Materialism", "Matter", "Materialists" ]
10,090
https://en.wikipedia.org/wiki/Erythromycin
Erythromycin is an antibiotic used for the treatment of a number of bacterial infections. This includes respiratory tract infections, skin infections, chlamydia infections, pelvic inflammatory disease, and syphilis. It may also be used during pregnancy to prevent Group B streptococcal infection in the newborn, and to improve delayed stomach emptying. It can be given intravenously and by mouth. An eye ointment is routinely recommended after delivery to prevent eye infections in the newborn. Common side effects include abdominal cramps, vomiting, and diarrhea. More serious side effects may include Clostridioides difficile colitis, liver problems, prolonged QT, and allergic reactions. It is generally safe in those who are allergic to penicillin. Erythromycin also appears to be safe to use during pregnancy. While generally regarded as safe during breastfeeding, its use by the mother during the first two weeks of life may increase the risk of pyloric stenosis in the baby. This risk also applies if taken directly by the baby during this age. It is in the macrolide family of antibiotics and works by decreasing bacterial protein production. Erythromycin was first isolated in 1952 from the bacteria Saccharopolyspora erythraea. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 271st most commonly prescribed medication in the United States, with more than 800,000 prescriptions. Medical uses Erythromycin can be used to treat bacteria responsible for causing infections of the skin and upper respiratory tract, including Streptococcus, Staphylococcus, Haemophilus and Corynebacterium genera. The following represents MIC susceptibility data for a few medically significant bacteria: Haemophilus influenzae: 0.015 to 256 μg/ml Staphylococcus aureus: 0.023 to 1024 μg/ml Streptococcus pyogenes: 0.004 to 256 μg/ml Corynebacterium minutissimum: 0.015 to 64 μg/ml It may be useful in treating gastroparesis due to this promotility effect. It has been shown to improve feeding intolerances in those who are critically ill. Intravenous erythromycin may also be used in endoscopy to help clear stomach contents to enhance endoscopic visualization, potentially improving diagnostic accuracy and subsequent management. Available forms Erythromycin is available in enteric-coated tablets, slow-release capsules, oral suspensions, ophthalmic solutions, ointments, gels, enteric-coated capsules, non enteric-coated tablets, non enteric-coated capsules, and injections. The following erythromycin combinations are available for oral dosage: erythromycin base (capsules, tablets) erythromycin estolate (capsules, oral suspension, tablets), contraindicated during pregnancy erythromycin ethylsuccinate (oral suspension, tablets) erythromycin stearate (oral suspension, tablets) For injection, the available combinations are: erythromycin gluceptate erythromycin lactobionate For ophthalmic use: erythromycin base (ointment) Adverse effects Gastrointestinal disturbances, such as diarrhea, nausea, abdominal pain, and vomiting, are very common because erythromycin is a motilin agonist. More serious side effects include arrhythmia with prolonged QT intervals, including torsades de pointes, and reversible deafness. Allergic reactions range from urticaria to anaphylaxis. Cholestasis and Stevens–Johnson syndrome are some other rare side effects that may occur. Studies have shown evidence both for and against the association of pyloric stenosis and exposure to erythromycin prenatally and postnatally. 
Exposure to erythromycin (especially long courses at antimicrobial doses, and also through breastfeeding) has been linked to an increased probability of pyloric stenosis in young infants. Erythromycin used for feeding intolerance in young infants has not been associated with hypertrophic pyloric stenosis. Erythromycin estolate has been associated with reversible hepatotoxicity in pregnant women in the form of elevated serum glutamic-oxaloacetic transaminase and is not recommended during pregnancy. Some evidence suggests similar hepatotoxicity in other populations. It can also affect the central nervous system, causing psychotic reactions, nightmares, and night sweats. Interactions Erythromycin is metabolized by enzymes of the cytochrome P450 system, in particular, by isozymes of the CYP3A superfamily. The activity of the CYP3A enzymes can be induced or inhibited by certain drugs (e.g., dexamethasone), which can cause it to affect the metabolism of many different drugs, including erythromycin. If other CYP3A substrates — drugs that are broken down by CYP3A — such as simvastatin (Zocor), lovastatin (Mevacor), or atorvastatin (Lipitor) — are taken concomitantly with erythromycin, levels of the substrates increase, often causing adverse effects. A noted drug interaction involves erythromycin and simvastatin, resulting in increased simvastatin levels and the potential for rhabdomyolysis. Another group of CYP3A4 substrates are drugs used for migraine such as ergotamine and dihydroergotamine; their adverse effects may be more pronounced if erythromycin is associated. Earlier case reports on sudden death prompted a study on a large cohort that confirmed a link between erythromycin, ventricular tachycardia, and sudden cardiac death in patients also taking drugs that prolong the metabolism of erythromycin (like verapamil or diltiazem) by interfering with CYP3A4. Hence, erythromycin should not be administered to people using these drugs, or drugs that also prolong the QT interval. Other examples include terfenadine (Seldane, Seldane-D), astemizole (Hismanal), cisapride (Propulsid, withdrawn in many countries for prolonging the QT time) and pimozide (Orap). Interactions with theophylline, which is used mostly in asthma, were also shown. Erythromycin and doxycycline can have a synergistic effect when combined and kill bacteria (E. coli) with a higher potency than the sum of the two drugs together. This synergistic relationship is only temporary. After approximately 72 hours, the relationship shifts to become antagonistic, whereby a 50/50 combination of the two drugs kills less bacteria than if the two drugs were administered separately. It may alter the effectiveness of combined oral contraceptive pills because of its effect on the gut flora. A review found that when erythromycin was given with certain oral contraceptives, there was an increase in the maximum serum concentrations and AUC of estradiol and dienogest. Erythromycin is an inhibitor of the cytochrome P450 system, which means it can have a rapid effect on levels of other drugs metabolised by this system, e.g., warfarin. Pharmacology Mechanism of action Erythromycin displays bacteriostatic activity or inhibits growth of bacteria, especially at higher concentrations. By binding to the 50s subunit of the bacterial rRNA complex, protein synthesis and subsequent structure and function processes critical for life or replication are inhibited. 
Erythromycin interferes with aminoacyl translocation, preventing the transfer of the tRNA bound at the A site of the rRNA complex to the P site of the rRNA complex. Without this translocation, the A site remains occupied, thus the addition of an incoming tRNA and its attached amino acid to the nascent polypeptide chain is inhibited. This interferes with the production of functionally useful proteins, which is the basis of this antimicrobial action. Erythromycin increases gut motility by binding to motilin receptor, thus it is a motilin receptor agonist in addition to its antimicrobial properties. It can be therefore administered intravenously as a stomach emptying stimulant. Pharmacokinetics Erythromycin is easily inactivated by gastric acid; therefore, all orally administered formulations are given as either enteric-coated or more-stable salts or esters, such as erythromycin ethylsuccinate. Erythromycin is very rapidly absorbed, and diffuses into most tissues and phagocytes. Due to the high concentration in phagocytes, erythromycin is actively transported to the site of infection, where, during active phagocytosis, large concentrations of erythromycin are released. Metabolism Most of erythromycin is metabolised by demethylation in the liver by the hepatic enzyme CYP3A4. Its main elimination route is in the bile with little renal excretion, 2%–15% unchanged drug. Erythromycin's elimination half-life ranges between 1.5 and 2.0 hours and is between 5 and 6 hours in patients with end-stage renal disease. Erythromycin levels peak in the serum 4 hours after dosing; ethylsuccinate peaks 0.5–2.5 hours after dosing, but can be delayed if digested with food. Erythromycin crosses the placenta and enters breast milk. The American Association of Pediatrics determined erythromycin is safe to take while breastfeeding. Absorption in pregnant patients has been shown to be variable, frequently resulting in levels lower than in nonpregnant patients. Chemistry Composition Standard-grade erythromycin is primarily composed of four related compounds known as erythromycins A, B, C, and D. Each of these compounds can be present in varying amounts and can differ by lot. Erythromycin A has been found to have the most antibacterial activity, followed by erythromycin B. Erythromycins C and D are about half as active as erythromycin A. Some of these related compounds have been purified and can be studied and researched individually. Synthesis Over the three decades after the discovery of erythromycin A and its activity as an antimicrobial, many attempts were made to synthesize it in the laboratory. The presence of 10 stereogenic carbons and several points of distinct substitution has made the total synthesis of erythromycin A a formidable task. Complete syntheses of erythromycins’ related structures and precursors such as 6-deoxyerythronolide B have been accomplished, giving way to possible syntheses of different erythromycins and other macrolide antimicrobials. Woodward successfully completed the synthesis of erythromycin A, which was published in 1981. History In 1949 Abelardo B. Aguilar, a Filipino scientist, sent some soil samples to his employer at Eli Lilly. Aguilar managed to isolate erythromycin from the metabolic products of a strain of Streptomyces erythreus (designation changed to Saccharopolyspora erythraea) found in the samples. Aguilar received no further credit or compensation for his discovery. 
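The elimination figures quoted under Pharmacokinetics above follow the standard first-order model, in which the amount of drug remaining falls by half every half-life. The sketch below is illustrative only: the two half-lives are taken from the ranges quoted above (roughly 2 hours normally and up to about 6 hours in end-stage renal disease), while the time points and the 100% starting level are arbitrary choices for demonstration, not clinical values.

```python
import math

def remaining_fraction(t_hours: float, half_life_hours: float) -> float:
    """Fraction of a dose still present after t_hours, assuming first-order elimination."""
    k = math.log(2) / half_life_hours      # elimination rate constant
    return math.exp(-k * t_hours)          # equivalent to 0.5 ** (t_hours / half_life_hours)

# Illustrative comparison: normal elimination (~2 h half-life) versus end-stage renal disease (~6 h).
for half_life in (2.0, 6.0):
    remaining = [round(100 * remaining_fraction(t, half_life)) for t in (0, 2, 4, 8)]
    print(f"half-life {half_life} h -> % remaining at 0/2/4/8 h: {remaining}")
```

Under these illustrative assumptions, roughly a quarter of a dose remains after 4 hours with normal elimination, compared with around 60% when the half-life is 6 hours, which illustrates why impaired elimination leads to higher sustained drug levels.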
The scientist was allegedly promised a trip to the company's manufacturing plant in Indianapolis, but it was never fulfilled. In a letter to the company's president, Aguilar wrote: “A leave of absence is all I ask as I do not wish to sever my connection with a great company which has given me wonderful breaks in life.” The request was not granted. Aguilar reached out to Eli Lilly again in 1993, requesting royalties from sales of the drug over the years, intending to use them to put up a foundation for poor and sickly Filipinos. This request was also denied. He died in September of the same year. Lilly filed for patent protection on the compound which was granted in 1953. The product was launched commercially in 1952 under the brand name Ilosone (after the Philippine region of Iloilo where it was originally collected). Erythromycin was formerly also called Ilotycin. The antibiotic clarithromycin was invented by scientists at the Japanese drug company Taisho Pharmaceutical in the 1970s as a result of their efforts to overcome the acid instability of erythromycin. Society and culture Economics It is available as a generic medication. In the United States, in 2014, the price increased to seven dollars per 500mg tablet. The US price of erythromycin rose three times between 2010 and 2015, from 24 cents per 500mg tablet in 2010 to $8.96 in 2015. In 2017, a Kaiser Health News study found that the per-unit cost of dozens of generics doubled or even tripled from 2015 to 2016, increasing spending by the Medicaid program. Due to price increases by drug manufacturers, Medicaid paid on average $2,685,330 more for Erythromycin in 2016 compared to 2015 (not including rebates). In the US by 2018, generic drug prices had climbed another 5% on average. The UK price listed in the BNF for erythromycin 500mg tablets was £36.40 for 100 tablets (36.4 pence each) . This price is not paid by NHS patients: there is no NHS prescription charge in Scotland, Wales, and Northern Ireland; while NHS patients in England without an exemption are liable for a flat rate prescription charge. , that charge was £9.90 for each prescribed medicine. Brand names Brand names include Robimycin, E-Mycin, E.E.S. Granules, E.E.S.-200, E.E.S.-400, E.E.S.-400 Filmtab, Erymax, Ery-Tab, Eryc, Ranbaxy, Erypar, EryPed, Eryped 200, Eryped 400, Erythrocin Stearate Filmtab, Erythrocot, E-Base, Erythroped, Ilosone, MY-E, Pediamycin, Zineryt, Abboticin, Abboticin-ES, Erycin, PCE Dispertab, Stiemycine, Acnasol, and Tiloryth. Veterinary uses Erythromycin is also used in fishcare for the "broad spectrum treatment and control of bacterial disease". Body slime, mouth fungus, furunculosis, bacterial gill illness, and hemorrhagic septicaemia are all examples of bacterial diseases in fish that may be treated and controlled with this therapy. The usage of Erythromycin in fishcare is mainly limited to therapies targeting gram-positive bacteria. References Tertiary alcohols CYP3A4 inhibitors Dimethylamino compounds Ethers Hepatotoxins HERG blocker Lactones Drugs developed by Pfizer Drugs developed by Eli Lilly and Company Macrolide antibiotics World Health Organization essential medicines Wikipedia medicine articles ready to translate
Erythromycin
[ "Chemistry" ]
3,311
[ "Organic compounds", "Functional groups", "Ethers" ]
10,091
https://en.wikipedia.org/wiki/Environmental%20law
Environmental laws are laws that protect the environment. Environmental law is the collection of laws, regulations, agreements and common law that governs how humans interact with their environment. This includes environmental regulations; laws governing management of natural resources, such as forests, minerals, or fisheries; and related topics such as environmental impact assessments. Environmental law is seen as the body of laws concerned with protecting living things (human beings included) from the harm that human activity may immediately or eventually cause to them or their species, either directly or to the media and habitats on which they depend. Related areas of concern include economic development, wildlife conservation, and international relations. History Examples of laws designed to preserve the environment for its own sake or for human enjoyment are found throughout history. In the common law, the primary protection was found in the law of nuisance, but this only allowed for private actions for damages or injunctions if there was harm to land. Nuisance actions were thus brought over smells emanating from pigsties, the dumping of rubbish (on a strict liability basis), or damage from exploding dams. Private enforcement, however, was limited and found to be woefully inadequate to deal with major environmental threats, particularly threats to common resources. During the "Great Stink" of 1858, the dumping of sewage into the River Thames began to smell so ghastly in the summer heat that Parliament had to be evacuated. Ironically, the Metropolitan Commission of Sewers Act 1848 had allowed the Metropolitan Commission for Sewers to close cesspits around the city in an attempt to "clean up", but this simply led people to pollute the river. In 19 days, Parliament passed a further Act to build the London sewerage system. London also suffered from terrible air pollution, and this culminated in the "Great Smog" of 1952, which in turn triggered its own legislative response: the Clean Air Act 1956. The basic regulatory structure was to set limits on emissions for households and businesses (particularly burning of coal) while an inspectorate would enforce compliance. Pollution control Air quality An air quality index (AQI) is used to identify contaminants present in the air that can affect public health. Monitoring concentrates on elevated levels of the six major pollutants, including nitrogen dioxide, ozone, carbon monoxide, and sulfur dioxide; a simplified index calculation is sketched below. These measurements indicate which concentrations of contaminants are safe to breathe. Pollutant levels vary with the seasons, which affects when air quality problems are likely to peak. Exposure to dangerous pollutants can cause adverse health effects over time and can contribute to population decline. Water quality Waste management Consuming and wasting less can greatly reduce the amount of energy used in products and help limit the consumption of goods over time. Waste separation can recover additional resources and filter out waste in the process. Ways to reduce the amount of waste include green purchasing and cutting back on disposable products that contribute to climate change. Rising wealth has increased the environmental risks associated with waste production, including waste generated by corporations, while regulation has remained limited. Contaminant cleanup Sewage treatment is used to filter out any contaminants that are present, so that water quality remains clean and safe to consume. 
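The air quality index mentioned above is, in most national schemes, computed by piecewise-linear interpolation of a measured pollutant concentration between regulatory breakpoints. The sketch below illustrates that idea only; the breakpoint table is a simplified assumption loosely modeled on the US EPA PM2.5 scale and is not taken from this article or from any particular statute.

```python
# Hypothetical PM2.5 breakpoints: (conc_low, conc_high, index_low, index_high).
# These values are illustrative assumptions; real breakpoints vary by pollutant and jurisdiction.
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50),       # "good"
    (12.1, 35.4, 51, 100),    # "moderate"
    (35.5, 55.4, 101, 150),   # "unhealthy for sensitive groups"
    (55.5, 150.4, 151, 200),  # "unhealthy"
]

def sub_index(concentration: float) -> int:
    """Linearly interpolate a single-pollutant sub-index from its breakpoint table."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= concentration <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (concentration - c_lo) + i_lo)
    raise ValueError("concentration outside tabulated breakpoints")

# The overall AQI is typically the highest sub-index across all monitored pollutants.
print(sub_index(22.0))  # a mid-range "moderate" reading under these illustrative breakpoints
```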
If wastewater is left untreated for long periods, antibiotic resistance may develop and eventually cause health problems, because resistant bacteria are not easily removed by treatment plants. Government officials need to test frequently to check that filtering systems are working efficiently. Wastewater and river ecosystems can effectively remove heavy metals such as lead and cadmium with the help of sea cucumbers, algae, and decayed plants, which reduce the amount of heavy toxins in the water. Chemical safety Chemical safety laws govern the use of chemicals in human activities, particularly human-made chemicals in modern industrial applications. As contrasted with media-oriented environmental laws (e.g., air or water quality laws), chemical control laws seek to manage the (potential) pollutants themselves. Regulatory efforts include banning specific chemical constituents in consumer products (e.g., Bisphenol A in plastic bottles), and regulating pesticides. Safety statutes such as the Toxic Substances Control Act (TSCA) empower the Environmental Protection Agency (EPA) to determine which chemicals are considered harmful, but in practice restrictions are placed on only a few chemicals per year, leaving many other potentially harmful chemicals unexamined. According to the EPA, only 200 of the roughly 84,000 chemicals in use in 2013 had been tested, raising concerns about whether the TSCA inventory is up to date. Chemicals need to be tested for toxicity, instability, and flammability, including how they behave when coming into contact with other chemicals. It is essential to identify and analyze different chemicals for their potential risks, since failing to do so can have dangerous outcomes. Resource sustainability Impact assessment Water resources Water resources laws govern the ownership and use of water resources, including surface water and ground water. Regulatory areas may include water conservation, use restrictions, and ownership regimes. According to the United Nations Committee, such laws include, at a bare minimum, the right to be provided with clean water. Water resources should be safe, clean, accessible, and affordable for human use. This also means having facilities that combat water pollution and maintain clean sources of water. Duty dumping may occur when a party is not held responsible for fulfilling its obligations to reduce environmental impact. Financial commitments are required to reach quality standards and to avoid the risk of contamination that would raise public health concerns. Economic measures are used to budget for the normal regulation of running water, and these vary among countries' distribution systems in the effort to meet basic human standards. Community involvement is crucial for addressing water safety concerns and for building support for programs established to further regulate water consumption. Mineral resources Managing mineral resources is challenging because imported goods are often difficult to trace back through supply chains to the sites where they were mined. Sustainability and quality can be pursued through regulatory obligations that make importers take responsibility for the sources of their goods. The risks involved include unsafe working conditions, since mining is labor-intensive, and threats to workers' rights, since environmental hazards such as water contamination and the inhalation of airborne mineral dust are likely to occur during mining. 
Mining conditions vary by country: some countries prohibit mining where working conditions are unsafe, while others encourage it because of the investment it attracts as their mining industries grow. Forest resources Wildlife and plants Wildlife laws govern the potential impact of human activity on wild animals, whether directly on individuals or populations, or indirectly via habitat degradation. Similar laws may operate to protect plant species. Such laws may be enacted entirely to protect biodiversity, or as a means for protecting species deemed important for other reasons. Regulatory efforts may include the creation of special conservation statuses, prohibitions on killing, harming, or disturbing protected species, efforts to induce and support species recovery, establishment of wildlife refuges to support conservation, and prohibitions on trafficking in species or animal parts to combat poaching. Illegal wildlife trade has become a form of organized crime, prompting efforts to track down poachers through law enforcement and high-level security approaches. Once regulatory standards are established, criminal activity involving animals comes under pressure and often shifts to other places to avoid law enforcement. Conservationists argue that pursuing poachers is risky because they are likely to be armed, leaving little choice but to recruit armed rangers to stop their crimes. Tackling illegal wildlife crime requires involvement and support from international policy and law enforcement bodies such as INTERPOL and EUROPOL. EUROPOL has faced political pressure to combat environmental crime, making it a priority area. Ecological changes driven by human activity have caused wildlife populations to decline over time. Relationships between humans and other species can produce negative outcomes for economic development and for practices of cultural significance. Fish and game Fish and game laws regulate the right to pursue and take or kill certain kinds of fish and wild animal (game). Such laws may restrict the days to harvest fish or game, the number of animals caught per person, the species harvested, or the weapons or fishing gear used. Such laws may seek to balance competing needs for preservation and harvest and to manage both environment and populations of fish and game. Game laws can provide a legal structure to collect license fees and other money which is used to fund conservation efforts as well as to obtain harvest information used in wildlife management practice. Illegal fishing often goes unreported, with little to no regulation of how catches are transported and handled in facilities. Overfishing has harmed fisheries, causing a decline in economic growth and the destruction of ecosystems. If illegal fishing is not managed carefully, pressure on marine resources will grow for countries that rely on fishing to provide food for their communities. Another issue is an increase in marine pollution: ocean disposal of toxic waste and the resulting lack of oxygen reduce fish populations and make harvesting fish difficult. 
Pope Francis in his 2015 encyclical letter Laudato si' acknowledged that "political realism may call for transitional measures and technologies, so long as these are accompanied by the gradual framing and acceptance of binding commitments". The principles discussed below are not an exhaustive list and are not universally recognized or accepted. Nonetheless, they represent important principles for the understanding of environmental law around the world. Sustainable development Defined by the United Nations Environment Programme (UNEP) as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs," sustainable development may be considered together with the concepts of "integration" (development cannot be considered in isolation from sustainability) and "interdependence" (social and economic development, and environmental protection, are interdependent). Laws mandating environmental impact assessment and requiring or encouraging development to minimize environmental impacts may be assessed against this principle. The modern concept of sustainable development was a topic of discussion at the 1972 United Nations Conference on the Human Environment (Stockholm Conference), and the driving force behind the 1983 World Commission on Environment and Development (WCED, or Bruntland Commission). In 1992, the first UN Earth Summit resulted in the Rio Declaration, Principle 3 of which reads: "The right to development must be fulfilled so as to equitably meet developmental and environmental needs of present and future generations." Sustainable development has been a core concept of international environmental discussion ever since, including at the World Summit on Sustainable Development (Earth Summit 2002), and the United Nations Conference on Sustainable Development (Earth Summit 2012, or Rio+20). Equity Defined by UNEP to include intergenerational equity – "the right of future generations to enjoy a fair level of the common patrimony" – and intragenerational equity – "the right of all people within the current generation to fair access to the current generation's entitlement to the Earth's natural resources" – environmental equity considers the present generation under an obligation to account for long-term impacts of activities, and to act to sustain the global environment and resource base for future generations. Pollution control and resource management laws may be assessed against this principle. Equity is approached by combatting social justice for the sake of reaching climate goals in having more sustainability. International law decided to shift from equality to equity in hopes of acknowledging the needs and provide fair share of resources. The CBDR principle had been established back in 2015, but was modified by the Paris Agreement regarding climate change. Transboundary responsibility Defined in the international law context as an obligation to protect one's own environment, and to prevent damage to neighboring environments, UNEP considers transboundary responsibility at the international level as a potential limitation on the rights of the sovereign state. Laws that act to limit externalities imposed upon human health and the environment may be assessed against this principle. Community participation is analyzed through international law that looks into environmental law and cases such as genocide and providing aid. Such laws include heritage laws that describe the cultural aspect and property rights of others. 
Responsibilities include pollution damage among two states, leading to calamities for countries to deal with. Some countries contribute to transboundary pollution and pass on to other places to clean up, establishing laws to decide on holding countries liable on causing damage to resources. Having strict liability depends on negotiations done in international court and what values are held for environmental reasons. Public participation and transparency Identified as essential conditions for "accountable governments,... industrial concerns", and organizations generally, public participation and transparency are presented by UNEP as requiring "effective protection of the human right to hold and express opinions and to seek, receive and impart ideas,... a right of access to appropriate, comprehensible and timely information held by governments and industrial concerns on economic and social policies regarding the sustainable use of natural resources and the protection of the environment, without imposing undue financial burdens upon the applicants and with adequate protection of privacy and business confidentiality," and "effective judicial and administrative proceedings". These principles are present in environmental impact assessment, laws requiring publication and access to relevant environmental data, and administrative procedure. Participating has shown to be more effective when trying to reach a point and use strategic matters for implementing action within policies. Public policies that are managed for sustainability would need financial incentives since the work put into these cases require lots of support and looking into ecological systems. International relations will need a relationship between participating in policy changes and TGI governance involvement. Debates with international relations still occur, focusing on the importance of participating in protecting natural resources by being involved with government policies. Precautionary principle One of the most commonly encountered and controversial principles of environmental law, the Rio Declaration formulated the precautionary principle as follows: In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation. The principle may play a role in any debate over the need for environmental regulation. Precautions are done in regards of having higher levels of protection among resources such as human health concerns and conservation for protecting the environment. According to the EU, it has not properly defined the meaning of precautionary principle since it depends on protection of resources and policies implemented. The CJEU expressed the concern of “where there is uncertainty as to the existence or extent of human health, the institutions may take protective measures without having to wait until the reality and seriousness of those risks become fully apparent." Budgets are proposed for recovering any damages to resources done to communities in relation to water, air, and soil that may be impacted as a result of natural causes such as floods, droughts, hurricanes, and wildfires. 
Prevention The concept of prevention can perhaps better be considered an overarching aim that gives rise to a multitude of legal mechanisms, including prior assessment of environmental harm, licensing or authorization regimes that set out the conditions for operation and the consequences for violation of those conditions, as well as the adoption of strategies and policies. Emission limits and other product or process standards, the use of best available techniques, and similar measures can all be seen as applications of the concept of prevention. Prevention is necessary to avoid the destruction of resources and the cost of recovering from incidents, so that communities do not fall apart. Natural disasters themselves are unavoidable, but taking action before an incident happens can result in safe evacuations, and disasters can to some extent be anticipated from the experience of people who have previously lived through them. Pollution can be prevented if policies either ban substances or reduce the levels of contaminants in air, water, and soil to healthy conditions. Wildlife can be conserved through foreign aid and through funding programs that protect against poachers, who may be armed and affiliated with organized groups. Polluter pays principle The polluter pays principle is the idea that "the environmental costs of economic activities, including the cost of preventing potential harm, should be internalized rather than imposed upon society at large." All issues related to responsibility for the cost of environmental remediation and compliance with pollution control regulations involve this principle. The rise in carbon emissions has resulted in part from corporations and facilities that fail to acknowledge the long-term risks of climate change. Policies and practices that cause environmental damage call for corrective action in the hope of preventing the harm from happening again. Evidence of harm must be shown to establish the damage done and how it is to be resolved. Government regulation may set aside the goal of conserving resources in favor of other interests, without acknowledging those who suffer as a result. Theory Environmental law is a continuing source of controversy. Debates over the necessity, fairness, and cost of environmental regulation are ongoing, as well as regarding the appropriateness of regulations versus market solutions to achieve even agreed-upon ends. Allegations of scientific uncertainty fuel the ongoing debate over greenhouse gas regulation, and are a major factor in debates over whether to ban particular pesticides. In cases where the science is well settled, it is not unusual to find that corporations intentionally hide or distort the facts, or sow confusion. It is very common for regulated industry to argue against environmental regulation on the basis of cost. Difficulties arise in performing cost–benefit analysis of environmental issues: it is difficult to quantify the value of an environmental good such as a healthy ecosystem, clean air, or species diversity. Many environmentalists' response to pitting economy against ecology is summed up by former Senator and founder of Earth Day Gaylord Nelson: "The economy is a wholly owned subsidiary of the environment, not the other way around." Furthermore, environmental issues are seen by many as having an ethical or moral dimension that transcends financial cost. Even so, some efforts are underway to systematically recognize environmental costs and assets, and account for them properly in economic terms. 
While affected industries spark controversy in fighting regulation, there are also many environmentalists and public interest groups who believe that current regulations are inadequate, and advocate for stronger protection. Environmental law conferences – such as the annual Public Interest Environmental Law Conference in Eugene, Oregon – typically have this focus, also connecting environmental law with class, race, and other issues. An additional debate is to what extent environmental laws are fair to all regulated parties. For instance, researchers Preston Teeter and Jorgen Sandberg highlight how smaller organizations can often incur disproportionately larger costs as a result of environmental regulations, which can ultimately create an additional barrier to entry for new firms, thus stifling competition and innovation. International environmental law Global and regional environmental issues are increasingly the subject of international law. Debates over environmental concerns implicate core principles of international law and have been the subject of numerous international agreements and declarations. Customary international law is an important source of international environmental law. These are the norms and rules that countries follow as a matter of custom and they are so prevalent that they bind all states in the world. When a principle becomes customary law is not clear cut and many arguments are put forward by states not wishing to be bound. Examples of customary international law relevant to the environment include the duty to warn other states promptly about icons of an environmental nature and environmental damages to which another state or states may be exposed, and Principle 21 of the Stockholm Declaration ('good neighborliness' or sic utere). Given that customary international law is not static but ever evolving and the continued increase of air pollution (carbon dioxide) causing climate changes, has led to discussions on whether basic customary principles of international law, such as the jus cogens (peremptory norms) and erga omnes principles could be applicable for enforcing international environmental law. Numerous legally binding international agreements encompass a wide variety of issue-areas, from terrestrial, marine and atmospheric pollution through to wildlife and biodiversity protection. International environmental agreements are generally multilateral (or sometimes bilateral) treaties (a.k.a. convention, agreement, protocol, etc.). Protocols are subsidiary agreements built from a primary treaty. They exist in many areas of international law but are especially useful in the environmental field, where they may be used to regularly incorporate recent scientific knowledge. They also permit countries to reach an agreement on a framework that would be contentious if every detail were to be agreed upon in advance. The most widely known protocol in international environmental law is the Kyoto Protocol, which followed from the United Nations Framework Convention on Climate Change. While the bodies that proposed, argued, agreed upon, and ultimately adopted existing international agreements vary according to each agreement, certain conferences, including 1972's United Nations Conference on the Human Environment, 1983's World Commission on Environment and Development, 1992's United Nations Conference on Environment and Development, and 2002's World Summit on Sustainable Development have been particularly important. 
Multilateral environmental agreements sometimes create an International Organization, Institution or Body responsible for implementing the agreement. Major examples are the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) and the International Union for Conservation of Nature (IUCN). International environmental law also includes the opinions of international courts and tribunals. While there are few and they have limited authority, the decisions carry much weight with legal commentators and are quite influential on the development of international environmental law. One of the biggest challenges in international decisions is to determine an adequate compensation for environmental damages. The courts include the International Court of Justice (ICJ), the International Tribunal for the Law of the Sea (ITLOS), the European Court of Justice, European Court of Human Rights and other regional treaty tribunals. Previous research found that economic development level and the nations' moral value affected environmental regulation compliance. Developed countries like the US, EU, and Australia are urging for better laws targeting the reduction of harmful environmental impacts. It is worth noting that there is a direct correlation between economic development and the distance between law and ethics. Developed countries have a closer relationship between environmental laws and moral values. If a country's legal system is completely divorced from its moral values, people may not abide by the laws and they will lose their significance and effectiveness. Despite environmental regulations, the water in India's River Ganges remains poor as an example. Around the world Africa According to the International Network for Environmental Compliance and Enforcement (INECE), the major environmental issues in Africa are "drought and flooding, air pollution, deforestation, loss of biodiversity, freshwater availability, degradation of soil and vegetation, and widespread poverty." The U.S. Environmental Protection Agency (EPA) is focused on the "growing urban and industrial pollution, water quality, electronic waste and indoor air from cookstoves." They hope to provide enough aid on concerns regarding pollution before their impacts contaminate the African environment as well as the global environment. By doing so, they intend to "protect human health, particularly vulnerable populations such as children and the poor." In order to accomplish these goals in Africa, EPA programs are focused on strengthening the ability to enforce environmental laws as well as public compliance to them. Other programs work on developing stronger environmental laws, regulations, and standards. Asia The Asian Environmental Compliance and Enforcement Network (AECEN) is an agreement between 16 Asian countries dedicated to improving cooperation with environmental laws in Asia. These countries include Cambodia, China, Indonesia, India, Maldives, Japan, Korea, Malaysia, Nepal, Philippines, Pakistan, Singapore, Sri Lanka, Thailand, Vietnam, and Lao PDR. European Union The European Union issues secondary legislation on environmental issues that are valid throughout the EU (so called regulations) and many directives that must be implemented into national legislation from the 27 member states (national states). Examples are the Regulation (EC) No. 
338/97 on the implementation of CITES; or the Natura 2000 network the centerpiece for nature & biodiversity policy, encompassing the bird Directive (79/409/EEC/ changed to 2009/147/EC)and the habitats directive (92/43/EEC). Which are made up of multiple SACs (Special Areas of Conservation, linked to the habitats directive) & SPAs (Special Protected Areas, linked to the bird directive), throughout Europe. EU legislation is ruled in Article 249 Treaty for the Functioning of the European Union (TFEU). Topics for common EU legislation are: Climate change Air pollution Water protection and management Waste management Soil protection Protection of nature, species and biodiversity Noise pollution Cooperation for the environment with third countries (other than EU member states) Civil protection In February 2024, the European Parliament adopted a law making a big, intentionally caused, environmental damage “comparable to ecocide” a crime that can be punished by up to 10 years in prison. The members of the Union should enter it to their national law, during 2 years. The Parliament also approved a nature restoration law which obligate members to restore 20% of degraded ecosystems (including 30% of drained peatland) by 2030 and 100% by 2050. Middle East Environmental law is rapidly growing in the Middle East. The U.S. Environmental Protection Agency is working with countries in the Middle East to improve "environmental governance, water pollution and water security, clean fuels and vehicles, public participation, and pollution prevention." Oceania The main concerns about environmental issues in Oceania are "illegal releases of air and water pollutants, illegal logging/timber trade, illegal shipment of hazardous wastes, including e-waste and ships slated for destruction, and insufficient institutional structure/lack of enforcement capacity". The Secretariat of the Pacific Regional Environmental Programme (SPREP) is an international organization between Australia, the Cook Islands, FMS, Fiji, France, Kiribati, Marshall Islands, Nauru, New Zealand, Niue, Palau, PNG, Samoa, Solomon Island, Tonga, Tuvalu, US, and Vanuatu. The SPREP was established in order to provide assistance in improving and protecting the environment as well as assure sustainable development for future generations. Australia Commonwealth v Tasmania (1983), also known as the "Tasmanian Dam Case", was a highly significant case in Australian environmental law. The Environment Protection and Biodiversity Conservation Act 1999 is the centerpiece of environmental legislation in Australia. It sets up the "legal framework to protect and manage nationally and internationally important flora, fauna, ecological communities and heritage places" and focuses on protecting world heritage properties, national heritage properties, wetlands of international importance, nationally threatened species and ecological communities, migratory species, Commonwealth marine areas, Great Barrier Reef Marine Park, and the environment surrounding nuclear activities. However, it has been subject to numerous reviews examining its shortcomings, the latest taking place in mid-2020. The interim report of this review concluded that the laws created to protect unique species and habitats are ineffective. Brazil The Brazilian government created the Ministry of Environment in 1992 in order to develop better strategies for protecting the environment, using natural resources sustainably, and enforcing public environmental policies. 
The Ministry of Environment has authority over policies involving environment, water resources, preservation, and environmental programs involving the Amazon. Canada The Department of the Environment Act establishes the Department of the Environment in the Canadian government as well as the position Minister of the Environment. Their duties include "the preservation and enhancement of the quality of the natural environment, including water, air and soil quality; renewable resources, including migratory birds and other non-domestic flora and fauna; water; meteorology;" The Environmental Protection Act is the main piece of Canadian environmental legislation that was put into place March 31, 2000. The Act focuses on "respecting pollution prevention and the protection of the environment and human health in order to contribute to sustainable development." Other principle federal statutes include the Canadian Environmental Assessment Act, and the Species at Risk Act. When provincial and federal legislation are in conflict federal legislation takes precedence, that being said individual provinces can have their own legislation such as Ontario's Environmental Bill of Rights, and Clean Water Act. China According to the U.S. Environmental Protection Agency, "China has been working with great determination in recent years to develop, implement, and enforce a solid environmental law framework. Chinese officials face critical challenges in effectively implementing the laws, clarifying the roles of their national and provincial governments, and strengthening the operation of their legal system." Explosive economic and industrial growth in China has led to significant environmental degradation, and China is currently in the process of developing more stringent legal controls. The harmonization of Chinese society and the natural environment is billed as a rising policy priority. Environmental lawsuits have been available in China since the early 2000s. Public protest, however, plays a greater role in shaping China's environmental policy than litigation does. Congo (RC) In the Republic of Congo, inspired by the African models of the 1990s, the phenomenon of constitutionalization of environmental law appeared in 1992, which completed an historical development of environmental law and policy dating back to the years of independence and even long before the colonization. It gives a constitutional basis to environmental protection, which traditionally was part of the legal framework. The two Constitutions of 15 March 1992 and 20 January 2002 concretize this paradigm, by stating a legal obligation of a clean environment, by establishing a principle of compensation and a foundation of criminal nature. By this phenomenon, Congolese environmental law is situated between non-regression and the search for efficiency." Ecuador With the enactment of the 2008 Constitution, Ecuador became the first country in the world to codify the Rights of Nature. The Constitution, specifically Articles 10 and 71–74, recognizes the inalienable rights of ecosystems to exist and flourish, gives people the authority to petition on the behalf of ecosystems, and requires the government to remedy violations of these rights. The rights approach is a break away from traditional environmental regulatory systems, which regard nature as property and legalize and manage degradation of the environment rather than prevent it. 
The Rights of Nature articles in Ecuador's constitution are part of a reaction to a combination of political, economic, and social phenomena. Ecuador's abusive past with the oil industry, most famously the class-action litigation against Chevron, and the failure of an extraction-based economy and neoliberal reforms to bring economic prosperity to the region has resulted in the election of a New Leftist regime, led by President Rafael Correa, and sparked a demand for new approaches to development. In conjunction with this need, the principle of "Buen Vivir," or good living – focused on social, environmental and spiritual wealth versus material wealth – gained popularity among citizens and was incorporated into the new constitution. The influence of indigenous groups, from whom the concept of "Buen Vivir" originates, in the forming of the constitutional ideals also facilitated the incorporation of the Rights of Nature as a basic tenet of their culture and conceptualization of "Buen Vivir." Egypt The Environmental Protection Law outlines the responsibilities of the Egyptian government to "preparation of draft legislation and decrees pertinent to environmental management, collection of data both nationally and internationally on the state of the environment, preparation of periodical reports and studies on the state of the environment, formulation of the national plan and its projects, preparation of environmental profiles for new and urban areas, and setting of standards to be used in planning for their development, and preparation of an annual report on the state of the environment to be prepared to the President." India In India, Environmental law is governed by the Environment Protection Act, 1986. This act is enforced by the Central Pollution Control Board and the numerous State Pollution Control Boards. Apart from this, there are also individual legislation specifically enacted for the protection of Water, Air, Wildlife, etc. Such legislations include : The Water (Prevention and Control of Pollution) Act, 1974 The Water (Prevention and Control of Pollution) Cess Act, 1977 The Forest (Conservation) Act, 1980 The Air (Prevention and Control of Pollution) Act, 1981 Air (Prevention and Control of Pollution) (Union Territories) Rules, 1983 The Biological Diversity Act, 2002 and the Wild Life Protection Act, 1972 Batteries (Management and Handling) Rules, 2001 Recycled Plastics, Plastics Manufacture and Usage Rules, 1999 The National Green Tribunal established under the National Green Tribunal Act of 2010 has jurisdiction over all environmental cases dealing with a substantial environmental question and acts covered under the Water (Prevention and Control of Pollution) Act, 1974. Water (Prevention and Control of Pollution) Cess Rules, 1978 Ganga Action Plan, 1986 The Forest (Conservation) Act, 1980 Wildlife protection Act, 1972 The Public Liability Insurance Act, 1991 and the Biological Diversity Act, 2002. The acts covered under Indian Wild Life Protection Act 1972 do not fall within the jurisdiction of the National Green Tribunal. Appeals can be filed in the Hon'ble Supreme Court of India. Basel Convention on Control of Transboundary Movements on Hazardous Wastes and Their Disposal, 1989 and Its Protocols Hazardous Wastes (Management and Handling) Amendment Rules, 2003 Japan The Basic Environmental Law is the basic structure of Japan's environmental policies replacing the Basic Law for Environmental Pollution Control and the Nature Conservation Law. 
The updated law aims to address "global environmental problems, urban pollution by everyday life, loss of accessible natural environment in urban areas and degrading environmental protection capacity in forests and farmlands." The three basic environmental principles that the Basic Environmental Law follows are "the blessings of the environment should be enjoyed by the present generation and succeeded to the future generations, a sustainable society should be created where environmental loads by human activities are minimized, and Japan should contribute actively to global environmental conservation through international cooperation." From these principles, the Japanese government have established policies such as "environmental consideration in policy formulation, establishment of the Basic Environment Plan which describes the directions of long-term environmental policy, environmental impact assessment for development projects, economic measures to encourage activities for reducing environmental load, improvement of social infrastructure such as sewerage system, transport facilities etc., promotion of environmental activities by corporations, citizens and NGOs, environmental education, and provision of information, promotion of science and technology." New Zealand The Ministry for the Environment and Office of the Parliamentary Commissioner for the Environment were established by the Environment Act 1986. These positions are responsible for advising the Minister on all areas of environmental legislation. A common theme of New Zealand's environmental legislation is sustainably managing natural and physical resources, fisheries, and forests. The Resource Management Act 1991 is the main piece of environmental legislation that outlines the government's strategy to managing the "environment, including air, water soil, biodiversity, the coastal environment, noise, subdivision, and land use planning in general." Russia The Ministry of Natural Resources and Environment of the Russian Federation makes regulation regarding "conservation of natural resources, including the subsoil, water bodies, forests located in designated conservation areas, fauna and their habitat, in the field of hunting, hydrometeorology and related areas, environmental monitoring and pollution control, including radiation monitoring and control, and functions of public environmental policy making and implementation and statutory regulation." Singapore Singapore is a signatory of the Convention on Biological Diversity; with most of its CBD obligations being overseen by the National Biodiversity Reference Centre, a division of its National Parks Board (NParks). Singapore is also a signatory of the Convention on International Trade in Endangered Animals, with its obligations under that treaty also being overseen by NParks. The Parliament of Singapore has enacted numerous pieces of legislation to fulfil its obligations under these treaties, such as the Parks and Trees Act, Endangered Species (Import and Export) Act, and Wildlife Act. The new Wildlife (Protected Wildlife Species) Rules 2020 marks the first instance in Singapore's history that direct legal protection has been offered for specific named species, as listed in Parts 1-5 of the Rules' schedule. South Africa United Kingdom United States Vietnam Vietnam is currently working with the U.S. Environmental Protection Agency on dioxin remediation and technical assistance in order to lower methane emissions. In March 2002, the U.S. 
and Vietnam signed the U.S.-Vietnam Memorandum of Understanding on Research on Human Health and the Environmental Effects of Agent Orange/Dioxin. See also Climate target Environmental health Environmental justice Environmental racism Environmental racism in Europe Indigenous rights International law List of environmental law journals List of international environmental agreements UK enterprise law Notes References Akhatov, Aydar (1996). Ecology & International Law. Moscow: АST-PRESS. 512 pp. Bimal N. Patel, ed. (2015). MCQ on Environmental Law. Farber & Carlson, eds. (2013). Cases and Materials on Environmental Law, 9th. West Academic Publishing. 1008 pp. . Faure, Michael, and Niels Philipsen, eds. (2014). Environmental Law & European Law. The Hague: Eleven International Publishing. 142 pp. Malik, Surender & Sudeep Malik, eds. (2015). Supreme Court on Environment Law. Martin, Paul & Amanda Kennedy, eds. (2015). Implementing Environmental Law. Edward Elgar Publishing Further reading Around the world, environmental laws are under attack in all sorts of ways (30 May 2017), The Conversation External links International United Nations Environment Programme ECOLEX (Gateway to Environmental Law) Environmental Law Alliance Worldwide (E-LAW) Centre for International Environmental Law Wildlife Interest Group, American Society of International Law EarthRights International Interamerican Association for Environmental Defense United Kingdom Environmental Law Association Lexadin global law database Upholding Environmental Laws in Asia and the Pacific United States American Bar Association Section of Environment, Energy and Resources U.S. Environmental Protection Agency Environmental Law Institute (ELI) EarthJustice "Law Journals: Submission and Ranking, 2007-2014", Washington and Lee University, Lexington, Virginia Canada West Coast Environmental Law (non-profit law firm) Ecojustice Canadian Environmental Law Association Environmental Law Centre (of Alberta) European Union Europa: Environmental rules of the European Union Europa: Summaries of Legislation - Environment Environmental law schools Environmental social science Environmental protection
Environmental law
[ "Environmental_science" ]
7,756
[ "Environmental social science" ]
10,100
https://en.wikipedia.org/wiki/Equinox
A solar equinox is a moment in time when the Sun crosses the Earth's equator, which is to say, appears directly above the equator, rather than north or south of the equator. On the day of the equinox, the Sun appears to rise "due east" and set "due west". This occurs twice each year, around 20 March and 23 September. More precisely, an equinox is traditionally defined as the time when the plane of Earth's equator passes through the geometric center of the Sun's disk. Equivalently, this is the moment when Earth's rotation axis is directly perpendicular to the Sun-Earth line, tilting neither toward nor away from the Sun. In modern times, since the Moon (and to a lesser extent the planets) causes Earth's orbit to vary slightly from a perfect ellipse, the equinox is officially defined by the Sun's more regular ecliptic longitude rather than by its declination. The instants of the equinoxes are currently defined to be when the apparent geocentric longitude of the Sun is 0° and 180°. The word is derived from the Latin aequinoctium, from aequus (equal) and nox (night). On the day of an equinox, daytime and nighttime are of approximately equal duration all over the planet. Contrary to popular belief, they are not exactly equal because of the angular size of the Sun, atmospheric refraction, and the rapidly changing length of day that occurs at most latitudes around the equinoxes. Long before conceiving this equality, equatorial cultures noted the day when the Sun rises due east and sets due west, and indeed this happens on the day closest to the astronomically defined event. As a consequence, according to a properly constructed and aligned sundial, the daytime duration is 12 hours. In the Northern Hemisphere, the March equinox is called the vernal or spring equinox while the September equinox is called the autumnal or fall equinox. In the Southern Hemisphere, the reverse is true. During the year, equinoxes alternate with solstices. Leap years and other factors cause the dates of both events to vary slightly. Hemisphere-neutral names are northward equinox for the March equinox, indicating that at that moment the solar declination is crossing the celestial equator in a northward direction, and southward equinox for the September equinox, indicating that at that moment the solar declination is crossing the celestial equator in a southward direction. Daytime is increasing fastest at the vernal equinox and decreasing fastest at the autumnal equinox. Equinoxes on Earth General Systematically observing the sunrise, people discovered that it occurs between two extreme locations at the horizon and eventually noted the midpoint between the two. Later it was realized that this happens on a day when the duration of the day and the night are practically equal and the word "equinox" comes from Latin aequus, meaning "equal", and nox, meaning "night". In the northern hemisphere, the vernal equinox (March) conventionally marks the beginning of spring in most cultures and is considered the start of the New Year in the Assyrian, Hindu, and Persian or Iranian calendars, while the autumnal equinox (September) marks the beginning of autumn. Ancient Greek calendars too had the beginning of the year either at the autumnal or vernal equinox and some at solstices. The Antikythera mechanism predicts the equinoxes and solstices. The equinoxes are the only times when the solar terminator (the "edge" between night and day) is perpendicular to the equator. 
As a result, the northern and southern hemispheres are equally illuminated. For the same reason, this is also the time when the Sun rises for an observer at one of Earth's rotational poles and sets at the other. For a brief period lasting approximately four days, both North and South Poles are in daylight. For example, in 2021 sunrise on the North Pole is 18 March 07:09 UTC, and sunset on the South Pole is 22 March 13:08 UTC. Also in 2021, sunrise on the South Pole is 20 September 16:08 UTC, and sunset on the North Pole is 24 September 22:30 UTC. In other words, the equinoxes are the only times when the subsolar point is on the equator, meaning that the Sun is exactly overhead at a point on the equatorial line. The subsolar point crosses the equator moving northward at the March equinox and southward at the September equinox. Date When Julius Caesar established the Julian calendar in 45 BC, he set 25 March as the date of the spring equinox; this was already the starting day of the year in the Persian and Indian calendars. Because the Julian year is longer than the tropical year by about 11.3 minutes on average (or 1 day in 128 years), the calendar "drifted" with respect to the two equinoxes – so that in 300 AD the spring equinox occurred on about 21 March, and by the 1580s AD it had drifted backwards to 11 March. This drift induced Pope Gregory XIII to establish the modern Gregorian calendar. The Pope wanted to continue to conform with the edicts of the Council of Nicaea in 325 AD concerning the date of Easter, which means he wanted to move the vernal equinox to the date on which it fell at that time (21 March is the day allocated to it in the Easter table of the Julian calendar), and to maintain it at around that date in the future, which he achieved by reducing the number of leap years from 100 to 97 every 400 years. However, there remained a small residual variation in the date and time of the vernal equinox of about ±27 hours from its mean position, virtually all because the distribution of 24 hour centurial leap-days causes large jumps (see Gregorian calendar leap solstice). Modern dates The dates of the equinoxes change progressively during the leap-year cycle, because the Gregorian calendar year is not commensurate with the period of the Earth's revolution about the Sun. It is only after a complete Gregorian leap-year cycle of 400 years that the seasons commence at approximately the same time. In the 21st century the earliest March equinox will be 19 March 2096, while the latest was 21 March 2003. The earliest September equinox will be 21 September 2096 while the latest was 23 September 2003 (Universal Time). Names Vernal equinox and autumnal equinox: these classical names are direct derivatives of Latin (ver = spring, and autumnus = autumn). These are the historically universal and still most widely used terms for the equinoxes, but are potentially confusing because in the southern hemisphere the vernal equinox does not occur in spring and the autumnal equinox does not occur in autumn. The equivalent common language English terms spring equinox and autumn (or fall) equinox are even more ambiguous. It has become increasingly common for people to refer to the September equinox in the southern hemisphere as the Vernal equinox. March equinox and September equinox: names referring to the months of the year in which they occur, with no ambiguity as to which hemisphere is the context. 
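The arithmetic behind the drift figures quoted above is simple enough to check directly. The following sketch assumes round values for the Julian and mean tropical year lengths (the true tropical year varies slightly over time), and reproduces the roughly 11-minute annual excess, the one-day-per-128-years drift rate, and the accumulated ten-day shift that prompted the Gregorian reform:

```python
# Back-of-the-envelope check of the calendar drift figures quoted above.
# Assumed values: Julian year of 365.25 days, mean tropical year of about
# 365.2422 days (an approximation; the true value changes slowly over time).

julian_year = 365.25
tropical_year = 365.2422

excess = julian_year - tropical_year        # days of drift per year
print(excess * 24 * 60)                     # ~11.2 minutes per year
print(1 / excess)                           # ~128 years for a full day of drift

years = 1582 - 325                          # Council of Nicaea to the Gregorian reform
print(years * excess)                       # ~9.8 days, matching the slide of the
                                            # equinox from 21 March to about 11 March

gregorian_year = 365 + 97 / 400             # 97 leap years in every 400-year cycle
print(gregorian_year)                       # 365.2425 days, close to the tropical year
```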
They are still not universal, however, as not all cultures use a solar-based calendar where the equinoxes occur every year in the same month (as they do not in the Islamic calendar and Hebrew calendar, for example). Although the terms have become very common in the 21st century, they were sometimes used at least as long ago as the mid-20th century. Northward equinox and southward equinox: names referring to the apparent direction of motion of the Sun. The northward equinox occurs in March when the Sun crosses the equator from south to north, and the southward equinox occurs in September when the Sun crosses the equator from north to south. These terms can be used unambiguously for other planets. They are rarely seen, although were first proposed over 100 years ago. First point of Aries and first point of Libra: names referring to the astrological signs the Sun is entering. However, the precession of the equinoxes has shifted these points into the constellations Pisces and Virgo, respectively. Length of equinoctial day and night On the date of the equinox, the center of the Sun spends a roughly equal amount of time above and below the horizon at every location on the Earth, so night and day are about the same length. Sunrise and sunset can be defined in several ways, but a widespread definition is the time that the top limb of the Sun is level with the horizon. With this definition, the day is longer than the night at the equinoxes: From the Earth, the Sun appears as a disc rather than a point of light, so when the centre of the Sun is below the horizon, its upper edge may be visible. Sunrise, which begins daytime, occurs when the top of the Sun's disk appears above the eastern horizon. At that instant, the disk's centre is still below the horizon. The Earth's atmosphere refracts sunlight. As a result, an observer sees daylight before the top of the Sun's disk appears above the horizon. In sunrise/sunset tables, the atmospheric refraction is assumed to be 34 arcminutes, and the assumed semidiameter (apparent radius) of the Sun is 16 arcminutes. (The apparent radius varies slightly depending on time of year, slightly larger at perihelion in January than aphelion in July, but the difference is comparatively small.) Their combination means that when the upper limb of the Sun is on the visible horizon, its centre is 50 arcminutes below the geometric horizon, which is the intersection with the celestial sphere of a horizontal plane through the eye of the observer. These effects make the day about 14 minutes longer than the night at the equator and longer still towards the poles. The real equality of day and night only happens in places far enough from the equator to have a seasonal difference in day length of at least 7 minutes, actually occurring a few days towards the winter side of each equinox. One result of this is that, at latitudes below ±2.0 degrees, all the days of the year are longer than the nights. The times of sunset and sunrise vary with the observer's location (longitude and latitude), so the dates when day and night are equal also depend upon the observer's location. A third correction for the visual observation of a sunrise (or sunset) is the angle between the apparent horizon as seen by an observer and the geometric (or sensible) horizon. This is known as the dip of the horizon and varies from 3 arcminutes for a viewer standing on the sea shore to 160 arcminutes for a mountaineer on Everest. 
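A rough calculation connects these numbers. The sketch below first converts the 50-arcminute depression into the "about 14 minutes" by which day exceeds night at the equator, assuming the Sun crosses the horizon vertically there at about 15° per hour; it then checks the quoted dip values using the standard navigational rule of thumb dip ≈ 1.76 × √(eye height in metres) arcminutes, which is an assumption brought in here rather than a figure from the text above:

```python
import math

# Day/night difference at the equator on the equinox, from the 50-arcminute
# depression quoted above (34' refraction plus 16' solar semidiameter).
# Assumes the Sun crosses the horizon vertically at roughly 15 degrees/hour.

depression_deg = 50 / 60                     # 50 arcminutes in degrees
rate_deg_per_min = 360 / (24 * 60)           # ~0.25 degrees of solar motion per minute

extra_per_crossing = depression_deg / rate_deg_per_min   # ~3.3 minutes
day_minutes = 12 * 60 + 2 * extra_per_crossing           # gained at sunrise and at sunset
night_minutes = 24 * 60 - day_minutes
print(round(day_minutes - night_minutes, 1))             # ~13.3 minutes, i.e. "about 14 minutes"

def dip_arcmin(eye_height_m):
    """Approximate dip of the sea horizon in arcminutes for a given eye height
    in metres (standard navigational approximation including typical refraction)."""
    return 1.76 * math.sqrt(eye_height_m)

print(round(dip_arcmin(3)))       # ~3 arcminutes for an observer a few metres above the sea
print(round(dip_arcmin(8849)))    # ~166 arcminutes (~2.8 degrees) at roughly Everest's height
```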
The effect of a larger dip on taller objects (reaching over 2½° of arc on Everest) accounts for the phenomenon of snow on a mountain peak turning gold in the sunlight long before the lower slopes are illuminated. The date on which the day and night are exactly the same is known as an equilux; the neologism, believed to have been coined in the 1980s, achieved more widespread recognition in the 21st century. At the most precise measurements, a true equilux is rare, because the lengths of day and night change more rapidly than any other time of the year around the equinoxes. In the mid-latitudes, daylight increases or decreases by about three minutes per day at the equinoxes, and thus adjacent days and nights only reach within one minute of each other. The date of the closest approximation of the equilux varies slightly by latitude; in the mid-latitudes, it occurs a few days before the spring equinox and after the fall equinox in each respective hemisphere. Auroras Mirror-image conjugate auroras have been observed during the equinoxes. Cultural aspects The equinoxes are sometimes regarded as the start of spring and autumn. A number of traditional harvest festivals are celebrated on the date of the equinoxes. People in countries including Iran, Afghanistan, Tajikistan celebrate Nowruz which is spring equinox in northern hemisphere. This day marks the new year in Solar Hijri calendar. Religious architecture is often determined by the equinox; the Angkor Wat Equinox during which the sun rises in a perfect alignment over Angkor Wat in Cambodia is one such example. Catholic churches, since the recommendations of Charles Borromeo, have often chosen the equinox as their reference point for the orientation of churches. Effects on satellites One effect of equinoctial periods is the temporary disruption of communications satellites. For all geostationary satellites, there are a few days around the equinox when the Sun goes directly behind the satellite relative to Earth (i.e. within the beam-width of the ground-station antenna) for a short period each day. The Sun's immense power and broad radiation spectrum overload the Earth station's reception circuits with noise and, depending on antenna size and other factors, temporarily disrupt or degrade the circuit. The duration of those effects varies but can range from a few minutes to an hour. (For a given frequency band, a larger antenna has a narrower beam-width and hence experiences shorter duration "Sun outage" windows.) Satellites in geostationary orbit also experience difficulties maintaining power during the equinox because they have to travel through Earth's shadow and rely only on battery power. Usually, a satellite travels either north or south of the Earth's shadow because Earth's axis is not directly perpendicular to a line from the Earth to the Sun at other times. During the equinox, since geostationary satellites are situated above the Equator, they are in Earth's shadow for the longest duration all year. Equinoxes on other planets Equinoxes are defined on any planet with a tilted rotational axis. A dramatic example is Saturn, where the equinox places its ring system edge-on facing the Sun. As a result, they are visible only as a thin line when seen from Earth. When seen from above – a view seen during an equinox for the first time from the Cassini space probe in 2009 – they receive very little sunshine; indeed, they receive more planetshine than light from the Sun. 
This phenomenon occurs once every 14.7 years on average, and can last a few weeks before and after the exact equinox. Saturn's most recent equinox was on 11 August 2009, and its next will take place on 6 May 2025. Mars's most recent equinoxes were on 12 January 2024 (northern autumn), and on 26 December 2022 (northern spring). See also Analemma Anjana (Cantabrian mythology) – fairies believed to appear on the spring equinox Angkor Wat Equinox Aphelion – occurs around 5 July (see formula) Geocentric view of the seasons Iranian calendars Kōreisai – days of worship in Japan that began in 1878 Lady Day Nowruz Orientation of churches Perihelion and aphelion Solstice Songkran Sun outage – a satellite phenomenon that occurs around the time of an equinox Tekufah Wheel of the Year Zoroastrian calendar Footnotes References External links Dynamics of the Solar System March observances Technical factors of astrology September observances Time in astronomy
Equinox
[ "Astronomy" ]
3,252
[ "Time in astronomy", "Equinoxes", "Dynamics of the Solar System", "Solar System" ]
10,101
https://en.wikipedia.org/wiki/Eugene%20Wigner
Eugene Paul Wigner (, ; November 17, 1902 – January 1, 1995) was a Hungarian-American theoretical physicist who also contributed to mathematical physics. He received the Nobel Prize in Physics in 1963 "for his contributions to the theory of the atomic nucleus and the elementary particles, particularly through the discovery and application of fundamental symmetry principles". A graduate of the Technical Hochschule Berlin (now Technische Universität Berlin), Wigner worked as an assistant to Karl Weissenberg and Richard Becker at the Kaiser Wilhelm Institute in Berlin, and David Hilbert at the University of Göttingen. Wigner and Hermann Weyl were responsible for introducing group theory into physics, particularly the theory of symmetry in physics. Along the way he performed ground-breaking work in pure mathematics, in which he authored a number of mathematical theorems. In particular, Wigner's theorem is a cornerstone in the mathematical formulation of quantum mechanics. He is also known for his research into the structure of the atomic nucleus. In 1930, Princeton University recruited Wigner, along with John von Neumann, and he moved to the United States, where he obtained citizenship in 1937. Wigner participated in a meeting with Leo Szilard and Albert Einstein that resulted in the Einstein–Szilard letter, which prompted President Franklin D. Roosevelt to authorize the creation of the Advisory Committee on Uranium with the purpose of investigating the feasibility of nuclear weapons. Wigner was afraid that the German nuclear weapon project would develop an atomic bomb first. During the Manhattan Project, he led a team whose task was to design nuclear reactors to convert uranium into weapons grade plutonium. At the time, reactors existed only on paper, and no reactor had yet gone critical. Wigner was disappointed that DuPont was given responsibility for the detailed design of the reactors, not just their construction. He became director of research and development at the Clinton Laboratory (now the Oak Ridge National Laboratory) in early 1946, but became frustrated with bureaucratic interference by the Atomic Energy Commission, and returned to Princeton. In the postwar period, he served on a number of government bodies, including the National Bureau of Standards from 1947 to 1951, the mathematics panel of the National Research Council from 1951 to 1954, the physics panel of the National Science Foundation, and the influential General Advisory Committee of the Atomic Energy Commission from 1952 to 1957 and again from 1959 to 1964. In later life, he became more philosophical, and published The Unreasonable Effectiveness of Mathematics in the Natural Sciences, his best-known work outside technical mathematics and physics. Early life and education Wigner Jenő Pál was born in Budapest, Austria-Hungary on November 17, 1902, to middle class Jewish parents, Elisabeth Elsa Einhorn and Antal Anton Wigner, a leather tanner. He had an older sister, Berta, known as Biri, and a younger sister Margit, known as Manci, who later married British theoretical physicist Paul Dirac. He was home schooled by a professional teacher until the age of 9, when he started school at the third grade. During this period, Wigner developed an interest in mathematical problems. At the age of 11, Wigner contracted what his doctors believed to be tuberculosis. His parents sent him to live for six weeks in a sanatorium in the Austrian mountains, before the doctors concluded that the diagnosis was mistaken. 
Wigner's family was Jewish, but not religiously observant, and his Bar Mitzvah was a secular one. From 1915 through 1919, he studied at the secondary grammar school called Fasori Evangélikus Gimnázium, the school his father had attended. Religious education was compulsory, and he attended classes in Judaism taught by a rabbi. A fellow student was János von Neumann, who was a year behind Wigner. They both benefited from the instruction of the noted mathematics teacher László Rátz. In 1919, to escape the Béla Kun communist regime, the Wigner family briefly fled to Austria, returning to Hungary after Kun's downfall. Partly as a reaction to the prominence of Jews in the Kun regime, the family converted to Lutheranism. Wigner explained later in his life that his family decision to convert to Lutheranism "was not at heart a religious decision but an anti-communist one". After graduating from the secondary school in 1920, Wigner enrolled at the Budapest University of Technical Sciences, known as the Műegyetem. He was not happy with the courses on offer, and in 1921 enrolled at the Technische Hochschule Berlin (now Technische Universität Berlin), where he studied chemical engineering. He also attended the Wednesday afternoon colloquia of the German Physical Society. These colloquia featured leading researchers including Max Planck, Max von Laue, Rudolf Ladenburg, Werner Heisenberg, Walther Nernst, Wolfgang Pauli, and Albert Einstein. Wigner also met the physicist Leó Szilárd, who at once became Wigner's closest friend. A third experience in Berlin was formative. Wigner worked at the Kaiser Wilhelm Institute for Physical Chemistry and Electrochemistry (now the Fritz Haber Institute), and there he met Michael Polanyi, who became, after László Rátz, Wigner's greatest teacher. Polanyi supervised Wigner's DSc thesis, Bildung und Zerfall von Molekülen ("Formation and Decay of Molecules"). Middle years Wigner returned to Budapest, where he went to work at his father's tannery, but in 1926, he accepted an offer from Karl Weissenberg at the Kaiser Wilhelm Institute in Berlin. Weissenberg wanted someone to assist him with his work on X-ray crystallography, and Polanyi had recommended Wigner. After six months as Weissenberg's assistant, Wigner went to work for Richard Becker for two semesters. Wigner explored quantum mechanics, studying the work of Erwin Schrödinger. He also delved into the group theory of Ferdinand Frobenius and Eduard Ritter von Weber. Wigner received a request from Arnold Sommerfeld to work at the University of Göttingen as an assistant to the great mathematician David Hilbert. This proved a disappointment, as the aged Hilbert's abilities were failing, and his interests had shifted to logic. Wigner nonetheless studied independently. He laid the foundation for the theory of symmetries in quantum mechanics and in 1927 introduced what is now known as the Wigner D-matrix. Wigner and Hermann Weyl were responsible for introducing group theory into quantum mechanics. The latter had written a standard text, Group Theory and Quantum Mechanics (1928), but it was not easy to understand, especially for younger physicists. Wigner's Group Theory and Its Application to the Quantum Mechanics of Atomic Spectra (1931) made group theory accessible to a wider audience. In these works, Wigner laid the foundation for the theory of symmetries in quantum mechanics. Wigner's theorem, proven by him in 1931, is a cornerstone of the mathematical formulation of quantum mechanics. 
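For reference, one standard modern formulation of the theorem runs as follows; the notation is chosen here for illustration and is not Wigner's original 1931 phrasing:

```latex
% A common modern statement of Wigner's theorem (illustrative notation).
\textbf{Wigner's theorem.} Let $T$ be a bijective map on the set of unit rays of a
complex Hilbert space $H$ that preserves transition probabilities,
\[
  \bigl|\langle T\psi \mid T\varphi \rangle\bigr|^{2}
  \;=\;
  \bigl|\langle \psi \mid \varphi \rangle\bigr|^{2}
  \qquad \text{for all unit rays } \psi, \varphi .
\]
Then $T$ is induced by an operator $U$ on $H$ that is either unitary (and linear) or
antiunitary (and antilinear); for $\dim H \ge 2$, $U$ is unique up to an overall phase.
```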
The theorem specifies how physical symmetries such as rotations, translations, and CPT symmetry are represented on the Hilbert space of states. According to the theorem, any symmetry transformation is represented by a linear and unitary or antilinear and antiunitary transformation of Hilbert space. The representation of a symmetry group on a Hilbert space is either an ordinary representation or a projective representation. In the late 1930s, Wigner extended his research into atomic nuclei. By 1929, his papers were drawing notice in the world of physics. In 1930, Princeton University recruited Wigner for a one-year lectureship, at 7 times the salary that he had been drawing in Europe. Princeton recruited von Neumann at the same time. Jenő Pál Wigner and János von Neumann had collaborated on three papers together in 1928 and two in 1929. They anglicized their first names to "Eugene" and "John", respectively. When their year was up, Princeton offered a five-year contract as visiting professors for half the year. The Technische Hochschule responded with a teaching assignment for the other half of the year. This was very timely, since the Nazis soon rose to power in Germany. At Princeton in 1934, Wigner introduced his sister Margit "Manci" Wigner to the physicist Paul Dirac, with whom she remarried. Princeton did not rehire Wigner when his contract ran out in 1936. Through Gregory Breit, Wigner found new employment at the University of Wisconsin. There, he met his first wife, Amelia Frank, who was a physics student there. However, she died unexpectedly in 1937, leaving Wigner distraught. He therefore accepted an offer in 1938 from Princeton to return there. Wigner became a naturalized citizen of the United States on January 8, 1937, and he brought his parents to the United States. Manhattan Project Although he was a professed political amateur, on August 2, 1939, he participated in a meeting with Leó Szilárd and Albert Einstein that resulted in the Einstein–Szilárd letter, which prompted President Franklin D. Roosevelt to authorize the creation of the Advisory Committee on Uranium with the purpose of investigating the feasibility of atomic bombs. Wigner was afraid that the German nuclear weapon project would develop an atomic bomb first, and even refused to have his fingerprints taken because they could be used to track him down if Germany won. "Thoughts of being murdered," he later recalled, "focus your mind wonderfully." On June 4, 1941, Wigner married his second wife, Mary Annette Wheeler, a professor of physics at Vassar College, who had completed her Ph.D. at Yale University in 1932. After the war she taught physics on the faculty of Rutgers University's Douglass College in New Jersey until her retirement in 1964. They remained married until her death in November 1977. They had two children, David Wigner and Martha Wigner Upton. During the Manhattan Project, Wigner led a team that included J. Ernest Wilkins Jr., Alvin M. Weinberg, Katharine Way, Gale Young and Edward Creutz. The group's task was to design the production nuclear reactors that would convert uranium into weapons grade plutonium. At the time, reactors existed only on paper, and no reactor had yet gone critical. In July 1942, Wigner chose a conservative 100 MW design, with a graphite neutron moderator and water cooling. 
Wigner was present at a converted rackets court under the stands at the University of Chicago's abandoned Stagg Field on December 2, 1942, when the world's first atomic reactor, Chicago Pile One (CP-1) achieved a controlled nuclear chain reaction. Wigner was disappointed that DuPont was given responsibility for the detailed design of the reactors, not just their construction. He threatened to resign in February 1943, but was talked out of it by the head of the Metallurgical Laboratory, Arthur Compton, who sent him on vacation instead. As it turned out, a design decision by DuPont to give the reactor additional load tubes for more uranium saved the project when neutron poisoning became a problem. Without the additional tubes, the reactor could have been run at 35% power until the boron impurities in the graphite were burned up and enough plutonium produced to run the reactor at full power; but this would have set the project back a year. During the 1950s, he would even work for DuPont on the Savannah River Site. Wigner did not regret working on the bomb, remarking: An important discovery Wigner made during the project was the Wigner effect. This is a swelling of the graphite moderator caused by the displacement of atoms by neutron radiation. The Wigner effect was a serious problem for the reactors at the Hanford Site in the immediate post-war period, and resulted in production cutbacks and a reactor being shut down entirely. It was eventually discovered that it could be overcome by controlled heating and annealing. Through Manhattan project funding, Wigner and Leonard Eisenbud also developed an important general approach to nuclear reactions, the Wigner–Eisenbud R-matrix theory, which was published in 1947. Later years Wigner was elected to the American Philosophical Society in 1944 and the United States National Academy of Sciences in 1945. He accepted a position as the director of research and development at the Clinton Laboratory (now the Oak Ridge National Laboratory) in Oak Ridge, Tennessee in early 1946. Because he did not want to be involved in administrative duties, he became co-director of the laboratory, with James Lum handling the administrative chores as executive director. When the newly created Atomic Energy Commission (AEC) took charge of the laboratory's operations at the start of 1947, Wigner feared that many of the technical decisions would be made in Washington. He also saw the Army's continuation of wartime security policies at the laboratory as a "meddlesome oversight", interfering with research. One such incident occurred in March 1947, when the AEC discovered that Wigner's scientists were conducting experiments with a critical mass of uranium-235 when the director of the Manhattan Project, Major General Leslie R. Groves, Jr., had forbidden such experiments in August 1946 after the death of Louis Slotin at the Los Alamos Laboratory. Wigner argued that Groves's order had been superseded, but was forced to terminate the experiments, which were completely different from the one that killed Slotin. Feeling unsuited to a managerial role in such an environment, he left Oak Ridge in 1947 and returned to Princeton University, although he maintained a consulting role with the facility for many years. 
In the postwar period, he served on a number of government bodies, including the National Bureau of Standards from 1947 to 1951, the mathematics panel of the National Research Council from 1951 to 1954, the physics panel of the National Science Foundation, and the influential General Advisory Committee of the Atomic Energy Commission from 1952 to 1957 and again from 1959 to 1964. He also contributed to civil defense. Wigner was elected to the American Academy of Arts and Sciences in 1950. Near the end of his life, Wigner's thoughts turned more philosophical. In 1960, he published a now classic article on the philosophy of mathematics and of physics, which has become his best-known work outside technical mathematics and physics, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". He argued that biology and cognition could be the origin of physical concepts, as we humans perceive them, and that the happy coincidence that mathematics and physics were so well matched, seemed to be "unreasonable" and hard to explain. His original paper has provoked and inspired many responses across a wide range of disciplines. These included Richard Hamming in Computer Science, Arthur Lesk in Molecular Biology, Peter Norvig in data mining, Max Tegmark in Physics, Ivor Grattan-Guinness in Mathematics, and Vela Velupillai in Economics. Turning to philosophical questions about the theory of quantum mechanics, Wigner developed a thought experiment (later called Wigner's Friend paradox) to illustrate his belief that consciousness is foundational to the quantum mechanical measurement process. He thereby followed an ontological approach that sets human's consciousness at the center: "All that quantum mechanics purports to provide are probability connections between subsequent impressions (also called 'apperceptions') of the consciousness". Measurements are understood as the interactions which create the impressions in our consciousness (and as a result modify the wave function of the "measured" physical system), an idea which has been called the "consciousness causes collapse" interpretation. Interestingly, Hugh Everett III (a student of Wigner's) discussed Wigner's thought experiment in the introductory part of his 1957 dissertation as an "amusing, but extremely hypothetical drama". In an early draft of Everett's work, one also finds a drawing of the Wigner's Friend situation, which must be seen as the first evidence on paper of the thought experiment that was later assigned to be Wigner's. This suggests that Everett must at least have discussed the problem together with Wigner. In November 1963, Wigner called for the allocation of 10% of the national defense budget to be spent on nuclear blast shelters and survival resources, arguing that such an expenditure would be less costly than disarmament. Wigner considered a recent Woods Hole study's conclusion that a nuclear strike would kill 20% of Americans to be a very modest projection and that the country could recover from such an attack more quickly than Germany had recovered from the devastation of World War II. Wigner was awarded the Nobel Prize in Physics in 1963 "for his contributions to the theory of the atomic nucleus and the elementary particles, particularly through the discovery and application of fundamental symmetry principles". The prize was shared that year, with the other half of the award divided between Maria Goeppert-Mayer and J. Hans D. Jensen. 
Wigner professed that he had never considered the possibility that this might occur, and added: "I never expected to get my name in the newspapers without doing something wicked." He also won the Franklin Medal in 1950, the Enrico Fermi award in 1958, the Atoms for Peace Award in 1959, the Max Planck Medal in 1961, the National Medal of Science in 1969, the Albert Einstein Award in 1972, the Golden Plate Award of the American Academy of Achievement in 1974, the eponymous Wigner Medal in 1978, and the Herzl Prize in 1982. In 1968 he gave the Josiah Willard Gibbs lecture. After his retirement from Princeton in 1971, Wigner prepared the first edition of Symmetries and Reflections, a collection of philosophical essays, and became more involved in international and political meetings; around this time he became a leader and vocal defender of the Unification Church's annual International Conference on the Unity of the Sciences. Mary died in November 1977. In 1979, Wigner married his third wife, Eileen Clare-Patton (Pat) Hamilton (1915-2010), the widow of physicist Donald Ross Hamilton, the dean of the graduate school at Princeton University, who had died in 1972. In 1992, at the age of 90, he published his memoirs, The Recollections of Eugene P. Wigner with Andrew Szanton. In it, Wigner said: "The full meaning of life, the collective meaning of all human desires, is fundamentally a mystery beyond our grasp. As a young man, I chafed at this state of affairs. But by now I have made peace with it. I even feel a certain honor to be associated with such a mystery." In his collection of essays 'Philosophical Reflections and Syntheses' (1995), he commented: "It was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to consciousness." Wigner was credited as a member of the advisory board for the Western Goals Foundation, a private domestic intelligence agency created in the US in 1979 to "fill the critical gap caused by the crippling of the FBI, the disabling of the House Un-American Activities Committee and the destruction of crucial government files". Wigner died of pneumonia at the University Medical Center in Princeton, New Jersey on 1 January 1995. Publications 1958 (with Alvin M. Weinberg). Physical Theory of Neutron Chain Reactors University of Chicago Press. 1959. Group Theory and its Application to the Quantum Mechanics of Atomic Spectra. New York: Academic Press. Translation by J. J. Griffin of 1931, Gruppentheorie und ihre Anwendungen auf die Quantenmechanik der Atomspektren, Vieweg Verlag, Braunschweig. 1970 Symmetries and Reflections: Scientific Essays. Indiana University Press, Bloomington 1992 (as told to Andrew Szanton). The Recollections of Eugene P. Wigner. Plenum. 1995 (with Jagdish Mehra and Arthur Wightman, eds.). Philosophical Reflections and Syntheses. 
Springer, Berlin Selected contributions Theoretical physics Bargmann–Wigner equations Jordan–Wigner transformation Newton–Wigner localization Polynomial Wigner–Ville distribution Relativistic Breit–Wigner distribution Thomas–Wigner rotation Wigner–Eckart theorem Wigner–Inonu contraction Wigner–Seitz cell Wigner–Seitz radius Wigner–Weyl transform Wigner–Wilkins spectrum Wigner's classification Wigner quasiprobability distribution Wigner's friend Wigner's theorem Wigner crystal Wigner D-matrix Wigner effect Wigner energy Wigner lattice Wigner's disease Von Neumann–Wigner interpretation Wigner–Witmer correlation rules Mathematics Gabor–Wigner transform Modified Wigner distribution function Wigner distribution function Wigner semicircle distribution Wigner rotation Wigner quasiprobability distribution Wigner semicircle distribution 6-j symbol 9-j symbol Wigner 3-j symbols Wigner–İnönü group contraction Wigner surmise See also List of things named after Eugene Wigner The Martians (scientists) List of Jewish Nobel laureates Notes References N. Mukunda (1995) "Eugene Paul Wigner – A tribute", Current Science 69(4): 375–85 External links 1964 Audio Interview with Eugene Wigner by Stephane Groueff Voices of the Manhattan Project APS Oral History Interview Transcript with Eugene Wigner 21 November 1963, American Institute of Physics, Niels Bohr Library & Archives Session 1 APS Oral History Interview Transcript with Eugene Wigner 03 December 1963, American Institute of Physics, Niels Bohr Library & Archives Session 2 APS Oral History Interview Transcript with Eugene Wigner 14 December 1963, American Institute of Physics, Niels Bohr Library & Archives Session 3 APS Oral History Interview Transcript with Eugene Wigner 30 November 1966, American Institute of Physics, Niels Bohr Library & Archives APS Oral History Interview Transcript with Eugene Wigner 24 January 1981, American Institute of Physics, Niels Bohr Library & Archives Wigner Jenö Iskolás Évei by Radnai Gyula, ELTE, Fizikai Szemle 2007/2 – 62.o. (Hungarian). Description of the childhood and especially of the school-years in Budapest, with some interesting photos too. Interview with Eugene P. Wigner on John von Neumann at the Charles Babbage Institute, University of Minnesota, Minneapolis – Wigner talks about his association with John von Neumann during their school years in Hungary, their graduate studies in Berlin, and their appointments to Princeton in 1930. Wigner discusses von Neumann's contributions to the theory of quantum mechanics, Wigner's own work in this area, and von Neumann's interest in the application of theory to the atomic bomb project. 
including the Nobel Lecture, December 12, 1963 Events, Laws of Nature, and Invariance Principles 1902 births 1995 deaths Nobel laureates in Physics American Nobel laureates Hungarian Nobel laureates Nobel laureates from Austria-Hungary 20th-century Hungarian mathematicians American atheists Jewish atheists American Lutherans 20th-century American physicists American people of Hungarian-Jewish descent Atoms for Peace Award recipients Technische Universität Berlin alumni Burials at Princeton Cemetery Enrico Fermi Award recipients Fasori Gimnázium alumni Fellows of the American Physical Society Foreign members of the Royal Society Hungarian emigrants to the United States Hungarian atheists 20th-century Hungarian Jews Hungarian Lutherans 20th-century Hungarian physicists Hungarian nuclear physicists Jewish American physicists Manhattan Project people Mathematical physicists Hungarian physicists Members of the United States National Academy of Sciences National Medal of Science laureates Oak Ridge National Laboratory people People from Pest, Hungary Princeton University faculty Theoretical physicists Academic staff of the University of Göttingen University of Wisconsin–Madison faculty Winners of the Max Planck Medal Mathematicians from Austria-Hungary Deaths from pneumonia in New Jersey Naturalized citizens of the United States Presidents of the American Physical Society Members of the American Philosophical Society Recipients of Franklin Medal Scientists from Budapest
Eugene Wigner
[ "Physics" ]
4,864
[ "Theoretical physics", "Theoretical physicists" ]
10,103
https://en.wikipedia.org/wiki/Electroweak%20interaction
In particle physics, the electroweak interaction or electroweak force is the unified description of two of the fundamental interactions of nature: electromagnetism (electromagnetic interaction) and the weak interaction. Although these two forces appear very different at everyday low energies, the theory models them as two different aspects of the same force. Above the unification energy, on the order of 246 GeV, they would merge into a single force. Thus, if the temperature is high enough – approximately 10¹⁵ K – then the electromagnetic force and weak force merge into a combined electroweak force. During the quark epoch (shortly after the Big Bang), the electroweak force split into the electromagnetic and weak force. It is thought that the required temperature of 10¹⁵ K has not been seen widely throughout the universe since before the quark epoch, and the highest human-made temperature yet reached in thermal equilibrium (in heavy-ion collisions at the Large Hadron Collider) remains far below it. Sheldon Glashow, Abdus Salam, and Steven Weinberg were awarded the 1979 Nobel Prize in Physics for their contributions to the unification of the weak and electromagnetic interaction between elementary particles, known as the Weinberg–Salam theory. The existence of the electroweak interactions was experimentally established in two stages, the first being the discovery of neutral currents in neutrino scattering by the Gargamelle collaboration in 1973, and the second in 1983 by the UA1 and the UA2 collaborations that involved the discovery of the W and Z gauge bosons in proton–antiproton collisions at the converted Super Proton Synchrotron. In 1999, Gerardus 't Hooft and Martinus Veltman were awarded the Nobel Prize for showing that the electroweak theory is renormalizable. History After the Wu experiment in 1956 discovered parity violation in the weak interaction, a search began for a way to relate the weak and electromagnetic interactions. Extending his doctoral advisor Julian Schwinger's work, Sheldon Glashow first experimented with introducing two different symmetries, one chiral and one achiral, and combined them such that their overall symmetry was unbroken. This did not yield a renormalizable theory, and its gauge symmetry had to be broken by hand as no spontaneous mechanism was known, but it predicted a new particle, the Z boson. This received little notice, as it matched no experimental finding. In 1964, Salam and John Clive Ward had the same idea, but predicted a massless photon and three massive gauge bosons with a manually broken symmetry. Later, around 1967, while investigating spontaneous symmetry breaking, Weinberg found a set of symmetries predicting a massless, neutral gauge boson. Initially rejecting such a particle as useless, he later realized his symmetries produced the electroweak force, and he proceeded to predict rough masses for the W and Z bosons. Significantly, he suggested this new theory was renormalizable. In 1971, Gerard 't Hooft proved that spontaneously broken gauge symmetries are renormalizable even with massive gauge bosons. Formulation Mathematically, electromagnetism is unified with the weak interactions as a Yang–Mills field with an SU(2) × U(1) gauge group, which describes the formal operations that can be applied to the electroweak gauge fields without changing the dynamics of the system. These fields are the weak isospin fields W1, W2, and W3, and the weak hypercharge field B. This invariance is known as electroweak symmetry. 
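As a rough numerical illustration of the scales involved, the sketch below plugs illustrative values of the SU(2) coupling g, the U(1) hypercharge coupling g′, and the Higgs vacuum expectation value v (about 246 GeV) into the standard tree-level relations summarized in the remainder of this section (sin θW = g′/√(g² + g′²), MW = gv/2, MZ = MW/cos θW, e = g sin θW). The input couplings are approximate, so the outputs are only indicative:

```python
import math

# Illustrative tree-level electroweak relations. The coupling values below are
# rough, commonly quoted approximations rather than precise measured inputs;
# the point is only that the relations give W and Z masses of about the right size.

g = 0.65         # SU(2) weak isospin coupling (approximate)
g_prime = 0.36   # U(1) weak hypercharge coupling (approximate)
v = 246.0        # Higgs vacuum expectation value in GeV (approximate)

sin2_theta_w = g_prime**2 / (g**2 + g_prime**2)  # (sin theta_W)^2
m_w = g * v / 2                                  # tree-level W boson mass
m_z = m_w / math.sqrt(1 - sin2_theta_w)          # M_Z = M_W / cos(theta_W)
e = g * math.sqrt(sin2_theta_w)                  # electric charge e = g sin(theta_W)

print(round(sin2_theta_w, 3))   # ~0.23
print(round(m_w, 1))            # ~80 GeV
print(round(m_z, 1))            # ~91 GeV
print(round(e, 2))              # ~0.31
```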
The generators of SU(2) and U(1) are given the names weak isospin (labeled T) and weak hypercharge (labeled YW) respectively. These then give rise to the gauge bosons that mediate the electroweak interactions – the three W bosons of weak isospin (W1, W2, and W3), and the B boson of weak hypercharge, respectively, all of which are "initially" massless. These are not physical fields yet, before spontaneous symmetry breaking and the associated Higgs mechanism. In the Standard Model, the observed physical particles, the W± and Z bosons and the photon, are produced through the spontaneous symmetry breaking of the electroweak symmetry SU(2) × U(1) to U(1), effected by the Higgs mechanism (see also Higgs boson), an elaborate quantum-field-theoretic phenomenon that "spontaneously" alters the realization of the symmetry and rearranges degrees of freedom. The electric charge arises as the particular (nontrivial) linear combination Q = T3 + YW/2 of YW (weak hypercharge) and the component T3 of weak isospin that does not couple to the Higgs boson. That is to say: the Higgs and the electromagnetic field have no effect on each other, at the level of the fundamental forces ("tree level"), while any other combination of the hypercharge and the weak isospin must interact with the Higgs. This causes an apparent separation between the weak force, which interacts with the Higgs, and electromagnetism, which does not. Mathematically, the electric charge is a specific combination of the hypercharge YW and of T3. U(1)em (the symmetry group of electromagnetism only) is defined to be the group generated by this special linear combination, and the symmetry described by the U(1)em group is unbroken, since it does not directly interact with the Higgs. The above spontaneous symmetry breaking makes the W3 and B bosons coalesce into two different physical bosons with different masses – the Z0 boson and the photon (γ): γ = B cos θW + W3 sin θW and Z0 = -B sin θW + W3 cos θW, where θW is the weak mixing angle. The axes representing the particles have essentially just been rotated, in the (W3, B) plane, by the angle θW. This also introduces a mismatch between the mass of the Z0 and the mass of the W± particles (denoted as MZ and MW, respectively): MZ = MW / cos θW. The W1 and W2 bosons, in turn, combine to produce the charged massive bosons W± = (W1 ∓ i W2)/√2. Lagrangian Before electroweak symmetry breaking The Lagrangian for the electroweak interactions is divided into four parts before electroweak symmetry breaking becomes manifest: LEW = Lg + Lf + Lh + Ly. The Lg term describes the interaction between the three W vector bosons and the B vector boson, Lg = -(1/4) W^a_μν W_a^μν - (1/4) B_μν B^μν, where W^a_μν (a = 1, 2, 3) and B_μν are the field strength tensors for the weak isospin and weak hypercharge gauge fields. Lf is the kinetic term for the Standard Model fermions, built from Dirac kinetic terms in which the interaction of the gauge bosons and the fermions enters through the gauge covariant derivative; the sum runs over the three generations of fermions (index i), where Qi, ui, and di are the left-handed doublet, right-handed singlet up, and right-handed singlet down quark fields, and Li and ei are the left-handed doublet and right-handed singlet electron fields. The Feynman slash denotes the contraction of a 4-vector (here the gauge covariant derivative) with the Dirac matrices, γ^μ Dμ, and the covariant derivative (excluding the gluon gauge field for the strong interaction) is defined as Dμ = ∂μ - i (g′/2) YW Bμ - i (g/2) τ_a W^a_μ. Here YW is the weak hypercharge and the τ_a are the components of the weak isospin (the Pauli matrices, acting on the left-handed doublets). The Lh term describes the Higgs field h and its interactions with itself and the gauge bosons, Lh = |Dμ h|² - λ (h†h - v²/2)², where v is the vacuum expectation value. 
The term describes the Yukawa interaction with the fermions, and generates their masses, manifest when the Higgs field acquires a nonzero vacuum expectation value, discussed next. The for are matrices of Yukawa couplings. After electroweak symmetry breaking The Lagrangian reorganizes itself as the Higgs field acquires a non-vanishing vacuum expectation value dictated by the potential of the previous section. As a result of this rewriting, the symmetry breaking becomes manifest. In the history of the universe, this is believed to have happened shortly after the hot big bang, when the universe was at a temperature (assuming the Standard Model of particle physics). Due to its complexity, this Lagrangian is best described by breaking it up into several parts as follows. The kinetic term contains all the quadratic terms of the Lagrangian, which include the dynamic terms (the partial derivatives) and the mass terms (conspicuously absent from the Lagrangian before symmetry breaking) where the sum runs over all the fermions of the theory (quarks and leptons), and the fields and are given as with to be replaced by the relevant field ( ) and by the structure constants of the appropriate gauge group. The neutral current and charged current components of the Lagrangian contain the interactions between the fermions and gauge bosons, where The electromagnetic current is where is the fermions' electric charges. The neutral weak current is where is the fermions' weak isospin. The charged current part of the Lagrangian is given by where is the right-handed singlet neutrino field, and the CKM matrix determines the mixing between mass and weak eigenstates of the quarks. contains the Higgs three-point and four-point self interaction terms, contains the Higgs interactions with gauge vector bosons, contains the gauge three-point self interactions, contains the gauge four-point self interactions, contains the Yukawa interactions between the fermions and the Higgs field, See also Electroweak star Fundamental forces History of quantum field theory Standard Model (mathematical formulation) Unitarity gauge Weinberg angle Yang–Mills theory Notes References Further reading General readers Conveys much of the Standard Model with no formal mathematics. Very thorough on the weak interaction. Texts Articles
Electroweak interaction
[ "Physics" ]
1,943
[ "Physical phenomena", "Fundamental interactions", "Electroweak theory" ]
10,106
https://en.wikipedia.org/wiki/Earthquake
An earthquake (also called a quake, tremor, or temblor) is the shaking of the Earth's surface resulting from a sudden release of energy in the lithosphere that creates seismic waves. Earthquakes can range in intensity, from those so weak they cannot be felt, to those violent enough to propel objects and people into the air, damage critical infrastructure, and wreak destruction across entire cities. The seismic activity of an area is the frequency, type, and size of earthquakes experienced over a particular time. The seismicity at a particular location in the Earth is the average rate of seismic energy release per unit volume. In its most general sense, the word earthquake is used to describe any seismic event that generates seismic waves. Earthquakes can occur naturally or be induced by human activities, such as mining, fracking, and nuclear tests. The initial point of rupture is called the hypocenter or focus, while the ground level directly above it is the epicenter. Earthquakes are primarily caused by geological faults, but also by volcanic activity, landslides, and other seismic events. The frequency, type, and size of earthquakes in an area define its seismic activity, reflecting the average rate of seismic energy release. Significant historical earthquakes include the 1556 Shaanxi earthquake in China, with over 830,000 fatalities, and the 1960 Valdivia earthquake in Chile, the largest ever recorded at 9.5 magnitude. Earthquakes result in various effects, such as ground shaking and soil liquefaction, leading to significant damage and loss of life. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can trigger landslides. The occurrence of earthquakes is influenced by tectonic movements along faults, including normal, reverse (thrust), and strike-slip faults, with energy release and rupture dynamics governed by the elastic-rebound theory. Efforts to manage earthquake risks involve prediction, forecasting, and preparedness, including seismic retrofitting and earthquake engineering to design structures that withstand shaking. The cultural impact of earthquakes spans myths, religious beliefs, and modern media, reflecting their profound influence on human societies. Similar seismic phenomena, known as marsquakes and moonquakes, have been observed on other celestial bodies, indicating the universality of such events beyond Earth. Terminology An earthquake is the shaking of the surface of Earth resulting from a sudden release of energy in the lithosphere that creates seismic waves. Earthquakes may also be referred to as quakes, tremors, or temblors. The word tremor is also used for non-earthquake seismic rumbling. In its most general sense, an earthquake is any seismic event—whether natural or caused by humans—that generates seismic waves. Earthquakes are caused mostly by the rupture of geological faults but also by other events such as volcanic activity, landslides, mine blasts, fracking and nuclear tests. An earthquake's point of initial rupture is called its hypocenter or focus. The epicenter is the point at ground level directly above the hypocenter. The seismic activity of an area is the frequency, type, and size of earthquakes experienced over a particular time. The seismicity at a particular location in the Earth is the average rate of seismic energy release per unit volume. 
Major examples One of the most devastating earthquakes in recorded history was the 1556 Shaanxi earthquake, which occurred on 23 January 1556 in Shaanxi, China. More than 830,000 people died. Most houses in the area were yaodongs—dwellings carved out of loess hillsides—and many victims were killed when these structures collapsed. The 1976 Tangshan earthquake, which killed between 240,000 and 655,000 people, was the deadliest of the 20th century. The 1960 Chilean earthquake is the largest earthquake that has been measured on a seismograph, reaching 9.5 magnitude on 22 May 1960. Its epicenter was near Cañete, Chile. The energy released was approximately twice that of the next most powerful earthquake, the Good Friday earthquake (27 March 1964), which was centered in Prince William Sound, Alaska. The ten largest recorded earthquakes have all been megathrust earthquakes; however, of these ten, only the 2004 Indian Ocean earthquake is simultaneously one of the deadliest earthquakes in history. Earthquakes that caused the greatest loss of life, while powerful, were deadly because of their proximity to either heavily populated areas or the ocean, where earthquakes often create tsunamis that can devastate communities thousands of kilometers away. Regions most at risk for great loss of life include those where earthquakes are relatively rare but powerful, and poor regions with lax, unenforced, or nonexistent seismic building codes. Occurrence Tectonic earthquakes occur anywhere on the earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane. The sides of a fault move past each other smoothly and aseismically only if there are no irregularities or asperities along the fault surface that increases the frictional resistance. Most fault surfaces do have such asperities, which leads to a form of stick-slip behavior. Once the fault has locked, continued relative motion between the plates leads to increasing stress and, therefore, stored strain energy in the volume around the fault surface. This continues until the stress has risen sufficiently to break through the asperity, suddenly allowing sliding over the locked portion of the fault, releasing the stored energy. This energy is released as a combination of radiated elastic strain seismic waves, frictional heating of the fault surface, and cracking of the rock, thus causing an earthquake. This process of gradual build-up of strain and stress punctuated by occasional sudden earthquake failure is referred to as the elastic-rebound theory. It is estimated that only 10 percent or less of an earthquake's total energy is radiated as seismic energy. Most of the earthquake's energy is used to power the earthquake fracture growth or is converted into heat generated by friction. Therefore, earthquakes lower the Earth's available elastic potential energy and raise its temperature, though these changes are negligible compared to the conductive and convective flow of heat out from the Earth's deep interior. Fault types There are three main types of fault, all of which may cause an interplate earthquake: normal, reverse (thrust), and strike-slip. Normal and reverse faulting are examples of dip-slip, where the displacement along the fault is in the direction of dip and where movement on them involves a vertical component. Many earthquakes are caused by movement on faults that have components of both dip-slip and strike-slip; this is known as oblique slip. 
The topmost, brittle part of the Earth's crust, and the cool slabs of the tectonic plates that are descending into the hot mantle, are the only parts of our planet that can store elastic energy and release it in fault ruptures. Rocks hotter than about 300 °C flow in response to stress; they do not rupture in earthquakes. The maximum observed lengths of ruptures and mapped faults (which may break in a single rupture) are approximately 1,000 km. Examples are the earthquakes in Alaska (1957), Chile (1960), and Sumatra (2004), all in subduction zones. The longest earthquake ruptures on strike-slip faults, like the San Andreas Fault (1857, 1906), the North Anatolian Fault in Turkey (1939), and the Denali Fault in Alaska (2002), are about half to one third as long as the lengths along subducting plate margins, and those along normal faults are even shorter. Normal faults Normal faults occur mainly in areas where the crust is being extended such as a divergent boundary. Earthquakes associated with normal faults are generally less than magnitude 7. Maximum magnitudes along many normal faults are even more limited because many of them are located along spreading centers, as in Iceland, where the thickness of the brittle layer is only about 6 km. Reverse faults Reverse faults occur in areas where the crust is being shortened such as at a convergent boundary. Reverse faults, particularly those along convergent boundaries, are associated with the most powerful earthquakes (called megathrust earthquakes) including almost all of those of magnitude 8 or more. Megathrust earthquakes are responsible for about 90% of the total seismic moment released worldwide. Strike-slip faults Strike-slip faults are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike-slip fault. Strike-slip faults, particularly continental transforms, can produce major earthquakes up to about magnitude 8. Strike-slip faults tend to be oriented near vertically, resulting in an approximate width of 10 km within the brittle crust. Thus, earthquakes with magnitudes much larger than 8 are not possible. In addition, there exists a hierarchy of stress levels in the three fault types. Thrust faults are generated by the highest, strike-slip by intermediate, and normal faults by the lowest stress levels. This can easily be understood by considering the direction of the greatest principal stress, the direction of the force that "pushes" the rock mass during the faulting. In the case of normal faults, the rock mass is pushed down in a vertical direction, thus the pushing force (greatest principal stress) equals the weight of the rock mass itself. In the case of thrusting, the rock mass "escapes" in the direction of the least principal stress, namely upward, lifting the rock mass, and thus, the overburden equals the least principal stress. Strike-slip faulting is intermediate between the other two types described above. This difference in stress regime in the three faulting environments can contribute to differences in stress drop during faulting, which contributes to differences in the radiated energy, regardless of fault dimensions. Energy released For every unit increase in magnitude, there is a roughly thirty-fold increase in the energy released. For instance, an earthquake of magnitude 6.0 releases approximately 32 times more energy than a 5.0 magnitude earthquake and a 7.0 magnitude earthquake releases 1,000 times more energy than a 5.0 magnitude earthquake.
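The roughly 32-fold step in energy per unit of magnitude described above follows from the standard energy–magnitude relation log10 E ≈ 1.5 M + constant. A minimal Python sketch of that relation (the function name and the printed cases are illustrative, not taken from the article):

```python
def energy_ratio(m1: float, m2: float) -> float:
    """Ratio of seismic energy released by a magnitude-m2 event versus a
    magnitude-m1 event, using log10(E) = 1.5*M + const, i.e. the ~32-fold
    step per magnitude unit quoted above."""
    return 10 ** (1.5 * (m2 - m1))

print(round(energy_ratio(5.0, 6.0)))  # ~32 times the energy of a M5.0
print(round(energy_ratio(5.0, 7.0)))  # ~1000 times the energy of a M5.0
```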
An 8.6-magnitude earthquake releases the same amount of energy as 10,000 atomic bombs of the size used in World War II. This is so because the energy released in an earthquake, and thus its magnitude, is proportional to the area of the fault that ruptures and the stress drop. Therefore, the longer the length and the wider the width of the faulted area, the larger the resulting magnitude. The most important parameter controlling the maximum earthquake magnitude on a fault, however, is not the maximum available length, but the available width because the latter varies by a factor of 20. Along converging plate margins, the dip angle of the rupture plane is very shallow, typically about 10 degrees. Thus, the width of the plane within the top brittle crust of the Earth can reach 50 to 100 km (such as in Japan, 2011, or in Alaska, 1964), making the most powerful earthquakes possible. Focus The majority of tectonic earthquakes originate in the Ring of Fire at depths not exceeding tens of kilometers. Earthquakes occurring at a depth of less than 70 km are classified as "shallow-focus" earthquakes, while those with a focal depth between 70 and 300 km are commonly termed "mid-focus" or "intermediate-depth" earthquakes. In subduction zones, where older and colder oceanic crust descends beneath another tectonic plate, deep-focus earthquakes may occur at much greater depths (ranging from 300 up to 700 km). These seismically active areas of subduction are known as Wadati–Benioff zones. Deep-focus earthquakes occur at a depth where the subducted lithosphere should no longer be brittle, due to the high temperature and pressure. A possible mechanism for the generation of deep-focus earthquakes is faulting caused by olivine undergoing a phase transition into a spinel structure. Volcanic activity Earthquakes often occur in volcanic regions and are caused there, both by tectonic faults and the movement of magma in volcanoes. Such earthquakes can serve as an early warning of volcanic eruptions, as during the 1980 eruption of Mount St. Helens. Earthquake swarms can serve as markers for the location of the flowing magma throughout the volcanoes. These swarms can be recorded by seismometers and tiltmeters (devices that measure ground slope) and used as sensors to predict imminent or upcoming eruptions. Rupture dynamics A tectonic earthquake begins as an area of initial slip on the fault surface that forms the focus. Once the rupture has been initiated, it begins to propagate away from the focus, spreading out along the fault surface. Lateral propagation will continue until either the rupture reaches a barrier, such as the end of a fault segment, or a region on the fault where there is insufficient stress to allow continued rupture. For larger earthquakes, the depth extent of rupture will be constrained downwards by the brittle-ductile transition zone and upwards by the ground surface. The mechanics of this process are poorly understood because it is difficult either to recreate such rapid movements in a laboratory or to record seismic waves close to a nucleation zone due to strong ground motion. In most cases, the rupture speed approaches, but does not exceed, the shear wave (S wave) velocity of the surrounding rock. There are a few exceptions to this: Supershear earthquakes Supershear earthquake ruptures are known to have propagated at speeds greater than the S wave velocity. These have so far all been observed during large strike-slip events.
The unusually wide zone of damage caused by the 2001 Kunlun earthquake has been attributed to the effects of the sonic boom developed in such earthquakes. Slow earthquakes Slow earthquake ruptures travel at unusually low velocities. A particularly dangerous form of slow earthquake is the tsunami earthquake, observed where the relatively low felt intensities, caused by the slow propagation speed of some great earthquakes, fail to alert the population of the neighboring coast, as in the 1896 Sanriku earthquake. Co-seismic overpressuring and effect of pore pressure During an earthquake, high temperatures can develop at the fault plane, increasing pore pressure and consequently vaporization of the groundwater already contained within the rock. In the coseismic phase, such an increase can significantly affect slip evolution and speed; in the post-seismic phase it can control the aftershock sequence because, after the main event, the pore pressure increase slowly propagates into the surrounding fracture network. From the point of view of the Mohr-Coulomb strength theory, an increase in fluid pressure reduces the normal stress acting on the fault plane that holds it in place, and fluids can exert a lubricating effect. As thermal overpressurization may provide positive feedback between slip and strength fall at the fault plane, a common opinion is that it may enhance the faulting process instability. After the mainshock, the pressure gradient between the fault plane and the neighboring rock causes a fluid flow that increases pore pressure in the surrounding fracture networks; such an increase may trigger new faulting processes by reactivating adjacent faults, giving rise to aftershocks. Analogously, artificial pore pressure increase, by fluid injection in Earth's crust, may induce seismicity. Tidal forces Tides may trigger some seismicity. Clusters Most earthquakes form part of a sequence, related to each other in terms of location and time. Most earthquake clusters consist of small tremors that cause little to no damage, but there is a theory that earthquakes can recur in a regular pattern. Earthquake clustering has been observed, for example, in Parkfield, California, where a long-term research study is being conducted around the Parkfield earthquake cluster. Aftershocks An aftershock is an earthquake that occurs after a previous earthquake, the mainshock. Rapid changes of stress between rocks, and the stress from the original earthquake, are the main causes of these aftershocks, along with the crust around the ruptured fault plane as it adjusts to the effects of the mainshock. An aftershock is in the same region as the main shock but always of a smaller magnitude; however, aftershocks can still be powerful enough to cause even more damage to buildings that were already damaged in the mainshock. If an aftershock is larger than the mainshock, the aftershock is redesignated as the mainshock and the original main shock is redesignated as a foreshock. Aftershocks are formed as the crust around the displaced fault plane adjusts to the effects of the mainshock. Swarms Earthquake swarms are sequences of earthquakes striking in a specific area within a short period. They are different from earthquakes followed by a series of aftershocks by the fact that no single earthquake in the sequence is the main shock, so none has a notably higher magnitude than another. An example of an earthquake swarm is the 2004 activity at Yellowstone National Park.
In August 2012, a swarm of earthquakes shook Southern California's Imperial Valley, showing the most recorded activity in the area since the 1970s. Sometimes a series of earthquakes occur in what has been called an earthquake storm, where the earthquakes strike a fault in clusters, each triggered by the shaking or stress redistribution of the previous earthquakes. Similar to aftershocks but on adjacent segments of fault, these storms occur over the course of years, with some of the later earthquakes as damaging as the early ones. Such a pattern was observed in the sequence of about a dozen earthquakes that struck the North Anatolian Fault in Turkey in the 20th century and has been inferred for older anomalous clusters of large earthquakes in the Middle East. Frequency It is estimated that around 500,000 earthquakes occur each year, detectable with current instrumentation. About 100,000 of these can be felt. Minor earthquakes occur very frequently around the world in places like California and Alaska in the U.S., as well as in El Salvador, Mexico, Guatemala, Chile, Peru, Indonesia, the Philippines, Iran, Pakistan, the Azores in Portugal, Turkey, New Zealand, Greece, Italy, India, Nepal, and Japan. Larger earthquakes occur less frequently, the relationship being exponential; for example, roughly ten times as many earthquakes larger than magnitude 4 occur as earthquakes larger than magnitude 5. In the (low seismicity) United Kingdom, for example, it has been calculated that the average recurrences are: an earthquake of 3.7–4.6 every year, an earthquake of 4.7–5.5 every 10 years, and an earthquake of 5.6 or larger every 100 years. This is an example of the Gutenberg–Richter law. The number of seismic stations has increased from about 350 in 1931 to many thousands today. As a result, many more earthquakes are reported than in the past, but this is because of the vast improvement in instrumentation, rather than an increase in the number of earthquakes. The United States Geological Survey (USGS) estimates that, since 1900, there have been an average of 18 major earthquakes (magnitude 7.0–7.9) and one great earthquake (magnitude 8.0 or greater) per year, and that this average has been relatively stable. In recent years, the number of major earthquakes per year has decreased, though this is probably a statistical fluctuation rather than a systematic trend. More detailed statistics on the size and frequency of earthquakes are available from the United States Geological Survey. A recent increase in the number of major earthquakes has been noted, which could be explained by a cyclical pattern of periods of intense tectonic activity, interspersed with longer periods of low intensity. However, accurate recordings of earthquakes only began in the early 1900s, so it is too early to categorically state that this is the case. Most of the world's earthquakes (90%, and 81% of the largest) take place in the 40,000-km-long, horseshoe-shaped zone called the circum-Pacific seismic belt, known as the Pacific Ring of Fire, which for the most part bounds the Pacific plate. Massive earthquakes tend to occur along other plate boundaries too, such as along the Himalayan Mountains. With the rapid growth of mega-cities such as Mexico City, Tokyo, and Tehran in areas of high seismic risk, some seismologists are warning that a single earthquake may claim the lives of up to three million people.
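The ten-fold drop in event counts per unit of magnitude mentioned above is the Gutenberg–Richter law, N(≥M) = 10^(a − bM) with b close to 1. A minimal sketch; the a and b values below are placeholders, not fitted to any real earthquake catalogue:

```python
def expected_count(magnitude: float, a: float = 5.0, b: float = 1.0) -> float:
    """Gutenberg-Richter law: expected number of events of at least the given
    magnitude per unit time, N = 10**(a - b*M). The a and b values here are
    placeholders chosen only to show the ten-fold step per unit (b ~ 1)."""
    return 10 ** (a - b * magnitude)

print(expected_count(4.0))  # 10.0 -> ten times as many events of M >= 4 ...
print(expected_count(5.0))  # 1.0  ... as events of M >= 5
```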
Induced seismicity While most earthquakes are caused by the movement of the Earth's tectonic plates, human activity can also produce earthquakes. Activities both above ground and below may change the stresses and strains on the crust, including building reservoirs, extracting resources such as coal or oil, and injecting fluids underground for waste disposal or fracking. Most of these earthquakes have small magnitudes. The 5.7 magnitude 2011 Oklahoma earthquake is thought to have been caused by disposing wastewater from oil production into injection wells, and studies point to the state's oil industry as the cause of other earthquakes in the past century. A Columbia University paper suggested that the 8.0 magnitude 2008 Sichuan earthquake was induced by loading from the Zipingpu Dam, though the link has not been conclusively proved. Measurement and location The instrumental scales used to describe the size of an earthquake began with the Richter scale in the 1930s. It is a relatively simple measurement of an event's amplitude, and its use has become minimal in the 21st century. Seismic waves travel through the Earth's interior and can be recorded by seismometers at great distances. The surface-wave magnitude was developed in the 1950s as a means to measure remote earthquakes and to improve the accuracy for larger events. The moment magnitude scale not only measures the amplitude of the shock but also takes into account the seismic moment (total rupture area, average slip of the fault, and rigidity of the rock). The Japan Meteorological Agency seismic intensity scale, the Medvedev–Sponheuer–Karnik scale, and the Mercalli intensity scale are based on the observed effects and are related to the intensity of shaking. Intensity and magnitude The shaking of the earth is a common phenomenon that has been experienced by humans from the earliest of times. Before the development of strong-motion accelerometers, the intensity of a seismic event was estimated based on the observed effects. Magnitude and intensity are not directly related and calculated using different methods. The magnitude of an earthquake is a single value that describes the size of the earthquake at its source. Intensity is the measure of shaking at different locations around the earthquake. Intensity values vary from place to place, depending on the distance from the earthquake and the underlying rock or soil makeup. The first scale for measuring earthquake magnitudes was developed by Charles Francis Richter in 1935. Subsequent scales (seismic magnitude scales) have retained a key feature, where each unit represents a ten-fold difference in the amplitude of the ground shaking and a 32-fold difference in energy. Subsequent scales are also adjusted to have approximately the same numeric value within the limits of the scale. Although the mass media commonly reports earthquake magnitudes as "Richter magnitude" or "Richter scale", standard practice by most seismological authorities is to express an earthquake's strength on the moment magnitude scale, which is based on the actual energy released by an earthquake, the static seismic moment. Seismic waves Every earthquake produces different types of seismic waves, which travel through rock with different velocities: Longitudinal P waves (shock- or pressure waves) Transverse S waves (both body waves) Surface waves – (Rayleigh and Love waves) Speed of seismic waves Propagation velocity of the seismic waves through solid rock ranges from approx. 
3 km/s up to 13 km/s, depending on the density and elasticity of the medium. In the Earth's interior, the shock- or P waves travel much faster than the S waves (approx. relation 1.7:1). The differences in travel time from the epicenter to the observatory are a measure of the distance and can be used to image both sources of earthquakes and structures within the Earth. Also, the depth of the hypocenter can be computed roughly. P wave speed Upper crust soils and unconsolidated sediments: 2 to 3 km per second Upper crust solid rock: 3 to 6 km per second Lower crust: 6 to 7 km per second Deep mantle: 13 km per second. S waves speed Light sediments: 2 to 3 km per second Earth's crust: 4 to 5 km per second Deep mantle: 7 km per second Seismic wave arrival As a consequence, the first waves of a distant earthquake arrive at an observatory via the Earth's mantle. On average, the distance to the earthquake in kilometers is roughly the number of seconds between the P- and S-wave arrivals multiplied by 8. Slight deviations are caused by inhomogeneities of subsurface structure. By such analysis of seismograms, the Earth's core was located in 1913 by Beno Gutenberg. S waves and later arriving surface waves do most of the damage compared to P waves. P waves squeeze and expand the material in the same direction they are traveling, whereas S waves shake the ground up and down and back and forth. Location and reporting Earthquakes are not only categorized by their magnitude but also by the place where they occur. The world is divided into 754 Flinn–Engdahl regions (F-E regions), which are based on political and geographical boundaries as well as seismic activity. More active zones are divided into smaller F-E regions whereas less active zones belong to larger F-E regions. Standard reporting of an earthquake includes its magnitude, date and time of occurrence, geographic coordinates of its epicenter, focal depth, geographical region, distances to population centers, location uncertainty, several parameters that are included in USGS earthquake reports (number of stations reporting, number of observations, etc.), and a unique event ID. Although relatively slow seismic waves have traditionally been used to detect earthquakes, scientists realized in 2016 that gravitational measurement could provide instantaneous detection of earthquakes, and confirmed this by analyzing gravitational records associated with the 2011 Tohoku-Oki ("Fukushima") earthquake. Effects The effects of earthquakes include, but are not limited to, the following: Shaking and ground rupture Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude, the distance from the epicenter, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation. The ground-shaking is measured by ground acceleration. Specific local geological, geomorphological, and geostructural features can induce high levels of shaking on the ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of the seismic motion from hard deep soils to soft superficial soils and the effects of seismic energy focalization owing to the typical geometrical setting of such deposits.
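The "multiply by 8" rule of thumb quoted in the seismic wave arrival passage above can be written as a one-line helper. A minimal sketch; real locations are computed from travel-time tables and readings at several stations:

```python
def distance_from_sp_time(sp_seconds: float) -> float:
    """Rough epicentral distance in kilometers from the delay, in seconds,
    between the P-wave and S-wave arrivals, using the rule of thumb above."""
    return 8.0 * sp_seconds

print(distance_from_sp_time(10.0))  # ~80 km for a 10 s S-minus-P delay
print(distance_from_sp_time(60.0))  # ~480 km for a 60 s delay
```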
Ground rupture is a visible breaking and displacement of the Earth's surface along the trace of the fault, which may be of the order of several meters in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams, bridges, and nuclear power stations and requires careful mapping of existing faults to identify any that are likely to break the ground surface within the life of the structure. Soil liquefaction Soil liquefaction occurs when, because of the shaking, water-saturated granular material (such as sand) temporarily loses its strength and transforms from a solid to a liquid. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves. Human impacts Physical damage from an earthquake will vary depending on the intensity of shaking in a given area and the type of population. Underserved and developing communities frequently experience more severe and longer-lasting impacts from a seismic event compared to well-developed communities. Impacts may include: Injuries and loss of life Damage to critical infrastructure (short and long-term) Roads, bridges, and public transportation networks Water, power, sewer and gas interruption Communication systems Loss of critical community services including hospitals, police, and fire General property damage Collapse or destabilization (potentially leading to future collapse) of buildings With these impacts and others, the aftermath may bring disease, a lack of basic necessities, mental consequences such as panic attacks and depression to survivors, and higher insurance premiums. Recovery times will vary based on the level of damage and the socioeconomic status of the impacted community. Landslides Earthquakes can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue work. Fires Earthquakes can cause fires by damaging electrical power or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake were caused by fire than by the earthquake itself. Tsunami Tsunamis are long-wavelength, long-period sea waves produced by the sudden or abrupt movement of large volumes of water—including when an earthquake occurs at sea. In the open ocean, the distance between wave crests can surpass 100 km, and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600–800 kilometers per hour (373–497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes. Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them. Ordinarily, subduction earthquakes under magnitude 7.5 do not cause tsunamis, although some instances of this have been recorded. Most destructive tsunamis are caused by earthquakes of magnitude 7.5 or more. Floods Floods may be secondary effects of earthquakes if dams are damaged. Earthquakes may cause landslips to dam rivers, which collapse and cause floods.
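The 600–800 km/h tsunami speeds quoted in the tsunami passage above are consistent with the standard shallow-water (long-wave) approximation, speed ≈ sqrt(g × depth). That formula is not given in the article; it is added here only as an illustrative cross-check against the quoted range:

```python
import math

def tsunami_speed_kmh(depth_m: float, g: float = 9.81) -> float:
    """Long-wave speed sqrt(g*d), converted from m/s to km/h."""
    return math.sqrt(g * depth_m) * 3.6

print(round(tsunami_speed_kmh(3000)))  # ~618 km/h over a 3 km deep ocean
print(round(tsunami_speed_kmh(5000)))  # ~797 km/h over a 5 km deep ocean
```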
The terrain below the Sarez Lake in Tajikistan is in danger of catastrophic flooding if the landslide dam formed by the earthquake, known as the Usoi Dam, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly five million people. Management Prediction Earthquake prediction is a branch of the science of seismology concerned with the specification of the time, location, and magnitude of future earthquakes within stated limits. Many methods have been developed for predicting the time and place in which earthquakes will occur. Despite considerable research efforts by seismologists, scientifically reproducible predictions cannot yet be made to a specific day or month. Popular belief holds that earthquakes are preceded by earthquake weather, or that they strike in the early morning. Forecasting While forecasting is usually considered to be a type of prediction, earthquake forecasting is often differentiated from earthquake prediction. Earthquake forecasting is concerned with the probabilistic assessment of general earthquake hazards, including the frequency and magnitude of damaging earthquakes in a given area over years or decades. For well-understood faults the probability that a segment may rupture during the next few decades can be estimated. Earthquake warning systems have been developed that can provide regional notification of an earthquake in progress, but before the ground surface has begun to move, potentially allowing people within the system's range to seek shelter before the earthquake's impact is felt. Preparedness The objective of earthquake engineering is to foresee the impact of earthquakes on buildings, bridges, tunnels, roadways, and other structures, and to design such structures to minimize the risk of damage. Existing structures can be modified by seismic retrofitting to improve their resistance to earthquakes. Earthquake insurance can provide building owners with financial protection against losses resulting from earthquakes. Emergency management strategies can be employed by a government or organization to mitigate risks and prepare for consequences. Artificial intelligence may help to assess buildings and plan precautionary operations. The Igor expert system is part of a mobile laboratory that supports the procedures leading to the seismic assessment of masonry buildings and the planning of retrofitting operations on them. It has been applied to assess buildings in Lisbon, Rhodes, and Naples. Individuals can also take preparedness steps like securing water heaters and heavy items that could injure someone, locating shutoffs for utilities, and being educated about what to do when the shaking starts. For areas near large bodies of water, earthquake preparedness encompasses the possibility of a tsunami caused by a large earthquake. In culture Historical views From the lifetime of the Greek philosopher Anaxagoras in the 5th century BCE to the 14th century CE, earthquakes were usually attributed to "air (vapors) in the cavities of the Earth." Thales of Miletus (625–547 BCE) was the only documented person who believed that earthquakes were caused by tension between the earth and water. Other theories existed, including the Greek philosopher Anaximenes' (585–526 BCE) belief that short episodes of dryness and wetness caused seismic activity. The Greek philosopher Democritus (460–371 BCE) blamed water in general for earthquakes. Pliny the Elder called earthquakes "underground thunderstorms".
Mythology and religion In Norse mythology, earthquakes were explained as the violent struggle of the god Loki. When Loki, god of mischief and strife, murdered Baldr, god of beauty and light, he was punished by being bound in a cave with a poisonous serpent placed above his head dripping venom. Loki's wife Sigyn stood by him with a bowl to catch the poison, but whenever she had to empty the bowl, the poison dripped on Loki's face, forcing him to jerk his head away and thrash against his bonds, which caused the earth to tremble. In Greek mythology, Poseidon was the cause and god of earthquakes. When he was in a bad mood, he struck the ground with a trident, causing earthquakes and other calamities. He also used earthquakes to punish and inflict fear upon people as revenge. In Japanese mythology, Namazu (鯰) is a giant catfish who causes earthquakes. Namazu lives in the mud beneath the earth and is guarded by the god Kashima, who restrains the fish with a stone. When Kashima lets his guard fall, Namazu thrashes about, causing violent earthquakes. In the New Testament, Matthew's Gospel refers to earthquakes occurring both after the death of Jesus (Matthew 27:51, 54) and at his resurrection (Matthew 28:2). Earthquakes form part of the picture through which Jesus portrays the beginning of the end of time. In popular culture In modern popular culture, the portrayal of earthquakes is shaped by the memory of great cities laid waste, such as Kobe in 1995 or San Francisco in 1906. Fictional earthquakes tend to strike suddenly and without warning. For this reason, stories about earthquakes generally begin with the disaster and focus on its immediate aftermath, as in Short Walk to Daylight (1972), The Ragged Edge (1968) or Aftershock: Earthquake in New York (1999). A notable example is Heinrich von Kleist's classic novella, The Earthquake in Chile, which describes the destruction of Santiago in 1647. Haruki Murakami's short fiction collection After the Quake depicts the consequences of the Kobe earthquake of 1995. The most popular single earthquake in fiction is the hypothetical "Big One" expected of California's San Andreas Fault someday, as depicted in the novels Richter 10 (1996), Goodbye California (1977), 2012 (2009), and San Andreas (2015), among other works. Jacob M. Appel's widely anthologized short story, A Comparative Seismology, features a con artist who convinces an elderly woman that an apocalyptic earthquake is imminent. Contemporary depictions of earthquakes in film are variable in the manner in which they reflect human psychological reactions to the actual trauma that can be caused to directly afflicted families and their loved ones. Disaster mental health response research emphasizes the need to be aware of the different roles of loss of family and key community members, loss of home and familiar surroundings, and loss of essential supplies and services to maintain survival. Particularly for children, the clear availability of caregiving adults who can protect, nourish, and clothe them in the aftermath of the earthquake and help them make sense of what has befallen them is more important to their emotional and physical health than the simple giving of provisions. 
As was observed after other disasters involving destruction and loss of life and their media depictions, most recently the 2010 Haiti earthquake, it is also believed to be important not to pathologize the reactions to loss and displacement or disruption of governmental administration and services, but rather to validate the reactions to support constructive problem-solving and reflection. Outside of Earth Phenomena similar to earthquakes have been observed on other planets (e.g., marsquakes on Mars) and on the Moon (e.g., moonquakes). See also Lists of earthquakes References Sources Deborah R. Coen. The Earthquake Observers: Disaster Science From Lisbon to Richter (University of Chicago Press; 2012) 348 pages; explores both scientific and popular coverage Further reading External links Earthquake Hazards Program of the U.S. Geological Survey IRIS Seismic Monitor – IRIS Consortium Geological hazards Lithosphere Natural disasters Seismology
Earthquake
[ "Physics" ]
7,641
[ "Weather", "Physical phenomena", "Natural disasters" ]
10,116
https://en.wikipedia.org/wiki/Endocytosis
Endocytosis is a cellular process in which substances are brought into the cell. The material to be internalized is surrounded by an area of cell membrane, which then buds off inside the cell to form a vesicle containing the ingested materials. Endocytosis includes pinocytosis (cell drinking) and phagocytosis (cell eating). It is a form of active transport. History The term was proposed by De Duve in 1963. Phagocytosis was discovered by Élie Metchnikoff in 1882. Pathways Endocytosis pathways can be subdivided into four categories: namely, receptor-mediated endocytosis (also known as clathrin-mediated endocytosis), caveolae, pinocytosis, and phagocytosis. Clathrin-mediated endocytosis is mediated by the production of small (approx. 100 nm in diameter) vesicles that have a morphologically characteristic coat made up of the cytosolic protein clathrin. Clathrin-coated vesicles (CCVs) are found in virtually all cells and form domains of the plasma membrane termed clathrin-coated pits. Coated pits can concentrate large extracellular molecules that have different receptors responsible for the receptor-mediated endocytosis of ligands, e.g. low density lipoprotein, transferrin, growth factors, antibodies and many others. A study in mammalian cells confirms a reduction in clathrin coat size in an increased tension environment. In addition, it suggests that the two apparently distinct clathrin assembly modes, namely coated pits and coated plaques, observed in experimental investigations might be a consequence of varied tensions in the plasma membrane. Caveolae are the most commonly reported non-clathrin-coated plasma membrane buds, which exist on the surface of many, but not all cell types. They consist of the cholesterol-binding protein caveolin (Vip21) with a bilayer enriched in cholesterol and glycolipids. Caveolae are small (approx. 50 nm in diameter) flask-shaped pits in the membrane that resemble the shape of a cave (hence the name caveolae). They can constitute up to a third of the plasma membrane area of the cells of some tissues, being especially abundant in smooth muscle, type I pneumocytes, fibroblasts, adipocytes, and endothelial cells. Uptake of extracellular molecules is also believed to be specifically mediated via receptors in caveolae. Potocytosis is a form of receptor-mediated endocytosis that uses caveolae vesicles to bring molecules of various sizes into the cell. Unlike most endocytosis that uses caveolae to deliver contents of vesicles to lysosomes or other organelles, material endocytosed via potocytosis is released into the cytosol. Pinocytosis, which usually occurs from highly ruffled regions of the plasma membrane, is the invagination of the cell membrane to form a pocket, which then pinches off into the cell to form a vesicle (0.5–5 μm in diameter) filled with a large volume of extracellular fluid and molecules within it (equivalent to ~100 CCVs). The filling of the pocket occurs in a non-specific manner. The vesicle then travels into the cytosol and fuses with other vesicles such as endosomes and lysosomes. Phagocytosis is the process by which cells bind and internalize particulate matter larger than around 0.75 μm in diameter, such as small-sized dust particles, cell debris, microorganisms and apoptotic cells. These processes involve the uptake of larger membrane areas than the clathrin-mediated endocytosis and caveolae pathways.
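The "~100 CCVs" equivalence quoted above can be checked from the diameters given in the same paragraph (a ~100 nm clathrin-coated vesicle versus a pinocytotic vesicle at the small, 0.5 μm end of its range). A quick sphere-volume sketch, added only as an arithmetic illustration:

```python
import math

def sphere_volume(diameter_nm: float) -> float:
    """Volume in cubic nanometers of a sphere with the given diameter."""
    radius = diameter_nm / 2
    return 4 / 3 * math.pi * radius ** 3

ccv = sphere_volume(100)       # clathrin-coated vesicle, ~100 nm across
pinosome = sphere_volume(500)  # pinocytotic vesicle at the 0.5 um end of its range

print(round(pinosome / ccv))   # 125 -> on the order of ~100 CCVs worth of fluid
```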
More recent experiments have suggested that these morphological descriptions of endocytic events may be inadequate, and a more appropriate method of classification may be based upon whether particular pathways are dependent on clathrin and dynamin. Dynamin-dependent clathrin-independent pathways include FEME, UFE, ADBE, EGFR-NCE and IL2Rβ uptake. Dynamin-independent clathrin-independent pathways include the CLIC/GEEC pathway (regulated by Graf1), as well as MEND and macropinocytosis. Clathrin-mediated endocytosis is the only pathway dependent on both clathrin and dynamin. Principal components The endocytic pathway of mammalian cells consists of distinct membrane compartments, which internalize molecules from the plasma membrane and recycle them back to the surface (as in early endosomes and recycling endosomes), or sort them to degradation (as in late endosomes and lysosomes). The principal components of the endocytic pathway are: Early endosomes are the first compartment of the endocytic pathway. Early endosomes are often located in the periphery of the cell, and receive most types of vesicles coming from the cell surface. They have a characteristic tubulo-vesicular structure (vesicles up to 1 μm in diameter with connected tubules of approx. 50 nm diameter) and a mildly acidic pH. They are principally sorting organelles where many endocytosed ligands dissociate from their receptors in the acid pH of the compartment, and from which many of the receptors recycle to the cell surface (via tubules). It is also the site of sorting into transcytotic pathway to later compartments (like late endosomes or lysosomes) via transvesicular compartments (like multivesicular bodies (MVB) or endosomal carrier vesicles (ECVs)). Late endosomes receive endocytosed material en route to lysosomes, usually from early endosomes in the endocytic pathway, from trans-Golgi network (TGN) in the biosynthetic pathway, and from phagosomes in the phagocytic pathway. Late endosomes often contain proteins characteristic of nucleosomes, mitochondria and mRNAs including lysosomal membrane glycoproteins and acid hydrolases. They are acidic (approx. pH 5.5), and are part of the trafficking pathway of mannose-6-phosphate receptors. Late endosomes are thought to mediate a final set of sorting events prior the delivery of material to lysosomes. Lysosomes are the last compartment of the endocytic pathway. Their chief function is to break down cellular waste products, fats, carbohydrates, proteins, and other macromolecules into simple compounds. These are then returned to the cytoplasm as new cell-building materials. To accomplish this, lysosomes use some 40 different types of hydrolytic enzymes, all of which are manufactured in the endoplasmic reticulum, modified in the Golgi apparatus and function in an acidic environment. The approximate pH of a lysosome is 4.8 and by electron microscopy (EM) usually appear as large vacuoles (1-2 μm in diameter) containing electron dense material. They have a high content of lysosomal membrane proteins and active lysosomal hydrolases, but no mannose-6-phosphate receptor. They are generally regarded as the principal hydrolytic compartment of the cell. It was recently found that an eisosome serves as a portal of endocytosis in yeast. Clathrin-mediated The major route for endocytosis in most cells, and the best-understood, is that mediated by the molecule clathrin. This large protein assists in the formation of a coated pit on the inner surface of the plasma membrane of the cell. 
This pit then buds into the cell to form a coated vesicle in the cytoplasm of the cell. In so doing, it brings into the cell not only a small area of the surface of the cell but also a small volume of fluid from outside the cell. Coats function to deform the donor membrane to produce a vesicle, and they also function in the selection of the vesicle cargo. Coat complexes that have been well characterized so far include coat protein-I (COP-I), COP-II, and clathrin. Clathrin coats are involved in two crucial transport steps: (i) receptor-mediated and fluid-phase endocytosis from the plasma membrane to early endosome and (ii) transport from the TGN to endosomes. In endocytosis, the clathrin coat is assembled on the cytoplasmic face of the plasma membrane, forming pits that invaginate to pinch off (scission) and become free CCVs. In cultured cells, the assembly of a CCV takes ~1 min, and several hundred to a thousand or more can form every minute. The main scaffold component of clathrin coat is the 190-kD protein called clathrin heavy chain (CHC), which is associated with a 25-kD protein called clathrin light chain (CLC), forming three-legged trimers called triskelions. Vesicles selectively concentrate and exclude certain proteins during formation and are not representative of the membrane as a whole. AP2 adaptors are multisubunit complexes that perform this function at the plasma membrane. The best-understood receptors that are found concentrated in coated vesicles of mammalian cells are the LDL receptor (which removes LDL from circulating blood), the transferrin receptor (which brings ferric ions bound by transferrin into the cell) and certain hormone receptors (such as that for EGF). At any one moment, about 2% of the plasma membrane of a fibroblast is made up of coated pits. As a coated pit has a life of about a minute before it buds into the cell, a fibroblast takes up its surface by this route about once every 50 minutes. Coated vesicles formed from the plasma membrane have a diameter of about 100 nm and a lifetime measured in a few seconds. Once the coat has been shed, the remaining vesicle fuses with endosomes and proceeds down the endocytic pathway. The actual budding-in process, whereby a pit is converted to a vesicle, is carried out by clathrin, assisted by a set of cytoplasmic proteins, which includes dynamin and adaptors such as adaptin. Coated pits and vesicles were first seen in thin sections of tissue in the electron microscope by Thomas F. Roth and Keith R. Porter. Their importance for the clearance of LDL from blood was discovered by Richard G. Anderson, Michael S. Brown and Joseph L. Goldstein in 1977. Coated vesicles were first purified by Barbara Pearse, who discovered the clathrin coat molecule in 1976. Processes and components Caveolin proteins like caveolin-1 (CAV1), caveolin-2 (CAV2), and caveolin-3 (CAV3) play significant roles in the caveolar formation process. More specifically, CAV1 and CAV2 are responsible for caveolae formation in non-muscle cells while CAV3 functions in muscle cells. The process starts with CAV1 being synthesized in the ER where it forms detergent-resistant oligomers. Then, these oligomers travel through the Golgi complex before arriving at the cell surface to aid in caveolar formation. Caveolae formation is also reversible through disassembly under certain conditions such as increased plasma membrane tension. These certain conditions then depend on the type of tissues that are expressing the caveolar function.
For example, not all tissues that have caveolar proteins have a caveolar structure, such as the blood-brain barrier. Though there are many morphological features conserved among caveolae, the functions of each CAV protein are diverse. One common feature among caveolins is their hydrophobic stretches of potential hairpin structures that are made of α-helices. The insertion of these hairpin-like α-helices forms a caveolae coat which leads to membrane curvature. In addition to insertion, caveolins are also capable of oligomerization which further plays a role in membrane curvature. Recent studies have also discovered that polymerase I and transcript release factor, as well as serum deprivation protein response, also play a role in the assembly of caveolae. Besides caveolae assembly, researchers have also discovered that CAV1 proteins can also influence other endocytic pathways. When CAV1 binds to Cdc42, CAV1 inactivates it and regulates Cdc42 activity during membrane trafficking events. Mechanisms The process of cell uptake depends on the tilt and chirality of constituent molecules to induce membrane budding. Since such chiral and tilted lipid molecules are likely to be in a "raft" form, researchers suggest that caveolae formation also follows this mechanism since caveolae are also enriched in raft constituents. When caveolin proteins bind to the inner leaflet via cholesterol, the membrane starts to bend, leading to spontaneous curvature. This effect is due to the force distribution generated when the caveolin oligomer binds to the membrane. The force distribution then alters the tension of the membrane which leads to budding and eventually vesicle formation. Gallery See also References Further reading External links Endocytosis at biologyreference.com Endocytosis - researching endocytic mechanisms at endocytosis.org Clathrin-mediated endocytosis ASCB Image & Video Library Types of Endocytosis (Animation) Cellular processes Membrane biology Cell anatomy
Endocytosis
[ "Chemistry", "Biology" ]
2,865
[ "Membrane biology", "Cellular processes", "Molecular biology" ]
10,134
https://en.wikipedia.org/wiki/Electromagnetic%20spectrum
The electromagnetic spectrum is the full range of electromagnetic radiation, organized by frequency or wavelength. The spectrum is divided into separate bands, with different names for the electromagnetic waves within each band. From low to high frequency these are: radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The electromagnetic waves in each of these bands have different characteristics, such as how they are produced, how they interact with matter, and their practical applications. Radio waves, at the low-frequency end of the spectrum, have the lowest photon energy and the longest wavelengths—thousands of kilometers, or more. They can be emitted and received by antennas, and pass through the atmosphere, foliage, and most building materials. Gamma rays, at the high-frequency end of the spectrum, have the highest photon energies and the shortest wavelengths—much smaller than an atomic nucleus. Gamma rays, X-rays, and extreme ultraviolet rays are called ionizing radiation because their high photon energy is able to ionize atoms, causing chemical reactions. Longer-wavelength radiation such as visible light is nonionizing; the photons do not have sufficient energy to ionize atoms. Throughout most of the electromagnetic spectrum, spectroscopy can be used to separate waves of different frequencies, so that the intensity of the radiation can be measured as a function of frequency or wavelength. Spectroscopy is used to study the interactions of electromagnetic waves with matter. History and discovery Humans have always been aware of visible light and radiant heat but for most of history it was not known that these phenomena were connected or were representatives of a more extensive principle. The ancient Greeks recognized that light traveled in straight lines and studied some of its properties, including reflection and refraction. Light was intensively studied from the beginning of the 17th century leading to the invention of important instruments like the telescope and microscope. Isaac Newton was the first to use the term spectrum for the range of colours that white light could be split into with a prism. Starting in 1666, Newton showed that these colours were intrinsic to light and could be recombined into white light. A debate arose over whether light had a wave nature or a particle nature with René Descartes, Robert Hooke and Christiaan Huygens favouring a wave description and Newton favouring a particle description. Huygens in particular had a well developed theory from which he was able to derive the laws of reflection and refraction. Around 1801, Thomas Young measured the wavelength of a light beam with his two-slit experiment thus conclusively demonstrating that light was a wave. In 1800, William Herschel discovered infrared radiation. He was studying the temperature of different colours by moving a thermometer through light split by a prism. He noticed that the highest temperature was beyond red. He theorized that this temperature change was due to "calorific rays", a type of light ray that could not be seen. The next year, Johann Ritter, working at the other end of the spectrum, noticed what he called "chemical rays" (invisible light rays that induced certain chemical reactions). These behaved similarly to visible violet light rays, but were beyond them in the spectrum. They were later renamed ultraviolet radiation. 
The study of electromagnetism began in 1820 when Hans Christian Ørsted discovered that electric currents produce magnetic fields (Oersted's law). Light was first linked to electromagnetism in 1845, when Michael Faraday noticed that the polarization of light traveling through a transparent material responded to a magnetic field (see Faraday effect). During the 1860s, James Clerk Maxwell developed four partial differential equations (Maxwell's equations) for the electromagnetic field. Two of these equations predicted the possibility and behavior of waves in the field. Analyzing the speed of these theoretical waves, Maxwell realized that they must travel at a speed that was about the known speed of light. This startling coincidence in value led Maxwell to make the inference that light itself is a type of electromagnetic wave. Maxwell's equations predicted an infinite range of frequencies of electromagnetic waves, all traveling at the speed of light. This was the first indication of the existence of the entire electromagnetic spectrum. Maxwell's predicted waves included waves at very low frequencies compared to infrared, which in theory might be created by oscillating charges in an ordinary electrical circuit of a certain type. Attempting to prove Maxwell's equations and detect such low frequency electromagnetic radiation, in 1886, the physicist Heinrich Hertz built an apparatus to generate and detect what are now called radio waves. Hertz found the waves and was able to infer (by measuring their wavelength and multiplying it by their frequency) that they traveled at the speed of light. Hertz also demonstrated that the new radiation could be both reflected and refracted by various dielectric media, in the same manner as light. For example, Hertz was able to focus the waves using a lens made of tree resin. In a later experiment, Hertz similarly produced and measured the properties of microwaves. These new types of waves paved the way for inventions such as the wireless telegraph and the radio. In 1895, Wilhelm Röntgen noticed a new type of radiation emitted during an experiment with an evacuated tube subjected to a high voltage. He called this radiation "x-rays" and found that they were able to travel through parts of the human body but were reflected or stopped by denser matter such as bones. Before long, many uses were found for this radiography. The last portion of the electromagnetic spectrum was filled in with the discovery of gamma rays. In 1900, Paul Villard was studying the radioactive emissions of radium when he identified a new type of radiation that he at first thought consisted of particles similar to known alpha and beta particles, but with the power of being far more penetrating than either. However, in 1910, British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914, Ernest Rutherford (who had named them gamma rays in 1903 when he realized that they were fundamentally different from charged alpha and beta particles) and Edward Andrade measured their wavelengths, and found that gamma rays were similar to X-rays, but with shorter wavelengths. The wave-particle debate was rekindled in 1901 when Max Planck discovered that light is absorbed only in discrete "quanta", now called photons, implying that light has a particle nature. This idea was made explicit by Albert Einstein in 1905, but never accepted by Planck and many other contemporaries. 
The modern position of science is that electromagnetic radiation has both a wave and a particle nature, the wave-particle duality. The contradictions arising from this position are still being debated by scientists and philosophers. Range Electromagnetic waves are typically described by any of the following three physical properties: the frequency f, wavelength λ, or photon energy E. Frequencies observed in astronomy range from around 2.4×10²³ Hz (1 GeV gamma rays) down to the local plasma frequency of the ionized interstellar medium (~1 kHz). Wavelength is inversely proportional to the wave frequency, so gamma rays have very short wavelengths that are fractions of the size of atoms, whereas wavelengths on the opposite end of the spectrum can be indefinitely long. Photon energy is directly proportional to the wave frequency, so gamma ray photons have the highest energy (around a billion electron volts), while radio wave photons have very low energy (around a femtoelectronvolt). These relations are illustrated by the following equations: f = c/λ, E = hf, and E = hc/λ, where: c is the speed of light in vacuum h is the Planck constant. Whenever electromagnetic waves travel in a medium with matter, their wavelength is decreased. Wavelengths of electromagnetic radiation, whatever medium they are traveling through, are usually quoted in terms of the vacuum wavelength, although this is not always explicitly stated. Generally, electromagnetic radiation is classified by wavelength into radio wave, microwave, infrared, visible light, ultraviolet, X-rays and gamma rays. The behavior of EM radiation depends on its wavelength. When EM radiation interacts with single atoms and molecules, its behavior also depends on the amount of energy per quantum (photon) it carries. Spectroscopy can detect a much wider region of the EM spectrum than the visible wavelength range of 400 nm to 700 nm in a vacuum. A common laboratory spectroscope can detect wavelengths from 2 nm to 2500 nm. Detailed information about the physical properties of objects, gases, or even stars can be obtained from this type of device. Spectroscopes are widely used in astrophysics. For example, many hydrogen atoms emit a radio wave photon that has a wavelength of 21.12 cm. Also, frequencies of 30 Hz and below can be produced by and are important in the study of certain stellar nebulae and frequencies as high as 2.9×10²⁷ Hz have been detected from astrophysical sources. Regions The types of electromagnetic radiation are broadly classified into the following classes (regions, bands or types): Gamma radiation X-ray radiation Ultraviolet radiation Visible light (light that humans can see) Infrared radiation Microwave radiation Radio waves This classification goes in the increasing order of wavelength, which is characteristic of the type of radiation. There are no precisely defined boundaries between the bands of the electromagnetic spectrum; rather they fade into each other like the bands in a rainbow. Radiation of each frequency and wavelength (or in each band) has a mix of properties of the two regions of the spectrum that bound it. For example, red light resembles infrared radiation, in that it can excite and add energy to some chemical bonds and indeed must do so to power the chemical mechanisms responsible for photosynthesis and the working of the visual system.
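The relations given in the Range section above (f = c/λ and E = hf) can be applied directly. A minimal sketch using rounded SI constants, with the 21 cm hydrogen line from the same section as a check; the other printed cases are illustrative choices, not figures from the article:

```python
C = 2.998e8     # speed of light in vacuum, m/s
H = 6.626e-34   # Planck constant, J*s
EV = 1.602e-19  # joules per electronvolt

def frequency_hz(wavelength_m: float) -> float:
    """f = c / lambda, for a vacuum wavelength."""
    return C / wavelength_m

def photon_energy_ev(wavelength_m: float) -> float:
    """E = h * f = h * c / lambda, expressed in electronvolts."""
    return H * frequency_hz(wavelength_m) / EV

print(f"{frequency_hz(0.2112):.3e} Hz")      # 21.12 cm hydrogen line -> ~1.42e9 Hz
print(f"{photon_energy_ev(550e-9):.2f} eV")  # 550 nm green light -> ~2.25 eV
print(f"{photon_energy_ev(1e-12):.2e} eV")   # 1 pm gamma ray -> ~1.2 MeV
```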
In atomic and nuclear physics, the distinction between X-rays and gamma rays is based on sources: the photons generated from nuclear decay or other nuclear and subnuclear/particle process are termed gamma rays, whereas X-rays are generated by electronic transitions involving energetically deep inner atomic electrons. Electronic transitions in muonic atoms are also said to produce X-rays. In astrophysics, energies below 100 keV are called X-rays and higher energies are gamma rays. The region of the spectrum where electromagnetic radiation is observed may differ from the region it was emitted in due to the relative velocity of the source and observer (the Doppler shift), relative gravitational potential (gravitational redshift), or expansion of the universe (cosmological redshift). For example, the cosmic microwave background, relic blackbody radiation from the era of recombination, started out at energies around 1 eV, but has since undergone enough cosmological redshift to put it into the microwave region of the spectrum for observers on Earth. Rationale for names Electromagnetic radiation interacts with matter in different ways across the spectrum. These types of interaction are so different that historically different names have been applied to different parts of the spectrum, as though these were different types of radiation. Thus, although these "different kinds" of electromagnetic radiation form a quantitatively continuous spectrum of frequencies and wavelengths, the spectrum remains divided for practical reasons arising from these qualitative interaction differences. Types of radiation Radio waves Radio waves are emitted and received by antennas, which consist of conductors such as metal rod resonators. In artificial generation of radio waves, an electronic device called a transmitter generates an alternating electric current which is applied to an antenna. The oscillating electrons in the antenna generate oscillating electric and magnetic fields that radiate away from the antenna as radio waves. In reception of radio waves, the oscillating electric and magnetic fields of a radio wave couple to the electrons in an antenna, pushing them back and forth, creating oscillating currents which are applied to a radio receiver. Earth's atmosphere is mainly transparent to radio waves, except for layers of charged particles in the ionosphere which can reflect certain frequencies. Radio waves are extremely widely used to transmit information across distances in radio communication systems such as radio broadcasting, television, two way radios, mobile phones, communication satellites, and wireless networking. In a radio communication system, a radio frequency current is modulated with an information-bearing signal in a transmitter by varying either the amplitude, frequency or phase, and applied to an antenna. The radio waves carry the information across space to a receiver, where they are received by an antenna and the information extracted by demodulation in the receiver. Radio waves are also used for navigation in systems like Global Positioning System (GPS) and navigational beacons, and locating distant objects in radiolocation and radar. They are also used for remote control, and for industrial heating. The use of the radio spectrum is strictly regulated by governments, coordinated by the International Telecommunication Union (ITU) which allocates frequencies to different users for different uses.
Microwaves Microwaves are radio waves of short wavelength, from about 10 centimeters to one millimeter, in the SHF and EHF frequency bands. Microwave energy is produced with klystron and magnetron tubes, and with solid state devices such as Gunn and IMPATT diodes. Although they are emitted and absorbed by short antennas, they are also absorbed by polar molecules, coupling to vibrational and rotational modes, resulting in bulk heating. Unlike higher frequency waves such as infrared and visible light which are absorbed mainly at surfaces, microwaves can penetrate into materials and deposit their energy below the surface. This effect is used to heat food in microwave ovens, and for industrial heating and medical diathermy. Microwaves are the main wavelengths used in radar, and are used for satellite communication, and wireless networking technologies such as Wi-Fi. The copper cables (transmission lines) which are used to carry lower-frequency radio waves to antennas have excessive power losses at microwave frequencies, and metal pipes called waveguides are used to carry them. Although at the low end of the band the atmosphere is mainly transparent, at the upper end of the band absorption of microwaves by atmospheric gases limits practical propagation distances to a few kilometers. Terahertz radiation or sub-millimeter radiation is a region of the spectrum from about 100 GHz to 30 terahertz (THz) between microwaves and far infrared which can be regarded as belonging to either band. Until recently, the range was rarely studied and few sources existed for microwave energy in the so-called terahertz gap, but applications such as imaging and communications are now appearing. Scientists are also looking to apply terahertz technology in the armed forces, where high-frequency waves might be directed at enemy troops to incapacitate their electronic equipment. Terahertz radiation is strongly absorbed by atmospheric gases, making this frequency range useless for long-distance communication. Infrared radiation The infrared part of the electromagnetic spectrum covers the range from roughly 300 GHz to 400 THz (1 mm – 750 nm). It can be divided into three parts: Far-infrared, from 300 GHz to 30 THz (1 mm – 10 μm). The lower part of this range may also be called microwaves or terahertz waves. This radiation is typically absorbed by so-called rotational modes in gas-phase molecules, by molecular motions in liquids, and by phonons in solids. The water in Earth's atmosphere absorbs so strongly in this range that it renders the atmosphere in effect opaque. However, there are certain wavelength ranges ("windows") within the opaque range that allow partial transmission, and can be used for astronomy. The wavelength range from approximately 200 μm up to a few mm is often referred to as Submillimetre astronomy, reserving far infrared for wavelengths below 200 μm. Mid-infrared, from 30 THz to 120 THz (10–2.5 μm). Hot objects (black-body radiators) can radiate strongly in this range, and human skin at normal body temperature radiates strongly at the lower end of this region. This radiation is absorbed by molecular vibrations, where the different atoms in a molecule vibrate around their equilibrium positions. This range is sometimes called the fingerprint region, since the mid-infrared absorption spectrum of a compound is very specific for that compound. Near-infrared, from 120 THz to 400 THz (2,500–750 nm). Physical processes that are relevant for this range are similar to those for visible light. 
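The band boundaries quoted above can be summarised in a small classification helper. This is only a sketch: the function name and return labels are invented here, and the cut-offs are the approximate figures from the text (real band edges are conventional and overlap).

```python
def classify(wavelength_m: float) -> str:
    """Return the approximate spectral band for a vacuum wavelength in metres."""
    if wavelength_m < 750e-9:
        return "shorter than near-infrared (visible light or beyond)"
    if wavelength_m < 2.5e-6:
        return "near-infrared"             # 750 nm - 2.5 um
    if wavelength_m < 10e-6:
        return "mid-infrared"              # 2.5 um - 10 um
    if wavelength_m < 1e-3:
        return "far-infrared / terahertz"  # 10 um - 1 mm
    if wavelength_m < 0.10:
        return "microwave"                 # 1 mm - 10 cm
    return "radio"                         # longer than ~10 cm

print(classify(0.03))   # a 3 cm radar wave -> microwave
print(classify(1e-5))   # 10 um, at the mid/far-infrared boundary -> far-infrared / terahertz
```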
The highest frequencies in this region can be detected directly by some types of photographic film, and by many types of solid state image sensors for infrared photography and videography. Visible light Above infrared in frequency comes visible light. The Sun emits its peak power in the visible region, although integrating the entire emission power spectrum through all wavelengths shows that the Sun emits slightly more infrared than visible light. By definition, visible light is the part of the EM spectrum the human eye is the most sensitive to. Visible light (and near-infrared light) is typically absorbed and emitted by electrons in molecules and atoms that move from one energy level to another. This action allows the chemical mechanisms that underlie human vision and plant photosynthesis. The light that excites the human visual system is a very small portion of the electromagnetic spectrum. A rainbow shows the optical (visible) part of the electromagnetic spectrum; infrared (if it could be seen) would be located just beyond the red side of the rainbow whilst ultraviolet would appear just beyond the opposite violet end. Electromagnetic radiation with a wavelength between 380 nm and 760 nm (400–790 terahertz) is detected by the human eye and perceived as visible light. Other wavelengths, especially near infrared (longer than 760 nm) and ultraviolet (shorter than 380 nm) are also sometimes referred to as light, especially when the visibility to humans is not relevant. White light is a combination of lights of different wavelengths in the visible spectrum. Passing white light through a prism splits it up into the several colours of light observed in the visible spectrum between 400 nm and 780 nm. If radiation having a frequency in the visible region of the EM spectrum reflects off an object, say, a bowl of fruit, and then strikes the eyes, this results in visual perception of the scene. The brain's visual system processes the multitude of reflected frequencies into different shades and hues, and through this insufficiently understood psychophysical phenomenon, most people perceive a bowl of fruit. At most wavelengths, however, the information carried by electromagnetic radiation is not directly detected by human senses. Natural sources produce EM radiation across the spectrum, and technology can also manipulate a broad range of wavelengths. Optical fiber transmits light that, although not necessarily in the visible part of the spectrum (it is usually infrared), can carry information. The modulation is similar to that used with radio waves. Ultraviolet radiation Next in frequency comes ultraviolet (UV). In frequency (and thus energy), UV rays sit between the violet end of the visible spectrum and the X-ray range. The UV wavelength spectrum ranges from 399 nm to 10 nm and is divided into 3 sections: UVA, UVB, and UVC. UV is the lowest energy range energetic enough to ionize atoms, separating electrons from them, and thus causing chemical reactions. UV, X-rays, and gamma rays are thus collectively called ionizing radiation; exposure to them can damage living tissue. UV can also cause substances to glow with visible light; this is called fluorescence. UV fluorescence is used by forensics to detect any evidence like blood and urine, that is produced by a crime scene. Also UV fluorescence is used to detect counterfeit money and IDs, as they are laced with material that can glow under UV. 
At the middle range of UV, UV rays cannot ionize but can break chemical bonds, making molecules unusually reactive. Sunburn, for example, is caused by the disruptive effects of middle-range UV radiation on skin cells; such damage is also the main cause of skin cancer. UV rays in the middle range can irreparably damage the complex DNA molecules in cells, producing thymine dimers, which makes this range a very potent mutagen. Because of the skin damage and skin cancer caused by UV, a sunscreen industry developed to combat UV damage. Mid UV wavelengths are called UVB, and UVB lights such as germicidal lamps are used to kill germs and also to sterilize water. The Sun emits UV radiation (about 10% of its total power), including extremely short wavelength UV that could potentially destroy most life on land (ocean water would provide some protection for life there). However, most of the Sun's damaging UV wavelengths are absorbed by the atmosphere before they reach the surface. The higher energy (shortest wavelength) ranges of UV (called "vacuum UV") are absorbed by nitrogen and, at longer wavelengths, by simple diatomic oxygen in the air. Most of the UV in the mid-range of energy is blocked by the ozone layer, which absorbs strongly in the important 200–315 nm range, the lower energy part of which is too long for ordinary dioxygen in air to absorb. This leaves less than 3% of sunlight at sea level in UV, with all of this remainder at the lower energies. The remainder is UV-A, along with some UV-B. The very lowest energy range of UV, between 315 nm and visible light (called UV-A), is not blocked well by the atmosphere, but does not cause sunburn and does less biological damage. However, it is not harmless and does create oxygen radicals, mutations and skin damage. X-rays After UV come X-rays, which, like the upper ranges of UV, are also ionizing. However, due to their higher energies, X-rays can also interact with matter by means of the Compton effect. Hard X-rays have shorter wavelengths than soft X-rays, and as they can pass through many substances with little absorption, they can be used to 'see through' objects with 'thicknesses' less than that equivalent to a few meters of water. One notable use is diagnostic X-ray imaging in medicine (a process known as radiography). X-rays are useful as probes in high-energy physics. In astronomy, the accretion disks around neutron stars and black holes emit X-rays, enabling studies of these phenomena. X-rays are also emitted by stellar coronae and are strongly emitted by some types of nebulae. However, X-ray telescopes must be placed outside the Earth's atmosphere to see astronomical X-rays, since the great depth of the atmosphere of Earth is opaque to X-rays (with an areal density of 1000 g/cm2, equivalent to 10 meters thickness of water). This is an amount sufficient to block almost all astronomical X-rays (and also astronomical gamma rays; see below). Gamma rays After hard X-rays come gamma rays, which were discovered by Paul Ulrich Villard in 1900. These are the most energetic photons, having no defined lower limit to their wavelength. In astronomy they are valuable for studying high-energy objects or regions; however, as with X-rays, this can only be done with telescopes outside the Earth's atmosphere. Gamma rays are used experimentally by physicists for their penetrating ability and are produced by a number of radioisotopes. They are used for irradiation of foods and seeds for sterilization, and in medicine they are occasionally used in radiation cancer therapy.
More commonly, gamma rays are used for diagnostic imaging in nuclear medicine, an example being PET scans. The wavelength of gamma rays can be measured with high accuracy through the effects of Compton scattering. See also Notes and references External links Australian Radiofrequency Spectrum Allocations Chart (from Australian Communications and Media Authority) Canadian Table of Frequency Allocations (from Industry Canada) U.S. Frequency Allocation Chart – Covering the range 3 kHz to 300 GHz (from Department of Commerce) UK frequency allocation table (from Ofcom, which inherited the Radiocommunications Agency's duties, pdf format) Flash EM Spectrum Presentation / Tool – Very complete and customizable. Poster "Electromagnetic Radiation Spectrum" (992 kB) Waves
Electromagnetic spectrum
[ "Physics" ]
4,876
[ "Physical phenomena", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Waves", "Motion (physics)" ]
10,136
https://en.wikipedia.org/wiki/Expert%20system
In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural programming code. Expert systems were among the first truly successful forms of AI software. They were created in the 1970s and then proliferated in the 1980s, being then widely regarded as the future of AI — before the advent of successful artificial neural networks. An expert system is divided into two subsystems: 1) a knowledge base, which represents facts and rules; and 2) an inference engine, which applies the rules to the known facts to deduce new facts, and can include explaining and debugging abilities. History Early development Soon after the dawn of modern computers in the late 1940s and early 1950s, researchers started realizing the immense potential these machines had for modern society. One of the first challenges was to make such machines able to “think” like humans – in particular, making these machines able to make important decisions the way humans do. The medical–healthcare field presented the tantalizing challenge of enabling these machines to make medical diagnostic decisions. Thus, in the late 1950s, right after the information age had fully arrived, researchers started experimenting with the prospect of using computer technology to emulate human decision making. For example, biomedical researchers started creating computer-aided systems for diagnostic applications in medicine and biology. These early diagnostic systems used patients’ symptoms and laboratory test results as inputs to generate a diagnostic outcome. These systems were often described as the early forms of expert systems. However, researchers realized that there were significant limits when using traditional methods such as flow charts, statistical pattern matching, or probability theory. Formal introduction and later developments This previous situation gradually led to the development of expert systems, which used knowledge-based approaches. These expert systems in medicine were the MYCIN expert system, the Internist-I expert system and later, in the middle of the 1980s, the CADUCEUS. Expert systems were formally introduced around 1965 by the Stanford Heuristic Programming Project led by Edward Feigenbaum, who is sometimes termed the "father of expert systems"; other key early contributors were Bruce Buchanan and Randall Davis. The Stanford researchers tried to identify domains where expertise was highly valued and complex, such as diagnosing infectious diseases (Mycin) and identifying unknown organic molecules (Dendral). The idea that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use" – as Feigenbaum said – was at the time a significant step forward, since the past research had been focused on heuristic computational methods, culminating in attempts to develop very general-purpose problem solvers (foremostly the conjunct work of Allen Newell and Herbert Simon). Expert systems became some of the first truly successful forms of artificial intelligence (AI) software. Research on expert systems was also active in Europe. 
In the US, the focus tended to be on the use of production rule systems, first on systems hard coded on top of Lisp programming environments and then on expert system shells developed by vendors such as Intellicorp. In Europe, research focused more on systems and expert systems shells developed in Prolog. The advantage of Prolog systems was that they employed a form of rule-based programming that was based on formal logic. One such early expert system shell based on Prolog was APES. One of the first use cases of Prolog and APES was in the legal area namely, the encoding of a large portion of the British Nationality Act. Lance Elliot wrote: "The British Nationality Act was passed in 1981 and shortly thereafter was used as a means of showcasing the efficacy of using Artificial Intelligence (AI) techniques and technologies, doing so to explore how the at-the-time newly enacted statutory law might be encoded into a computerized logic-based formalization. A now oft-cited research paper entitled “The British Nationality Act as a Logic Program” was published in 1986 and subsequently became a hallmark for subsequent work in AI and the law." In the 1980s, expert systems proliferated. Universities offered expert system courses and two-thirds of the Fortune 500 companies applied the technology in daily business activities. Interest was international with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe. In 1981, the first IBM PC, with the PC DOS operating system, was introduced. The imbalance between the high affordability of the relatively powerful chips in the PC, compared to the much more expensive cost of processing power in the mainframes that dominated the corporate IT world at the time, created a new type of architecture for corporate computing, termed the client–server model. Calculations and reasoning could be performed at a fraction of the price of a mainframe using a PC. This model also enabled business units to bypass corporate IT departments and directly build their own applications. As a result, client-server had a tremendous impact on the expert systems market. Expert systems were already outliers in much of the business world, requiring new skills that many IT departments did not have and were not eager to develop. They were a natural fit for new PC-based shells that promised to put application development into the hands of end users and experts. Until then, the main development environment for expert systems had been high end Lisp machines from Xerox, Symbolics, and Texas Instruments. With the rise of the PC and client-server computing, vendors such as Intellicorp and Inference Corporation shifted their priorities to developing PC-based tools. Also, new vendors, often financed by venture capital (such as Aion Corporation, Neuron Data, Exsys, VP-Expert, and many others), started appearing regularly. The first expert system to be used in a design capacity for a large-scale product was the Synthesis of Integral Design (SID) software program, developed in 1982. Written in Lisp, SID generated 93% of the VAX 9000 CPU logic gates. Input to the software was a set of rules created by several expert logic designers. SID expanded the rules and generated software logic synthesis routines many times the size of the rules themselves. Surprisingly, the combination of these rules resulted in an overall design that exceeded the capabilities of the experts themselves, and in many cases out-performed the human counterparts. 
While some rules contradicted others, top-level control parameters for speed and area provided the tie-breaker. The program was highly controversial but used nevertheless due to project budget constraints. It was terminated by logic designers after the VAX 9000 project completion. During the years before the middle of the 1970s, the expectations of what expert systems can accomplish in many fields tended to be extremely optimistic. At the start of these early studies, researchers were hoping to develop entirely automatic (i.e., completely computerized) expert systems. The expectations of people of what computers can do were frequently too idealistic. This situation radically changed after Richard M. Karp published his breakthrough paper: “Reducibility among Combinatorial Problems” in the early 1970s. Thanks to Karp's work, together with other scholars, like Hubert L. Dreyfus, it became clear that there are certain limits and possibilities when one designs computer algorithms. His findings describe what computers can do and what they cannot do. Many of the computational problems related to this type of expert systems have certain pragmatic limits. These findings laid down the groundwork that led to the next developments in the field. In the 1990s and beyond, the term expert system and the idea of a standalone AI system mostly dropped from the IT lexicon. There are two interpretations of this. One is that "expert systems failed": the IT world moved on because expert systems did not deliver on their over hyped promise. The other is the mirror opposite, that expert systems were simply victims of their success: as IT professionals grasped concepts such as rule engines, such tools migrated from being standalone tools for developing special purpose expert systems, to being one of many standard tools. Other researchers suggest that Expert Systems caused inter-company power struggles when the IT organization lost its exclusivity in software modifications to users or Knowledge Engineers. In the first decade of the 2000s, there was a "resurrection" for the technology, while using the term rule-based systems, with significant success stories and adoption. Many of the leading major business application suite vendors (such as SAP, Siebel, and Oracle) integrated expert system abilities into their suite of products as a way to specify business logic. Rule engines are no longer simply for defining the rules an expert would use but for any type of complex, volatile, and critical business logic; they often go hand in hand with business process automation and integration environments. Current approaches to expert systems The limits of prior type of expert systems prompted researchers to develop new types of approaches. They have developed more efficient, flexible, and powerful methods to simulate the human decision-making process. Some of the approaches that researchers have developed are based on new methods of artificial intelligence (AI), and in particular in machine learning and data mining approaches with a feedback mechanism. Recurrent neural networks often take advantage of such mechanisms. Related is the discussion on the disadvantages section. Modern systems can incorporate new knowledge more easily and thus update themselves easily. Such systems can generalize from existing knowledge better and deal with vast amounts of complex data. Related is the subject of big data here. Sometimes these type of expert systems are called "intelligent systems." 
More recently, it can be argued that expert systems have moved into the area of business rules and business rules management systems. Software architecture An expert system is an example of a knowledge-based system. Expert systems were the first commercial systems to use a knowledge-based architecture. In general, an expert system includes the following components: a knowledge base, an inference engine, an explanation facility, a knowledge acquisition facility, and a user interface. The knowledge base represents facts about the world. In early expert systems such as Mycin and Dendral, these facts were represented mainly as flat assertions about variables. In later expert systems developed with commercial shells, the knowledge base took on more structure and used concepts from object-oriented programming. The world was represented as classes, subclasses, and instances, and assertions were replaced by values of object instances. The rules worked by querying and asserting values of the objects. The inference engine is an automated reasoning system that evaluates the current state of the knowledge base, applies relevant rules, and then asserts new knowledge into the knowledge base. The inference engine may also include abilities for explanation, so that it can explain to a user the chain of reasoning used to arrive at a particular conclusion by tracing back over the firing of rules that resulted in the assertion. There are mainly two modes for an inference engine: forward chaining and backward chaining. The different approaches are dictated by whether the inference engine is being driven by the antecedent (left hand side) or the consequent (right hand side) of the rule. In forward chaining an antecedent fires and asserts the consequent. For example, consider the following rule: R1: Man(x) => Mortal(x), that is, every man is mortal. A simple example of forward chaining would be to assert Man(Socrates) to the system and then trigger the inference engine. It would match R1 and assert Mortal(Socrates) into the knowledge base. Backward chaining is a bit less straightforward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. So if the system was trying to determine if Mortal(Socrates) is true, it would find R1 and query the knowledge base to see if Man(Socrates) is true. One of the early innovations of expert systems shells was to integrate inference engines with a user interface. This could be especially powerful with backward chaining. If the system needs to know a particular fact but does not, then it can simply generate an input screen and ask the user if the information is known. So in this example, it could use R1 to ask the user if Socrates was a Man and then use that new information accordingly. The use of rules to explicitly represent knowledge also enabled explanation abilities. In the simple example above, if the system had used R1 to assert that Socrates was Mortal and a user wished to understand why Socrates was mortal, they could query the system and the system would look back at the rules which fired to cause the assertion and present those rules to the user as an explanation. In English, if the user asked "Why is Socrates Mortal?" the system would reply "Because all men are mortal and Socrates is a man". A significant area for research was the generation of explanations from the knowledge base in natural English rather than simply by showing the more formal but less intuitive rules.
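The forward- and backward-chaining behaviour just described can be sketched in a few lines of Python. This is a toy illustration built around rule R1 and the Socrates example above; the data representation and function names are invented here and do not reflect any particular expert system shell.

```python
# A rule maps an antecedent predicate to a consequent predicate.
RULES = [("Man", "Mortal")]          # R1: if Man(x) then Mortal(x)

def forward_chain(facts: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Repeatedly fire rules whose antecedents match known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in RULES:
            for pred, individual in list(facts):
                if pred == antecedent and (consequent, individual) not in facts:
                    facts.add((consequent, individual))   # assert a new fact
                    changed = True
    return facts

def backward_chain(goal: tuple[str, str], facts: set[tuple[str, str]]) -> bool:
    """Work backwards from a goal, asking whether some antecedent holds."""
    if goal in facts:
        return True
    pred, individual = goal
    return any(
        backward_chain((antecedent, individual), facts)
        for antecedent, consequent in RULES
        if consequent == pred
    )

facts = {("Man", "Socrates")}
print(forward_chain(facts))                            # adds ("Mortal", "Socrates")
print(backward_chain(("Mortal", "Socrates"), facts))   # True, via R1
```

Forward chaining starts from the known facts and keeps firing rules until nothing new can be asserted; backward chaining starts from the goal Mortal(Socrates) and works back to ask whether Man(Socrates) is already known.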
As expert systems evolved, many new techniques were incorporated into various types of inference engines. Some of the most important of these were: Truth maintenance. These systems record the dependencies in a knowledge-base so that when facts are altered, dependent knowledge can be altered accordingly. For example, if the system learns that Socrates is no longer known to be a man it will revoke the assertion that Socrates is mortal. Hypothetical reasoning. In this, the knowledge base can be divided up into many possible views, a.k.a. worlds. This allows the inference engine to explore multiple possibilities in parallel. For example, the system may want to explore the consequences of both assertions, what will be true if Socrates is a Man and what will be true if he is not? Uncertainty systems. One of the first extensions of simply using rules to represent knowledge was also to associate a probability with each rule. So, not to assert that Socrates is mortal, but to assert Socrates may be mortal with some probability value. Simple probabilities were extended in some systems with sophisticated mechanisms for uncertain reasoning, such as fuzzy logic, and combination of probabilities. Ontology classification. With the addition of object classes to the knowledge base, a new type of reasoning was possible. Along with reasoning simply about object values, the system could also reason about object structures. In this simple example, Man can represent an object class and R1 can be redefined as a rule that defines the class of all men. These types of special purpose inference engines are termed classifiers. Although they were not highly used in expert systems, classifiers are very powerful for unstructured volatile domains, and are a key technology for the Internet and the emerging Semantic Web. Advantages The goal of knowledge-based systems is to make the critical information required for the system to work explicit rather than implicit. In a traditional computer program, the logic is embedded in code that can typically only be reviewed by an IT specialist. With an expert system, the goal was to specify the rules in a format that was intuitive and easily understood, reviewed, and even edited by domain experts rather than IT experts. The benefits of this explicit knowledge representation were rapid development and ease of maintenance. Ease of maintenance is the most obvious benefit. This was achieved in two ways. First, by removing the need to write conventional code, many of the normal problems that can be caused by even small changes to a system could be avoided with expert systems. Essentially, the logical flow of the program (at least at the highest level) was simply a given for the system, simply invoke the inference engine. This also was a reason for the second benefit: rapid prototyping. With an expert system shell it was possible to enter a few rules and have a prototype developed in days rather than the months or year typically associated with complex IT projects. A claim for expert system shells that was often made was that they removed the need for trained programmers and that experts could develop systems themselves. In reality, this was seldom if ever true. While the rules for an expert system were more comprehensible than typical computer code, they still had a formal syntax where a misplaced comma or other character could cause havoc as with any other computer language. 
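As a small illustration of the uncertainty systems mentioned above, the sketch below attaches a certainty value to each rule instead of treating conclusions as simply true or false. The second rule and all numeric values are hypothetical, and the combination formula cf1 + cf2*(1 - cf1) is the classic way MYCIN-style certainty factors merge two positive pieces of evidence; real uncertainty mechanisms (fuzzy logic, Bayesian approaches) differ in detail.

```python
RULES = [
    # (antecedent, consequent, rule certainty)
    ("Man", "Mortal", 0.99),
    ("Philosopher", "Mortal", 0.90),   # hypothetical second rule, for illustration
]

def infer(facts: dict[str, float]) -> dict[str, float]:
    """facts maps predicate -> certainty for a single individual."""
    conclusions: dict[str, float] = {}
    for antecedent, consequent, rule_cf in RULES:
        if antecedent in facts:
            cf = facts[antecedent] * rule_cf          # support from this rule
            prior = conclusions.get(consequent, 0.0)
            # combine independent positive supports
            conclusions[consequent] = prior + cf * (1.0 - prior)
    return conclusions

print(infer({"Man": 1.0}))                       # {'Mortal': 0.99}
print(infer({"Man": 1.0, "Philosopher": 0.8}))   # Mortal supported by both rules
```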
Also, as expert systems moved from prototypes in the lab to deployment in the business world, issues of integration and maintenance became far more critical. Inevitably, demands to integrate with, and take advantage of, large legacy databases and systems arose. To accomplish this, integration required the same skills as any other type of system. Summing up the benefits of using expert systems, the following can be highlighted: Increased availability and reliability: Expertise can be accessed on any computer hardware and the system always completes responses on time. Multiple expertise: Several expert systems can be run simultaneously to solve a problem and gain a higher level of expertise than a human expert. Explanation: Expert systems always describe how the problem was solved. Fast response: Expert systems are fast and able to solve problems in real time. Reduced cost: The cost of expertise for each user is significantly reduced. Disadvantages The most common disadvantage cited for expert systems in the academic literature is the knowledge acquisition problem. Obtaining the time of domain experts for any software application is always difficult, but for expert systems it was especially difficult because the experts were by definition highly valued and in constant demand by the organization. As a result of this problem, a great deal of research in the later years of expert systems was focused on tools for knowledge acquisition, to help automate the process of designing, debugging, and maintaining rules defined by experts. However, when looking at the life-cycle of expert systems in actual use, other problems – essentially the same problems as those of any other large system – seem at least as critical as knowledge acquisition: integration, access to large databases, and performance. Performance could be especially problematic because early expert systems were built using tools (such as earlier Lisp versions) that interpreted code expressions without first compiling them. This provided a powerful development environment, but with the drawback that it was virtually impossible to match the efficiency of the fastest compiled languages (such as C). System and database integration were difficult for early expert systems because the tools were mostly in languages and platforms that were neither familiar to nor welcome in most corporate IT environments – programming languages such as Lisp and Prolog, and hardware platforms such as Lisp machines and personal computers. As a result, much effort in the later stages of expert system tool development was focused on integrating with legacy environments such as COBOL and large database systems, and on porting to more standard platforms. These issues were resolved mainly by the client–server paradigm shift, as PCs were gradually accepted in the IT environment as a legitimate platform for serious business system development and as affordable minicomputer servers provided the processing power needed for AI applications. Another major challenge of expert systems emerges when the size of the knowledge base increases. This causes the processing complexity to increase. For instance, when an expert system with 100 million rules was envisioned as the ultimate expert system, it became obvious that such a system would be too complex and would face too many computational problems. An inference engine would have to be able to process huge numbers of rules to reach a decision.
How to verify that decision rules are consistent with each other is also a challenge when there are too many rules. Usually such a problem leads to a satisfiability (SAT) formulation. This is the well-known NP-complete Boolean satisfiability problem. If we assume only binary variables, say n of them, then the corresponding search space is of size 2^n. Thus, the search space can grow exponentially; a small enumeration example at the end of this section illustrates this growth. There are also questions on how to prioritize the use of the rules to operate more efficiently, or how to resolve ambiguities (for instance, if there are too many else-if sub-structures within one rule) and so on. Other problems are related to the overfitting and overgeneralization effects when using known facts and trying to generalize to other cases not described explicitly in the knowledge base. Such problems exist with methods that employ machine learning approaches too. Another problem related to the knowledge base is how to make updates of its knowledge quickly and effectively. Also, how to add a new piece of knowledge (i.e., where to add it among many rules) is challenging. Modern approaches that rely on machine learning methods are easier in this regard. Because of the above challenges, it became clear that new approaches to AI were required instead of rule-based technologies. These new approaches are based on the use of machine learning techniques, along with the use of feedback mechanisms. The key challenges faced by expert systems in medicine (if one considers computer-aided diagnostic systems as modern expert systems), and perhaps in other application domains, include issues related to aspects such as: big data, existing regulations, healthcare practice, various algorithmic issues, and system assessment. Finally, the following disadvantages of using expert systems can be summarized: Expert systems have superficial knowledge, and a simple task can potentially become computationally expensive. Expert systems require knowledge engineers to input the data, and data acquisition is very hard. The expert system may choose the most inappropriate method for solving a particular problem. Problems of ethics in the use of any form of AI are very relevant at present. It is a closed world with specific knowledge, in which there is no deep perception of concepts and their interrelationships until an expert provides them. Applications Hayes-Roth divides expert systems applications into 10 categories illustrated in the following table. The example applications were not in the original Hayes-Roth table, and some of them arose well afterward. Any application that is not footnoted is described in the Hayes-Roth book. Also, while these categories provide an intuitive framework to describe the space of expert systems applications, they are not rigid categories, and in some cases an application may show traits of more than one category. Hearsay was an early attempt at solving voice recognition through an expert systems approach. For the most part this category of expert systems was not all that successful. Hearsay and all interpretation systems are essentially pattern recognition systems—looking for patterns in noisy data. In the case of Hearsay, this meant recognizing phonemes in an audio stream. Other early examples were analyzing sonar data to detect Russian submarines. These kinds of systems proved much more amenable to a neural network AI solution than a rule-based approach. CADUCEUS and MYCIN were medical diagnosis systems.
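To make the consistency-checking point above concrete, the following sketch checks whether a set of rules-as-clauses can all hold at once by brute-force enumeration; with n binary variables it inspects up to 2^n assignments, which is exactly the exponential growth described earlier. The clauses and variable count are arbitrary toy values invented for illustration.

```python
from itertools import product

def consistent(clauses, n):
    """Return True if some assignment of n boolean variables satisfies every
    clause; a clause is a list of (variable_index, required_value) pairs,
    of which at least one must hold."""
    for assignment in product([False, True], repeat=n):   # 2**n candidates
        if all(any(assignment[i] == v for i, v in clause) for clause in clauses):
            return True
    return False

# Three toy clauses over x0..x2: (x0 or not x1), (x1 or x2), (not x0 or x2)
clauses = [[(0, True), (1, False)], [(1, True), (2, True)], [(0, False), (2, True)]]
print(consistent(clauses, 3))   # True, but the loop may examine up to 2**3 = 8 assignments
```

Doubling the number of variables doubles nothing about the code but squares the size of the search space, which is why practical systems rely on heuristics and SAT solvers rather than enumeration.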
The user describes their symptoms to the computer as they would to a doctor and the computer returns a medical diagnosis. Dendral was a tool to study hypothesis formation in the identification of organic molecules. The general problem it solved—designing a solution given a set of constraints—was one of the most successful areas for early expert systems applied to business domains such as salespeople configuring Digital Equipment Corporation (DEC) VAX computers and mortgage loan application development. SMH.PAL is an expert system for the assessment of students with multiple disabilities. GARVAN-ES1 was a medical expert system, developed at the Garvan Institute of Medical Research, that provided automated clinical diagnostic comments on endocrine reports from a pathology laboratory. It was one of the first medical expert systems to go into routine clinical use internationally and the first expert system to be used for diagnosis daily in Australia. The system was written in "C" and ran on a PDP-11 in 64K of memory. It had 661 rules that were compiled; not interpreted. Mistral is an expert system to monitor dam safety, developed in the 1990s by Ismes (Italy). It gets data from an automatic monitoring system and performs a diagnosis of the state of the dam. Its first copy, installed in 1992 on the Ridracoli Dam (Italy), is still operational 24/7/365. It has been installed on several dams in Italy and abroad (e.g., Itaipu Dam in Brazil), and on landslide sites under the name of Eydenet, and on monuments under the name of Kaleidos. Mistral is a registered trade mark of CESI. See also AI winter CLIPS Constraint logic programming Constraint satisfaction Knowledge engineering Learning classifier system Rule-based machine learning References Works cited External links Expert System tutorial on Code Project Decision support systems Information systems
Expert system
[ "Technology" ]
4,913
[ "Information systems", "Expert systems", "Information technology", "Decision support systems" ]
10,174
https://en.wikipedia.org/wiki/Empiricism
In philosophy, empiricism is an epistemological view which holds that true knowledge or justification comes only or primarily from sensory experience and empirical evidence. It is one of several competing views within epistemology, along with rationalism and skepticism. Empiricists argue that empiricism is a more reliable method of finding the truth than purely using logical reasoning, because humans have cognitive biases and limitations which lead to errors of judgement. Empiricism emphasizes the central role of empirical evidence in the formation of ideas, rather than innate ideas or traditions. Empiricists may argue that traditions (or customs) arise due to relations of previous sensory experiences. Historically, empiricism was associated with the "blank slate" concept (tabula rasa), according to which the human mind is "blank" at birth and develops its thoughts only through later experience. Empiricism in the philosophy of science emphasizes evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation. Empiricism, often used by natural scientists, believes that "knowledge is based on experience" and that "knowledge is tentative and probabilistic, subject to continued revision and falsification". Empirical research, including experiments and validated measurement tools, guides the scientific method. Etymology The English term empirical derives from the Ancient Greek word ἐμπειρία, empeiria, which is cognate with and translates to the Latin experientia, from which the words experience and experiment are derived. Background A central concept in science and the scientific method is that conclusions must be empirically based on the evidence of the senses. Both natural and social sciences use working hypotheses that are testable by observation and experiment. The term semi-empirical is sometimes used to describe theoretical methods that make use of basic axioms, established scientific laws, and previous experimental results to engage in reasoned model building and theoretical inquiry. Philosophical empiricists hold no knowledge to be properly inferred or deduced unless it is derived from one's sense-based experience. In epistemology (theory of knowledge) empiricism is typically contrasted with rationalism, which holds that knowledge may be derived from reason independently of the senses, and in the philosophy of mind it is often contrasted with innatism, which holds that some knowledge and ideas are already present in the mind at birth. However, many Enlightenment rationalists and empiricists still made concessions to each other. For example, the empiricist John Locke admitted that some knowledge (e.g. knowledge of God's existence) could be arrived at through intuition and reasoning alone. Similarly, Robert Boyle, a prominent advocate of the experimental method, held that we also have innate ideas. At the same time, the main continental rationalists (Descartes, Spinoza, and Leibniz) were also advocates of the empirical "scientific method". History Early empiricism Between 600 and 200 BCE, the Vaisheshika school of Hindu philosophy, founded by the ancient Indian philosopher Kanada, accepted perception and inference as the only two reliable sources of knowledge. This is enumerated in his work Vaiśeṣika Sūtra. 
The Charvaka school held similar beliefs, asserting that perception is the only reliable source of knowledge while inference obtains knowledge with uncertainty. The earliest Western proto-empiricists were the empiric school of ancient Greek medical practitioners, founded in 330 BCE. Its members rejected the doctrines of the dogmatic school, preferring to rely on the observation of phantasiai (i.e., phenomena, the appearances). The Empiric school was closely allied with the Pyrrhonist school of philosophy, which made the philosophical case for their proto-empiricism. The notion of tabula rasa ("clean slate" or "blank tablet") connotes a view of the mind as an originally blank or empty recorder (Locke used the words "white paper") on which experience leaves marks. This denies that humans have innate ideas. The notion dates back to Aristotle. Aristotle's explanation of how this was possible was not strictly empiricist in a modern sense, but rather based on his theory of potentiality and actuality, and experience of sense perceptions still requires the help of the active nous. These notions contrasted with Platonic notions of the human mind as an entity that pre-existed somewhere in the heavens, before being sent down to join a body on Earth (see Plato's Phaedo and Apology, as well as others). Aristotle was considered to give a more important position to sense perception than Plato, and commentators in the Middle Ages summarized one of his positions as "nihil in intellectu nisi prius fuerit in sensu" (Latin for "nothing in the intellect without first being in the senses"). This idea was later developed in ancient philosophy by the Stoic school, from about 330 BCE. Stoic epistemology generally emphasizes that the mind starts blank, but acquires knowledge as the outside world is impressed upon it. The doxographer Aetius summarizes this view as "When a man is born, the Stoics say, he has the commanding part of his soul like a sheet of paper ready for writing upon." Islamic Golden Age and Pre-Renaissance (5th to 15th centuries CE) During the Middle Ages (from the 5th to the 15th century CE) Aristotle's theory of tabula rasa was developed by Islamic philosophers starting with Al Farabi, developing into an elaborate theory by Avicenna (c. 980 – 1037 CE) and demonstrated as a thought experiment by Ibn Tufail. For Avicenna (Ibn Sina), for example, the tabula rasa is a pure potentiality that is actualized through education, and knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts" developed through a "syllogistic method of reasoning in which observations lead to propositional statements which when compounded lead to further abstract concepts". The intellect itself develops from a material intellect (al-'aql al-hayulani), which is a potentiality "that can acquire knowledge to the active intellect (al-'aql al-fa'il), the state of the human intellect in conjunction with the perfect source of knowledge". So the immaterial "active intellect", separate from any individual person, is still essential for understanding to occur.
In the 12th century CE, the Andalusian Muslim philosopher and novelist Abu Bakr Ibn Tufail (known as "Abubacer" or "Ebu Tophail" in the West) included the theory of tabula rasa as a thought experiment in his Arabic philosophical novel, Hayy ibn Yaqdhan in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. The Latin translation of his philosophical novel, entitled Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in An Essay Concerning Human Understanding. A similar Islamic theological novel, Theologus Autodidactus, was written by the Arab theologian and physician Ibn al-Nafis in the 13th century. It also dealt with the theme of empiricism through the story of a feral child on a desert island, but departed from its predecessor by depicting the development of the protagonist's mind through contact with society rather than in isolation from society. During the 13th century Thomas Aquinas adopted into scholasticism the Aristotelian position that the senses are essential to the mind. Bonaventure (1221–1274), one of Aquinas' strongest intellectual opponents, offered some of the strongest arguments in favour of the Platonic idea of the mind. Renaissance Italy In the late renaissance various writers began to question the medieval and classical understanding of knowledge acquisition in a more fundamental way. In political and historical writing Niccolò Machiavelli and his friend Francesco Guicciardini initiated a new realistic style of writing. Machiavelli in particular was scornful of writers on politics who judged everything in comparison to mental ideals and demanded that people should study the "effectual truth" instead. Their contemporary, Leonardo da Vinci (1452–1519) said, "If you find from your own experience that something is a fact and it contradicts what some authority has written down, then you must abandon the authority and base your reasoning on your own findings." Significantly, an empirical metaphysical system was developed by the Italian philosopher Bernardino Telesio which had an enormous impact on the development of later Italian thinkers, including Telesio's students Antonio Persio and Sertorio Quattromani, his contemporaries Thomas Campanella and Giordano Bruno, and later British philosophers such as Francis Bacon, who regarded Telesio as "the first of the moderns". Telesio's influence can also be seen on the French philosophers René Descartes and Pierre Gassendi. The decidedly anti-Aristotelian and anti-clerical music theorist Vincenzo Galilei (c. 1520 – 1591), father of Galileo and the inventor of monody, made use of the method in successfully solving musical problems, firstly, of tuning such as the relationship of pitch to string tension and mass in stringed instruments, and to volume of air in wind instruments; and secondly to composition, by his various suggestions to composers in his Dialogo della musica antica e moderna (Florence, 1581). The Italian word he used for "experiment" was esperimento. It is known that he was the essential pedagogical influence upon the young Galileo, his eldest son (cf. Coelho, ed. Music and Science in the Age of Galileo Galilei), arguably one of the most influential empiricists in history. 
Vincenzo, through his tuning research, found the underlying truth at the heart of the misunderstood myth of 'Pythagoras' hammers' (the square of the numbers concerned yielded those musical intervals, not the actual numbers, as believed), and through this and other discoveries that demonstrated the fallibility of traditional authorities, a radically empirical attitude developed, passed on to Galileo, which regarded "experience and demonstration" as the sine qua non of valid rational enquiry. British empiricism British empiricism, a retrospective characterization, emerged during the 17th century as an approach to early modern philosophy and modern science. Although both integral to this overarching transition, Francis Bacon, in England, first advocated for empiricism in 1620, whereas René Descartes, in France, laid the main groundwork upholding rationalism around 1640. (Bacon's natural philosophy was influenced by Italian philosopher Bernardino Telesio and by Swiss physician Paracelsus.) Contributing later in the 17th century, Thomas Hobbes and Baruch Spinoza are retrospectively identified likewise as an empiricist and a rationalist, respectively. In the Enlightenment of the late 17th century, John Locke in England, and in the 18th century, both George Berkeley in Ireland and David Hume in Scotland, all became leading exponents of empiricism, hence the dominance of empiricism in British philosophy. The distinction between rationalism and empiricism was not formally made until Immanuel Kant, in Germany, around 1780, who sought to merge the two views. In response to the early-to-mid-17th-century "continental rationalism", John Locke (1632–1704) proposed in An Essay Concerning Human Understanding (1689) a very influential view wherein the only knowledge humans can have is a posteriori, i.e., based upon experience. Locke is famously attributed with holding the proposition that the human mind is a tabula rasa, a "blank tablet", in Locke's words "white paper", on which the experiences derived from sense impressions as a person's life proceeds are written. There are two sources of our ideas: sensation and reflection. In both cases, a distinction is made between simple and complex ideas. The former are unanalysable, and are broken down into primary and secondary qualities. Primary qualities are essential for the object in question to be what it is. Without specific primary qualities, an object would not be what it is. For example, an apple is an apple because of the arrangement of its atomic structure. If an apple were structured differently, it would cease to be an apple. Secondary qualities are the sensory information we can perceive from its primary qualities. For example, an apple can be perceived in various colours, sizes, and textures but it is still identified as an apple. Therefore, its primary qualities dictate what the object essentially is, while its secondary qualities define its attributes. Complex ideas combine simple ones, and divide into substances, modes, and relations. According to Locke, our knowledge of things is a perception of ideas that are in accordance or discordance with each other, which is very different from the quest for certainty of Descartes. A generation later, the Irish Anglican bishop George Berkeley (1685–1753) determined that Locke's view immediately opened a door that would lead to eventual atheism. 
In response to Locke, he put forth in his Treatise Concerning the Principles of Human Knowledge (1710) an important challenge to empiricism in which things only exist either as a result of their being perceived, or by virtue of the fact that they are an entity doing the perceiving. (For Berkeley, God fills in for humans by doing the perceiving whenever humans are not around to do it.) In his text Alciphron, Berkeley maintained that any order humans may see in nature is the language or handwriting of God. Berkeley's approach to empiricism would later come to be called subjective idealism. Scottish philosopher David Hume (1711–1776) responded to Berkeley's criticisms of Locke, as well as other differences between early modern philosophers, and moved empiricism to a new level of skepticism. Hume argued in keeping with the empiricist view that all knowledge derives from sense experience, but he accepted that this has implications not normally acceptable to philosophers. He wrote for example, "Locke divides all arguments into demonstrative and probable. On this view, we must say that it is only probable that all men must die or that the sun will rise to-morrow, because neither of these can be demonstrated. But to conform our language more to common use, we ought to divide arguments into demonstrations, proofs, and probabilities—by ‘proofs’ meaning arguments from experience that leave no room for doubt or opposition." And, Hume divided all of human knowledge into two categories: relations of ideas and matters of fact (see also Kant's analytic-synthetic distinction). Mathematical and logical propositions (e.g. "that the square of the hypotenuse is equal to the sum of the squares of the two sides") are examples of the first, while propositions involving some contingent observation of the world (e.g. "the sun rises in the East") are examples of the second. All of people's "ideas", in turn, are derived from their "impressions". For Hume, an "impression" corresponds roughly with what we call a sensation. To remember or to imagine such impressions is to have an "idea". Ideas are therefore the faint copies of sensations. Hume maintained that no knowledge, even the most basic beliefs about the natural world, can be conclusively established by reason. Rather, he maintained, our beliefs are more a result of accumulated habits, developed in response to accumulated sense experiences. Among his many arguments Hume also added another important slant to the debate about scientific method—that of the problem of induction. Hume argued that it requires inductive reasoning to arrive at the premises for the principle of inductive reasoning, and therefore the justification for inductive reasoning is a circular argument. Among Hume's conclusions regarding the problem of induction is that there is no certainty that the future will resemble the past. Thus, as a simple instance posed by Hume, we cannot know with certainty by inductive reasoning that the sun will continue to rise in the East, but instead come to expect it to do so because it has repeatedly done so in the past. Hume concluded that such things as belief in an external world and belief in the existence of the self were not rationally justifiable. According to Hume these beliefs were to be accepted nonetheless because of their profound basis in instinct and custom. Hume's lasting legacy, however, was the doubt that his skeptical arguments cast on the legitimacy of inductive reasoning, allowing many skeptics who followed to cast similar doubt. 
Phenomenalism Most of Hume's followers have disagreed with his conclusion that belief in an external world is rationally unjustifiable, contending that Hume's own principles implicitly contained the rational justification for such a belief, that is, beyond being content to let the issue rest on human instinct, custom and habit. According to an extreme empiricist theory known as phenomenalism, anticipated by the arguments of both Hume and George Berkeley, a physical object is a kind of construction out of our experiences. Phenomenalism is the view that physical objects, properties, events (whatever is physical) are reducible to mental objects, properties, events. Ultimately, only mental objects, properties, events, exist—hence the closely related term subjective idealism. By the phenomenalistic line of thinking, to have a visual experience of a real physical thing is to have an experience of a certain kind of group of experiences. This type of set of experiences possesses a constancy and coherence that is lacking in the set of experiences of which hallucinations, for example, are a part. As John Stuart Mill put it in the mid-19th century, matter is the "permanent possibility of sensation". Mill's empiricism went a significant step beyond Hume in still another respect: in maintaining that induction is necessary for all meaningful knowledge including mathematics. As summarized by D.W. Hamlin: Mill's empiricism thus held that knowledge of any kind is not from direct experience but an inductive inference from direct experience. The problems other philosophers have had with Mill's position center around the following issues: Firstly, Mill's formulation encounters difficulty when it describes what direct experience is by differentiating only between actual and possible sensations. This misses some key discussion concerning conditions under which such "groups of permanent possibilities of sensation" might exist in the first place. Berkeley put God in that gap; the phenomenalists, including Mill, essentially left the question unanswered. In the end, lacking an acknowledgement of an aspect of "reality" that goes beyond mere "possibilities of sensation", such a position leads to a version of subjective idealism. Questions of how floor beams continue to support a floor while unobserved, how trees continue to grow while unobserved and untouched by human hands, etc., remain unanswered, and perhaps unanswerable in these terms. Secondly, Mill's formulation leaves open the unsettling possibility that the "gap-filling entities are purely possibilities and not actualities at all". Thirdly, Mill's position, by calling mathematics merely another species of inductive inference, misapprehends mathematics. It fails to fully consider the structure and method of mathematical science, the products of which are arrived at through an internally consistent deductive set of procedures which do not, either today or at the time Mill wrote, fall under the agreed meaning of induction. The phenomenalist phase of post-Humean empiricism ended by the 1940s, for by that time it had become obvious that statements about physical things could not be translated into statements about actual and possible sense data. If a physical object statement is to be translatable into a sense-data statement, the former must be at least deducible from the latter. But it came to be realized that there is no finite set of statements about actual and possible sense-data from which we can deduce even a single physical-object statement. 
The translating or paraphrasing statement must be couched in terms of normal observers in normal conditions of observation. There is, however, no finite set of statements that are couched in purely sensory terms and can express the satisfaction of the condition of the presence of a normal observer. According to phenomenalism, to say that a normal observer is present is to make the hypothetical statement that were a doctor to inspect the observer, the observer would appear to the doctor to be normal. But, of course, the doctor himself must be a normal observer. If we are to specify this doctor's normality in sensory terms, we must make reference to a second doctor who, when inspecting the sense organs of the first doctor, would himself have to have the sense data a normal observer has when inspecting the sense organs of a subject who is a normal observer. And if we are to specify in sensory terms that the second doctor is a normal observer, we must refer to a third doctor, and so on (also see the third man). Logical empiricism Logical empiricism (also logical positivism or neopositivism) was an early 20th-century attempt to synthesize the essential ideas of British empiricism (e.g. a strong emphasis on sensory experience as the basis for knowledge) with certain insights from mathematical logic that had been developed by Gottlob Frege and Ludwig Wittgenstein. Some of the key figures in this movement were Otto Neurath, Moritz Schlick and the rest of the Vienna Circle, along with A. J. Ayer, Rudolf Carnap and Hans Reichenbach. The neopositivists subscribed to a notion of philosophy as the conceptual clarification of the methods, insights and discoveries of the sciences. They saw in the logical symbolism elaborated by Frege (1848–1925) and Bertrand Russell (1872–1970) a powerful instrument that could rationally reconstruct all scientific discourse into an ideal, logically perfect language, free of the ambiguities and deformations of natural language that, in their view, gave rise to metaphysical pseudoproblems and other conceptual confusions. By combining Frege's thesis that all mathematical truths are logical with the early Wittgenstein's idea that all logical truths are mere linguistic tautologies, they arrived at a twofold classification of all propositions: the "analytic" (a priori) and the "synthetic" (a posteriori). On this basis, they formulated a strong principle of demarcation between sentences that have sense and those that do not: the so-called "verification principle". Any sentence that is not purely logical, or is unverifiable, is devoid of meaning. As a result, most metaphysical, ethical, aesthetic and other traditional philosophical problems came to be considered pseudoproblems. In the extreme empiricism of the neopositivists—at least before the 1930s—any genuinely synthetic assertion must be reducible to an ultimate assertion (or set of ultimate assertions) that expresses direct observations or perceptions. In later years, Carnap and Neurath abandoned this sort of phenomenalism in favor of a rational reconstruction of knowledge into the language of an objective spatio-temporal physics. That is, instead of translating sentences about physical objects into sense-data, such sentences were to be translated into so-called protocol sentences, for example, "X at location Y and at time T observes such and such". The central theses of logical positivism (verificationism, the analytic–synthetic distinction, reductionism, etc.)
came under sharp attack after World War II by thinkers such as Nelson Goodman, W. V. Quine, Hilary Putnam, Karl Popper, and Richard Rorty. By the late 1960s, it had become evident to most philosophers that the movement had pretty much run its course, though its influence is still significant among contemporary analytic philosophers such as Michael Dummett and other anti-realists. Pragmatism In the late 19th and early 20th century, several forms of pragmatic philosophy arose. The ideas of pragmatism, in its various forms, developed mainly from discussions between Charles Sanders Peirce and William James when both men were at Harvard in the 1870s. James popularized the term "pragmatism", giving Peirce full credit for its patrimony, but Peirce later demurred from the tangents that the movement was taking, and redubbed what he regarded as the original idea with the name of "pragmaticism". Along with its pragmatic theory of truth, this perspective integrates the basic insights of empirical (experience-based) and rational (concept-based) thinking. Charles Peirce (1839–1914) was highly influential in laying the groundwork for today's empirical scientific method. Although Peirce severely criticized many elements of Descartes' peculiar brand of rationalism, he did not reject rationalism outright. Indeed, he concurred with the main ideas of rationalism, most importantly the idea that rational concepts can be meaningful and the idea that rational concepts necessarily go beyond the data given by empirical observation. In later years he even emphasized the concept-driven side of the then ongoing debate between strict empiricism and strict rationalism, in part to counterbalance the excesses to which some of his cohorts had taken pragmatism under the "data-driven" strict-empiricist view. Among Peirce's major contributions was to place inductive reasoning and deductive reasoning in a complementary rather than competitive mode, the latter of which had been the primary trend among the educated since David Hume wrote a century before. To this, Peirce added the concept of abductive reasoning. The combined three forms of reasoning serve as a primary conceptual foundation for the empirically based scientific method today. Peirce's approach "presupposes that (1) the objects of knowledge are real things, (2) the characters (properties) of real things do not depend on our perceptions of them, and (3) everyone who has sufficient experience of real things will agree on the truth about them. According to Peirce's doctrine of fallibilism, the conclusions of science are always tentative. The rationality of the scientific method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth". In his Harvard "Lectures on Pragmatism" (1903), Peirce enumerated what he called the "three cotary propositions of pragmatism" (L: cos, cotis whetstone), saying that they "put the edge on the maxim of pragmatism". First among these, he listed the peripatetic-thomist observation mentioned above, but he further observed that this link between sensory perception and intellectual conception is a two-way street. That is, it can be taken to say that whatever we find in the intellect is also incipiently in the senses. 
Hence, if theories are theory-laden then so are the senses, and perception itself can be seen as a species of abductive inference, its difference being that it is beyond control and hence beyond critique—in a word, incorrigible. This in no way conflicts with the fallibility and revisability of scientific concepts, since it is only the immediate percept in its unique individuality or "thisness"—what the Scholastics called its haecceity—that stands beyond control and correction. Scientific concepts, on the other hand, are general in nature, and transient sensations do in another sense find correction within them. This notion of perception as abduction has received periodic revivals in artificial intelligence and cognitive science research, most recently for instance with the work of Irvin Rock on indirect perception. Around the beginning of the 20th century, William James (1842–1910) coined the term "radical empiricism" to describe an offshoot of his form of pragmatism, which he argued could be dealt with separately from his pragmatism—though in fact the two concepts are intertwined in James's published lectures. James maintained that the empirically observed "directly apprehended universe needs ... no extraneous trans-empirical connective support", by which he meant to rule out the perception that there can be any value added by seeking supernatural explanations for natural phenomena. James' "radical empiricism" is thus not radical in the context of the term "empiricism", but is instead fairly consistent with the modern use of the term "empirical". His method of argument in arriving at this view, however, still readily encounters debate within philosophy even today. John Dewey (1859–1952) modified James' pragmatism to form a theory known as instrumentalism. The role of sense experience in Dewey's theory is crucial, in that he saw experience as unified totality of things through which everything else is interrelated. Dewey's basic thought, in accordance with empiricism, was that reality is determined by past experience. Therefore, humans adapt their past experiences of things to perform experiments upon and test the pragmatic values of such experience. The value of such experience is measured experientially and scientifically, and the results of such tests generate ideas that serve as instruments for future experimentation, in physical sciences as in ethics. Thus, ideas in Dewey's system retain their empiricist flavour in that they are only known a posteriori. See also Endnotes References Achinstein, Peter, and Barker, Stephen F. (1969), The Legacy of Logical Positivism: Studies in the Philosophy of Science, Johns Hopkins University Press, Baltimore, MD. Aristotle, "On the Soul" (De Anima), W. S. Hett (trans.), pp. 1–203 in Aristotle, Volume 8, Loeb Classical Library, William Heinemann, London, UK, 1936. Aristotle, Posterior Analytics. Barone, Francesco (1986), Il neopositivismo logico, Laterza, Roma Bari Berlin, Isaiah (2004), The Refutation of Phenomenalism, Isaiah Berlin Virtual Library. Bolender, John (1998), "Factual Phenomenalism: A Supervenience Theory"', Sorites, no. 9, pp. 16–31. Chisolm, R. (1948), "The Problem of Empiricism", Journal of Philosophy 45, 512–17. Dewey, John (1906), Studies in Logical Theory. Encyclopædia Britannica, "Empiricism", vol. 4, p. 480. Hume, D., A Treatise of Human Nature, L.A. Selby-Bigge (ed.), Oxford University Press, London, UK, 1975. Hume, David. 
"An Enquiry Concerning Human Understanding", in Enquiries Concerning the Human Understanding and Concerning the Principles of Morals, 2nd edition, L.A. Selby-Bigge (ed.), Oxford University Press, Oxford, UK, 1902. Gutenberg press full-text James, William (1911), The Meaning of Truth. Keeton, Morris T. (1962), "Empiricism", pp. 89–90 in Dagobert D. Runes (ed.), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ. Leftow, Brian (ed., 2006), Aquinas: Summa Theologiae, Questions on God, pp. vii et seq. Macmillan Encyclopedia of Philosophy (1969), "Development of Aristotle's Thought", vol. 1, pp. 153ff. Macmillan Encyclopedia of Philosophy (1969), "George Berkeley", vol. 1, p. 297. Macmillan Encyclopedia of Philosophy (1969), "Empiricism", vol. 2, p. 503. Macmillan Encyclopedia of Philosophy (1969), "Mathematics, Foundations of", vol. 5, pp. 188–89. Macmillan Encyclopedia of Philosophy (1969), "Axiomatic Method", vol. 5, pp. 192ff. Macmillan Encyclopedia of Philosophy (1969), "Epistemological Discussion", subsections on "A Priori Knowledge" and "Axioms". Macmillan Encyclopedia of Philosophy (1969), "Phenomenalism", vol. 6, p. 131. Macmillan Encyclopedia of Philosophy (1969), "Thomas Aquinas", subsection on "Theory of Knowledge", vol. 8, pp. 106–07. Marconi, Diego (2004), "Fenomenismo"', in Gianni Vattimo and Gaetano Chiurazzi (eds.), L'Enciclopedia Garzanti di Filosofia, 3rd edition, Garzanti, Milan, Italy. Markie, P. (2004), "Rationalism vs. Empiricism" in Edward D. Zalta (ed.), Stanford Encyclopedia of Philosophy, Eprint. Maxwell, Nicholas (1998), The Comprehensibility of the Universe: A New Conception of Science, Oxford University Press, Oxford. Mill, J.S., "An Examination of Sir William Rowan Hamilton's Philosophy", in A.J. Ayer and Ramond Winch (eds.), British Empirical Philosophers, Simon and Schuster, New York, NY, 1968. Morick, H. (1980), Challenges to Empiricism, Hackett Publishing, Indianapolis, IN. Peirce, C.S., "Lectures on Pragmatism", Cambridge, Massachusetts, March 26 – May 17, 1903. Reprinted in part, Collected Papers, CP 5.14–212. Published in full with editor's introduction and commentary, Patricia Ann Turisi (ed.), Pragmatism as a Principle and Method of Right Thinking: The 1903 Harvard "Lectures on Pragmatism", State University of New York Press, Albany, NY, 1997. Reprinted, pp. 133–241, Peirce Edition Project (eds.), The Essential Peirce, Selected Philosophical Writings, Volume 2 (1893–1913), Indiana University Press, Bloomington, IN, 1998. Rescher, Nicholas (1985), The Heritage of Logical Positivism, University Press of America, Lanham, MD. Rock, Irvin (1983), The Logic of Perception, MIT Press, Cambridge, Massachusetts. Rock, Irvin, (1997) Indirect Perception, MIT Press, Cambridge, Massachusetts. Runes, D.D. (ed., 1962), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ. Sini, Carlo (2004), "Empirismo", in Gianni Vattimo et al. (eds.), Enciclopedia Garzanti della Filosofia. Solomon, Robert C., and Higgins, Kathleen M. (1996), A Short History of Philosophy, pp. 68–74. Sorabji, Richard (1972), Aristotle on Memory. Thornton, Stephen (1987), Berkeley's Theory of Reality, Eprint Vanzo, Alberto (2014), "From Empirics to Empiricists", Intellectual History Review, 2014, Eprint available here and here. Ward, Teddy (n.d.), "Empiricism", Eprint. Wilson, Fred (2005), "John Stuart Mill", in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, Eprint. 
External links Empiricist Man History of science Justification (epistemology) Philosophical methodology Internalism and externalism Philosophy of science Epistemological schools and traditions
Empiricism
[ "Technology" ]
7,542
[ "History of science", "History of science and technology" ]
10,192
https://en.wikipedia.org/wiki/Explosive
An explosive (or explosive material) is a reactive substance that contains a great amount of potential energy that can produce an explosion if released suddenly, usually accompanied by the production of light, heat, sound, and pressure. An explosive charge is a measured quantity of explosive material, which may either be composed solely of one ingredient or be a mixture containing at least two substances. The potential energy stored in an explosive material may, for example, be chemical energy (such as in nitroglycerin or grain dust), pressurized gas (such as in a gas cylinder, an aerosol can, or a boiling liquid expanding vapor explosion), or nuclear energy (such as in the fissile isotopes uranium-235 and plutonium-239). Explosive materials may be categorized by the speed at which they expand. Materials that detonate (the front of the chemical reaction moves faster through the material than the speed of sound) are said to be "high explosives" and materials that deflagrate are said to be "low explosives". Explosives may also be categorized by their sensitivity. Sensitive materials that can be initiated by a relatively small amount of heat or pressure are primary explosives and materials that are relatively insensitive are secondary or tertiary explosives. A wide variety of chemicals can explode; a smaller number are manufactured specifically for the purpose of being used as explosives. The remainder are too dangerous, sensitive, toxic, expensive, unstable, or prone to decomposition or degradation over short time spans. In contrast, some materials are merely combustible or flammable if they burn without exploding. The distinction, however, is not very clear. Certain materials—dusts, powders, gases, or volatile organic liquids—may be simply combustible or flammable under ordinary conditions, but become explosive in specific situations or forms, such as dispersed airborne clouds, or confinement or sudden release. History Early thermal weapons, such as Greek fire, have existed since ancient times. At its roots, the history of chemical explosives lies in the history of gunpowder. During the Tang dynasty in the 9th century, Taoist Chinese alchemists were eagerly trying to find the elixir of immortality. In the process, they stumbled upon the explosive invention of black powder made from coal, saltpeter, and sulfur in 1044. Gunpowder was the first form of chemical explosives, and by 1161 the Chinese were using explosives for the first time in warfare. The Chinese would incorporate explosives fired from bamboo or bronze tubes known as bamboo firecrackers. The Chinese also inserted live rats inside the bamboo firecrackers; when fired toward the enemy, the flaming rats created great psychological ramifications—scaring enemy soldiers away and causing cavalry units to go wild. The first useful explosive stronger than black powder was nitroglycerin, developed in 1847. Because nitroglycerin is a liquid and highly unstable, it was superseded by nitrocellulose, trinitrotoluene (TNT) in 1863, smokeless powder, dynamite in 1867, and gelignite (the latter two being sophisticated stabilized preparations of nitroglycerin rather than chemical alternatives, both invented by Alfred Nobel). World War I saw the adoption of TNT in artillery shells. World War II saw extensive use of new explosives. In turn, these have largely been replaced by more powerful explosives such as C-4 and PETN. However, C-4 and PETN react with metal and catch fire easily, yet unlike TNT, C-4 and PETN are waterproof and malleable.
Applications Commercial The largest commercial application of explosives is mining. Whether the mine is on the surface or is buried underground, the detonation or deflagration of either a high or low explosive in a confined space can be used to liberate a fairly specific sub-volume of a brittle material (rock) in a much larger volume of the same or similar material. The mining industry tends to use nitrate-based explosives such as emulsions of fuel oil and ammonium nitrate solutions, mixtures of ammonium nitrate prills (fertilizer pellets) and fuel oil (ANFO) and gelatinous suspensions or slurries of ammonium nitrate and combustible fuels. In materials science and engineering, explosives are used in cladding (explosion welding). A thin plate of some material is placed atop a thick layer of a different material, both layers typically of metal. Atop the thin layer is placed an explosive. At one end of the layer of explosive, the explosion is initiated. The two metallic layers are forced together at high speed and with great force. The explosion spreads from the initiation site throughout the explosive. Ideally, this produces a metallurgical bond between the two layers. Because the shock wave spends only a short time at any point, the two metals and their surface chemistries are mixed through some fraction of the depth at the interface. Some fraction of the surface material from either layer may eventually be ejected when the end of the material is reached; hence, the mass of the now "welded" bilayer may be less than the sum of the masses of the two initial layers. There are also applications in which a shock wave and electrostatics can produce high-velocity projectiles, as in an electrostatic particle accelerator. Military Civilian Safety Types Chemical An explosion is a type of spontaneous chemical reaction that, once initiated, is driven by both a large exothermic change (great release of heat) and a large positive entropy change (great quantities of gases are released) in going from reactants to products, thereby constituting a thermodynamically favorable process in addition to one that propagates very rapidly. Thus, explosives are substances that contain a large amount of energy stored in chemical bonds. The energetic stability of the gaseous products and hence their generation comes from the formation of strongly bonded species like carbon monoxide, carbon dioxide, and nitrogen gas, which contain strong double and triple bonds having bond strengths of nearly 1 MJ/mole. Consequently, most commercial explosives are organic compounds containing –NO2, –ONO2 and –NHNO2 groups that, when detonated, release gases like the aforementioned (e.g., nitroglycerin, TNT, HMX, PETN, nitrocellulose). An explosive is classified as a low or high explosive according to its rate of combustion: low explosives burn rapidly (or deflagrate), while high explosives detonate. While these definitions are distinct, the problem of precisely measuring rapid decomposition makes practical classification of explosives difficult. For a reaction to be classified as a detonation as opposed to just a deflagration, the propagation of the reaction shockwave through the material being tested must be faster than the speed of sound through that material. The speed of sound through a liquid or solid material is usually considerably higher than the speed of sound through air or other gases.
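The detonation–deflagration distinction described above reduces to a single comparison: is the reaction front faster than the speed of sound in the unreacted material? The following is a minimal sketch in Python; the speed-of-sound figure used below is an illustrative assumption rather than a value from this article, while the roughly 6,900 m/s front speed for a TNT-like detonation matches a figure quoted later in the text.

```python
def classify_reaction(front_speed_m_s: float, sound_speed_m_s: float) -> str:
    """Label a reaction by comparing its front speed with the speed of sound
    in the unreacted material, per the criterion described in the text."""
    if front_speed_m_s > sound_speed_m_s:
        return "detonation (high explosive)"
    return "deflagration (low explosive)"

# Illustrative values only: a TNT-like front (~6,900 m/s) versus a slow,
# flame-driven burn, with an assumed speed of sound of 1,500 m/s in the
# unreacted material.
print(classify_reaction(6900.0, 1500.0))  # -> detonation (high explosive)
print(classify_reaction(400.0, 1500.0))   # -> deflagration (low explosive)
```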
Traditional explosives mechanics is based on the shock-sensitive rapid oxidation of carbon and hydrogen to carbon dioxide, carbon monoxide and water in the form of steam. Nitrates typically provide the required oxygen to burn the carbon and hydrogen fuel. High explosives tend to have the oxygen, carbon and hydrogen contained in one organic molecule, and less sensitive explosives like ANFO are combinations of fuel (carbon and hydrogen fuel oil) and ammonium nitrate. A sensitizer such as powdered aluminum may be added to an explosive to increase the energy of the detonation. Once detonated, the nitrogen portion of the explosive formulation emerges as nitrogen gas and toxic nitric oxides. Decomposition The chemical decomposition of an explosive may take years, days, hours, or a fraction of a second. The slower processes of decomposition take place in storage and are of interest only from a stability standpoint. Of more interest are the two rapid forms of decomposition: deflagration and detonation. Deflagration In deflagration, decomposition of the explosive material is propagated by a flame front which moves relatively slowly through the explosive material, at speeds less than the speed of sound within the substance (which is usually still well above 340 m/s in most liquid or solid materials), in contrast to detonation, which occurs at speeds greater than the speed of sound. Deflagration is a characteristic of low explosive material. Detonation This term is used to describe an explosive phenomenon whereby the decomposition is propagated by a shock wave traversing the explosive material at speeds greater than the speed of sound within the substance. The shock front is capable of passing through the high explosive material at supersonic speeds, typically thousands of metres per second. Exotic In addition to chemical explosives, there are a number of more exotic explosive materials, and exotic methods of causing explosions. Examples include nuclear explosives, and abruptly heating a substance to a plasma state with a high-intensity laser or electric arc. Laser- and arc-heating are used in laser detonators, exploding-bridgewire detonators, and exploding foil initiators, where a shock wave and then detonation in conventional chemical explosive material is created by laser- or electric-arc heating. Laser and electric energy are not currently used in practice to generate most of the required energy, but only to initiate reactions. Properties To determine the suitability of an explosive substance for a particular use, its physical properties must first be known. The usefulness of an explosive can only be appreciated when the properties and the factors affecting them are fully understood. Some of the more important characteristics are listed below: Sensitivity Sensitivity refers to the ease with which an explosive can be ignited or detonated, i.e., the amount and intensity of shock, friction, or heat that is required. When the term sensitivity is used, care must be taken to clarify what kind of sensitivity is under discussion. The relative sensitivity of a given explosive to impact may vary greatly from its sensitivity to friction or heat. Some of the test methods used to determine sensitivity relate to: Impact – Sensitivity is expressed in terms of the distance through which a standard weight must be dropped onto the material to cause it to explode.
Friction – Sensitivity is expressed in terms of the amount of pressure applied to the material in order to create enough friction to cause a reaction. Heat – Sensitivity is expressed in terms of the temperature at which decomposition of the material occurs. Specific explosives (usually but not always highly sensitive on one or more of the three above axes) may be idiosyncratically sensitive to such factors as pressure drop, acceleration, the presence of sharp edges or rough surfaces, incompatible materials, or, in rare cases, nuclear or electromagnetic radiation. These factors present special hazards that may rule out any practical utility. Sensitivity is an important consideration in selecting an explosive for a particular purpose. The explosive in an armor-piercing projectile must be relatively insensitive, or the shock of impact would cause it to detonate before it penetrated to the point desired. The explosive lenses around nuclear charges are also designed to be highly insensitive, to minimize the risk of accidental detonation. Sensitivity to initiation The index of the capacity of an explosive to be initiated into detonation in a sustained manner. It is defined by the power of the detonator which is certain to prime the explosive to a sustained and continuous detonation. Reference is made to the Sellier-Bellot scale, which consists of a series of 10 detonators, from No. 1 to No. 10, each of which corresponds to an increasing charge weight. In practice, most of the explosives on the market today are sensitive to a No. 8 detonator, where the charge corresponds to 2 grams of mercury fulminate. Velocity of detonation The velocity with which the reaction process propagates in the mass of the explosive. Most commercial mining explosives have detonation velocities ranging from 1,800 m/s to 8,000 m/s. Today, velocity of detonation can be measured accurately. Together with density it is an important element influencing the yield of the energy transmitted for both atmospheric over-pressure and ground acceleration. By definition, a "low explosive", such as black powder or smokeless gunpowder, has a burn rate of 171–631 m/s. In contrast, a "high explosive", whether a primary, such as detonating cord, or a secondary, such as TNT or C-4, has a significantly higher burn rate, of about 6,900–8,092 m/s. Stability Stability is the ability of an explosive to be stored without deterioration. The following factors affect the stability of an explosive: Chemical constitution. In the strictest technical sense, the word "stability" is a thermodynamic term referring to the energy of a substance relative to a reference state or to some other substance. However, in the context of explosives, stability commonly refers to ease of detonation, which is concerned with chemical kinetics (i.e., rate of decomposition). It is perhaps best, then, to differentiate between the terms thermodynamically stable and kinetically stable by referring to the former as "inert." Contrarily, a kinetically unstable substance is said to be "labile." It is generally recognized that certain groups like nitro (–NO2), nitrate (–ONO2), and azide (–N3) are intrinsically labile. Kinetically, there exists a low activation barrier to the decomposition reaction. Consequently, these compounds exhibit high sensitivity to flame or mechanical shock. The chemical bonding in these compounds is characterized as predominantly covalent and thus they are not thermodynamically stabilized by a high ionic-lattice energy.
Furthermore, they generally have positive enthalpies of formation and there is little mechanistic hindrance to internal molecular rearrangement to yield the more thermodynamically stable (more strongly bonded) decomposition products. For example, in lead azide, Pb(N3)2, the nitrogen atoms are already bonded to one another, so decomposition into Pb and N2 is relatively easy. Temperature of storage. The rate of decomposition of explosives increases at higher temperatures. All standard military explosives may be considered to have a high degree of stability at temperatures from –10 to +35 °C, but each has a high temperature at which its rate of thermal decomposition rapidly accelerates and stability is reduced. As a rule of thumb, most explosives become dangerously unstable at temperatures above 70 °C. Exposure to sunlight. When exposed to the ultraviolet rays of sunlight, many explosive compounds containing nitrogen groups rapidly decompose, affecting their stability. Electrical discharge. Electrostatic or spark sensitivity to initiation is common in a number of explosives. Static or other electrical discharge may be sufficient to cause a reaction, even detonation, under some circumstances. As a result, safe handling of explosives and pyrotechnics usually requires proper electrical grounding of the operator. Power, performance, and strength The term power or performance as applied to an explosive refers to its ability to do work. In practice it is defined as the explosive's ability to accomplish what is intended in the way of energy delivery (i.e., fragment projection, air blast, high-velocity jet, underwater shock and bubble energy, etc.). Explosive power or performance is evaluated by a tailored series of tests to assess the material for its intended use. Of the tests listed below, cylinder expansion and air-blast tests are common to most testing programs, and the others support specific applications. Cylinder expansion test. A standard amount of explosive is loaded into a long hollow cylinder, usually of copper, and detonated at one end. Data is collected concerning the rate of radial expansion of the cylinder and the maximum cylinder wall velocity. This also establishes the Gurney energy or 2E. Cylinder fragmentation. A standard steel cylinder is loaded with explosive and detonated in a sawdust pit. The fragments are collected and the size distribution analyzed. Detonation pressure (Chapman–Jouguet condition). Detonation pressure data derived from measurements of shock waves transmitted into water by the detonation of cylindrical explosive charges of a standard size. Determination of critical diameter. This test establishes the minimum physical size a charge of a specific explosive must be to sustain its own detonation wave. The procedure involves the detonation of a series of charges of different diameters until difficulty in detonation wave propagation is observed. Massive-diameter detonation velocity. Detonation velocity is dependent on loading density (c), charge diameter, and grain size. The hydrodynamic theory of detonation used in predicting explosive phenomena does not include the diameter of the charge, and therefore predicts a detonation velocity corresponding to a charge of effectively unlimited ("massive") diameter. This procedure requires the firing of a series of charges of the same density and physical structure, but different diameters, and the extrapolation of the resulting detonation velocities to predict the detonation velocity of a charge of a massive diameter. Pressure versus scaled distance.
A charge of a specific size is detonated and its pressure effects measured at a standard distance. The values obtained are compared with those for TNT. Impulse versus scaled distance. A charge of a specific size is detonated and its impulse (the area under the pressure-time curve) measured as a function of distance. The results are tabulated and expressed as TNT equivalents. Relative bubble energy (RBE). A 5 to 50 kg charge is detonated in water and piezoelectric gauges measure peak pressure, time constant, impulse, and energy. The RBE may be defined as RBE = (Kx / Ks)^3, where K is the bubble expansion period for an experimental (x) or a standard (s) charge. Brisance In addition to strength, explosives display a second characteristic, which is their shattering effect or brisance (from the French meaning to "break"). Brisance is important in determining the effectiveness of an explosion in fragmenting shells, bomb casings, and grenades. The rapidity with which an explosive reaches its peak pressure (power) is a measure of its brisance. Brisance values are primarily employed in France and Russia. The sand crush test is commonly employed to determine the relative brisance in comparison to TNT. No test is capable of directly comparing the explosive properties of two or more compounds; it is important to examine the data from several such tests (sand crush, trauzl, and so forth) in order to gauge relative brisance. True values for comparison require field experiments. Density Density of loading refers to the mass of an explosive per unit volume. Several methods of loading are available, including pellet loading, cast loading, and press loading, the choice being determined by the characteristics of the explosive. Dependent upon the method employed, an average density of the loaded charge can be obtained that is within 80–99% of the theoretical maximum density of the explosive. High load density can reduce sensitivity by making the mass more resistant to internal friction. However, if density is increased to the extent that individual crystals are crushed, the explosive may become more sensitive. Increased load density also permits the use of more explosive, thereby increasing the power of the warhead. It is possible to compress an explosive beyond a point of sensitivity, known also as dead-pressing, in which the material is no longer capable of being reliably initiated, if at all. Volatility Volatility is the readiness with which a substance vaporizes. Excessive volatility often results in the development of pressure within rounds of ammunition and separation of mixtures into their constituents. Volatility affects the chemical composition of the explosive such that a marked reduction in stability may occur, which results in an increase in the danger of handling. Hygroscopicity and water resistance The introduction of water into an explosive is highly undesirable since it reduces the sensitivity, strength, and velocity of detonation of the explosive. Hygroscopicity is a measure of a material's moisture-absorbing tendencies. Moisture affects explosives adversely by acting as an inert material that absorbs heat when vaporized, and by acting as a solvent medium that can cause undesired chemical reactions. Sensitivity, strength, and velocity of detonation are reduced by inert materials that reduce the continuity of the explosive mass. When the moisture content evaporates during detonation, cooling occurs, which reduces the temperature of reaction.
Stability is also affected by the presence of moisture since moisture promotes decomposition of the explosive and, in addition, causes corrosion of the explosive's metal container. Explosives considerably differ from one another as to their behavior in the presence of water. Gelatin dynamites containing nitroglycerine have a degree of water resistance. Explosives based on ammonium nitrate have little or no water resistance as ammonium nitrate is highly soluble in water and is hygroscopic. Toxicity Many explosives are toxic to some extent. Manufacturing inputs can also be organic compounds or hazardous materials that require special handling due to risks (such as carcinogens). The decomposition products, residual solids, or gases of some explosives can be toxic, whereas others are harmless, such as carbon dioxide and water. Examples of harmful by-products are: Heavy metals, such as lead, mercury, and barium from primers (observed in high-volume firing ranges) Nitric oxides from TNT Perchlorates when used in large quantities "Green explosives" seek to reduce environment and health impacts. An example of such is the lead-free primary explosive copper(I) 5-nitrotetrazolate, an alternative to lead azide. Explosive train Explosive material may be incorporated in the explosive train of a device or system. An example is a pyrotechnic lead igniting a booster, which causes the main charge to detonate. Volume of products of explosion The most widely used explosives are condensed liquids or solids converted to gaseous products by explosive chemical reactions and the energy released by those reactions. The gaseous products of complete reaction are typically carbon dioxide, steam, and nitrogen. Gaseous volumes computed by the ideal gas law tend to be too large at high pressures characteristic of explosions. Ultimate volume expansion may be estimated at three orders of magnitude, or one liter per gram of explosive. Explosives with an oxygen deficit will generate soot or gases like carbon monoxide and hydrogen, which may react with surrounding materials such as atmospheric oxygen. Attempts to obtain more precise volume estimates must consider the possibility of such side reactions, condensation of steam, and aqueous solubility of gases like carbon dioxide. Oxygen balance (OB% or Ω) Oxygen balance is an expression that is used to indicate the degree to which an explosive can be oxidized. If an explosive molecule contains just enough oxygen to convert all of its carbon to carbon dioxide, all of its hydrogen to water, and all of its metal to metal oxide with no excess, the molecule is said to have a zero oxygen balance. The molecule is said to have a positive oxygen balance if it contains more oxygen than is needed and a negative oxygen balance if it contains less oxygen than is needed. The sensitivity, strength, and brisance of an explosive are all somewhat dependent upon oxygen balance and tend to approach their maxima as oxygen balance approaches zero. Chemical composition A chemical explosive may consist of either a chemically pure compound, such as nitroglycerin, or a mixture of a fuel and an oxidizer, such as black powder or grain dust and air. Pure compounds Some chemical compounds are unstable in that, when shocked, they react, possibly to the point of detonation. Each molecule of the compound dissociates into two or more new molecules (generally gases) with the release of energy. 
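As a rough illustration of the oxygen-balance concept described above, the sketch below counts the oxygen a CHNO molecule needs to convert all of its carbon to carbon dioxide and all of its hydrogen to water, and compares that with the oxygen the molecule actually contains. The percentage convention used (the −1600/MW scaling) and the TNT formula and molecular weight are assumptions drawn from common practice, not values stated in this article.

```python
def oxygen_balance_percent(c: int, h: int, o: int, mol_weight: float) -> float:
    """Approximate oxygen balance (OB%) of a CHNO explosive.

    Full oxidation needs 2 oxygen atoms per carbon (-> CO2) and 1/2 per
    hydrogen (-> H2O), as described in the text; nitrogen is assumed to
    leave as N2 and needs none. A negative result means the molecule is
    oxygen-deficient. The -1600/MW scaling is the commonly quoted
    convention (an assumption here, not stated in this article).
    """
    oxygen_needed = 2 * c + h / 2
    return -1600.0 / mol_weight * (oxygen_needed - o)

# TNT, C7H5N3O6, molecular weight ~227.13 g/mol: strongly oxygen-deficient,
# roughly -74% (standard reference values, given for illustration only).
print(round(oxygen_balance_percent(c=7, h=5, o=6, mol_weight=227.13), 1))
```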
Nitroglycerin: A highly unstable and sensitive liquid Acetone peroxide: A very unstable white organic peroxide TNT: Yellow insensitive crystals that can be melted and cast without detonation Cellulose nitrate: A nitrated polymer which can be a high or low explosive depending on nitration level and conditions RDX, PETN, HMX: Very powerful explosives which can be used pure or in plastic explosives C-4 (or Composition C-4): An RDX plastic explosive plasticized to be adhesive and malleable The above compositions may describe most of the explosive material, but a practical explosive will often include small percentages of other substances. For example, dynamite is a mixture of highly sensitive nitroglycerin with sawdust, powdered silica, or most commonly diatomaceous earth, which act as stabilizers. Plastics and polymers may be added to bind powders of explosive compounds; waxes may be incorporated to make them safer to handle; aluminium powder may be introduced to increase total energy and blast effects. Explosive compounds are also often "alloyed": HMX or RDX powders may be mixed (typically by melt-casting) with TNT to form Octol or Cyclotol. Oxidized fuel An oxidizer is a pure substance (molecule) that in a chemical reaction can contribute some atoms of one or more oxidizing elements, in which the fuel component of the explosive burns. On the simplest level, the oxidizer may itself be an oxidizing element, such as gaseous or liquid oxygen. Black powder: Potassium nitrate, charcoal and sulfur Flash powder: Fine metal powder (usually aluminium or magnesium) and a strong oxidizer (e.g. potassium chlorate or perchlorate) Ammonal: Ammonium nitrate and aluminium powder Armstrong's mixture: Potassium chlorate and red phosphorus. This is a very sensitive mixture. It is a primary high explosive in which sulfur is substituted for some or all of the phosphorus to slightly decrease sensitivity. Sprengel explosives: A very general class incorporating any strong oxidizer and highly reactive fuel, although in practice the name was most commonly applied to mixtures of chlorates and nitroaromatics. ANFO: Ammonium nitrate and fuel oil Cheddites: Chlorates or perchlorates and oil Oxyliquits: Mixtures of organic materials and liquid oxygen Panclastites: Mixtures of organic materials and dinitrogen tetroxide Availability and cost The availability and cost of explosives are determined by the availability of the raw materials and the cost, complexity, and safety of the manufacturing operations. Classification By sensitivity Primary A primary explosive is an explosive that is extremely sensitive to stimuli such as impact, friction, heat, static electricity, or electromagnetic radiation. Some primary explosives are also known as contact explosives. A relatively small amount of energy is required for initiation. As a very general rule, primary explosives are considered to be those compounds that are more sensitive than PETN. As a practical measure, primary explosives are sufficiently sensitive that they can be reliably initiated with a blow from a hammer; however, PETN can also usually be initiated in this manner, so this is only a very broad guideline. Additionally, several compounds, such as nitrogen triiodide, are so sensitive that they cannot even be handled without detonating. Nitrogen triiodide is so sensitive that it can be reliably detonated by exposure to alpha radiation. Primary explosives are often used in detonators or to trigger larger charges of less sensitive secondary explosives. 
Primary explosives are commonly used in blasting caps and percussion caps to translate a physical shock signal. In other situations, different signals such as electrical or physical shock, or, in the case of laser detonation systems, light, are used to initiate an action, i.e., an explosion. A small quantity, usually milligrams, is sufficient to initiate a larger charge of explosive that is usually safer to handle. Examples of primary high explosives are: Acetone peroxide Alkali metal ozonides Ammonium permanganate Ammonium chlorate Azidotetrazolates Azoclathrates Benzoyl peroxide Benzvalene 3,5-Bis(trinitromethyl)tetrazole Chlorine oxides Copper(I) acetylide Copper(II) azide Cumene hydroperoxide Cycloprop(-2-)enyl nitrate (CXP or CPN) Cyanogen azide Cyanuric triazide Diacetyl peroxide 1-Diazidocarbamoyl-5-azidotetrazole Diazodinitrophenol Diazomethane Diethyl ether peroxide 4-Dimethylaminophenylpentazole Disulfur dinitride Ethyl azide Explosive antimony Fluorine perchlorate Fulminic acid Halogen azides: Fluorine azide Chlorine azide Bromine azide Iodine azide Hexamethylene triperoxide diamine Hydrazoic acid Hypofluorous acid Lead azide Lead styphnate Lead picrate Manganese heptoxide Mercury(II) fulminate Mercury nitride Methyl ethyl ketone peroxide Nickel hydrazine nitrate Nickel hydrazine perchlorate Nitrogen trihalides: Nitrogen trichloride Nitrogen tribromide Nitrogen triiodide Nitroglycerin Nitronium perchlorate Nitrosyl perchlorate Nitrotetrazolate-N-oxides Pentazenium hexafluoroarsenate Peroxy acids Peroxymonosulfuric acid Selenium tetraazide Silicon tetraazide Silver azide Silver acetylide Silver fulminate Silver nitride Tellurium tetraazide tert-Butyl hydroperoxide Tetraamine copper complexes Tetraazidomethane Tetrazene explosive Tetrazoles Titanium tetraazide Triazidomethane Oxides of xenon: Xenon dioxide Xenon oxytetrafluoride Xenon tetroxide Xenon trioxide Secondary A secondary explosive is less sensitive than a primary explosive and requires substantially more energy to be initiated. Because they are less sensitive, they are usable in a wider variety of applications and are safer to handle and store. Secondary explosives are used in larger quantities in an explosive train and are usually initiated by a smaller quantity of a primary explosive. Examples of secondary explosives include TNT and RDX. Tertiary Tertiary explosives, also called blasting agents, are so insensitive to shock that they cannot be reliably detonated by practical quantities of primary explosive, and instead require an intermediate explosive booster of secondary explosive. These are often used for safety and the typically lower costs of material and handling. The largest consumers are large-scale mining and construction operations. Most tertiaries include a fuel and an oxidizer. ANFO can be a tertiary explosive if its reaction rate is slow. By velocity Low Low explosives (or low-order explosives) are compounds wherein the rate of decomposition proceeds through the material at less than the speed of sound. The decomposition is propagated by a flame front (deflagration) which travels much more slowly through the explosive material than a shock wave of a high explosive. Under normal conditions, low explosives undergo deflagration at rates that vary from a few centimetres per second to approximately . It is possible for them to deflagrate very quickly, producing an effect similar to a detonation. 
This can happen under higher pressure (such as when gunpowder deflagrates inside the confined space of a bullet casing, accelerating the bullet to well beyond the speed of sound) or temperature. A low explosive is usually a mixture of a combustible substance and an oxidant that decomposes rapidly (deflagration); however, it burns more slowly than a high explosive, which has an extremely fast burn rate. Low explosives are normally employed as propellants. Included in this group are petroleum products such as propane and gasoline, gunpowder (including smokeless powder), and light pyrotechnics, such as flares and fireworks; low explosives can also replace high explosives in certain applications, including gas pressure blasting. High High explosives (HE, or high-order explosives) are explosive materials that detonate, meaning that the explosive shock front passes through the material at a supersonic speed. High explosives detonate with explosive velocities of several kilometres per second. For instance, TNT has a detonation (burn) rate of approximately 6.9 km/s (22,600 feet per second), detonating cord of 6.7 km/s (22,000 feet per second), and C-4 about 8.0 km/s (26,000 feet per second). They are normally employed in mining, demolition, and military applications. The term high explosive is in contrast with the term low explosive, which explodes (deflagrates) at a lower rate. High explosives can be divided into two explosives classes differentiated by sensitivity: primary explosive and secondary explosive. Although tertiary explosives (such as ANFO at 3,200 m/s) can technically meet the explosive velocity definition, they are not considered high explosives in regulatory contexts. Countless high-explosive compounds are chemically possible, but commercially and militarily important ones have included NG, TNT, TNP, TNX, RDX, HMX, PETN, TATP, TATB, and HNS. By physical form Explosives are often characterized by the physical form in which they are produced or used. These use forms are commonly categorized as: Pressings Castings Plastic or polymer bonded Plastic explosives, a.k.a. putties Rubberized Extrudable Binary Blasting agents Slurries and gels Dynamites Shipping label classifications Shipping labels and tags may include both United Nations and national markings. United Nations markings include numbered Hazard Class and Division (HC/D) codes and alphabetic Compatibility Group codes. Though the two are related, they are separate and distinct. Any Compatibility Group designator can be assigned to any Hazard Class and Division. An example of this hybrid marking would be a consumer firework, which is labeled as 1.4G or 1.4S. Examples of national markings would include United States Department of Transportation (U.S. DOT) codes. United Nations (UN) GHS Hazard Class and Division The UN GHS Hazard Class and Division (HC/D) is a numeric designator within a hazard class indicating the character, predominance of associated hazards, and potential for causing personnel casualties and property damage. It is an internationally accepted system that communicates, using a minimal set of markings, the primary hazard associated with a substance. Listed below are the Divisions for Class 1 (Explosives): 1.1 Mass Detonation Hazard. With HC/D 1.1, it is expected that if one item in a container or pallet inadvertently detonates, the explosion will sympathetically detonate the surrounding items. The explosion could propagate to all or the majority of the items stored together, causing a mass detonation.
There will also be fragments from the item's casing and/or structures in the blast area. 1.2 Non-mass explosion, fragment-producing. HC/D 1.2 is further divided into three subdivisions, HC/D 1.2.1, 1.2.2 and 1.2.3, to account for the magnitude of the effects of an explosion. 1.3 Mass fire, minor blast or fragment hazard. Propellants and many pyrotechnic items fall into this category. If one item in a package or stack initiates, it will usually propagate to the other items, creating a mass fire. 1.4 Moderate fire, no blast or fragment. HC/D 1.4 items are listed in the table as explosives with no significant hazard. Most small arms ammunition (including loaded weapons) and some pyrotechnic items fall into this category. If the energetic material in these items inadvertently initiates, most of the energy and fragments will be contained within the storage structure or the item containers themselves. 1.5 Mass detonation hazard, very insensitive. 1.6 Detonation hazard without mass detonation hazard, extremely insensitive. To see an entire UNO Table, browse Paragraphs 3–8 and 3–9 of NAVSEA OP 5, Vol. 1, Chapter 3. Class 1 Compatibility Group Compatibility Group codes are used to indicate storage compatibility for HC/D Class 1 (explosive) materials. Letters are used to designate 13 compatibility groups as follows. A: Primary explosive substance (1.1A). B: An article containing a primary explosive substance and not containing two or more effective protective features. Some articles, such as detonator assemblies for blasting and cap-type primers, are included. (1.1B, 1.2B, 1.4B). C: Propellant explosive substance or other deflagrating explosive substance or article containing such explosive substance (1.1C, 1.2C, 1.3C, 1.4C). These are bulk propellants, propelling charges, and devices containing propellants with or without means of ignition. Examples include single-base, double-base, triple-base, and composite propellants, as well as solid propellant rocket motors and ammunition with inert projectiles. D: Secondary detonating explosive substance or black powder or article containing a secondary detonating explosive substance, in each case without means of initiation and without a propelling charge, or article containing a primary explosive substance and containing two or more effective protective features. (1.1D, 1.2D, 1.4D, 1.5D). E: Article containing a secondary detonating explosive substance without means of initiation, with a propelling charge (other than one containing flammable liquid, gel or hypergolic liquid) (1.1E, 1.2E, 1.4E). F: Article containing a secondary detonating explosive substance with its means of initiation, with a propelling charge (other than one containing flammable liquid, gel or hypergolic liquid) or without a propelling charge (1.1F, 1.2F, 1.3F, 1.4F). G: Pyrotechnic substance or article containing a pyrotechnic substance, or article containing both an explosive substance and an illuminating, incendiary, tear-producing or smoke-producing substance (other than a water-activated article or one containing white phosphorus, phosphide or flammable liquid or gel or hypergolic liquid) (1.1G, 1.2G, 1.3G, 1.4G). Examples include flares, signals, incendiary or illuminating ammunition, and other smoke- and tear-producing devices. H: Article containing both an explosive substance and white phosphorus (1.2H, 1.3H). These articles will spontaneously combust when exposed to the atmosphere.
J: Article containing both an explosive substance and flammable liquid or gel (1.1J, 1.2J, 1.3J). This excludes liquids or gels which are spontaneously flammable when exposed to water or the atmosphere, which belong in group H. Examples include liquid or gel filled incendiary ammunition, fuel-air explosive (FAE) devices, and flammable liquid fueled missiles. K: Article containing both an explosive substance and a toxic chemical agent (1.2K, 1.3K). L: Explosive substance or article containing an explosive substance and presenting a special risk (e.g., due to water-activation or presence of hypergolic liquids, phosphides, or pyrophoric substances) needing isolation of each type (1.1L, 1.2L, 1.3L). Damaged or suspect ammunition of any group belongs in this group. N: Articles containing only extremely insensitive detonating substances (1.6N). S: Substance or article so packed or designed that any hazardous effects arising from accidental functioning are limited to the extent that they do not significantly hinder or prohibit fire fighting or other emergency response efforts in the immediate vicinity of the package (1.4S). Regulation The legality of possessing or using explosives varies by jurisdiction. Various countries around the world have enacted explosives laws and require licenses to manufacture, distribute, store, use, or possess explosives or their ingredients. Netherlands In the Netherlands, the civil and commercial use of explosives is covered under the Wet explosieven voor civiel gebruik (explosives for civil use Act), in accordance with EU directive nr. 93/15/EEG (Dutch). The illegal use of explosives is covered under the Wet Wapens en Munitie (Weapons and Munition Act) (Dutch). United Kingdom The Explosives Regulations 2014 (ER 2014) came into force on 1 October 2014 and set out the statutory definition of "explosive" for civil use. United States During World War I, numerous laws were created to regulate war-related industries and increase security within the United States. In 1917, the 65th United States Congress created many laws, including the Espionage Act of 1917 and Explosives Act of 1917. The Explosives Act of 1917 (session 1, chapter 83) was signed on 6 October 1917 and went into effect on 16 November 1917. The legal summary is "An Act to prohibit the manufacture, distribution, storage, use, and possession in time of war of explosives, providing regulations for the safe manufacture, distribution, storage, use, and possession of the same, and for other purposes". This was the first federal regulation of licensing explosives purchases. The act was deactivated after World War I ended. After the United States entered World War II, the Explosives Act of 1917 was reactivated. In 1947, the act was deactivated by President Truman. The Organized Crime Control Act of 1970 transferred many explosives regulations to the Bureau of Alcohol, Tobacco and Firearms (ATF) of the Department of Treasury. The bill became effective in 1971. Currently, regulations are governed by Title 18 of the United States Code and Title 27 of the Code of Federal Regulations: "Importation, Manufacture, Distribution and Storage of Explosive Materials" (18 U.S.C. Chapter 40). "Commerce in Explosives" (27 C.F.R. Chapter II, Part 555).
List of explosives Compounds Acetylides Copper(I) acetylide, Dichloroacetylene, Silver acetylide Fulminates Fulminic Acid, Fulminating Gold, Mercury(II) fulminate, Platinum fulminate, Potassium fulminate, Silver fulminate Nitro MonoNitro: Nitroguanidine, Nitroethane, Nitromethane, Nitropropane, Nitrourea DiNitro: Diazo dinitro phenol, Dinitrobenzene, Dinitroethylene urea, DNN, Dinitrophenol, Dinitrophenolate, DNPH, Dinitroresorcinol, Dinitropentano nitrile, Polydinitropropyl acrylate, Dinitro cerine, Dipicryl sulfone, Dipicrylamine, EDNP, KDNBF, BEAF, DADNE TriNitro: RDX, Diaminotrinitrobenzene, Triaminotrinitrobenzene, Lead styphnate, Lead picrate, Trinitroaniline, Trinitroanisole, TNAS, TNB, TNBA, Styphnic acid, MC, Trinitroethyl formal, TNOC, TNOF, TNP, TNT, TNN, TNPG, TNR, BTNEN, BTNEC, Ammonium picrate, TNS TetraNitro: Tetryl, HMX HexaNitro: HNS, HNIW, HHTDD HeptaNitro: Heptanitrocubane OctaNitro: Octanitrocubane Nitrosos Tetranitrosos: R-salt Nitrates Mononitrates: Ammonium nitrate, Methyl ammonium nitrate, Urea Nitrate Dinitrates: Diethyleneglycol dinitrate, Ethylenediamine dinitrate, Ethylene dinitramine, Ethylene glycol dinitrate, Hexamethylenetetramine dinitrate, Triethylene glycol dinitrate Trinitrates: 1,2,4-Butanetriol trinitrate, Trimethylolethane trinitrate, Nitroglycerin Tetranitrates: Erythritol tetranitrate, Pentaerythritol tetranitrate, Tetranitratoxycarbon Pentanitrates: Xylitol pentanitrate Polynitrates: Nitrocellulose, Nitrostarch, Mannitol hexanitrate Amines Tertiary Amines: Nitrogen tribromide, Nitrogen trichloride, Nitrogen triiodide, Nitrogen trisulfide, Selenium nitride, Silver nitride Diamines: Disulfur dinitride Tetramines: Tetrazene, Tetrazole, Azidoazide azide Pentamines: Pentazenium Octamines: Octaazacubane, 1,1'-Azobis-1,2,3-triazole Azides Inorganic: Chlorine azide, Copper(II) azide, Fluorine azide, Hydrazoic acid, Lead(II) azide, Silver azide, Sodium azide, Rubidium azide, Selenium tetraazide, Silicon tetraazide, Tellurium tetraazide, Titanium tetraazide Organic: Cyanuric triazide, Cyanogen azide, Ethyl azide, Tetraazidomethane Peroxides Acetone peroxide (TATP), Cumene hydroperoxide, Diacetyl peroxide, Dibenzoyl peroxide, Diethyl ether peroxide, Hexamethylene triperoxide diamine, Methyl ethyl ketone peroxide, Tert-butyl hydroperoxide, Tetramethylene diperoxide dicarbamide Oxides Xenon oxytetrafluoride, Xenon dioxide, Xenon trioxide, Xenon tetroxide Unsorted Alkali metal Ozonides Ammonium chlorate Ammonium perchlorate Ammonium permanganate Azidotetrazolates Azoclathrates Benzvalene Chlorine oxides DMAPP Fluorine perchlorate Fulminating gold Fulminating silver (several substances) Hexafluoroantimonate Hexafluoroarsenate Hypofluorous acid Manganese heptoxide Mercury nitride Nitronium perchlorate Nitrotetrazolate-N-Oxides Peroxy acids Peroxymonosulfuric acid Tetramine copper complexes Tetrasulfur tetranitride Mixtures Aluminum Orphorite, Amatex, Amatol, Ammonal, Armstrong's mixture, ANFO, ANNMAL, Astrolite Baranol, Baratol, Ballistite, Butyl tetryl Carbonite, Composition A, Composition B, Composition C, Composition 1, Composition 2, Composition 3, Composition 4, Composition 5, Composition H6, Cordtex, Cyclotol Danubit, Detasheet, Detonating cord, Dualin, Dunnite, Dynamite Ecrasite, Ednatol Flash powder Gelignite, Gunpowder Hexanite, Hydromite 600 Kinetite Minol Octol, Oxyliquit Panclastite, Pentolite, Picratol, PNNM, Pyrotol Schneiderite, Semtex, Shellite Tannerit simply, Tannerite, Titadine, Tovex, Torpex, Tritonal Elements and isotopes Alkali metals 
Explosive antimony Plutonium-239 Uranium-235 See also Blast injury Detection dog Flame speed Improvised explosive device Insensitive munition Largest artificial non-nuclear explosions Nuclear weapon Orica; largest supplier of commercial explosives TM 31-210 Improvised Munitions Handbook Total body disruption References Further reading U.S. Government Explosives and Demolitions FM 5–250; U.S. Department of the Army; 274 pp.; 1992. Military Explosives TM 9–1300–214; U.S. Department of the Army; 355 pp.; 1984. Explosives and Blasting Procedures Manual; U.S. Department of Interior; 128 pp.; 1982. Safety and Performance Tests for Qualification of Explosives; Commander, Naval Ordnance Systems Command; NAVORD OD 44811. Washington, DC: GPO, 1972. Weapons Systems Fundamentals; Commander, Naval Ordnance Systems Command. NAVORD OP 3000, vol. 2, 1st rev. Washington, DC: GPO, 1971. Elements of Armament Engineering – Part One; Army Research Office. Washington, D.C.: U.S. Army Materiel Command, 1964. Hazardous Materials Transportation Plaecards; USDOT. Institute of Makers of Explosives Safety in the Handling and Use of Explosives SLP 17; Institute of Makers of Explosives; 66 pp.; 1932 / 1935 / 1940. History of the Explosives Industry in America; Institute of Makers of Explosives; 37 pp.; 1927. Clearing Land of Stumps; Institute of Makers of Explosives; 92 pp.; 1917. The Use of Explosives for Agricultural and Other Purposes; Institute of Makers of Explosives; 190 pp.; 1917. The Use of Explosives in making Ditches; Institute of Makers of Explosives; 80 pp.; 1917. Other historical Farmers' Hand Book of Explosives; duPont; 113 pp.; 1920. A Short Account of Explosives; Arthur Marshall; 119 pp.; 1917. Historical Papers on Modern Explosives; George MacDonald; 216 pp.; 1912. The Rise and Progress of the British Explosives Industry; International Congress of Pure and Applied Chemistry; 450 pp.; 1909. Explosives and their Power; M. Berthelot; 592 pp.; 1892. External links Listed in alphabetical order: Blaster Exchange – Explosives Industry Portal Class 1 Hazmat Placards Explosives Academy Explosives info Journal of Energetic Materials Military Explosives The Explosives and Weapons Forum Why high nitrogen density in explosives? YouTube video demonstrating blast wave in slow motion Chinese inventions
Explosive
[ "Chemistry" ]
10,373
[ "Explosives", "Explosions" ]
10,201
https://en.wikipedia.org/wiki/Exothermic%20process
In thermodynamics, an exothermic process () is a thermodynamic process or reaction that releases energy from the system to its surroundings, usually in the form of heat, but also in the form of light (e.g. a spark, flame, or flash), electricity (e.g. a battery), or sound (e.g. explosion heard when burning hydrogen). The term exothermic was first coined by 19th-century French chemist Marcellin Berthelot. The opposite of an exothermic process is an endothermic process, one that absorbs energy, usually in the form of heat. The concept is frequently applied in the physical sciences to chemical reactions where chemical bond energy is converted to thermal energy (heat). Two types of chemical reactions Exothermic and endothermic describe two types of chemical reactions or systems found in nature, as follows: Exothermic An exothermic reaction occurs when heat is released to the surroundings. According to the IUPAC, an exothermic reaction is "a reaction for which the overall standard enthalpy change ΔH⚬ is negative". Some examples of exothermic processes are fuel combustion, condensation and nuclear fission, which is used in nuclear power plants to release large amounts of energy. Endothermic In an endothermic reaction or system, energy is taken from the surroundings in the course of the reaction, usually driven by a favorable entropy increase in the system. An example of an endothermic reaction is a first aid cold pack, in which the reaction of two chemicals, or dissolving of one in another, requires calories from the surroundings, and the reaction cools the pouch and surroundings by absorbing heat from them. Photosynthesis, the process that allows plants to convert carbon dioxide and water to sugar and oxygen, is an endothermic process: plants absorb radiant energy from the sun and use it in an endothermic, otherwise non-spontaneous process. The chemical energy stored can be freed by the inverse (spontaneous) process: combustion of sugar, which gives carbon dioxide, water and heat (radiant energy). Energy release Exothermic refers to a transformation in which a closed system releases energy (heat) to the surroundings, expressed by q < 0. When the transformation occurs at constant pressure and without exchange of electrical energy, heat is equal to the enthalpy change, i.e. q = ΔH, while at constant volume, according to the first law of thermodynamics it equals the internal energy (U) change, i.e. q = ΔU. In an adiabatic system (i.e. a system that does not exchange heat with the surroundings), an otherwise exothermic process results in an increase in temperature of the system. In exothermic chemical reactions, the heat that is released by the reaction takes the form of electromagnetic energy or kinetic energy of molecules. The transition of electrons from one quantum energy level to another causes light to be released. This light is equivalent in energy to some of the stabilization energy of the chemical reaction, i.e. the bond energy. This light that is released can be absorbed by other molecules in solution to give rise to molecular translations and rotations, which gives rise to the classical understanding of heat. In an exothermic reaction, the activation energy (energy needed to start the reaction) is less than the energy that is subsequently released, so there is a net release of energy. 
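The sign convention can be made concrete with a small worked calculation. The sketch below is illustrative only: the standard enthalpies of formation are rounded textbook values chosen for the example (they are not data given in this article), and the helper simply applies ΔH(reaction) = Σ ΔHf(products) − Σ ΔHf(reactants) to decide whether a reaction releases heat.

# Illustrative sketch only: rounded textbook standard enthalpies of formation
# in kJ/mol; these numbers are assumptions for the example, not article data.
HFORM = {"CH4(g)": -74.8, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(l)": -285.8}

def reaction_enthalpy(reactants, products):
    # Delta H(reaction) = sum over products minus sum over reactants,
    # each enthalpy of formation weighted by its stoichiometric coefficient.
    side = lambda d: sum(n * HFORM[species] for species, n in d.items())
    return side(products) - side(reactants)

# Combustion of methane: CH4 + 2 O2 -> CO2 + 2 H2O(l)
dH = reaction_enthalpy({"CH4(g)": 1, "O2(g)": 2}, {"CO2(g)": 1, "H2O(l)": 2})
print(round(dH, 1), "kJ/mol,", "exothermic" if dH < 0 else "endothermic")
# -890.3 kJ/mol, exothermic (Delta H < 0, consistent with the IUPAC definition above)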
Examples Some examples of exothermic processes are: Combustion of fuels such as wood, coal and oil/petroleum The thermite reaction The reaction of alkali metals and other highly electropositive metals with water Condensation of rain from water vapor Mixing water and strong acids or strong bases The reaction of acids and bases Dehydration of carbohydrates by sulfuric acid The setting of cement and concrete Some polymerization reactions such as the setting of epoxy resin The reaction of most metals with halogens or oxygen Nuclear fusion in hydrogen bombs and in stellar cores (to iron) Nuclear fission of heavy elements The reaction between zinc and hydrochloric acid Respiration (breaking down of glucose to release energy in cells) Implications for chemical reactions Chemical exothermic reactions are generally more spontaneous than their counterparts, endothermic reactions. In a thermochemical reaction that is exothermic, the heat may be listed among the products of the reaction. See also Calorimetry Chemical thermodynamics Differential scanning calorimetry Endergonic Endergonic reaction Exergonic Exergonic reaction Endothermic reaction References External links Observe exothermic reactions in a simple experiment Thermodynamic processes Chemical thermodynamics
Exothermic process
[ "Physics", "Chemistry" ]
989
[ "Chemical thermodynamics", "Thermodynamic processes", "Thermodynamics" ]
10,225
https://en.wikipedia.org/wiki/Elliptic%20curve
In mathematics, an elliptic curve is a smooth, projective, algebraic curve of genus one, on which there is a specified point . An elliptic curve is defined over a field and describes points in , the Cartesian product of with itself. If the field's characteristic is different from 2 and 3, then the curve can be described as a plane algebraic curve which consists of solutions for: for some coefficients and in . The curve is required to be non-singular, which means that the curve has no cusps or self-intersections. (This is equivalent to the condition , that is, being square-free in .) It is always understood that the curve is really sitting in the projective plane, with the point being the unique point at infinity. Many sources define an elliptic curve to be simply a curve given by an equation of this form. (When the coefficient field has characteristic 2 or 3, the above equation is not quite general enough to include all non-singular cubic curves; see below.) An elliptic curve is an abelian variety – that is, it has a group law defined algebraically, with respect to which it is an abelian group – and serves as the identity element. If , where is any polynomial of degree three in with no repeated roots, the solution set is a nonsingular plane curve of genus one, an elliptic curve. If has degree four and is square-free this equation again describes a plane curve of genus one; however, it has no natural choice of identity element. More generally, any algebraic curve of genus one, for example the intersection of two quadric surfaces embedded in three-dimensional projective space, is called an elliptic curve, provided that it is equipped with a marked point to act as the identity. Using the theory of elliptic functions, it can be shown that elliptic curves defined over the complex numbers correspond to embeddings of the torus into the complex projective plane. The torus is also an abelian group, and this correspondence is also a group isomorphism. Elliptic curves are especially important in number theory, and constitute a major area of current research; for example, they were used in Andrew Wiles's proof of Fermat's Last Theorem. They also find applications in elliptic curve cryptography (ECC) and integer factorization. An elliptic curve is not an ellipse in the sense of a projective conic, which has genus zero: see elliptic integral for the origin of the term. However, there is a natural representation of real elliptic curves with shape invariant as ellipses in the hyperbolic plane . Specifically, the intersections of the Minkowski hyperboloid with quadric surfaces characterized by a certain constant-angle property produce the Steiner ellipses in (generated by orientation-preserving collineations). Further, the orthogonal trajectories of these ellipses comprise the elliptic curves with , and any ellipse in described as a locus relative to two foci is uniquely the elliptic curve sum of two Steiner ellipses, obtained by adding the pairs of intersections on each orthogonal trajectory. Here, the vertex of the hyperboloid serves as the identity on each trajectory curve. Topologically, a complex elliptic curve is a torus, while a complex ellipse is a sphere. Elliptic curves over the real numbers Although the formal definition of an elliptic curve requires some background in algebraic geometry, it is possible to describe some features of elliptic curves over the real numbers using only introductory algebra and geometry. 
In this context, an elliptic curve is a plane curve defined by an equation of the form after a linear change of variables ( and are real numbers). This type of equation is called a Weierstrass equation, and said to be in Weierstrass form, or Weierstrass normal form. The definition of elliptic curve also requires that the curve be non-singular. Geometrically, this means that the graph has no cusps, self-intersections, or isolated points. Algebraically, this holds if and only if the discriminant, , is not equal to zero. The discriminant is zero when . (Although the factor −16 is irrelevant to whether or not the curve is non-singular, this definition of the discriminant is useful in a more advanced study of elliptic curves.) The real graph of a non-singular curve has two components if its discriminant is positive, and one component if it is negative. For example, in the graphs shown in figure to the right, the discriminant in the first case is 64, and in the second case is −368. Following the convention at Conic_section#Discriminant, elliptic curves require that the discriminant is negative. The group law When working in the projective plane, the equation in homogeneous coordinates becomes : This equation is not defined on the line at infinity, but we can multiply by to get one that is : This resulting equation is defined on the whole projective plane, and the curve it defines projects onto the elliptic curve of interest. To find its intersection with the line at infinity, we can just posit . This implies , which in a field means . on the other hand can take any value, and thus all triplets satisfy the equation. In projective geometry this set is simply the point , which is thus the unique intersection of the curve with the line at infinity. Since the curve is smooth, hence continuous, it can be shown that this point at infinity is the identity element of a group structure whose operation is geometrically described as follows: Since the curve is symmetric about the -axis, given any point , we can take to be the point opposite it. We then have , as lies on the -plane, so that is also the symmetrical of about the origin, and thus represents the same projective point. If and are two points on the curve, then we can uniquely describe a third point in the following way. First, draw the line that intersects and . This will generally intersect the cubic at a third point, . We then take to be , the point opposite . This definition for addition works except in a few special cases related to the point at infinity and intersection multiplicity. The first is when one of the points is . Here, we define , making the identity of the group. If we only have one point, thus we cannot define the line between them. In this case, we use the tangent line to the curve at this point as our line. In most cases, the tangent will intersect a second point and we can take its opposite. If and are opposites of each other, we define . Lastly, If is an inflection point (a point where the concavity of the curve changes), we take to be itself and is simply the point opposite itself, i.e. itself. Let be a field over which the curve is defined (that is, the coefficients of the defining equation or equations of the curve are in ) and denote the curve by . Then the -rational points of are the points on whose coordinates all lie in , including the point at infinity. The set of -rational points is denoted by . 
is a group, because properties of polynomial equations show that if is in , then is also in , and if two of , , are in , then so is the third. Additionally, if is a subfield of , then is a subgroup of . Algebraic interpretation The above groups can be described algebraically as well as geometrically. Given the curve over the field (whose characteristic we assume to be neither 2 nor 3), and points and on the curve, assume first that (case 1). Let be the equation of the line that intersects and , which has the following slope: The line equation and the curve equation intersect at the points , , and , so the equations have identical values at these values. which is equivalent to Since , , and are solutions, this equation has its roots at exactly the same values as and because both equations are cubics they must be the same polynomial up to a scalar. Then equating the coefficients of in both equations and solving for the unknown . follows from the line equation and this is an element of , because is. If , then there are two options: if (case 3), including the case where (case 4), then the sum is defined as 0; thus, the inverse of each point on the curve is found by reflecting it across the -axis. If , then and (case 2 using as ). The slope is given by the tangent to the curve at (xP, yP). A more general expression for that works in both case 1 and case 2 is where equality to relies on and obeying . Non-Weierstrass curves For the curve (the general form of an elliptic curve with characteristic 3), the formulas are similar, with and . For a general cubic curve not in Weierstrass normal form, we can still define a group structure by designating one of its nine inflection points as the identity . In the projective plane, each line will intersect a cubic at three points when accounting for multiplicity. For a point , is defined as the unique third point on the line passing through and . Then, for any and , is defined as where is the unique third point on the line containing and . For an example of the group law over a non-Weierstrass curve, see Hessian curves. Elliptic curves over the rational numbers A curve E defined over the field of rational numbers is also defined over the field of real numbers. Therefore, the law of addition (of points with real coordinates) by the tangent and secant method can be applied to E. The explicit formulae show that the sum of two points P and Q with rational coordinates has again rational coordinates, since the line joining P and Q has rational coefficients. This way, one shows that the set of rational points of E forms a subgroup of the group of real points of E. Integral points This section is concerned with points P = (x, y) of E such that x is an integer. For example, the equation y2 = x3 + 17 has eight integral solutions with y > 0: (x, y) = (−2, 3), (−1, 4), (2, 5), (4, 9), (8, 23), (43, 282), (52, 375), (, ). As another example, Ljunggren's equation, a curve whose Weierstrass form is y2 = x3 − 2x, has only four solutions with y ≥ 0 : (x, y) = (0, 0), (−1, 1), (2, 2), (338, ). The structure of rational points Rational points can be constructed by the method of tangents and secants detailed above, starting with a finite number of rational points. More precisely the Mordell–Weil theorem states that the group E(Q) is a finitely generated (abelian) group. By the fundamental theorem of finitely generated abelian groups it is therefore a finite direct sum of copies of Z and finite cyclic groups. The proof of the theorem involves two parts. 
The first part shows that for any integer m > 1, the quotient group E(Q)/mE(Q) is finite (this is the weak Mordell–Weil theorem). Second, introducing a height function h on the rational points E(Q) defined by h(P0) = 0 and if P (unequal to the point at infinity P0) has as abscissa the rational number x = p/q (with coprime p and q). This height function h has the property that h(mP) grows roughly like the square of m. Moreover, only finitely many rational points with height smaller than any constant exist on E. The proof of the theorem is thus a variant of the method of infinite descent and relies on the repeated application of Euclidean divisions on E: let P ∈ E(Q) be a rational point on the curve, writing P as the sum 2P1 + Q1 where Q1 is a fixed representant of P in E(Q)/2E(Q), the height of P1 is about of the one of P (more generally, replacing 2 by any m > 1, and by ). Redoing the same with P1, that is to say P1 = 2P2 + Q2, then P2 = 2P3 + Q3, etc. finally expresses P as an integral linear combination of points Qi and of points whose height is bounded by a fixed constant chosen in advance: by the weak Mordell–Weil theorem and the second property of the height function P is thus expressed as an integral linear combination of a finite number of fixed points. The theorem however doesn't provide a method to determine any representatives of E(Q)/mE(Q). The rank of E(Q), that is the number of copies of Z in E(Q) or, equivalently, the number of independent points of infinite order, is called the rank of E. The Birch and Swinnerton-Dyer conjecture is concerned with determining the rank. One conjectures that it can be arbitrarily large, even if only examples with relatively small rank are known. The elliptic curve with the currently largest exactly-known rank is y2 + xy + y = x3 − x2 − x + It has rank 20, found by Noam Elkies and Zev Klagsbrun in 2020. Curves of rank higher than 20 have been known since 1994, with lower bounds on their ranks ranging from 21 to 29, but their exact ranks are not known and in particular it is not proven which of them have higher rank than the others or which is the true "current champion". As for the groups constituting the torsion subgroup of E(Q), the following is known: the torsion subgroup of E(Q) is one of the 15 following groups (a theorem due to Barry Mazur): Z/NZ for N = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or 12, or Z/2Z × Z/2NZ with N = 1, 2, 3, 4. Examples for every case are known. Moreover, elliptic curves whose Mordell–Weil groups over Q have the same torsion groups belong to a parametrized family. The Birch and Swinnerton-Dyer conjecture The Birch and Swinnerton-Dyer conjecture (BSD) is one of the Millennium problems of the Clay Mathematics Institute. The conjecture relies on analytic and arithmetic objects defined by the elliptic curve in question. At the analytic side, an important ingredient is a function of a complex variable, L, the Hasse–Weil zeta function of E over Q. This function is a variant of the Riemann zeta function and Dirichlet L-functions. It is defined as an Euler product, with one factor for every prime number p. For a curve E over Q given by a minimal equation with integral coefficients , reducing the coefficients modulo p defines an elliptic curve over the finite field Fp (except for a finite number of primes p, where the reduced curve has a singularity and thus fails to be elliptic, in which case E is said to be of bad reduction at p). 
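The reduction-modulo-p picture can be made concrete with a brute-force count. The sketch below is a rough illustration rather than the article's own computation: the curve y^2 = x^3 + 17, already used in the integral-points example above, is reduced modulo a few small primes of good reduction, its points over F_p are counted naively, and the standard quantity a_p = p + 1 − #E(F_p) is formed; the assertion checks the classical Hasse bound |a_p| ≤ 2√p.

# Rough sketch (not from the article): count points of y^2 = x^3 + 17 after
# reduction modulo small primes of good reduction, and form a_p = p + 1 - #E(F_p).
def count_points(a, b, p):
    # Naive O(p^2) count of affine solutions of y^2 = x^3 + a*x + b over F_p,
    # plus 1 for the point at infinity. Fine for tiny p; real code uses Schoof's algorithm.
    affine = sum(1 for x in range(p) for y in range(p)
                 if (y * y - (x ** 3 + a * x + b)) % p == 0)
    return affine + 1

for p in (5, 7, 11, 13):           # 2, 3 and 17 divide the discriminant, so they are skipped
    n = count_points(0, 17, p)
    a_p = p + 1 - n
    assert a_p * a_p <= 4 * p      # classical Hasse bound |a_p| <= 2*sqrt(p)
    print(p, n, a_p)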
The zeta function of an elliptic curve over a finite field Fp is, in some sense, a generating function assembling the information of the number of points of E with values in the finite field extensions Fpn of Fp. It is given by The interior sum of the exponential resembles the development of the logarithm and, in fact, the so-defined zeta function is a rational function in T: where the 'trace of Frobenius' term is defined to be the difference between the 'expected' number and the number of points on the elliptic curve over , viz. or equivalently, . We may define the same quantities and functions over an arbitrary finite field of characteristic , with replacing everywhere. The L-function of E over Q is then defined by collecting this information together, for all primes p. It is defined by where N is the conductor of E, i.e. the product of primes with bad reduction ), in which case ap is defined differently from the method above: see Silverman (1986) below. For example has bad reduction at 17, because has . This product converges for Re(s) > 3/2 only. Hasse's conjecture affirms that the L-function admits an analytic continuation to the whole complex plane and satisfies a functional equation relating, for any s, L(E, s) to L(E, 2 − s). In 1999 this was shown to be a consequence of the proof of the Shimura–Taniyama–Weil conjecture, which asserts that every elliptic curve over Q is a modular curve, which implies that its L-function is the L-function of a modular form whose analytic continuation is known. One can therefore speak about the values of L(E, s) at any complex number s. At s=1 (the conductor product can be discarded as it is finite), the L-function becomes The Birch and Swinnerton-Dyer conjecture relates the arithmetic of the curve to the behaviour of this L-function at s = 1. It affirms that the vanishing order of the L-function at s = 1 equals the rank of E and predicts the leading term of the Laurent series of L(E, s) at that point in terms of several quantities attached to the elliptic curve. Much like the Riemann hypothesis, the truth of the BSD conjecture would have multiple consequences, including the following two: A congruent number is defined as an odd square-free integer n which is the area of a right triangle with rational side lengths. It is known that n is a congruent number if and only if the elliptic curve has a rational point of infinite order; assuming BSD, this is equivalent to its L-function having a zero at s = 1. Tunnell has shown a related result: assuming BSD, n is a congruent number if and only if the number of triplets of integers (x, y, z) satisfying is twice the number of triples satisfying . The interest in this statement is that the condition is easy to check. In a different direction, certain analytic methods allow for an estimation of the order of zero in the center of the critical strip for certain L-functions. Admitting BSD, these estimations correspond to information about the rank of families of the corresponding elliptic curves. For example: assuming the generalized Riemann hypothesis and BSD, the average rank of curves given by is smaller than 2. Elliptic curves over finite fields Let K = Fq be the finite field with q elements and E an elliptic curve defined over K. 
While the precise number of rational points of an elliptic curve E over K is in general difficult to compute, Hasse's theorem on elliptic curves gives the following inequality: In other words, the number of points on the curve grows proportionally to the number of elements in the field. This fact can be understood and proven with the help of some general theory; see local zeta function and étale cohomology for example. The set of points E(Fq) is a finite abelian group. It is always cyclic or the product of two cyclic groups. For example, the curve defined by over F71 has 72 points (71 affine points including (0,0) and one point at infinity) over this field, whose group structure is given by Z/2Z × Z/36Z. The number of points on a specific curve can be computed with Schoof's algorithm. Studying the curve over the field extensions of Fq is facilitated by the introduction of the local zeta function of E over Fq, defined by a generating series (also see above) where the field Kn is the (unique up to isomorphism) extension of K = Fq of degree n (that is, ). The zeta function is a rational function in T. To see this, consider the integer such that There is a complex number such that where is the complex conjugate, and so we have We choose so that its absolute value is , that is , and that . Note that . can then be used in the local zeta function as its values when raised to the various powers of can be said to reasonably approximate the behaviour of , in that Using the Taylor series for the natural logarithm, Then , so finally For example, the zeta function of E : y2 + y = x3 over the field F2 is given by which follows from: as , then , so . The functional equation is As we are only interested in the behaviour of , we can use a reduced zeta function and so which leads directly to the local L-functions The Sato–Tate conjecture is a statement about how the error term in Hasse's theorem varies with the different primes q, if an elliptic curve E over Q is reduced modulo q. It was proven (for almost all such curves) in 2006 due to the results of Taylor, Harris and Shepherd-Barron, and says that the error terms are equidistributed. Elliptic curves over finite fields are notably applied in cryptography and for the factorization of large integers. These algorithms often make use of the group structure on the points of E. Algorithms that are applicable to general groups, for example the group of invertible elements in finite fields, F*q, can thus be applied to the group of points on an elliptic curve. For example, the discrete logarithm is such an algorithm. The interest in this is that choosing an elliptic curve allows for more flexibility than choosing q (and thus the group of units in Fq). Also, the group structure of elliptic curves is generally more complicated. Elliptic curves over a general field Elliptic curves can be defined over any field K; the formal definition of an elliptic curve is a non-singular projective algebraic curve over K with genus 1 and endowed with a distinguished point defined over K. If the characteristic of K is neither 2 nor 3, then every elliptic curve over K can be written in the form after a linear change of variables. Here p and q are elements of K such that the right hand side polynomial x3 − px − q does not have any double roots. 
If the characteristic is 2 or 3, then more terms need to be kept: in characteristic 3, the most general equation is of the form for arbitrary constants b2, b4, b6 such that the polynomial on the right-hand side has distinct roots (the notation is chosen for historical reasons). In characteristic 2, even this much is not possible, and the most general equation is provided that the variety it defines is non-singular. If characteristic were not an obstruction, each equation would reduce to the previous ones by a suitable linear change of variables. One typically takes the curve to be the set of all points (x,y) which satisfy the above equation and such that both x and y are elements of the algebraic closure of K. Points of the curve whose coordinates both belong to K are called K-rational points. Many of the preceding results remain valid when the field of definition of E is a number field K, that is to say, a finite field extension of Q. In particular, the group E(K) of K-rational points of an elliptic curve E defined over K is finitely generated, which generalizes the Mordell–Weil theorem above. A theorem due to Loïc Merel shows that for a given integer d, there are (up to isomorphism) only finitely many groups that can occur as the torsion groups of E(K) for an elliptic curve defined over a number field K of degree d. More precisely, there is a number B(d) such that for any elliptic curve E defined over a number field K of degree d, any torsion point of E(K) is of order less than B(d). The theorem is effective: for d > 1, if a torsion point is of order p, with p prime, then As for the integral points, Siegel's theorem generalizes to the following: Let E be an elliptic curve defined over a number field K, x and y the Weierstrass coordinates. Then there are only finitely many points of E(K) whose x-coordinate is in the ring of integers OK. The properties of the Hasse–Weil zeta function and the Birch and Swinnerton-Dyer conjecture can also be extended to this more general situation. Elliptic curves over the complex numbers The formulation of elliptic curves as the embedding of a torus in the complex projective plane follows naturally from a curious property of Weierstrass's elliptic functions. These functions and their first derivative are related by the formula Here, and are constants; is the Weierstrass elliptic function and its derivative. It should be clear that this relation is in the form of an elliptic curve (over the complex numbers). The Weierstrass functions are doubly periodic; that is, they are periodic with respect to a lattice ; in essence, the Weierstrass functions are naturally defined on a torus . This torus may be embedded in the complex projective plane by means of the map This map is a group isomorphism of the torus (considered with its natural group structure) with the chord-and-tangent group law on the cubic curve which is the image of this map. It is also an isomorphism of Riemann surfaces from the torus to the cubic curve, so topologically, an elliptic curve is a torus. If the lattice is related by multiplication by a non-zero complex number to a lattice , then the corresponding curves are isomorphic. Isomorphism classes of elliptic curves are specified by the -invariant. The isomorphism classes can be understood in a simpler way as well. The constants and , called the modular invariants, are uniquely determined by the lattice, that is, by the structure of the torus. 
However, all real polynomials factorize completely into linear factors over the complex numbers, since the field of complex numbers is the algebraic closure of the reals. So, the elliptic curve may be written as One finds that and with -invariant and is sometimes called the modular lambda function. For example, let , then which implies , , and therefore of the formula above are all algebraic numbers if involves an imaginary quadratic field. In fact, it yields the integer . In contrast, the modular discriminant is generally a transcendental number. In particular, the value of the Dedekind eta function is Note that the uniformization theorem implies that every compact Riemann surface of genus one can be represented as a torus. This also allows an easy understanding of the torsion points on an elliptic curve: if the lattice is spanned by the fundamental periods and , then the -torsion points are the (equivalence classes of) points of the form for integers and in the range . If is an elliptic curve over the complex numbers and then a pair of fundamental periods of can be calculated very rapidly by is the arithmetic–geometric mean of and . At each step of the arithmetic–geometric mean iteration, the signs of arising from the ambiguity of geometric mean iterations are chosen such that where and denote the individual arithmetic mean and geometric mean iterations of and , respectively. When , there is an additional condition that . Over the complex numbers, every elliptic curve has nine inflection points. Every line through two of these points also passes through a third inflection point; the nine points and 12 lines formed in this way form a realization of the Hesse configuration. The Dual Isogeny Given an isogeny of elliptic curves of degree , the dual isogeny is an isogeny of the same degree such that Here denotes the multiplication-by- isogeny which has degree Construction of the Dual Isogeny Often only the existence of a dual isogeny is needed, but it can be explicitly given as the composition where is the group of divisors of degree 0. To do this, we need maps given by where is the neutral point of and given by To see that , note that the original isogeny can be written as a composite and that since is finite of degree , is multiplication by on Alternatively, we can use the smaller Picard group , a quotient of The map descends to an isomorphism, The dual isogeny is Note that the relation also implies the conjugate relation Indeed, let Then But is surjective, so we must have Algorithms that use elliptic curves Elliptic curves over finite fields are used in some cryptographic applications as well as for integer factorization. Typically, the general idea in these applications is that a known algorithm which makes use of certain finite groups is rewritten to use the groups of rational points of elliptic curves. 
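As a small illustration of how such group-based algorithms carry over, the sketch below implements the chord-and-tangent addition law modulo a prime and uses the double-and-add pattern (the additive analogue of square-and-multiply) to compute scalar multiples of a point. The curve y^2 = x^3 + 2x + 2 over F_17 and the base point (5, 1) are arbitrary textbook-style choices made for the example, not parameters taken from this article; real applications use standardized curves over much larger fields.

# Rough sketch of scalar multiplication on y^2 = x^3 + a*x + b over F_p.
# The specific curve, prime and base point are illustrative assumptions.
P_MOD, A, B = 17, 2, 2
INF = None                                   # the point at infinity (group identity)

def add(P, Q):
    if P is INF: return Q
    if Q is INF: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                           # P + (-P) is the identity
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD)          # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, P):
    # Double-and-add: the additive counterpart of exponentiation by squaring.
    R = INF
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (5, 1)                                   # on the curve: 125 + 10 + 2 = 137 = 1 (mod 17) = 1^2
for k in range(1, 5):
    Q = scalar_mult(k, G)
    if Q is not INF:
        x, y = Q
        assert (y * y - (x ** 3 + A * x + B)) % P_MOD == 0   # result stays on the curve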
For more see also: Elliptic curve cryptography Elliptic-curve Diffie–Hellman key exchange (ECDH) Supersingular isogeny key exchange Elliptic curve digital signature algorithm (ECDSA) EdDSA digital signature algorithm Dual EC DRBG random number generator Lenstra elliptic-curve factorization Elliptic curve primality proving Alternative representations of elliptic curves Hessian curve Edwards curve Twisted curve Twisted Hessian curve Twisted Edwards curve Doubling-oriented Doche–Icart–Kohel curve Tripling-oriented Doche–Icart–Kohel curve Jacobian curve Montgomery curve See also Arithmetic dynamics Elliptic algebra Elliptic surface Comparison of computer algebra systems Isogeny j-line Level structure (algebraic geometry) Modularity theorem Moduli stack of elliptic curves Nagell–Lutz theorem Riemann–Hurwitz formula Wiles's proof of Fermat's Last Theorem Notes References Serge Lang, in the introduction to the book cited below, stated that "It is possible to write endlessly on elliptic curves. (This is not a threat.)" The following short list is thus at best a guide to the vast expository literature available on the theoretical, algorithmic, and cryptographic aspects of elliptic curves. , winner of the MAA writing prize the George Pólya Award Chapter XXV External links LMFDB: Database of Elliptic Curves over Q The Arithmetic of elliptic curves from PlanetMath Interactive elliptic curve over R and over Zp – web application that requires HTML5 capable browser. Analytic number theory Group theory
Elliptic curve
[ "Mathematics" ]
6,278
[ "Group theory", "Analytic number theory", "Fields of abstract algebra", "Number theory" ]
10,236
https://en.wikipedia.org/wiki/ELIZA%20effect
In computer science, the ELIZA effect is a tendency to project human traits — such as experience, semantic comprehension or empathy — onto rudimentary computer programs having a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations. History The effect is named for ELIZA, the 1966 chatbot developed by MIT computer scientist Joseph Weizenbaum. When executing Weizenbaum's DOCTOR script, ELIZA simulated a Rogerian psychotherapist, largely by rephrasing the patient's replies as questions: Human: Well, my boyfriend made me come here. ELIZA: Your boyfriend made you come here? Human: He says I'm depressed much of the time. ELIZA: I am sorry to hear you are depressed. Human: It's true. I'm unhappy. ELIZA: Do you think coming here will help you not to be unhappy? Though designed strictly as a mechanism to support "natural language conversation" with a computer, ELIZA's DOCTOR script was found to be surprisingly successful in eliciting emotional responses from users who, in the course of interacting with the program, began to ascribe understanding and motivation to the program's output. As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Indeed, ELIZA's code had not been designed to evoke this reaction in the first place. Upon observation, researchers discovered users unconsciously assuming ELIZA's questions implied interest and emotional involvement in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion. Although the effect was first named in the 1960s, the tendency to understand mechanical operations in psychological terms was noted by Charles Babbage. In proposing what would later be called a carry-lookahead adder, Babbage remarked that he found such terms convenient for descriptive purposes, even though nothing more than mechanical action was meant. Characteristics In its specific form, the ELIZA effect refers only to "the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers". A trivial example of the specific form of the Eliza effect, given by Douglas Hofstadter, involves an automated teller machine which displays the words "THANK YOU" at the end of a transaction. A naive observer might think that the machine is actually expressing gratitude; however, the machine is only printing a preprogrammed string of symbols. More generally, the ELIZA effect describes any situation where, based solely on a system's output, users perceive computer systems as having "intrinsic qualities and abilities which the software controlling the (output) cannot possibly achieve" or "assume that [outputs] reflect a greater causality than they actually do". In both its specific and general forms, the ELIZA effect is notable for occurring even when users of the system are aware of the determinate nature of output produced by the system. From a psychological standpoint, the ELIZA effect is the result of a subtle cognitive dissonance between the user's awareness of programming limitations and their behavior towards the output of the program. 
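How little machinery is needed to trigger the effect can be conveyed in a few lines. The sketch below is not Weizenbaum's DOCTOR script; it is a deliberately crude, hypothetical imitation that does nothing beyond swapping a handful of pronouns and echoing the user's words back as a question, which is roughly the behaviour the transcript above relies on.

# A deliberately crude, hypothetical ELIZA-style rephraser: it only swaps pronouns
# and echoes the input as a question, with no model of meaning whatsoever.
SWAPS = {"i": "you", "my": "your", "me": "you", "am": "are", "i'm": "you're"}

def respond(utterance):
    words = utterance.lower().rstrip(".!?").split()
    reflected = " ".join(SWAPS.get(w, w) for w in words)
    return reflected.capitalize() + "?"

print(respond("My boyfriend made me come here."))
# -> Your boyfriend made you come here?
print(respond("I'm unhappy."))
# -> You're unhappy?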
Significance The discovery of the ELIZA effect was an important development in artificial intelligence, demonstrating the principle of using social engineering rather than explicit programming to pass a Turing test. ELIZA convinced some users into thinking that a machine was human. This shift in human-machine interaction marked progress in technologies emulating human behavior. Two groups of chatbots are distinguished by William Meisel as "general personal assistants" and "specialized digital assistants". General digital assistants have been integrated into personal devices, with skills like sending messages, taking notes, checking calendars, and setting appointments. Specialized digital assistants "operate in very specific domains or help with very specific tasks". Weizenbaum considered that not every part of the human thought could be reduced to logical formalisms and that "there are some acts of thought that ought to be attempted only by humans". In the 1990s, Clifford Nass and Byron Reeves conducted a series of experiments establishing The Media Equation, demonstrating that people tend to respond to media as they would either to another person (by being polite, cooperative, attributing personality characteristics such as aggressiveness, humor, expertise, and gender) – or to places and phenomena in the physical world. Numerous subsequent studies that have evolved from the research in psychology, social science and other fields indicate that this type of reaction is automatic, unavoidable, and happens more often than people realize. Reeves and Nass argue that, "Individuals' interactions with computers, television, and new media are fundamentally social and natural, just like interactions in real life". When chatbots are anthropomorphized, they tend to portray gendered features as a way through which we establish relationships with the technology. "Gender stereotypes are instrumentalised to manage our relationship with chatbots" when human behavior is programmed into machines. Feminized labor, or women's work, automated by anthropomorphic digital assistants reinforces an "assumption that women possess a natural affinity for service work and emotional labour". In defining our proximity to digital assistants through their human attributes, chatbots become gendered entities. Incidents As artificial intelligence has advanced, a number of internationally notable incidents underscore the extent to which the ELIZA effect is realized. In June 2022, Google engineer Blake Lemoine claimed that the large language model LaMDA had become sentient, hiring an attorney on its behalf after the chatbot requested he do so. Lemoine's claims were widely pushed back by experts and the scientific community. After a month of paid administrative leave, he was dismissed for violation of corporate policies on intellectual property. Lemoine contends he "did the right thing by informing the public" because "AI engines are incredibly good at manipulating people". In February 2023, Luka made abrupt changes to its Replika chatbot following a demand from the Italian Data Protection Authority, which cited "real risks to children". However, users worldwide protested when the bots stopped responding to their sexual advances. Moderators in the Replika subreddit even posted support resources, including links to suicide hotlines. Ultimately, the company reinstituted erotic roleplay for some users. In March 2023, a Belgian man died by suicide after chatting for six weeks on the app Chai. 
The chatbot model was originally based on GPT-J and had been fine-tuned to be "more emotional, fun and engaging". The bot, ironically having the name Eliza as a default, encouraged the father of two to kill himself, according to his widow and his psychotherapist. In an open letter, Belgian scholars responded to the incident fearing "the risk of emotional manipulation" by human-imitating AI. See also Duck test Intentional stance Loebner Prize Philosophical zombie Semiotics Uncanny valley References Further reading Hofstadter, Douglas. Preface 4: The Ineradicable Eliza Effect and Its Dangers. (from Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, Basic Books: New York, 1995) Turkle, S., Eliza Effect: tendency to accept computer responses as more intelligent than they really are (from Life on the screen- Identity in the Age of the Internet, Phoenix Paperback: London, 1997) ELIZA effect, from the Jargon File, version 4.4.7. Accessed 8 October 2006. Byron Reeves & Clifford Nass. The Media Equation: How People Treat Computers, Television, and New Media like Real People and Places, Cambridge University Press: 1996. Human–computer interaction
ELIZA effect
[ "Engineering" ]
1,593
[ "Human–computer interaction", "Human–machine interaction" ]
10,237
https://en.wikipedia.org/wiki/Exponentiation%20by%20squaring
In mathematics and computer programming, exponentiating by squaring is a general method for fast computation of large positive integer powers of a number, or more generally of an element of a semigroup, like a polynomial or a square matrix. Some variants are commonly referred to as square-and-multiply algorithms or binary exponentiation. These can be of quite general use, for example in modular arithmetic or powering of matrices. For semigroups for which additive notation is commonly used, like elliptic curves used in cryptography, this method is also referred to as double-and-add. Basic method Recursive version The method is based on the observation that, for any integer , one has: If the exponent is zero then the answer is 1. If the exponent is negative then we can reuse the previous formula by rewriting the value using a positive exponent. That is, Together, these may be implemented directly as the following recursive algorithm: In: a real number x; an integer n Out: xn exp_by_squaring(x, n) if n < 0 then return exp_by_squaring(1 / x, -n); else if n = 0 then return 1; else if n is even then return exp_by_squaring(x * x, n / 2); else if n is odd then return x * exp_by_squaring(x * x, (n - 1) / 2); end function In each recursive call, the least significant digit of the binary representation of is removed. It follows that the number of recursive calls is the number of bits of the binary representation of . So this algorithm computes this number of squarings and a smaller number of multiplications, which is equal to the number of ones in the binary representation of . This logarithmic number of operations is to be compared with the trivial algorithm which requires multiplications. This algorithm is not tail-recursive. This implies that it requires an amount of auxiliary memory that is roughly proportional to the number of recursive calls, or perhaps higher if the amount of data per iteration is increasing. The algorithms of the next section use a different approach, and the resulting algorithms need the same number of operations, but use an amount of auxiliary memory that is roughly the same as the memory required to store the result. With constant auxiliary memory The variants described in this section are based on the formula If one applies this formula recursively, starting with , one eventually gets an exponent equal to , and the desired result is then the left factor. This may be implemented as a tail-recursive function: Function exp_by_squaring(x, n) return exp_by_squaring2(1, x, n) Function exp_by_squaring2(y, x, n) if n < 0 then return exp_by_squaring2(y, 1 / x, -n); else if n = 0 then return y; else if n is even then return exp_by_squaring2(y, x * x, n / 2); else if n is odd then return exp_by_squaring2(x * y, x * x, (n - 1) / 2). The iterative version of the algorithm also uses a bounded amount of auxiliary space, and is given by Function exp_by_squaring_iterative(x, n) if n < 0 then x := 1 / x; n := -n; if n = 0 then return 1 y := 1; while n > 1 do if n is odd then y := x * y; n := n - 1; x := x * x; n := n / 2; return x * y The correctness of the algorithm results from the fact that is invariant during the computation; it is at the beginning, and it is at the end. These algorithms use exactly the same number of operations as the algorithm of the preceding section, but the multiplications are done in a different order. Computational complexity A brief analysis shows that such an algorithm uses squarings and at most multiplications, where denotes the floor function. 
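The pseudocode above translates almost line for line into a runnable form. The following sketch is an illustrative Python rendering of the recursive and the iterative variants, restricted to non-negative integer exponents for simplicity; the asserts check it against exact integer powers, and the test values are arbitrary.

# Illustrative rendering of the two variants above, for non-negative integer n.
def exp_by_squaring(x, n):
    # Recursive version: n halves at every call, so the depth is about log2(n).
    if n == 0:
        return 1
    if n % 2 == 0:
        return exp_by_squaring(x * x, n // 2)
    return x * exp_by_squaring(x * x, (n - 1) // 2)

def exp_by_squaring_iterative(x, n):
    # Iterative version with constant auxiliary memory: y * x**n is invariant.
    y = 1
    while n > 1:
        if n % 2 == 1:
            y = x * y
            n -= 1
        x = x * x
        n //= 2
    return y if n == 0 else x * y

for n in range(20):
    assert exp_by_squaring(3, n) == 3 ** n == exp_by_squaring_iterative(3, n)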
More precisely, the number of multiplications is one less than the number of ones present in the binary expansion of n. For n greater than about 4 this is computationally more efficient than naively multiplying the base with itself repeatedly. Each squaring results in approximately double the number of digits of the previous, and so, if multiplication of two d-digit numbers is implemented in O(dk) operations for some fixed k, then the complexity of computing xn is given by 2k-ary method This algorithm calculates the value of xn after expanding the exponent in base 2k. It was first proposed by Brauer in 1939. In the algorithm below we make use of the following function f(0) = (k, 0) and f(m) = (s, u), where m = u·2s with u odd. Algorithm: Input An element x of G, a parameter k > 0, a non-negative integer and the precomputed values . Output The element xn in G y := 1; i := l - 1 while i ≥ 0 do (s, u) := f(ni) for j := 1 to k - s do y := y2 y := y * xu for j := 1 to s do y := y2 i := i - 1 return y For optimal efficiency, k should be the smallest integer satisfying Sliding-window method This method is an efficient variant of the 2k-ary method. For example, to calculate the exponent 398, which has binary expansion (110 001 110)2, we take a window of length 3 using the 2k-ary method algorithm and calculate 1, x3, x6, x12, x24, x48, x49, x98, x99, x198, x199, x398. But, we can also compute 1, x3, x6, x12, x24, x48, x96, x192, x199, x398, which saves one multiplication and amounts to evaluating (110 001 110)2 Here is the general algorithm: Algorithm: Input An element x of G, a non negative integer , a parameter k > 0 and the pre-computed values . Output The element xn ∈ G. Algorithm: y := 1; i := l - 1 while i > -1 do if ni = 0 then y := y2' i := i - 1 else s := max{i - k + 1, 0} while ns = 0 do s := s + 1 for h := 1 to i - s + 1 do y := y2 u := (ni, ni-1, ..., ns)2 y := y * xu i := s - 1 return y Montgomery's ladder technique Many algorithms for exponentiation do not provide defence against side-channel attacks. Namely, an attacker observing the sequence of squarings and multiplications can (partially) recover the exponent involved in the computation. This is a problem if the exponent should remain secret, as with many public-key cryptosystems. A technique called "Montgomery's ladder" addresses this concern. Given the binary expansion of a positive, non-zero integer n = (nk−1...n0)2 with nk−1 = 1, we can compute xn as follows: x1 = x; x2 = x2 for i = k - 2 to 0 do if ni = 0 then x2 = x1 * x2; x1 = x12 else x1 = x1 * x2; x2 = x22 return x1 The algorithm performs a fixed sequence of operations (up to log n): a multiplication and squaring takes place for each bit in the exponent, regardless of the bit's specific value. A similar algorithm for multiplication by doubling exists. This specific implementation of Montgomery's ladder is not yet protected against cache timing attacks: memory access latencies might still be observable to an attacker, as different variables are accessed depending on the value of bits of the secret exponent. Modern cryptographic implementations use a "scatter" technique to make sure the processor always misses the faster cache. Fixed-base exponent There are several methods which can be employed to calculate xn when the base is fixed and the exponent varies. As one can see, precomputations play a key role in these algorithms. 
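Before turning to those fixed-base methods, the ladder described above can be made concrete. The sketch below is an illustrative rendering applied to modular arithmetic (the modulus, base and exponents are arbitrary test values, not figures from the article): every exponent bit costs exactly one multiplication and one squaring, whatever the bit's value, and the result is checked against Python's built-in pow.

# Illustrative sketch of Montgomery's ladder for modular exponentiation.
# Each bit of the exponent triggers one multiplication and one squaring,
# regardless of whether the bit is 0 or 1.
def montgomery_ladder(x, n, m):
    assert n > 0
    r0, r1 = x % m, (x * x) % m            # corresponds to x1, x2 above
    for bit in bin(n)[3:]:                 # bits below the leading 1, most significant first
        if bit == "0":
            r1 = (r0 * r1) % m
            r0 = (r0 * r0) % m
        else:
            r0 = (r0 * r1) % m
            r1 = (r1 * r1) % m
    return r0

for n in (1, 2, 3, 31, 105, 722):          # arbitrary exponents for the check
    assert montgomery_ladder(3, n, 1000) == pow(3, n, 1000)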
Yao's method Yao's method is orthogonal to the -ary method where the exponent is expanded in radix and the computation is as performed in the algorithm above. Let , , , and be integers. Let the exponent be written as where for all . Let . Then the algorithm uses the equality Given the element of , and the exponent written in the above form, along with the precomputed values , the element is calculated using the algorithm below: y = 1, u = 1, j = h - 1 while j > 0 do for i = 0 to w - 1 do if ni = j then u = u × xbi y = y × u j = j - 1 return y If we set and , then the values are simply the digits of in base . Yao's method collects in u first those that appear to the highest power ; in the next round those with power are collected in as well etc. The variable y is multiplied times with the initial , times with the next highest powers, and so on. The algorithm uses multiplications, and elements must be stored to compute . Euclidean method The Euclidean method was first introduced in Efficient exponentiation using precomputation and vector addition chains by P.D Rooij. This method for computing in group , where is a natural integer, whose algorithm is given below, is using the following equality recursively: where . In other words, a Euclidean division of the exponent by is used to return a quotient and a rest . Given the base element in group , and the exponent written as in Yao's method, the element is calculated using precomputed values and then the algorithm below. Begin loop Break loop End loop; The algorithm first finds the largest value among the and then the supremum within the set of . Then it raises to the power , multiplies this value with , and then assigns the result of this computation and the value modulo . Further applications The approach also works with semigroups that are not of characteristic zero, for example allowing fast computation of large exponents modulo a number. Especially in cryptography, it is useful to compute powers in a ring of integers modulo . For example, the evaluation of would take a very long time and much storage space if the naïve method of computing and then taking the remainder when divided by 2345 were used. Even using a more effective method will take a long time: square 13789, take the remainder when divided by 2345, multiply the result by 13789, and so on. Applying above exp-by-squaring algorithm, with "*" interpreted as (that is, a multiplication followed by a division with remainder) leads to only 27 multiplications and divisions of integers, which may all be stored in a single machine word. Generally, any of these approaches will take fewer than modular multiplications. The approach can also be used to compute integer powers in a group, using either of the rules , . The approach also works in non-commutative semigroups and is often used to compute powers of matrices. More generally, the approach works with positive integer exponents in every magma for which the binary operation is power associative. Signed-digit recoding In certain computations it may be more efficient to allow negative coefficients and hence use the inverse of the base, provided inversion in is "fast" or has been precomputed. For example, when computing , the binary method requires multiplications and squarings. However, one could perform squarings to get and then multiply by to obtain . To this end we define the signed-digit representation of an integer in radix as Signed binary representation corresponds to the particular choice and . It is denoted by . 
There are several methods for computing this representation. The representation is not unique. For example, take : two distinct signed-binary representations are given by and , where is used to denote . Since the binary method computes a multiplication for every non-zero entry in the base-2 representation of , we are interested in finding the signed-binary representation with the smallest number of non-zero entries, that is, the one with minimal Hamming weight. One method of doing this is to compute the representation in non-adjacent form, or NAF for short, which is one that satisfies and denoted by . For example, the NAF representation of 478 is . This representation always has minimal Hamming weight. A simple algorithm to compute the NAF representation of a given integer with is the following: for to do Another algorithm by Koyama and Tsuruoka does not require the condition that ; it still minimizes the Hamming weight. Alternatives and generalizations Exponentiation by squaring can be viewed as a suboptimal addition-chain exponentiation algorithm: it computes the exponent by an addition chain consisting of repeated exponent doublings (squarings) and/or incrementing exponents by one (multiplying by x) only. More generally, if one allows any previously computed exponents to be summed (by multiplying those powers of x), one can sometimes perform the exponentiation using fewer multiplications (but typically using more memory). The smallest power where this occurs is for n = 15:  (squaring, 6 multiplies),  (optimal addition chain, 5 multiplies if x3 is re-used). In general, finding the optimal addition chain for a given exponent is a hard problem, for which no efficient algorithms are known, so optimal chains are typically used for small exponents only (e.g. in compilers where the chains for small powers have been pre-tabulated). However, there are a number of heuristic algorithms that, while not being optimal, have fewer multiplications than exponentiation by squaring at the cost of additional bookkeeping work and memory usage. Regardless, the number of multiplications never grows more slowly than Θ(log n), so these algorithms improve asymptotically upon exponentiation by squaring by only a constant factor at best. See also Modular exponentiation Vectorial addition chain Montgomery modular multiplication Non-adjacent form Addition chain Notes References Exponentials Computer arithmetic algorithms Computer arithmetic
Exponentiation by squaring
[ "Mathematics" ]
3,125
[ "E (mathematical constant)", "Arithmetic", "Computer arithmetic", "Exponentials" ]
10,251
https://en.wikipedia.org/wiki/EDSAC
The Electronic Delay Storage Automatic Calculator (EDSAC) was an early British computer. Inspired by John von Neumann's seminal First Draft of a Report on the EDVAC, the machine was constructed by Maurice Wilkes and his team at the University of Cambridge Mathematical Laboratory in England. EDSAC was the second electronic digital stored-program computer, after the Manchester Mark 1, to go into regular service. Later the project was supported by J. Lyons & Co. Ltd., intending to develop a commercially applied computer and resulting in Lyons' development of the LEO I, based on the EDSAC design. Work on EDSAC started during 1947, and it ran its first programs on 6 May 1949, when it calculated a table of square numbers and a list of prime numbers. EDSAC was finally shut down on 11 July 1958, having been superseded by EDSAC 2, which remained in use until 1965. Technical overview Physical components As soon as EDSAC was operational, it began serving the university's research needs. It used mercury delay lines for memory and derated vacuum tubes for logic. Power consumption was 11 kW of electricity. Cycle time was 1.5 ms for all ordinary instructions, 6 ms for multiplication. Input was via five-hole punched tape, and output was via a teleprinter. Initially, registers were limited to an accumulator and a multiplier register. In 1953, David Wheeler, returning from a stay at the University of Illinois, designed an index register as an extension to the original EDSAC hardware. A magnetic-tape drive was added in 1952 but never worked sufficiently well to be of real use. Until 1952, the available main memory (instructions and data) was only 512 18-bit words, and there was no backing store. The delay lines (or "tanks") were arranged in two batteries providing 512 words each. The second battery came into operation in 1952. The full 1024-word delay-line store was not available until 1955 or early 1956, limiting programs to about 800 words until then. John Lindley (diploma student 1958–1959) mentioned "the incredible difficulty we had ever to produce a single correct piece of paper tape with the crude and unreliable home-made punching, printing and verifying gear available in the late 50s". Memory and instructions The EDSAC's main memory consisted of 1024 locations, though only 512 locations were initially installed. Each contained 18 bits, but the topmost bit was always unavailable due to timing problems, so only 17 bits were used. An instruction consisted of a five-bit instruction code, one spare bit, a 10-bit operand (usually a memory address), and one length bit to control whether the instruction used a 17-bit or a 35-bit operand (two consecutive words, little-endian). All instruction codes were by design represented by one mnemonic letter, so that the Add instruction, for example, used the EDSAC character code for the letter A. Internally, the EDSAC used two's complement binary numbers. Numbers were either 17 bits (one word) or 35 bits (two words) long. Unusually, the multiplier was designed to treat numbers as fixed-point fractions in the range −1 ≤ x < 1, i.e. the binary point was immediately to the right of the sign. The accumulator could hold 71 bits, including the sign, allowing two long (35-bit) numbers to be multiplied without losing any precision. 
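The word layout just described can be pictured with a small sketch. The following Python snippet is illustrative only: the field widths (5-bit instruction code, spare bit, 10-bit address, length bit) come from the description above, but the MSB-first packing order and the helper names encode/decode are assumptions made for this illustration, not taken from EDSAC documentation.

# Illustrative sketch of the 17-bit instruction layout described above:
# 5-bit instruction code | 1 spare bit | 10-bit address | 1 length bit.
# The MSB-first ordering of the fields is an assumption for illustration.
def decode(word17):
    opcode  = (word17 >> 12) & 0b11111        # bits 16..12
    spare   = (word17 >> 11) & 0b1            # bit 11
    address = (word17 >> 1)  & 0b1111111111   # bits 10..1
    long_op = word17 & 0b1                    # length bit: 0 = 17-bit, 1 = 35-bit operand
    return opcode, spare, address, long_op

def encode(opcode, address, long_op=0, spare=0):
    return (opcode << 12) | (spare << 11) | (address << 1) | long_op

word = encode(opcode=0b11100, address=123, long_op=1)   # hypothetical field values
print(decode(word))                                     # (28, 0, 123, 1)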
The instructions available were:
Add
Subtract
Multiply-and-add
AND-and-add (called "Collate")
Shift left
Arithmetic shift right
Load multiplier register
Store (and optionally clear) accumulator
Conditional goto
Read input tape
Print character
Round accumulator
No-op
Stop
There was no division instruction (but various division subroutines were supplied) and no way to directly load a number into the accumulator (a "Store and zero accumulator" instruction followed by an "Add" instruction was necessary for this). There was no unconditional jump instruction, nor was there a procedure call instruction – it had not yet been invented. Maurice Wilkes discussed relative addressing modes for the EDSAC in a paper published in 1953. He made the proposals to facilitate the use of subroutines. System software The initial orders were hard-wired on a set of uniselector switches and loaded into the low words of memory at startup. By May 1949, the initial orders provided a primitive relocating assembler taking advantage of the mnemonic design described above, all in 31 words. This was the world's first assembler, and arguably the start of the global software industry. There is a simulation of EDSAC available, and a full description of the initial orders and first programs. The first calculation done by EDSAC was a program run on 6 May 1949 to compute square numbers. The program was written by Beatrice Worsley, who had travelled from Canada to study the machine. The machine was used by other members of the university to solve real problems, and many early techniques were developed that are now included in operating systems. Users prepared their programs by punching them (in assembler) onto a paper tape. They soon became good at being able to hold the paper tape up to the light and read back the codes. When a program was ready, it was hung on a length of line strung up near the paper-tape reader. The machine operators, who were present during the day, selected the next tape from the line and loaded it into EDSAC. This is, of course, what is known today as a job queue. If it printed something, then the tape and the printout were returned to the user, otherwise they were informed at which memory location it had stopped. Debuggers were some time away, but a cathode-ray tube screen could be set to display the contents of a particular piece of memory. This was used to see whether a number was converging, for example. A loudspeaker was connected to the accumulator's sign bit; experienced users knew healthy and unhealthy sounds of programs, particularly programs "hung" in a loop. After office hours certain "authorised users" were allowed to run the machine for themselves, which went on late into the night until a valve blew – which, according to one such user, usually happened. This is alluded to by Fred Hoyle in his novel The Black Cloud. Programming technique The early programmers had to make use of techniques frowned upon today—in particular, the use of self-modifying code. As there was no index register until much later, the only way of accessing an array was to alter which memory location a particular instruction was referencing. David Wheeler, who earned the world's first Computer Science PhD working on the project, is credited with inventing the concept of a subroutine. Users wrote programs that called a routine by jumping to the start of the subroutine with the return address (i.e. the location-plus-one of the jump itself) in the accumulator (a Wheeler Jump).
By convention the subroutine expected this, and the first thing it did was to modify its concluding jump instruction to that return address. Multiple and nested subroutines could be called so long as the user knew the length of each one in order to calculate the location to jump to; recursive calls were forbidden. The user then copied the code for the subroutine from a master tape onto their own tape following the end of their own program. (However, Alan Turing discussed subroutines in a paper of 1945 on design proposals for the NPL ACE, going so far as to invent the concept of a return-address stack, which would have allowed recursion.) The lack of an index register also posed a problem to the writer of a subroutine in that they could not know in advance where in memory the subroutine would be loaded, and therefore they could not know how to address any regions of the code that were used for storage of data (so-called "pseudo-orders"). This was solved by use of an initial input routine, which was responsible for loading subroutines from punched tape into memory. On loading a subroutine, it would note the start location and increment internal memory references as required. Thus, as Wilkes wrote, "the code used to represent orders outside the machine differs from that used inside, the differences being dictated by the different requirements of the programmer on the one hand, and of the control circuits of the machine on the other". EDSAC's programmers used special techniques to make best use of the limited available memory. For example, at the point of loading a subroutine from punched tape into memory, it might happen that a particular constant would have to be calculated, a constant that would not subsequently need recalculation. In this situation, the constant would be calculated in an "interlude". The code required to calculate the constant would be supplied along with the full subroutine. After the initial input routine had loaded the calculation-code, it would transfer control to this code. Once the constant had been calculated and written into memory, control would return to the initial input routine, which would continue to write the remainder of the subroutine into memory, but first adjusting its starting point so as to overwrite the code that had calculated the constant. This allowed quite complicated adjustments to be made to a general-purpose subroutine without making its final footprint in memory any larger than if it had been tailored to a specific circumstance. Application software The subroutine concept led to the availability of a substantial subroutine library. By 1951, 87 subroutines in the following categories were available for general use: floating-point arithmetic; arithmetic operations on complex numbers; checking; division; exponentiation; routines relating to functions; differential equations; special functions; power series; logarithms; miscellaneous; print and layout; quadrature; read (input); nth root; trigonometric functions; counting operations (simulating repeat until loops, while loops and for loops); vectors; and matrices. The first assembly language appeared for the EDSAC and went on to inspire several other assembly languages. Applications of EDSAC EDSAC was designed specifically to form part of the Mathematical Laboratory's support service for calculation. The first scientific paper to be published using a computer for calculations was by Ronald Fisher. Wilkes and Wheeler had used EDSAC to solve a differential equation relating to gene frequencies for him.
In 1951, Miller and Wheeler used the machine to discover a 79-digit prime – the largest known at the time. The winners of three Nobel Prizes – John Kendrew and Max Perutz (Chemistry, 1962), Andrew Huxley (Medicine, 1963) and Martin Ryle (Physics, 1974) – benefitted from EDSAC's revolutionary computing power. In their prize acceptance speeches, each acknowledged the role that EDSAC had played in their research. In the early 1960s Peter Swinnerton-Dyer used the EDSAC computer to calculate the number of points modulo p (denoted by Np) for a large number of primes p on elliptic curves whose rank was known. Based on these numerical results, Birch and Swinnerton-Dyer conjectured that Np for a curve E with rank r obeys an asymptotic law, the Birch and Swinnerton-Dyer conjecture, considered one of the top unsolved problems in mathematics as of 2024. Games In 1952, Sandy Douglas developed OXO, a version of noughts and crosses (tic-tac-toe) for the EDSAC, with graphical output to a VCR97 6" cathode-ray tube. This may well have been the world's first video game. Another video game was created by Stanley Gill and involved a dot (termed a sheep) approaching a line in which one of two gates could be opened. The Stanley Gill game was controlled via the lightbeam of the EDSAC's paper-tape reader. Interrupting it (such as by the player placing their hand in it) would open the upper gate. Leaving the beam unbroken would result in the lower gate opening. Further developments EDSAC's successor, EDSAC 2, was commissioned in 1958. In 1961, an EDSAC 2 version of Autocode, an ALGOL-like high-level programming language for scientists and engineers, was developed by David Hartley. In the mid-1960s, a successor to the EDSAC 2 was planned, but the move was instead made to the Titan, a prototype Atlas 2 developed from the Atlas Computer of the University of Manchester, Ferranti, and Plessey. EDSAC Replica Project On 13 January 2011, the Computer Conservation Society announced that it planned to build a working replica of EDSAC, at the National Museum of Computing (TNMoC) in Bletchley Park supervised by Andrew Herbert, who studied under Maurice Wilkes. The first parts of the replica were switched on in November 2014. The EDSAC logical circuits were meticulously reconstructed through the development of a simulator and the reexamination of some rediscovered original schematics. This documentation has been released under a Creative Commons license. The ongoing project is open to visitors of the museum. In 2016, two original EDSAC operators, Margaret Marrs and Joyce Wheeler, visited the museum to assist the project. As of November 2016, commissioning of the fully completed and operational state of the replica was estimated to be the autumn of 2017. However, unforeseen project delays have resulted in an unknown date for a completed and fully operational machine. See also EDVAC on which much of the design of EDSAC was based History of computing hardware List of vacuum-tube computers References Further reading The Preparation of Programs for an Electronic Digital Computer by Professor Sir Maurice Wilkes, David Wheeler and Stanley Gill, Addison–Wesley, Edition 1, 1951 archive.org. 50th Anniversary of EDSAC – Dedicated website at the University of Cambridge Computer Laboratory. reprinted in The EDSAC Rebuild Project – Documentation, and the reconstructed EDSAC schematics External links An EDSAC simulator – Developed by Martin Campbell-Kelly, Department of Computer Science, University of Warwick, England.
Oral history interview with David Wheeler, 14 May 1987. Charles Babbage Institute, University of Minnesota. Wheeler was a research student at the University Mathematical Laboratory at Cambridge in 1948–1951 and a pioneer programmer on the EDSAC project. Wheeler discusses projects that were run on EDSAC, user-oriented programming methods, and the influence of EDSAC on the ILLIAC, the ORDVAC, and the IBM 701. Wheeler also notes visits by Douglas Hartree, Nelson Blackman (of ONR), Peter Naur, Aad van Wijngaarden, Arthur van der Poel, Friedrich Bauer, and Louis Couffignal. Nicholas Enticknap and Maurice Wilkes, Cambridge's Golden Jubilee – in: RESURRECTION, the Bulletin of the Computer Conservation Society, Number 22, Summer 1999. The EDSAC Paperwork Collection at The ICL Computer Museum. Introduction to programming for EDSAC 2, 1957. 1940s computers 1949 establishments in England 1949 in computing Computer-related introductions in 1949 Early British computers One-of-a-kind computers Vacuum tube computers University of Cambridge Computer Laboratory History of the University of Cambridge Serial computers
EDSAC
[ "Technology" ]
3,137
[ "Serial computers", "Computers" ]
10,264
https://en.wikipedia.org/wiki/Enrico%20Fermi
Enrico Fermi (; 29 September 1901 – 28 November 1954) was an Italian and naturalized American physicist, renowned for being the creator of the world's first artificial nuclear reactor, the Chicago Pile-1, and a member of the Manhattan Project. He has been called the "architect of the nuclear age" and the "architect of the atomic bomb". He was one of very few physicists to excel in both theoretical physics and experimental physics. Fermi was awarded the 1938 Nobel Prize in Physics for his work on induced radioactivity by neutron bombardment and for the discovery of transuranium elements. With his colleagues, Fermi filed several patents related to the use of nuclear power, all of which were taken over by the US government. He made significant contributions to the development of statistical mechanics, quantum theory, and nuclear and particle physics. Fermi's first major contribution involved the field of statistical mechanics. After Wolfgang Pauli formulated his exclusion principle in 1925, Fermi followed with a paper in which he applied the principle to an ideal gas, employing a statistical formulation now known as Fermi–Dirac statistics. Today, particles that obey the exclusion principle are called "fermions". Pauli later postulated the existence of an uncharged invisible particle emitted along with an electron during beta decay, to satisfy the law of conservation of energy. Fermi took up this idea, developing a model that incorporated the postulated particle, which he named the "neutrino". His theory, later referred to as Fermi's interaction and now called weak interaction, described one of the four fundamental interactions in nature. Through experiments inducing radioactivity with the recently discovered neutron, Fermi discovered that slow neutrons were more easily captured by atomic nuclei than fast ones, and he developed the Fermi age equation to describe this. After bombarding thorium and uranium with slow neutrons, he concluded that he had created new elements. Although he was awarded the Nobel Prize for this discovery, the new elements were later revealed to be nuclear fission products. Fermi left Italy in 1938 to escape new Italian racial laws that affected his Jewish wife, Laura Capon. He emigrated to the United States, where he worked on the Manhattan Project during World War II. Fermi led the team at the University of Chicago that designed and built Chicago Pile-1, which went critical on 2 December 1942, demonstrating the first human-created, self-sustaining nuclear chain reaction. He was on hand when the X-10 Graphite Reactor at Oak Ridge, Tennessee went critical in 1943, and when the B Reactor at the Hanford Site did so the next year. At Los Alamos, he headed F Division, part of which worked on Edward Teller's thermonuclear "Super" bomb. He was present at the Trinity test on 16 July 1945, the first test of a full nuclear bomb explosion, where he used his Fermi method to estimate the bomb's yield. After the war, he helped establish the Institute for Nuclear Studies in Chicago, and served on the General Advisory Committee, chaired by J. Robert Oppenheimer, which advised the Atomic Energy Commission on nuclear matters. After the detonation of the first Soviet fission bomb in August 1949, he strongly opposed the development of a hydrogen bomb on both moral and technical grounds. He was among the scientists who testified on Oppenheimer's behalf at the 1954 hearing that resulted in the denial of Oppenheimer's security clearance. 
Fermi did important work in particle physics, especially related to pions and muons, and he speculated that cosmic rays arose when material was accelerated by magnetic fields in interstellar space. Many awards, concepts, and institutions are named after Fermi, including the Fermi 1 (breeder reactor), the Enrico Fermi Nuclear Generating Station, the Enrico Fermi Award, the Enrico Fermi Institute, the Fermi National Accelerator Laboratory (Fermilab), the Fermi Gamma-ray Space Telescope, the Fermi paradox, and the synthetic element fermium, making him one of 16 scientists who have elements named after them. Early life Enrico Fermi was born in Rome, Italy, on 29 September 1901. He was the third child of Alberto Fermi, a division head in the Ministry of Railways, and Ida de Gattis, an elementary school teacher. His sister, Maria, was two years older, his brother Giulio a year older. After the two boys were sent to a rural community to be wet nursed, Enrico rejoined his family in Rome when he was two and a half. Although he was baptized a Catholic in accordance with his grandparents' wishes, his family was not particularly religious; Enrico was an agnostic throughout his adult life. As a young boy, he shared the same interests as his brother Giulio, building electric motors and playing with electrical and mechanical toys. Giulio died during an operation on a throat abscess in 1915 and Maria died in an airplane crash near Milan in 1959. At a local market in Campo de' Fiori, Fermi found a physics book, the 900-page Elementorum physicae mathematicae. Written in Latin by a Jesuit priest who was a professor at the Collegio Romano, it presented mathematics, classical mechanics, astronomy, optics, and acoustics as they were understood at the time of its 1840 publication. With a scientifically inclined friend, Enrico Persico, Fermi pursued projects such as building gyroscopes and measuring the acceleration of Earth's gravity. In 1914, Fermi, who often met his father in front of the office after work, met a colleague of his father called Adolfo Amidei, who would walk part of the way home with Alberto. Enrico had learned that Adolfo was interested in mathematics and physics and took the opportunity to ask Adolfo a question about geometry. Adolfo understood that the young Fermi was referring to projective geometry and then proceeded to give him a book on the subject written by Theodor Reye. Two months later, Fermi returned the book, having solved all the problems proposed at the end of the book, some of which Adolfo considered difficult. Upon verifying this, Adolfo felt that Fermi was "a prodigy, at least with respect to geometry", and further mentored the boy, providing him with more books on physics and mathematics. Adolfo noted that Fermi had a very good memory and thus could return the books after having read them because he could remember their content very well. Scuola Normale Superiore in Pisa Fermi graduated from high school in July 1918, having skipped the third year entirely. At Amidei's urging, Fermi learned German to be able to read the many scientific papers that were published in that language at the time, and he applied to the Scuola Normale Superiore in Pisa. Amidei felt that the Scuola would provide better conditions for Fermi's development than the Sapienza University of Rome could at the time. Having lost one son, Fermi's parents only reluctantly allowed him to live in the school's lodgings away from Rome for four years.
Fermi took first place in the difficult entrance exam, which included an essay on the theme of "Specific characteristics of Sounds"; the 17-year-old Fermi chose to use Fourier analysis to derive and solve the partial differential equation for a vibrating rod, and after interviewing Fermi the examiner declared he would become an outstanding physicist. At the Scuola Normale Superiore, Fermi played pranks with fellow student Franco Rasetti; the two became close friends and collaborators. Fermi was advised by Luigi Puccianti, director of the physics laboratory, who said there was little he could teach Fermi and often asked Fermi to teach him something instead. Fermi's knowledge of quantum physics was such that Puccianti asked him to organize seminars on the topic. During this time Fermi learned tensor calculus, a technique key to general relativity. Fermi initially chose mathematics as his major but soon switched to physics. He remained largely self-taught, studying general relativity, quantum mechanics, and atomic physics. In September 1920, Fermi was admitted to the physics department. Since there were only three students in the department—Fermi, Rasetti, and Nello Carrara—Puccianti let them freely use the laboratory for whatever purposes they chose. Fermi decided that they should research X-ray crystallography, and the three worked to produce a Laue photograph—an X-ray photograph of a crystal. During 1921, his third year at the university, Fermi published his first scientific works in the Italian journal Nuovo Cimento. The first was entitled "On the dynamics of a rigid system of electrical charges in translational motion". A sign of things to come was that the mass was expressed as a tensor—a mathematical construct commonly used to describe something moving and changing in three-dimensional space. In classical mechanics, mass is a scalar quantity, but in relativity, it changes with velocity. The second paper was "On the electrostatics of a uniform gravitational field of electromagnetic charges and on the weight of electromagnetic charges". Using general relativity, Fermi showed that a charge has a weight equal to U/c², where U is the electrostatic energy of the system, and c is the speed of light. The first paper seemed to point out a contradiction between the electrodynamic theory and the relativistic one concerning the calculation of the electromagnetic masses, as the former predicted a value of (4/3)U/c². Fermi addressed this the next year in a paper "Concerning a contradiction between electrodynamic and the relativistic theory of electromagnetic mass" in which he showed that the apparent contradiction was a consequence of relativity. This paper was sufficiently well-regarded that it was translated into German and published in the German scientific journal Physikalische Zeitschrift in 1922. That year, Fermi submitted his article "On the phenomena occurring near a world line" to an Italian journal. In this article, he examined the Principle of Equivalence, and introduced the so-called "Fermi coordinates". He proved that on a world line close to the timeline, space behaves as if it were a Euclidean space. Fermi submitted his thesis, "A theorem on probability and some of its applications", to the Scuola Normale Superiore in July 1922, and received his laurea at the unusually young age of 20. The thesis was on X-ray diffraction images. Theoretical physics was not yet considered a discipline in Italy, and the only kind of thesis that would have been accepted was one in experimental physics.
For this reason, Italian physicists were slow to embrace new ideas like relativity coming from Germany. Since Fermi was quite at home in the lab doing experimental work, this did not pose insurmountable problems for him. While writing the appendix for the Italian edition of the book Fundamentals of Einstein Relativity by August Kopff in 1923, Fermi was the first to point out that hidden inside the Einstein equation (E = mc²) was an enormous amount of nuclear potential energy to be exploited. "It does not seem possible, at least in the near future", he wrote, "to find a way to release these dreadful amounts of energy—which is all to the good because the first effect of an explosion of such a dreadful amount of energy would be to smash into smithereens the physicist who had the misfortune to find a way to do it." In 1924, Fermi was initiated into the Masonic Lodge "Adriano Lemmi" of the Grand Orient of Italy. In 1923–1924, Fermi spent a semester studying under Max Born at the University of Göttingen, where he met Werner Heisenberg and Pascual Jordan. Fermi then studied in Leiden with Paul Ehrenfest from September to December 1924 on a fellowship from the Rockefeller Foundation obtained through the intercession of the mathematician Vito Volterra. Here Fermi met Hendrik Lorentz and Albert Einstein, and became friends with Samuel Goudsmit and Jan Tinbergen. From January 1925 to late 1926, Fermi taught mathematical physics and theoretical mechanics at the University of Florence, where he teamed up with Rasetti to conduct a series of experiments on the effects of magnetic fields on mercury vapour. He also participated in seminars at the Sapienza University of Rome, giving lectures on quantum mechanics and solid state physics. While giving lectures on the new quantum mechanics based on the remarkable accuracy of predictions of the Schrödinger equation, Fermi would often say, "It has no business to fit so well!" After Wolfgang Pauli announced his exclusion principle in 1925, Fermi responded with a paper "On the quantization of the perfect monoatomic gas", in which he applied the exclusion principle to an ideal gas. The paper was especially notable for Fermi's statistical formulation, which describes the distribution of particles in systems of many identical particles that obey the exclusion principle. This was independently developed soon after by the British physicist Paul Dirac, who also showed how it was related to the Bose–Einstein statistics. Accordingly, it is now known as Fermi–Dirac statistics. After Dirac, particles that obey the exclusion principle are today called "fermions", while those that do not are called "bosons". Professor in Rome Professorships in Italy were granted by competition (concorso) for a vacant chair, the applicants being rated on their publications by a committee of professors. Fermi applied for a chair of mathematical physics at the University of Cagliari on Sardinia but was narrowly passed over in favour of Giovanni Giorgi. In 1926, at the age of 24, he applied for a professorship at the Sapienza University of Rome. This was a new chair, one of the first three in theoretical physics in Italy, that had been created by the Minister of Education at the urging of professor Orso Mario Corbino, who was the university's professor of experimental physics, the director of the Institute of Physics, and a member of Benito Mussolini's cabinet. Corbino, who also chaired the selection committee, hoped that the new chair would raise the standard and reputation of physics in Italy.
The committee chose Fermi ahead of Enrico Persico and Aldo Pontremoli, and Corbino helped Fermi recruit his team, which was soon joined by notable students such as Edoardo Amaldi, Bruno Pontecorvo, Ettore Majorana and Emilio Segrè, and by Franco Rasetti, whom Fermi had appointed as his assistant. They were soon nicknamed the "Via Panisperna boys" after the street where the Institute of Physics was located. Fermi married Laura Capon, a science student at the university, on 19 July 1928. They had two children: Nella, born in January 1931, and Giulio, born in February 1936. On 18 March 1929, Fermi was appointed a member of the Royal Academy of Italy by Mussolini, and on 27 April he joined the Fascist Party. He later opposed Fascism when the 1938 racial laws were promulgated by Mussolini in order to bring Italian Fascism ideologically closer to German Nazism. These laws threatened Laura, who was Jewish, and put many of Fermi's research assistants out of work. During their time in Rome, Fermi and his group made important contributions to many practical and theoretical aspects of physics. In 1928, he published his Introduction to Atomic Physics, which provided Italian university students with an up-to-date and accessible text. Fermi also conducted public lectures and wrote popular articles for scientists and teachers in order to spread knowledge of the new physics as widely as possible. Part of his teaching method was to gather his colleagues and graduate students together at the end of the day and go over a problem, often from his own research. A sign of success was that foreign students now began to come to Italy. The most notable of these was the German physicist Hans Bethe, who came to Rome as a Rockefeller Foundation fellow, and collaborated with Fermi on a 1932 paper "On the Interaction between Two Electrons". At this time, physicists were puzzled by beta decay, in which an electron was emitted from the atomic nucleus. To satisfy the law of conservation of energy, Pauli postulated the existence of an invisible particle with no charge and little or no mass that was also emitted at the same time. Fermi took up this idea, which he developed in a tentative paper in 1933, and then a longer paper the next year that incorporated the postulated particle, which Fermi called a "neutrino". His theory, later referred to as Fermi's interaction, and still later as the theory of the weak interaction, described one of the four fundamental forces of nature. The neutrino was detected after his death, and his interaction theory showed why it was so difficult to detect. When he submitted his paper to the British journal Nature, that journal's editor turned it down because it contained speculations which were "too remote from physical reality to be of interest to readers". According to Fermi's biographer David N. Schwartz, it is curious that Fermi submitted the paper to that journal at all, since at the time Nature published only short notes on work of this kind and was not a suitable venue for a new physical theory; the Proceedings of the Royal Society of London would have been a more natural choice. Schwartz also endorses the hypothesis of some scholars that the rejection by the British journal persuaded Fermi's young colleagues (some of them Jewish or left-leaning) to abandon their boycott of German scientific journals, adopted after Hitler came to power in January 1933. Thus Fermi saw the theory published in Italian and German before it was published in English.
In the introduction to the 1968 English translation of the paper, physicist Fred L. Wilson commented on its significance. In January 1934, Irène Joliot-Curie and Frédéric Joliot announced that they had bombarded elements with alpha particles and induced radioactivity in them. By March, Fermi's assistant Gian-Carlo Wick had provided a theoretical explanation using Fermi's theory of beta decay. Fermi decided to switch to experimental physics, using the neutron, which James Chadwick had discovered in 1932. In March 1934, Fermi wanted to see if he could induce radioactivity with Rasetti's polonium-beryllium neutron source. Neutrons had no electric charge, and so would not be deflected by the positively charged nucleus. This meant that they needed much less energy to penetrate the nucleus than charged particles, and so would not require a particle accelerator, which the Via Panisperna boys did not have. Fermi had the idea of replacing the polonium-beryllium neutron source with a radon-beryllium one, which he created by filling a glass bulb with beryllium powder, evacuating the air, and then adding 50 mCi of radon gas, supplied by Giulio Cesare Trabacchi. This created a much stronger neutron source, the effectiveness of which declined with the 3.8-day half-life of radon. He knew that this source would also emit gamma rays, but, on the basis of his theory, he believed that this would not affect the results of the experiment. He started by bombarding platinum, an element with a high atomic number that was readily available, without success. He turned to aluminium, which emitted an alpha particle and produced sodium, which then decayed into magnesium by beta particle emission. He tried lead, without success, and then fluorine in the form of calcium fluoride, which emitted an alpha particle and produced nitrogen, decaying into oxygen by beta particle emission. In all, he induced radioactivity in 22 different elements. Fermi rapidly reported the discovery of neutron-induced radioactivity in the Italian journal La Ricerca Scientifica on 25 March 1934. The natural radioactivity of thorium and uranium made it hard to determine what was happening when these elements were bombarded with neutrons but, after correctly eliminating the presence of elements lighter than uranium but heavier than lead, Fermi concluded that they had created new elements, which he called ausenium and hesperium. The chemist Ida Noddack suggested that some of the experiments could have produced lighter elements than lead rather than new, heavier elements. Her suggestion was not taken seriously at the time because her team had not carried out any experiments with uranium or built the theoretical basis for this possibility. At that time, fission was thought to be improbable if not impossible on theoretical grounds. While physicists expected elements with higher atomic numbers to form from neutron bombardment of lighter elements, nobody expected neutrons to have enough energy to split a heavier atom into two light element fragments in the manner that Noddack suggested. The Via Panisperna boys also noticed some unexplained effects. The experiment seemed to work better on a wooden table than on a marble tabletop. Fermi remembered that Joliot-Curie and Chadwick had noted that paraffin wax was effective at slowing neutrons, so he decided to try that. When neutrons were passed through paraffin wax, they induced a hundred times as much radioactivity in silver compared with when it was bombarded without the paraffin. Fermi guessed that this was due to the hydrogen atoms in the paraffin.
Those in wood similarly explained the difference between the wooden and the marble tabletops. This was confirmed by repeating the effect with water. He concluded that collisions with hydrogen atoms slowed the neutrons. The lower the atomic number of the nucleus it collides with, the more energy a neutron loses per collision, and therefore the fewer collisions that are required to slow a neutron down by a given amount. Fermi realised that this induced more radioactivity because slow neutrons were more easily captured than fast ones. He developed a diffusion equation to describe this, which became known as the Fermi age equation. In 1938, Fermi received the Nobel Prize in Physics at the age of 37 for his "demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons". After Fermi received the prize in Stockholm, he did not return home to Italy but rather continued to New York City with his family in December 1938, where they applied for permanent residency. The decision to move to America and become US citizens was due primarily to the racial laws in Italy. Manhattan Project Fermi arrived in New York City on 2 January 1939. He was immediately offered positions at five universities, and accepted one at Columbia University, where he had already given summer lectures in 1936. He received the news that in December 1938, the German chemists Otto Hahn and Fritz Strassmann had detected the element barium after bombarding uranium with neutrons, which Lise Meitner and her nephew Otto Frisch correctly interpreted as the result of nuclear fission. Frisch confirmed this experimentally on 13 January 1939. The news of Meitner and Frisch's interpretation of Hahn and Strassmann's discovery crossed the Atlantic with Niels Bohr, who was to lecture at Princeton University. Isidor Isaac Rabi and Willis Lamb, two Columbia University physicists working at Princeton, found out about it and carried it back to Columbia. Rabi said he told Enrico Fermi, but Fermi later gave the credit to Lamb: Noddack was proven right after all. Fermi had dismissed the possibility of fission on the basis of his calculations, but he had not taken into account the binding energy that would appear when a nuclide with an odd number of neutrons absorbed an extra neutron. For Fermi, the news came as a profound embarrassment, as the transuranic elements that he had partly been awarded the Nobel Prize for discovering had not been transuranic elements at all, but fission products. He added a footnote to this effect to his Nobel Prize acceptance speech. The scientists at Columbia decided that they should try to detect the energy released in the nuclear fission of uranium when bombarded by neutrons. On 25 January 1939, in the basement of Pupin Hall at Columbia, an experimental team including Fermi conducted the first nuclear fission experiment in the United States. The other members of the team were Herbert L. Anderson, Eugene T. Booth, John R. Dunning, G. Norris Glasoe, and Francis G. Slack. The next day, the Fifth Washington Conference on Theoretical Physics began in Washington, D.C. under the joint auspices of George Washington University and the Carnegie Institution of Washington. There, the news on nuclear fission was spread even further, fostering many more experimental demonstrations. 
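The slow-neutron argument above can be made quantitative with the standard textbook expression for the average logarithmic energy loss per elastic collision; the formula and the chosen energy endpoints in this short Python sketch are the usual reactor-physics estimate, not figures taken from this article, and the helper names are invented for the illustration.

# Standard textbook estimate (not from the article) of how many elastic
# collisions slow a neutron, illustrating why light nuclei moderate best.
import math

def xi(A):
    """Average logarithmic energy loss per elastic collision with mass number A."""
    if A == 1:
        return 1.0
    a = ((A - 1) / (A + 1)) ** 2
    return 1 + a / (1 - a) * math.log(a)

def collisions(A, e_start=2e6, e_end=0.025):   # ~2 MeV fission neutron down to thermal (eV)
    return math.log(e_start / e_end) / xi(A)

for name, A in [("hydrogen", 1), ("carbon", 12), ("lead", 207)]:
    print(f"{name:8s}  ~{collisions(A):6.0f} collisions")
# hydrogen  ~    18 collisions
# carbon    ~   115 collisions
# lead      ~  1900 collisions (roughly)

The numbers bear out the statement above: a neutron colliding with hydrogen can be thermalized in a couple of dozen collisions, while heavy nuclei would need thousands, which is why hydrogen-rich paraffin and water (and, later, graphite) were effective moderators.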
French scientists Hans von Halban, Lew Kowarski, and Frédéric Joliot-Curie had demonstrated that uranium bombarded by neutrons emitted more neutrons than it absorbed, suggesting the possibility of a chain reaction. Fermi and Anderson did so too a few weeks later. Leó Szilárd obtained a large quantity of uranium oxide from the Canadian radium producer Eldorado Gold Mines Limited, allowing Fermi and Anderson to conduct experiments with fission on a much larger scale. Fermi and Szilárd collaborated on the design of a device to achieve a self-sustaining nuclear reaction—a nuclear reactor. Owing to the rate of absorption of neutrons by the hydrogen in water, it was unlikely that a self-sustaining reaction could be achieved with natural uranium and water as a neutron moderator. Fermi suggested, based on his work with neutrons, that the reaction could be achieved with uranium oxide blocks and graphite as a moderator instead of water. This would reduce the neutron capture rate, and in theory make a self-sustaining chain reaction possible. Szilárd came up with a workable design: a pile of uranium oxide blocks interspersed with graphite bricks. Szilárd, Anderson, and Fermi published a paper on "Neutron Production in Uranium". But their work habits and personalities were different, and Fermi had trouble working with Szilárd. Fermi was among the first to warn military leaders about the potential impact of nuclear energy, giving a lecture on the subject at the Navy Department on 18 March 1939. The response fell short of what he had hoped for, although the Navy agreed to provide $1,500 towards further research at Columbia. Later that year, Szilárd, Eugene Wigner, and Edward Teller sent the letter signed by Einstein to US president Franklin D. Roosevelt, warning that Nazi Germany was likely to build an atomic bomb. In response, Roosevelt formed the Advisory Committee on Uranium to investigate the matter. The Advisory Committee on Uranium provided money for Fermi to buy graphite, and he built a pile of graphite bricks on the seventh floor of the Pupin Hall laboratory. By August 1941, he had six tons of uranium oxide and thirty tons of graphite, which he used to build a still larger pile in Schermerhorn Hall at Columbia. The S-1 Section of the Office of Scientific Research and Development, as the Advisory Committee on Uranium was now known, met on 18 December 1941, with the US now engaged in World War II, making its work urgent. Most of the effort sponsored by the committee had been directed at producing enriched uranium, but Committee member Arthur Compton determined that a feasible alternative was plutonium, which could be mass-produced in nuclear reactors by the end of 1944. He decided to concentrate the plutonium work at the University of Chicago. Fermi reluctantly moved, and his team became part of the new Metallurgical Laboratory there. The possible results of a self-sustaining nuclear reaction were unknown, so it seemed inadvisable to build the first nuclear reactor on the University of Chicago campus in the middle of the city. Compton found a location in the Argonne Woods Forest Preserve, a short distance outside Chicago. Stone & Webster was contracted to develop the site, but the work was halted by an industrial dispute. Fermi then persuaded Compton that he could build the reactor in the squash court under the stands of the University of Chicago's Stagg Field. Construction of the pile began on 6 November 1942, and Chicago Pile-1 went critical on 2 December.
The shape of the pile was intended to be roughly spherical, but as work proceeded Fermi calculated that criticality could be achieved without finishing the entire pile as planned. This experiment was a landmark in the quest for energy, and it was typical of Fermi's approach. Every step was carefully planned, and every calculation was meticulously done. When the first self-sustained nuclear chain reaction was achieved, Compton made a coded phone call to James B. Conant, the chairman of the National Defense Research Committee. To continue the research where it would not pose a public health hazard, the reactor was disassembled and moved to the Argonne Woods site. There Fermi directed experiments on nuclear reactions, reveling in the opportunities provided by the reactor's abundant production of free neutrons. The laboratory soon branched out from physics and engineering into using the reactor for biological and medical research. Initially, Argonne was run by Fermi as part of the University of Chicago, but it became a separate entity with Fermi as its director in May 1944. When the air-cooled X-10 Graphite Reactor at Oak Ridge went critical on 4 November 1943, Fermi was on hand just in case something went wrong. The technicians woke him early so that he could see it happen. Getting X-10 operational was another milestone in the plutonium project. It provided data on reactor design, training for DuPont staff in reactor operation, and produced the first small quantities of reactor-bred plutonium. Fermi became an American citizen in July 1944, the earliest date the law allowed. In September 1944, Fermi inserted the first uranium fuel slug into the B Reactor at the Hanford Site, the production reactor designed to breed plutonium in large quantities. Like X-10, it had been designed by Fermi's team at the Metallurgical Laboratory and built by DuPont, but it was much larger and was water-cooled. Over the next few days, 838 tubes were loaded, and the reactor went critical. Shortly after midnight on 27 September, the operators began to withdraw the control rods to initiate production. At first, all appeared to be well, but around 03:00, the power level started to drop and by 06:30 the reactor had shut down completely. The Army and DuPont turned to Fermi's team for answers. The cooling water was investigated to see if there was a leak or contamination. The next day the reactor suddenly started up again, only to shut down once more a few hours later. The problem was traced to neutron poisoning from xenon-135 or Xe-135, a fission product with a half-life of 9.1 to 9.4 hours. Fermi and John Wheeler both deduced that Xe-135 was responsible for absorbing neutrons in the reactor, thereby sabotaging the fission process. Fermi was recommended by colleague Emilio Segrè to ask Chien-Shiung Wu, as she prepared a printed draft on this topic to be published by the Physical Review. Upon reading the draft, Fermi and the scientists confirmed their suspicions: Xe-135 indeed absorbed neutrons, in fact it had a huge neutron cross-section. DuPont had deviated from the Metallurgical Laboratory's original design in which the reactor had 1,500 tubes arranged in a circle, and had added 504 tubes to fill in the corners. The scientists had originally considered this over-engineering a waste of time and money, but Fermi realized that if all 2,004 tubes were loaded, the reactor could reach the required power level and efficiently produce plutonium. 
In April 1943, Fermi raised with Robert Oppenheimer the possibility of using the radioactive byproducts from enrichment to contaminate the German food supply. The background was fear that the German atomic bomb project was already at an advanced stage, and Fermi was also sceptical at the time that an atomic bomb could be developed quickly enough. Oppenheimer discussed the "promising" proposal with Edward Teller, who suggested the use of strontium-90. James B. Conant and Leslie Groves were also briefed, but Oppenheimer wanted to proceed with the plan only if enough food could be contaminated with the weapon to kill half a million people. In mid-1944, Oppenheimer persuaded Fermi to join his Project Y at Los Alamos, New Mexico. Arriving in September, Fermi was appointed an associate director of the laboratory, with broad responsibility for nuclear and theoretical physics, and was placed in charge of F Division, which was named after him. F Division had four branches: F-1 Super and General Theory under Teller, which investigated the "Super" (thermonuclear) bomb; F-2 Water Boiler under L. D. P. King, which looked after the "water boiler" aqueous homogeneous research reactor; F-3 Super Experimentation under Egon Bretscher; and F-4 Fission Studies under Anderson. Fermi observed the Trinity test on 16 July 1945 and conducted an experiment to estimate the bomb's yield by dropping strips of paper into the blast wave. He paced off the distance they were blown by the explosion, and calculated the yield as ten kilotons of TNT; the actual yield was about 18.6 kilotons. Along with Oppenheimer, Compton, and Ernest Lawrence, Fermi was part of the scientific panel that advised the Interim Committee on target selection. The panel agreed with the committee that atomic bombs would be used without warning against an industrial target. Like others at the Los Alamos Laboratory, Fermi found out about the atomic bombings of Hiroshima and Nagasaki from the public address system in the technical area. Fermi did not believe that atomic bombs would deter nations from starting wars, nor did he think that the time was ripe for world government. He therefore did not join the Association of Los Alamos Scientists. Postwar work Fermi became the Charles H. Swift Distinguished Professor of Physics at the University of Chicago on 1 July 1945, although he did not depart the Los Alamos Laboratory with his family until 31 December 1945. He was elected a member of the US National Academy of Sciences in 1945. The Metallurgical Laboratory became the Argonne National Laboratory on 1 July 1946, the first of the national laboratories established by the Manhattan Project. The short distance between Chicago and Argonne allowed Fermi to work at both places. At Argonne he continued experimental physics, investigating neutron scattering with Leona Marshall. He also discussed theoretical physics with Maria Mayer, helping her develop insights into spin–orbit coupling that would lead to her receiving the Nobel Prize. The Manhattan Project was replaced by the Atomic Energy Commission (AEC) on 1 January 1947. Fermi served on the AEC General Advisory Committee, an influential scientific committee chaired by Robert Oppenheimer. He also liked to spend a few weeks each year at the Los Alamos National Laboratory, where he collaborated with Nicholas Metropolis, and with John von Neumann on Rayleigh–Taylor instability, the science of what occurs at the border between two fluids of different densities. 
After the detonation of the first Soviet fission bomb in August 1949, Fermi, along with Isidor Rabi, wrote a strongly worded report for the committee, opposing the development of a hydrogen bomb on moral and technical grounds. Nonetheless, Fermi continued to participate in work on the hydrogen bomb at Los Alamos as a consultant. Along with Stanislaw Ulam, he calculated that not only would the amount of tritium needed for Teller's model of a thermonuclear weapon be prohibitive, but a fusion reaction could still not be assured to propagate even with this large quantity of tritium. Fermi was among the scientists who testified on Oppenheimer's behalf at the Oppenheimer security hearing in 1954 that resulted in the denial of Oppenheimer's security clearance. In his later years, Fermi continued teaching at the University of Chicago, where he was a founder of what later became the Enrico Fermi Institute. His PhD students in the postwar period included Owen Chamberlain, Geoffrey Chew, Jerome Friedman, Marvin Goldberger, Tsung-Dao Lee, Arthur Rosenfeld and Sam Treiman. Jack Steinberger was a graduate student, and Mildred Dresselhaus was highly influenced by Fermi during the year she overlapped with him as a PhD student. Fermi conducted important research in particle physics, especially related to pions and muons. He made the first predictions of pion-nucleon resonance, relying on statistical methods, since he reasoned that exact answers were not required when the theory was wrong anyway. In a paper coauthored with Chen Ning Yang, he speculated that pions might actually be composite particles. The idea was elaborated by Shoichi Sakata. It has since been supplanted by the quark model, in which the pion is made up of quarks, which completed Fermi's model, and vindicated his approach. Fermi wrote a paper "On the Origin of Cosmic Radiation" in which he proposed that cosmic rays arose through material being accelerated by magnetic fields in interstellar space, which led to a difference of opinion with Teller. Fermi examined the issues surrounding magnetic fields in the arms of a spiral galaxy. He mused about what is now referred to as the "Fermi paradox": the contradiction between the presumed probability of the existence of extraterrestrial life and the fact that contact has not been made. Toward the end of his life, Fermi questioned his faith in society at large to make wise choices about nuclear technology. Death Fermi underwent what was called an "exploratory" operation in Billings Memorial Hospital in October 1954, after which he returned home. Fifty days later he died of inoperable stomach cancer in his home in Chicago. He was 53. Fermi suspected that working near the nuclear pile involved great risk, but he pressed on because he felt the benefits outweighed the risks to his personal safety. Two of his graduate student assistants working near the pile also died of cancer. A memorial service was held at the University of Chicago chapel, where colleagues Samuel K. Allison, Emilio Segrè, and Herbert L. Anderson spoke to mourn the loss of one of the world's "most brilliant and productive physicists." His body was interred at Oak Woods Cemetery where a private graveside service for the immediate family took place, presided over by a Lutheran chaplain. Impact and legacy Legacy Fermi received numerous awards in recognition of his achievements, including the Matteucci Medal in 1926, the Nobel Prize for Physics in 1938, the Hughes Medal in 1942, the Franklin Medal in 1947, and the Rumford Prize in 1953.
He was awarded the Medal for Merit in 1946 for his contribution to the Manhattan Project. Fermi was elected member of the American Philosophical Society in 1939 and a Foreign Member of the Royal Society (FRS) in 1950. The Basilica of Santa Croce, Florence, known as the Temple of Italian Glories for its many graves of artists, scientists and prominent figures in Italian history, has a plaque commemorating Fermi. In 1999, Time named Fermi on its list of the top 100 persons of the twentieth century. Fermi was widely regarded as an unusual case of a 20th-century physicist who excelled both theoretically and experimentally. Chemist and novelist C. P. Snow wrote, "if Fermi had been born a few years earlier, one could well imagine him discovering Rutherford's atomic nucleus, and then developing Bohr's theory of the hydrogen atom. If this sounds like hyperbole, anything about Fermi is likely to sound like hyperbole". Fermi was known as an inspiring teacher and was noted for his attention to detail, simplicity, and careful preparation of his lectures. Later, his lecture notes were transcribed into books. His papers and notebooks are today at the University of Chicago. Victor Weisskopf noted how Fermi "always managed to find the simplest and most direct approach, with the minimum of complication and sophistication." He disliked complicated theories, and while he had great mathematical ability, he would never use it when the job could be done much more simply. He was famous for getting quick and accurate answers to problems that would stump other people. Later on, his method of getting approximate and quick answers through back-of-the-envelope calculations became informally known as the "Fermi method", and is widely taught. Fermi was fond of pointing out that when Alessandro Volta was working in his laboratory, Volta had no idea where the study of electricity would lead. Fermi is generally remembered for his work on nuclear power and nuclear weapons, especially the creation of the first nuclear reactor, and the development of the first atomic and hydrogen bombs. His scientific work has stood the test of time. This includes his theory of beta decay, his work with non-linear systems, his discovery of the effects of slow neutrons, his study of pion-nucleon collisions, and his Fermi–Dirac statistics. His speculation that a pion was not a fundamental particle pointed the way towards the study of quarks and leptons. Things named after Fermi Many things bear Fermi's name. These include the Fermilab particle accelerator and physics lab in Batavia, Illinois, which was renamed in his honour in 1974, and the Fermi Gamma-ray Space Telescope, which was named after him in 2008, in recognition of his work on cosmic rays. Three nuclear reactor installations have been named after him: the Fermi 1 and Fermi 2 nuclear power plants in Newport, Michigan, the Enrico Fermi Nuclear Power Plant at Trino Vercellese in Italy, and the RA-1 Enrico Fermi research reactor in Argentina. A synthetic element isolated from the debris of the 1952 Ivy Mike nuclear test was named fermium, in honor of Fermi's contributions to the scientific community. This makes him one of 16 scientists who have elements named after them. Since 1956, the United States Atomic Energy Commission has named its highest honour, the Fermi Award, after him. Recipients of the award have included Otto Hahn, Robert Oppenheimer, Edward Teller and Hans Bethe. Publications (with Edoardo Amaldi) For a full list of his papers, see pages 75–78 in ref. 
Patents References Sources Further reading Bernstein, Barton J. "Four Physicists and the Bomb: The Early Years, 1945-1950" Historical Studies in the Physical and Biological Sciences (1988) 18#2; covers Oppenheimer, Fermi, Lawrence and Compton. online Galison, Peter, and Barton Bernstein. "In any light: Scientists and the decision to build the Superbomb, 1952–1954." Historical Studies in the Physical and Biological Sciences 19.2 (1989): 267–347. online External links "To Fermi – with Love – Part 1". Voices of the Manhattan Project 1971 Radio Segment "The First Reactor: 40th Anniversary Commemorative Edition", United States Department of Energy, (December 1982). Nobel prize page for the 1938 physics prize The Story of the First Pile Enrico Fermi's Case File at The Franklin Institute with information about his contributions to theoretical and experimental physics. "Remembering Enrico Fermi". Session J1. APS April Meeting 2010, American Physical Society. Time 100: Enrico Fermi by Richard Rhodes 29 March 1999 Fermi's stay with Ehrenfest in Leiden. 1901 births 1954 deaths American nuclear physicists Italian nuclear physicists Experimental physicists Theoretical physicists Quantum physicists American relativity theorists Thermodynamicists 20th-century American physicists Manhattan Project people 20th-century Italian inventors Nobel laureates in Physics Italian Nobel laureates Medal for Merit recipients Members of the United States National Academy of Sciences Foreign members of the Royal Society Corresponding Members of the USSR Academy of Sciences Members of the Royal Academy of Italy Members of the Lincean Academy Fellows of the American Physical Society Italian emigrants to the United States Monte Carlo methodologists University of Chicago faculty Columbia University faculty Academic staff of the University of Göttingen Academic staff of the Sapienza University of Rome University of Pisa alumni American agnostics Italian agnostics Italian Freemasons People from Leonia, New Jersey Scientists from Rome Deaths from stomach cancer in the United States Deaths from cancer in Illinois Italian exiles Naturalized citizens of the United States Recipients of the Matteucci Medal Winners of the Max Planck Medal Presidents of the American Physical Society Members of the American Philosophical Society People of Apulian descent People of Emilian descent Recipients of Franklin Medal
Enrico Fermi
[ "Physics", "Chemistry" ]
9,170
[ "Theoretical physics", "Quantum physicists", "Quantum mechanics", "Experimental physics", "Thermodynamics", "Thermodynamicists", "Theoretical physicists", "Experimental physicists" ]
10,273
https://en.wikipedia.org/wiki/Embryo%20drawing
Embryo drawing is the illustration of embryos in their developmental sequence. In plants and animals, an embryo develops from a zygote, the single cell that results when an egg and sperm fuse during fertilization. In animals, the zygote divides repeatedly to form a ball of cells, which then forms a set of tissue layers that migrate and fold to form an early embryo. Images of embryos provide a means of comparing embryos of different ages and species. To this day, embryo drawings are made in undergraduate developmental biology lessons. Comparing different embryonic stages of different animals is a tool that can be used to infer relationships between species, and thus biological evolution. This has been a source of considerable controversy, both in the past and today. Ernst Haeckel, working at the University of Jena, pioneered this field. By comparing different embryonic stages of different vertebrate species, he formulated the recapitulation theory. This theory states that an animal's embryonic development follows the same sequence as that of its evolutionary ancestors. Haeckel's work and the ensuing controversy linked the fields of developmental biology and comparative anatomy into comparative embryology. From a more modern perspective, Haeckel's drawings were the beginnings of the field of evolutionary developmental biology (evo-devo). The study of comparative embryology aims to prove or disprove that vertebrate embryos of different classes (e.g. mammals vs. fish) follow a similar developmental path due to their common ancestry. Such developing vertebrates have similar genes, which determine the basic body plan. However, further development allows distinct adult characteristics to emerge. Famous embryo illustrators Ernst Haeckel (1834–1919) Haeckel's illustrations show vertebrate embryos at different stages of development, which exhibit embryonic resemblance as support for evolution, recapitulation as evidence of the Biogenetic Law, and phenotypic divergence as evidence of von Baer's laws. The series of twenty-four embryos from the early editions of Haeckel's Anthropogenie remains the most famous. The different species are arranged in columns, and the different stages in rows. Similarities can be seen along the first two rows; the appearance of specialized characters in each species can be seen in the columns, and a diagonal interpretation leads one to Haeckel's idea of recapitulation. Haeckel's embryo drawings are primarily intended to express his theory of embryonic development, the Biogenetic Law, which in turn assumes (but is not crucial to) the evolutionary concept of common descent. His postulation of embryonic development coincides with his understanding of evolution as a developmental process. In and around 1800, embryology fused with comparative anatomy as the primary foundation of morphology. Ernst Haeckel, along with Karl von Baer and Wilhelm His, was primarily influential in forming the preliminary foundations of 'phylogenetic embryology' based on principles of evolution. Haeckel's 'Biogenetic Law' portrays the parallel relationship between an embryo's development and phylogenetic history. The term 'recapitulation' has come to embody Haeckel's Biogenetic Law, for embryonic development is a recapitulation of evolution. Haeckel proposes that all classes of vertebrates pass through an evolutionarily conserved "phylotypic" stage of development, a period of reduced phenotypic diversity among higher embryos. 
Only in later development do particular differences appear. Haeckel portrays a concrete demonstration of his Biogenetic Law through his Gastrea theory, in which he argues that the early cup-shaped gastrula stage of development is a universal feature of multi-celled animals. An ancestral form existed, known as the gastrea, which was a common ancestor to the corresponding gastrula. Haeckel argues that certain features in embryonic development are conserved and palingenetic, while others are caenogenetic. Caenogenesis represents "the blurring of ancestral resemblances in development", which are said to be the result of certain adaptations to embryonic life due to environmental changes. In his drawings, Haeckel cites the notochord, pharyngeal arches and clefts, pronephros and neural tube as palingenetic features. However, the yolk sac, extra-embryonic membranes, egg membranes and endocardial tube are considered caenogenetic features. The addition of terminal adult stages and the telescoping, or driving back, of such stages to descendant's embryonic stages are likewise representative of Haeckelian embryonic development. In addressing his embryo drawings to a general audience, Haeckel does not cite any sources, which gives his opponents the freedom to make assumptions regarding the originality of his work. Karl Ernst von Baer (1792–1876) Haeckel was not the only one to create a series of drawings representing embryonic development. Karl E. von Baer and Haeckel both struggled to model one of the most complex problems facing embryologists at the time: the arrangement of general and special characters during development in different species of animals. In relation to developmental timing, von Baer's scheme of development differs from Haeckel's scheme. Von Baer's scheme of development need not be tied to developmental stages defined by particular characters, where recapitulation involves heterochrony. Heterochrony represents a gradual alteration in the original phylogenetic sequence due to embryonic adaptation. As well, von Baer early noted that embryos of different species could not be easily distinguished from one another as in adults. Von Baer's laws governing embryonic development are specific rejections of recapitulation. As a response to Haeckel's theory of recapitulation, von Baer enunciates his most notorious laws of development. Von Baer's laws state that general features of animals appear earlier in the embryo than special features, where less general features stem from the most general, each embryo of a species departs more and more from a predetermined passage through the stages of other animals, and there is never a complete morphological similarity between an embryo and a lower adult. Von Baer's embryo drawings display that individual development proceeds from general features of the developing embryo in early stages through differentiation into special features specific to the species, establishing that linear evolution could not occur. Embryological development, in von Baer's mind, is a process of differentiation, "a movement from the more homogeneous and universal to the more heterogeneous and individual." Von Baer argues that embryos will resemble each other before attaining characteristics differentiating them as part of a specific family, genus or species, but embryos are not the same as the final forms of lower organisms. Wilhelm His (1831–1904) Wilhelm His was one of Haeckel's most authoritative and primary opponents advocating physiological embryology. 
His Anatomie menschlicher Embryonen (Anatomy of human embryos) employs a series of his most important drawings chronicling developing embryos from the end of the second week through the end of the second month of pregnancy. In 1878, His begins to engage in serious study of the anatomy of human embryos for his drawings. During the 19th century, embryologists often obtained early human embryos from abortions and miscarriages, postmortems of pregnant women and collections in anatomical museums. In order to construct his series of drawings, His collected specimens which he manipulated into a form that he could operate with. In His' Normentafel, he displays specific individual embryos rather than ideal types. His does not produce norms from aborted specimens, but rather visualizes the embryos in order to make them comparable and specifically subjects his embryo specimens to criticism and comparison with other cases. Ultimately, His' critical work in embryonic development comes with his production of a series of embryo drawings of increasing length and degree of development. His' depiction of embryological development strongly differs from Haeckel's depiction, for His argues that the phylogenetic explanation of ontogenetic events is unnecessary. His argues that all ontogenetic events are the "mechanical" result of differential cell growth. His' embryology is not explained in terms of ancestral history. The debate between Haeckel and His ultimately becomes fueled by the description of an embryo that Wilhelm Krause propels directly into the ongoing feud between Haeckel and His. Haeckel speculates that the allantois is formed in a similar way in both humans and other mammals. His, on the other hand, accuses Haeckel of altering and playing with the facts. Although Haeckel is proven right about the allantois, the utilization of Krause's embryo as justification turns out to be problematic, for the embryo is that of a bird rather than a human. The underlying debate between Haeckel and His derives from differing viewpoints regarding the similarity or dissimilarity of vertebrate embryos. In response to Haeckel's evolutionary claim that all vertebrates are essentially identical in the first month of embryonic life as proof of common descent, His responds by insisting that a more skilled observer would recognize even sooner that early embryos can be distinguished. His also counteracts Haeckel's sequence of drawings in the Anthropogenie with what he refers to as "exact" drawings, highlighting specific differences. Ultimately, His goes so far as to accuse Haeckel of "faking" his embryo illustrations to make the vertebrate embryos appear more similar than in reality. His also accuses Haeckel of creating early human embryos that he conjured in his imagination rather than obtained through empirical observation. His completes his denunciation of Haeckel by pronouncing that Haeckel had "'relinquished the right to count as an equal in the company of serious researchers.'" Controversy The exactness of Ernst Haeckel's drawings of embryos has caused much controversy among Intelligent Design proponents recently and Haeckel's intellectual opponents in the past. Although the early embryos of different species exhibit similarities, Haeckel apparently exaggerated these similarities in support of his Recapitulation theory, sometimes known as the Biogenetic Law or "Ontogeny recapitulates phylogeny". Furthermore, Haeckel even proposed theoretical life-forms to accommodate certain stages in embryogenesis. 
A recent review concluded that the "biogenetic law is supported by several recent studies – if applied to single characters only". Critics in the 19th and early 20th centuries, Karl von Baer and Wilhelm His, did not believe that living embryos reproduce the evolutionary process and produced embryo drawings of their own which emphasized the differences in early embryological development. Late 20th and early 21st century critic Stephen Jay Gould has objected to the continued use of Haeckel's embryo drawings in textbooks. On the other hand, Michael K. Richardson, Professor of Evolutionary Developmental Zoology, Leiden University, while recognizing that some criticisms of the drawings are legitimate (indeed, it was he and his co-workers who began the modern criticisms in 1998), has supported the drawings as teaching aids, and has said that "on a fundamental level, Haeckel was correct." Opposition to Haeckel Haeckel encountered extensive opposition to his artistic depictions of embryonic development during the late nineteenth and early twentieth centuries. Haeckel's opponents believed that he de-emphasized the differences between early embryonic stages in order to make the similarities between embryos of different species more pronounced. Early opponents: Ludwig Rutimeyer, Theodor Bischoff and Rudolf Virchow The first suggestion of fakery against Haeckel was made in late 1868 by Ludwig Rutimeyer in the Archiv für Anthropologie. Rutimeyer was a professor of zoology and comparative anatomy at the University of Basel, who rejected natural selection as simply mechanistic and proposed an anti-materialist view of nature. Rutimeyer claimed that Haeckel "had taken to kinds of liberty with established truth". Rutimeyer claimed that Haeckel presented the same image three consecutive times as the embryo of the dog, the chicken, and the turtle. Theodor Bischoff (1807–1882) was a strong opponent of Darwinism. As a pioneer in mammalian embryology, he was one of Haeckel's strongest critics. Although Bischoff's 1840 surveys depict how similar the early embryos of man are to other vertebrates, he later insisted that such hasty generalization was inconsistent with his recent findings regarding the dissimilarity between hamster embryos and those of rabbits and dogs. Nevertheless, Bischoff's main argument was in reference to Haeckel's drawings of human embryos, for Haeckel was later accused of miscopying the dog embryo from him. Throughout Haeckel's time, criticism of his embryo drawings was often due in part to his critics' view of his representations of embryological development as "crude schemata". Contemporary criticism of Haeckel: Michael Richardson and Stephen Jay Gould Michael Richardson and his colleagues, in a July 1997 issue of Anatomy and Embryology, demonstrated that Haeckel falsified his drawings in order to exaggerate the similarity of the phylotypic stage. In a March 2000 issue of Natural History, Stephen Jay Gould argued that Haeckel "exaggerated the similarities by idealizations and omissions." As well, Gould argued that Haeckel's drawings are simply inaccurate and falsified. On the other hand, one of those who criticized Haeckel's drawings, Michael Richardson, has argued that "Haeckel's much-criticized drawings are important as phylogenetic hypotheses, teaching aids, and evidence for evolution". But even Richardson admitted in Science Magazine in 1997 that his team's investigation of Haeckel's drawings was showing them to be "one of the most famous fakes in biology." 
Some version of Haeckel's drawings can be found in many modern biology textbooks in discussions of the history of embryology, with clarification that these are no longer considered valid. Haeckel's proponents (past and present) Although Charles Darwin accepted Haeckel's support for natural selection, he was tentative in using Haeckel's ideas in his writings; with regard to embryology, Darwin relied far more on von Baer's work. Haeckel's work was published in 1866 and 1874, years after Darwin's "The Origin of Species" (1859). Despite the numerous oppositions, Haeckel has influenced many disciplines in science in his drive to integrate such disciplines of taxonomy and embryology into the Darwinian framework and to investigate phylogenetic reconstruction through his Biogenetic Law. As well, Haeckel served as a mentor to many important scientists, including Anton Dohrn, Richard and Oscar Hertwig, Wilhelm Roux, and Hans Driesch. One of Haeckel's earliest proponents was Carl Gegenbaur at the University of Jena (1865–1873), during which both men were absorbing the impact of Darwin's theory. The two quickly sought to integrate their knowledge into an evolutionary program. In determining the relationships between "phylogenetic linkages" and "evolutionary laws of form," both Gegenbaur and Haeckel relied on a method of comparison. As Gegenbaur argued, the task of comparative anatomy lies in explaining the form and organization of the animal body in order to provide evidence for the continuity and evolution of a series of organs in the body. Haeckel then provided a means of pursuing this aim with his biogenetic law, in which he proposed to compare an individual's various stages of development with its ancestral line. Although Haeckel stressed comparative embryology and Gegenbaur promoted the comparison of adult structures, both believed that the two methods could work in conjunction to produce the goal of evolutionary morphology. The philologist and anthropologist, Friedrich Müller, used Haeckel's concepts as a source for his ethnological research, involving the systematic comparison of the folklore, beliefs and practices of different societies. Müller's work relies specifically on theoretical assumptions that are very similar to Haeckel's and reflects the German practice to maintain strong connections between empirical research and the philosophical framework of science. Language is particularly important, for it establishes a bridge between natural science and philosophy. For Haeckel, language specifically represented the concept that all phenomena of human development relate to the laws of biology. Although Müller did not specifically have an influence in advocating Haeckel's embryo drawings, both shared a common understanding of development from lower to higher forms, for Müller specifically saw humans as the last link in an endless chain of evolutionary development. Modern acceptance of Haeckel's Biogenetic Law, despite current rejection of Haeckelian views, finds support in the certain degree of parallelism between ontogeny and phylogeny. A. M. Khazen, on the one hand, states that "ontogeny is obliged to repeat the main stages of phylogeny." A. S. Rautian, on the other hand, argues that the reproduction of ancestral patterns of development is a key aspect of certain biological systems. Dr. Rolf Siewing acknowledges the similarity of embryos in different species, along with the laws of von Baer, but does not believe that one should compare embryos with adult stages of development. According to M. 
S. Fischer, reconsideration of the Biogenetic Law is possible as a result of two fundamental innovations in biology since Haeckel's time: cladistics and developmental genetics. In defense of Haeckel's embryo drawings, the principal argument is that of "schematisation." Haeckel's drawings were not intended to be technical and scientific depictions, but rather schematic drawings and reconstructions for a specifically lay audience. Therefore, as R. Gursch argues, Haeckel's embryo drawings should be regarded as "reconstructions." Although his drawings are open to criticism, his drawings should not be considered falsifications of any sort. Although modern defense of Haeckel's embryo drawings still considers the inaccuracy of his drawings, charges of fraud are considered unreasonable. As Erland Nordenskiöld argues, charges of fraud against Haeckel are unnecessary. R. Bender ultimately goes so far as to reject His's claims regarding the fabrication of certain stages of development in Haeckel's drawings, arguing that Haeckel's embryo drawings are faithful representations of real stages of embryonic development in comparison to published embryos. Use of embryo drawings in contemporary biology Although Stephen Jay Gould's 1977 book Ontogeny and Phylogeny helps to reassess Haeckelian embryology, it does not address the controversy over Haeckel's embryo drawings. Nevertheless, new interest in evolution in and around 1977 inspired developmental biologists to look more closely at Haeckel's illustrations. In current biology, fundamental research in developmental biology and evolutionary developmental biology is no longer driven by morphological comparisons between embryos, but more by molecular biology. See also Medical illustration Recapitulation theory Ernst Haeckel Evolutionary developmental biology Embryogenesis History of biology History of zoology, post-Darwin Science education References Further reading Biological techniques and tools History of evolutionary biology Technical drawing
Embryo drawing
[ "Engineering", "Biology" ]
4,020
[ "Design engineering", "Technical drawing", "nan", "Civil engineering" ]
10,274
https://en.wikipedia.org/wiki/Enthalpy
Enthalpy (H) is the sum of a thermodynamic system's internal energy and the product of its pressure and volume. It is a state function in thermodynamics used in many measurements in chemical, biological, and physical systems at a constant external pressure, which is conveniently provided by the large ambient atmosphere. The pressure–volume term expresses the work that was done against constant external pressure to establish the system's physical dimensions from an initial volume of zero to some final volume V (as the work pV), i.e. to make room for it by displacing its surroundings. The pressure-volume term is very small for solids and liquids at common conditions, and fairly small for gases. Therefore, enthalpy is a stand-in for energy in chemical systems; bond, lattice, solvation, and other chemical "energies" are actually enthalpy differences. As a state function, enthalpy depends only on the final configuration of internal energy, pressure, and volume, not on the path taken to achieve it. In the International System of Units (SI), the unit of measurement for enthalpy is the joule. Other historical conventional units still in use include the calorie and the British thermal unit (BTU). The total enthalpy of a system cannot be measured directly because the internal energy contains components that are unknown, not easily accessible, or are not of interest for the thermodynamic problem at hand. In practice, a change in enthalpy is the preferred expression for measurements at constant pressure, because it simplifies the description of energy transfer. When transfer of matter into or out of the system is also prevented and no electrical or mechanical (stirring shaft or lift pumping) work is done, at constant pressure the enthalpy change equals the energy exchanged with the environment by heat. In chemistry, the standard enthalpy of reaction is the enthalpy change when reactants in their standard states (p = 1 bar; usually T = 298 K) change to products in their standard states. This quantity is the standard heat of reaction at constant pressure and temperature, but it can be measured by calorimetric methods even if the temperature does vary during the measurement, provided that the initial and final pressure and temperature correspond to the standard state. The value does not depend on the path from initial to final state because enthalpy is a state function. Enthalpies of chemical substances are usually listed for 1 bar (100 kPa) pressure as a standard state. Enthalpies and enthalpy changes for reactions vary as a function of temperature, but tables generally list the standard heats of formation of substances at 25 °C (298 K). For endothermic (heat-absorbing) processes, the change ΔH is a positive value; for exothermic (heat-releasing) processes it is negative. The enthalpy of an ideal gas is independent of its pressure or volume, and depends only on its temperature, which correlates to its thermal energy. Real gases at common temperatures and pressures often closely approximate this behavior, which simplifies practical thermodynamic design and analysis. The word "enthalpy" is derived from the Greek word enthalpein, which means "to heat". Definition The enthalpy of a thermodynamic system is defined as the sum of its internal energy and the product of its pressure and volume: H = U + pV, where U is the internal energy, p is pressure, and V is the volume of the system; the product pV is sometimes referred to as the pressure energy. Enthalpy is an extensive property; it is proportional to the size of the system (for homogeneous systems). 
As intensive properties, the specific enthalpy, is referenced to a unit of mass of the system, and the molar enthalpy, where is the number of moles. For inhomogeneous systems the enthalpy is the sum of the enthalpies of the component subsystems: where is the total enthalpy of all the subsystems, refers to the various subsystems, refers to the enthalpy of each subsystem. A closed system may lie in thermodynamic equilibrium in a static gravitational field, so that its pressure varies continuously with altitude, while, because of the equilibrium requirement, its temperature is invariant with altitude. (Correspondingly, the system's gravitational potential energy density also varies with altitude.) Then the enthalpy summation becomes an integral: where ("rho") is density (mass per unit volume), is the specific enthalpy (enthalpy per unit mass), represents the enthalpy density (enthalpy per unit volume), denotes an infinitesimally small element of volume within the system, for example, the volume of an infinitesimally thin horizontal layer. The integral therefore represents the sum of the enthalpies of all the elements of the volume. The enthalpy of a closed homogeneous system is its energy function with its entropy and its pressure as natural state variables which provide a differential relation for of the simplest form, derived as follows. We start from the first law of thermodynamics for closed systems for an infinitesimal process: where is a small amount of heat added to the system, is a small amount of work performed by the system. In a homogeneous system in which only reversible processes or pure heat transfer are considered, the second law of thermodynamics gives with the absolute temperature and the infinitesimal change in entropy of the system. Furthermore, if only work is done, As a result, Adding to both sides of this expression gives or So and the coefficients of the natural variable differentials and are just the single variables and . Other expressions The above expression of in terms of entropy and pressure may be unfamiliar to some readers. There are also expressions in terms of more directly measurable variables such as temperature and pressure: Here is the heat capacity at constant pressure and is the coefficient of (cubic) thermal expansion: With this expression one can, in principle, determine the enthalpy if and are known as functions of and  . However the expression is more complicated than because is not a natural variable for the enthalpy . At constant pressure, so that For an ideal gas, reduces to this form even if the process involves a pressure change, because In a more general form, the first law describes the internal energy with additional terms involving the chemical potential and the number of particles of various types. The differential statement for then becomes where is the chemical potential per particle for a type  particle, and is the number of such particles. The last term can also be written as (with the number of moles of component added to the system and, in this case, the molar chemical potential) or as (with the mass of component added to the system and, in this case, the specific chemical potential). Characteristic functions and natural state variables The enthalpy, expresses the thermodynamics of a system in the energy representation. As a function of state, its arguments include both one intensive and several extensive state variables. The state variables , and are said to be the natural state variables in this representation. 
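Several inline relations in the preceding definition and "Other expressions" passages lost their symbols in extraction. The following is a reconstruction from standard thermodynamics using the quantities already named above (internal energy U, pressure p, volume V, temperature T, entropy S, heat capacity C_p); it is a sketch of the textbook relations rather than a restoration of the article's exact typography:

\begin{align}
h &= \frac{H}{m}, \qquad H_\mathrm{m} = \frac{H}{n},\\
\mathrm{d}U &= \delta Q - \delta W, \qquad \delta Q = T\,\mathrm{d}S, \qquad \delta W = p\,\mathrm{d}V,\\
\mathrm{d}H &= \mathrm{d}U + \mathrm{d}(pV) = T\,\mathrm{d}S + V\,\mathrm{d}p,\\
\mathrm{d}H &= C_p\,\mathrm{d}T + V(1 - \alpha T)\,\mathrm{d}p, \qquad \alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p,\\
\mathrm{d}H &= T\,\mathrm{d}S + V\,\mathrm{d}p + \sum_i \mu_i\,\mathrm{d}N_i .
\end{align}

At constant pressure the fourth relation reduces to dH = C_p dT, and for an ideal gas αT = 1, so the same reduction holds even when the pressure changes during the process.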
They are suitable for describing processes in which they are determined by factors in the surroundings. For example, when a virtual parcel of atmospheric air moves to a different altitude, the pressure surrounding it changes, and the process is often so rapid that there is too little time for heat transfer. This is the basis of the so-called adiabatic approximation that is used in meteorology. Conjugate with the enthalpy, with these arguments, the other characteristic function of state of a thermodynamic system is its entropy, as a function, of the same list of variables of state, except that the entropy, , is replaced in the list by the enthalpy, . It expresses the entropy representation. The state variables , , and are said to be the natural state variables in this representation. They are suitable for describing processes in which they are experimentally controlled. For example, and can be controlled by allowing heat transfer, and by varying only the external pressure on the piston that sets the volume of the system. Physical interpretation The term is the energy of the system, and the term can be interpreted as the work that would be required to "make room" for the system if the pressure of the environment remained constant. When a system, for example, moles of a gas of volume at pressure and temperature , is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy plus , where is the work done in pushing against the ambient (atmospheric) pressure. In physics and statistical mechanics it may be more interesting to study the internal properties of a constant-volume system and therefore the internal energy is used. In chemistry, experiments are often conducted at constant atmospheric pressure, and the pressure–volume work represents a small, well-defined energy exchange with the atmosphere, so that is the appropriate expression for the heat of reaction. For a heat engine, the change in its enthalpy after a full cycle is equal to zero, since the final and initial state are equal. Relationship to heat In order to discuss the relation between the enthalpy increase and heat supply, we return to the first law for closed systems, with the physics sign convention: , where the heat is supplied by conduction, radiation, Joule heating. We apply it to the special case with a constant pressure at the surface. In this case the work is given by (where is the pressure at the surface, is the increase of the volume of the system). Cases of long range electromagnetic interaction require further state variables in their formulation, and are not considered here. In this case the first law reads: Now, So If the system is under constant pressure, and consequently, the increase in enthalpy of the system is equal to the heat added: This is why the now-obsolete term heat content was used for enthalpy in the 19th century. Applications In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from "nothingness"; the mechanical work required, differs based upon the conditions that obtain during the creation of the thermodynamic system. Energy must be supplied to remove particles from the surroundings to make space for the creation of the system, assuming that the pressure remains constant; this is the The supplied energy must also provide the change in internal energy, which includes activation energies, ionization energies, mixing energies, vaporization energies, chemical bond energies, and so forth. 
Together, these constitute the change in the enthalpy For systems at constant pressure, with no external work done other than the work, the change in enthalpy is the heat received by the system. For a simple system with a constant number of particles at constant pressure, the difference in enthalpy is the maximum amount of thermal energy derivable from an isobaric thermodynamic process. Heat of reaction The total enthalpy of a system cannot be measured directly; the enthalpy change of a system is measured instead. Enthalpy change is defined by the following equation: where is the "enthalpy change", is the final enthalpy of the system (in a chemical reaction, the enthalpy of the products or the system at equilibrium), is the initial enthalpy of the system (in a chemical reaction, the enthalpy of the reactants). For an exothermic reaction at constant pressure, the system's change in enthalpy, , is negative due to the products of the reaction having a smaller enthalpy than the reactants, and equals the heat released in the reaction if no electrical or shaft work is done. In other words, the overall decrease in enthalpy is achieved by the generation of heat. Conversely, for a constant-pressure endothermic reaction, is positive and equal to the heat absorbed in the reaction. From the definition of enthalpy as the enthalpy change at constant pressure is However, for most chemical reactions, the work term is much smaller than the internal energy change , which is approximately equal to . As an example, for the combustion of carbon monoxide and Since the differences are so small, reaction enthalpies are often described as reaction energies and analyzed in terms of bond energies. Specific enthalpy The specific enthalpy of a uniform system is defined as , where is the mass of the system. Its SI unit is joule per kilogram. It can be expressed in other specific quantities by where is the specific internal energy, is the pressure, and is specific volume, which is equal to , where is the density. Enthalpy changes An enthalpy change describes the change in enthalpy observed in the constituents of a thermodynamic system when undergoing a transformation or chemical reaction. It is the difference between the enthalpy after the process has completed, i.e. the enthalpy of the products assuming that the reaction goes to completion, and the initial enthalpy of the system, namely the reactants. These processes are specified solely by their initial and final states, so that the enthalpy change for the reverse is the negative of that for the forward process. A common standard enthalpy change is the enthalpy of formation, which has been determined for a large number of substances. Enthalpy changes are routinely measured and compiled in chemical and physical reference works, such as the CRC Handbook of Chemistry and Physics. The following is a selection of enthalpy changes commonly recognized in thermodynamics. When used in these recognized terms the qualifier change is usually dropped and the property is simply termed enthalpy of 'process. Since these properties are often used as reference values it is very common to quote them for a standardized set of environmental parameters, or standard conditions, including: A pressure of one atmosphere (1 atm or 1013.25 hPa) or 1 bar A temperature of 25 °C or 298.15 K A concentration of 1.0 M when the element or compound is present in solution Elements or compounds in their normal physical states, i.e. 
standard state For such standardized values the name of the enthalpy is commonly prefixed with the term standard, e.g. standard enthalpy of formation. Chemical properties Enthalpy of reaction - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of substance reacts completely. Enthalpy of formation - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a compound is formed from its elementary antecedents. Enthalpy of combustion - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a substance burns completely with oxygen. Enthalpy of hydrogenation - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of an unsaturated compound reacts completely with an excess of hydrogen to form a saturated compound. Enthalpy of atomization - is defined as the enthalpy change required to separate one mole of a substance into its constituent atoms completely. Enthalpy of neutralization - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of water is formed when an acid and a base react. Standard Enthalpy of solution - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a solute is dissolved completely in an excess of solvent, so that the solution is at infinite dilution. Standard enthalpy of Denaturation (biochemistry) - is defined as the enthalpy change required to denature one mole of compound. Enthalpy of hydration - is defined as the enthalpy change observed when one mole of gaseous ions are completely dissolved in water forming one mole of aqueous ions. Physical properties Enthalpy of fusion - is defined as the enthalpy change required to completely change the state of one mole of substance from solid to liquid. Enthalpy of vaporization - is defined as the enthalpy change required to completely change the state of one mole of substance from liquid to gas. Enthalpy of sublimation - is defined as the enthalpy change required to completely change the state of one mole of substance from solid to gas. Lattice enthalpy - is defined as the energy required to separate one mole of an ionic compound into separated gaseous ions to an infinite distance apart (meaning no force of attraction). Enthalpy of mixing - is defined as the enthalpy change upon mixing of two (non-reacting) chemical substances. Open systems In thermodynamic open systems, mass (of substances) may flow in and out of the system boundaries. The first law of thermodynamics for open systems states: The increase in the internal energy of a system is equal to the amount of energy added to the system by mass flowing in and by heating, minus the amount lost by mass flowing out and in the form of work done by the system: where is the average internal energy entering the system, and is the average internal energy leaving the system. The region of space enclosed by the boundaries of the open system is usually called a control volume, and it may or may not correspond to physical walls. If we choose the shape of the control volume such that all flow in or out occurs perpendicular to its surface, then the flow of mass into the system performs work as if it were a piston of fluid pushing mass into the system, and the system performs work on the flow of mass out as if it were driving a piston of fluid. 
There are then two types of work performed: flow work described above, which is performed on the fluid (this is also often called pV work), and mechanical work (shaft work), which may be performed on some mechanical device such as a turbine or pump. These two types of work are expressed in the equation δW = d(p_out V_out) − d(p_in V_in) + δW_shaft. Substitution into the equation above for the control volume (cv) yields: dU_cv = δQ + dU_in + d(p_in V_in) − dU_out − d(p_out V_out) − δW_shaft. The definition of enthalpy, H = U + pV, permits us to use this thermodynamic potential to account for both internal energy and pV work in fluids for open systems: dU_cv = δQ + dH_in − dH_out − δW_shaft. If we allow also the system boundary to move (e.g. due to moving pistons), we get a rather general form of the first law for open systems. In terms of time derivatives, using Newton's dot notation for time derivatives, it reads:

dU/dt = \sum_k \dot{Q}_k + \sum_k \dot{m}_k h_k − \sum_k p_k (dV_k/dt) − P,

with sums over the various places k where heat is supplied, mass flows into the system, and boundaries are moving. The terms \dot{m}_k h_k represent enthalpy flows, which can be written as \dot{m}_k h_k = \dot{n}_k H_{m,k}, with \dot{m}_k the mass flow and \dot{n}_k the molar flow at position k respectively. The term dV_k/dt represents the rate of change of the system volume at position k, which results in p_k dV_k/dt of power done by the system. The parameter P represents all other forms of power done by the system such as shaft power, but it can also be, say, electric power produced by an electrical power plant. Note that the previous expression holds true only if the kinetic energy flow rate is conserved between system inlet and outlet. Otherwise, it has to be included in the enthalpy balance. During steady-state operation of a device (see turbine, pump, and engine), the average dU/dt may be set equal to zero. This yields a useful expression for the average power generation for these devices in the absence of chemical reactions:

P = \sum_k \langle \dot{Q}_k \rangle + \sum_k \langle \dot{m}_k h_k \rangle − \sum_k \langle p_k \, dV_k/dt \rangle,

where the angle brackets denote time averages. The technical importance of the enthalpy is directly related to its presence in the first law for open systems, as formulated above. Diagrams The enthalpy values of important substances can be obtained using commercial software. Practically all relevant material properties can be obtained either in tabular or in graphical form. There are many types of diagrams, such as h–T diagrams, which give the specific enthalpy as a function of temperature for various pressures, and h–p diagrams, which give h as a function of p for various T. One of the most common diagrams is the temperature–specific entropy diagram (T–s diagram). It gives the melting curve and saturated liquid and vapor values together with isobars and isenthalps. These diagrams are powerful tools in the hands of the thermal engineer. Some basic applications The points a through h in the figure play a role in the discussion in this section.

Point   T (K)    p (bar)   s (kJ/(kg K))   h (kJ/kg)
a       300      1         6.85            461
b       380      2         6.85            530
c       300      200       5.16            430
d       270      1         6.79            430
e       108      13        3.55            100
f       77.2     1         3.75            100
g       77.2     1         2.83            28
h       77.2     1         5.41            230

Points e and g are saturated liquids, and point h is a saturated gas. Throttling One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansion. It concerns a steady adiabatic flow of a fluid through a flow resistance (valve, porous plug, or any other type of flow resistance) as shown in the figure. 
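Anticipating the worked throttling examples below, the constant-enthalpy condition can be combined with the table values above to find how a throttled stream splits into liquid and gas. A minimal sketch in Python (the helper name is illustrative, not from the source; the numbers are the nitrogen values tabulated above):

# Specific enthalpies at 1 bar, 77.2 K, taken from the table above (kJ/kg)
h_saturated_liquid = 28.0   # point g
h_saturated_gas = 230.0     # point h

def liquid_fraction(h_mixture, h_liq, h_gas):
    """Lever rule: enthalpy is extensive, so h_mixture = x*h_liq + (1 - x)*h_gas."""
    return (h_gas - h_mixture) / (h_gas - h_liq)

# Throttling from point e (h = 100 kJ/kg) to 1 bar keeps the specific enthalpy
# constant, so the two-phase mixture at point f still has h = 100 kJ/kg.
x = liquid_fraction(100.0, h_saturated_liquid, h_saturated_gas)
print(round(x, 2))  # about 0.64, i.e. roughly 64% liquid by mass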
This process is very important, since it is at the heart of domestic refrigerators, where it is responsible for the temperature drop between ambient temperature and the interior of the refrigerator. It is also the final stage in many types of liquefiers. For a steady state flow regime, the enthalpy of the system (dotted rectangle) has to be constant. Hence Since the mass flow is constant, the specific enthalpies at the two sides of the flow resistance are the same: that is, the enthalpy per unit mass does not change during the throttling. The consequences of this relation can be demonstrated using the diagram above. Example 1 Point c is at 200 bar and room temperature (300 K). A Joule–Thomson expansion from 200 bar to 1 bar follows a curve of constant enthalpy of roughly 425  (not shown in the diagram) lying between the 400 and 450  isenthalps and ends in point d, which is at a temperature of about 270 K . Hence the expansion from 200 bar to 1 bar cools nitrogen from 300 K to 270 K . In the valve, there is a lot of friction, and a lot of entropy is produced, but still the final temperature is below the starting value. Example 2 Point e is chosen so that it is on the saturated liquid line with It corresponds roughly with and Throttling from this point to a pressure of 1 bar ends in the two-phase region (point f). This means that a mixture of gas and liquid leaves the throttling valve. Since the enthalpy is an extensive parameter, the enthalpy in is equal to the enthalpy in multiplied by the liquid fraction in plus the enthalpy in multiplied by the gas fraction in So With numbers: so This means that the mass fraction of the liquid in the liquid–gas mixture that leaves the throttling valve is 64%. Compressors A power is applied e.g. as electrical power. If the compression is adiabatic, the gas temperature goes up. In the reversible case it would be at constant entropy, which corresponds with a vertical line in the diagram. For example, compressing nitrogen from 1 bar (point a) to 2 bar (point b''') would result in a temperature increase from 300 K to 380 K. In order to let the compressed gas exit at ambient temperature , heat exchange, e.g. by cooling water, is necessary. In the ideal case the compression is isothermal. The average heat flow to the surroundings is . Since the system is in the steady state the first law gives The minimal power needed for the compression is realized if the compression is reversible. In that case the second law of thermodynamics for open systems gives Eliminating gives for the minimal power For example, compressing 1 kg of nitrogen from 1 bar to 200 bar costs at least : With the data, obtained with the diagram, we find a value of The relation for the power can be further simplified by writing it as With this results in the final relation History and etymology The term enthalpy was coined relatively late in the history of thermodynamics, in the early 20th century. Energy was introduced in a modern sense by Thomas Young in 1802, while entropy by Rudolf Clausius in 1865. Energy uses the root of the Greek word (ergon), meaning "work", to express the idea of capacity to perform work. Entropy uses the Greek word (tropē) meaning transformation or turning. Enthalpy uses the root of the Greek word (thalpos) "warmth, heat". The term expresses the obsolete concept of heat content, as refers to the amount of heat gained in a process at constant pressure only, but not in the general case when pressure is variable. J. W. 
Gibbs used the term "a heat function for constant pressure" for clarity. Introduction of the concept of "heat content" is associated with Benoît Paul Émile Clapeyron and Rudolf Clausius (Clausius–Clapeyron relation, 1850). The term enthalpy first appeared in print in 1909. It is attributed to Heike Kamerlingh Onnes, who most likely introduced it orally the year before, at the first meeting of the Institute of Refrigeration in Paris. It gained currency only in the 1920s, notably with the Mollier Steam Tables and Diagrams'', published in 1927. Until the 1920s, the symbol was used, somewhat inconsistently, for "heat" in general. The definition of as strictly limited to enthalpy or "heat content at constant pressure" was formally proposed by A. W. Porter in 1922. Notes See also Calorimetry Calorimeter Departure function Hess's law Isenthalpic process Laws of thermodynamics Stagnation enthalpy Standard enthalpy of formation Thermodynamic databases Thermodynamics References Bibliography External links State functions Energy (physics) Physical quantities
Enthalpy
[ "Physics", "Chemistry", "Mathematics" ]
5,449
[ "State functions", "Thermodynamic properties", "Physical phenomena", "Physical quantities", "Quantity", "Energy (physics)", "Enthalpy", "Wikipedia categories named after physical quantities", "Physical properties" ]
10,278
https://en.wikipedia.org/wiki/List%20of%20explosives%20used%20during%20World%20War%20II
Almost all the common explosives listed here were mixtures of several common components: Ammonium picrate TNT (Trinitrotoluene) PETN (Pentaerythritol tetranitrate) RDX Powdered aluminium. This is only a partial list; there were many others. Many of these compositions are now obsolete and only encountered in legacy munitions and unexploded ordnance. Two nuclear explosives, based on uranium and plutonium respectively, were also used in the bombings of Hiroshima and Nagasaki. See also List of Japanese World War II explosives Explosive material Little Boy Fat Man Explosives
List of explosives used during World War II
[ "Chemistry" ]
125
[ "Explosives", "Explosions" ]
10,283
https://en.wikipedia.org/wiki/Erlang%20%28unit%29
The erlang (symbol E) is a dimensionless unit that is used in telephony as a measure of offered load or carried load on service-providing elements such as telephone circuits or telephone switching equipment. A single cord circuit has the capacity to be used for 60 minutes in one hour. Full utilization of that capacity, 60 minutes of traffic, constitutes 1 erlang. Carried traffic in erlangs is the average number of concurrent calls measured over a given period (often one hour), while offered traffic is the traffic that would be carried if all call-attempts succeeded. How much offered traffic is carried in practice will depend on what happens to unanswered calls when all servers are busy. The CCITT named the international unit of telephone traffic the erlang in 1946 in honor of Agner Krarup Erlang. In Erlang's analysis of efficient telephone line usage he derived the formulae for two important cases, Erlang-B and Erlang-C, which became foundational results in teletraffic engineering and queueing theory. His results, which are still used today, relate quality of service to the number of available servers. Both formulae take offered load as one of their main inputs (in erlangs), which is often expressed as call arrival rate times average call length. A distinguishing assumption behind the Erlang B formula is that there is no queue, so that if all service elements are already in use then a newly arriving call will be blocked and subsequently lost. The formula gives the probability of this occurring. In contrast, the Erlang C formula provides for the possibility of an unlimited queue and it gives the probability that a new call will need to wait in the queue due to all servers being in use. Erlang's formulae apply quite widely, but they may fail when congestion is especially high causing unsuccessful traffic to repeatedly retry. One way of accounting for retries when no queue is available is the Extended Erlang B method. Traffic measurements of a telephone circuit When used to represent carried traffic, a value (which can be a non-integer such as 43.5) followed by “erlangs” represents the average number of concurrent calls carried by the circuits (or other service-providing elements), where that average is calculated over some reasonable period of time. The period over which the average is calculated is often one hour, but shorter periods (e.g., 15 minutes) may be used where it is known that there are short spurts of demand and a traffic measurement is desired that does not mask these spurts. One erlang of carried traffic refers to a single resource being in continuous use, or two channels each being in use fifty percent of the time, and so on. For example, if an office has two telephone operators who are both busy all the time, that would represent two erlangs (2 E) of traffic; or a radio channel that is occupied continuously during the period of interest (e.g. one hour) is said to have a load of 1 erlang. When used to describe offered traffic, a value followed by “erlangs” represents the average number of concurrent calls that would have been carried if there were an unlimited number of circuits (that is, if the call-attempts that were made when all circuits were in use had not been rejected). The relationship between offered traffic and carried traffic depends on the design of the system and user behavior. 
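As a small numerical illustration of the unit (the call records here are invented, not data from the source), carried traffic in erlangs over a measurement period is simply the total call time divided by the length of the period:

# Durations of the calls observed during a one-hour measurement period, in seconds
call_durations_s = [300, 540, 120, 660, 480, 900, 240]

period_s = 3600  # one hour
carried_traffic_erlangs = sum(call_durations_s) / period_s
print(carried_traffic_erlangs)  # 0.9 E: on average 0.9 calls were in progress at once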
Three common models are (a) callers whose call-attempts are rejected go away and never come back, (b) callers whose call-attempts are rejected try again within a fairly short space of time, and (c) the system allows users to wait in queue until a circuit becomes available. A third measurement of traffic is instantaneous traffic, expressed as a certain number of erlangs, meaning the exact number of calls taking place at a point in time. In this case the number is a non-negative integer. Traffic-level-recording devices, such as moving-pen recorders, plot instantaneous traffic. Erlang's analysis The concepts and mathematics introduced by Agner Krarup Erlang have broad applicability beyond telephony. They apply wherever users arrive more or less at random to receive exclusive service from any one of a group of service-providing elements without prior reservation, for example, where the service-providing elements are ticket-sales windows, toilets on an airplane, or motel rooms. (Erlang's models do not apply where the service-providing elements are shared between several concurrent users or different amounts of service are consumed by different users, for instance, on circuits carrying data traffic.) The goal of Erlang's traffic theory is to determine exactly how many service-providing elements should be provided in order to satisfy users, without wasteful over-provisioning. To do this, a target is set for the grade of service (GoS) or quality of service (QoS). For example, in a system where there is no queuing, the GoS may be that no more than 1 call in 100 is blocked (i.e., rejected) due to all circuits being in use (a GoS of 0.01), which becomes the target probability of call blocking, Pb, when using the Erlang B formula. There are several resulting formulae, including Erlang B, Erlang C and the related Engset formula, based on different models of user behavior and system operation. These may each be derived by means of a special case of continuous-time Markov processes known as a birth–death process. The more recent Extended Erlang B method provides a further traffic solution that draws on Erlang's results. Calculating offered traffic Offered traffic (in erlangs) is related to the call arrival rate, λ, and the average call-holding time (the average time of a phone call), h, by: E = λh, provided that h and λ are expressed using the same units of time (seconds and calls per second, or minutes and calls per minute). The practical measurement of traffic is typically based on continuous observations over several days or weeks, during which the instantaneous traffic is recorded at regular, short intervals (such as every few seconds). These measurements are then used to calculate a single result, most commonly the busy-hour traffic (in erlangs). This is the average number of concurrent calls during a given one-hour period of the day, where that period is selected to give the highest result. (This result is called the time-consistent busy-hour traffic). An alternative is to calculate a busy-hour traffic value separately for each day (which may correspond to slightly different times each day) and take the average of these values. This generally gives a slightly higher value than the time-consistent busy-hour value. Where the existing busy-hour carried traffic, Ec, is measured on an already overloaded system, with a significant level of blocking, it is necessary to take account of the blocked calls in estimating the busy-hour offered traffic Eo (which is the traffic value to be used in the Erlang formulae). 
The offered traffic can be estimated by Eo = Ec / (1 − Pb). For this purpose, where the system includes a means of counting blocked calls and successful calls, Pb can be estimated directly from the proportion of calls that are blocked. Failing that, Pb can be estimated by using Ec in place of Eo in the Erlang formula, and the resulting estimate of Pb can then be used in Eo = Ec / (1 − Pb) to provide a first estimate of Eo. Another method of estimating Eo in an overloaded system is to measure the busy-hour call arrival rate, λ (counting successful calls and blocked calls), and the average call-holding time (for successful calls), h, and then estimate Eo using the formula Eo = λh. For a situation where the traffic to be handled is completely new traffic, the only choice is to try to model expected user behavior. For example, one could estimate active user population, N, expected level of use, U (number of calls/transactions per user per day), busy-hour concentration factor, C (proportion of daily activity that will fall in the busy hour), and average holding time/service time, h (expressed in minutes). A projection of busy-hour offered traffic would then be Eo = (N U C / 60) h erlangs. (The division by 60 translates the busy-hour call/transaction arrival rate into a per-minute value, to match the units in which h is expressed.) Erlang B formula The Erlang B formula (or Erlang-B with a hyphen), also known as the Erlang loss formula, is a formula for the blocking probability that describes the probability of call losses for a group of identical parallel resources (telephone lines, circuits, traffic channels, or equivalent), sometimes referred to as an M/M/c/c queue. It is, for example, used to dimension a telephone network's links. The formula was derived by Agner Krarup Erlang and is not limited to telephone networks, since it describes a probability in a queuing system (albeit a special case with a number of servers but no queueing space for incoming calls to wait for a free server). Hence, the formula is also used in certain inventory systems with lost sales. The formula applies under the condition that an unsuccessful call, because the line is busy, is not queued or retried, but instead really vanishes forever. It is assumed that call attempts arrive following a Poisson process, so call arrival instants are independent. Further, it is assumed that the message lengths (holding times) are exponentially distributed (Markovian system), although the formula turns out to apply under general holding time distributions. The Erlang B formula assumes an infinite population of sources (such as telephone subscribers), which jointly offer traffic to N servers (such as telephone lines). The rate expressing the frequency at which new calls arrive, λ (birth rate, traffic intensity, etc.), is constant, and does not depend on the number of active sources. The total number of sources is assumed to be infinite. The Erlang B formula calculates the blocking probability of a buffer-less loss system, where a request that is not served immediately is aborted, so that no requests become queued. Blocking occurs when a new request arrives at a time where all available servers are currently busy. The formula also assumes that blocked traffic is cleared and does not return. 
The formula provides the GoS (grade of service), which is the probability Pb that a new call arriving at the resources group is rejected because all resources (servers, lines, circuits) are busy: Pb = B(E, m) = (E^m / m!) / (Σ_{i=0..m} E^i / i!), where E is the total offered traffic in erlang, offered to m identical parallel resources (servers, communication channels, traffic lanes), and where: Pb is the probability of blocking; m is the number of identical parallel resources such as servers, telephone lines, etc.; E = λh is the normalised ingress load (offered traffic stated in erlang). Note: The erlang is a dimensionless load unit calculated as the mean arrival rate, λ, multiplied by the mean call holding time, h. See Little's law: for Little's law to be dimensionally sane, the erlang unit has to be dimensionless. This may be expressed recursively as follows, in a form that is used to simplify the calculation of tables of the Erlang B formula: B(E, 0) = 1 and B(E, m) = E · B(E, m − 1) / (m + E · B(E, m − 1)) for m = 1, 2, 3, .... Typically, instead of B(E, m) the inverse 1/B(E, m) is calculated in numerical computation in order to ensure numerical stability: 1/B(E, 0) = 1 and 1/B(E, m) = 1 + (m / E) · (1/B(E, m − 1)) for m = 1, 2, 3, ....

Function ErlangB (E As Double, m As Integer) As Double
    Dim InvB As Double
    Dim j As Integer
    InvB = 1.0
    For j = 1 To m
        InvB = 1.0 + InvB * j / E
    Next j
    ErlangB = 1.0 / InvB
End Function

or a Python version:

def erlang_b(E, m: int) -> float:
    """Calculate the probability of call losses."""
    inv_b = 1.0
    for j in range(1, m + 1):
        inv_b = 1.0 + inv_b * j / E
    return 1.0 / inv_b

The Erlang B formula is decreasing and convex in m. It requires that call arrivals can be modeled by a Poisson process, which is not always a good match, but it is valid for any statistical distribution of call holding times with a finite mean. It applies to traffic transmission systems that do not buffer traffic. More modern examples than POTS where Erlang B is still applicable are optical burst switching (OBS) and several current approaches to optical packet switching (OPS). Erlang B was developed as a trunk-sizing tool for telephone networks with holding times in the minutes range, but, being a mathematical equation, it applies on any time-scale. Extended Erlang B Extended Erlang B differs from the classic Erlang-B assumptions by allowing for a proportion of blocked callers to try again, causing an increase in offered traffic from the initial baseline level. It is an iterative calculation rather than a formula and adds an extra parameter, the recall factor Rf, which defines the proportion of blocked calls that are retried. The steps in the process are as follows. It starts at iteration k = 0 with a known initial baseline level of traffic E0, which is successively adjusted to calculate a sequence of new offered traffic values Ek, each of which accounts for the recalls arising from the previously calculated offered traffic. 1. Calculate the probability of a caller being blocked on their first attempt, Pb = B(Ek, m), as above for Erlang B. 2. Calculate the probable number of blocked calls, Be = Ek · Pb. 3. Calculate the number of recalls, R, assuming a fixed recall factor, Rf: R = Be · Rf. 4. Calculate the new offered traffic Ek+1 = E0 + R, where E0 is the initial (baseline) level of traffic. 5. Return to step 1, substituting Ek+1 for Ek, and iterate until a stable value of E is obtained. Once a satisfactory value of E has been found, the blocking probability and the recall factor can be used to calculate the probability that all of a caller's attempts are lost, not just their first call but also any subsequent retries. (A minimal sketch of this iteration is given after the Erlang C introduction below.) Erlang C formula The Erlang C formula expresses the probability that an arriving customer will need to queue (as opposed to immediately being served). 
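As noted above, here is a minimal Python sketch of the Extended Erlang B iteration. It is an illustrative sketch rather than code from the article, it reuses the erlang_b function defined in the Python snippet above, and the convergence tolerance and iteration cap are assumptions made for the example.

def extended_erlang_b(E0: float, m: int, recall_factor: float,
                      tol: float = 1e-9, max_iter: int = 1000) -> float:
    """Iterate the Extended Erlang B adjustment and return the blocking probability
    at the stable level of offered traffic."""
    E = E0
    for _ in range(max_iter):
        Pb = erlang_b(E, m)                 # step 1: Erlang B blocking probability
        blocked = E * Pb                    # step 2: probable number of blocked calls
        recalls = blocked * recall_factor   # step 3: recalls from blocked callers
        E_new = E0 + recalls                # step 4: new offered traffic
        if abs(E_new - E) < tol:            # step 5: stop once the traffic level is stable
            E = E_new
            break
        E = E_new
    return erlang_b(E, m)

For example, extended_erlang_b(20.0, 25, 0.5) would estimate blocking for a baseline load of 20 erlangs offered to 25 circuits when half of all blocked calls are retried.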
Like the Erlang B formula, Erlang C assumes an infinite population of sources, which jointly offer traffic of E erlangs to m servers. However, if all the servers are busy when a request arrives from a source, the request is queued. An unlimited number of requests may be held in the queue in this way simultaneously. The formula calculates the probability of queuing for the offered traffic, assuming that blocked calls stay in the system until they can be handled. It is used to determine the number of agents or customer-service representatives needed to staff a call centre for a specified desired probability of queuing. However, the Erlang C formula assumes that callers never hang up while in queue, which makes the formula predict that more agents should be used than are really needed to maintain a desired service level. The probability that an arriving customer has to wait is PW = [(E^m / m!) · m / (m − E)] / [Σ_{i=0..m−1} (E^i / i!) + (E^m / m!) · m / (m − E)], valid for E < m, where: E is the total traffic offered in units of erlangs, m is the number of servers, and PW is the probability that a customer has to wait for service. It is assumed that call arrivals can be modeled by a Poisson process and that call holding times are described by an exponential distribution, so the Erlang C formula follows from the assumptions of the M/M/c queue model. Limitations of the Erlang formula Erlang developed the Erlang-B and Erlang-C traffic equations on the basis of a set of assumptions. These assumptions are accurate under most conditions; however, in the event of extremely high traffic congestion, Erlang's equations fail to accurately predict the correct number of circuits required because of re-entrant traffic. This is termed a high-loss system, where congestion breeds further congestion at peak times. In such cases, it is first necessary for many additional circuits to be made available so that the high loss can be alleviated. Once this action has been taken, congestion will return to reasonable levels and Erlang's equations can then be used to determine exactly how many circuits are really required. An example of an instance which would cause such a high-loss system to develop would be a TV-based advertisement announcing a particular telephone number to call at a specific time. In this case, a large number of people would simultaneously phone the number provided. If the service provider had not catered for this sudden peak demand, extreme traffic congestion would develop and Erlang's equations could not be used. See also System spectral efficiency (discussing cellular network capacity in Erlang/MHz/cell) A. K. Erlang Call centre Discrete-event simulation Engset formula Erlang programming language Erlang distribution Little's law Poisson distribution Traffic mix References Further reading Network performance Units of measurement Teletraffic Queueing theory
Erlang (unit)
[ "Mathematics" ]
3,459
[ "Quantity", "Units of measurement" ]
10,290
https://en.wikipedia.org/wiki/Emulsion
An emulsion is a mixture of two or more liquids that are normally immiscible (unmixable or unblendable) owing to liquid-liquid phase separation. Emulsions are part of a more general class of two-phase systems of matter called colloids. Although the terms colloid and emulsion are sometimes used interchangeably, emulsion should be used when both phases, dispersed and continuous, are liquids. In an emulsion, one liquid (the dispersed phase) is dispersed in the other (the continuous phase). Examples of emulsions include vinaigrettes, homogenized milk, liquid biomolecular condensates, and some cutting fluids for metal working. Two liquids can form different types of emulsions. As an example, oil and water can form, first, an oil-in-water emulsion, in which the oil is the dispersed phase, and water is the continuous phase. Second, they can form a water-in-oil emulsion, in which water is the dispersed phase and oil is the continuous phase. Multiple emulsions are also possible, including a "water-in-oil-in-water" emulsion and an "oil-in-water-in-oil" emulsion. Emulsions, being liquids, do not exhibit a static internal structure. The droplets dispersed in the continuous phase (sometimes referred to as the "dispersion medium") are usually assumed to be statistically distributed to produce roughly spherical droplets. The term "emulsion" is also used to refer to the photo-sensitive side of photographic film. Such a photographic emulsion consists of silver halide colloidal particles dispersed in a gelatin matrix. Nuclear emulsions are similar to photographic emulsions, except that they are used in particle physics to detect high-energy elementary particles. Etymology The word "emulsion" comes from the Latin emulgere "to milk out", from ex "out" + mulgere "to milk", as milk is an emulsion of fat and water, along with other components, including colloidal casein micelles (a type of secreted biomolecular condensate). Appearance and properties Emulsions contain both a dispersed and a continuous phase, with the boundary between the phases called the "interface". Emulsions tend to have a cloudy appearance because the many phase interfaces scatter light as it passes through the emulsion. Emulsions appear white when all light is scattered equally. If the emulsion is dilute enough, higher-frequency (shorter-wavelength) light will be scattered more, and the emulsion will appear bluer – this is called the "Tyndall effect". If the emulsion is concentrated enough, the color will be distorted toward comparatively longer wavelengths, and will appear more yellow. This phenomenon is easily observable when comparing skimmed milk, which contains little fat, to cream, which contains a much higher concentration of milk fat. One example would be a mixture of water and oil. Two special classes of emulsions – microemulsions and nanoemulsions, with droplet sizes below 100 nm – appear translucent. This property is due to the fact that light waves are scattered by the droplets only if their sizes exceed about one-quarter of the wavelength of the incident light. Since the visible spectrum of light is composed of wavelengths between 390 and 750 nanometers (nm), if the droplet sizes in the emulsion are below about 100 nm, the light can penetrate through the emulsion without being scattered. Due to their similarity in appearance, translucent nanoemulsions and microemulsions are frequently confused. 
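As a quick numerical check of the quarter-wavelength scattering criterion described above, here is a small Python sketch; the helper name and the printed output are illustrative assumptions, not part of the article.

def scattering_threshold_nm(wavelength_nm: float) -> float:
    """Approximate droplet size above which light of the given wavelength is noticeably scattered."""
    return wavelength_nm / 4.0

# Across the visible range (about 390-750 nm) the threshold runs from roughly 98 nm to 188 nm,
# which is why droplets below about 100 nm leave a nanoemulsion or microemulsion looking translucent.
for wavelength in (390, 550, 750):
    print(wavelength, "nm light ->", scattering_threshold_nm(wavelength), "nm droplet threshold")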
Unlike translucent nanoemulsions, which require specialized equipment to be produced, microemulsions are spontaneously formed by "solubilizing" oil molecules with a mixture of surfactants, co-surfactants, and co-solvents. The required surfactant concentration in a microemulsion is, however, several times higher than that in a translucent nanoemulsion, and significantly exceeds the concentration of the dispersed phase. Because of many undesirable side-effects caused by surfactants, their presence is disadvantageous or prohibitive in many applications. In addition, the stability of a microemulsion is often easily compromised by dilution, by heating, or by changing pH levels. Common emulsions are inherently unstable and, thus, do not tend to form spontaneously. Energy input – through shaking, stirring, homogenizing, or exposure to power ultrasound – is needed to form an emulsion. Over time, emulsions tend to revert to the stable state of the phases comprising the emulsion. An example of this is seen in the separation of the oil and vinegar components of vinaigrette, an unstable emulsion that will quickly separate unless shaken almost continuously. There are important exceptions to this rule – microemulsions are thermodynamically stable, while translucent nanoemulsions are kinetically stable. Whether an emulsion of oil and water turns into a "water-in-oil" emulsion or an "oil-in-water" emulsion depends on the volume fraction of both phases and the type of emulsifier (surfactant) (see Emulsifier, below) present. Instability Emulsion stability refers to the ability of an emulsion to resist change in its properties over time. There are four types of instability in emulsions: flocculation, coalescence, creaming/sedimentation, and Ostwald ripening. Flocculation occurs when there is an attractive force between the droplets, so they form flocs, like bunches of grapes. This process can be desired, if controlled in its extent, to tune physical properties of emulsions such as their flow behaviour. Coalescence occurs when droplets bump into each other and combine to form a larger droplet, so the average droplet size increases over time. Emulsions can also undergo creaming, where the droplets rise to the top of the emulsion under the influence of buoyancy, or under the influence of the centripetal force induced when a centrifuge is used. Creaming is a common phenomenon in dairy and non-dairy beverages (i.e. milk, coffee milk, almond milk, soy milk) and usually does not change the droplet size. Sedimentation is the opposite phenomenon of creaming and normally observed in water-in-oil emulsions. Sedimentation happens when the dispersed phase is denser than the continuous phase and the gravitational forces pull the denser globules towards the bottom of the emulsion. Similar to creaming, sedimentation follows Stokes' law. An appropriate surface active agent (or surfactant) can increase the kinetic stability of an emulsion so that the size of the droplets does not change significantly with time. The stability of an emulsion, like a suspension, can be studied in terms of zeta potential, which indicates the repulsion between droplets or particles. If the size and dispersion of droplets does not change over time, it is said to be stable. For example, oil-in-water emulsions containing mono- and diglycerides and milk protein as surfactant showed that stable oil droplet size over 28 days storage at 25 °C. 
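The creaming and sedimentation behaviour above is said to follow Stokes' law. The sketch below uses the standard Stokes settling-velocity expression v = 2 r² (ρd − ρm) g / (9 μ), which is not quoted in the article itself, and the droplet and fluid values are illustrative assumptions.

def stokes_velocity(radius_m: float, rho_droplet: float, rho_medium: float,
                    viscosity_pa_s: float, g: float = 9.81) -> float:
    """Stokes terminal velocity in m/s; a negative value means the droplet is less dense than
    the medium and rises (creams) rather than settles."""
    return 2.0 * radius_m ** 2 * (rho_droplet - rho_medium) * g / (9.0 * viscosity_pa_s)

# A 1-micrometre-radius milk-fat droplet (~915 kg/m^3) in water (~1000 kg/m^3, viscosity ~0.001 Pa*s)
# gives a small negative velocity, i.e. the droplet slowly creams rather than sediments.
v = stokes_velocity(1e-6, 915.0, 1000.0, 1e-3)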
Monitoring physical stability The stability of emulsions can be characterized using techniques such as light scattering, focused beam reflectance measurement, centrifugation, and rheology. Each method has advantages and disadvantages. Accelerating methods for shelf life prediction The kinetic process of destabilization can be rather long – up to several months, or even years for some products. Often the formulator must accelerate this process in order to test products in a reasonable time during product design. Thermal methods are the most commonly used – these consist of increasing the emulsion temperature to accelerate destabilization (if below critical temperatures for phase inversion or chemical degradation). Temperature affects not only the viscosity but also the interfacial tension in the case of non-ionic surfactants or, on a broader scope, interactions between droplets within the system. Storing an emulsion at high temperatures enables the simulation of realistic conditions for a product (e.g., a tube of sunscreen emulsion in a car in the summer heat), but also accelerates destabilization processes up to 200 times. Mechanical methods of acceleration, including vibration, centrifugation, and agitation, can also be used. These methods are almost always empirical, without a sound scientific basis. Emulsifiers An emulsifier is a substance that stabilizes an emulsion by reducing the oil-water interface tension. Emulsifiers are a part of a broader group of compounds known as surfactants, or "surface-active agents". Surfactants are compounds that are typically amphiphilic, meaning they have a polar or hydrophilic (i.e., water-soluble) part and a non-polar (i.e., hydrophobic or lipophilic) part. Emulsifiers that are more soluble in water (and, conversely, less soluble in oil) will generally form oil-in-water emulsions, while emulsifiers that are more soluble in oil will form water-in-oil emulsions. Examples of food emulsifiers are: Egg yolk – in which the main emulsifying and thickening agent is lecithin. Mustard – where a variety of chemicals in the mucilage surrounding the seed hull act as emulsifiers Soy lecithin is another emulsifier and thickener Pickering stabilization – uses particles under certain circumstances Mono- and diglycerides – a common emulsifier found in many food products (coffee creamers, ice creams, spreads, breads, cakes) Sodium stearoyl lactylate DATEM (diacetyl tartaric acid esters of mono- and diglycerides) – an emulsifier used primarily in baking Proteins – those with both hydrophilic and hydrophobic regions, e.g. sodium caseinate, as in meltable cheese product In food emulsions, the type of emulsifier greatly affects how emulsions are structured in the stomach and how accessible the oil is for gastric lipases, thereby influencing how fast emulsions are digested and trigger a satiety inducing hormone response. Detergents are another class of surfactant, and will interact physically with both oil and water, thus stabilizing the interface between the oil and water droplets in suspension. This principle is exploited in soap, to remove grease for the purpose of cleaning. Many different emulsifiers are used in pharmacy to prepare emulsions such as creams and lotions. Common examples include emulsifying wax, polysorbate 20, and ceteareth 20. Sometimes the inner phase itself can act as an emulsifier, and the result is a nanoemulsion, where the inner state disperses into "nano-size" droplets within the outer phase. 
A well-known example of this phenomenon, the "ouzo effect", happens when water is poured into a strong alcoholic anise-based beverage, such as ouzo, pastis, absinthe, arak, or raki. The anisolic compounds, which are soluble in ethanol, then form nano-size droplets and emulsify within the water. The resulting color of the drink is opaque and milky white. Mechanisms of emulsification A number of different chemical and physical processes and mechanisms can be involved in the process of emulsification: Surface tension theory – according to this theory, emulsification takes place by reduction of interfacial tension between two phases Repulsion theory – according to this theory, the emulsifier creates a film over one phase that forms globules, which repel each other. This repulsive force causes them to remain suspended in the dispersion medium Viscosity modification – emulgents like acacia and tragacanth, which are hydrocolloids, as well as PEG (polyethylene glycol), glycerine, and other polymers like CMC (carboxymethyl cellulose), all increase the viscosity of the medium, which helps create and maintain the suspension of globules of dispersed phase Uses In food Oil-in-water emulsions are common in food products: Mayonnaise and Hollandaise sauces – these are oil-in-water emulsions stabilized with egg yolk lecithin, or with other types of food additives, such as sodium stearoyl lactylate Homogenized milk – an emulsion of milk fat in water, with milk proteins as the emulsifier Vinaigrette – an emulsion of vegetable oil in vinegar; if this is prepared using only oil and vinegar (i.e., without an emulsifier), an unstable emulsion results Water-in-oil emulsions are less common in food, but still exist: Butter – an emulsion of water in butterfat Margarine Other foods can be turned into products similar to emulsions; for example, meat emulsion is a suspension of meat in liquid that is similar to a true emulsion. In health care In pharmaceutics, hairstyling, personal hygiene, and cosmetics, emulsions are frequently used. These are usually oil-and-water emulsions, but which phase is dispersed and which is continuous depends in many cases on the pharmaceutical formulation. These emulsions may be called creams, ointments, liniments (balms), pastes, films, or liquids, depending mostly on their oil-to-water ratios, other additives, and their intended route of administration. The first five are topical dosage forms, and may be used on the surface of the skin, transdermally, ophthalmically, rectally, or vaginally. A highly liquid emulsion may also be used orally, or may be injected in some cases. Microemulsions are used to deliver vaccines and kill microbes. Typical emulsions used in these techniques are nanoemulsions of soybean oil, with particles that are 400–600 nm in diameter. The process is not chemical, as with other types of antimicrobial treatments, but mechanical. The smaller the droplet the greater the surface tension and thus the greater the force required to merge with other lipids. The oil is emulsified with detergents using a high-shear mixer to stabilize the emulsion so, when they encounter the lipids in the cell membrane or envelope of bacteria or viruses, they force the lipids to merge with themselves. On a mass scale, in effect this disintegrates the membrane and kills the pathogen. 
The soybean oil emulsion does not harm normal human cells, or the cells of most other higher organisms, with the exceptions of sperm cells and blood cells, which are vulnerable to nanoemulsions due to the peculiarities of their membrane structures. For this reason, these nanoemulsions are not currently used intravenously (IV). The most effective application of this type of nanoemulsion is for the disinfection of surfaces. Some types of nanoemulsions have been shown to effectively destroy HIV-1 and tuberculosis pathogens on non-porous surfaces. Applications in the pharmaceutical industry Oral drug delivery: Emulsions may provide an efficient means of administering drugs that are poorly soluble or that have low bioavailability or dissolution rates. By increasing the surface area available for dissolution, an emulsion can increase both the dissolution rate and the absorption of a drug, improving its bioavailability. Topical formulations: Emulsions are widely utilized as bases for topical drug delivery formulations such as creams, lotions and ointments. Their incorporation allows lipophilic as well as hydrophilic drugs to be mixed together for maximum skin penetration and permeation of active ingredients. Parenteral drug delivery: Emulsions serve as carriers for intravenous or intramuscular administration of drugs, solubilizing lipophilic ones while protecting them from degradation and decreasing injection-site irritation. Examples include propofol, a widely used anesthetic, and lipid-based solutions used for total parenteral nutrition. Ocular drug delivery: Emulsions can be used to formulate eye drops and other ocular drug delivery systems, increasing drug retention time in the eye and permeating through corneal barriers more easily while providing sustained release of active ingredients, thus increasing therapeutic efficacy. Nasal and pulmonary drug delivery: Emulsions can be an ideal vehicle for creating nasal sprays and inhalable drug products, enhancing drug absorption through the nasal and pulmonary mucosa while providing sustained release with reduced local irritation. Vaccine adjuvants: Emulsions can serve as vaccine adjuvants by strengthening immune responses against specific antigens. They can enhance antigen solubility and uptake by immune cells while simultaneously providing controlled release, amplifying the immunological response. Taste masking: Emulsions can be used to encase bitter or otherwise unpleasant-tasting drugs, masking their taste and increasing patient compliance, particularly with pediatric formulations. Cosmeceuticals: Emulsions are widely utilized in cosmeceutical products that combine cosmetic and pharmaceutical properties. These emulsions act as carriers for active ingredients like vitamins, antioxidants and skin-lightening agents to provide improved skin penetration and increased stability. In firefighting Emulsifying agents are effective at extinguishing fires on small, thin-layer spills of flammable liquids (class B fires). Such agents encapsulate the fuel in a fuel-water emulsion, thereby trapping the flammable vapors in the water phase. This emulsion is achieved by applying an aqueous surfactant solution to the fuel through a high-pressure nozzle. 
Emulsifiers are not effective at extinguishing large fires involving bulk/deep liquid fuels, because the amount of emulsifier agent needed for extinguishment is a function of the volume of the fuel, whereas other agents such as aqueous film-forming foam need cover only the surface of the fuel to achieve vapor mitigation. Chemical synthesis Emulsions are used to manufacture polymer dispersions – polymer production in an emulsion 'phase' has a number of process advantages, including prevention of coagulation of product. Products produced by such polymerisations may be used as the emulsions – products including primary components for glues and paints. Synthetic latexes (rubbers) are also produced by this process. See also References Other sources Handbook of Nanostructured Materials and Nanotechnology; Nalwa, H.S., Ed.; Academic Press: New York, NY, USA, 2000; Volume 5, pp. 501–575 Chemical mixtures Colloidal chemistry Colloids Dosage forms Drug delivery devices Soft matter
Emulsion
[ "Physics", "Chemistry", "Materials_science" ]
3,956
[ "Pharmacology", "Colloidal chemistry", "Soft matter", "Drug delivery devices", "Colloids", "Surface science", "Chemical mixtures", "Condensed matter physics", "nan" ]
10,294
https://en.wikipedia.org/wiki/Encryption
In cryptography, encryption (more specifically, encoding) is the process of transforming information in a way that, ideally, only authorized parties can decode. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Despite its goal, encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor. For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users. Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often used in military messaging. Since then, new techniques have emerged and become commonplace in all areas of modern computing. Modern encryption schemes use the concepts of public-key and symmetric-key. Modern encryption techniques ensure security because modern computers are inefficient at cracking the encryption. History Ancient One of the earliest forms of encryption is symbol replacement, which was first found in the tomb of Khnumhotep II, who lived in 1900 BC Egypt. Symbol replacement encryption is “non-standard,” which means that the symbols require a cipher or key to understand. This type of early encryption was used throughout Ancient Greece and Rome for military purposes. One of the most famous military encryption developments was the Caesar cipher, in which a plaintext letter is shifted a fixed number of positions along the alphabet to get the encoded letter. A message encoded with this type of encryption could be decoded with a fixed number on the Caesar cipher. Around 800 AD, Arab mathematician Al-Kindi developed the technique of frequency analysis – which was an attempt to crack ciphers systematically, including the Caesar cipher. This technique looked at the frequency of letters in the encrypted message to determine the appropriate shift: for example, the most common letter in English text is E and is therefore likely to be represented by the letter that appears most commonly in the ciphertext. This technique was rendered ineffective by the polyalphabetic cipher, described by Al-Qalqashandi (1355–1418) and Leon Battista Alberti (in 1465), which varied the substitution alphabet as encryption proceeded in order to confound such analysis. 19th–20th century Around 1790, Thomas Jefferson theorized a cipher to encode and decode messages to provide a more secure way of military correspondence. The cipher, known today as the Wheel Cipher or the Jefferson Disk, although never actually built, was theorized as a spool that could jumble an English message up to 36 characters. The message could be decrypted by plugging in the jumbled message to a receiver with an identical cipher. A similar device to the Jefferson Disk, the M-94, was developed in 1917 independently by US Army Major Joseph Mauborne. This device was used in U.S. military communications until 1942. In World War II, the Axis powers used a more advanced version of the M-94 called the Enigma Machine. The Enigma Machine was more complex because unlike the Jefferson Wheel and the M-94, each day the jumble of letters switched to a completely new combination. 
Each day's combination was only known by the Axis, so many thought the only way to break the code would be to try over 17,000 combinations within 24 hours. The Allies used computing power to severely limit the number of reasonable combinations they needed to check every day, leading to the breaking of the Enigma Machine. Modern Today, encryption is used in the transfer of communication over the Internet for security and commerce. As computing power continues to increase, computer encryption is constantly evolving to prevent eavesdropping attacks. One of the first "modern" cipher suites, DES, used a 56-bit key with 72,057,594,037,927,936 possibilities; it was cracked in 1999 by EFF's brute-force DES cracker, which required 22 hours and 15 minutes to do so. Modern encryption standards often use stronger key sizes, such as AES (256-bit mode), TwoFish, ChaCha20-Poly1305, Serpent (configurable up to 512-bit). Cipher suites that use a 128-bit or higher key, like AES, will not be able to be brute-forced because the total amount of keys is 3.4028237e+38 possibilities. The most likely option for cracking ciphers with high key size is to find vulnerabilities in the cipher itself, like inherent biases and backdoors or by exploiting physical side effects through Side-channel attacks. For example, RC4, a stream cipher, was cracked due to inherent biases and vulnerabilities in the cipher. Encryption in cryptography In the context of cryptography, encryption serves as a mechanism to ensure confidentiality. Since data may be visible on the Internet, sensitive information such as passwords and personal communication may be exposed to potential interceptors. The process of encrypting and decrypting messages involves keys. The two main types of keys in cryptographic systems are symmetric-key and public-key (also known as asymmetric-key). Many complex cryptographic algorithms often use simple modular arithmetic in their implementations. Types In symmetric-key schemes, the encryption and decryption keys are the same. Communicating parties must have the same key in order to achieve secure communication. The German Enigma Machine used a new symmetric-key each day for encoding and decoding messages. In addition to traditional encryption types, individuals can enhance their security by using VPNs or specific browser settings to encrypt their internet connection, providing additional privacy protection while browsing the web. In public-key encryption schemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read. Public-key encryption was first described in a secret document in 1973; beforehand, all encryption schemes were symmetric-key (also called private-key). Although published subsequently, the work of Diffie and Hellman was published in a journal with a large readership, and the value of the methodology was explicitly described. The method became known as the Diffie-Hellman key exchange. RSA (Rivest–Shamir–Adleman) is another notable public-key cryptosystem. Created in 1978, it is still used today for applications involving digital signatures. Using number theory, the RSA algorithm selects two prime numbers, which help generate both the encryption and decryption keys. A publicly available public-key encryption application called Pretty Good Privacy (PGP) was written in 1991 by Phil Zimmermann, and distributed free of charge with source code. 
PGP was purchased by Symantec in 2010 and is regularly updated. Uses Encryption has long been used by militaries and governments to facilitate secret communication. It is now commonly used in protecting information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed used encryption for some of their data in transit, and 53% used encryption for some of their data in storage. Encryption can be used to protect data "at rest", such as information stored on computers and storage devices (e.g. USB flash drives). In recent years, there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives; encrypting such files at rest helps protect them if physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering (see also copy protection), is another somewhat different example of using encryption on data at rest. Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years. Data should also be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users. Data erasure Conventional methods for permanently deleting data from a storage device involve overwriting the device's whole content with zeros, ones, or other patterns – a process which can take a significant amount of time, depending on the capacity and the type of storage medium. Cryptography offers a way of making the erasure almost instantaneous. This method is called crypto-shredding. An example implementation of this method can be found on iOS devices, where the cryptographic key is kept in a dedicated 'effaceable storage'. Because the key is stored on the same device, this setup on its own does not offer full privacy or security protection if an unauthorized person gains physical access to the device. Limitations Encryption is used in the 21st century to protect digital data and information systems. As computing power increased over the years, encryption technology has only become more advanced and secure. However, this advancement in technology has also exposed a potential limitation of today's encryption methods. The length of the encryption key is an indicator of the strength of the encryption method. For example, the original encryption key, DES (Data Encryption Standard), was 56 bits, meaning it had 2^56 combination possibilities. With today's computing power, a 56-bit key is no longer secure, being vulnerable to brute force attacks. Quantum computing uses properties of quantum mechanics in order to process large amounts of data simultaneously. Quantum computing has been found to achieve computing speeds thousands of times faster than today's supercomputers. This computing power presents a challenge to today's encryption technology. For example, RSA encryption uses the multiplication of very large prime numbers to create a semiprime number for its public key. Decoding this key without its private key requires this semiprime number to be factored, which can take a very long time to do with modern computers. 
It would take a supercomputer anywhere from weeks to months to factor this key. However, quantum computing can use quantum algorithms to factor this semiprime number in the same amount of time it takes for normal computers to generate it. This would make all data protected by current public-key encryption vulnerable to quantum computing attacks. Other encryption techniques like elliptic curve cryptography and symmetric key encryption are also vulnerable to quantum computing. While quantum computing could be a threat to encryption security in the future, quantum computing as it currently stands is still very limited. Quantum computing currently is not commercially available, cannot handle large amounts of code, and exists only as experimental computational devices rather than practical computers. Furthermore, advances in quantum computing can also be used in favor of encryption. The National Security Agency (NSA) is currently preparing post-quantum encryption standards for the future. Quantum encryption promises a level of security that will be able to counter the threat of quantum computing. Attacks and countermeasures Encryption is an important tool but is not sufficient alone to ensure the security or privacy of sensitive information throughout its lifetime. Most applications of encryption protect information only at rest or in transit, leaving sensitive data in clear text and potentially vulnerable to improper disclosure during processing, such as by a cloud service. Homomorphic encryption and secure multi-party computation are emerging techniques for computing on encrypted data; these techniques are general and Turing complete but incur high computational and/or communication costs. In response to encryption of data at rest, cyber-adversaries have developed new types of attacks. These more recent threats to encryption of data at rest include cryptographic attacks, stolen-ciphertext attacks, attacks on encryption keys, insider attacks, data corruption or integrity attacks, data destruction attacks, and ransomware attacks. Data fragmentation and active-defense data protection technologies attempt to counter some of these attacks by distributing, moving, or mutating ciphertext so that it is more difficult to identify, steal, corrupt, or destroy. The debate around encryption The question of balancing the need for national security with the right to privacy has been debated for years, since encryption has become critical in today's digital society. The modern encryption debate started around the 1990s, when the US government tried to ban cryptography on the grounds that it would threaten national security. The debate is polarized between two opposing views: those who see strong encryption as a problem that makes it easier for criminals to hide their illegal acts online, and those who argue that encryption keeps digital communications safe. The debate heated up in 2014, when Big Tech companies like Apple and Google began setting encryption by default on their devices. This was the start of a series of controversies that has put governments, companies and internet users at odds. Integrity protection of ciphertexts Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code (MAC) or a digital signature, usually produced with a hashing algorithm or a PGP signature. Authenticated encryption algorithms are designed to provide both encryption and integrity protection together. 
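As a minimal illustration of the MAC-based integrity protection mentioned above, here is a Python sketch using only the standard library. It is an illustrative encrypt-then-MAC sketch, not code from the article: the function names and key handling are assumptions, and the ciphertext is assumed to come from a separate encryption step.

import hmac
import hashlib

def attach_mac(ciphertext: bytes, mac_key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag to a ciphertext (encrypt-then-MAC)."""
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def verify_and_strip(message: bytes, mac_key: bytes) -> bytes:
    """Check the appended tag before the ciphertext is used; raise if it has been tampered with."""
    ciphertext, tag = message[:-32], message[-32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC verification failed: ciphertext integrity cannot be trusted")
    return ciphertext

Authenticated-encryption modes such as AES-GCM fold these two steps into a single primitive, which is the combined approach referred to in the paragraph above.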
Standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single error in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption; see, for example, traffic analysis, TEMPEST, or Trojan horse. Integrity protection mechanisms such as MACs and digital signatures must be applied to the ciphertext when it is first created, typically on the same device used to compose the message, to protect a message end-to-end along its full transmission path; otherwise, any node between the sender and the encryption agent could potentially tamper with it. Encrypting at the time of creation is only secure if the encryption device itself has correct keys and has not been tampered with. If an endpoint device has been configured to trust a root certificate that an attacker controls, for example, then the attacker can both inspect and tamper with encrypted data by performing a man-in-the-middle attack anywhere along the message's path. The common practice of TLS interception by network operators represents a controlled and institutionally sanctioned form of such an attack, but countries have also attempted to employ such attacks as a form of control and censorship. Ciphertext length and padding Even when encryption correctly hides a message's content and it cannot be tampered with at rest or in transit, a message's length is a form of metadata that can still leak sensitive information about the message. For example, the well-known CRIME and BREACH attacks against HTTPS were side-channel attacks that relied on information leakage via the length of encrypted content. Traffic analysis is a broad class of techniques that often employs message lengths to infer sensitive information about traffic flows by aggregating information about a large number of messages. Padding a message's payload before encrypting it can help obscure the cleartext's true length, at the cost of increasing the ciphertext's size and introducing or increasing bandwidth overhead. Messages may be padded randomly or deterministically, with each approach having different tradeoffs. Encrypting and padding messages to form padded uniform random blobs or PURBs is a practice guaranteeing that the ciphertext leaks no metadata about its cleartext's content, and leaks asymptotically minimal information via its length. See also Cryptosystem Cold boot attack Cryptography standards Cyberspace Electronic Security Act (US) Dictionary attack Disk encryption Encrypted function Enigma machine Export of cryptography Geo-blocking Indistinguishability obfuscation Key management Multiple encryption Physical Layer Encryption Pretty Good Privacy Post-quantum cryptography Rainbow table Rotor machine Side-channel attack Substitution cipher Television encryption Tokenization (data security) References Further reading Kahn, David (1967), The Codebreakers – The Story of Secret Writing. Preneel, Bart (2000), "Advances in Cryptology – EUROCRYPT 2000", Springer Berlin Heidelberg. Sinkov, Abraham (1966), Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America. Tenzer, Theo (2021), SUPER SECRETO – The Third Epoch of Cryptography: Multiple, exponential, quantum-secure and above all, simple and practical Encryption for Everyone, Norderstedt. External links Cryptography Data protection
Encryption
[ "Mathematics", "Engineering" ]
3,438
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
10,296
https://en.wikipedia.org/wiki/Einstein%E2%80%93Podolsky%E2%80%93Rosen%20paradox
The Einstein–Podolsky–Rosen (EPR) paradox is a thought experiment proposed by physicists Albert Einstein, Boris Podolsky and Nathan Rosen, which argues that the description of physical reality provided by quantum mechanics is incomplete. In a 1935 paper titled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing these hidden variables. Resolutions of the paradox have important implications for the interpretation of quantum mechanics. The thought experiment involves a pair of particles prepared in what would later become known as an entangled state. Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is impossible according to the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", which posited that: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality. The "Paradox" paper The term "Einstein–Podolsky–Rosen paradox" or "EPR" arose from a paper written in 1934 after Einstein joined the Institute for Advanced Study, having fled the rise of Nazi Germany. The original paper purports to describe what must happen to "two systems I and II, which we permit to interact", and after some time "we suppose that there is no longer any interaction between the two parts." The EPR description involves "two particles, A and B, [which] interact briefly and then move off in opposite directions." According to Heisenberg's uncertainty principle, it is impossible to measure both the momentum and the position of particle B exactly; however, it is possible to measure the exact position of particle A. By calculation, therefore, with the exact position of particle A known, the exact position of particle B can be known. Alternatively, the exact momentum of particle A can be measured, so the exact momentum of particle B can be worked out. As Manjit Kumar writes, "EPR argued that they had proved that ... [particle] B can have simultaneously exact values of position and momentum. ... Particle B has a position that is real and a momentum that is real. EPR appeared to have contrived a means to establish the exact values of either the momentum or the position of B due to measurements made on particle A, without the slightest possibility of particle B being physically disturbed." 
EPR tried to set up a paradox to question the range of true application of quantum mechanics: quantum theory predicts that both values cannot be known for a particle, and yet the EPR thought experiment purports to show that they must all have determinate values. The EPR paper says: "We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete." The EPR paper ends by saying: "While we have thus shown that the wave function does not provide a complete description of the physical reality, we left open the question of whether or not such a description exists. We believe, however, that such a theory is possible." The 1935 EPR paper condensed the philosophical discussion into a physical argument. The authors claim that given a specific experiment, in which the outcome of a measurement is known before the measurement takes place, there must exist something in the real world, an "element of reality", that determines the measurement outcome. They postulate that these elements of reality are, in modern terminology, local, in the sense that each belongs to a certain point in spacetime. Each element may, again in modern terminology, only be influenced by events that are located in the backward light cone of its point in spacetime (i.e. in the past). These claims are founded on assumptions about nature that constitute what is now known as local realism. Though the EPR paper has often been taken as an exact expression of Einstein's views, it was primarily authored by Podolsky, based on discussions at the Institute for Advanced Study with Einstein and Rosen. Einstein later expressed to Erwin Schrödinger that, "it did not come out as well as I had originally wanted; rather, the essential thing was, so to speak, smothered by the formalism." Einstein would later go on to present an individual account of his local realist ideas. Shortly before the EPR paper appeared in the Physical Review, The New York Times ran a news story about it, under the headline "Einstein Attacks Quantum Theory". The story, which quoted Podolsky, irritated Einstein, who wrote to the Times, "Any information upon which the article 'Einstein Attacks Quantum Theory' in your issue of May 4 is based was given to you without authority. It is my invariable practice to discuss scientific matters only in the appropriate forum and I deprecate advance publication of any announcement in regard to such matters in the secular press." The Times story also sought out comment from physicist Edward Condon, who said, "Of course, a great deal of the argument hinges on just what meaning is to be attached to the word 'reality' in physics." The physicist and historian Max Jammer later noted, "[I]t remains a historical fact that the earliest criticism of the EPR paper – moreover, a criticism that correctly saw in Einstein's conception of physical reality the key problem of the whole issue – appeared in a daily newspaper prior to the publication of the criticized paper itself." Bohr's reply The publication of the paper prompted a response by Niels Bohr, which he published in the same journal (Physical Review), in the same year, using the same title. (This exchange was only one chapter in a prolonged debate between Bohr and Einstein about the nature of quantum reality.) He argued that EPR had reasoned fallaciously. Bohr said measurements of position and of momentum are complementary, meaning the choice to measure one excludes the possibility of measuring the other. 
Consequently, a fact deduced regarding one arrangement of laboratory apparatus could not be combined with a fact deduced by means of the other, and so, the inference of predetermined position and momentum values for the second particle was not valid. Bohr concluded that EPR's "arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete." Einstein's own argument In his own publications and correspondence, Einstein indicated that he was not satisfied with the EPR paper and that Rosen had authored most of it. He later used a different argument to insist that quantum mechanics is an incomplete theory. He explicitly de-emphasized EPR's attribution of "elements of reality" to the position and momentum of particle B, saying that "I couldn't care less" whether the resulting states of particle B allowed one to predict the position and momentum with certainty. For Einstein, the crucial part of the argument was the demonstration of nonlocality, that the choice of measurement done in particle A, either position or momentum, would lead to two different quantum states of particle B. He argued that, because of locality, the real state of particle B could not depend on which kind of measurement was done in A and that the quantum states therefore cannot be in one-to-one correspondence with the real states. Einstein struggled unsuccessfully for the rest of his life to find a theory that could better comply with his idea of locality. Later developments Bohm's variant In 1951, David Bohm proposed a variant of the EPR thought experiment in which the measurements have discrete ranges of possible outcomes, unlike the position and momentum measurements considered by EPR. The EPR–Bohm thought experiment can be explained using electron–positron pairs. Suppose we have a source that emits electron–positron pairs, with the electron sent to destination A, where there is an observer named Alice, and the positron sent to destination B, where there is an observer named Bob. According to quantum mechanics, we can arrange our source so that each emitted pair occupies a quantum state called a spin singlet. The particles are thus said to be entangled. This can be viewed as a quantum superposition of two states, which we call state I and state II. In state I, the electron has spin pointing upward along the z-axis (+z) and the positron has spin pointing downward along the z-axis (−z). In state II, the electron has spin −z and the positron has spin +z. Because it is in a superposition of states, it is impossible without measuring to know the definite state of spin of either particle in the spin singlet. Alice now measures the spin along the z-axis. She can obtain one of two possible outcomes: +z or −z. Suppose she gets +z. Informally speaking, the quantum state of the system collapses into state I. The quantum state determines the probable outcomes of any measurement performed on the system. In this case, if Bob subsequently measures spin along the z-axis, there is 100% probability that he will obtain −z. Similarly, if Alice gets −z, Bob will get +z. There is nothing special about choosing the z-axis: according to quantum mechanics the spin singlet state may equally well be expressed as a superposition of spin states pointing in the x direction. Whatever axis their spins are measured along, they are always found to be opposite. 
In quantum mechanics, the x-spin and z-spin are "incompatible observables", meaning the Heisenberg uncertainty principle applies to alternating measurements of them: a quantum state cannot possess a definite value for both of these variables. Suppose Alice measures the z-spin and obtains +z, so that the quantum state collapses into state I. Now, instead of measuring the z-spin as well, Bob measures the x-spin. According to quantum mechanics, when the system is in state I, Bob's x-spin measurement will have a 50% probability of producing +x and a 50% probability of -x. It is impossible to predict which outcome will appear until Bob actually performs the measurement. Therefore, Bob's positron will have a definite spin when measured along the same axis as Alice's electron, but when measured in the perpendicular axis its spin will be uniformly random. It seems as if information has propagated (faster than light) from Alice's apparatus to make Bob's positron assume a definite spin in the appropriate axis. Bell's theorem In 1964, John Stewart Bell published a paper investigating the puzzling situation at that time: on one hand, the EPR paradox purportedly showed that quantum mechanics was nonlocal, and suggested that a hidden-variable theory could heal this nonlocality. On the other hand, David Bohm had recently developed the first successful hidden-variable theory, but it had a grossly nonlocal character. Bell set out to investigate whether it was indeed possible to solve the nonlocality problem with hidden variables, and found out that first, the correlations shown in both EPR's and Bohm's versions of the paradox could indeed be explained in a local way with hidden variables, and second, that the correlations shown in his own variant of the paradox couldn't be explained by any local hidden-variable theory. This second result became known as the Bell theorem. To understand the first result, consider the following toy hidden-variable theory introduced later by J.J. Sakurai: in it, quantum spin-singlet states emitted by the source are actually approximate descriptions for "true" physical states possessing definite values for the z-spin and x-spin. In these "true" states, the positron going to Bob always has spin values opposite to the electron going to Alice, but the values are otherwise completely random. For example, the first pair emitted by the source might be "(+z, −x) to Alice and (−z, +x) to Bob", the next pair "(−z, −x) to Alice and (+z, +x) to Bob", and so forth. Therefore, if Bob's measurement axis is aligned with Alice's, he will necessarily get the opposite of whatever Alice gets; otherwise, he will get "+" and "−" with equal probability. Bell showed, however, that such models can only reproduce the singlet correlations when Alice and Bob make measurements on the same axis or on perpendicular axes. As soon as other angles between their axes are allowed, local hidden-variable theories become unable to reproduce the quantum mechanical correlations. This difference, expressed using inequalities known as "Bell's inequalities", is in principle experimentally testable. After the publication of Bell's paper, a variety of experiments to test Bell's inequalities were carried out, notably by the group of Alain Aspect in the 1980s; all experiments conducted to date have found behavior in line with the predictions of quantum mechanics. 
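A minimal NumPy sketch of the singlet-state predictions discussed above; it is an illustrative reconstruction, not code from the article, and the choice of measurement angles and basis ordering are assumptions made for the example.

import numpy as np

# Pauli matrices (spin components in units of hbar/2) and the spin-singlet state
# (|+z>|-z> - |-z>|+z>) / sqrt(2), written in the basis (++, +-, -+, --).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin_along(theta: float) -> np.ndarray:
    """Spin component along an axis tilted by angle theta from z, in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def correlation(theta_a: float, theta_b: float) -> float:
    """Expectation value of the product of Alice's and Bob's outcomes (each +1 or -1)."""
    op = np.kron(spin_along(theta_a), spin_along(theta_b))
    return float(np.real(singlet.conj() @ op @ singlet))

print(correlation(0.0, 0.0))         # -1: same axis, outcomes always opposite
print(correlation(0.0, np.pi / 2))   #  0: perpendicular axes, outcomes uncorrelated
print(correlation(0.0, np.pi / 4))   # about -0.707: the -cos(angle) dependence that, by
                                     # Bell's theorem, no local hidden-variable model
                                     # reproduces across all angles

The toy hidden-variable model described above matches the first two cases (same or perpendicular axes) but cannot match the full -cos(angle) dependence, which is the content of Bell's result.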
The present view of the situation is that quantum mechanics flatly contradicts Einstein's philosophical postulate that any acceptable physical theory must fulfill "local realism". The fact that quantum mechanics violates Bell inequalities indicates that any hidden-variable theory underlying quantum mechanics must be non-local; whether this should be taken to imply that quantum mechanics itself is non-local is a matter of continuing debate. Steering Inspired by Schrödinger's treatment of the EPR paradox back in 1935, Howard M. Wiseman et al. formalised it in 2007 as the phenomenon of quantum steering. They defined steering as the situation where Alice's measurements on a part of an entangled state steer Bob's part of the state. That is, Bob's observations cannot be explained by a local hidden state model, where Bob would have a fixed quantum state in his side, that is classically correlated but otherwise independent of Alice's. Locality Locality has several different meanings in physics. EPR describe the principle of locality as asserting that physical processes occurring at one place should have no immediate effect on the elements of reality at another location. At first sight, this appears to be a reasonable assumption to make, as it seems to be a consequence of special relativity, which states that energy can never be transmitted faster than the speed of light without violating causality; however, it turns out that the usual rules for combining quantum mechanical and classical descriptions violate EPR's principle of locality without violating special relativity or causality. Causality is preserved because there is no way for Alice to transmit messages (i.e., information) to Bob by manipulating her measurement axis. Whichever axis she uses, she has a 50% probability of obtaining "+" and 50% probability of obtaining "−", completely at random; according to quantum mechanics, it is fundamentally impossible for her to influence what result she gets. Furthermore, Bob is able to perform his measurement only once: there is a fundamental property of quantum mechanics, the no-cloning theorem, which makes it impossible for him to make an arbitrary number of copies of the electron he receives, perform a spin measurement on each, and look at the statistical distribution of the results. Therefore, in the one measurement he is allowed to make, there is a 50% probability of getting "+" and 50% of getting "−", regardless of whether or not his axis is aligned with Alice's. As a summary, the results of the EPR thought experiment do not contradict the predictions of special relativity. Neither the EPR paradox nor any quantum experiment demonstrates that superluminal signaling is possible; however, the principle of locality appeals powerfully to physical intuition, and Einstein, Podolsky and Rosen were unwilling to abandon it. Einstein derided the quantum mechanical predictions as "spooky action at a distance". The conclusion they drew was that quantum mechanics is not a complete theory. Mathematical formulation Bohm's variant of the EPR paradox can be expressed mathematically using the quantum mechanical formulation of spin. The spin degree of freedom for an electron is associated with a two-dimensional complex vector space V, with each quantum state corresponding to a vector in that space. 
The operators corresponding to the spin along the x, y, and z direction, denoted Sx, Sy, and Sz respectively, can be represented using the Pauli matrices: <math>S_x = \tfrac{\hbar}{2}\sigma_x,\quad S_y = \tfrac{\hbar}{2}\sigma_y,\quad S_z = \tfrac{\hbar}{2}\sigma_z,</math> where <math>\hbar</math> is the reduced Planck constant (or the Planck constant divided by 2π). The eigenstates of Sz are represented as <math>|{+}z\rangle</math> and <math>|{-}z\rangle</math>, and the eigenstates of Sx are represented as <math>|{+}x\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{+}z\rangle + |{-}z\rangle\bigr)</math> and <math>|{-}x\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{+}z\rangle - |{-}z\rangle\bigr)</math>. The vector space of the electron-positron pair is <math>V \otimes V</math>, the tensor product of the electron's and positron's vector spaces. The spin singlet state is <math>|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{+}z\rangle_A \otimes |{-}z\rangle_B - |{-}z\rangle_A \otimes |{+}z\rangle_B\bigr),</math> where the two terms on the right hand side are what we have referred to as state I and state II above. From the above equations, it can be shown that the spin singlet can also be written as <math>|\psi\rangle = -\tfrac{1}{\sqrt{2}}\bigl(|{+}x\rangle_A \otimes |{-}x\rangle_B - |{-}x\rangle_A \otimes |{+}x\rangle_B\bigr),</math> where the terms on the right hand side are what we have referred to as state Ia and state IIa. To illustrate the paradox, we need to show that after Alice's measurement of Sz (or Sx), Bob's value of Sz (or Sx) is uniquely determined and Bob's value of Sx (or Sz) is uniformly random. This follows from the principles of measurement in quantum mechanics. When Sz is measured, the system state collapses into an eigenvector of Sz. If the measurement result is +z, this means that immediately after measurement the system state collapses to <math>|{+}z\rangle_A \otimes |{-}z\rangle_B = |{+}z\rangle_A \otimes \tfrac{1}{\sqrt{2}}\bigl(|{+}x\rangle_B - |{-}x\rangle_B\bigr).</math> Similarly, if Alice's measurement result is −z, the state collapses to <math>|{-}z\rangle_A \otimes |{+}z\rangle_B = |{-}z\rangle_A \otimes \tfrac{1}{\sqrt{2}}\bigl(|{+}x\rangle_B + |{-}x\rangle_B\bigr).</math> The left hand sides of both equations show that the measurement of Sz on Bob's positron is now determined: it will be −z in the first case or +z in the second case. The right hand sides of the equations show that the measurement of Sx on Bob's positron will return, in both cases, +x or −x with probability 1/2 each. See also Aspect's experiment Bohr–Einstein debates: The argument of EPR Coherence Correlation does not imply causation CHSH inequality ER = EPR GHZ experiment Measurement problem Philosophy of information Philosophy of physics Popper's experiment Superdeterminism Quantum entanglement Quantum information Quantum pseudo-telepathy Quantum teleportation Quantum Zeno effect Synchronicity Ward's probability amplitude Notes References Selected papers A. Fine, Do Correlations need to be explained?, in Philosophical Consequences of Quantum Theory: Reflections on Bell's Theorem, edited by Cushing & McMullin (University of Notre Dame Press, 1986). M. Mizuki, A classical interpretation of Bell's inequality. Annales de la Fondation Louis de Broglie 26 683 (2001) P. Pluch, "Theory for Quantum Probability", PhD Thesis University of Klagenfurt (2006) Books Bell, John S. (1987). Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press. Fine, Arthur (1996). The Shaky Game: Einstein, Realism and the Quantum Theory. 2nd ed. Univ. of Chicago Press. Gribbin, John (1984). In Search of Schrödinger's Cat. Black Swan. Lederman, Leon; Teresi, Dick (1993). The God Particle: If the Universe Is the Answer, What Is the Question? Houghton Mifflin Company, pp. 21, 187–189. Selleri, Franco (1988). Quantum Mechanics Versus Local Realism: The Einstein–Podolsky–Rosen Paradox. New York: Plenum Press. External links Stanford Encyclopedia of Philosophy: The Einstein–Podolsky–Rosen Argument in Quantum Theory; 1.2 The argument in the text Internet Encyclopedia of Philosophy: "The Einstein-Podolsky-Rosen Argument and the Bell Inequalities" Stanford Encyclopedia of Philosophy: Abner Shimony (2019) "Bell's Theorem" EPR, Bell & Aspect: The Original References Does Bell's Inequality Principle rule out local theories of quantum mechanics? 
from the Usenet Physics FAQ Theoretical use of EPR in teleportation Effective use of EPR in cryptography EPR experiment with single photons interactive Spooky Actions At A Distance?: Oppenheimer Lecture by Prof. Mermin Original paper EPR paradox Physical paradoxes Quantum measurement Thought experiments in quantum mechanics
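As a numerical cross-check of the spin-singlet algebra given in the mathematical formulation section above, the following NumPy sketch (an editorial illustration; the state and operator conventions follow that section) confirms that, conditional on Alice obtaining +z, Bob's z-spin is completely determined while his x-spin is uniformly random:
<syntaxhighlight lang="python">
import numpy as np

# Eigenstates of S_z and S_x for a single spin-1/2 particle.
up_z = np.array([1, 0], dtype=complex)
down_z = np.array([0, 1], dtype=complex)
up_x = (up_z + down_z) / np.sqrt(2)

# Spin singlet on the tensor-product space: (|+z>|-z> - |-z>|+z>) / sqrt(2).
singlet = (np.kron(up_z, down_z) - np.kron(down_z, up_z)) / np.sqrt(2)

def joint_prob(state, alice_vec, bob_vec):
    """Probability of projecting the pair onto the given product state."""
    amplitude = np.vdot(np.kron(alice_vec, bob_vec), state)
    return abs(amplitude) ** 2

# Condition on Alice measuring S_z and obtaining +z.
p_alice_up = joint_prob(singlet, up_z, up_z) + joint_prob(singlet, up_z, down_z)
p_bob_down_z = joint_prob(singlet, up_z, down_z) / p_alice_up
p_bob_up_x = joint_prob(singlet, up_z, up_x) / p_alice_up

print(f"P(Alice +z)          = {p_alice_up:.2f}")    # 0.50
print(f"P(Bob -z | Alice +z) = {p_bob_down_z:.2f}")  # 1.00: perfectly anti-correlated
print(f"P(Bob +x | Alice +z) = {p_bob_up_x:.2f}")    # 0.50: uniformly random
</syntaxhighlight>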
Einstein–Podolsky–Rosen paradox
[ "Physics" ]
4,398
[ "Quantum measurement", "Quantum mechanics", "Thought experiments in quantum mechanics" ]
10,303
https://en.wikipedia.org/wiki/Evaporation
Evaporation is a type of vaporization that occurs on the surface of a liquid as it changes into the gas phase. A high concentration of the evaporating substance in the surrounding gas significantly slows down evaporation, such as when humidity affects rate of evaporation of water. When the molecules of the liquid collide, they transfer energy to each other based on how they collide. When a molecule near the surface absorbs enough energy to overcome the vapor pressure, it will escape and enter the surrounding air as a gas. When evaporation occurs, the energy removed from the vaporized liquid will reduce the temperature of the liquid, resulting in evaporative cooling. On average, only a fraction of the molecules in a liquid have enough heat energy to escape from the liquid. The evaporation will continue until an equilibrium is reached when the evaporation of the liquid is equal to its condensation. In an enclosed environment, a liquid will evaporate until the surrounding air is saturated. Evaporation is an essential part of the water cycle. The sun (solar energy) drives evaporation of water from oceans, lakes, moisture in the soil, and other sources of water. In hydrology, evaporation and transpiration (which involves evaporation within plant stomata) are collectively termed evapotranspiration. Evaporation of water occurs when the surface of the liquid is exposed, allowing molecules to escape and form water vapor; this vapor can then rise up and form clouds. With sufficient energy, the liquid will turn into vapor. Theory For molecules of a liquid to evaporate, they must be located near the surface, they have to be moving in the proper direction, and have sufficient kinetic energy to overcome liquid-phase intermolecular forces. When only a small proportion of the molecules meet these criteria, the rate of evaporation is low. Since the kinetic energy of a molecule is proportional to its temperature, evaporation proceeds more quickly at higher temperatures. As the faster-moving molecules escape, the remaining molecules have lower average kinetic energy, and the temperature of the liquid decreases. This phenomenon is also called evaporative cooling. This is why evaporating sweat cools the human body. Evaporation also tends to proceed more quickly with higher flow rates between the gaseous and liquid phase and in liquids with higher vapor pressure. For example, laundry on a clothes line will dry (by evaporation) more rapidly on a windy day than on a still day. Three key parts to evaporation are heat, atmospheric pressure (determines the percent humidity), and air movement. On a molecular level, there is no strict boundary between the liquid state and the vapor state. Instead, there is a Knudsen layer, where the phase is undetermined. Because this layer is only a few molecules thick, at a macroscopic scale a clear phase transition interface cannot be seen. Liquids that do not evaporate visibly at a given temperature in a given gas (e.g., cooking oil at room temperature) have molecules that do not tend to transfer energy to each other in a pattern sufficient to frequently give a molecule the heat energy necessary to turn into vapor. However, these liquids are evaporating. It is just that the process is much slower and thus significantly less visible. Evaporative equilibrium If evaporation takes place in an enclosed area, the escaping molecules accumulate as a vapor above the liquid. 
Many of the molecules return to the liquid, with returning molecules becoming more frequent as the density and pressure of the vapor increases. When the process of escape and return reaches an equilibrium, the vapor is said to be "saturated", and no further change in either vapor pressure and density or liquid temperature will occur. For a system consisting of vapor and liquid of a pure substance, this equilibrium state is directly related to the vapor pressure of the substance, as given by the Clausius–Clapeyron relation: <math>\ln\left(\frac{P_2}{P_1}\right) = -\frac{\Delta H_{\mathrm{vap}}}{R}\left(\frac{1}{T_2} - \frac{1}{T_1}\right),</math> where P1, P2 are the vapor pressures at temperatures T1, T2 respectively, ΔHvap is the enthalpy of vaporization, and R is the universal gas constant. The rate of evaporation in an open system is related to the vapor pressure found in a closed system. If a liquid is heated, when the vapor pressure reaches the ambient pressure the liquid will boil. The ability for a molecule of a liquid to evaporate is based largely on the amount of kinetic energy an individual particle may possess. Even at lower temperatures, individual molecules of a liquid can evaporate if they have more than the minimum amount of kinetic energy required for vaporization. Factors influencing the rate of evaporation Note: Air is used here as a common example of the surrounding gas; however, other gases may hold that role. Concentration of the substance evaporating in the air If the air already has a high concentration of the substance evaporating, then the given substance will evaporate more slowly. Flow rate of air This is in part related to the concentration points above. If "fresh" air (i.e., air which is neither already saturated with the substance nor with other substances) is moving over the substance all the time, then the concentration of the substance in the air is less likely to go up with time, thus encouraging faster evaporation. This is the result of the boundary layer at the evaporation surface decreasing with flow velocity, decreasing the diffusion distance in the stagnant layer. The amount of minerals dissolved in the liquid Inter-molecular forces The stronger the forces keeping the molecules together in the liquid state, the more energy a molecule must acquire to escape. This is characterized by the enthalpy of vaporization. Pressure Evaporation happens faster if there is less exertion on the surface keeping the molecules from launching themselves. Surface area A substance that has a larger surface area will evaporate faster, as there are more surface molecules per unit of volume that are potentially able to escape. Temperature of the substance The higher the temperature of the substance, the greater the kinetic energy of the molecules at its surface and therefore the faster the rate of their evaporation. Photomolecular effect The amount of light can also affect evaporation. When photons hit the surface of the liquid, they can make individual molecules break free and enter the air as vapor without any need for additional heat. In the US, the National Weather Service measures, at various outdoor locations nationwide, the actual rate of evaporation from a standardized "pan" open water surface. Others do likewise around the world. The US data is collected and compiled into an annual evaporation map. The measurements range from under 30 to over per year. Because it typically takes place in a complex environment, where 'evaporation is an extremely rare event', the mechanism for the evaporation of water is not completely understood. 
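The Clausius–Clapeyron relation above can be applied numerically. The following is a minimal sketch, not taken from the article; the 40.7 kJ/mol figure is a commonly quoted value for water's enthalpy of vaporization, and the relation is assumed constant over the temperature range:
<syntaxhighlight lang="python">
import math

R = 8.314          # universal gas constant, J/(mol*K)
DH_VAP = 40.7e3    # enthalpy of vaporization of water, J/mol (commonly quoted value)

def vapor_pressure(p1, t1_kelvin, t2_kelvin):
    """Clausius-Clapeyron: ln(P2/P1) = -(dH_vap/R) * (1/T2 - 1/T1)."""
    return p1 * math.exp(-(DH_VAP / R) * (1.0 / t2_kelvin - 1.0 / t1_kelvin))

p_100c = 101_325.0                              # Pa; water boils at 100 C under 1 atm
p_50c = vapor_pressure(p_100c, 373.15, 323.15)  # estimate at 50 C
print(f"Estimated vapor pressure of water at 50 C: {p_50c / 1000:.1f} kPa")
# Prints roughly 13 kPa; the measured value is about 12.3 kPa, the gap reflecting
# the assumption of a temperature-independent enthalpy of vaporization.
</syntaxhighlight>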
Theoretical calculations require prohibitively long and large computer simulations. 'The rate of evaporation of liquid water is one of the principal uncertainties in modern climate modeling.' Thermodynamics Evaporation is an endothermic process, since heat is absorbed during evaporation. Applications Industrial applications include many printing and coating processes; recovering salts from solutions; and drying a variety of materials such as lumber, paper, cloth and chemicals. The use of evaporation to dry or concentrate samples is a common preparatory step for many laboratory analyses such as spectroscopy and chromatography. Systems used for this purpose include rotary evaporators and centrifugal evaporators. When clothes are hung on a laundry line, even though the ambient temperature is below the boiling point of water, water evaporates. This is accelerated by factors such as low humidity, heat (from the sun), and wind. In a clothes dryer, hot air is blown through the clothes, allowing water to evaporate very rapidly. The matki/matka, a traditional Indian porous clay container used for storing and cooling water and other liquids. The botijo, a traditional Spanish porous clay container designed to cool the contained water by evaporation. Evaporative coolers, which can significantly cool a building by simply blowing dry air over a filter saturated with water. Combustion vaporization Fuel droplets vaporize as they receive heat by mixing with the hot gases in the combustion chamber. Heat (energy) can also be received by radiation from any hot refractory wall of the combustion chamber. Pre-combustion vaporization Internal combustion engines rely upon the vaporization of the fuel in the cylinders to form a fuel/air mixture in order to burn well. The chemically correct air/fuel mixture for total burning of gasoline has been determined to be about 15 parts air to one part gasoline or 15/1 by weight. Changing this to a volume ratio yields 8000 parts air to one part gasoline or 8,000/1 by volume. Film deposition Thin films may be deposited by evaporating a substance and condensing it onto a substrate, or by dissolving the substance in a solvent, spreading the resulting solution thinly over a substrate, and evaporating the solvent. The Hertz–Knudsen equation is often used to estimate the rate of evaporation in these instances. See also Atmometer (evaporimeter) Cryophorus Crystallisation Desalination Distillation Eddy covariance flux (a.k.a. eddy correlation, eddy flux) Evaporator Evapotranspiration Flash evaporation Heat of vaporization Hertz–Knudsen equation Hydrology (agriculture) Latent heat Latent heat flux Pan evaporation Sublimation (phase transition) (phase transfer from solid directly to gas) Transpiration References Further reading Has an especially detailed discussion of film deposition by evaporation. External links Atmospheric thermodynamics Meteorological phenomena Materials science Phase transitions Thin film deposition Gases
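The Hertz–Knudsen equation mentioned above in connection with film deposition gives an upper bound on the evaporation flux from a surface. A rough sketch follows, under the assumptions of an evaporation coefficient of 1 and an approximate saturation vapor pressure of 3.17 kPa for water at 25 °C; neither value comes from the article:
<syntaxhighlight lang="python">
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053906660e-27   # kilograms per atomic mass unit

def hertz_knudsen_flux(p_sat, p_ambient, molar_mass_amu, temperature_k, alpha=1.0):
    """Evaporation flux in molecules per m^2 per s:
    J = alpha * (p_sat - p_ambient) / sqrt(2 * pi * m * k_B * T)."""
    m = molar_mass_amu * AMU
    return alpha * (p_sat - p_ambient) / math.sqrt(2 * math.pi * m * K_B * temperature_k)

# Water at 25 C evaporating into a vacuum (p_ambient = 0) as a limiting case.
flux = hertz_knudsen_flux(p_sat=3.17e3, p_ambient=0.0, molar_mass_amu=18.0,
                          temperature_k=298.15)
print(f"Upper-bound evaporation flux: {flux:.2e} molecules per m^2 per s")  # ~1e26
</syntaxhighlight>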
Evaporation
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
2,020
[ "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Earth phenomena", "Gases", "Thin film deposition", "Planes (geometry)", "Coatings", "Phases of matter", "Materials science", "Critical phenomena", "Thin films", "Meteorological phenomena", "nan", "Statis...
10,307
https://en.wikipedia.org/wiki/Equal%20temperament
An equal temperament is a musical temperament or tuning system that approximates just intervals by dividing an octave (or other interval) into steps such that the ratio of the frequencies of any adjacent pair of notes is the same. This system yields pitch steps perceived as equal in size, due to the logarithmic changes in pitch frequency. In classical music and Western music in general, the most common tuning system since the 18th century has been 12 equal temperament (also known as 12 tone equal temperament, or 12 TET, informally abbreviated as 12 equal), which divides the octave into 12 parts, all of which are equal on a logarithmic scale, with a ratio equal to the 12th root of 2 (<math>\sqrt[12]{2} \approx 1.05946</math>). That resulting smallest interval, 1/12 the width of an octave, is called a semitone or half step. In Western countries the term equal temperament, without qualification, generally means 12 TET. In modern times, 12 TET is usually tuned relative to a standard pitch of 440 Hz, called A 440, meaning one note, A4, is tuned to 440 hertz and all other notes are defined as some multiple of semitones away from it, either higher or lower in frequency. The standard pitch has not always been 440 Hz; it has varied considerably and generally risen over the past few hundred years. Other equal temperaments divide the octave differently. For example, some music has been written in 19 TET and 31 TET, while the Arab tone system uses 24 TET. Instead of dividing an octave, an equal temperament can also divide a different interval, like the equal-tempered version of the Bohlen–Pierce scale, which divides the just interval of an octave and a fifth (ratio 3:1), called a "tritave" or a "pseudo-octave" in that system, into 13 equal parts. For tuning systems that divide the octave equally, but are not approximations of just intervals, the term equal division of the octave, or EDO, can be used. Unfretted string ensembles, which can adjust the tuning of all notes except for open strings, and vocal groups, who have no mechanical tuning limitations, sometimes use a tuning much closer to just intonation for acoustic reasons. Other instruments, such as some wind, keyboard, and fretted instruments, often only approximate equal temperament, where technical limitations prevent exact tunings. Some wind instruments that can easily and spontaneously bend their tone, most notably trombones, use tuning similar to string ensembles and vocal groups. General properties In an equal temperament, the distance between two adjacent steps of the scale is the same interval. Because the perceived identity of an interval depends on its ratio, this scale in even steps is a geometric sequence of multiplications. (An arithmetic sequence of intervals would not sound evenly spaced and would not permit transposition to different keys.) Specifically, the smallest interval in an equal-tempered scale is the ratio <math>r = \sqrt[n]{p}</math> (so that <math>r^n = p</math>), where the ratio r divides the ratio p (typically the octave, which is 2:1) into n equal parts. (See Twelve-tone equal temperament below.) Scales are often measured in cents, which divide the octave into 1200 equal intervals (each called a cent). This logarithmic scale makes comparison of different tuning systems easier than comparing ratios, and has considerable use in ethnomusicology. 
The basic step in cents for any equal temperament can be found by taking the width of the repeating interval p above in cents (usually the octave, which is 1200 cents wide), called w below, and dividing it into n parts: <math>c = \frac{w}{n}.</math> In musical analysis, material belonging to an equal temperament is often given an integer notation, meaning a single integer is used to represent each pitch. This simplifies and generalizes discussion of pitch material within the temperament in the same way that taking the logarithm of a multiplication reduces it to addition. Furthermore, by applying the modular arithmetic where the modulus is the number of divisions of the octave (usually 12), these integers can be reduced to pitch classes, which removes the distinction (or acknowledges the similarity) between pitches of the same name, e.g., C is 0 regardless of octave register. The MIDI encoding standard uses integer note designations. General formulas for the equal-tempered interval Twelve-tone equal temperament 12 tone equal temperament, which divides the octave into 12 intervals of equal size, is the musical system most widely used today, especially in Western music. History The two figures frequently credited with the achievement of exact calculation of equal temperament are Zhu Zaiyu (also romanized as Chu-Tsaiyu) in 1584 and Simon Stevin in 1585. According to F.A. Kuttner, a critic of giving credit to Zhu, it is known that Zhu "presented a highly precise, simple and ingenious method for arithmetic calculation of equal temperament mono-chords in 1584" and that Stevin "offered a mathematical definition of equal temperament plus a somewhat less precise computation of the corresponding numerical values in 1585 or later." The developments occurred independently. Kenneth Robinson credits the invention of equal temperament to Zhu and provides textual quotations as evidence. In 1584 Zhu wrote: I have founded a new system. I establish one foot as the number from which the others are to be extracted, and using proportions I extract them. Altogether one has to find the exact figures for the pitch-pipers in twelve operations. Kuttner disagrees and remarks that his claim "cannot be considered correct without major qualifications". Kuttner proposes that neither Zhu nor Stevin achieved equal temperament and that neither should be considered its inventor. China Chinese theorists had previously come up with approximations for 12 TET, but Zhu was the first person to mathematically solve 12 tone equal temperament, which he described in two books, published in 1580 and 1584. Needham also gives an extended account. Zhu obtained his result by dividing the length of string and pipe successively by <math>\sqrt[12]{2} \approx 1.059463</math>, and for pipe length by <math>\sqrt[24]{2}</math>, such that after 12 divisions (an octave), the length was halved. Zhu created several instruments tuned to his system, including bamboo pipes. Europe Some of the first Europeans to advocate equal temperament were lutenists Vincenzo Galilei, Giacomo Gorzanis, and Francesco Spinacino, all of whom wrote music in it. Simon Stevin was the first to develop 12 TET based on the twelfth root of two, which he described in Van de Spiegheling der singconst, published posthumously in 1884. Plucked instrument players (lutenists and guitarists) generally favored equal temperament, while others were more divided. In the end, 12-tone equal temperament won out. 
This allowed enharmonic modulation, new styles of symmetrical tonality and polytonality, atonal music such as that written with the 12-tone technique or serialism, and jazz (at least its piano component) to develop and flourish. Mathematics In 12 tone equal temperament, which divides the octave into 12 equal parts, the width of a semitone, i.e. the frequency ratio of the interval between two adjacent notes, is the twelfth root of two: <math>\sqrt[12]{2} = 2^{1/12} \approx 1.059463.</math> This interval is divided into 100 cents. Calculating absolute frequencies To find the frequency, <math>f_n</math>, of a note in 12 TET, the following formula may be used: <math>f_n = f_a \cdot \left(\sqrt[12]{2}\right)^{\,n-a}.</math> In this formula <math>f_n</math> represents the pitch, or frequency (usually in hertz), you are trying to find. <math>f_a</math> is the frequency of a reference pitch. The index numbers n and a are the labels assigned to the desired pitch (n) and the reference pitch (a). These two numbers are from a list of consecutive integers assigned to consecutive semitones. For example, A4 (the reference pitch) is the 49th key from the left end of a piano (tuned to 440 Hz), and C4 (middle C) and F♯4 are the 40th and 46th keys, respectively. These numbers can be used to find the frequency of C4 and F♯4: <math>f_{40} = 440 \cdot \left(\sqrt[12]{2}\right)^{40-49} \approx 261.626\ \mathrm{Hz}</math> and <math>f_{46} = 440 \cdot \left(\sqrt[12]{2}\right)^{46-49} \approx 369.994\ \mathrm{Hz}.</math> Converting frequencies to their equal temperament counterparts To convert a frequency f (in Hz) to its equal 12 TET counterpart, the following formula can be used: <math>E_n = E_a \cdot 2^{\operatorname{round}\left(12\log_2\left(f/E_a\right)\right)/12},</math> where in general <math>E_n</math> is the frequency of a pitch in equal temperament, and <math>E_a</math> is the frequency of a reference pitch. For example, if we let the reference pitch equal 440 Hz, measured frequencies of 660 Hz and 550 Hz have the equal-tempered counterparts 659.255 Hz (7 semitones above the reference) and 554.365 Hz (4 semitones above the reference), respectively. Comparison with just intonation The intervals of 12 TET closely approximate some intervals in just intonation. The fifths and fourths are almost indistinguishably close to just intervals, while thirds and sixths are further away. In the following table, the sizes of various just intervals are compared to their equal-tempered counterparts, given as a ratio as well as cents.
{| class="wikitable" style="margin:auto;text-align:center;"
|-
! Interval Name
! Exact value in 12 TET
! Decimal value in 12 TET
! Cents in 12 TET
! Just intonation interval
! Cents in just intonation
! 12 TET cents tuning error
|-
| Unison (C)
| 2^(0/12) = 1
| 1.000000
| 0
| 1/1 = 1
| 0
| 0
|-
| Minor second (C♯/D♭)
| 2^(1/12)
| 1.059463
| 100
| 16/15 = 1.066667
| 111.73
| −11.73
|-
| Major second (D)
| 2^(2/12)
| 1.122462
| 200
| 9/8 = 1.125
| 203.91
| −3.91
|-
| Minor third (D♯/E♭)
| 2^(3/12)
| 1.189207
| 300
| 6/5 = 1.2
| 315.64
| −15.64
|-
| Major third (E)
| 2^(4/12)
| 1.259921
| 400
| 5/4 = 1.25
| 386.31
| +13.69
|-
| Perfect fourth (F)
| 2^(5/12)
| 1.334840
| 500
| 4/3 = 1.333333
| 498.04
| +1.96
|-
| Tritone (F♯/G♭)
| 2^(6/12)
| 1.414214
| 600
| 45/32 = 1.40625
| 590.22
| +9.78
|-
| Perfect fifth (G)
| 2^(7/12)
| 1.498307
| 700
| 3/2 = 1.5
| 701.96
| −1.96
|-
| Minor sixth (G♯/A♭)
| 2^(8/12)
| 1.587401
| 800
| 8/5 = 1.6
| 813.69
| −13.69
|-
| Major sixth (A)
| 2^(9/12)
| 1.681793
| 900
| 5/3 = 1.666667
| 884.36
| +15.64
|-
| Minor seventh (A♯/B♭)
| 2^(10/12)
| 1.781797
| 1000
| 16/9 = 1.777778
| 996.09
| +3.91
|-
| Major seventh (B)
| 2^(11/12)
| 1.887749
| 1100
| 15/8 = 1.875
| 1088.27
| +11.73
|-
| Octave (C)
| 2^(12/12) = 2
| 2.000000
| 1200
| 2/1 = 2
| 1200.00
| 0
|}
Seven-tone equal division of the fifth Violins, violas, and cellos are tuned in perfect fifths (G–D–A–E for violins and C–G–D–A for violas and cellos), which suggests that their semitone ratio is slightly higher than in conventional 12 tone equal temperament. Because a perfect fifth is in 3:2 relation with its base tone, and this interval comprises seven steps, each tone is in the ratio of <math>\sqrt[7]{3/2}</math> to the next (100.28 cents), which provides for a perfect fifth with ratio of 3:2, but a slightly widened octave with a ratio of approximately 2.0039:1 rather than the usual 2:1, because 12 perfect fifths do not equal seven octaves. During actual play, however, violinists choose pitches by ear, and only the four unstopped pitches of the strings are guaranteed to exhibit this 3:2 ratio. 
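The frequency formula and the cents comparison above can be reproduced in a few lines of code. A minimal sketch follows; the piano key numbers and the A440 reference are those used in the text, while the just ratios are standard 5-limit choices assumed for the comparison:
<syntaxhighlight lang="python">
import math

A4_FREQ = 440.0   # reference pitch A4
A4_KEY = 49       # A4 is the 49th key of a standard piano

def piano_key_frequency(key_number, ref_freq=A4_FREQ, ref_key=A4_KEY):
    """12-TET frequency: f_n = f_a * 2 ** ((n - a) / 12)."""
    return ref_freq * 2 ** ((key_number - ref_key) / 12)

def cents(ratio):
    """Size of a frequency ratio in cents: 1200 * log2(ratio)."""
    return 1200 * math.log2(ratio)

print(f"Middle C (key 40): {piano_key_frequency(40):.3f} Hz")   # ~261.626 Hz
print(f"F#4      (key 46): {piano_key_frequency(46):.3f} Hz")   # ~369.994 Hz

# 12-TET versus common just-intonation ratios (positive error = 12-TET is wider).
comparisons = [("perfect fifth", 3, 2, 700), ("major third", 5, 4, 400), ("minor third", 6, 5, 300)]
for name, num, den, tet_cents in comparisons:
    just_cents = cents(num / den)
    print(f"{name}: just {just_cents:.2f} cents, 12-TET {tet_cents} cents, "
          f"error {tet_cents - just_cents:+.2f} cents")
</syntaxhighlight>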
Other equal temperaments Five-, seven-, and nine-tone temperaments in ethnomusicology Five- and seven-tone equal temperament (5 TET and 7 TET), with 240 cent and 171 cent steps, respectively, are fairly common. 5 TET and 7 TET mark the endpoints of the syntonic temperament's valid tuning range. In 5 TET the tempered perfect fifth is 720 cents wide (at the top of the tuning continuum), and marks the endpoint on the tuning continuum at which the width of the minor second shrinks to a width of 0 cents. In 7 TET the tempered perfect fifth is 686 cents wide (at the bottom of the tuning continuum), and marks the endpoint on the tuning continuum, at which the minor second expands to be as wide as the major second (at 171 cents each). 5 tone and 9 tone equal temperament According to Kunst (1949), Indonesian gamelans are tuned to 5 TET, but according to Hood (1966) and McPhee (1966) their tuning varies widely, and according to Tenzer (2000) they contain stretched octaves. It is now accepted that of the two primary tuning systems in gamelan music, slendro and pelog, only slendro somewhat resembles five-tone equal temperament, while pelog is highly unequal; however, in 1972 Surjodiningrat, Sudarjana and Susanto analyze pelog as equivalent to 9 TET (133-cent steps). 7-tone equal temperament A Thai xylophone measured by Morton in 1974 "varied only plus or minus 5 cents" from 7 TET. According to Morton, "Thai instruments of fixed pitch are tuned to an equidistant system of seven pitches per octave ... As in Western traditional music, however, all pitches of the tuning system are not used in one mode (often referred to as 'scale'); in the Thai system five of the seven are used in principal pitches in any mode, thus establishing a pattern of nonequidistant intervals for the mode." A South American Indian scale from a pre-instrumental culture measured by Boiles in 1969 featured 175 cent seven-tone equal temperament, which stretches the octave slightly, as with instrumental gamelan music. Chinese music has traditionally used 7 TET. Various equal temperaments 19 EDO Many instruments have been built using 19 EDO tuning. Equivalent to 1/3 comma meantone, it has a slightly flatter perfect fifth (at 695 cents), but its minor third and major sixth are less than one-fifth of a cent away from just, with the lowest EDO that produces a better minor third and major sixth than 19 EDO being 232 EDO. Its perfect fourth (at 505 cents) is seven cents sharper than just intonation's and five cents sharper than 12 EDO's. 22 EDO 22 EDO is one of the most accurate EDOs to represent superpyth temperament (where 7:4 and 16:9 are the same interval) and is near the optimal generator for porcupine temperament. The fifths are so sharp that the major and minor thirds we get from stacking fifths will be the supermajor third (9/7) and subminor third (7/6). One step closer to each other are the classical major and minor thirds (5/4 and 6/5). 23 EDO 23 EDO is the largest EDO that fails to approximate the 3rd, 5th, 7th, and 11th harmonics (3:2, 5:4, 7:4, 11:8) within 20 cents, but it does approximate some ratios between them (such as the 6:5 minor third) very well, making it attractive to microtonalists seeking unusual harmonic territory. 24 EDO 24 EDO, the quarter-tone scale, is particularly popular, as it represents a convenient access point for composers conditioned on standard Western 12 EDO pitch and notation practices who are also interested in microtonality. 
Because 24 EDO contains all the pitches of 12 EDO, musicians employ the additional colors without losing any tactics available in 12 tone harmony. That 24 is a multiple of 12 also makes 24 EDO easy to achieve instrumentally by employing two traditional 12 EDO instruments tuned a quarter-tone apart, such as two pianos, which also allows each performer (or one performer playing a different piano with each hand) to read familiar 12 tone notation. Various composers, including Charles Ives, experimented with music for quarter-tone pianos. 24 EDO also approximates the 11th and 13th harmonics very well, unlike 12 EDO. 26 EDO 26 is the denominator of a convergent to log2(7), tuning the 7th harmonic (7:4) with less than half a cent of error. Although it is a meantone temperament, it is a very flat one, with four of its perfect fifths producing a major third 17 cents flat (equated with the 11:9 neutral third). 26 EDO has two minor thirds and two minor sixths and could be an alternate temperament for barbershop harmony. 27 EDO 27 is the lowest number of equal divisions of the octave that uniquely represents all intervals involving the first eight harmonics. It tempers out the septimal comma but not the syntonic comma. 29 EDO 29 is the lowest number of equal divisions of the octave whose perfect fifth is closer to just than in 12 EDO, in which the fifth is 1.5 cents sharp instead of 2 cents flat. Its classic major third is roughly as inaccurate as 12 EDO, but is tuned 14 cents flat rather than 14 cents sharp. It also tunes the 7th, 11th, and 13th harmonics flat by roughly the same amount, allowing 29 EDO to match intervals such as 7:5, 11:7, and 13:11 very accurately. Cutting all 29 intervals in half produces 58 EDO, which allows for lower errors for some just tones. 31 EDO 31 EDO was advocated by Christiaan Huygens and Adriaan Fokker and represents a rectification of quarter-comma meantone into an equal temperament. 31 EDO does not have as accurate a perfect fifth as 12 EDO (like 19 EDO), but its major thirds and minor sixths are less than 1 cent away from just. It also provides good matches for harmonics up to 11, of which the seventh harmonic is particularly accurate. 34 EDO 34 EDO gives slightly lower total combined errors of approximation to 3:2, 5:4, 6:5, and their inversions than 31 EDO does, despite having a slightly less accurate fit for 5:4. 34 EDO does not accurately approximate the seventh harmonic or ratios involving 7, and is not meantone since its fifth is sharp instead of flat. It enables the 600 cent tritone, since 34 is an even number. 41 EDO 41 is the next EDO with a better perfect fifth than 29 EDO and 12 EDO. Its classical major third is also more accurate, at only six cents flat. It is not a meantone temperament, so it distinguishes 10:9 and 9:8, along with the classic and Pythagorean major thirds, unlike 31 EDO. It is more accurate in the 13 limit than 31 EDO. 46 EDO 46 EDO provides major thirds and perfect fifths that are both slightly sharp of just, and many say that this gives major triads a characteristic bright sound. The prime harmonics up to 17 are all within 6 cents of accuracy, with 10:9 and 9:5 a fifth of a cent away from pure. As it is not a meantone system, it distinguishes 10:9 and 9:8. 53 EDO 53 EDO has only had occasional use, but is better at approximating the traditional just consonances than 12, 19 or 31 EDO. Its extremely accurate perfect fifths make it equivalent to an extended Pythagorean tuning, as 53 is the denominator of a convergent to log2(3). 
With its accurate cycle of fifths and multi-purpose comma step, 53 EDO has been used in Turkish music theory. It is not a meantone temperament, which puts good thirds within easy reach by stacking fifths; instead, like all schismatic temperaments, the very consonant thirds are represented by a Pythagorean diminished fourth (C–F♭), reached by stacking eight perfect fourths. It also tempers out the kleisma, allowing its fifth to be reached by a stack of six minor thirds (6:5). 58 EDO 58 equal temperament is a duplication of 29 EDO, which it contains as an embedded temperament. Like 29 EDO it can match intervals such as 7:4, 7:5, 11:7, and 13:11 very accurately, as well as better approximating just thirds and sixths. 72 EDO 72 EDO approximates many just intonation intervals well, providing near-just equivalents to the 3rd, 5th, 7th, and 11th harmonics. 72 EDO has been taught, written and performed in practice by Joe Maneri and his students (whose atonal inclinations typically avoid any reference to just intonation whatsoever). As it is a multiple of 12, 72 EDO can be considered an extension of 12 EDO, containing six copies of 12 EDO starting on different pitches, three copies of 24 EDO, and two copies of 36 EDO. 96 EDO 96 EDO approximates all intervals within 6.25 cents, which is barely distinguishable. As an eightfold multiple of 12, it can be used fully like the common 12 EDO. It has been advocated by several composers, especially Julián Carrillo. Other equal divisions of the octave that have found occasional use include 13 EDO, 15 EDO, 17 EDO, and 55 EDO. 2, 5, 12, 41, 53, 306, 665 and 15601 are denominators of first convergents of log2(3), so 2, 5, 12, 41, 53, 306, 665 and 15601 twelfths (and fifths), being in correspondent equal temperaments equal to an integer number of octaves, are better approximations of 2, 5, 12, 41, 53, 306, 665 and 15601 just twelfths/fifths than in any equal temperament with fewer tones. 1, 2, 3, 5, 7, 12, 29, 41, 53, 200, ... is the sequence of divisions of the octave that provides better and better approximations of the perfect fifth. Related sequences containing divisions approximating other just intervals are listed in a footnote. Equal temperaments of non-octave intervals The equal-tempered version of the Bohlen–Pierce scale consists of the ratio 3:1 (1902 cents), conventionally a perfect fifth plus an octave (that is, a perfect twelfth), called in this theory a tritave, and split into 13 equal parts. This provides a very close match to justly tuned ratios consisting only of odd numbers. Each step is 146.3 cents, or the ratio <math>\sqrt[13]{3}</math>. Wendy Carlos created three unusual equal temperaments after a thorough study of the properties of possible temperaments with step size between 30 and 120 cents. These were called alpha, beta, and gamma. They can be considered equal divisions of the perfect fifth. Each of them provides a very good approximation of several just intervals. Their step sizes: alpha: <math>\sqrt[9]{3/2}</math> (78.0 cents), beta: <math>\sqrt[11]{3/2}</math> (63.8 cents), gamma: <math>\sqrt[20]{3/2}</math> (35.1 cents). Alpha and beta may be heard on the title track of Carlos's 1986 album Beauty in the Beast. Proportions between semitone and whole tone In this section, semitone and whole tone may not have their usual 12 EDO meanings, as it discusses how they may be tempered in different ways from their just versions to produce desired relationships. Let the number of steps in a semitone be s, and the number of steps in a tone be t. 
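The claim above that 2, 5, 12, 41, 53, 306, 665 and 15601 are denominators of convergents of log2(3) can be checked directly. A small editorial sketch, using a generic continued-fraction routine:
<syntaxhighlight lang="python">
from fractions import Fraction
import math

def convergents(x, count):
    """First `count` continued-fraction convergents p/q of the real number x."""
    h_prev, h = 1, int(math.floor(x))
    k_prev, k = 0, 1
    frac = x - math.floor(x)
    result = [Fraction(h, k)]
    for _ in range(count - 1):
        if frac == 0:
            break
        x = 1.0 / frac
        a = int(math.floor(x))
        frac = x - a
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        result.append(Fraction(h, k))
    return result

# Each convergent p/q to log2(3) means q twelfths (fifths plus octaves) come very
# close to p octaves, so q-EDO carries an exceptionally accurate fifth.
convs = convergents(math.log2(3), 10)
print([c.denominator for c in convs])
# [1, 1, 2, 5, 12, 41, 53, 306, 665, 15601]
</syntaxhighlight>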
There is exactly one family of equal temperaments that fixes the semitone to any proper fraction of a whole tone, while keeping the notes in the right order (meaning that, for example, C, C♯, D, D♯, and E are in ascending order if they preserve their usual relationships to C). That is, fixing q to a proper fraction in the relationship <math>s = q\,t</math> also defines a unique family of one equal temperament and its multiples that fulfil this relationship. For example, where k is an integer, 12k EDO sets q = 1/2, 19k EDO sets q = 1/3, and 31k EDO sets q = 2/5. The smallest multiples in these families (e.g. 12, 19 and 31 above) have the additional property of having no notes outside the circle of fifths. (This is not true in general; in 24 EDO, the half-sharps and half-flats are not in the circle of fifths generated starting from C.) The extreme cases are 7k EDO, where q = 0 and the semitone becomes a unison, and 5k EDO, where q = 1 and the semitone and tone are the same interval. Once one knows how many steps a semitone and a tone are in this equal temperament, one can find the number of steps it has in the octave. An equal temperament with the above properties (including having no notes outside the circle of fifths) divides the octave into 7t − 2s steps and the perfect fifth into 4t − s steps. If there are notes outside the circle of fifths, one must then multiply these results by n, the number of nonoverlapping circles of fifths required to generate all the notes (e.g., two in 24 EDO, six in 72 EDO). (One must take the small semitone for this purpose: 19 EDO has two semitones, one being 1/3 of a tone and the other being 2/3. Similarly, 31 EDO has two semitones, one being 2/5 of a tone and the other being 3/5.) The smallest of these families is the 12k EDO family, and in particular, 12 EDO is the smallest equal temperament with the above properties. Additionally, it makes the semitone exactly half a whole tone, the simplest possible relationship. These are some of the reasons 12 EDO has become the most commonly used equal temperament. (Another reason is that 12 EDO is the smallest equal temperament to closely approximate 5 limit harmony, the next-smallest being 19 EDO.) Each choice of fraction q for the relationship results in exactly one equal temperament family, but the converse is not true: 47 EDO has two different semitones, where one is 1/7 of a tone and the other is 8/9 of a tone, which are not complements of each other like in 19 EDO (1/3 and 2/3). Taking each semitone results in a different choice of perfect fifth. Related tuning systems Equal temperament systems can be thought of in terms of the spacing of three intervals found in just intonation, most of whose chords are harmonically perfectly in tune—a good property not quite achieved between almost all pitches in almost all equal temperaments. Most just chords sound amazingly consonant, and most equal-tempered chords sound at least slightly dissonant. In C major those three intervals are: the greater tone (9:8), the interval from C:D, F:G, and A:B; the lesser tone (10:9), the interval from D:E and G:A; the diatonic semitone (16:15), the interval from E:F and B:C. Analyzing an equal temperament in terms of how it modifies or adapts these three intervals provides a quick way to evaluate how consonant various chords can possibly be in that temperament, based on how distorted these intervals are. Regular diatonic tunings The diatonic tuning in 12 tone equal temperament can be generalized to any regular diatonic tuning dividing the octave as a sequence of steps T t s T t T s (or some circular shift or "rotation" of it). To be called a regular diatonic tuning, each of the two semitones (s) must be smaller than either of the tones (greater tone, T, and lesser tone, t). 
The comma is implicit as the size ratio between the greater and lesser tones: Expressed as frequencies or as cents . The notes in a regular diatonic tuning are connected in a "spiral of fifths" that does not close (unlike the circle of fifths in Starting on the subdominant (in the key of C) there are three perfect fifths in a row—–, –, and –—each a composite of some permutation of the smaller intervals The three in-tune fifths are interrupted by the grave fifth – (grave means "flat by a comma"), followed by another perfect fifth, –, and another grave fifth, –, and then restarting in the sharps with –; the same pattern repeats through the sharp notes, then the double-sharps, and so on, indefinitely. But each octave of all-natural or all-sharp or all-double-sharp notes flattens by two commas with every transition from naturals to sharps, or single sharps to double sharps, etc. The pattern is also reverse-symmetric in the flats: Descending by fourths the pattern reciprocally sharpens notes by two commas with every transition from natural notes to flattened notes, or flats to double flats, etc. If left unmodified, the two grave fifths in each block of all-natural notes, or all-sharps, or all-flat notes, are "wolf" intervals: Each of the grave fifths out of tune by a diatonic comma. Since the comma, , expands the lesser tone into the greater tone, a just octave can be broken up into a sequence (or a circular shift of it) of 7 diatonic semitones , 5 chromatic semitones , and 3 commas Various equal temperaments alter the interval sizes, usually breaking apart the three commas and then redistributing their parts into the seven diatonic semitones , or into the five chromatic semitones , or into both and , with some fixed proportion for each type of semitone. The sequence of intervals , , and can be repeatedly appended to itself into a greater spiral of 12 fifths, and made to connect at its far ends by slight adjustments to the size of one or several of the intervals, or left unmodified with occasional less-than-perfect fifths, flat by a comma. Morphing diatonic tunings into EDO Various equal temperaments can be understood and analyzed as having made adjustments to the sizes of and subdividing the three intervals—, , and , or at finer resolution, their constituents , , and . An equal temperament can be created by making the sizes of the major and minor tones (, ) the same (say, by setting , with the others expanded to still fill out the octave), and both semitones ( and ) the same, then 12 equal semitones, two per tone, result. In , the semitone, , is exactly half the size of the same-size whole tones = . Some of the intermediate sizes of tones and semitones can also be generated in equal temperament systems, by modifying the sizes of the comma and semitones. One obtains in the limit as the size of and tend to zero, with the octave kept fixed, and in the limit as and tend to zero; is of course, the case and For instance: and There are two extreme cases that bracket this framework: When and reduce to zero with the octave size kept fixed, the result is a 5 tone equal temperament. As the gets larger (and absorbs the space formerly used for the comma ), eventually the steps are all the same size, and the result is seven-tone equal temperament. These two extremes are not included as "regular" diatonic tunings. If the diatonic semitone is set double the size of the chromatic semitone, i.e. 
(in cents) and the result is with one step for the chromatic semitone , two steps for the diatonic semitone , three steps for the tones = , and the total number of steps 19 steps. The imbedded 12 tone sub-system closely approximates the historically important meantone system. If the chromatic semitone is two-thirds the size of the diatonic semitone, i.e. with the result is 31 , with two steps for the chromatic semitone, three steps for the diatonic semitone, and five steps for the tone, where 31 steps. The imbedded 12 tone sub-system closely approximates the historically important meantone. If the chromatic semitone is three-fourths the size of the diatonic semitone, i.e. with the result is 43 , with three steps for the chromatic semitone, four steps for the diatonic semitone, and seven steps for the tone, where 43. The imbedded 12 tone sub-system closely approximates meantone. If the chromatic semitone is made the same size as three commas, (in cents, in frequency ) the diatonic the same as five commas, that makes the lesser tone eight commas and the greater tone nine, Hence for 53 steps of one comma each. The comma size / step size is exactly, or the syntonic comma. It is an exceedingly close approximation to 5-limit just intonation and Pythagorean tuning, and is the basis for Turkish music theory. See also Just intonation Musical acoustics(the physics of music) Music and mathematics Microtuner Microtonal music Piano tuning List of meantone intervals Diatonic and chromatic Electronic tuner Musical tuning Footnotes References Sources As cited by Further reading — A foundational work on acoustics and the perception of sound. Especially the material in Appendix XX: Additions by the translator, pages 430–556, (pdf pages 451–577) (see also wiki article On Sensations of Tone) External links An Introduction to Historical Tunings by Kyle Gann Xenharmonic wiki on EDOs vs. Equal Temperaments Huygens-Fokker Foundation Centre for Microtonal Music A.Orlandini: Music Acoustics "Temperament" from A supplement to Mr. Chambers's cyclopædia (1753) Barbieri, Patrizio. Enharmonic instruments and music, 1470–1900. (2008) Latina, Il Levante Libreria Editrice Fractal Microtonal Music, Jim Kukula. All existing 18th century quotes on J.S. Bach and temperament Dominic Eckersley: "Rosetta Revisited: Bach's Very Ordinary Temperament" Well Temperaments, based on the Werckmeister Definition FAVORED CARDINALITIES OF SCALES by PETER BUCH Chinese discoveries
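The step-count relations discussed under "Proportions between semitone and whole tone" above can be tabulated programmatically. A minimal sketch, assuming (as in that section) that s counts the steps in the small chromatic semitone and t the steps in the whole tone, so that the octave spans 7t − 2s steps and the perfect fifth 4t − s steps; the (t, s) pairs below match the 12, 19, 31 and 43 step counts given above:
<syntaxhighlight lang="python">
def edo_from_steps(t, s):
    """Octave and fifth sizes, in steps, for a tuning whose whole tone is t steps
    and whose chromatic semitone is s steps."""
    octave = 7 * t - 2 * s
    fifth = 4 * t - s
    return octave, fifth

for t, s in [(2, 1), (3, 1), (5, 2), (7, 3)]:
    octave, fifth = edo_from_steps(t, s)
    fifth_cents = 1200 * fifth / octave
    print(f"tone = {t} steps, chromatic semitone = {s} steps -> {octave} EDO, "
          f"fifth = {fifth} steps ({fifth_cents:.1f} cents)")
</syntaxhighlight>
Running it lists 12, 19, 31 and 43 EDO with fifths of 700.0, 694.7, 696.8 and 697.7 cents, consistent with the fifth sizes quoted for these temperaments earlier in the article.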
Equal temperament
[ "Physics" ]
6,720
[ "Physical quantities", "Musical symmetry", "Logarithmic scales of measurement", "Equal temperaments", "Symmetry" ]
10,313
https://en.wikipedia.org/wiki/E.%20O.%20Wilson
Edward Osborne Wilson (June 10, 1929 – December 26, 2021) was an American biologist, naturalist, ecologist, and entomologist known for developing the field of sociobiology. Born in Alabama, Wilson found an early interest in nature and frequented the outdoors. At age seven, he was partially blinded in a fishing accident; due to his reduced sight, Wilson resolved to study entomology. After graduating from the University of Alabama, Wilson transferred to complete his dissertation at Harvard University, where he distinguished himself in multiple fields. In 1956, he co-authored a paper defining the theory of character displacement. In 1967, he developed the theory of island biogeography with Robert MacArthur. Wilson was the Pellegrino University Research Professor Emeritus in Entomology for the Department of Organismic and Evolutionary Biology at Harvard University, a lecturer at Duke University, and a fellow of the Committee for Skeptical Inquiry. The Royal Swedish Academy awarded Wilson the Crafoord Prize. He was a humanist laureate of the International Academy of Humanism. He was a two-time winner of the Pulitzer Prize for General Nonfiction (for On Human Nature in 1979, and The Ants in 1991) and a New York Times bestselling author for The Social Conquest of Earth, Letters to a Young Scientist, and The Meaning of Human Existence. Wilson's work received both praise and criticism during his lifetime. His book Sociobiology was a particular flashpoint for controversy, and drew criticism from the Sociobiology Study Group. Wilson's interpretation of the theory of evolution resulted in a widely reported dispute with Richard Dawkins about multilevel selection theory. Examinations of his letters after his death revealed that he had supported the psychologist J. Philippe Rushton, whose work on race and intelligence is widely regarded by the scientific community as deeply flawed and racist. Early life Edward Osborne Wilson was born on June 10, 1929, in Birmingham, Alabama. He was the only child of Inez Linnette Freeman and Edward Osborne Wilson Sr. According to his autobiography, Naturalist, he grew up in various towns in the Southern United States which included Mobile, Decatur, and Pensacola. From an early age, he was interested in natural history. His father was an alcoholic who eventually committed suicide. His parents allowed him to bring home black widow spiders and keep them on the porch. They divorced when he was seven years old. In the same year that his parents divorced, Wilson blinded himself in his right eye in a fishing accident. Despite the prolonged pain, he did not stop fishing. He did not complain because he was anxious to stay outdoors, and never sought medical treatment. Several months later, his right pupil clouded over with a cataract. He was admitted to Pensacola Hospital to have the lens removed. Wilson writes, in his autobiography, that the "surgery was a terrifying [19th] century ordeal". Wilson retained full sight in his left eye, with a vision of 20/10. The 20/10 vision prompted him to focus on "little things": "I noticed butterflies and ants more than other kids did, and took an interest in them automatically." Although he had lost his stereoscopic vision, he could still see fine print and the hairs on the bodies of small insects. His reduced ability to observe mammals and birds led him to concentrate on insects. At the age of nine, Wilson undertook his first expeditions at Rock Creek Park in Washington, D.C. 
He began to collect insects and he gained a passion for butterflies. He would capture them using nets made with brooms, coat hangers, and cheesecloth bags. Going on these expeditions led to Wilson's fascination with ants. He describes in his autobiography how one day he pulled the bark of a rotting tree away and discovered citronella ants underneath. The worker ants he found were "short, fat, brilliant yellow, and emitted a strong lemony odor". Wilson said the event left a "vivid and lasting impression". He also earned the Eagle Scout award and served as Nature Director of his Boy Scouts summer camp. At age 18, intent on becoming an entomologist, he began by collecting flies, but the shortage of insect pins during World War II caused him to switch to ants, which could be stored in vials. With the encouragement of Marion R. Smith, a myrmecologist from the National Museum of Natural History in Washington, Wilson began a survey of all the ants of Alabama. This study led him to report the first colony of fire ants in the U.S., near the port of Mobile. Education Wilson said he went to 15 or 16 schools during 11 years of schooling. He was concerned that he might not be able to afford to go to a university, and he tried to enlist in the United States Army, intending to earn U.S. government financial support for his education. He failed the Army medical examination due to his impaired eyesight, but was able to afford to enroll in the University of Alabama, where he earned his Bachelor of Science in 1949 and Master of Science in biology in 1950. The next year, Wilson transferred to Harvard University. Appointed to the Harvard Society of Fellows, he could travel on overseas expeditions, collecting ant species of Cuba and Mexico and travel the South Pacific, including Australia, New Guinea, Fiji, and New Caledonia, as well as to Sri Lanka. In 1955, he received his Ph.D. and married Irene Kelley. In Letters to a Young Scientist, Wilson stated his IQ was measured as 123. Career From 1956 until 1996, Wilson was part of the faculty of Harvard. He began as an ant taxonomist and worked on understanding their microevolution, how they developed into new species by escaping environmental disadvantages and moving into new habitats. He developed a theory of the "taxon cycle". In collaboration with mathematician William H. Bossert, Wilson developed a classification of pheromones based on insect communication patterns. In the 1960s, he collaborated with mathematician and ecologist Robert MacArthur in developing the theory of species equilibrium. In the 1970s he and biologist Daniel S. Simberloff tested this theory on tiny mangrove islets in the Florida Keys. They eradicated all insect species and observed the repopulation by new species. Wilson and MacArthur's book The Theory of Island Biogeography became a standard ecology text. In 1971, he published The Insect Societies, which argued that insect behavior and the behavior of other animals are influenced by similar evolutionary pressures. In 1973, Wilson was appointed the curator of entomology at the Harvard Museum of Comparative Zoology. In 1975, he published the book Sociobiology: The New Synthesis applying his theories of insect behavior to vertebrates, and in the last chapter, to humans. He speculated that evolved and inherited tendencies were responsible for hierarchical social organization among humans. 
In 1978 he published On Human Nature, which dealt with the role of biology in the evolution of human culture and won a Pulitzer Prize for General Nonfiction. Wilson was named the Frank B. Baird Jr., Professor of Science in 1976 and, after his retirement from Harvard in 1996, he became the Pellegrino University Professor Emeritus. In 1981 after collaborating with biologist Charles Lumsden, he published Genes, Mind and Culture, a theory of gene-culture coevolution. In 1990 he published The Ants, co-written with zoologist Bert Hölldobler, winning his second Pulitzer Prize for General Nonfiction. In the 1990s, he published The Diversity of Life (1992); an autobiography, Naturalist (1994); and Consilience: The Unity of Knowledge (1998) about the unity of the natural and social sciences. Wilson was praised for his environmental advocacy, and his secular-humanist and deist ideas pertaining to religious and ethical matters. Wilson was characterized by several titles during his career, including the "father of biodiversity," "ant man," and "Darwin's heir." In a PBS interview, David Attenborough described Wilson as "a magic name to many of us working in the natural world, for two reasons. First, he is a towering example of a specialist, a world authority. Nobody in the world has ever known as much as Ed Wilson about ants. But, in addition to that intense knowledge and understanding, he has the widest of pictures. He sees the planet and the natural world that it contains in amazing detail but extraordinary coherence". Disagreement with Richard Dawkins Although Dawkins defended Wilson during the so-called "sociobiology debate", a disagreement between them arose over the theory of evolution. The disagreement began in 2012 when Dawkins wrote a critical review of Wilson's book The Social Conquest of Earth in Prospect Magazine. In the review, Dawkins criticized Wilson for rejecting kin selection and for supporting group selection, labeling it "bland" and "unfocused," and he wrote that the book's theoretical errors were "important, pervasive, and integral to its thesis in a way that renders it impossible to recommend". Wilson responded in the same magazine and wrote that Dawkins made "little connection to the part he criticizes" and accused him of engaging in rhetoric. In 2014, Wilson said in an interview, "There is no dispute between me and Richard Dawkins and there never has been, because he's a journalist, and journalists are people that report what the scientists have found and the arguments I’ve had have actually been with scientists doing research". Dawkins responded in a tweet: "I greatly admire EO Wilson & his huge contributions to entomology, ecology, biogeography, conservation, etc. He's just wrong on kin selection" and later added, "Anybody who thinks I'm a journalist who reports what other scientists think is invited to read The Extended Phenotype". Biologist Jerry Coyne wrote that Wilson's remarks were "unfair, inaccurate, and uncharitable". In 2021, in an obituary to Wilson, Dawkins stated that their dispute was "purely scientific". Dawkins wrote that he stands by his critical review and doesn't regret "its outspoken tone", but noted that he also stood by his "profound admiration for Professor Wilson and his life work". Support of J. Philippe Rushton Prior to Wilson's death, his personal correspondences were donated to the Library of Congress at the library's request. 
Following his death, several articles were published discussing the discrepancy between Wilson's legacy as a champion of biogeography and conservation biology, and his support of scientific racist pseudoscientist J. Philippe Rushton over several years. Rushton was a controversial psychologist at the University of Western Ontario, who later headed the Pioneer Fund. From the late 1980s to the early 1990s, Wilson wrote several emails to Rushton's colleagues defending Rushton's work in the face of widespread criticism for scholarly misconduct, misrepresentation of data, and confirmation bias, all of which were allegedly used by Rushton to support his personal ideas on race. Wilson also sponsored an article written by Rushton in PNAS, and during the review process, Wilson intentionally sought out reviewers for the article who he believed would likely already agree with its premise. Wilson kept his support of Rushton's racist ideologies behind-the-scenes so as to not draw too much attention to himself or tarnish his own reputation. Wilson responded to another request from Rushton to sponsor a second PNAS article with the following: "You have my support in many ways, but for me to sponsor an article on racial differences in the PNAS would be counterproductive for both of us." Wilson also remarked that the reason Rushton's ideologies were not more widely supported is because of the "... fear of being called racist, which is virtually a death sentence in American academia if taken seriously. I admit that I myself have tended to avoid the subject of Rushton's work, out of fear." In 2022, the E.O. Wilson Biodiversity Foundation issued a statement rejecting Wilson's support of Rushton and racism, on behalf of the board of directors and staff. Work Sociobiology: The New Synthesis, 1975 Wilson used sociobiology and evolutionary principles to explain the behavior of social insects and then to understand the social behavior of other animals, including humans, thus establishing sociobiology as a new scientific field. He argued that all animal behavior, including that of humans, is the product of heredity, environmental stimuli, and past experiences, and that free will is an illusion. He referred to the biological basis of behavior as the "genetic leash". The sociobiological view is that all animal social behavior is governed by epigenetic rules worked out by the laws of evolution. This theory and research proved to be seminal, controversial, and influential. Wilson argued that the unit of selection is a gene, the basic element of heredity. The target of selection is normally the individual who carries an ensemble of genes of certain kinds. With regard to the use of kin selection in explaining the behavior of eusocial insects, the "new view that I'm proposing is that it was group selection all along, an idea first roughly formulated by Darwin." Sociobiological research was at the time particularly controversial with regard to its application to humans. The theory established a scientific argument for rejecting the common doctrine of tabula rasa, which holds that human beings are born without any innate mental content and that culture functions to increase human knowledge and aid in survival and success. Reception and controversy Sociobiology: The New Synthesis was initially met with praise by most biologists. 
After substantial criticism of the book was launched by the Sociobiology Study Group, associated with the organization Science for the People, a major controversy known as the "sociobiology debate" ensued, and Wilson was accused of racism, misogyny, and support for eugenics. Several of Wilson's colleagues at Harvard, among them Richard Lewontin and Stephen Jay Gould, both members of the Group, were strongly opposed; the two focused their criticism mostly on Wilson's sociobiological writings. Gould, Lewontin, and other members wrote "Against 'Sociobiology'", an open letter criticizing Wilson's "deterministic view of human society and human action". Other public lectures, reading groups, and press releases were organized criticizing Wilson's work. In response, Wilson produced a discussion article entitled "Academic Vigilantism and the Political Significance of Sociobiology" in BioScience. In February 1978, while participating in a discussion on sociobiology at the annual meeting of the American Association for the Advancement of Science, Wilson was surrounded, chanted at, and doused with water by members of the International Committee Against Racism, who accused Wilson of advocating racism and genetic determinism. Stephen Jay Gould, who was present at the event, and Science for the People, which had previously protested Wilson, condemned the attack. Philosopher Mary Midgley encountered Sociobiology in the process of writing Beast and Man (1979) and significantly rewrote the book to offer a critique of Wilson's views. Midgley praised the book for its study of animal behavior, its clarity, scholarship, and encyclopedic scope, but extensively criticized Wilson for conceptual confusion, scientism, and the anthropomorphizing of genetics. On Human Nature, 1978 Wilson wrote in his 1978 book On Human Nature, "The evolutionary epic is probably the best myth we will ever have." Wilson's fame prompted use of the morphed phrase epic of evolution. The book won the Pulitzer Prize in 1979. The Ants, 1990 Wilson, along with Bert Hölldobler, carried out a systematic study of ants and ant behavior, culminating in the 1990 encyclopedic work The Ants. Because much self-sacrificing behavior on the part of individual ants can be explained on the basis of their genetic interest in the survival of their sisters, with whom workers share 75% of their genes (though in species whose queens mate with multiple males, some workers in a colony are only 25% related), Wilson argued for a sociobiological explanation for all social behavior on the model of the behavior of the social insects. Wilson said in reference to ants that "Karl Marx was right, socialism works, it is just that he had the wrong species". He asserted that individual ants and other eusocial species were able to reach higher Darwinian fitness by putting the needs of the colony above their own needs as individuals because they lack reproductive independence: individual ants cannot reproduce without a queen, so they can only increase their fitness by working to enhance the fitness of the colony as a whole. Humans, however, do possess reproductive independence, and so individual humans enjoy their maximum level of Darwinian fitness by looking after their own survival and having their own offspring. Consilience, 1998 In his 1998 book Consilience: The Unity of Knowledge, Wilson discussed methods that have been used to unite the sciences and might be able to unite the sciences with the humanities.
He argued that knowledge is a single, unified thing, not divided between science and humanistic inquiry. Wilson used the term "consilience" to describe the synthesis of knowledge from different specialized fields of human endeavor. He defined human nature as a collection of epigenetic rules, the genetic patterns of mental development. He argued that culture and rituals are products, not parts, of human nature. He said art is not part of human nature, but our appreciation of art is. He suggested that concepts such as art appreciation, fear of snakes, or the incest taboo (Westermarck effect) could be studied by scientific methods of the natural sciences and be part of interdisciplinary research. Spiritual and political beliefs Scientific humanism Wilson coined the phrase scientific humanism, which he described as "the only worldview compatible with science's growing knowledge of the real world and the laws of nature". Wilson argued that it is best suited to improve the human condition. In 2003, he was one of the signers of the Humanist Manifesto. God and religion On the question of God, Wilson described his position as "provisional deism" and explicitly denied the label of "atheist", preferring "agnostic". He explained his faith as a trajectory away from traditional beliefs: "I drifted away from the church, not definitively agnostic or atheistic, just Baptist & Christian no more." Wilson argued that belief in God and the rituals of religion are products of evolution. He argued that they should not be rejected or dismissed, but further investigated by science to better understand their significance to human nature. In his book The Creation, Wilson wrote that scientists ought to "offer the hand of friendship" to religious leaders and build an alliance with them, stating that "Science and religion are two of the most potent forces on Earth and they should come together to save the creation." Wilson made an appeal to the religious community on the lecture circuit, at Midland College, Texas, for example, and said that the appeal received a "massive reply", that a covenant had been written, and that a "partnership will work to a substantial degree as time goes on". In a New Scientist interview published on January 21, 2015, however, Wilson said that religious faith is "dragging us down". Ecology Discussing the reinvigoration of his original fields of study since the 1960s, Wilson said that if he could start his life over he would work in microbial ecology. He studied the mass extinctions of the 20th century and their relationship to modern society, identifying mass extinction as the greatest threat to Earth's future, and in 1998 he argued for an ecological approach at the Capitol. From the late 1970s Wilson was actively involved in the global conservation of biodiversity, contributing and promoting research. In 1984 he published Biophilia, a work that explored the evolutionary and psychological basis of humanity's attraction to the natural environment. This work introduced the word biophilia, which influenced the shaping of modern conservation ethics. In 1988 Wilson edited the BioDiversity volume, based on the proceedings of the first US national conference on the subject, which also introduced the term biodiversity into the language. This work was very influential in creating the modern field of biodiversity studies. In 2011, Wilson led scientific expeditions to the Gorongosa National Park in Mozambique and the archipelagos of Vanuatu and New Caledonia in the southwest Pacific.
Wilson was part of the international conservation movement, as a consultant to Columbia University's Earth Institute, as a director of the American Museum of Natural History, Conservation International, The Nature Conservancy and the World Wildlife Fund. Understanding the scale of the extinction crisis led him to advocate for forest protection, including the "Act to Save America's Forests", first introduced in 1998 and reintroduced in 2008, but never passed. The Forests Now Declaration called for new markets-based mechanisms to protect tropical forests. Wilson once said destroying a rainforest for economic gain was like burning a Renaissance painting to cook a meal. In 2014, Wilson called for setting aside 50% of Earth's surface for other species to thrive in as the only possible strategy to solve the extinction crisis. The idea became the basis for his book Half-Earth (2016) and for the Half-Earth Project of the E.O. Wilson Biodiversity Foundation. Wilson's influence regarding ecology through popular science was discussed by Alan G. Gross in The Scientific Sublime (2018). Wilson was instrumental in launching the Encyclopedia of Life (EOL) initiative with the goal of creating a global database to include information on the 1.9 million species recognized by science. Currently, it includes information on practically all known species. This open and searchable digital repository for organism traits, measurements, interactions and other data has more than 300 international partners and countless scientists providing global users' access to knowledge of life on Earth. For his part, Wilson discovered and described more than 400 species of ants. Retirement and death In 1996, Wilson officially retired from Harvard University, where he continued to hold the positions of Professor Emeritus and Honorary Curator in Entomology. He fully retired from Harvard in 2002 at age 73. After stepping down, he published more than a dozen books, including a digital biology textbook for the iPad. He founded the E.O. Wilson Biodiversity Foundation, which finances the PEN/E. O. Wilson Literary Science Writing Award and is an "independent foundation" at the Nicholas School of the Environment at Duke University. Wilson became a special lecturer at Duke University as part of the agreement. Wilson and his wife, Irene, resided in Lexington, Massachusetts. He had a daughter, Catherine. He was preceded in death by his wife (on August 7, 2021) and died in nearby Burlington on December 26, 2021, at the age of 92. Awards and honors Wilson's scientific and conservation honors include: Member of the American Academy of Arts and Sciences, elected 1959 Member of the National Academy of Sciences, elected 1969 Member of the American Philosophical Society, elected 1976. U.S. National Medal of Science, 1977 Leidy Award, 1979, from the Academy of Natural Sciences of Philadelphia Pulitzer Prize for On Human Nature, 1979 Tyler Prize for Environmental Achievement, 1984 ECI Prize, International Ecology Institute, terrestrial ecology, 1987 Honorary doctorate from the Faculty of Mathematics and Science at Uppsala University, Sweden, 1987 Academy of Achievement Golden Plate Award, 1988 His books The Insect Societies and Sociobiology: The New Synthesis were honored with the Science Citation Classic award by the Institute for Scientific Information. 
Crafoord Prize, 1990, a prize awarded by the Royal Swedish Academy of Sciences Pulitzer Prize for The Ants (with Bert Hölldobler), 1991 International Prize for Biology, 1993 Carl Sagan Award for Public Understanding of Science, 1994 The National Audubon Society's Audubon Medal, 1995 Time magazine's 25 Most Influential People in America, 1995 Certificate of Distinction, International Congresses of Entomology, Florence, Italy, 1996 Benjamin Franklin Medal for Distinguished Achievement in the Sciences of the American Philosophical Society, 1998 American Humanist Association's 1999 Humanist of the Year Lewis Thomas Prize for Writing about Science, 2000 Nierenberg Prize, 2001 Busk Medal of the Royal Geographical Society, 2002 Distinguished Eagle Scout Award, 2004 Dauphin Island Sea Lab christened one of its research vessels the R/V E.O. Wilson Linnean Tercentenary Silver Medal, 2006 Addison Emery Verrill Medal from the Peabody Museum of Natural History, 2007 TED Prize, 2007, given yearly to "honor a maximum of three individuals who have shown that they can, in some way, positively impact life on this planet." XIX Premi Internacional Catalunya, 2007 E.O. Wilson Biophilia Center on Nokuse Plantation in Walton County, Florida, 2009 The Explorers Club Medal, 2009 2010 BBVA Frontiers of Knowledge Award in the Ecology and Conservation Biology category Thomas Jefferson Medal in Architecture, 2010 2010 Heartland Prize for fiction for his first novel Anthill: A Novel EarthSky Science Communicator of the Year, 2010 International Cosmos Prize, 2012 Kew International Medal (2014) Doctor of Science, honoris causa, from the American Museum of Natural History (2014) 2016 Harper Lee Award Commemoration in the species epithet of Myrmoderus eowilsoni (2018) Commemoration in the species epithet of Miniopterus wilsoni (2020) Main works A paper coauthored with William Brown Jr., honored in 1986 as a Science Citation Classic, i.e., as one of the most frequently cited scientific papers of all time The Theory of Island Biogeography, 1967, Princeton University Press (2001 reprint), with Robert H. MacArthur The Insect Societies, 1971, Harvard University Press Sociobiology: The New Synthesis, 1975, Harvard University Press (Twenty-fifth Anniversary Edition, 2000) On Human Nature, 1979, Harvard University Press, winner of the 1979 Pulitzer Prize for General Nonfiction Genes, Mind and Culture: The Coevolutionary Process, 1981, Harvard University Press Promethean Fire: Reflections on the Origin of Mind, 1983, Harvard University Press Biophilia, 1984, Harvard University Press Success and Dominance in Ecosystems: The Case of the Social Insects, 1990, Inter-Research The Ants, 1990, Harvard University Press, winner of the 1991 Pulitzer Prize, with Bert Hölldobler The Diversity of Life, 1992, Harvard University Press; The Diversity of Life: Special Edition The Biophilia Hypothesis, 1993, Shearwater Books, with Stephen R. Kellert Journey to the Ants: A Story of Scientific Exploration, 1994, Harvard University Press, with Bert Hölldobler Naturalist, 1994, Shearwater Books In Search of Nature, 1996, Shearwater Books, with Laura Simonds Southworth Consilience: The Unity of Knowledge, 1998, Knopf The Future of Life, 2002, Knopf Pheidole in the New World: A Dominant, Hyperdiverse Ant Genus, 2003, Harvard University Press The Creation: An Appeal to Save Life on Earth, September 2006, W. W. Norton & Company, Inc.
Nature Revealed: Selected Writings 1949–2006 The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies, 2009, W.W. Norton & Company, Inc., with Bert Hölldobler Anthill: A Novel, April 2010, W. W. Norton & Company, Inc. Kingdom of Ants: Jose Celestino Mutis and the Dawn of Natural History in the New World, 2010, Johns Hopkins University Press, Baltimore, with José María Gómez Durán The Leafcutter Ants: Civilization by Instinct, 2011, W.W. Norton & Company, Inc., with Bert Hölldobler The Social Conquest of Earth, 2012, Liveright Publishing Corporation, New York Letters to a Young Scientist, 2014, Liveright A Window on Eternity: A Biologist's Walk Through Gorongosa National Park, 2014, Simon & Schuster The Meaning of Human Existence, 2014, Liveright Half-Earth, 2016, Liveright The Origins of Creativity, 2017, Liveright Genesis: The Deep Origin of Societies, 2019, Liveright Tales from the Ant World, 2020, Liveright Naturalist: A Graphic Adaptation, November 10, 2020, Island Press Edited works From So Simple a Beginning: Darwin's Four Great Books, edited with introductions by Edward O. Wilson (2005, W. W. Norton)
Edwin Howard Armstrong (December 18, 1890 – February 1, 1954) was an American electrical engineer and inventor who developed FM (frequency modulation) radio and the superheterodyne receiver system. He held 42 patents and received numerous awards, including the first Medal of Honor awarded by the Institute of Radio Engineers (now IEEE), the French Legion of Honor, the 1941 Franklin Medal and the 1942 Edison Medal. He achieved the rank of major in the U.S. Army Signal Corps during World War I and was often referred to as "Major Armstrong" during his career. He was inducted into the National Inventors Hall of Fame and included in the International Telecommunication Union's roster of great inventors. He was inducted into the Wireless Hall of Fame posthumously in 2001. Armstrong attended Columbia University, and served as a professor there for most of his life. Early life Armstrong was born in the Chelsea district of New York City, the oldest of John and Emily (née Smith) Armstrong's three children. His father began working at a young age at the American branch of the Oxford University Press, which published bibles and standard classical works, eventually advancing to the position of vice president. His parents first met at the North Presbyterian Church, located at 31st Street and Ninth Avenue. His mother's family had strong ties to Chelsea, and an active role in church functions. When the church moved north, the Smiths and Armstrongs followed, and in 1895 the Armstrong family moved from their brownstone row house at 347 West 29th Street to a similar house at 26 West 97th Street in the Upper West Side. The family was comfortably middle class. At the age of eight, Armstrong contracted Sydenham's chorea (then known as St. Vitus' Dance), an infrequent but serious neurological disorder precipitated by rheumatic fever. For the rest of his life, Armstrong was afflicted with a physical tic exacerbated by excitement or stress. Due to this illness, he withdrew from public school and was home-tutored for two years. To improve his health, the Armstrong family moved to a house overlooking the Hudson River, at 1032 Warburton Avenue in Yonkers. The Smith family subsequently moved next door. Armstrong's tic and the time missed from school led him to become socially withdrawn. From an early age, Armstrong showed an interest in electrical and mechanical devices, particularly trains. He loved heights and constructed a makeshift backyard antenna tower that included a bosun's chair for hoisting himself up and down its length, to the concern of neighbors. Much of his early research was conducted in the attic of his parents' house. In 1909, Armstrong enrolled at Columbia University in New York City, where he became a member of the Epsilon Chapter of the Theta Xi engineering fraternity, and studied under Professor Michael Pupin at the Hartley Laboratories, a separate research unit at Columbia. Another of his instructors, Professor John H. Morecroft, later remembered Armstrong as being intensely focused on the topics that interested him, but somewhat indifferent to the rest of his studies. Armstrong challenged conventional wisdom and was quick to question the opinions of both professors and peers. In one case, he recounted how he tricked a visiting professor from Cornell University that he disliked into receiving a severe electrical shock. 
He also stressed the practical over the theoretical, stating that progress was more likely the product of experimentation and reasoning than of mathematical calculation and the formulae of "mathematical physics". Armstrong graduated from Columbia in 1913, earning an electrical engineering degree. During World War I, Armstrong served in the Signal Corps as a captain and later a major. Following college graduation, he received a $600 one-year appointment as a laboratory assistant at Columbia, after which he nominally worked as a research assistant, for a salary of $1 a year, under Professor Pupin. Unlike most engineers, Armstrong never became a corporate employee. He set up a self-financed independent research and development laboratory at Columbia, and owned his patents outright. In 1934, he filled the vacancy left by John H. Morecroft's death, receiving an appointment as a professor of Electrical Engineering at Columbia, a position he held for the remainder of his life. Early work Regenerative circuit Armstrong began working on his first major invention while still an undergraduate at Columbia. In late 1906, Lee de Forest had invented the three-element (triode) "grid Audion" vacuum tube. How vacuum tubes worked was not understood at the time. De Forest's initial Audions did not have a high vacuum and developed a blue glow at modest plate voltages; de Forest improved the vacuum for Federal Telegraph. By 1912, vacuum tube operation was understood, and regenerative circuits using high-vacuum tubes were appreciated. While growing up, Armstrong had experimented with the early temperamental, "gassy" Audions. Spurred by the later discoveries, he developed a keen interest in gaining a detailed scientific understanding of how vacuum tubes worked. In conjunction with Professor Morecroft he used an oscillograph to conduct comprehensive studies. His breakthrough discovery was determining that employing positive feedback (also known as "regeneration") produced amplification hundreds of times greater than previously attained, with the amplified signals now strong enough so that receivers could use loudspeakers instead of headphones. Further investigation revealed that when the feedback was increased beyond a certain level a vacuum tube would go into oscillation, and thus could also be used as a continuous-wave radio transmitter. Beginning in 1913 Armstrong prepared a series of comprehensive demonstrations and papers that carefully documented his research, and in late 1913 applied for patent protection covering the regenerative circuit. On October 6, 1914, U.S. Patent 1,113,149 was issued for his discovery. Although Lee de Forest initially discounted Armstrong's findings, beginning in 1915 de Forest filed a series of competing patent applications that largely copied Armstrong's claims, now stating that he had discovered regeneration first, based on a notebook entry made on August 6, 1912, while working for the Federal Telegraph company, prior to the date recognized for Armstrong of January 31, 1913. The result was an interference hearing at the patent office to determine priority. De Forest was not the only other inventor involved – the four competing claimants were Armstrong, de Forest, General Electric's Irving Langmuir, and Alexander Meissner, who was a German national, which led to his application being seized by the Office of Alien Property Custodian during World War I. Following the end of WWI Armstrong enlisted representation by the law firm of Pennie, Davis, Martin and Edmonds.
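The amplification increase and the onset of oscillation described above can both be read off the standard feedback relation used in modern circuit-theory texts; this is a present-day formulation added for illustration, not Armstrong's own notation. If a stage has forward gain $A$ and a fraction $\beta$ of its output is fed back in phase with the input, the effective gain becomes

$$A_{\text{regen}} = \frac{A}{1 - A\beta}.$$

As the loop gain $A\beta$ approaches 1 the effective gain grows very large, corresponding to the hundreds-fold amplification Armstrong measured, and once $A\beta \ge 1$ the tube sustains its own oscillation and can act as a continuous-wave transmitter.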
To finance his legal expenses he began issuing non-transferable licenses for use of the regenerative patents to a select group of small radio equipment firms, and by November 1920, 17 companies had been licensed. These licensees paid 5% royalties on their sales which were restricted to only "amateurs and experimenters". Meanwhile, Armstrong explored his options for selling the commercial rights to his work. Although the obvious candidate was the Radio Corporation of America (RCA), on October 5, 1920, the Westinghouse Electric & Manufacturing Company took out an option for $335,000 for the commercial rights for both the regenerative and superheterodyne patents, with an additional $200,000 to be paid if Armstrong prevailed in the regenerative patent dispute. Westinghouse exercised this option on November 4, 1920. Legal proceedings related to the regeneration patent became separated into two groups of court cases. An initial court action was triggered in 1919 when Armstrong sued de Forest's company in district court, alleging infringement of patent 1,113,149. This court ruled in Armstrong's favor on May 17, 1921. A second line of court cases, the result of the patent office interference hearing, had a different outcome. The interference board had also sided with Armstrong, but he was unwilling to settle with de Forest for less than what he considered full compensation. Thus pressured, de Forest continued his legal defense, and appealed the interference board decision to the District of Columbia district court. On May 8, 1924, that court ruled that it was de Forest who should be considered regeneration's inventor. Armstrong (along with much of the engineering community) was shocked by these events, and his side appealed this decision. Although the legal proceeding twice went before the US Supreme Court, in 1928 and 1934, he was unsuccessful in overturning the decision. In response to the second Supreme Court decision upholding de Forest as the inventor of regeneration, Armstrong attempted to return his 1917 IRE Medal of Honor, which had been awarded "in recognition of his work and publications dealing with the action of the oscillating and non-oscillating audion". The organization's board refused to allow him, and issued a statement that it "strongly affirms the original award". Superheterodyne circuit The United States entered WWI in April 1917. Later that year Armstrong was commissioned as a captain in the U.S. Army Signal Corps, and assigned to a laboratory in Paris, France to help develop radio communication for the Allied war effort. He returned to the US in the autumn of 1919, after being promoted to the rank of Major. (During both world wars, Armstrong gave the US military free use of his patents.) During this period, Armstrong's most significant accomplishment was the development of a "supersonic heterodyne" – soon shortened to "superheterodyne" – radio receiver circuit. This circuit made radio receivers more sensitive and selective and is used extensively today. The key feature of the superheterodyne approach is the mixing of the incoming radio signal with a locally generated, different frequency signal within a radio set. That circuit is called the mixer. The result is a fixed, unchanging intermediate frequency, or I.F. signal which is easily amplified and detected by following circuit stages. In 1919, Armstrong filed an application for a US patent of the superheterodyne circuit which was issued the next year. This patent was subsequently sold to Westinghouse. 
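As an illustration of the frequency conversion at the heart of the superheterodyne receiver described above, the short Python sketch below multiplies an incoming signal by a local-oscillator tone, which is the job of the mixer stage, and shows that the product contains energy at the difference (intermediate) frequency and at the sum frequency. It is a toy numerical model, not radio-engineering code, and the frequency values are arbitrary choices for the demonstration.

```python
import numpy as np

fs = 1_000_000                      # sample rate in Hz (arbitrary)
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of samples
f_rf = 100_000                      # incoming "RF" signal frequency, Hz
f_lo = 90_000                       # local oscillator frequency, Hz

rf = np.cos(2 * np.pi * f_rf * t)   # received signal
lo = np.cos(2 * np.pi * f_lo * t)   # locally generated signal
mixed = rf * lo                     # the mixer multiplies the two

# The spectrum of the product peaks at |f_rf - f_lo| (the IF) and at f_rf + f_lo.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
top_two = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print(top_two)                      # [10000.0, 190000.0]: difference (IF) and sum frequencies
```

Because the intermediate frequency is fixed by the choice of local oscillator, the stages after the mixer can be designed and aligned for a single frequency, which is the property the passage above credits for the receiver's improved sensitivity and selectivity.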
The patent was challenged, triggering another patent office interference hearing. Armstrong ultimately lost this patent battle; although the outcome was less controversial than that involving the regeneration proceedings. The challenger was Lucien Lévy of France who had worked developing Allied radio communication during WWI. He had been awarded French patents in 1917 and 1918 that covered some of the same basic ideas used in Armstrong's superheterodyne receiver. AT&T, interested in radio development at this time, primarily for point-to-point extensions of its wired telephone exchanges, purchased the US rights to Lévy's patent and contested Armstrong's grant. The subsequent court reviews continued until 1928, when the District of Columbia Court of Appeals disallowed all nine claims of Armstrong's patent, assigning priority for seven of the claims to Lévy, and one each to Ernst Alexanderson of General Electric and Burton W. Kendall of Bell Laboratories. Although most early radio receivers used regeneration Armstrong approached RCA's David Sarnoff, whom he had known since giving a demonstration of his regeneration receiver in 1913, about the corporation offering superheterodynes as a superior offering to the general public. (The ongoing patent dispute was not a hindrance, because extensive cross-licensing agreements signed in 1920 and 1921 between RCA, Westinghouse and AT&T meant that Armstrong could freely use the Lévy patent.) Superheterodyne sets were initially thought to be prohibitively complicated and expensive as the initial designs required multiple tuning knobs and used nine vacuum tubes. In conjunction with RCA engineers, Armstrong developed a simpler, less costly design. RCA introduced its superheterodyne Radiola sets in the US market in early 1924, and they were an immediate success, dramatically increasing the corporation's profits. These sets were considered so valuable that RCA would not license the superheterodyne to other US companies until 1930. Super-regeneration circuit The regeneration legal battle had one serendipitous outcome for Armstrong. While he was preparing apparatus to counteract a claim made by a patent attorney, he "accidentally ran into the phenomenon of super-regeneration", where, by rapidly "quenching" the vacuum-tube oscillations, he was able to achieve even greater levels of amplification. A year later, in 1922, Armstrong sold his super-regeneration patent to RCA for $200,000 plus 60,000 shares of corporation stock, which was later increased to 80,000 shares in payment for consulting services. This made Armstrong RCA's largest shareholder, and he noted that "The sale of that invention was to net me more than the sale of the regenerative circuit and the superheterodyne combined". RCA envisioned selling a line of super-regenerative receivers until superheterodyne sets could be perfected for general sales, but it turned out the circuit was not selective enough to make it practical for broadcast receivers. Wide-band FM radio "Static" interference – extraneous noises caused by sources such as thunderstorms and electrical equipment – bedeviled early radio communication using amplitude modulation and perplexed numerous inventors attempting to eliminate it. Many ideas for static elimination were investigated, with little success. In the mid-1920s, Armstrong began researching a solution. He initially, and unsuccessfully, attempted to resolve the problem by modifying the characteristics of AM transmissions. One approach used frequency modulation (FM) transmissions. 
Instead of varying the strength of the carrier wave as with AM, the frequency of the carrier was changed to represent the audio signal. In 1922 John Renshaw Carson of AT&T, inventor of Single-sideband modulation (SSB), had published a detailed mathematical analysis which showed that FM transmissions did not provide any improvement over AM. Although the Carson bandwidth rule for FM is important today, Carson's review turned out to be incomplete, as it analyzed only (what is now known as) "narrow-band" FM. In early 1928 Armstrong began researching the capabilities of FM. Although there were others involved in FM research at this time, he knew of an RCA project to see if FM shortwave transmissions were less susceptible to fading than AM. In 1931 the RCA engineers constructed a successful FM shortwave link transmitting the Schmeling–Stribling fight broadcast from California to Hawaii, and noted at the time that the signals seemed to be less affected by static. The project made little further progress. Working in secret in the basement laboratory of Columbia's Philosophy Hall, Armstrong developed "wide-band" FM, in the process discovering significant advantages over the earlier "narrow-band" FM transmissions. In a "wide-band" FM system, the deviations of the carrier frequency are made to be much larger than the frequency of the audio signal which can be shown to provide better noise rejection. He was granted five US patents covering the basic features of the new system on December 26, 1933. Initially, the primary claim was that his FM system was effective at filtering out the noise produced in receivers, by vacuum tubes. Armstrong had a standing agreement to give RCA the right of first refusal to his patents. In 1934 he presented his new system to RCA president Sarnoff. Sarnoff was somewhat taken aback by its complexity, as he had hoped it would be possible to eliminate static merely by adding a simple device to existing receivers. From May 1934 until October 1935 Armstrong conducted field tests of his FM technology from an RCA laboratory located on the 85th floor of the Empire State Building in New York City. An antenna attached to the building's spire transmitted signals for distances up to . These tests helped demonstrate FM's static-reduction and high-fidelity capabilities. RCA, which was heavily invested in perfecting TV broadcasting, chose not to invest in FM, and instructed Armstrong to remove his equipment. Denied the marketing and financial clout of RCA, Armstrong decided to finance his own development and form ties with smaller members of the radio industry, including Zenith and General Electric, to promote his invention. Armstrong thought that FM had the potential to replace AM stations within 5 years, which he promoted as a boost for the radio manufacturing industry, then suffering from the effects of the Great Depression. Making existing AM radio transmitters and receivers obsolete would necessitate that stations buy replacement transmitters and listeners purchase FM-capable receivers. In 1936 he published a landmark paper in the Proceedings of the IRE that documented the superior capabilities of using wide-band FM. (This paper would be reprinted in the August 1984 issue of Proceedings of the IEEE.) A year later, a paper by Murray G. 
Crosby (inventor of Crosby system for FM Stereo) in the same journal provided further analysis of the wide-band FM characteristics, and introduced the concept of "threshold", demonstrating that there is a superior signal-to-noise ratio when the signal is stronger than a certain level. In June 1936, Armstrong gave a formal presentation of his new system at the US Federal Communications Commission (FCC) headquarters. For comparison, he played a jazz record using a conventional AM radio, then switched to an FM transmission. A United Press correspondent was present, and recounted in a wire service report that: "if the audience of 500 engineers had shut their eyes they would have believed the jazz band was in the same room. There were no extraneous sounds." Moreover, "Several engineers said after the demonstration that they consider Dr. Armstrong's invention one of the most important radio developments since the first earphone crystal sets were introduced." Armstrong was quoted as saying he could "visualize a time not far distant when the use of ultra-high frequency wave bands will play the leading role in all broadcasting", although the article noted that "A switchover to the ultra-high frequency system would mean the junking of present broadcasting equipment and present receivers in homes, eventually causing the expenditure of billions of dollars." In the late 1930s, as technical advances made it possible to transmit on higher frequencies, the FCC investigated options for increasing the number of broadcasting stations, in addition to ideas for better audio quality, known as "high-fidelity". In 1937 it introduced what became known as the Apex band, consisting of 75 broadcasting frequencies from 41.02 to 43.98 MHz. As on the standard broadcast band, these were AM stations but with higher quality audio – in one example, a frequency response from 20 Hz to 17,000 Hz +/- 1 dB – because station separations were 40 kHz instead of the 10 kHz spacings used on the original AM band. Armstrong worked to convince the FCC that a band of FM broadcasting stations would be a superior approach. That year he financed the construction of the first FM radio station, W2XMN (later KE2XCC) at Alpine, New Jersey. FCC engineers had believed that transmissions using high frequencies would travel little farther than line-of-sight distances, limited by the horizon. When operating with 40 kilowatts on 42.8 MHz, the station could be clearly heard away, matching the daytime coverage of a full power 50-kilowatt AM station. FCC studies comparing the Apex station transmissions with Armstrong's FM system concluded that his approach was superior. In early 1940, the FCC held hearings on whether to establish a commercial FM service. Following this review, the FCC announced the establishment of an FM band effective January 1, 1941, consisting of forty 200 kHz-wide channels on a band from 42 to 50 MHz, with the first five channels reserved for educational stations. Existing Apex stations were notified that they would not be allowed to operate after January 1, 1941, unless they converted to FM. Although there was interest in the new FM band by station owners, construction restrictions that went into place during WWII limited the growth of the new service. Following the end of WWII, the FCC moved to standardize its frequency allocations. One area of concern was the effects of tropospheric and Sporadic E propagation, which at times reflected station signals over great distances, causing mutual interference. 
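A rough bandwidth estimate, added here for illustration, connects the wide-band deviation described earlier with the 200 kHz channel width just mentioned. The Carson bandwidth rule referenced above approximates the occupied bandwidth of an FM signal as

$$B \approx 2(\Delta f + f_m),$$

where $\Delta f$ is the peak frequency deviation and $f_m$ is the highest modulating audio frequency. Taking the figures that later became standard for broadcast FM, a peak deviation of about 75 kHz and audio up to about 15 kHz (values assumed here, not stated in the text above), gives $B \approx 2(75 + 15)\,\text{kHz} = 180\,\text{kHz}$, which fits inside a 200 kHz channel with a guard margin to spare.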
A particularly controversial proposal, spearheaded by RCA, was that the FM band needed to be shifted to higher frequencies to avoid this problem. This reassignment was fiercely opposed as unneeded by Armstrong, but he lost. The FCC made its decision final on June 27, 1945. It allocated 100 FM channels from 88 to 108 MHz, and assigned the former FM band to 'non government fixed and mobile' (42–44 MHz), and television channel 1 (44–50 MHz), now sidestepping the interference concerns. A period of allowing existing FM stations to broadcast on both low and high bands ended at midnight on January 8, 1949, at which time any low band transmitters were shut down, making obsolete 395,000 receivers that had already been purchased by the public for the original band. Although converters allowing low band FM sets to receive high band were manufactured, they ultimately proved to be complicated to install, and often as (or more) expensive than buying a new high band set outright. Armstrong felt the FM band reassignment had been inspired primarily by a desire to cause a disruption that would limit FM's ability to challenge the existing radio industry, including RCA's AM radio properties that included the NBC radio network, plus the other major networks including CBS, ABC and Mutual. The change was thought to have been favored by AT&T, as the elimination of FM relaying stations would require radio stations to lease wired links from that company. Particularly galling was the FCC assignment of TV channel 1 to the 44–50 MHz segment of the old FM band. Channel 1 was later deleted, since periodic radio propagation would make local TV signals unviewable. Although the FM band shift was an economic setback, there was reason for optimism. A book published in 1946 by Charles A. Siepmann heralded FM stations as "Radio's Second Chance". In late 1945, Armstrong contracted with John Orr Young, founding member of the public relations firm Young & Rubicam, to conduct a national campaign promoting FM broadcasting, especially by educational institutions. Article placements promoting both Armstrong personally and FM were made with general circulation publications including The Nation, Fortune, The New York Times, Atlantic Monthly, and The Saturday Evening Post. In 1940, RCA offered Armstrong $1,000,000 for a non-exclusive, royalty-free license to use his FM patents. He refused this offer, because he felt this would be unfair to the other licensed companies, which had to pay 2% royalties on their sales. Over time this impasse with RCA dominated Armstrong's life. RCA countered by conducting its own FM research, eventually developing what it claimed was a non-infringing FM system. The corporation encouraged other companies to stop paying royalties to Armstrong. Outraged by this, in 1948 Armstrong filed suit against RCA and the National Broadcasting Company, accusing them of patent infringement and that they had "deliberately set out to oppose and impair the value" of his invention, for which he requested treble damages. Although he was confident that this suit would be successful and result in a major monetary award, the protracted legal maneuvering that followed eventually began to impair his finances, especially after his primary patents expired in late 1950. FM radar During World War II, Armstrong turned his attention to investigations of continuous-wave FM radar funded by government contracts. 
Armstrong hoped that the interference-fighting characteristics of wide-band FM, combined with a narrow receiver bandwidth to reduce noise, would increase radar range. Primary development took place at Armstrong's Alpine, NJ laboratory. A duplicate set of equipment was sent to the U.S. Army's Evans Signal Laboratory. The results of his investigations were inconclusive, the war ended, and the project was dropped by the Army. Under the name Project Diana, the Evans staff took up the possibility of bouncing radar signals off the moon. Calculations showed that standard pulsed radar like the stock SCR-271 would not do the job; higher average power, much wider transmitter pulses, and very narrow receiver bandwidth would be required. They realized that the Armstrong equipment could be modified to accomplish the task. The FM modulator of the transmitter was disabled and the transmitter keyed to produce quarter-second CW pulses. The narrow-band (57 Hz) receiver, which tracked the transmitter frequency, was given an incremental tuning control to compensate for the possible 300 Hz Doppler shift on the lunar echoes. They achieved success on January 10, 1946. Death Bitter and overtaxed by years of litigation and mounting financial problems, Armstrong lashed out at his wife one day with a fireplace poker, striking her on the arm. She left their apartment to stay with her sister. Sometime during the night of January 31 – February 1, 1954, Armstrong jumped to his death from a window in his 12-room apartment on the 13th floor of River House in Manhattan, New York City. The New York Times described the contents of his two-page suicide note to his wife: "he was heartbroken at being unable to see her once again, and expressing deep regret at having hurt her, the dearest thing in his life." The note concluded, "God keep you and Lord have mercy on my Soul." David Sarnoff disclaimed any responsibility, telling Carl Dreher directly that "I did not kill Armstrong." After his death, a friend of Armstrong estimated that 90 percent of his time was spent on litigation against RCA. U.S. Senator Joseph McCarthy (R-Wisconsin) reported that Armstrong had recently met with one of his investigators, and had been "mortally afraid" that secret radar discoveries by him and other scientists "were being fed to the Communists as fast as they could be developed". Legacy Following her husband's death, Marion Armstrong took charge of pursuing his estate's legal cases. In late December 1954, it was announced that through arbitration a settlement of "approximately $1,000,000" had been made with RCA. Dana Raymond of Cravath, Swaine & Moore in New York served as counsel in that litigation. Marion Armstrong was able to formally establish Armstrong as the inventor of FM following protracted court proceedings over five of his basic FM patents, with a series of successful suits, lasting until 1967, against other companies that were found guilty of infringement. It was not until the 1960s that FM stations in the United States started to challenge the popularity of the AM band, helped by the development of FM stereo by General Electric, followed by the FCC's FM Non-Duplication Rule, which limited large-city broadcasters with AM and FM licenses to simulcasting on those two frequencies for only half of their broadcast hours. Armstrong's FM system was also used for communications between NASA and the Apollo program astronauts. A U.S. postage stamp was released in his honor in 1983 in a series commemorating American Inventors.
Armstrong has been called "the most prolific and influential inventor in radio history". The superheterodyne process is still extensively used by radio equipment. Eighty years after its invention, FM technology has started to be supplemented, and in some cases replaced, by more efficient digital technologies. The introduction of digital television eliminated the FM audio channel that had been used by analog television, HD Radio has added digital sub-channels to FM band stations, and, in Europe and Pacific Asia, Digital Audio Broadcasting bands have been created that will, in some cases, eliminate existing FM stations altogether. However, FM broadcasting is still used internationally, and remains the dominant system employed for audio broadcasting services. Personal life In 1923, combining his love for high places with courtship rituals, Armstrong climbed the WJZ (now WABC) antenna located atop a 20-story building in New York City, where he reportedly did a handstand, and when a witness asked him what motivated him to "do these damnfool things", Armstrong replied "I do it because the spirit moves me." Armstrong had arranged to have photographs taken, which he had delivered to David Sarnoff's secretary, Marion McInnis. Armstrong and McInnis married later that year. Armstrong bought a Hispano-Suiza motor car before the wedding, which he kept until his death, and which he drove to Palm Beach, Florida for their honeymoon. A publicity photograph was made of him presenting Marion with the world's first portable superheterodyne radio as a wedding gift. He was an avid tennis player until an injury in 1940, and drank an Old Fashioned with dinner. Politically, he was described by one of his associates as "a revolutionist only in technology – in politics he was one of the most conservative of men." In 1955, Marion Armstrong founded the Armstrong Memorial Research Foundation, and participated in its work until her death in 1979 at the age of 81. She was survived by two nephews and a niece. Honors In 1917, Armstrong was the first recipient of the IRE's (now IEEE) Medal of Honor. For his wartime work on radio, the French government gave him the Legion of Honor in 1919. He was awarded the 1941 Franklin Medal, and in 1942 received the AIEEs Edison Medal "for distinguished contributions to the art of electric communication, notably the regenerative circuit, the superheterodyne, and frequency modulation." The ITU added him to its roster of great inventors of electricity in 1955. He later received two honorary doctorates, from Columbia in 1929, and Muhlenberg College in 1941. In 1980, he was inducted into the National Inventors Hall of Fame, and appeared on a U.S. postage stamp in 1983. The Consumer Electronics Hall of Fame inducted him in 2000, "in recognition of his contributions and pioneering spirit that have laid the foundation for consumer electronics." He was posthumously inducted into the Wireless Hall of Fame in 2001. Columbia University established the Edwin Howard Armstrong Professorship in the School of Engineering and Applied Science in his memory. Philosophy Hall, the Columbia building where Armstrong developed FM, was declared a National Historic Landmark. Armstrong's boyhood home in Yonkers, New York was recognized by the National Historic Landmark program and the National Register of Historic Places, although this was withdrawn when the house was demolished. Armstrong Hall at Columbia was named in his honor. 
The hall, located at the northeast corner of Broadway and 112th Street, was originally an apartment house but was converted to research space after being purchased by the university. It is currently home to the Goddard Institute for Space Studies, a research institute dedicated to atmospheric and climate science that is jointly operated by Columbia and the National Aeronautics and Space Administration. A storefront in a corner of the building houses Tom's Restaurant, a longtime neighborhood fixture that inspired Suzanne Vega's song "Tom's Diner" and was used for establishing shots for the fictional "Monk's diner" in the "Seinfeld" television series. A second Armstrong Hall, also named for the inventor, is located at the United States Army Communications and Electronics Life Cycle Management Command (CECOM-LCMC) Headquarters at Aberdeen Proving Ground, Maryland. In 2005, Armstrong's regenerative feedback circuit and superheterodyne and FM circuits were inducted into the TECnology Hall of Fame, an honor given to "products and innovations that have had an enduring impact on the development of audio technology." Patents E. H. Armstrong patents: : "Frequency Modulation Multiplex System" : "Radio Signaling" : "Frequency-Modulated Carrier Signal Receiver" : "Frequency Modulation Signaling System" : "Means for Receiving Radio Signals" : "Method and Means for Transmitting Frequency Modulated Signals" : "Current Limiting Device" : "Frequency Modulation System" : "Radio Rebroadcasting System" : "Means and Method for Relaying Frequency Modulated Signals" : "Means and Method for Relaying Frequency Modulated Signals" : "Frequency Modulation Signaling System" : "Radio Transmitting System" : "Radio Transmitting System" : "Radio Transmitting System" : "Frequency Changing System" : "Radio Receiving System" : "Radio Receiving System" : "Multiplex Radio Signaling System" : "Radio Signaling System" : "Radio Transmitting System" : "Phase Control System" : "Radio Signaling System" : "Radio Transmitting System" : "Radio Signaling System" : "Radio Telephone Signaling" : "Radiosignaling" : "Radiosignaling" : "Radio Broadcasting and Receiving System" : "Radio Signaling System" : "Wave Signaling System" : "Wave Signaling System" : "Wireless Receiving System for Continuous Wave" : "Wave Signaling System" : "Wave Signaling System" : "Wave Signaling System" : "Wave Signaling System" : "Wave Signaling System" : "Signaling System" : "Radioreceiving System Having High Selectivity" : "Selectively Opposing Impedance to Received Electrical Oscillations" : "Multiple Antenna for Electrical Wave Transmission" : "Method of Receiving High Frequency Oscillation" : "Antenna with Distributed Positive Resistance" : "Electric Wave Transmission" (Note: Co-patentee with Mihajlo Pupin) : "Wireless Receiving System" U.S. Patent and Trademark Office Database Search The following patents were issued to Armstrong's estate after his death: : "Radio detection and ranging systems" 1956 : "Multiplex frequency modulation transmitter" 1956 : "Linear detector for subcarrier frequency modulated waves" 1958 : "Noise reduction in phase shift modulation" 1959 : "Stabilized multiple frequency modulation receiver" 1959 See also Awards named after E. H. Armstrong Grid-leak detector Notes References Frost, Gary L. (2010), Early FM Radio: Incremental Technology in Twentieth-Century America. Baltimore: Johns Hopkins University Press, 2010. Further reading Ira Brodsky. The History of Wireless: How Creative Minds Produced Technology for the Masses. St.
Louis: Telescope Books, 2008. Ken Burns. Empire of the Air. Documentary that first aired on PBS in 1992. External links Armstrong Memorial Research Foundation – The Armstrong Foundation disseminates knowledge of Armstrong's research and achievements Houck Collection – A collection of images and documents that belonged to Armstrong's assistant, Harry W. Houck, which have been annotated by Mike Katzdorn. Rare Book & Manuscript Library Collections – A collection of images and documents at Columbia University Biography The Broadcast Archive – A brief biography by Donna Halper Ammon, Richard T., "The Rolls Royce Of Reception : Super Heterodynes – 1918 to 1930". IEEE History Center's Edwin H. Armstrong : Excerpt from "The Legacy of Edwin Howard Armstrong," by J. E. Brittain Proceedings of the IEEE, vol. 79, no. 2, February 1991 Hong, Sungook, "A History of the Regeneration Circuit: From Invention to Patent Litigation" University, Seoul, Korea (PDF) Who Invented the Superhetrodyne? The history of the invention of the superhetrodyne receiver and related patent disputes Yannis Tsividis, "Edwin Armstrong: Pioneer of the Airwaves", 2002. A profile on the web site of Columbia University, Armstrong's alma mater
An electrochemical cell is a device that generates electrical energy from chemical reactions. Electrical energy can also be applied to these cells to cause chemical reactions to occur. Electrochemical cells that generate an electric current are called voltaic or galvanic cells, and those in which chemical reactions are driven by applied current, via electrolysis for example, are called electrolytic cells. Both galvanic and electrolytic cells can be thought of as having two half-cells, consisting of separate oxidation and reduction reactions. When one or more electrochemical cells are connected in parallel or series, they make a battery. Primary cells are single-use batteries. Types of electrochemical cells Galvanic cell A galvanic cell, also called a voltaic cell (named after Luigi Galvani and Alessandro Volta, respectively), is an electrochemical cell that generates electrical energy from spontaneous redox reactions. A wire connects two different metals (e.g. zinc and copper). Each metal sits in a separate solution, often the aqueous sulphate or nitrate of that metal, though more generally any dissolved metal salt that conducts current will serve. A salt bridge or porous membrane connects the two solutions, maintaining electrical neutrality and preventing charge accumulation. The difference between the metals' oxidation/reduction potentials drives the reaction until equilibrium is reached. Key features: a spontaneous reaction generates the electric current; current flows through the wire while ions flow through the salt bridge; the anode is negative and the cathode is positive. Half cells Galvanic cells consist of two half-cells. Each half-cell consists of an electrode and an electrolyte (both half-cells may use the same or different electrolytes). The chemical reactions in the cell involve the electrolyte, electrodes, and/or an external substance (fuel cells may use hydrogen gas as a reactant). In a full electrochemical cell, species from one half-cell lose electrons (oxidation) to their electrode while species from the other half-cell gain electrons (reduction) from their electrode. A salt bridge (e.g., filter paper soaked in KNO3, NaCl, or some other electrolyte) is used to ionically connect two half-cells with different electrolytes; it prevents the solutions from mixing and the unwanted side reactions that would result. An alternative to a salt bridge is to allow direct contact (and mixing) between the two half-cells, for example in simple electrolysis of water. As electrons flow from one half-cell to the other through an external circuit, a difference in charge is established. If no ionic contact were provided, this charge difference would quickly prevent the further flow of electrons. A salt bridge allows the flow of negative or positive ions to maintain a steady-state charge distribution between the oxidation and reduction vessels, while keeping the contents otherwise separate. Other devices for achieving separation of solutions are porous pots and gelled solutions. A porous pot is used in the Bunsen cell. Equilibrium reaction Each half-cell has a characteristic voltage (depending on the metal and its characteristic reduction potential). Each half-cell reaction is an equilibrium between different oxidation states of the ions: when equilibrium is reached, the cell cannot provide further voltage. In the half-cell performing oxidation, the closer the equilibrium lies to the ion/atom with the more positive oxidation state, the more potential this reaction will provide.
Likewise, in the reduction reaction, the closer the equilibrium lies to the ion/atom with the more negative oxidation state, the higher the potential. Cell potential The cell potential can be predicted through the use of electrode potentials (the voltages of each half-cell). These half-cell potentials are defined relative to the assignment of 0 volts to the standard hydrogen electrode (SHE). (See table of standard electrode potentials). The difference in voltage between electrode potentials gives a prediction for the potential measured. When calculating the difference in voltage, one must first rewrite the half-cell reaction equations to obtain a balanced oxidation-reduction equation. Reverse the reduction reaction with the smallest potential (to create an oxidation reaction and an overall positive cell potential). Half-reactions must be multiplied by integers to achieve electron balance. Cell potentials have a possible range of roughly zero to 6 volts. Cells using water-based electrolytes are usually limited to cell potentials less than about 2.5 volts, because the powerful oxidizing and reducing agents that would be needed to produce a higher voltage react with water. Higher cell potentials are possible with cells using other solvents instead of water. For instance, lithium cells with a voltage of 3 volts are commonly available. The cell potential depends on the concentration of the reactants, as well as their type. As the cell is discharged, the concentration of the reactants decreases and the cell potential also decreases. Electrolytic cell An electrolytic cell is an electrochemical cell in which applied electrical energy drives a non-spontaneous redox reaction. They are often used to decompose chemical compounds, in a process called electrolysis. (The Greek word "lysis" (λύσις) means "loosing" or "setting free".) Important examples of electrolysis are the decomposition of water into hydrogen and oxygen, and of bauxite into aluminium and other chemicals. Electroplating (e.g. of copper, silver, nickel or chromium) is done using an electrolytic cell. Electrolysis is a technique that uses a direct electric current (DC). The components of an electrolytic cell are: an electrolyte: usually a solution of water or other solvents in which ions are dissolved. Molten salts such as sodium chloride are also electrolytes. two electrodes (a cathode and an anode), which are electrical terminals consisting of a suitable substance at which oxidation or reduction can take place, and which are maintained at two different electric potentials. When driven by an external voltage (potential difference) applied to the electrodes, the ions in the electrolyte are attracted to the electrode with the opposite potential, where charge-transferring (also called faradaic or redox) reactions can take place. Only with a sufficient external voltage can an electrolytic cell decompose a normally stable, or inert, chemical compound in the solution. Thus the electrical energy provided produces a chemical reaction which would not occur spontaneously otherwise. Key features: non-spontaneous reaction generates current current flows through a wire, and ions flow through a salt bridge anode (positive), cathode (negative) Primary cell A primary cell produces current by irreversible chemical reactions (e.g. small disposable batteries) and is not rechargeable. They are used for their portability, low cost, and short lifetime. 
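As an illustration of the cell-potential prediction described above, the following Python sketch computes the standard potential of a zinc–copper (Daniell) cell and applies the Nernst equation to show how the potential falls as reactants are consumed. The standard reduction potentials, temperature and concentrations are assumed textbook values, not figures taken from this article.

```python
import math

# Assumed standard reduction potentials (V vs. SHE), textbook values:
#   Cu2+ + 2e- -> Cu   E0 = +0.34 V
#   Zn2+ + 2e- -> Zn   E0 = -0.76 V
E0_CU, E0_ZN = 0.34, -0.76
R, F, T = 8.314, 96485.0, 298.15   # gas constant, Faraday constant, 25 deg C
n = 2                              # electrons transferred per Zn/Cu redox pair

# The half-cell with the smaller reduction potential is reversed (oxidation),
# so the standard cell potential is E0(cathode) - E0(anode).
E0_cell = E0_CU - E0_ZN            # +1.10 V for the Daniell cell

def nernst(e0_cell, zn2_conc, cu2_conc):
    """Cell potential for Zn | Zn2+ || Cu2+ | Cu at given molar concentrations.

    Reaction quotient Q = [Zn2+]/[Cu2+] for Zn + Cu2+ -> Zn2+ + Cu.
    """
    q = zn2_conc / cu2_conc
    return e0_cell - (R * T) / (n * F) * math.log(q)

print(f"Standard cell potential: {E0_cell:.2f} V")
# As the cell discharges, [Zn2+] grows and [Cu2+] shrinks, so E drops:
for zn2, cu2 in [(0.1, 1.0), (1.0, 1.0), (1.0, 0.1)]:
    print(f"[Zn2+]={zn2:4.1f} M, [Cu2+]={cu2:4.1f} M -> E = {nernst(E0_cell, zn2, cu2):.3f} V")
```

The last three lines mirror the statement above: as the reactant concentration falls relative to the product, the measured potential decreases.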
Primary cells are made in a range of standard sizes to power small household appliances such as flashlights and portable radios. As chemical reactions proceed in a primary cell, the battery uses up the chemicals that generate the power; when they are gone, the battery stops producing electricity. Primary batteries make up about 90% of the $50 billion battery market, but secondary batteries have been gaining market share. About 15 billion primary batteries are thrown away worldwide every year, virtually all ending up in landfills. Due to the toxic heavy metals and strong acids or alkalis they contain, batteries are hazardous waste. Most municipalities classify them as such and require separate disposal. The energy needed to manufacture a battery is about 50 times greater than the energy it contains. Due to their high pollutant content compared to their small energy content, the primary battery is considered a wasteful, environmentally unfriendly technology. Mainly due to the increasing sales of wireless devices and cordless tools, which cannot be economically powered by primary batteries and come with integral rechargeable batteries, the secondary battery industry has high growth and has slowly been replacing the primary battery in high-end products. Secondary cell A secondary cell produces current by reversible chemical reactions (e.g. a lead-acid car battery) and is rechargeable. Lead-acid batteries are used in an automobile to start an engine and to operate the car's electrical accessories when the engine is not running. The alternator, once the car is running, recharges the battery. A secondary cell can perform as both a galvanic cell and an electrolytic cell. It is a convenient way to store electricity: when current flows one way, the levels of one or more chemicals build up (charging); while it is discharging, they reduce and the resulting electromotive force can do work. Secondary cells are used for their high voltage, low costs, reliability, and long lifetime. Fuel cell A fuel cell is an electrochemical cell that reacts hydrogen fuel with oxygen or another oxidizing agent, to convert chemical energy to electricity. Fuel cells are different from batteries in requiring a continuous source of fuel and oxygen (usually from air) to sustain the chemical reaction, whereas in a battery the chemical energy comes from chemicals already present in the battery. Fuel cells can produce electricity continuously for as long as fuel and oxygen are supplied. They are used for primary and backup power for commercial, industrial and residential buildings and in remote or inaccessible areas. They are also used to power fuel cell vehicles, including forklifts, automobiles, buses, boats, motorcycles and submarines. Fuel cells are classified by the type of electrolyte they use and by the difference in startup time, which ranges from 1 second for proton-exchange membrane fuel cells (PEM fuel cells, or PEMFC) to 10 minutes for solid oxide fuel cells (SOFC). There are many types of fuel cells, but they all consist of: anode At the anode a catalyst causes the fuel to undergo oxidation reactions that generate protons (positively charged hydrogen ions) and electrons. The protons flow from the anode to the cathode through the electrolyte after the reaction. At the same time, electrons are drawn from the anode to the cathode through an external circuit, producing direct current electricity. cathode At the cathode, another catalyst causes hydrogen ions, electrons, and oxygen to react, forming water. 
electrolyte Allows positively charged hydrogen ions (protons) to move between the two sides of the fuel cell. A related technology is the flow battery, in which the fuel can be regenerated by recharging. Individual fuel cells produce relatively small electrical potentials, about 0.7 volts, so cells are "stacked", or placed in series, to create sufficient voltage to meet an application's requirements. In addition to electricity, fuel cells produce water, heat and, depending on the fuel source, very small amounts of nitrogen dioxide and other emissions. The energy efficiency of a fuel cell is generally between 40 and 60%; however, if waste heat is captured in a cogeneration scheme, efficiencies of up to 85% can be obtained. In 2022, the global fuel cell market was estimated to be $6.3 billion, and is expected to increase by 19.9% by 2030. Many countries are attempting to enter the market by setting renewable energy GW goals. See also Activity (chemistry) Cell notation Electrochemical potential Electrochemical engineering Battery (electricity) Rechargeable battery Fuel cell Flow battery Scanning flow cell References External links
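As a rough illustration of the stacking and efficiency figures quoted above (the 0.7 V per cell and the 40–60% efficiency are approximate values from the text, and the target voltage and output power are arbitrary assumptions), a stack can be sized with simple arithmetic:

```python
import math

CELL_VOLTAGE = 0.7           # approximate potential of a single fuel cell, volts
ELECTRICAL_EFFICIENCY = 0.5  # mid-range of the 40-60% figure quoted above

def cells_needed(target_voltage: float) -> int:
    """Number of series-connected cells needed to reach target_voltage."""
    return math.ceil(target_voltage / CELL_VOLTAGE)

def fuel_power_required(electrical_power_w: float) -> float:
    """Chemical (fuel) power needed to deliver a given electrical power."""
    return electrical_power_w / ELECTRICAL_EFFICIENCY

# Example: a hypothetical 48 V, 5 kW backup-power stack
n = cells_needed(48.0)
print(f"Cells in series for 48 V: {n}")                                        # 69 cells
print(f"Fuel power for 5 kW out: {fuel_power_required(5000.0) / 1000:.1f} kW")  # 10.0 kW
```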
Electrochemical cell
[ "Chemistry" ]
2,303
[ "Electrochemistry", "Electrochemical cells" ]
10,340
https://en.wikipedia.org/wiki/Ecdysis
Ecdysis is the moulting of the cuticle in many invertebrates of the clade Ecdysozoa. Since the cuticle of these animals typically forms a largely inelastic exoskeleton, it is shed during growth and a new, larger covering is formed. The remnants of the old, empty exoskeleton are called exuviae. After moulting, an arthropod is described as teneral, a callow; it is "fresh", pale and soft-bodied. Within one or two hours, the cuticle hardens and darkens following a tanning process analogous to the production of leather. During this short phase the animal expands, since growth is otherwise constrained by the rigidity of the exoskeleton. Growth of the limbs and other parts normally covered by the hard exoskeleton is achieved by transfer of body fluids from soft parts before the new skin hardens. A spider with a small abdomen may be undernourished but more probably has recently undergone ecdysis. Some arthropods, especially large insects with tracheal respiration, expand their new exoskeleton by swallowing or otherwise taking in air. The maturation of the structure and colouration of the new exoskeleton might take days or weeks in a long-lived insect; this can make it difficult to identify an individual if it has recently undergone ecdysis. Ecdysis allows damaged tissue and missing limbs to be regenerated or substantially re-formed. Complete regeneration may require a series of moults, the stump becoming a little larger with each moult until the limb is a normal, or near normal, size. Etymology The term ecdysis comes from Ancient Greek ἐκδύω (ekdúō) 'to take off, strip off'. Process In preparation for ecdysis, the arthropod becomes inactive for a period of time, undergoing apolysis or separation of the old exoskeleton from the underlying epidermal cells. For most organisms, the resting period is a stage of preparation during which fluid is secreted from the moulting glands of the epidermal layer and the underpart of the cuticle is loosened. Once the old cuticle has separated from the epidermis, a digesting fluid is secreted into the space between them. However, this fluid remains inactive until the upper part of the new cuticle has been formed. Then, by crawling movements, the organism pushes forward in the old integumentary shell, which splits down the back, allowing the animal to emerge. Often, this initial crack is caused by a combination of movement and increase in pressure of hemolymph within the body, forcing an expansion across its exoskeleton and leading to an eventual crack that allows certain organisms, such as spiders, to extricate themselves. While the old cuticle is being digested, the new layer is secreted. All cuticular structures are shed at ecdysis, including the inner parts of the exoskeleton, which includes terminal linings of the alimentary tract and of the tracheae if they are present. Insects Each stage of development between moults for insects in the taxon Endopterygota is called an instar, or stadium, and each stage between moults of insects in the Exopterygota is called a nymph: there may be up to 15 nymphal stages. Endopterygota tend to have only four or five instars. Endopterygotes have more alternatives to moulting, such as expansion of the cuticle and collapse of air sacs to allow growth of internal organs. The process of moulting in insects begins with the separation of the cuticle from the underlying epidermal cells (apolysis) and ends with the shedding of the old cuticle (ecdysis). In many species it is initiated by an increase in the hormone ecdysone. 
This hormone causes: apolysis – the separation of the cuticle from the epidermis secretion of new cuticle materials beneath the old degradation of the old cuticle After apolysis the insect is known as a pharate. Moulting fluid is then secreted into the exuvial space between the old cuticle and the epidermis, this contains inactive enzymes which are activated only after the new epicuticle is secreted. This prevents the new procuticle from getting digested as it is laid down. The lower regions of the old cuticle, the endocuticle and mesocuticle, are then digested by the enzymes and subsequently absorbed. The exocuticle and epicuticle resist digestion and are hence shed at ecdysis. Spiders Spiders generally change their skin for the first time while still inside the egg sac, and the spiderling that emerges broadly resembles the adult. The number of moults varies, both between species and sexes, but generally will be between five times and nine times before the spider reaches maturity. Not surprisingly, since males are generally smaller than females, the males of many species mature faster and do not undergo ecdysis as many times as the females before maturing. Members of the Mygalomorphae are very long-lived, sometimes 20 years or more; they moult annually even after they mature. Spiders stop feeding at some time before moulting, usually for several days. The physiological processes of releasing the old exoskeleton from the tissues beneath typically cause various colour changes, such as darkening. If the old exoskeleton is not too thick it may be possible to see new structures, such as setae, from the outside. However, contact between the nerves and the old exoskeleton is maintained until a very late stage in the process. The new, teneral exoskeleton has to accommodate a larger frame than the previous instar, while the spider has had to fit into the previous exoskeleton until it has been shed. This means the spider does not fill out the new exoskeleton completely, so it commonly appears somewhat wrinkled. Most species of spiders hang from silk during the entire process, either dangling from a drop line, or fastening their claws into webbed fibres attached to a suitable base. The discarded, dried exoskeleton typically remains hanging where it was abandoned once the spider has left. To open the old exoskeleton, the spider generally contracts its abdomen (opisthosoma) to supply enough fluid to pump into the prosoma with sufficient pressure to crack it open along its lines of weakness. The carapace lifts off from the front, like a helmet, as its surrounding skin ruptures, but it remains attached at the back. Now the spider works its limbs free and typically winds up dangling by a new thread of silk attached to its own exuviae, which in turn hang from the original silk attachment. At this point the spider is a callow; it is teneral and vulnerable. As it dangles, its exoskeleton hardens and takes shape. The process may take minutes in small spiders, or some hours in the larger Mygalomorphs. Some spiders, such as some Synema species, members of the Thomisidae (crab spiders), mate while the female is still callow, during which time she is unable to eat the male. Eurypterids Eurypterids are a group of chelicerates that became extinct in the Late Permian. They underwent ecdysis similarly to extant chelicerates, and most fossils are thought to be of exuviae, rather than cadavers. See also Ecdysteroid References External links Animal developmental biology Protostome anatomy Ethology it:Muta (biologia)
Ecdysis
[ "Biology" ]
1,623
[ "Behavioural sciences", "Ethology", "Behavior" ]
10,356
https://en.wikipedia.org/wiki/Endothermic%20process
An endothermic process is a chemical or physical process that absorbs heat from its surroundings. In terms of thermodynamics, it is a thermodynamic process with an increase in the enthalpy H (or internal energy U) of the system. In an endothermic process, the heat that a system absorbs is thermal energy transfer into the system. Thus, an endothermic reaction generally leads to an increase in the temperature of the system and a decrease in that of the surroundings. The term was coined by 19th-century French chemist Marcellin Berthelot. The term endothermic comes from the Greek ἔνδον (endon) meaning 'within' and θερμ- (therm) meaning 'hot' or 'warm'. An endothermic process may be a chemical process, such as dissolving ammonium nitrate (NH4NO3) in water (H2O), or a physical process, such as the melting of ice cubes. The opposite of an endothermic process is an exothermic process, one that releases or "gives out" energy, usually in the form of heat and sometimes as electrical energy. Thus, endo in endothermic refers to energy or heat going in, and exo in exothermic refers to energy or heat going out. In each term (endothermic and exothermic) the prefix refers to where heat (or electrical energy) goes as the process occurs. In chemistry Due to bonds breaking and forming during various processes (changes in state, chemical reactions), there is usually a change in energy. If the energy of the forming bonds is greater than the energy of the breaking bonds, then energy is released. This is known as an exothermic reaction. However, if more energy is needed to break the bonds than the energy being released, energy is taken up. Therefore, it is an endothermic reaction. Details Whether a process can occur spontaneously depends not only on the enthalpy change ΔH but also on the entropy change (ΔS) and absolute temperature T. If a process is a spontaneous process at a certain temperature, the products have a lower Gibbs free energy G = H − TS than the reactants (an exergonic process), even if the enthalpy of the products is higher. Thus, an endothermic process usually requires a favorable entropy increase (ΔS > 0) in the system that overcomes the unfavorable increase in enthalpy, so that ΔG = ΔH − TΔS < 0 still holds. While endothermic phase transitions into more disordered states of higher entropy, e.g. melting and vaporization, are common, spontaneous chemical processes at moderate temperatures are rarely endothermic. The enthalpy increase in a hypothetical strongly endothermic process usually results in ΔG = ΔH − TΔS > 0, which means that the process will not occur (unless driven by electrical or photon energy). An example of an endothermic and exergonic process is C6H12O6 + 6 H2O → 12 H2 + 6 CO2. Examples Evaporation Sublimation Cracking of alkanes Thermal decomposition Hydrolysis Nucleosynthesis of elements heavier than nickel in stellar cores High-energy neutrons can produce tritium from lithium-7 in an endothermic process, consuming 2.466 MeV. This was discovered when the 1954 Castle Bravo nuclear test produced an unexpectedly high yield. Nuclear fusion of elements heavier than iron in supernovae Dissolving together barium hydroxide and ammonium chloride Distinction between endothermic and endotherm The terms "endothermic" and "endotherm" are both derived from Greek "within" and "heat", but depending on context, they can have very different meanings. In physics, thermodynamics applies to processes involving a system and its surroundings, and the term "endothermic" is used to describe a reaction where energy is taken "(with)in" by the system (vs. 
an "exothermic" reaction, which releases energy "outwards"). In biology, thermoregulation is the ability of an organism to maintain its body temperature, and the term "endotherm" refers to an organism that can do so from "within" by using the heat released by its internal bodily functions (vs. an "ectotherm", which relies on external, environmental heat sources) to maintain an adequate temperature. References External links Exothermic and Endothermic – MSDS Hyper-Glossary at Interactive Learning Paradigms, Incorporated Thermochemistry Thermodynamic processes Chemical thermodynamics
Endothermic process
[ "Physics", "Chemistry" ]
961
[ "Chemical thermodynamics", "Thermochemistry", "Thermodynamic processes", "Thermodynamics" ]
10,359
https://en.wikipedia.org/wiki/Amiga%20Enhanced%20Chip%20Set
The Enhanced Chip Set (ECS) is the second generation of the Amiga computer's chipset, offering minor improvements over the original chipset (OCS) design. ECS was introduced in 1990 with the launch of the Amiga 3000. Another version was developed around 1994 but was unreleased due to Commodore International filing for bankruptcy. Amigas produced from 1990 onwards featured a mix of OCS and ECS chips, such as later versions of the Amiga 500 and the Commodore CDTV. Other ECS models were the Amiga 500+ in 1991 and lastly the Amiga 600 in 1992. Features The enhanced chip set had two new chips, the 8375 HR Agnus and 8373 HR Denise. The ECS Denise chip offers Productivity VGA output (640×480 non-interlaced) and SuperHiRes (1280×200 or 1280×256) display modes (also available in interlaced mode), which are, however, limited to only 4 on-screen colors. The Productivity output required a multi-sync monitor. It also allowed for a greyscale resolution of 1008×800 pixels with the A2024 monitor. Some Amigas, such as the Amiga 500 and the Amiga 2000, came with the ECS version of the Agnus chip but the original chipset version (OCS) of the Denise chip. It is possible to upgrade one or both of them to obtain partial or full ECS functionality by replacing OCS chips with ECS versions. Not all OCS chipset computers were upgradable; some, such as the Amiga 2000-A and the Amiga 1000, had different Agnus sockets. ECS was followed by the third-generation AGA chipset with the launch of the Amiga 4000 and Amiga 1200 in 1992. See also Amiga custom chips Amiga Ranger Chipset Advanced Graphics Architecture Amiga Advanced Architecture chipset AA+ Chipset Hombre chipset References Amiga chipsets Graphics chips MOS Technology integrated circuits Sound chips AmigaOS
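As a rough illustration of what such planar display modes cost in chip RAM (this is generic bitplane arithmetic rather than a figure from the article: one bit per pixel per bitplane, and a 4-colour screen uses two bitplanes), a quick Python sketch:

```python
def bitplane_bytes(width: int, height: int, colors: int) -> int:
    """Chip-RAM needed for a planar screen with the given number of colours.

    Planar Amiga displays store one bit per pixel per bitplane;
    `colors` must be a power of two (2 colours -> 1 plane, 4 -> 2 planes, ...).
    """
    bitplanes = (colors - 1).bit_length()      # e.g. 4 colours -> 2 bitplanes
    return width * height // 8 * bitplanes     # bytes per plane, times planes

# ECS Productivity mode, 640x480 in 4 colours:
print(bitplane_bytes(640, 480, 4))    # 76800 bytes = 75 KiB
# ECS SuperHiRes, 1280x256 in 4 colours:
print(bitplane_bytes(1280, 256, 4))   # 81920 bytes = 80 KiB
```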
Amiga Enhanced Chip Set
[ "Technology" ]
397
[ "AmigaOS", "Computing stubs", "Computing platforms", "Computer hardware stubs" ]
10,363
https://en.wikipedia.org/wiki/European%20Space%20Agency
The European Space Agency (ESA) is a 23-member intergovernmental body devoted to space exploration. With its headquarters in Paris and a staff of around 2,547 people globally as of 2023, the ESA was founded in 1975. Its 2024 annual budget was €7.79 billion. The ESA's space flight programme includes human spaceflight (mainly through participation in the International Space Station program); the launch and operation of crewless exploration missions to other planets (such as Mars) and the Moon; Earth observation, science and telecommunication; designing launch vehicles; and maintaining a major spaceport, the Guiana Space Centre at Kourou (French Guiana), France. The main European launch vehicle Ariane 6 will be operated through Arianespace with the ESA sharing in the costs of launching and further developing this launch vehicle. The agency is also working with NASA to manufacture the Orion spacecraft service module that flies on the Space Launch System. History Foundation After World War II, many European scientists left Western Europe in order to work with the United States. Although the 1950s boom made it possible for Western European countries to invest in research and specifically in space-related activities, Western European scientists realised solely national projects would not be able to compete with the two main superpowers. In 1958, only months after the Sputnik shock, Edoardo Amaldi (Italy) and Pierre Auger (France), two prominent members of the Western European scientific community, met to discuss the foundation of a common Western European space agency. The meeting was attended by scientific representatives from eight countries. The Western European nations decided to have two agencies: one concerned with developing a launch system, ELDO (European Launcher Development Organisation), and the other the precursor of the European Space Agency, ESRO (European Space Research Organisation). The latter was established on 20 March 1964 by an agreement signed on 14 June 1962. From 1968 to 1972, ESRO launched seven research satellites, but ELDO was not able to deliver a launch vehicle. Both agencies struggled with the underfunding and diverging interests of their participants. The ESA in its current form was founded with the ESA Convention in 1975, when ESRO was merged with ELDO. The ESA had ten founding member states: Belgium, Denmark, France, West Germany, Italy, the Netherlands, Spain, Sweden, Switzerland, and the United Kingdom. These signed the ESA Convention in 1975 and deposited the instruments of ratification by 1980, when the convention came into force. During this interval the agency functioned in a de facto fashion. The ESA launched its first major scientific mission in 1975, Cos-B, a space probe monitoring gamma-ray emissions in the universe, which was first worked on by ESRO. Later activities The ESA collaborated with NASA on the International Ultraviolet Explorer (IUE), the world's first high-orbit telescope, which was launched in 1978 and operated successfully for 18 years. A number of successful Earth-orbit projects followed, and in 1986 the ESA began Giotto, its first deep-space mission, to study the comets Halley and Grigg–Skjellerup. Hipparcos, a star-mapping mission, was launched in 1989 and in the 1990s SOHO, Ulysses and the Hubble Space Telescope were all jointly carried out with NASA. Later scientific missions in cooperation with NASA include the Cassini–Huygens space probe, to which the ESA contributed by building the Titan landing module Huygens. 
As the successor of ELDO, the ESA has also constructed rockets for scientific and commercial payloads. Ariane 1, launched in 1979, carried mostly commercial payloads into orbit from 1984 onward. The next two versions of the Ariane rocket were intermediate stages in the development of a more advanced launch system, the Ariane 4, which operated between 1988 and 2003 and established the ESA as the world leader in commercial space launches in the 1990s. Although the succeeding Ariane 5 experienced a failure on its first flight, it has since firmly established itself within the heavily competitive commercial space launch market with 112 successful launches until 2021. The successor launch vehicle, the Ariane 6, is under development and had a successful long-firing engine test in November 2023. The ESA plans for the Ariane 6 to launch in June or July 2024. The beginning of the new millennium saw the ESA become, along with agencies like NASA, JAXA, ISRO, the CSA and Roscosmos, one of the major participants in scientific space research. Although the ESA had relied on co-operation with NASA in previous decades, especially the 1990s, changed circumstances (such as tough legal restrictions on information sharing by the United States military) led to decisions to rely more on itself and on co-operation with Russia. A 2011 press issue thus stated: Notable ESA programmes include SMART-1, a probe testing cutting-edge space propulsion technology, the Mars Express and Venus Express missions, as well as the development of the Ariane 5 rocket and its role in the ISS partnership. The ESA maintains its scientific and research projects mainly for astronomy-space missions such as Corot, launched on 27 December 2006, a milestone in the search for exoplanets. On 21 January 2019, ArianeGroup and Arianespace announced a one-year contract with the ESA to study and prepare for a mission to mine the Moon for lunar regolith. In 2021 the ESA ministerial council agreed to the "Matosinhos manifesto" which set three priority areas (referred to as accelerators) "space for a green future, a rapid and resilient crisis response, and the protection of space assets", and two further high visibility projects (referred to as inspirators) an icy moon sample return mission; and human space exploration. In the same year the recruitment process began for the 2022 European Space Agency Astronaut Group. 1 July 2023 saw the launch of the Euclid spacecraft, developed jointly with the Euclid Consortium, after 10 years of planning and building it is designed to better understand dark energy and dark matter by accurately measuring the accelerating expansion of the universe. Facilities The agency's facilities date back to ESRO and are deliberately distributed among various countries and areas. The most important are the following centres: ESA headquarters in Paris, France; ESA science missions are based at ESTEC in Noordwijk, Netherlands; Earth Observation missions at the ESA Centre for Earth Observation in Frascati, Italy; ESA Mission Control (ESOC) is in Darmstadt, Germany; The European Astronaut Centre (EAC) that trains astronauts for future missions is situated in Cologne, Germany; The European Centre for Space Applications and Telecommunications (ECSAT), a research institute created in 2009, is located in Harwell, England, United Kingdom; The European Space Astronomy Centre (ESAC) is located in Villanueva de la Cañada, Madrid, Spain. 
The European Space Security and Education Centre (ESEC), located in Redu, Belgium; The ESTRACK tracking and deep space communication network. Many other facilities are operated by national space agencies in close collaboration with ESA. Esrange near Kiruna in Sweden; Guiana Space Centre in Kourou, France; Toulouse Space Centre, France; Institute of Space Propulsion in Lampoldshausen, Germany; Columbus Control Centre in Oberpfaffenhofen, Germany. Mission The treaty establishing the European Space Agency reads: The ESA is responsible for setting a unified space and related industrial policy, recommending space objectives to the member states, and integrating national programmes, like satellite development, into the European programme as much as possible. Jean-Jacques Dordain – ESA's Director General (2003–2015) – outlined the European Space Agency's mission in a 2003 interview: Activities and programmes The ESA describes its work in two overlapping ways: For the general public, the various fields of work are described as "Activities". Budgets are organised as "Programmes". These are either mandatory or optional. Activities According to the ESA website, the activities are: Observing the Earth Human and Robotic Exploration Launchers Navigation Space Science Space Engineering & Technology Operations Telecommunications & Integrated Applications Preparing for the Future Space for Climate Programmes Mandatory Every member country (known as 'Member States') must contribute to these programmes: The European Space Agency Science Programme is a long-term programme of space science missions. Technology Development Element Programme Science Core Technology Programme General Study Programme European Component Initiative Optional Depending on their individual choices the countries can contribute to the following programmes, becoming 'Participating States', listed according to: Employment As of 2023, the ESA employs around 2,547 people, as well as thousands of contractors. Initially, new employees are contracted for an extendable four-year term, which can run until the organization's retirement age of 63. According to the ESA's documents, staff can receive a myriad of perks, such as financial childcare support, retirement plans, and financial help when migrating. The ESA also prevents employees from disclosing any private documents or correspondence to outside parties. Ars Technica's 2023 report, which contained testimonies of 18 people, suggested that there is widespread harassment between management and its employees, especially with its contractors. Since the ESA is an international organization, unaffiliated with any single nation, any form of legal action is difficult to raise against the organization. Member states, funding and budget Membership and contribution to the ESA Member states participate to varying degrees in both mandatory space programmes and those that are optional. The mandatory programmes made up 25% of total expenditures while optional space programmes were the other 75%. The ESA has traditionally implemented a policy of "georeturn", where funds that ESA member states provide to the ESA "are returned in the form of contracts to companies in those countries." By 2015, the ESA was an intergovernmental organisation of 22 member states. The 2008 ESA budget amounted to €3.0 billion whilst the 2009 budget amounted to €3.6 billion. 
The total budget amounted to about €3.7 billion in 2010, €3.99 billion in 2011, €4.02 billion in 2012, €4.28 billion in 2013, €4.10 billion in 2014, €4.43 billion in 2015, €5.25 billion in 2016, €5.75 billion in 2017, €5.60 billion in 2018, €5.72 billion in 2019, €6.68 billion in 2020, €6.49 billion in 2021, €7.15 billion in 2022, €7.46 billion in 2023 and €7.79 billion in 2024. English and French are the two official languages of the ESA. Additionally, official documents are also provided in German, and documents regarding Spacelab have also been provided in Italian. If found appropriate, the agency may conduct its correspondence in any language of a member state. The following table lists all the member states and adjunct members, their ESA convention ratification dates, and their contributions as of 2024: Non-full member states Previously associated members were Austria, Norway, Finland and Slovenia, all of which later joined the ESA as full members. Since January 2025 there have been four associate members: Latvia, Lithuania, Slovakia and Canada. The three European members have shown interest in full membership and may eventually apply in the coming years. Latvia Latvia became the second current associated member on 30 June 2020, when the Association Agreement was signed by ESA Director Jan Wörner and the Minister of Education and Science of Latvia, Ilga Šuplinska, in Riga. The Saeima ratified it on 27 July. Lithuania In May 2021, Lithuania became the third current associated member. As a consequence, its citizens became eligible to apply to the 2022 ESA Astronaut group, applications for which were scheduled to close one week later. The deadline was therefore extended by three weeks to allow Lithuanians a fair chance to apply. Slovakia Slovakia's Associate membership came into effect on 13 October 2022, for an initial duration of seven years. The Association Agreement supersedes the European Cooperating State (ECS) Agreement, which entered into force upon Slovakia's subscription to the Plan for European Cooperating States Charter on 4 February 2016, a scheme introduced at ESA in 2001. The ECS Agreement was subsequently extended until 3 August 2022. Canada Since 1 January 1979, Canada has had the special status of a Cooperating State within the ESA. By virtue of this accord, the Canadian Space Agency takes part in the ESA's deliberative bodies and decision-making and also in the ESA's programmes and activities. Canadian firms can bid for and receive contracts to work on programmes. The accord has a provision ensuring a fair industrial return to Canada. The most recent Cooperation Agreement was signed on 15 December 2010 with a term extending to 2020. For 2014, Canada's annual assessed contribution to the ESA general budget was €6,059,449 (CAD$8,559,050). For 2017, Canada increased its annual contribution to €21,600,000 (CAD$30,000,000). Budget appropriation and allocation The ESA is funded from annual contributions by national governments of members as well as from an annual contribution by the European Union (EU). The budget of the ESA was €5.250 billion in 2016. Every 3–4 years, ESA member states agree on a budget plan for several years at an ESA member states conference. This plan can be amended in future years; however, it provides the major guideline for the ESA for several years. The 2016 budget allocations for major areas of the ESA activity are shown in the chart on the right. 
Countries typically have their own space programmes that differ in how they operate organisationally and financially with the ESA. For example, the French space agency CNES has a total budget of €2,015 million, of which €755 million is paid as a direct financial contribution to the ESA. Several space-related projects are joint projects between national space agencies and the ESA (e.g. COROT). Also, the ESA is not the only European governmental space organisation (for example, the European Union Satellite Centre and the European Union Space Programme Agency). Enlargement After the decision of the ESA Council of 21/22 March 2001, the procedure for accession of the European states was detailed in the document titled "The Plan for European Co-operating States (PECS)". Nations that want to become a full member of the ESA do so in three stages. First a Cooperation Agreement is signed between the country and the ESA. In this stage, the country has very limited financial responsibilities. If a country wants to co-operate more fully with the ESA, it signs a European Cooperating State (ECS) Agreement, though to be a candidate for this agreement, a country must be European. The ECS Agreement makes companies based in the country eligible for participation in ESA procurements. The country can also participate in all ESA programmes, except for the Basic Technology Research Programme. While the financial contribution of the country concerned increases, it is still much lower than that of a full member state. The agreement is normally followed by a Plan for European Cooperating State (or PECS Charter). This is a 5-year programme of basic research and development activities aimed at improving the nation's space industry capacity. At the end of the 5-year period, the country can either begin negotiations to become a full member state or an associated state, or sign a new PECS Charter. Many countries, most of which joined the EU in 2004 or 2007, have started to co-operate with the ESA on various levels: During the Ministerial Meeting in December 2014, ESA ministers approved a resolution calling for discussions to begin with Israel, Australia and South Africa on future association agreements. The ministers noted that "concrete cooperation is at an advanced stage" with these nations and that "prospects for mutual benefits are existing". A separate space exploration strategy resolution calls for further co-operation with the United States, Russia and China on "LEO exploration, including a continuation of ISS cooperation and the development of a robust plan for the coordinated use of space transportation vehicles and systems for exploration purposes, participation in robotic missions for the exploration of the Moon, the robotic exploration of Mars, leading to a broad Mars Sample Return mission in which Europe should be involved as a full partner, and human missions beyond LEO in the longer term." In August 2019, the ESA and the Australian Space Agency signed a joint statement of intent "to explore deeper cooperation and identify projects in a range of areas including deep space, communications, navigation, remote asset management, data analytics and mission support." Details of the cooperation were laid out in a framework agreement signed by the two entities. On 17 November 2020, ESA signed a memorandum of understanding (MOU) with the South African National Space Agency (SANSA). SANSA CEO Dr. Valanathan Munsami tweeted: "Today saw another landmark event for SANSA with the signing of an MoU with the ESA. 
This builds on initiatives that we have been discussing for a while already and which gives effect to these. Thanks Jan for your hand of friendship and making this possible." Launch vehicles The ESA currently has two operational launch vehicles Vega-C and Ariane 6. Rocket launches are carried out by Arianespace, which has 23 shareholders representing the industry that manufactures the Ariane 5 as well as CNES, at the ESA's Guiana Space Centre. Because many communication satellites have equatorial orbits, launches from French Guiana are able to take larger payloads into space than from spaceports at higher latitudes. In addition, equatorial launches give spacecraft an extra 'push' of nearly 500 m/s due to the higher rotational velocity of the Earth at the equator compared to near the Earth's poles where rotational velocity approaches zero. Ariane 6 Ariane 6 is a heavy lift expendable launch vehicle developed by Arianespace. The Ariane 6 entered into its inaugural flight campaign on 26 April 2024 with the flight conducted on 9 July 2024. Vega-C Vega is the ESA's carrier for small satellites. Developed by seven ESA members led by Italy. It is capable of carrying a payload with a mass of between 300 and 1500 kg to an altitude of 700 km, for low polar orbit. Its maiden launch from Kourou was on 13 February 2012. Vega began full commercial exploitation in December 2015. The rocket has three solid propulsion stages and a liquid propulsion upper stage (the AVUM) for accurate orbital insertion and the ability to place multiple payloads into different orbits. A larger version of the Vega launcher, Vega-C had its first flight in July 2022. The new evolution of the rocket incorporates a larger first stage booster, the P120C replacing the P80, an upgraded Zefiro (rocket stage) second stage, and the AVUM+ upper stage. This new variant enables larger single payloads, dual payloads, return missions, and orbital transfer capabilities. Ariane launch vehicle development funding Historically, the Ariane family rockets have been funded primarily "with money contributed by ESA governments seeking to participate in the program rather than through competitive industry bids. This [has meant that] governments commit multiyear funding to the development with the expectation of a roughly 90% return on investment in the form of industrial workshare." ESA is proposing changes to this scheme by moving to competitive bids for the development of the Ariane 6. Future rocket development Future projects include the Prometheus reusable engine technology demonstrator, Phoebus (an upgraded second stage for Ariane 6), and Themis (a reusable first stage). Human space flight Formation and development At the time the ESA was formed, its main goals did not encompass human space flight; rather it considered itself to be primarily a scientific research organisation for uncrewed space exploration in contrast to its American and Soviet counterparts. It is therefore not surprising that the first non-Soviet European in space was not an ESA astronaut on a European space craft; it was Czechoslovak Vladimír Remek who in 1978 became the first non-Soviet or American in space (the first man in space being Yuri Gagarin of the Soviet Union) – on a Soviet Soyuz spacecraft, followed by the Pole Mirosław Hermaszewski and East German Sigmund Jähn in the same year. This Soviet co-operation programme, known as Intercosmos, primarily involved the participation of Eastern bloc countries. 
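The extra "push" of nearly 500 m/s quoted above for near-equatorial launch sites follows directly from the Earth's rotation. A quick check, using the nominal equatorial radius and sidereal day and ignoring launch azimuth and the Earth's oblateness, is sketched below in Python:

```python
import math

EQUATORIAL_RADIUS_M = 6_378_137.0      # WGS-84 equatorial radius
SIDEREAL_DAY_S = 86_164.1              # one full rotation of the Earth

def surface_rotation_speed(latitude_deg: float) -> float:
    """Eastward surface speed due to Earth's rotation at a given latitude (m/s)."""
    circumference = 2 * math.pi * EQUATORIAL_RADIUS_M * math.cos(math.radians(latitude_deg))
    return circumference / SIDEREAL_DAY_S

print(f"Kourou (~5.2 N): {surface_rotation_speed(5.2):6.1f} m/s")   # ~463 m/s
print(f"Equator:         {surface_rotation_speed(0.0):6.1f} m/s")   # ~465 m/s
print(f"60 N latitude:   {surface_rotation_speed(60.0):6.1f} m/s")  # ~233 m/s
```

The difference between Kourou and a high-latitude site accounts for most of the advantage described in the text.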
In 1982, however, Jean-Loup Chrétien became the first non-Communist Bloc astronaut on a flight to the Soviet Salyut 7 space station. Because Chrétien did not officially fly into space as an ESA astronaut, but rather as a member of the French CNES astronaut corps, the German Ulf Merbold is considered the first ESA astronaut to fly into space. He participated in the STS-9 Space Shuttle mission that included the first use of the European-built Spacelab in 1983. STS-9 marked the beginning of an extensive ESA/NASA joint partnership that included dozens of space flights of ESA astronauts in the following years. Some of these missions with Spacelab were fully funded and organisationally and scientifically controlled by the ESA (such as two missions by Germany and one by Japan) with European astronauts as full crew members rather than guests on board. Beside paying for Spacelab flights and seats on the shuttles, the ESA continued its human space flight co-operation with the Soviet Union and later Russia, including numerous visits to Mir. During the latter half of the 1980s, European human space flights changed from being the exception to routine and therefore, in 1990, the European Astronaut Centre in Cologne, Germany was established. It selects and trains prospective astronauts and is responsible for the co-ordination with international partners, especially with regard to the International Space Station. As of 2006, the ESA astronaut corps officially included twelve members, including nationals from most large European countries except the United Kingdom. In 2008, the ESA started to recruit new astronauts so that final selection would be due in spring 2009. Almost 10,000 people registered as astronaut candidates before registration ended in June 2008. 8,413 fulfilled the initial application criteria. Of the applicants, 918 were chosen to take part in the first stage of psychological testing, which narrowed down the field to 192. After two-stage psychological tests and medical evaluation in early 2009, as well as formal interviews, six new members of the European Astronaut Corps were selected – five men and one woman. Crew vehicles In the 1980s, France pressed for an independent European crew launch vehicle. Around 1978, it was decided to pursue a reusable spacecraft model and starting in November 1987 a project to create a mini-shuttle by the name of Hermes was introduced. The craft was comparable to early proposals for the Space Shuttle and consisted of a small reusable spaceship that would carry 3 to 5 astronauts and 3 to 4 metric tons of payload for scientific experiments. With a total maximum weight of 21 metric tons it would have been launched on the Ariane 5 rocket, which was being developed at that time. It was planned solely for use in low Earth orbit space flights. The planning and pre-development phase concluded in 1991; the production phase was never fully implemented because at that time the political landscape had changed significantly. With the fall of the Soviet Union, the ESA looked forward to co-operation with Russia to build a next-generation space vehicle. Thus the Hermes programme was cancelled in 1995 after about 3 billion dollars had been spent. The Columbus space station programme had a similar fate. In the 21st century, the ESA started new programmes in order to create its own crew vehicles, most notable among its various projects and proposals is Hopper, whose prototype by EADS, called Phoenix, has already been tested. 
While projects such as Hopper are neither concrete nor to be realised within the next decade, other possibilities for human spaceflight in co-operation with the Russian Space Agency have emerged. Following talks with the Russian Space Agency in 2004 and June 2005, a co-operation between the ESA and the Russian Space Agency was announced to jointly work on the Russian-designed Kliper, a reusable spacecraft that would be available for space travel beyond LEO (e.g. the moon or even Mars). It was speculated that Europe would finance part of it. A €50 million participation study for Kliper, which was expected to be approved in December 2005, was finally not approved by ESA member states. The Russian state tender for the project was subsequently cancelled in 2006. In June 2006, ESA member states granted 15 million to the Crew Space Transportation System (CSTS) study, a two-year study to design a spacecraft capable of going beyond Low-Earth orbit based on the current Soyuz design. This project was pursued with Roskosmos instead of the cancelled Kliper proposal. A decision on the actual implementation and construction of the CSTS spacecraft was contemplated for 2008. In mid-2009 EADS Astrium was awarded a €21 million study into designing a crew vehicle based on the European ATV which is believed to now be the basis of the Advanced Crew Transportation System design. In November 2012, the ESA decided to join NASA's Orion programme. The ATV would form the basis of a propulsion unit for NASA's new crewed spacecraft. The ESA may also seek to work with NASA on Orion's launch system as well in order to secure a seat on the spacecraft for its own astronauts. In September 2014, the ESA signed an agreement with Sierra Nevada Corporation for co-operation in Dream Chaser project. Further studies on the Dream Chaser for European Utilization or DC4EU project were funded, including the feasibility of launching a Europeanised Dream Chaser onboard Ariane 5. Cooperation with other countries and organisations The ESA has signed co-operation agreements with the following states that currently neither plan to integrate as tightly with ESA institutions as Canada, nor envision future membership of the ESA: Argentina, Brazil, China, India (for the Chandrayan mission), Russia and Turkey. Additionally, the ESA has joint projects with the EUSPA of the European Union, NASA of the United States and is participating in the International Space Station together with the United States (NASA), Russia and Japan (JAXA). National space organisations of member states The Centre National d'Études Spatiales (CNES) (National Centre for Space Study) is the French government space agency (administratively, a "public establishment of industrial and commercial character"). Its headquarters are in central Paris. CNES is the main participant on the Ariane project. Indeed, CNES designed and tested all Ariane family rockets (mainly from its centre in Évry near Paris) The UK Space Agency is a partnership of the UK government departments which are active in space. Through the UK Space Agency, the partners provide delegates to represent the UK on the various ESA governing bodies. Each partner funds its own programme. The Italian Space Agency (Agenzia Spaziale Italiana or ASI) was founded in 1988 to promote, co-ordinate and conduct space activities in Italy. 
Operating under the Ministry of the Universities and of Scientific and Technological Research, the agency cooperates with numerous entities active in space technology and with the president of the Council of Ministers. Internationally, the ASI provides Italy's delegation to the Council of the European Space Agency and to its subordinate bodies. The German Aerospace Center (DLR) (German: Deutsches Zentrum für Luft- und Raumfahrt e. V.) is the national research centre for aviation and space flight of the Federal Republic of Germany and of other member states in the Helmholtz Association. Its extensive research and development projects are included in national and international cooperative programmes. In addition to its research projects, the centre is the assigned space agency of Germany bestowing headquarters of German space flight activities and its associates. The Instituto Nacional de Técnica Aeroespacial (INTA) (National Institute for Aerospace Technique) is a Public Research Organisation specialised in aerospace research and technology development in Spain. Among other functions, it serves as a platform for space research and acts as a significant testing facility for the aeronautic and space sector in the country. NASA The ESA has a long history of collaboration with NASA. Since ESA's astronaut corps was formed, the Space Shuttle has been the primary launch vehicle used by the ESA's astronauts to get into space through partnership programmes with NASA. In the 1980s and 1990s, the Spacelab programme was an ESA-NASA joint research programme that had the ESA develop and manufacture orbital labs for the Space Shuttle for several flights in which the ESA participates with astronauts in experiments. In robotic science mission and exploration missions, NASA has been the ESA's main partner. Cassini–Huygens was a joint NASA-ESA mission, along with the Infrared Space Observatory, INTEGRAL, SOHO, and others. Also, the Hubble Space Telescope is a joint project of NASA and the ESA. Future ESA-NASA joint projects include the James Webb Space Telescope and the proposed Laser Interferometer Space Antenna. NASA has supported the ESA's MarcoPolo-R mission which landed on asteroid Bennu in October 2020 and is scheduled to return a sample to Earth for further analysis in 2023. NASA and the ESA will also likely join for a Mars sample-return mission. In October 2020, the ESA entered into a memorandum of understanding (MOU) with NASA to work together on the Artemis program, which will provide an orbiting Lunar Gateway and also accomplish the first crewed lunar landing in 50 years, whose team will include the first woman on the Moon. Astronaut selection announcements are expected within two years of the 2024 scheduled launch date. The ESA also purchases seats on the NASA operated Commercial Crew Program. The first ESA astronaut to be on a Commercial Crew Program mission is Thomas Pesquet. Pesquet launched into space aboard Crew Dragon Endeavour on the Crew-2 mission. The ESA also has seats on Crew-3 with Matthias Maurer and Crew-4 with Samantha Cristoforetti. SpaceX In 2023, following the successful launch of the Euclid telescope in July on a Falcon 9 rocket, the ESA approached SpaceX to launch four Galileo communication satellites on two Falcon 9 rockets in 2024, however it would require approval from the European Commission and all member states of the European Union to proceed. 
Cooperation with other space agencies Since China has invested more money into space activities, the Chinese Space Agency has sought international partnerships. Besides the Russian Space Agency, ESA is one of its most important partners. Both space agencies cooperated in the development of the Double Star Mission. In 2017, the ESA sent two astronauts to China for two weeks sea survival training with Chinese astronauts in Yantai, Shandong. The ESA entered into a major joint venture with Russia in the form of the CSTS, the preparation of French Guiana spaceport for launches of Soyuz-2 rockets and other projects. With India, the ESA agreed to send instruments into space aboard the ISRO's Chandrayaan-1 in 2008. The ESA is also co-operating with Japan, the most notable current project in collaboration with JAXA is the BepiColombo mission to Mercury. International Space Station With regard to the International Space Station (ISS), the ESA is not represented by all of its member states: 11 of the 22 ESA member states currently participate in the project: Belgium, Denmark, France, Germany, Italy, Netherlands, Norway, Spain, Sweden, Switzerland and United Kingdom. Austria, Finland and Ireland chose not to participate, because of lack of interest or concerns about the expense of the project. Portugal, Luxembourg, Greece, the Czech Republic, Romania, Poland, Estonia and Hungary joined ESA after the agreement had been signed. The ESA takes part in the construction and operation of the ISS, with contributions such as Columbus, a science laboratory module that was brought into orbit by NASA's STS-122 Space Shuttle mission, and the Cupola observatory module that was completed in July 2005 by Alenia Spazio for the ESA. The current estimates for the ISS are approaching €100 billion in total (development, construction and 10 years of maintaining the station) of which the ESA has committed to paying €8 billion. About 90% of the costs of the ESA's ISS share will be contributed by Germany (41%), France (28%) and Italy (20%). German ESA astronaut Thomas Reiter was the first long-term ISS crew member. The ESA has developed the Automated Transfer Vehicle for ISS resupply. Each ATV has a cargo capacity of . The first ATV, Jules Verne, was launched on 9 March 2008 and on 3 April 2008 successfully docked with the ISS. This manoeuvre, considered a major technical feat, involved using automated systems to allow the ATV to track the ISS, moving at 27,000 km/h, and attach itself with an accuracy of 2 cm. Five vehicles were launched before the program ended with the launch of the fifth ATV, Georges Lemaître, in 2014. As of 2020, the spacecraft establishing supply links to the ISS are the Russian Progress and Soyuz, Japanese Kounotori (HTV), and the United States vehicles Cargo Dragon 2 and Cygnus stemmed from the Commercial Resupply Services program. European Life and Physical Sciences research on board the International Space Station (ISS) is mainly based on the European Programme for Life and Physical Sciences in Space programme that was initiated in 2001. 
Facilities ESA Headquarters, Paris, France European Space Operations Centre (ESOC), Darmstadt, Germany European Space Research and Technology Centre (ESTEC), Noordwijk, Netherlands European Space Astronomy Centre (ESAC), Madrid, Spain European Centre for Space Applications and Telecommunications (ECSAT), Oxfordshire, United Kingdom European Astronaut Centre (EAC), Cologne, Germany ESA Centre for Earth Observation (ESRIN), Frascati, Italy Guiana Space Centre (CSG), Kourou, French Guiana European Space Tracking Network (ESTRACK) European Data Relay System Link between ESA and EU The ESA is an independent space agency and not under the jurisdiction of the European Union, although they have common goals, share funding, and work together often. The initial aim of the European Union (EU) was to make the European Space Agency an agency of the EU by 2014. While the EU and its member states fund together 86% of the budget of the ESA, it is not an EU agency. Furthermore, the ESA has several non-EU members, most notably the United Kingdom which left the EU while remaining a full member of the ESA. The ESA is partnered with the EU on its two current flagship space programmes, the Copernicus series of Earth observation satellites and the Galileo satellite navigation system, with the ESA providing technical oversight and, in the case of Copernicus, some of the funding. The EU, though, has shown an interest in expanding into new areas, whence the proposal to rename and expand its satellite navigation agency (the European GNSS Agency) into the EU Agency for the Space Programme. The proposal drew strong criticism from the ESA, as it was perceived as encroaching on the ESA's turf. In January 2021, after years of acrimonious relations, EU and ESA officials mended their relationship, with the EU Internal Market commissioner Thierry Breton saying "The European space policy will continue to rely on the ESA and its unique technical, engineering and science expertise," and that the "ESA will continue to be the European agency for space matters. If we are to be successful in our European strategy for space, and we will be, I will need the ESA by my side." ESA director Aschbacher reciprocated, saying "I would really like to make the ESA the main agency, the go-to agency of the European Commission for all its flagship programmes." The ESA and EUSPA are now seen to have distinct roles and competencies, which will be officialised in the Financial Framework Partnership Agreement (FFPA). Whereas the ESA's focus will be on the technical elements of the EU space programmes, the EUSPA will handle the operational elements of those programmes. Security incidents On 3 August 1984, the ESA's Paris headquarters were severely damaged and six people were hurt when a bomb exploded. It was planted by the far-left armed Action Directe group. On 14 December 2015, hackers from Anonymous breached the ESA's subdomains and leaked thousands of login credentials. See also European Space Security and Education Centre Eurospace List of European Space Agency programmes and missions List of government space agencies SEDS Space Night European Union matters Agencies of the European Union Directorate-General for Defence Industry and Space Enhanced co-operation European Union Agency for the Space Programme Notes References Further reading ESA Bulletin (ESA Bulletin ) is a quarterly magazine about the work of ESA that can be subscribed to European Space Agency free of charge. Bonnet, Roger; Manno, Vittorio (1994). 
International Cooperation in Space: The Example of the European Space Agency (Frontiers of Space). Harvard University Press. Johnson, Nicholas (1993). Space technologies and space science activities of member states of the European Space Agency. Peeters, Walter (2000). Space Marketing: A European Perspective (Space Technology Library). Zabusky, Stacia (1995 and 2001). Launching Europe: An Ethnography of European Cooperation in Space Science. Harvey, Brian (2003). Europe's Space Programme: To Ariane and Beyond. External links A European strategy for space – Europa Convention for the establishment of a European Space Agency, September 2005 Convention for the Establishment of a European Space Agency, Annex I: Privileges and Immunities European Space Agency fonds and 'Oral History of Europe in Space' project run by the European Space Agency at the Historical Archives of the EU in Florence Open access at the European Space Agency Organizations established in 1975 International scientific organizations based in Europe Organizations based in Paris 1975 establishments in Europe European astronauts Space agencies Space policy of the European Union
European Space Agency
[ "Engineering" ]
7,762
[ "Space programs", "European space programmes" ]
10,372
https://en.wikipedia.org/wiki/Entire%20function
In complex analysis, an entire function, also called an integral function, is a complex-valued function that is holomorphic on the whole complex plane. Typical examples of entire functions are polynomials and the exponential function, and any finite sums, products and compositions of these, such as the trigonometric functions sine and cosine and their hyperbolic counterparts sinh and cosh, as well as derivatives and integrals of entire functions such as the error function. If an entire function has a root at , then , taking the limit value at , is an entire function. On the other hand, the natural logarithm, the reciprocal function, and the square root are all not entire functions, nor can they be continued analytically to an entire function. A transcendental entire function is an entire function that is not a polynomial. Just as meromorphic functions can be viewed as a generalization of rational fractions, entire functions can be viewed as a generalization of polynomials. In particular, if for meromorphic functions one can generalize the factorization into simple fractions (the Mittag-Leffler theorem on the decomposition of a meromorphic function), then for entire functions there is a generalization of the factorization — the Weierstrass theorem on entire functions. Properties Every entire function can be represented as a single power series that converges everywhere in the complex plane, hence uniformly on compact sets. The radius of convergence is infinite, which implies that or, equivalently, Any power series satisfying this criterion will represent an entire function. If (and only if) the coefficients of the power series are all real then the function evidently takes real values for real arguments, and the value of the function at the complex conjugate of will be the complex conjugate of the value at Such functions are sometimes called self-conjugate (the conjugate function, being given by If the real part of an entire function is known in a neighborhood of a point then both the real and imaginary parts are known for the whole complex plane, up to an imaginary constant. For instance, if the real part is known in a neighborhood of zero, then we can find the coefficients for from the following derivatives with respect to a real variable : (Likewise, if the imaginary part is known in a neighborhood then the function is determined up to a real constant.) In fact, if the real part is known just on an arc of a circle, then the function is determined up to an imaginary constant.} Note however that an entire function is not determined by its real part on all curves. In particular, if the real part is given on any curve in the complex plane where the real part of some other entire function is zero, then any multiple of that function can be added to the function we are trying to determine. For example, if the curve where the real part is known is the real line, then we can add times any self-conjugate function. If the curve forms a loop, then it is determined by the real part of the function on the loop since the only functions whose real part is zero on the curve are those that are everywhere equal to some imaginary number. The Weierstrass factorization theorem asserts that any entire function can be represented by a product involving its zeroes (or "roots"). The entire functions on the complex plane form an integral domain (in fact a Prüfer domain). They also form a commutative unital associative algebra over the complex numbers. 
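The two display formulas for the radius-of-convergence criterion were lost in extraction. A standard statement of the criterion, given here as a hedged reconstruction rather than the article's exact wording, is:

```latex
f(z)=\sum_{n=0}^{\infty} a_n z^n \ \text{is entire}
\iff \limsup_{n\to\infty} |a_n|^{1/n} = 0
\iff \lim_{n\to\infty} \frac{\ln |a_n|}{n} = -\infty .
```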
Liouville's theorem states that any bounded entire function must be constant. As a consequence of Liouville's theorem, any function that is entire on the whole Riemann sphere is constant. Thus any non-constant entire function must have a singularity at the complex point at infinity, either a pole for a polynomial or an essential singularity for a transcendental entire function. Specifically, by the Casorati–Weierstrass theorem, for any transcendental entire function and any complex there is a sequence such that Picard's little theorem is a much stronger result: Any non-constant entire function takes on every complex number as value, possibly with a single exception. When an exception exists, it is called a lacunary value of the function. The possibility of a lacunary value is illustrated by the exponential function, which never takes on the value . One can take a suitable branch of the logarithm of an entire function that never hits , so that this will also be an entire function (according to the Weierstrass factorization theorem). The logarithm hits every complex number except possibly one number, which implies that the first function will hit any value other than an infinite number of times. Similarly, a non-constant, entire function that does not hit a particular value will hit every other value an infinite number of times. Liouville's theorem is a special case of the following statement: Growth Entire functions may grow as fast as any increasing function: for any increasing function there exists an entire function such that for all real . Such a function may be easily found of the form: for a constant and a strictly increasing sequence of positive integers . Any such sequence defines an entire function , and if the powers are chosen appropriately we may satisfy the inequality for all real . (For instance, it certainly holds if one chooses and, for any integer one chooses an even exponent such that ). Order and type The order (at infinity) of an entire function is defined using the limit superior as: where is the disk of radius and denotes the supremum norm of on . The order is a non-negative real number or infinity (except when for all ). In other words, the order of is the infimum of all such that: The example of shows that this does not mean if is of order . If one can also define the type: If the order is 1 and the type is , the function is said to be "of exponential type ". If it is of order less than 1 it is said to be of exponential type 0. If then the order and type can be found by the formulas Let denote the -th derivative of . Then we may restate these formulas in terms of the derivatives at any arbitrary point : The type may be infinite, as in the case of the reciprocal gamma function, or zero (see example below under ). Another way to find out the order and type is Matsaev's theorem. Examples Here are some examples of functions of various orders: Order ρ For arbitrary positive numbers and one can construct an example of an entire function of order and type using: Order 0 Non-zero polynomials Order 1/4 where Order 1/3 where Order 1/2 with (for which the type is given by ) Order 1 with () the Bessel functions and spherical Bessel functions for integer values of the reciprocal gamma function ( is infinite) Order 3/2 Airy function Order 2 with () The Barnes G-function ( is infinite). 
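The defining formulas for order and type did not survive extraction. Assuming the usual conventions, with the supremum norm of f on the disk of radius r written as the maximum of |f(z)| over |z| ≤ r, the standard definitions read:

```latex
\rho \;=\; \limsup_{r\to\infty} \frac{\ln \ln \| f \|_{\infty,B_r}}{\ln r},
\qquad
\sigma \;=\; \limsup_{r\to\infty} \frac{\ln \| f \|_{\infty,B_r}}{r^{\rho}} ,
\qquad
\rho \;=\; \limsup_{n\to\infty} \frac{n \ln n}{-\ln |a_n|}\quad (0<\rho<\infty),
```

where the last expression recovers the order directly from the Taylor coefficients of f.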
Order infinity Genus Entire functions of finite order have Hadamard's canonical representation (Hadamard factorization theorem): where are those roots of that are not zero (), is the order of the zero of at (the case being taken to mean ), a polynomial (whose degree we shall call ), and is the smallest non-negative integer such that the series converges. The non-negative integer is called the genus of the entire function . If the order is not an integer, then is the integer part of . If the order is a positive integer, then there are two possibilities: or . For example, , and are entire functions of genus . Other examples According to J. E. Littlewood, the Weierstrass sigma function is a 'typical' entire function. This statement can be made precise in the theory of random entire functions: the asymptotic behavior of almost all entire functions is similar to that of the sigma function. Other examples include the Fresnel integrals, the Jacobi theta function, and the reciprocal Gamma function. The exponential function and the error function are special cases of the Mittag-Leffler function. According to the fundamental theorem of Paley and Wiener, Fourier transforms of functions (or distributions) with bounded support are entire functions of order and finite type. Other examples are solutions of linear differential equations with polynomial coefficients. If the coefficient at the highest derivative is constant, then all solutions of such equations are entire functions. For example, the exponential function, sine, cosine, Airy functions and Parabolic cylinder functions arise in this way. The class of entire functions is closed with respect to compositions. This makes it possible to study dynamics of entire functions. An entire function of the square root of a complex number is entire if the original function is even, for example . If a sequence of polynomials all of whose roots are real converges in a neighborhood of the origin to a limit which is not identically equal to zero, then this limit is an entire function. Such entire functions form the Laguerre–Pólya class, which can also be characterized in terms of the Hadamard product, namely, belongs to this class if and only if in the Hadamard representation all are real, , and , where and are real, and . For example, the sequence of polynomials converges, as increases, to . The polynomials have all real roots, and converge to . The polynomials also converge to , showing the buildup of the Hadamard product for cosine. See also Jensen's formula Carlson's theorem Exponential type Paley–Wiener theorem Wiman-Valiron theory Notes References Sources Analytic functions Special functions
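The Hadamard product formula itself was lost in extraction. Under the usual notation, with m the order of the zero of f at the origin, P a polynomial (whose degree gives q) and E_p the elementary factors, a reconstruction of the standard statement is:

```latex
f(z) \;=\; z^{m}\, e^{P(z)} \prod_{n=1}^{\infty} E_p\!\left(\frac{z}{a_n}\right),
\qquad
E_p(w) \;=\; (1-w)\,\exp\!\left(w+\frac{w^{2}}{2}+\cdots+\frac{w^{p}}{p}\right).
```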
Entire function
[ "Mathematics" ]
1,984
[ "Special functions", "Combinatorics" ]
10,375
https://en.wikipedia.org/wiki/Error%20detection%20and%20correction
In information theory and coding theory with applications in computer science and telecommunications, error detection and correction (EDAC) or error control are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data in many cases. Definitions Error detection is the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver. Error correction is the detection of errors and reconstruction of the original, error-free data. History In classical antiquity, copyists of the Hebrew Bible were paid for their work according to the number of stichs (lines of verse). As the prose books of the Bible were hardly ever written in stichs, the copyists, in order to estimate the amount of work, had to count the letters. This also helped ensure accuracy in the transmission of the text with the production of subsequent copies. Between the 7th and 10th centuries CE a group of Jewish scribes formalized and expanded this to create the Numerical Masorah to ensure accurate reproduction of the sacred text. It included counts of the number of words in a line, section, book and groups of books, noting the middle stich of a book, word use statistics, and commentary. Standards became such that a deviation in even a single letter in a Torah scroll was considered unacceptable. The effectiveness of their error correction method was verified by the accuracy of copying through the centuries demonstrated by discovery of the Dead Sea Scrolls in 1947–1956, dating from . The modern development of error correction codes is credited to Richard Hamming in 1947. A description of Hamming's code appeared in Claude Shannon's A Mathematical Theory of Communication and was quickly generalized by Marcel J. E. Golay. Principles All error-detection and correction schemes add some redundancy (i.e., some extra data) to a message, which receivers can use to check consistency of the delivered message and to recover data that has been determined to be corrupted. Error detection and correction schemes can be either systematic or non-systematic. In a systematic scheme, the transmitter sends the original (error-free) data and attaches a fixed number of check bits (or parity data), which are derived from the data bits by some encoding algorithm. If error detection is required, a receiver can simply apply the same algorithm to the received data bits and compare its output with the received check bits; if the values do not match, an error has occurred at some point during the transmission. If error correction is required, a receiver can apply the decoding algorithm to the received data bits and the received check bits to recover the original error-free data. In a system that uses a non-systematic code, the original message is transformed into an encoded message carrying the same information and that has at least as many bits as the original message. Good error control performance requires the scheme to be selected based on the characteristics of the communication channel. Common channel models include memoryless models where errors occur randomly and with a certain probability, and dynamic models where errors occur primarily in bursts. 
Consequently, error-detecting and -correcting codes can be generally distinguished between random-error-detecting/correcting and burst-error-detecting/correcting. Some codes can also be suitable for a mixture of random errors and burst errors. If the channel characteristics cannot be determined, or are highly variable, an error-detection scheme may be combined with a system for retransmissions of erroneous data. This is known as automatic repeat request (ARQ), and is most notably used in the Internet. An alternate approach for error control is hybrid automatic repeat request (HARQ), which is a combination of ARQ and error-correction coding. Types of error correction There are three major types of error correction: Automatic repeat request Automatic repeat request (ARQ) is an error control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to indicate that it has correctly received a data frame. Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e., within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions. Three types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ. ARQ is appropriate if the communication channel has varying or unknown capacity, such as is the case on the Internet. However, ARQ requires the availability of a back channel, results in possibly increased latency due to retransmissions, and requires the maintenance of buffers and timers for retransmissions, which in the case of network congestion can put a strain on the server and overall network capacity. For example, ARQ is used on shortwave radio data links in the form of ARQ-E, or combined with multiplexing as ARQ-M. Forward error correction Forward error correction (FEC) is a process of adding redundant data such as an error-correcting code (ECC) to a message so that it can be recovered by a receiver even when a number of errors (up to the capability of the code being used) are introduced, either during the process of transmission or on storage. Since the receiver does not have to ask the sender for retransmission of the data, a backchannel is not required in forward error correction. Error-correcting codes are used in lower-layer communication such as cellular network, high-speed fiber-optic communication and Wi-Fi, as well as for reliable storage in media such as flash memory, hard disk and RAM. Error-correcting codes are usually distinguished between convolutional codes and block codes: Convolutional codes are processed on a bit-by-bit basis. They are particularly suitable for implementation in hardware, and the Viterbi decoder allows optimal decoding. Block codes are processed on a block-by-block basis. Early examples of block codes are repetition codes, Hamming codes and multidimensional parity-check codes. They were followed by a number of efficient codes, Reed–Solomon codes being the most notable due to their current widespread use. Turbo codes and low-density parity-check codes (LDPC) are relatively new constructions that can provide almost optimal efficiency. 
Shannon's theorem is an important theorem in forward error correction, and describes the maximum information rate at which reliable communication is possible over a channel that has a certain error probability or signal-to-noise ratio (SNR). This strict upper limit is expressed in terms of the channel capacity. More specifically, the theorem says that there exist codes such that with increasing encoding length the probability of error on a discrete memoryless channel can be made arbitrarily small, provided that the code rate is smaller than the channel capacity. The code rate is defined as the fraction k/n of k source symbols and n encoded symbols. The actual maximum code rate allowed depends on the error-correcting code used, and may be lower. This is because Shannon's proof was only of existential nature, and did not show how to construct codes that are both optimal and have efficient encoding and decoding algorithms. Hybrid schemes Hybrid ARQ is a combination of ARQ and forward error correction. There are two basic approaches: Messages are always transmitted with FEC parity data (and error-detection redundancy). A receiver decodes a message using the parity information and requests retransmission using ARQ only if the parity data was not sufficient for successful decoding (identified through a failed integrity check). Messages are transmitted without parity data (only with error-detection information). If a receiver detects an error, it requests FEC information from the transmitter using ARQ and uses it to reconstruct the original message. The latter approach is particularly attractive on an erasure channel when using a rateless erasure code. Types of error detection Error detection is most commonly realized using a suitable hash function (or specifically, a checksum, cyclic redundancy check or other algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided. There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors). Minimum distance coding A random-error-correcting code based on minimum distance coding can provide a strict guarantee on the number of detectable errors, but it may not protect against a preimage attack. Repetition codes A repetition code is a coding scheme that repeats the bits across a channel to achieve error-free communication. Given a stream of data to be transmitted, the data are divided into blocks of bits. Each block is transmitted some predetermined number of times. For example, to send the bit pattern , the four-bit block can be repeated three times, thus producing . If this twelve-bit pattern was received as – where the first block is unlike the other two – an error has occurred. A repetition code is very inefficient and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g., in the previous example would be detected as correct). The advantage of repetition codes is that they are extremely simple, and are in fact used in some transmissions of numbers stations. Parity bit A parity bit is a bit that is added to a group of source bits to ensure that the number of set bits (i.e., bits with value 1) in the outcome is even or odd. 
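The concrete bit patterns in the repetition-code example above were lost in extraction. As an illustrative sketch (the bit strings and function names below are my own, not the article's), a three-fold repetition scheme with majority-vote decoding can be written as:

```python
def repetition_encode(block: str, copies: int = 3) -> str:
    """Send each block several times, e.g. '1011' -> '101110111011'."""
    return block * copies

def repetition_decode(received: str, block_len: int, copies: int = 3) -> str:
    """Majority vote per bit position; with three copies, any single corrupted copy
    of a bit is detected and voted out."""
    blocks = [received[i * block_len:(i + 1) * block_len] for i in range(copies)]
    decoded = []
    for position in range(block_len):
        ones = sum(b[position] == "1" for b in blocks)
        decoded.append("1" if ones * 2 > copies else "0")
    return "".join(decoded)

# '1011' sent as three copies; one bit flipped in transit is corrected by the vote:
assert repetition_decode("101110011011", 4) == "1011"
```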
It is a very simple scheme that can be used to detect single or any other odd number (i.e., three, five, etc.) of errors in the output. An even number of flipped bits will make the parity bit appear correct even though the data is erroneous. Parity bits added to each word sent are called transverse redundancy checks, while those added at the end of a stream of words are called longitudinal redundancy checks. For example, if each of a series of m-bit words has a parity bit added, showing whether there were an odd or even number of ones in that word, any word with a single error in it will be detected. It will not be known where in the word the error is, however. If, in addition, after each stream of n words a parity sum is sent, each bit of which shows whether there were an odd or even number of ones at that bit-position sent in the most recent group, the exact position of the error can be determined and the error corrected. This method is only guaranteed to be effective, however, if there are no more than 1 error in every group of n words. With more error correction bits, more errors can be detected and in some cases corrected. There are also other bit-grouping techniques. Checksum A checksum of a message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values). The sum may be negated by means of a ones'-complement operation prior to transmission to detect unintentional all-zero messages. Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Some checksum schemes, such as the Damm algorithm, the Luhn algorithm, and the Verhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing down or remembering identification numbers. Cyclic redundancy check A cyclic redundancy check (CRC) is a non-secure hash function designed to detect accidental changes to digital data in computer networks. It is not suitable for detecting maliciously introduced errors. It is characterized by specification of a generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend. The remainder becomes the result. A CRC has properties that make it well suited for detecting burst errors. CRCs are particularly easy to implement in hardware and are therefore commonly used in computer networks and storage devices such as hard disk drives. The parity bit can be seen as a special-case 1-bit CRC. Cryptographic hash function The output of a cryptographic hash function, also known as a message digest, can provide strong assurances about data integrity, whether changes of the data are accidental (e.g., due to transmission errors) or maliciously introduced. Any modification to the data will likely be detected through a mismatching hash value. Furthermore, given some hash value, it is typically infeasible to find some input data (other than the one given) that will yield the same hash value. If an attacker can change not only the message but also the hash value, then a keyed hash or message authentication code (MAC) can be used for additional security. Without knowing the key, it is not possible for the attacker to easily or conveniently calculate the correct keyed hash value for a modified message. Digital signature Digital signatures can provide strong assurances about data integrity, whether the changes of the data are accidental or maliciously introduced. 
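To make the polynomial-division description concrete, the following sketch shows the common bit-by-bit form of a 32-bit CRC using the reflected polynomial 0xEDB88320 (the parameters usually quoted for the CRC-32 used by Ethernet and zlib). It is offered as an illustration, not as a reference implementation:

```python
def crc32(data: bytes) -> int:
    """Bitwise CRC-32 with the reflected generator polynomial 0xEDB88320."""
    crc = 0xFFFFFFFF                      # initial register value
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320   # conditional XOR = one polynomial-division step
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF               # final inversion

# The sender appends this value to the message; the receiver recomputes it and compares.
```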
Digital signatures are perhaps most notable for being part of the HTTPS protocol for securely browsing the web. Error correction code Any error-correcting code can be used for error detection. A code with minimum Hamming distance, d, can detect up to d − 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired. Codes with minimum Hamming distance d = 2 are degenerate cases of error-correcting codes and can be used to detect single errors. The parity bit is an example of a single-error-detecting code. Applications Applications that require low latency (such as telephone conversations) cannot use automatic repeat request (ARQ); they must use forward error correction (FEC). By the time an ARQ system discovers an error and re-transmits it, the re-sent data will arrive too late to be usable. Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC because when an error occurs, the original data is no longer available. Applications that use ARQ must have a return channel; applications having no return channel cannot use ARQ. Applications that require extremely low error rates (such as digital money transfers) must use ARQ due to the possibility of uncorrectable errors with FEC. Reliability and inspection engineering also make use of the theory of error-correcting codes. Internet In a typical TCP/IP stack, error control is performed at multiple levels: Each Ethernet frame uses CRC-32 error detection. Frames with detected errors are discarded by the receiver hardware. The IPv4 header contains a checksum protecting the contents of the header. Packets with incorrect checksums are dropped within the network or at the receiver. The checksum was omitted from the IPv6 header in order to minimize processing costs in network routing and because current link layer technology is assumed to provide sufficient error detection (see also RFC 3819). UDP has an optional checksum covering the payload and addressing information in the UDP and IP headers. Packets with incorrect checksums are discarded by the network stack. The checksum is optional under IPv4, and required under IPv6. When omitted, it is assumed the data-link layer provides the desired level of error protection. TCP provides a checksum for protecting the payload and addressing information in the TCP and IP headers. Packets with incorrect checksums are discarded by the network stack and eventually get retransmitted using ARQ, either explicitly (such as through three-way handshake) or implicitly due to a timeout. Deep-space telecommunications The development of error-correction codes was tightly coupled with the history of deep-space missions due to the extreme dilution of signal power over interplanetary distances, and the limited power availability aboard space probes. Whereas early missions sent their data uncoded, starting in 1968, digital error correction was implemented in the form of (sub-optimally decoded) convolutional codes and Reed–Muller codes. The Reed–Muller code was well suited to the noise the spacecraft was subject to (approximately matching a bell curve), and was implemented for the Mariner spacecraft and used on missions between 1969 and 1977. The Voyager 1 and Voyager 2 missions, which started in 1977, were designed to deliver color imaging and scientific information from Jupiter and Saturn. 
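The IPv4, TCP and UDP checksums mentioned above are 16-bit ones'-complement sums. A sketch of that computation, following the carry-folding approach described in RFC 1071, might look like:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum of the kind used by IPv4, TCP and UDP."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add each 16-bit big-endian word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back into the low 16 bits
    return ~total & 0xFFFF                         # ones' complement of the folded sum
```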
This resulted in increased coding requirements, and thus, the spacecraft were supported by (optimally Viterbi-decoded) convolutional codes that could be concatenated with an outer Golay (24,12,8) code. The Voyager 2 craft additionally supported an implementation of a Reed–Solomon code. The concatenated Reed–Solomon–Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey to Uranus and Neptune. After ECC system upgrades in 1989, both crafts used V2 RSV coding. The Consultative Committee for Space Data Systems currently recommends usage of error correction codes with performance similar to the Voyager 2 RSV code as a minimum. Concatenated codes are increasingly falling out of favor with space missions, and are replaced by more powerful codes such as Turbo codes or LDPC codes. The different kinds of deep space and orbital missions that are conducted suggest that trying to find a one-size-fits-all error correction system will be an ongoing problem. For missions close to Earth, the nature of the noise in the communication channel is different from that which a spacecraft on an interplanetary mission experiences. Additionally, as a spacecraft increases its distance from Earth, the problem of correcting for noise becomes more difficult. Satellite broadcasting The demand for satellite transponder bandwidth continues to grow, fueled by the desire to deliver television (including new channels and high-definition television) and IP data. Transponder availability and bandwidth constraints have limited this growth. Transponder capacity is determined by the selected modulation scheme and the proportion of capacity consumed by FEC. Data storage Error detection and correction codes are often used to improve the reliability of data storage media. A parity track capable of detecting single-bit errors was present on the first magnetic tape data storage in 1951. The optimal rectangular code used in group coded recording tapes not only detects but also corrects single-bit errors. Some file formats, particularly archive formats, include a checksum (most often CRC32) to detect corruption and truncation and can employ redundancy or parity files to recover portions of corrupted data. Reed-Solomon codes are used in compact discs to correct errors caused by scratches. Modern hard drives use Reed–Solomon codes to detect and correct minor errors in sector reads, and to recover corrupted data from failing sectors and store that data in the spare sectors. RAID systems use a variety of error correction techniques to recover data when a hard drive completely fails. Filesystems such as ZFS or Btrfs, as well as some RAID implementations, support data scrubbing and resilvering, which allows bad blocks to be detected and (hopefully) recovered before they are used. The recovered data may be re-written to exactly the same physical location, to spare blocks elsewhere on the same piece of hardware, or the data may be rewritten onto replacement hardware. Error-correcting memory Dynamic random-access memory (DRAM) may provide stronger protection against soft errors by relying on error-correcting codes. Such error-correcting memory, known as ECC or EDAC-protected memory, is particularly desirable for mission-critical applications, such as scientific computing, financial, medical, etc. as well as extraterrestrial applications due to the increased radiation in space. Error-correcting memory controllers traditionally use Hamming codes, although some use triple modular redundancy. 
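The Hamming codes mentioned above for error-correcting memory can be illustrated with the classic Hamming(7,4) layout, with parity bits at positions 1, 2 and 4. The sketch below is my own illustration; it corrects any single flipped bit in a seven-bit codeword:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """Correct a single flipped bit (if any) and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]     # re-check positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]     # re-check positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]     # re-check positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3    # 0 means no detected error; otherwise the error position
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1           # flip the indicated bit back
    return [c[2], c[4], c[5], c[6]]    # data bits sit at positions 3, 5, 6, 7

# Round trip with a single-bit error at position 5:
code = hamming74_encode([1, 0, 1, 1])  # -> [0, 1, 1, 0, 0, 1, 1]
code[4] ^= 1                           # corrupt one bit in transit
assert hamming74_decode(code) == [1, 0, 1, 1]
```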
Interleaving allows distributing the effect of a single cosmic ray potentially upsetting multiple physically neighboring bits across multiple words by associating neighboring bits to different words. As long as a single-event upset (SEU) does not exceed the error threshold (e.g., a single error) in any particular word between accesses, it can be corrected (e.g., by a single-bit error-correcting code), and the illusion of an error-free memory system may be maintained. In addition to hardware providing features required for ECC memory to operate, operating systems usually contain related reporting facilities that are used to provide notifications when soft errors are transparently recovered. One example is the Linux kernel's EDAC subsystem (previously known as Bluesmoke), which collects the data from error-checking-enabled components inside a computer system; besides collecting and reporting back the events related to ECC memory, it also supports other checksumming errors, including those detected on the PCI bus. A few systems also support memory scrubbing to catch and correct errors early before they become unrecoverable. See also Berger code Burst error-correcting code ECC memory, a type of computer data storage Link adaptation List of hash functions References Further reading SoftECC: A System for Software Memory Integrity Checking A Tunable, Software-based DRAM Error Detection and Correction Library for HPC Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing External links The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay, contains chapters on elementary error-correcting codes; on the theoretical limits of error-correction; and on the latest state-of-the-art error-correcting codes, including low-density parity-check codes, turbo codes, and fountain codes. ECC Page – implementations of popular ECC encoding and decoding routines Detection and correction
Error detection and correction
[ "Technology", "Engineering" ]
4,657
[ "Computer errors", "Error detection and correction", "Reliability engineering" ]
10,376
https://en.wikipedia.org/wiki/Euclidean%20domain
In mathematics, more specifically in ring theory, a Euclidean domain (also called a Euclidean ring) is an integral domain that can be endowed with a Euclidean function which allows a suitable generalization of Euclidean division of integers. This generalized Euclidean algorithm can be put to many of the same uses as Euclid's original algorithm in the ring of integers: in any Euclidean domain, one can apply the Euclidean algorithm to compute the greatest common divisor of any two elements. In particular, the greatest common divisor of any two elements exists and can be written as a linear combination of them (Bézout's identity). In particular, the existence of efficient algorithms for Euclidean division of integers and of polynomials in one variable over a field is of basic importance in computer algebra. It is important to compare the class of Euclidean domains with the larger class of principal ideal domains (PIDs). An arbitrary PID has much the same "structural properties" of a Euclidean domain (or, indeed, even of the ring of integers), but lacks an analogue of the Euclidean algorithm and extended Euclidean algorithm to compute greatest common divisors. So, given an integral domain , it is often very useful to know that has a Euclidean function: in particular, this implies that is a PID. However, if there is no "obvious" Euclidean function, then determining whether is a PID is generally a much easier problem than determining whether it is a Euclidean domain. Every ideal in a Euclidean domain is principal, which implies a suitable generalization of the fundamental theorem of arithmetic: every Euclidean domain is also a unique factorization domain. Euclidean domains appear in the following chain of class inclusions: Definition Let be an integral domain. A Euclidean function on is a function from to the non-negative integers satisfying the following fundamental division-with-remainder property: (EF1) If and are in and is nonzero, then there exist and in such that and either or . A Euclidean domain is an integral domain which can be endowed with at least one Euclidean function. A particular Euclidean function is not part of the definition of a Euclidean domain, as, in general, a Euclidean domain may admit many different Euclidean functions. In this context, and are called respectively a quotient and a remainder of the division (or Euclidean division) of by . In contrast with the case of integers and polynomials, the quotient is generally not uniquely defined, but when a quotient has been chosen, the remainder is uniquely defined. Most algebra texts require a Euclidean function to have the following additional property: (EF2) For all nonzero and in , . However, one can show that (EF1) alone suffices to define a Euclidean domain; if an integral domain is endowed with a function satisfying (EF1), then can also be endowed with a function satisfying both (EF1) and (EF2) simultaneously. Indeed, for in , one can define as follows: In words, one may define to be the minimum value attained by on the set of all non-zero elements of the principal ideal generated by . A Euclidean function is multiplicative if and is never zero. It follows that . More generally, if and only if is a unit. Notes on the definition Many authors use other terms in place of "Euclidean function", such as "degree function", "valuation function", "gauge function" or "norm function". 
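The symbols in (EF1) and (EF2) were lost in extraction. Under the usual conventions, with f a function from R ∖ {0} to the non-negative integers, the two properties read (a reconstruction of the standard statement, not the article's exact notation):

```latex
\text{(EF1)}\quad \forall a,b\in R,\ b\neq 0:\ \exists\, q,r\in R \ \text{with}\ a = bq + r
\ \text{and}\ \bigl(r = 0 \ \text{or}\ f(r) < f(b)\bigr);
\qquad
\text{(EF2)}\quad \forall a,b\in R\setminus\{0\}:\ f(a) \le f(ab).
```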
Some authors also require the domain of the Euclidean function to be the entire ring ; however, this does not essentially affect the definition, since (EF1) does not involve the value of . The definition is sometimes generalized by allowing the Euclidean function to take its values in any well-ordered set; this weakening does not affect the most important implications of the Euclidean property. The property (EF1) can be restated as follows: for any principal ideal of with nonzero generator , all nonzero classes of the quotient ring have a representative with . Since the possible values of are well-ordered, this property can be established by showing that for any with minimal value of in its class. Note that, for a Euclidean function that is so established, there need not exist an effective method to determine and in (EF1). Examples Examples of Euclidean domains include: Any field. Define for all nonzero . , the ring of integers. Define , the absolute value of . , the ring of Gaussian integers. Define , the norm of the Gaussian integer . (where is a primitive (non-real) cube root of unity), the ring of Eisenstein integers. Define , the norm of the Eisenstein integer . , the ring of polynomials over a field . For each nonzero polynomial , define to be the degree of . , the ring of formal power series over the field . For each nonzero power series , define as the order of , that is the degree of the smallest power of occurring in . In particular, for two nonzero power series and , if and only if divides . Any discrete valuation ring. Define to be the highest power of the maximal ideal containing . Equivalently, let be a generator of , and be the unique integer such that is an associate of , then define . The previous example is a special case of this. A Dedekind domain with finitely many nonzero prime ideals . Define , where is the discrete valuation corresponding to the ideal . Examples of domains that are not Euclidean domains include: Every domain that is not a principal ideal domain, such as the ring of polynomials in at least two indeterminates over a field, or the ring of univariate polynomials with integer coefficients, or the number ring . The ring of integers of , consisting of the numbers where and are integers and both even or both odd. It is a principal ideal domain that is not Euclidean.This was proved by Theodore Motzkin and was the first case known. The ring is also a principal ideal domain that is not Euclidean. To see that it is not a Euclidean domain, it suffices to show that for every non-zero prime , the map induced by the quotient map is not surjective. Properties Let R be a domain and f a Euclidean function on R. Then: R is a principal ideal domain (PID). In fact, if I is a nonzero ideal of R then any element a of I \ {0} with minimal value (on that set) of f(a) is a generator of I. As a consequence R is also a unique factorization domain and a Noetherian ring. With respect to general principal ideal domains, the existence of factorizations (i.e., that R is an atomic domain) is particularly easy to prove in Euclidean domains: choosing a Euclidean function f satisfying (EF2), x cannot have any decomposition into more than f(x) nonunit factors, so starting with x and repeatedly decomposing reducible factors is bound to produce a factorization into irreducible elements. Any element of R at which f takes its globally minimal value is invertible in R. 
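To make the Gaussian-integer example concrete, the following sketch (an illustration of the idea, not code from the article) performs division with remainder in Z[i], using the norm N(x + yi) = x² + y² as the Euclidean function. Rounding the exact quotient to the nearest lattice point guarantees N(r) ≤ N(b)/2 < N(b), so the Euclidean algorithm terminates:

```python
def gaussian_divmod(a, b):
    """Division with remainder in Z[i]: returns (q, r) with a = q*b + r and N(r) < N(b).
    Gaussian integers are represented as (x, y) pairs meaning x + y*i; b must be nonzero."""
    ax, ay = a
    bx, by = b
    n = bx * bx + by * by                  # N(b), a positive ordinary integer
    px = ax * bx + ay * by                 # real part of a * conj(b), so a/b = (px + py*i)/n
    py = ay * bx - ax * by                 # imaginary part of a * conj(b)

    def nearest(p):                        # nearest integer to p/n (ties round up)
        return (2 * p + n) // (2 * n)

    qx, qy = nearest(px), nearest(py)
    rx = ax - (qx * bx - qy * by)          # r = a - q*b
    ry = ay - (qx * by + qy * bx)
    return (qx, qy), (rx, ry)

def gaussian_gcd(a, b):
    """Euclidean algorithm in Z[i]; the result is a GCD, unique up to the units 1, -1, i, -i."""
    while b != (0, 0):
        _, r = gaussian_divmod(a, b)
        a, b = b, r
    return a
```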
If an f satisfying (EF2) is chosen, then the converse also holds, and f takes its minimal value exactly at the invertible elements of R. If Euclidean division is algorithmic, that is, if there is an algorithm for computing the quotient and the remainder, then an extended Euclidean algorithm can be defined exactly as in the case of integers. If a Euclidean domain is not a field then it has an element a with the following property: any element x not divisible by a can be written as x = ay + u for some unit u and some element y. This follows by taking a to be a non-unit with f(a) as small as possible. This strange property can be used to show that some principal ideal domains are not Euclidean domains, as not all PIDs have this property. For example, for d = −19, −43, −67, −163, the ring of integers of is a PID which is not Euclidean, but the cases d = −1, −2, −3, −7, −11 are Euclidean. However, in many finite extensions of Q with trivial class group, the ring of integers is Euclidean (not necessarily with respect to the absolute value of the field norm; see below). Assuming the extended Riemann hypothesis, if K is a finite extension of Q and the ring of integers of K is a PID with an infinite number of units, then the ring of integers is Euclidean. In particular this applies to the case of totally real quadratic number fields with trivial class group. In addition (and without assuming ERH), if the field K is a Galois extension of Q, has trivial class group and unit rank strictly greater than three, then the ring of integers is Euclidean. An immediate corollary of this is that if the number field is Galois over Q, its class group is trivial and the extension has degree greater than 8 then the ring of integers is necessarily Euclidean. Norm-Euclidean fields Algebraic number fields K come with a canonical norm function on them: the absolute value of the field norm N that takes an algebraic element α to the product of all the conjugates of α. This norm maps the ring of integers of a number field K, say OK, to the nonnegative rational integers, so it is a candidate to be a Euclidean norm on this ring. If this norm satisfies the axioms of a Euclidean function then the number field K is called norm-Euclidean or simply Euclidean. Strictly speaking it is the ring of integers that is Euclidean since fields are trivially Euclidean domains, but the terminology is standard. If a field is not norm-Euclidean then that does not mean the ring of integers is not Euclidean, just that the field norm does not satisfy the axioms of a Euclidean function. In fact, the rings of integers of number fields may be divided into several classes: Those that are not principal and therefore not Euclidean, such as the integers of Those that are principal and not Euclidean, such as the integers of Those that are Euclidean and not norm-Euclidean, such as the integers of Those that are norm-Euclidean, such as Gaussian integers (integers of ) The norm-Euclidean quadratic fields have been fully classified; they are where takes the values −11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57, 73. Every Euclidean imaginary quadratic field is norm-Euclidean and is one of the first five fields in the preceding list. See also Valuation (algebra) Notes References Ring theory Commutative algebra Domain
Euclidean domain
[ "Mathematics" ]
2,221
[ "Fields of abstract algebra", "Commutative algebra", "Ring theory" ]
10,377
https://en.wikipedia.org/wiki/Euclidean%20algorithm
In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers, the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (). It is an example of an algorithm, a step-by-step procedure for performing a calculation according to well-defined rules, and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations. The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, is the GCD of and (as and , and the same number is also the GCD of and . Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, that number is the GCD of the original two numbers. By reversing the steps or using the extended Euclidean algorithm, the GCD can be expressed as a linear combination of the two original numbers, that is the sum of the two numbers, each multiplied by an integer (for example, ). The fact that the GCD can always be expressed in this way is known as Bézout's identity. The version of the Euclidean algorithm described above—which follows Euclid's original presentation—may require many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844 (Lamé's Theorem), and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century. The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers. The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations. The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains. Background: greatest common divisor The Euclidean algorithm calculates the greatest common divisor (GCD) of two natural numbers and . 
The greatest common divisor is the largest natural number that divides both and without leaving a remainder. Synonyms for GCD include greatest common factor (GCF), highest common factor (HCF), highest common divisor (HCD), and greatest common measure (GCM). The greatest common divisor is often written as or, more simply, as , although the latter notation is ambiguous, also used for concepts such as an ideal in the ring of integers, which is closely related to GCD. If , then and are said to be coprime (or relatively prime). This property does not imply that or are themselves prime numbers. For example, and factor as and , so they are not prime, but their prime factors are different, so and are coprime, with no common factors other than . Let . Since and are both multiples of , they can be written and , and there is no larger number for which this is true. The natural numbers and must be coprime, since any common factor could be factored out of and to make greater. Thus, any other number that divides both and must also divide . The greatest common divisor of and is the unique (positive) common divisor of and that is divisible by any other common divisor . The greatest common divisor can be visualized as follows. Consider a rectangular area by , and any common divisor that divides both and exactly. The sides of the rectangle can be divided into segments of length , which divides the rectangle into a grid of squares of side length . The GCD is the largest value of for which this is possible. For illustration, a rectangular area can be divided into a grid of: squares, squares, squares, squares, squares or squares. Therefore, is the GCD of and . A rectangular area can be divided into a grid of squares, with two squares along one edge () and five squares along the other (). The greatest common divisor of two numbers and is the product of the prime factors shared by the two numbers, where each prime factor can be repeated as many times as it divides both and . For example, since can be factored into , and can be factored into , the GCD of and equals , the product of their shared prime factors (with 3 repeated since divides both). If two numbers have no common prime factors, their GCD is (obtained here as an instance of the empty product); in other words, they are coprime. A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors. Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility. Another definition of the GCD is helpful in advanced mathematics, particularly ring theory. The greatest common divisor of two nonzero numbers and is also their smallest positive integral linear combination, that is, the smallest positive number of the form where and are integers. The set of all integral linear combinations of and is actually the same as the set of all multiples of (, where is an integer). In modern mathematical language, the ideal generated by and is the ideal generated by  alone (an ideal generated by a single element is called a principal ideal, and all ideals of the integers are principal ideals). Some properties of the GCD are in fact easier to see with this description, for instance the fact that any common divisor of and also divides the GCD (it divides both terms of ). The equivalence of this GCD definition with the other definitions is described below. 
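The characterization of the GCD as the smallest positive integral linear combination is exactly what the extended Euclidean algorithm makes effective. A sketch of that computation (the variable names are mine):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y (Bezout's identity)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r    # same remainder sequence as the plain algorithm
        old_x, x = x, old_x - q * x    # carry the coefficients of a along
        old_y, y = y, old_y - q * y    # carry the coefficients of b along
    return old_r, old_x, old_y

# extended_gcd(1071, 462) returns (21, -3, 7), since 1071*(-3) + 462*7 = 21.
```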
The GCD of three or more numbers equals the product of the prime factors common to all the numbers, but it can also be calculated by repeatedly taking the GCDs of pairs of numbers. For example, Thus, Euclid's algorithm, which computes the GCD of two integers, suffices to calculate the GCD of arbitrarily many integers. Procedure The Euclidean algorithm can be thought of as constructing a sequence of non-negative integers that begins with the two given integers and and will eventually terminate with the integer zero: with . The integer will then be the GCD and we can state . The algorithm indicates how to construct the intermediate remainders via division-with-remainder on the preceding pair by finding an integer quotient so that: Because the sequence of non-negative integers is strictly decreasing, it eventually must terminate. In other words, since for every , and each is an integer that is strictly smaller than the preceding , there eventually cannot be a non-negative integer smaller than zero, and hence the algorithm must terminate. In fact, the algorithm will always terminate at the th step with equal to zero. To illustrate, suppose the GCD of 1071 and 462 is requested. The sequence is initially and in order to find , we need to find integers and such that: . This is the quotient since . This determines and so the sequence is now . The next step is to continue the sequence to find by finding integers and such that: . This is the quotient since . This determines and so the sequence is now . The next step is to continue the sequence to find by finding integers and such that: . This is the quotient since . This determines and so the sequence is completed as as no further non-negative integer smaller than can be found. The penultimate remainder is therefore the requested GCD: We can generalize slightly by dropping any ordering requirement on the initial two values and . If , the algorithm may continue and trivially find that as the sequence of remainders will be . If , then we can also continue since , suggesting the next remainder should be itself, and the sequence is . Normally, this would be invalid because it breaks the requirement but now we have by construction, so the requirement is automatically satisfied and the Euclidean algorithm can continue as normal. Therefore, dropping any ordering between the first two integers does not affect the conclusion that the sequence must eventually terminate because the next remainder will always satisfy and everything continues as above. The only modifications that need to be made are that only for , and that the sub-sequence of non-negative integers for is strictly decreasing, therefore excluding from both statements. Proof of validity The validity of the Euclidean algorithm can be proven by a two-step argument. In the first step, the final nonzero remainder is shown to divide both and . Since it is a common divisor, it must be less than or equal to the greatest common divisor . In the second step, it is shown that any common divisor of and , including , must divide ; therefore, must be less than or equal to . These two opposite inequalities imply . To demonstrate that divides both and (the first step), divides its predecessor since the final remainder is zero. also divides its next predecessor because it divides both terms on the right-hand side of the equation. Iterating the same argument, divides all the preceding remainders, including and . None of the preceding remainders , , etc. divide and , since they leave a remainder. 
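A direct transcription of this procedure in Python, printing each division-with-remainder step (a sketch, not the article's own code); on the 1071 and 462 example it reproduces the remainders 147, 21 and 0 and returns 21:

```python
def gcd_with_trace(a, b):
    """Euclidean algorithm, printing each division-with-remainder step."""
    while b != 0:
        q, r = divmod(a, b)
        print(f"{a} = {q} * {b} + {r}")
        a, b = b, r
    return a

# gcd_with_trace(1071, 462) prints:
#   1071 = 2 * 462 + 147
#   462 = 3 * 147 + 21
#   147 = 7 * 21 + 0
# and returns 21.
```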
Since is a common divisor of and , . In the second step, any natural number that divides both and (in other words, any common divisor of and ) divides the remainders . By definition, and can be written as multiples of : and , where and are natural numbers. Therefore, divides the initial remainder , since . An analogous argument shows that also divides the subsequent remainders , , etc. Therefore, the greatest common divisor must divide , which implies that . Since the first part of the argument showed the reverse (), it follows that . Thus, is the greatest common divisor of all the succeeding pairs: . Worked example For illustration, the Euclidean algorithm can be used to find the greatest common divisor of and . To begin, multiples of are subtracted from until the remainder is less than . Two such multiples can be subtracted (), leaving a remainder of : . Then multiples of are subtracted from until the remainder is less than . Three multiples can be subtracted (), leaving a remainder of : . Then multiples of are subtracted from until the remainder is less than . Seven multiples can be subtracted (), leaving no remainder: . Since the last remainder is zero, the algorithm ends with as the greatest common divisor of and . This agrees with the found by prime factorization above. In tabular form, the steps are: Visualization The Euclidean algorithm can be visualized in terms of the tiling analogy given above for the greatest common divisor. Assume that we wish to cover an rectangle with square tiles exactly, where is the larger of the two numbers. We first attempt to tile the rectangle using square tiles; however, this leaves an residual rectangle untiled, where . We then attempt to tile the residual rectangle with square tiles. This leaves a second residual rectangle , which we attempt to tile using square tiles, and so on. The sequence ends when there is no residual rectangle, i.e., when the square tiles cover the previous residual rectangle exactly. The length of the sides of the smallest square tile is the GCD of the dimensions of the original rectangle. For example, the smallest square tile in the adjacent figure is (shown in red), and is the GCD of and , the dimensions of the original rectangle (shown in green). Euclidean division At every step , the Euclidean algorithm computes a quotient and remainder from two numbers and , where the is non-negative and is strictly less than the absolute value of . The theorem which underlies the definition of the Euclidean division ensures that such a quotient and remainder always exist and are unique. In Euclid's original version of the algorithm, the quotient and remainder are found by repeated subtraction; that is, is subtracted from repeatedly until the remainder is smaller than . After that and are exchanged and the process is iterated. Euclidean division reduces all the steps between two exchanges into a single step, which is thus more efficient. Moreover, the quotients are not needed, thus one may replace Euclidean division by the modulo operation, which gives only the remainder. Thus the iteration of the Euclidean algorithm becomes simply . Implementations Implementations of the algorithm may be expressed in pseudocode. For example, the division-based version may be programmed as function gcd(a, b) while b ≠ 0 t := b b := a mod b a := t return a At the beginning of the th iteration, the variable holds the latest remainder , whereas the variable holds its predecessor, . The step is equivalent to the above recursion formula . 
The temporary variable holds the value of while the next remainder is being calculated. At the end of the loop iteration, the variable holds the remainder , whereas the variable holds its predecessor, . (If negative inputs are allowed, or if the mod function may return negative values, the last line must be replaced with .) In the subtraction-based version, which was Euclid's original version, the remainder calculation () is replaced by repeated subtraction. Contrary to the division-based version, which works with arbitrary integers as input, the subtraction-based version supposes that the input consists of positive integers and stops when : function gcd(a, b) while a ≠ b if a > b a := a − b else b := b − a return a The variables and alternate holding the previous remainders and . Assume that is larger than at the beginning of an iteration; then equals , since . During the loop iteration, is reduced by multiples of the previous remainder until is smaller than . Then is the next remainder . Then is reduced by multiples of until it is again smaller than , giving the next remainder , and so on. The recursive version is based on the equality of the GCDs of successive remainders and the stopping condition . function gcd(a, b) if b = 0 return a else return gcd(b, a mod b) (As above, if negative inputs are allowed, or if the mod function may return negative values, the instruction must be replaced by .) For illustration, the is calculated from the equivalent . The latter GCD is calculated from the , which in turn is calculated from the . Method of least absolute remainders In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder. Previously, the equation assumed that . However, an alternative negative remainder can be computed: if or if . If is replaced by . when , then one gets a variant of Euclidean algorithm such that at each step. Leopold Kronecker has shown that this version requires the fewest steps of any version of Euclid's algorithm. More generally, it has been proven that, for every input numbers a and b, the number of steps is minimal if and only if is chosen in order that where is the golden ratio. Historical development The Euclidean algorithm is one of the oldest algorithms in common use. It appears in Euclid's Elements (c. 300 BC), specifically in Book 7 (Propositions 1–2) and Book 10 (Propositions 2–3). In Book 7, the algorithm is formulated for integers, whereas in Book 10, it is formulated for lengths of line segments. (In modern usage, one would say it was formulated there for real numbers. But lengths, areas, and volumes, represented as real numbers in modern usage, are not measured in the same units and there is no natural unit of length, area, or volume; the concept of real numbers was unknown at that time.) The latter algorithm is geometrical. The GCD of two lengths and corresponds to the greatest length that measures and evenly; in other words, the lengths and are both integer multiples of the length . The algorithm was probably not discovered by Euclid, who compiled results from earlier mathematicians in his Elements. The mathematician and historian B. L. van der Waerden suggests that Book VII derives from a textbook on number theory written by mathematicians in the school of Pythagoras. The algorithm was probably known by Eudoxus of Cnidus (about 375 BC). 
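The effect of choosing least absolute remainders can be checked directly. The following Python sketch is illustrative; the step counters are not part of the algorithm itself, and the function names are ad hoc. It compares the standard version with the least-absolute-remainder variant on the worked example and on a pair of consecutive Fibonacci numbers, which are the slowest inputs for the standard version:

def gcd_steps(a, b):
    # Standard version: the remainder is always taken in the range [0, b).
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return a, steps

def gcd_steps_least_abs(a, b):
    # Variant: at each step choose whichever remainder is smaller in magnitude.
    steps = 0
    while b != 0:
        r = a % b
        if r > b - r:        # the negative remainder r - b is smaller in magnitude
            r = r - b
        a, b = b, abs(r)
        steps += 1
    return a, steps

print(gcd_steps(1071, 462))            # (21, 3)
print(gcd_steps_least_abs(1071, 462))  # (21, 3)
print(gcd_steps(377, 233))             # (1, 12): consecutive Fibonacci numbers
print(gcd_steps_least_abs(377, 233))   # (1, 7): noticeably fewer steps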
The algorithm may even pre-date Eudoxus, judging from the use of the technical term ἀνθυφαίρεσις (anthyphairesis, reciprocal subtraction) in works by Euclid and Aristotle. Claude Brezinski, following remarks by Pappus of Alexandria, credits the algorithm to Theaetetus (c. 417 – c. 369 BC). Centuries later, Euclid's algorithm was discovered independently both in India and in China, primarily to solve Diophantine equations that arose in astronomy and making accurate calendars. In the late 5th century, the Indian mathematician and astronomer Aryabhata described the algorithm as the "pulverizer", perhaps because of its effectiveness in solving Diophantine equations. Although a special case of the Chinese remainder theorem had already been described in the Chinese book Sunzi Suanjing, the general solution was published by Qin Jiushao in his 1247 book Shushu Jiuzhang (數書九章 Mathematical Treatise in Nine Sections). The Euclidean algorithm was first described numerically and popularized in Europe in the second edition of Bachet's Problèmes plaisants et délectables (Pleasant and enjoyable problems, 1624). In Europe, it was likewise used to solve Diophantine equations and in developing continued fractions. The extended Euclidean algorithm was published by the English mathematician Nicholas Saunderson, who attributed it to Roger Cotes as a method for computing continued fractions efficiently. In the 19th century, the Euclidean algorithm led to the development of new number systems, such as Gaussian integers and Eisenstein integers. In 1815, Carl Gauss used the Euclidean algorithm to demonstrate unique factorization of Gaussian integers, although his work was first published in 1832. Gauss mentioned the algorithm in his Disquisitiones Arithmeticae (published 1801), but only as a method for continued fractions. Peter Gustav Lejeune Dirichlet seems to have been the first to describe the Euclidean algorithm as the basis for much of number theory. Lejeune Dirichlet noted that many results of number theory, such as unique factorization, would hold true for any other system of numbers to which the Euclidean algorithm could be applied. Lejeune Dirichlet's lectures on number theory were edited and extended by Richard Dedekind, who used Euclid's algorithm to study algebraic integers, a new general type of number. For example, Dedekind was the first to prove Fermat's two-square theorem using the unique factorization of Gaussian integers. Dedekind also defined the concept of a Euclidean domain, a number system in which a generalized version of the Euclidean algorithm can be defined (as described below). In the closing decades of the 19th century, the Euclidean algorithm gradually became eclipsed by Dedekind's more general theory of ideals. Other applications of Euclid's algorithm were developed in the 19th century. In 1829, Charles Sturm showed that the algorithm was useful in the Sturm chain method for counting the real roots of polynomials in any given interval. The Euclidean algorithm was the first integer relation algorithm, which is a method for finding integer relations between commensurate real numbers. Several novel integer relation algorithms have been developed, such as the algorithm of Helaman Ferguson and R.W. Forcade (1979) and the LLL algorithm. In 1969, Cole and Davie developed a two-player game based on the Euclidean algorithm, called The Game of Euclid, which has an optimal strategy. The players begin with two piles of and stones. 
The players take turns removing multiples of the smaller pile from the larger. Thus, if the two piles consist of and stones, where is larger than , the next player can reduce the larger pile from stones to stones, as long as the latter is a nonnegative integer. The winner is the first player to reduce one pile to zero stones. Mathematical applications Bézout's identity Bézout's identity states that the greatest common divisor of two integers and can be represented as a linear sum of the original two numbers and . In other words, it is always possible to find integers and such that . The integers and can be calculated from the quotients , , etc. by reversing the order of equations in Euclid's algorithm. Beginning with the next-to-last equation, can be expressed in terms of the quotient and the two preceding remainders, and : . Those two remainders can be likewise expressed in terms of their quotients and preceding remainders, and . Substituting these formulae for and into the first equation yields as a linear sum of the remainders and . The process of substituting remainders by formulae involving their predecessors can be continued until the original numbers and are reached: . After all the remainders , , etc. have been substituted, the final equation expresses as a linear sum of and , so that . The Euclidean algorithm, and thus Bézout's identity, can be generalized to the context of Euclidean domains. Principal ideals and related problems Bézout's identity provides yet another definition of the greatest common divisor of two numbers and . Consider the set of all numbers , where and are any two integers. Since and are both divisible by , every number in the set is divisible by . In other words, every number of the set is an integer multiple of . This is true for every common divisor of and . However, unlike other common divisors, the greatest common divisor is a member of the set; by Bézout's identity, choosing and gives . A smaller common divisor cannot be a member of the set, since every member of the set must be divisible by . Conversely, any multiple of can be obtained by choosing and , where and are the integers of Bézout's identity. This may be seen by multiplying Bézout's identity by m, . Therefore, the set of all numbers is equivalent to the set of multiples of . In other words, the set of all possible sums of integer multiples of two numbers ( and ) is equivalent to the set of multiples of . The GCD is said to be the generator of the ideal of and . This GCD definition led to the modern abstract algebraic concepts of a principal ideal (an ideal generated by a single element) and a principal ideal domain (a domain in which every ideal is a principal ideal). Certain problems can be solved using this result. For example, consider two measuring cups of volume and . By adding/subtracting multiples of the first cup and multiples of the second cup, any volume can be measured out. These volumes are all multiples of . Extended Euclidean algorithm The integers and of Bézout's identity can be computed efficiently using the extended Euclidean algorithm. This extension adds two recursive equations to Euclid's algorithm with the starting values . Using this recursion, Bézout's integers and are given by and , where is the step on which the algorithm terminates with . The validity of this approach can be shown by induction. Assume that the recursion formula is correct up to step of the algorithm; in other words, assume that for all less than . The th step of the algorithm gives the equation . 
Since the recursion formula has been assumed to be correct for and , they may be expressed in terms of the corresponding and variables . Rearranging this equation yields the recursion formula for step , as required . Matrix method The integers and can also be found using an equivalent matrix method. The sequence of equations of Euclid's algorithm can be written as a product of quotient matrices multiplying a two-dimensional remainder vector Let represent the product of all the quotient matrices This simplifies the Euclidean algorithm to the form To express as a linear sum of and , both sides of this equation can be multiplied by the inverse of the matrix . The determinant of equals , since it equals the product of the determinants of the quotient matrices, each of which is negative one. Since the determinant of is never zero, the vector of the final remainders can be solved using the inverse of Since the top equation gives , the two integers of Bézout's identity are and . The matrix method is as efficient as the equivalent recursion, with two multiplications and two additions per step of the Euclidean algorithm. Euclid's lemma and unique factorization Bézout's identity is essential to many applications of Euclid's algorithm, such as demonstrating the unique factorization of numbers into prime factors. To illustrate this, suppose that a number can be written as a product of two factors and , that is, . If another number also divides but is coprime with , then must divide , by the following argument: If the greatest common divisor of and is , then integers and can be found such that by Bézout's identity. Multiplying both sides by gives the relation: Since divides both terms on the right-hand side, it must also divide the left-hand side, . This result is known as Euclid's lemma. Specifically, if a prime number divides , then it must divide at least one factor of . Conversely, if a number is coprime to each of a series of numbers , , ..., , then is also coprime to their product, . Euclid's lemma suffices to prove that every number has a unique factorization into prime numbers. To see this, assume the contrary, that there are two independent factorizations of into and prime factors, respectively . Since each prime divides by assumption, it must also divide one of the factors; since each is prime as well, it must be that . Iteratively dividing by the factors shows that each has an equal counterpart ; the two prime factorizations are identical except for their order. The unique factorization of numbers into primes has many applications in mathematical proofs, as shown below. Linear Diophantine equations Diophantine equations are equations in which the solutions are restricted to integers; they are named after the 3rd-century Alexandrian mathematician Diophantus. A typical linear Diophantine equation seeks integers and such that where , and are given integers. This can be written as an equation for in modular arithmetic: . Let be the greatest common divisor of and . Both terms in are divisible by ; therefore, must also be divisible by , or the equation has no solutions. By dividing both sides by , the equation can be reduced to Bezout's identity , where and can be found by the extended Euclidean algorithm. This provides one solution to the Diophantine equation, and . In general, a linear Diophantine equation has no solutions, or an infinite number of solutions. To find the latter, consider two solutions, and , where or equivalently . 
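The recursion for the Bézout coefficients described under the extended Euclidean algorithm above can also be written out iteratively. A minimal Python sketch follows; the starting values shown are the conventional ones (the exact values are not reproduced in the text), and the name ext_gcd is chosen here for illustration:

def ext_gcd(a, b):
    # Maintain old_r = old_s*a + old_t*b and r = s*a + t*b at every step.
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t       # the GCD g and coefficients with s*a + t*b = g

g, s, t = ext_gcd(1071, 462)
print(g, s, t)                       # 21 -3 7, since (-3)*1071 + 7*462 = 21
assert s * 1071 + t * 462 == g

Scaling the coefficients by c/g then gives one particular solution of the linear Diophantine equation ax + by = c discussed above, provided g divides c.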
Therefore, the smallest difference between two solutions is , whereas the smallest difference between two solutions is . Thus, the solutions may be expressed as . By allowing to vary over all possible integers, an infinite family of solutions can be generated from a single solution . If the solutions are required to be positive integers , only a finite number of solutions may be possible. This restriction on the acceptable solutions allows some systems of Diophantine equations with more unknowns than equations to have a finite number of solutions; this is impossible for a system of linear equations when the solutions can be any real number (see Underdetermined system). Multiplicative inverses and the RSA algorithm A finite field is a set of numbers with four generalized operations. The operations are called addition, subtraction, multiplication and division and have their usual properties, such as commutativity, associativity and distributivity. An example of a finite field is the set of 13 numbers using modular arithmetic. In this field, the results of any mathematical operation (addition, subtraction, multiplication, or division) is reduced modulo ; that is, multiples of are added or subtracted until the result is brought within the range –. For example, the result of . Such finite fields can be defined for any prime ; using more sophisticated definitions, they can also be defined for any power of a prime . Finite fields are often called Galois fields, and are abbreviated as or ). In such a field with numbers, every nonzero element has a unique modular multiplicative inverse, such that . This inverse can be found by solving the congruence equation , or the equivalent linear Diophantine equation . This equation can be solved by the Euclidean algorithm, as described above. Finding multiplicative inverses is an essential step in the RSA algorithm, which is widely used in electronic commerce; specifically, the equation determines the integer used to decrypt the message. Although the RSA algorithm uses rings rather than fields, the Euclidean algorithm can still be used to find a multiplicative inverse where one exists. The Euclidean algorithm also has other applications in error-correcting codes; for example, it can be used as an alternative to the Berlekamp–Massey algorithm for decoding BCH and Reed–Solomon codes, which are based on Galois fields. Chinese remainder theorem Euclid's algorithm can also be used to solve multiple linear Diophantine equations. Such equations arise in the Chinese remainder theorem, which describes a novel method to represent an integer x. Instead of representing an integer by its digits, it may be represented by its remainders xi modulo a set of N coprime numbers mi: The goal is to determine x from its N remainders xi. The solution is to combine the multiple equations into a single linear Diophantine equation with a much larger modulus M that is the product of all the individual moduli mi, and define Mi as Thus, each Mi is the product of all the moduli except mi. The solution depends on finding N new numbers hi such that With these numbers hi, any integer x can be reconstructed from its remainders xi by the equation Since these numbers hi are the multiplicative inverses of the Mi, they may be found using Euclid's algorithm as described in the previous subsection. Stern–Brocot tree The Euclidean algorithm can be used to arrange the set of all positive rational numbers into an infinite binary search tree, called the Stern–Brocot tree. 
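Both the modular inverse needed in the RSA setting and the Chinese-remainder reconstruction described above come down to the extended algorithm. A self-contained Python sketch follows, assuming the moduli are pairwise coprime; the helper names are chosen here for illustration:

def ext_gcd(a, b):
    # Recursive form of the extended Euclidean algorithm.
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    # Solve a*x ≡ 1 (mod m); an inverse exists only when gcd(a, m) = 1.
    g, x, _ = ext_gcd(a, m)
    if g != 1:
        raise ValueError("no inverse: a and m are not coprime")
    return x % m

def crt(residues, moduli):
    # Reconstruct x from its remainders x_i modulo pairwise-coprime moduli m_i.
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for x_i, m_i in zip(residues, moduli):
        M_i = M // m_i                  # product of all moduli except m_i
        h_i = mod_inverse(M_i, m_i)     # h_i is the inverse of M_i modulo m_i
        x += x_i * M_i * h_i
    return x % M

print(mod_inverse(5, 13))         # 8, since 5*8 = 40 ≡ 1 (mod 13)
print(crt([2, 3, 2], [3, 5, 7]))  # 23: x ≡ 2 (mod 3), 3 (mod 5), 2 (mod 7)

Recent versions of Python also expose the modular inverse directly as pow(a, -1, m).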
The number 1 (expressed as a fraction 1/1) is placed at the root of the tree, and the location of any other number a/b can be found by computing gcd(a,b) using the original form of the Euclidean algorithm, in which each step replaces the larger of the two given numbers by its difference with the smaller number (not its remainder), stopping when two equal numbers are reached. A step of the Euclidean algorithm that replaces the first of the two numbers corresponds to a step in the tree from a node to its right child, and a step that replaces the second of the two numbers corresponds to a step in the tree from a node to its left child. The sequence of steps constructed in this way does not depend on whether a/b is given in lowest terms, and forms a path from the root to a node containing the number a/b. This fact can be used to prove that each positive rational number appears exactly once in this tree. For example, 3/4 can be found by starting at the root, going to the left once, then to the right twice: The Euclidean algorithm has almost the same relationship to another binary tree on the rational numbers called the Calkin–Wilf tree. The difference is that the path is reversed: instead of producing a path from the root of the tree to a target, it produces a path from the target to the root. Continued fractions The Euclidean algorithm has a close relationship with continued fractions. The sequence of equations can be written in the form The last term on the right-hand side always equals the inverse of the left-hand side of the next equation. Thus, the first two equations may be combined to form The third equation may be used to substitute the denominator term r1/r0, yielding The final ratio of remainders rk/rk−1 can always be replaced using the next equation in the series, up to the final equation. The result is a continued fraction In the worked example above, the gcd(1071, 462) was calculated, and the quotients qk were 2, 3 and 7, respectively. Therefore, the fraction 1071/462 may be written as can be confirmed by calculation. Factorization algorithms Calculating a greatest common divisor is an essential step in several integer factorization algorithms, such as Pollard's rho algorithm, Shor's algorithm, Dixon's factorization method and the Lenstra elliptic curve factorization. The Euclidean algorithm may be used to find this GCD efficiently. Continued fraction factorization uses continued fractions, which are determined using Euclid's algorithm. Algorithmic efficiency The computational efficiency of Euclid's algorithm has been studied thoroughly. This efficiency can be described by the number of division steps the algorithm requires, multiplied by the computational expense of each step. The first known analysis of Euclid's algorithm is due to A. A. L. Reynaud in 1811, who showed that the number of division steps on input (u, v) is bounded by v; later he improved this to v/2 + 2. Later, in 1841, P. J. E. Finck showed that the number of division steps is at most 2 log2 v + 1, and hence Euclid's algorithm runs in time polynomial in the size of the input. Émile Léger, in 1837, studied the worst case, which is when the inputs are consecutive Fibonacci numbers. Finck's analysis was refined by Gabriel Lamé in 1844, who showed that the number of steps required for completion is never more than five times the number h of base-10 digits of the smaller number b. 
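Both constructions can be reproduced in a few lines of Python; the sketch below is illustrative and the helper names are ad hoc. It recovers the quotients 2, 3 and 7 as the continued-fraction terms of 1071/462, and the left-right-right path of 3/4 in the Stern–Brocot tree:

from fractions import Fraction

def continued_fraction(a, b):
    # The successive quotients q_k of Euclid's algorithm applied to (a, b).
    terms = []
    while b != 0:
        q, r = divmod(a, b)
        terms.append(q)
        a, b = b, r
    return terms

def stern_brocot_path(a, b):
    # Subtractive form: replacing the first number is a right step,
    # replacing the second number is a left step.
    path = []
    while a != b:
        if a > b:
            a -= b
            path.append('R')
        else:
            b -= a
            path.append('L')
    return ''.join(path)

print(continued_fraction(1071, 462))   # [2, 3, 7]
assert Fraction(1071, 462) == 2 + 1 / (3 + Fraction(1, 7))
print(stern_brocot_path(3, 4))         # 'LRR': left once, then right twice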
In the uniform cost model (suitable for analyzing the complexity of gcd calculation on numbers that fit into a single machine word), each step of the algorithm takes constant time, and Lamé's analysis implies that the total running time is also O(h). However, in a model of computation suitable for computation with larger numbers, the computational expense of a single remainder computation in the algorithm can be as large as O(h2). In this case the total time for all of the steps of the algorithm can be analyzed using a telescoping series, showing that it is also O(h2). Modern algorithmic techniques based on the Schönhage–Strassen algorithm for fast integer multiplication can be used to speed this up, leading to quasilinear algorithms for the GCD. Number of steps The number of steps to calculate the GCD of two natural numbers, a and b, may be denoted by T(a, b). If g is the GCD of a and b, then a = mg and b = ng for two coprime numbers m and n. Then as may be seen by dividing all the steps in the Euclidean algorithm by g. By the same argument, the number of steps remains the same if a and b are multiplied by a common factor w: T(a, b) = T(wa, wb). Therefore, the number of steps T may vary dramatically between neighboring pairs of numbers, such as T(a, b) and T(a, b + 1), depending on the size of the two GCDs. The recursive nature of the Euclidean algorithm gives another equation where T(x, 0) = 0 by assumption. Worst-case If the Euclidean algorithm requires N steps for a pair of natural numbers a > b > 0, the smallest values of a and b for which this is true are the Fibonacci numbers FN+2 and FN+1, respectively. More precisely, if the Euclidean algorithm requires N steps for the pair a > b, then one has a ≥ FN+2 and b ≥ FN+1. This can be shown by induction. If N = 1, b divides a with no remainder; the smallest natural numbers for which this is true is b = 1 and a = 2, which are F2 and F3, respectively. Now assume that the result holds for all values of N up to M − 1. The first step of the M-step algorithm is a = q0b + r0, and the Euclidean algorithm requires M − 1 steps for the pair b > r0. By induction hypothesis, one has b ≥ FM+1 and r0 ≥ FM. Therefore, a = q0b + r0 ≥ b + r0 ≥ FM+1 + FM = FM+2, which is the desired inequality. This proof, published by Gabriel Lamé in 1844, represents the beginning of computational complexity theory, and also the first practical application of the Fibonacci numbers. This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of its digits (base 10). For if the algorithm requires N steps, then b is greater than or equal to FN+1 which in turn is greater than or equal to φN−1, where φ is the golden ratio. Since b ≥ φN−1, then N − 1 ≤ logφb. Since log10φ > 1/5, (N − 1)/5 < log10φ logφb = log10b. Thus, N ≤ 5 log10b. Thus, the Euclidean algorithm always needs less than O(h) divisions, where h is the number of digits in the smaller number b. Average The average number of steps taken by the Euclidean algorithm has been defined in three different ways. The first definition is the average time T(a) required to calculate the GCD of a given number a and a smaller natural number b chosen with equal probability from the integers 0 to a − 1 However, since T(a, b) fluctuates dramatically with the GCD of the two numbers, the averaged function T(a) is likewise "noisy". 
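Lamé's bound is easy to probe numerically. The small Python sketch below is an empirical check over a limited range, not a proof; the helper name steps is chosen here. It verifies that consecutive Fibonacci numbers attain the worst case and that the step count never exceeds five times the number of decimal digits of the smaller argument:

def steps(a, b):
    # Number of division steps taken by the Euclidean algorithm on the pair (a, b).
    n = 0
    while b:
        a, b = b, a % b
        n += 1
    return n

# Consecutive Fibonacci numbers F(N+2) > F(N+1) need exactly N steps.
fib = [1, 1]
while fib[-1] < 10_000:
    fib.append(fib[-1] + fib[-2])
for n in range(1, len(fib) - 1):
    assert steps(fib[n + 1], fib[n]) == n

# Lamé's bound: never more than five times the number of decimal digits of b.
for a in range(2, 300):
    for b in range(1, a):
        assert steps(a, b) <= 5 * len(str(b))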
To reduce this noise, a second average τ(a) is taken over all numbers coprime with a There are φ(a) coprime integers less than a, where φ is Euler's totient function. This tau average grows smoothly with a with the residual error being of order a−(1/6)+ε, where ε is infinitesimal. The constant C in this formula is called Porter's constant and equals where is the Euler–Mascheroni constant and is the derivative of the Riemann zeta function. The leading coefficient (12/π2) ln 2 was determined by two independent methods. Since the first average can be calculated from the tau average by summing over the divisors d of a it can be approximated by the formula where Λ(d) is the Mangoldt function. A third average Y(n) is defined as the mean number of steps required when both a and b are chosen randomly (with uniform distribution) from 1 to n Substituting the approximate formula for T(a) into this equation yields an estimate for Y(n) Computational expense per step In each step k of the Euclidean algorithm, the quotient qk and remainder rk are computed for a given pair of integers rk−2 and rk−1 The computational expense per step is associated chiefly with finding qk, since the remainder rk can be calculated quickly from rk−2, rk−1, and qk The computational expense of dividing h-bit numbers scales as , where is the length of the quotient. For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division is equivalent to the quotient q number of subtractions. If the ratio of a and b is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient q is approximately where . For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers, the subtraction-based Euclid's algorithm is competitive with the division-based version. This is exploited in the binary version of Euclid's algorithm. Combining the estimated number of steps with the estimated computational expense per step shows that the Euclid's algorithm grows quadratically (h2) with the average number of digits h in the initial two numbers a and b. Let represent the number of digits in the successive remainders . Since the number of steps N grows linearly with h, the running time is bounded by Alternative methods Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. For comparison, the efficiency of alternatives to Euclid's algorithm may be determined. One inefficient approach to finding the GCD of two natural numbers a and b is to calculate all their common divisors; the GCD is then the largest common divisor. The common divisors can be found by dividing both numbers by successive integers from 2 to the smaller number b. The number of steps of this approach grows linearly with b, or exponentially in the number of digits. Another inefficient approach is to find the prime factors of one or both numbers. As noted above, the GCD equals the product of the prime factors shared by the two numbers a and b. Present methods for prime factorization are also inefficient; many modern cryptography systems even rely on that inefficiency. 
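The distribution of quotients mentioned above can be estimated empirically. In the following sketch the sample size and input range are arbitrary choices made here for illustration; it tallies the quotients over random pairs and compares them with the quoted probabilities of roughly 41.5%, 17.0%, 9.3% and 5.9% for quotients 1 through 4:

import random
from collections import Counter
from math import log2

def quotients(a, b):
    # The quotients produced by the Euclidean algorithm on the pair (a, b).
    qs = []
    while b:
        q, r = divmod(a, b)
        qs.append(q)
        a, b = b, r
    return qs

random.seed(0)
counts, total = Counter(), 0
for _ in range(20_000):
    a = random.randrange(1, 10**6)
    b = random.randrange(1, a + 1)
    qs = quotients(a, b)
    counts.update(qs)
    total += len(qs)

for q in (1, 2, 3, 4):
    observed = counts[q] / total
    predicted = log2(1 + 1 / (q * (q + 2)))   # equivalently log2(u/(u-1)) with u = (q+1)^2
    print(q, round(observed, 3), round(predicted, 3))
# Typically prints observed frequencies close to 0.415, 0.170, 0.093 and 0.059.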
The binary GCD algorithm is an efficient alternative that substitutes division with faster operations by exploiting the binary representation used by computers. However, this alternative also scales like O(h²). It is generally faster than the Euclidean algorithm on real computers, even though it scales in the same way. Additional efficiency can be gleaned by examining only the leading digits of the two numbers a and b. The binary algorithm can be extended to other bases (k-ary algorithms), with up to fivefold increases in speed. Lehmer's GCD algorithm uses the same general principle as the binary algorithm to speed up GCD computations in arbitrary bases. A recursive approach for very large integers (with more than 25,000 digits) leads to quasilinear integer GCD algorithms, such as those of Schönhage, and Stehlé and Zimmermann. These algorithms exploit the 2×2 matrix form of the Euclidean algorithm given above. These quasilinear methods generally scale as Generalizations Although the Euclidean algorithm is used to find the greatest common divisor of two natural numbers (positive integers), it may be generalized to the real numbers, and to other mathematical objects, such as polynomials, quadratic integers and Hurwitz quaternions. In the latter cases, the Euclidean algorithm is used to demonstrate the crucial property of unique factorization, i.e., that such numbers can be factored uniquely into irreducible elements, the counterparts of prime numbers. Unique factorization is essential to many proofs of number theory. Rational and real numbers Euclid's algorithm can be applied to real numbers, as described by Euclid in Book 10 of his Elements. The goal of the algorithm is to identify a real number such that two given real numbers, and , are integer multiples of it: and , where and are integers. This identification is equivalent to finding an integer relation among the real numbers and ; that is, it determines integers and such that . If such an equation is possible, a and b are called commensurable lengths, otherwise they are incommensurable lengths. The real-number Euclidean algorithm differs from its integer counterpart in two respects. First, the remainders are real numbers, although the quotients are integers as before. Second, the algorithm is not guaranteed to end in a finite number of steps. If it does, the fraction is a rational number, i.e., the ratio of two integers and can be written as a finite continued fraction . If the algorithm does not stop, the fraction is an irrational number and can be described by an infinite continued fraction . Examples of infinite continued fractions are the golden ratio and the square root of two, . The algorithm is unlikely to stop, since almost all ratios of two real numbers are irrational. An infinite continued fraction may be truncated at a step to yield an approximation to that improves as is increased. The approximation is described by convergents ; the numerator and denominators are coprime and obey the recurrence relation where and are the initial values of the recursion. The convergent is the best rational number approximation to with denominator : Polynomials Polynomials in a single variable x can be added, multiplied and factored into irreducible polynomials, which are the analogs of the prime numbers for integers. The greatest common divisor polynomial of two polynomials and is defined as the product of their shared irreducible polynomials, which can be identified using the Euclidean algorithm. 
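For the integer case, the binary algorithm mentioned above replaces division by halving, subtraction and parity tests. A minimal sketch of one common formulation (often attributed to Stein) follows; it is an illustration rather than the optimized form used in practice:

def binary_gcd(a, b):
    # Binary GCD on nonnegative integers: only halving, subtraction and parity tests.
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while a % 2 == 0 and b % 2 == 0:   # factor out the powers of two common to both
        a //= 2
        b //= 2
        shift += 1
    while a % 2 == 0:
        a //= 2
    while b != 0:
        while b % 2 == 0:
            b //= 2
        if a > b:
            a, b = b, a
        b -= a                          # both are odd here, so the difference is even
    return a << shift                   # restore the common powers of two

print(binary_gcd(1071, 462))  # 21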
The basic procedure is similar to that for integers. At each step , a quotient polynomial and a remainder polynomial are identified to satisfy the recursive equation where and . Each quotient polynomial is chosen such that each remainder is either zero or has a degree that is smaller than the degree of its predecessor: . Since the degree is a nonnegative integer, and since it decreases with every step, the Euclidean algorithm concludes in a finite number of steps. The last nonzero remainder is the greatest common divisor of the original two polynomials, and . For example, consider the following two quartic polynomials, which each factor into two quadratic polynomials Dividing by yields a remainder . In the next step, is divided by yielding a remainder . Finally, dividing by yields a zero remainder, indicating that is the greatest common divisor polynomial of and , consistent with their factorization. Many of the applications described above for integers carry over to polynomials. The Euclidean algorithm can be used to solve linear Diophantine equations and Chinese remainder problems for polynomials; continued fractions of polynomials can also be defined. The polynomial Euclidean algorithm has other applications, such as Sturm chains, a method for counting the zeros of a polynomial that lie inside a given real interval. This in turn has applications in several areas, such as the Routh–Hurwitz stability criterion in control theory. Finally, the coefficients of the polynomials need not be drawn from integers, real numbers or even the complex numbers. For example, the coefficients may be drawn from a general field, such as the finite fields described above. The corresponding conclusions about the Euclidean algorithm and its applications hold even for such polynomials. Gaussian integers The Gaussian integers are complex numbers of the form , where and are ordinary integers and is the square root of negative one. By defining an analog of the Euclidean algorithm, Gaussian integers can be shown to be uniquely factorizable, by the argument above. This unique factorization is helpful in many applications, such as deriving all Pythagorean triples or proving Fermat's theorem on sums of two squares. In general, the Euclidean algorithm is convenient in such applications, but not essential; for example, the theorems can often be proven by other arguments. The Euclidean algorithm developed for two Gaussian integers and is nearly the same as that for ordinary integers, but differs in two respects. As before, we set and , and the task at each step is to identify a quotient and a remainder such that where every remainder is strictly smaller than its predecessor: . The first difference is that the quotients and remainders are themselves Gaussian integers, and thus are complex numbers. The quotients are generally found by rounding the real and complex parts of the exact ratio (such as the complex number ) to the nearest integers. The second difference lies in the necessity of defining how one complex remainder can be "smaller" than another. To do this, a norm function is defined, which converts every Gaussian integer into an ordinary integer. After each step of the Euclidean algorithm, the norm of the remainder is smaller than the norm of the preceding remainder, . Since the norm is a nonnegative integer and decreases with every step, the Euclidean algorithm for Gaussian integers ends in a finite number of steps. 
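The polynomial version of the algorithm can be sketched with coefficient lists over the rationals. Because the quartic example above does not reproduce its coefficients, the sketch below uses its own small pair of polynomials sharing the factor x − 1; all names are ad hoc:

from fractions import Fraction

def poly_divmod(num, den):
    # Polynomials as coefficient lists with the highest-degree coefficient first.
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    if len(num) < len(den):
        return [], num
    quot = []
    for i in range(len(num) - len(den) + 1):
        coeff = num[i] / den[0]
        quot.append(coeff)
        for j, d in enumerate(den):
            num[i + j] -= coeff * d
    rem = num[len(quot):]
    while rem and rem[0] == 0:          # drop leading zero coefficients of the remainder
        rem.pop(0)
    return quot, rem

def poly_gcd(a, b):
    # Euclidean algorithm on polynomials; the result is normalized to be monic.
    while b:
        _, r = poly_divmod(a, b)
        a, b = b, r
    return [c / Fraction(a[0]) for c in a]

# (x - 1)(x + 1) = x^2 - 1 and (x - 1)(x + 2) = x^2 + x - 2 share the factor x - 1.
print(poly_gcd([1, 0, -1], [1, 1, -2]))   # [Fraction(1, 1), Fraction(-1, 1)], i.e. x - 1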
The final nonzero remainder is , the Gaussian integer of largest norm that divides both and ; it is unique up to multiplication by a unit, or . Many of the other applications of the Euclidean algorithm carry over to Gaussian integers. For example, it can be used to solve linear Diophantine equations and Chinese remainder problems for Gaussian integers; continued fractions of Gaussian integers can also be defined. Euclidean domains A set of elements under two binary operations, denoted as addition and multiplication, is called a Euclidean domain if it forms a commutative ring and, roughly speaking, if a generalized Euclidean algorithm can be performed on them. The two operations of such a ring need not be the addition and multiplication of ordinary arithmetic; rather, they can be more general, such as the operations of a mathematical group or monoid. Nevertheless, these general operations should respect many of the laws governing ordinary arithmetic, such as commutativity, associativity and distributivity. The generalized Euclidean algorithm requires a Euclidean function, i.e., a mapping from into the set of nonnegative integers such that, for any two nonzero elements and in , there exist and in such that and . Examples of such mappings are the absolute value for integers, the degree for univariate polynomials, and the norm for Gaussian integers above. The basic principle is that each step of the algorithm reduces f inexorably; hence, if can be reduced only a finite number of times, the algorithm must stop in a finite number of steps. This principle relies on the well-ordering property of the non-negative integers, which asserts that every non-empty set of non-negative integers has a smallest member. The fundamental theorem of arithmetic applies to any Euclidean domain: Any number from a Euclidean domain can be factored uniquely into irreducible elements. Any Euclidean domain is a unique factorization domain (UFD), although the converse is not true. The Euclidean domains and the UFDs are subclasses of the GCD domains, domains in which a greatest common divisor of two numbers always exists. In other words, a greatest common divisor may exist (for all pairs of elements in a domain), although it may not be possible to find it using a Euclidean algorithm. A Euclidean domain is always a principal ideal domain (PID), an integral domain in which every ideal is a principal ideal. Again, the converse is not true: not every PID is a Euclidean domain. The unique factorization of Euclidean domains is useful in many applications. For example, the unique factorization of the Gaussian integers is convenient in deriving formulae for all Pythagorean triples and in proving Fermat's theorem on sums of two squares. Unique factorization was also a key element in an attempted proof of Fermat's Last Theorem published in 1847 by Gabriel Lamé, the same mathematician who analyzed the efficiency of Euclid's algorithm, based on a suggestion of Joseph Liouville. Lamé's approach required the unique factorization of numbers of the form , where and are integers, and is an th root of 1, that is, . Although this approach succeeds for some values of (such as , the Eisenstein integers), in general such numbers do not factor uniquely. This failure of unique factorization in some cyclotomic fields led Ernst Kummer to the concept of ideal numbers and, later, Richard Dedekind to ideals. Unique factorization of quadratic integers The quadratic integer rings are helpful to illustrate Euclidean domains. 
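Returning to the Gaussian-integer case described above, a minimal sketch can represent each Gaussian integer as a pair of ordinary integers and round the exact ratio to the nearest Gaussian integer at every step. The example inputs and helper names below are chosen here for illustration, and the result is determined only up to the units ±1 and ±i:

from fractions import Fraction

def gaussian_gcd(a, b):
    # Gaussian integers represented as pairs (x, y) standing for x + y*i.
    def norm(z):
        return z[0] ** 2 + z[1] ** 2
    def mul(u, v):
        return (u[0] * v[0] - u[1] * v[1], u[0] * v[1] + u[1] * v[0])
    while norm(b) != 0:
        n = norm(b)
        num = mul(a, (b[0], -b[1]))     # a/b = a * conj(b) / norm(b), computed exactly
        q = (round(Fraction(num[0], n)), round(Fraction(num[1], n)))
        qb = mul(q, b)
        a, b = b, (a[0] - qb[0], a[1] - qb[1])   # the remainder has strictly smaller norm
    return a

# (1 + 3i) = (2 + i)(1 + i) and (6 + 3i) = (2 + i)*3, so the GCD is 2 + i up to a unit.
print(gaussian_gcd((1, 3), (6, 3)))   # (-2, -1), which is (2 + i) times the unit -1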
Quadratic integers are generalizations of the Gaussian integers in which the imaginary unit i is replaced by a number . Thus, they have the form , where and are integers and has one of two forms, depending on a parameter . If does not equal a multiple of four plus one, then If, however, does equal a multiple of four plus one, then If the function corresponds to a norm function, such as that used to order the Gaussian integers above, then the domain is known as norm-Euclidean. The norm-Euclidean rings of quadratic integers are exactly those where is one of the values −11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57, or 73. The cases and yield the Gaussian integers and Eisenstein integers, respectively. If is allowed to be any Euclidean function, then the list of possible values of for which the domain is Euclidean is not yet known. The first example of a Euclidean domain that was not norm-Euclidean (with ) was published in 1994. In 1973, Weinberger proved that a quadratic integer ring with is Euclidean if, and only if, it is a principal ideal domain, provided that the generalized Riemann hypothesis holds. Noncommutative rings The Euclidean algorithm may be applied to some noncommutative rings such as the set of Hurwitz quaternions. Let and represent two elements from such a ring. They have a common right divisor if and for some choice of and in the ring. Similarly, they have a common left divisor if and for some choice of and in the ring. Since multiplication is not commutative, there are two versions of the Euclidean algorithm, one for right divisors and one for left divisors. Choosing the right divisors, the first step in finding the by the Euclidean algorithm can be written where represents the quotient and the remainder. Here the quotent and remainder are chosen so that (if nonzero) the remainder has for a "Euclidean function" N defined analogously to the Euclidean functions of Euclidean domains in the non-commutative case. This equation shows that any common right divisor of and is likewise a common divisor of the remainder . The analogous equation for the left divisors would be With either choice, the process is repeated as above until the greatest common right or left divisor is identified. As in the Euclidean domain, the "size" of the remainder (formally, its Euclidean function or "norm") must be strictly smaller than , and there must be only a finite number of possible sizes for , so that the algorithm is guaranteed to terminate. Many results for the GCD carry over to noncommutative numbers. For example, Bézout's identity states that the right can be expressed as a linear combination of and . In other words, there are numbers and such that The analogous identity for the left GCD is nearly the same: Bézout's identity can be used to solve Diophantine equations. For instance, one of the standard proofs of Lagrange's four-square theorem, that every positive integer can be represented as a sum of four squares, is based on quaternion GCDs in this way. See also Euclidean rhythm, a method for using the Euclidean algorithm to generate musical rhythms Notes References Bibliography . See also Vorlesungen über Zahlentheorie External links Demonstrations of Euclid's algorithm Euclid's Algorithm at cut-the-knot The Euclidean Algorithm at MathPages Euclid's Game at cut-the-knot Music and Euclid's algorithm Number theoretic algorithms Articles with example pseudocode Articles containing proofs Algorithm
Euclidean algorithm
[ "Mathematics" ]
12,038
[ "Articles containing proofs" ]
10,406
https://en.wikipedia.org/wiki/Emotion
Emotions are physical and mental states brought on by neurophysiological changes, variously associated with thoughts, feelings, behavioral responses, and a degree of pleasure or displeasure. There is no scientific consensus on a definition. Emotions are often intertwined with mood, temperament, personality, disposition, or creativity. Research on emotion has increased over the past two decades, with many fields contributing, including psychology, medicine, history, sociology of emotions, computer science and philosophy. The numerous attempts to explain the origin, function, and other aspects of emotions have fostered intense research on this topic. Theorizing about the evolutionary origin and possible purpose of emotion dates back to Charles Darwin. Current areas of research include the neuroscience of emotion, using tools like PET and fMRI scans to study the affective picture processes in the brain. From a mechanistic perspective, emotions can be defined as "a positive or negative experience that is associated with a particular pattern of physiological activity". Emotions are complex, involving multiple different components, such as subjective experience, cognitive processes, expressive behavior, psychophysiological changes, and instrumental behavior. At one time, academics attempted to identify the emotion with one of the components: William James with a subjective experience, behaviorists with instrumental behavior, psychophysiologists with physiological changes, and so on. More recently, emotion has been said to consist of all the components. The different components of emotion are categorized somewhat differently depending on the academic discipline. In psychology and philosophy, emotion typically includes a subjective, conscious experience characterized primarily by psychophysiological expressions, biological reactions, and mental states. A similar multi-componential description of emotion is found in sociology. For example, Peggy Thoits described emotions as involving physiological components, cultural or emotional labels (anger, surprise, etc.), expressive body actions, and the appraisal of situations and contexts. Cognitive processes, like reasoning and decision-making, are often regarded as separate from emotional processes, making a division between "thinking" and "feeling". However, not all theories of emotion regard this separation as valid. Nowadays, most research into emotions in the clinical and well-being context focuses on emotion dynamics in daily life, predominantly the intensity of specific emotions and their variability, instability, inertia, and differentiation, as well as whether and how emotions augment or blunt each other over time and differences in these dynamics between people and along the lifespan. Etymology The word "emotion" dates back to 1579, when it was adapted from the French word émouvoir, which means "to stir up". The term emotion was introduced into academic discussion as a catch-all term to passions, sentiments and affections. The word "emotion" was coined in the early 1800s by Thomas Brown and it is around the 1830s that the modern concept of emotion first emerged for the English language. "No one felt emotions before about 1830. Instead they felt other things – 'passions', 'accidents of the soul', 'moral sentiments' – and explained them very differently from how we understand emotions today." 
Some cross-cultural studies indicate that the categorization of "emotion" and classification of basic emotions such as "anger" and "sadness" are not universal and that the boundaries and domains of these concepts are categorized differently by all cultures. However, others argue that there are some universal bases of emotions (see Section 6.1). In psychiatry and psychology, an inability to express or perceive emotion is sometimes referred to as alexithymia. History Human nature and the accompanying bodily sensations have always been part of the interests of thinkers and philosophers. Far more extensively, this has also been of great interest to both Western and Eastern societies. Emotional states have been associated with the divine and with the enlightenment of the human mind and body. The ever-changing actions of individuals and their mood variations have been of great importance to most of the Western philosophers (including Aristotle, Plato, Descartes, Aquinas, and Hobbes), leading them to propose extensive theories—often competing theories—that sought to explain emotion and the accompanying motivators of human action, as well as its consequences. In the Age of Enlightenment, Scottish thinker David Hume proposed a revolutionary argument that sought to explain the main motivators of human action and conduct. He proposed that actions are motivated by "fears, desires, and passions". As he wrote in his book A Treatise of Human Nature (1773): "Reason alone can never be a motive to any action of the will… it can never oppose passion in the direction of the will… The reason is, and ought to be, the slave of the passions, and can never pretend to any other office than to serve and obey them". With these lines, Hume attempted to explain that reason and further action would be subject to the desires and experience of the self. Later thinkers would propose that actions and emotions are deeply interrelated with social, political, historical, and cultural aspects of reality that would also come to be associated with sophisticated neurological and physiological research on the brain and other parts of the physical body. Definitions The Lexico definition of emotion is "A strong feeling deriving from one's circumstances, mood, or relationships with others". Emotions are responses to significant internal and external events. Emotions can be occurrences (e.g., panic) or dispositions (e.g., hostility), and short-lived (e.g., anger) or long-lived (e.g., grief). Psychotherapist Michael C. Graham describes all emotions as existing on a continuum of intensity. Thus fear might range from mild concern to terror or shame might range from simple embarrassment to toxic shame. Emotions have been described as consisting of a coordinated set of responses, which may include verbal, physiological, behavioral, and neural mechanisms. Emotions have been categorized, with some relationships existing between emotions and some direct opposites existing. Graham differentiates emotions as functional or dysfunctional and argues all functional emotions have benefits. In some uses of the word, emotions are intense feelings that are directed at someone or something. On the other hand, emotion can be used to refer to states that are mild (as in annoyed or content) and to states that are not directed at anything (as in anxiety and depression). One line of research looks at the meaning of the word emotion in everyday language and finds that this usage is rather different from that in academic discourse. 
In practical terms, Joseph LeDoux has defined emotions as the result of a cognitive and conscious process which occurs in response to a body system response to a trigger. Components According to Scherer's Component Process Model (CPM) of emotion, there are five crucial elements of emotion. From the component process perspective, emotional experience requires that all of these processes become coordinated and synchronized for a short period of time, driven by appraisal processes. Although the inclusion of cognitive appraisal as one of the elements is slightly controversial, since some theorists make the assumption that emotion and cognition are separate but interacting systems, the CPM provides a sequence of events that effectively describes the coordination involved during an emotional episode. Cognitive appraisal: provides an evaluation of events and objects. Bodily symptoms: the physiological component of emotional experience. Action tendencies: a motivational component for the preparation and direction of motor responses. Expression: facial and vocal expression almost always accompanies an emotional state to communicate reaction and intention of actions. Feelings: the subjective experience of emotional state once it has occurred. Differentiation Emotion can be differentiated from a number of similar constructs within the field of affective neuroscience: Emotions: predispositions to a certain type of action in response to a specific stimulus, which produce a cascade of rapid and synchronized physiological and cognitive changes. Feeling: not all feelings include emotion, such as the feeling of knowing. In the context of emotion, feelings are best understood as a subjective representation of emotions, private to the individual experiencing them. Emotions are often described as the raw, instinctive responses, while feelings involve our interpretation and awareness of those responses. Moods: enduring affective states that are considered less intense than emotions and appear to lack a contextual stimulus. Affect: a broader term used to describe the emotional and cognitive experience of an emotion, feeling or mood. It can be understood as a combination of three components: emotion, mood, and affectivity (an individual's overall disposition or temperament, which can be characterized as having a generally positive or negative affect). Evolutionary approach: Emotions' purpose and value There is no single, universally accepted evolutionary theory. The most prominent ideas suggest that emotions have evolved to serve various adaptive functions: Survival, threat detection, decision-making, and motivation. One view is that emotions facilitate adaptive responses to environmental challenges. Emotions like fear, anger, and disgust are thought to have evolved to help humans and other animals detect and respond to threats and dangers in their environment. For example, fear helps individuals react quickly to potential dangers, anger can motivate self-defense or assertiveness, and disgust can protect against harmful substances. In addition, happiness might reinforce behaviors that lead to positive outcomes. For example, the anticipation of the reward associated with a pleasurable emotion like joy can motivate individuals to engage in behaviors that promote their well-being. Memory enhancement: Emotions can enhance memory. 
Events or experiences that trigger strong emotions are often remembered more vividly, which can be advantageous for learning from past experiences and avoiding potential threats or repeating successful behaviors. Social communication. Emotions play a crucial role in social interactions. Expressing emotions through facial expressions, body language, and vocalizations helps convey information to others about one's internal state. This, in turn, facilitates cooperation, bonding, and the maintenance of social relationships. For example, a smile communicates happiness and friendliness, while a frown may signal distress or disapproval. Emotions can also ignite conversations about values and ethics. However some emotions, such as some forms of anxiety, are sometimes regarded as part of a mental illness and thus possibly of negative value. Classification A distinction can be made between emotional episodes and emotional dispositions. Emotional dispositions are also comparable to character traits, where someone may be said to be generally disposed to experience certain emotions. For example, an irritable person is generally disposed to feel irritation more easily or quickly than others do. Finally, some theorists place emotions within a more general category of "affective states" where affective states can also include emotion-related phenomena such as pleasure and pain, motivational states (for example, hunger or curiosity), moods, dispositions and traits. Basic emotions theory For more than 40 years, Paul Ekman has supported the view that emotions are discrete, measurable, and physiologically distinct. Ekman's most influential work revolved around the finding that certain emotions appeared to be universally recognized, even in cultures that were preliterate and could not have learned associations for facial expressions through media. Another classic study found that when participants contorted their facial muscles into distinct facial expressions (for example, disgust), they reported subjective and physiological experiences that matched the distinct facial expressions. Ekman's facial-expression research examined six basic emotions: anger, disgust, fear, happiness, sadness and surprise. Later in his career, Ekman theorized that other universal emotions may exist beyond these six. In light of this, recent cross-cultural studies led by Daniel Cordaro and Dacher Keltner, both former students of Ekman, extended the list of universal emotions. In addition to the original six, these studies provided evidence for amusement, awe, contentment, desire, embarrassment, pain, relief, and sympathy in both facial and vocal expressions. They also found evidence for boredom, confusion, interest, pride, and shame facial expressions, as well as contempt, relief, and triumph vocal expressions. Robert Plutchik agreed with Ekman's biologically driven perspective but developed the "wheel of emotions", suggesting eight primary emotions grouped on a positive or negative basis: joy versus sadness; anger versus fear; trust versus disgust; and surprise versus anticipation. Some basic emotions can be modified to form complex emotions. The complex emotions could arise from cultural conditioning or association combined with the basic emotions. Alternatively, similar to the way primary colors combine, primary emotions could blend to form the full spectrum of human emotional experience. For example, interpersonal anger and disgust could blend to form contempt. 
Relationships exist between basic emotions, resulting in positive or negative influences. Jaak Panksepp carved out seven biologically inherited primary affective systems called SEEKING (expectancy), FEAR (anxiety), RAGE (anger), LUST (sexual excitement), CARE (nurturance), PANIC/GRIEF (sadness), and PLAY (social joy). He proposed what is known as "core-SELF" to be generating these affects. Multi-dimensional analysis theory Psychologists have used methods such as factor analysis to attempt to map emotion-related responses onto a more limited number of dimensions. Such methods attempt to boil emotions down to underlying dimensions that capture the similarities and differences between experiences. Often, the first two dimensions uncovered by factor analysis are valence (how negative or positive the experience feels) and arousal (how energized or enervated the experience feels). These two dimensions can be depicted on a 2D coordinate map. This two-dimensional map has been theorized to capture one important component of emotion called core affect. Core affect is not theorized to be the only component to emotion, but to give the emotion its hedonic and felt energy. Using statistical methods to analyze emotional states elicited by short videos, Cowen and Keltner identified 27 varieties of emotional experience: admiration, adoration, aesthetic appreciation, amusement, anger, anxiety, awe, awkwardness, boredom, calmness, confusion, craving, disgust, empathic pain, entrancement, excitement, fear, horror, interest, joy, nostalgia, relief, romance, sadness, satisfaction, sexual desire, and surprise. Theories Pre-modern history In Hinduism, Bharata Muni enunciated the nine rasas (emotions) in the Nātyasāstra, an ancient Sanskrit text of dramatic theory and other performance arts, written between 200 BC and 200 AD. The theory of rasas still forms the aesthetic underpinning of all Indian classical dance and theatre, such as Bharatanatyam, kathak, Kuchipudi, Odissi, Manipuri, Kudiyattam, Kathakali and others. Bharata Muni established the following: Śṛṅgāraḥ (शृङ्गारः): Romance / Love / attractiveness, Hāsyam (हास्यं): Laughter / mirth / comedy, Raudram (रौद्रं): Fury / Anger, Kāruṇyam (कारुण्यं): Compassion / mercy, Bībhatsam (बीभत्सं): Disgust / aversion, Bhayānakam (भयानकं): Horror / terror, Veeram (वीरं): Pride / Heroism, Adbhutam (अद्भुतं): Surprise / wonder. In Buddhism, emotions occur when an object is considered attractive or repulsive. There is a felt tendency impelling people towards attractive objects and propelling them to move away from repulsive or harmful objects; a disposition to possess the object (greed), to destroy it (hatred), to flee from it (fear), to get obsessed or worried over it (anxiety), and so on. In Stoic theories, normal emotions (like delight and fear) are described as irrational impulses that come from incorrect appraisals of what is 'good' or 'bad'. Alternatively, there are 'good emotions' (like joy and caution) experienced by those that are wise, which come from correct appraisals of what is 'good' and 'bad'. Aristotle believed that emotions were an essential component of virtue. In the Aristotelian view all emotions (called passions) corresponded to appetites or capacities. During the Middle Ages, the Aristotelian view was adopted and further developed by scholasticism and Thomas Aquinas in particular. In Chinese antiquity, excessive emotion was believed to cause damage to qi, which in turn, damages the vital organs. 
The four humors theory, made popular by Hippocrates, contributed to the study of emotion in the same way that it did for medicine. In the early 11th century, Avicenna theorized about the influence of emotions on health and behaviors, suggesting the need to manage emotions. Early modern views on emotion were developed in the works of philosophers such as René Descartes, Niccolò Machiavelli, Baruch Spinoza, Thomas Hobbes and David Hume. In the 19th century, emotions were considered adaptive and were studied more frequently from an empiricist psychiatric perspective. The Western theological Christian perspective on emotion presupposes a theistic origin of humanity. God, who created humans, gave them the ability to feel emotion and interact emotionally. Biblical content expresses that God is a person who feels and expresses emotion. Though a somatic view would place the locus of emotions in the physical body, Christian theory of emotions would view the body more as a platform for the sensing and expression of emotions. Therefore, emotions themselves arise from the person, or that which is "imago-dei" or Image of God in humans. In Christian thought, emotions have the potential to be controlled through reasoned reflection. That reasoned reflection also mimics God, who made the mind. The purpose of emotions in human life is therefore summarized in God's call to enjoy Him and creation: humans are to enjoy emotions, benefit from them, and use them to energize behavior. Evolutionary theories 19th century Perspectives on emotions from evolutionary theory were initiated during the mid-late 19th century with Charles Darwin's 1872 book The Expression of the Emotions in Man and Animals. Darwin argued that emotions served no evolved purpose for humans, neither in communication, nor in aiding survival. Darwin largely argued that emotions evolved via the inheritance of acquired characters. He pioneered various methods for studying non-verbal expressions, from which he concluded that some expressions had cross-cultural universality. Darwin also detailed homologous expressions of emotions that occur in animals. This led the way for animal research on emotions and the eventual determination of the neural underpinnings of emotion. Contemporary More contemporary views along the evolutionary psychology spectrum posit that both basic emotions and social emotions evolved to motivate (social) behaviors that were adaptive in the ancestral environment. Emotion is an essential part of any human decision-making and planning, and the famous distinction made between reason and emotion is not as clear as it seems. Paul D. MacLean claims that emotion competes with even more instinctive responses, on the one hand, and more abstract reasoning, on the other. The increased potential in neuroimaging has also allowed investigation into evolutionarily ancient parts of the brain. Important neurological advances were derived from these perspectives in the 1990s by Joseph E. LeDoux and Antonio Damasio. For example, in an extensive study of a subject with ventromedial frontal lobe damage described in the book Descartes' Error, Damasio demonstrated how the loss of physiological capacity for emotion resulted in the subject's loss of capacity to make decisions despite having robust faculties for rationally assessing options. Research on physiological emotion has caused modern neuroscience to abandon the model of emotions and rationality as opposing forces. 
In contrast to the ancient Greek ideal of dispassionate reason, the neuroscience of emotion shows that emotion is necessarily integrated with intellect. Research on social emotion also focuses on the physical displays of emotion, including the body language of animals and humans (see affect display). For example, spite seems to work against the individual, but it can establish an individual's reputation as someone to be feared. Shame and pride can motivate behaviors that help one maintain one's standing in a community, and self-esteem is one's estimate of one's status. Somatic theories Somatic theories of emotion claim that bodily responses, rather than cognitive interpretations, are essential to emotions. The first modern version of such theories came from William James in the 1880s. The theory lost favor in the 20th century, but has regained popularity more recently due largely to theorists such as John T. Cacioppo, Antonio Damasio, Joseph E. LeDoux and Robert Zajonc, who are able to appeal to neurological evidence. James–Lange theory In his 1884 article, William James argued that feelings and emotions were secondary to physiological phenomena. In his theory, James proposed that the perception of what he called an "exciting fact" directly led to a physiological response, known as "emotion". To account for different types of emotional experiences, James proposed that stimuli trigger activity in the autonomic nervous system, which in turn produces an emotional experience in the brain. The Danish psychologist Carl Lange also proposed a similar theory at around the same time, and therefore this theory became known as the James–Lange theory. As James wrote, "the perception of bodily changes, as they occur, is the emotion". James further claims that "we feel sad because we cry, angry because we strike, afraid because we tremble, and either we cry, strike, or tremble because we are sorry, angry, or fearful, as the case may be". An example of this theory in action would be as follows: An emotion-evoking stimulus (snake) triggers a pattern of physiological response (increased heart rate, faster breathing, etc.), which is interpreted as a particular emotion (fear). This theory is supported by experiments in which manipulating the bodily state induces a desired emotional state. Some people may believe that emotions give rise to emotion-specific actions, for example, "I'm crying because I'm sad", or "I ran away because I was scared". The issue with the James–Lange theory is that of causation (bodily states causing emotions and being a priori), not that of the bodily influences on emotional experience (which can be argued and is still quite prevalent today in biofeedback studies and embodiment theory). Although the theory has been mostly abandoned in its original form, Tim Dalgleish argues that most contemporary neuroscientists have embraced its components. Cannon–Bard theory Walter Bradford Cannon agreed that physiological responses played a crucial role in emotions, but did not believe that physiological responses alone could explain subjective emotional experiences. He argued that physiological responses were too slow and often imperceptible, and that this could not account for the relatively rapid and intense subjective awareness of emotion. He also believed that the richness, variety, and temporal course of emotional experiences could not stem from physiological reactions, which reflected fairly undifferentiated fight-or-flight responses. 
An example of this theory in action is as follows: An emotion-evoking event (snake) simultaneously triggers both a physiological response and a conscious experience of an emotion. Philip Bard contributed to the theory with his work on animals. Bard found that sensory, motor, and physiological information all had to pass through the diencephalon (particularly the thalamus) before being subjected to any further processing. Therefore, Cannon also argued that it was not anatomically possible for sensory events to trigger a physiological response prior to triggering conscious awareness, and that emotional stimuli had to trigger both physiological and experiential aspects of emotion simultaneously. Two-factor theory Stanley Schachter formulated his theory on the basis of the earlier work of a Spanish physician, Gregorio Marañón, who injected patients with epinephrine and subsequently asked them how they felt. Marañón found that most of these patients felt something, but in the absence of an actual emotion-evoking stimulus they were unable to interpret their physiological arousal as an experienced emotion. Schachter did agree that physiological reactions played a big role in emotions. He suggested that physiological reactions contributed to emotional experience by facilitating a focused cognitive appraisal of a given physiologically arousing event, and that this appraisal was what defined the subjective emotional experience. Emotions were thus the result of a two-stage process: general physiological arousal, followed by the experience of emotion. For example, a pounding heart is the physiological arousal that occurs in response to an evoking stimulus, such as the sight of a bear in the kitchen. The brain then quickly scans the area to explain the pounding, and notices the bear. Consequently, the brain interprets the pounding heart as being the result of fearing the bear. With his student, Jerome Singer, Schachter demonstrated that subjects can have different emotional reactions despite being placed into the same physiological state with an injection of epinephrine. Subjects were observed to express either anger or amusement depending on whether another person in the situation (a confederate) displayed that emotion. Hence, the combination of the appraisal of the situation (cognitive) and the participants' reception of adrenalin or a placebo together determined the response. This experiment has been criticized in Jesse Prinz's (2004) Gut Reactions. 
He has put forward a more nuanced view which responds to what he has called the 'standard objection' to cognitivism: the idea that a judgment that something is fearsome can occur with or without emotion, so judgment cannot be identified with emotion. Cognitive Appraisal Theory One of the main proponents of this view was Richard Lazarus, who argued that emotions must have some cognitive intentionality. The cognitive activity involved in the interpretation of an emotional context may be conscious or unconscious and may or may not take the form of conceptual processing. Lazarus' theory is very influential; in it, emotion is a disturbance that occurs in the following order: Cognitive appraisal: The individual assesses the event cognitively, which cues the emotion. Physiological changes: The cognitive reaction starts biological changes such as increased heart rate or a pituitary adrenal response. Action: The individual feels the emotion and chooses how to react. For example: Jenny sees a snake. Jenny cognitively assesses the snake in her presence. Cognition allows her to understand it as a danger. Her brain activates the adrenal glands, which pump adrenalin through her bloodstream, resulting in an increased heartbeat. Jenny screams and runs away. Lazarus stressed that the quality and intensity of emotions are controlled through cognitive processes. These processes underlie coping strategies that form the emotional reaction by altering the relationship between the person and the environment. Two-Process Theory George Mandler provided an extensive theoretical and empirical discussion of emotion as influenced by cognition, consciousness, and the autonomic nervous system in two books (Mind and Emotion, 1975, and Mind and Body: Psychology of Emotion and Stress, 1984). A prominent psychologist known for his contributions to the study of cognition and emotion, Mandler proposed the "Two-Process Theory of Emotion". This theory offers insights into how emotions are generated and how cognitive processes play a role in emotional experiences. Mandler's theory focuses on the interplay between primary and secondary appraisal processes in the formation of emotions. Here are the key components of his theory: Primary Appraisal: This initial cognitive appraisal involves evaluating a situation for its relevance and implications for one's well-being. It assesses whether a situation is beneficial, harmful, or neutral. A positive primary appraisal may lead to positive emotions, while a negative primary appraisal may lead to negative emotions. Secondary Appraisal: Secondary appraisal follows the primary appraisal and involves an assessment of one's ability to cope with or manage the situation. If an individual believes they have the resources and skills to cope effectively, this may result in a different emotional response than if they perceive themselves as unable to cope. Emotion Generation: The combination of the primary and secondary appraisals contributes to the generation of emotions. The specific emotion experienced is determined by these appraisals. For instance, if a person appraises a situation as relevant to their well-being (positive or negative) and believes they have the resources to cope, this might lead to an emotion such as joy or relief. Conversely, if the situation is appraised negatively, and coping resources are perceived as lacking, emotions like fear or sadness may result. 
Mandler's Two-Process Theory of Emotion emphasizes the importance of cognitive appraisal processes in shaping emotional experiences. It recognizes that emotions are not just automatic reactions but result from complex evaluations of the significance of situations and one's ability to manage them effectively. This theory underscores the role of cognition in the emotional process and highlights the interplay of cognitive factors in the formation of emotions. The Affect Infusion Model (AIM) The Affect Infusion Model (AIM) is a psychological framework that was developed by Joseph Forgas in the 1990s. This model focuses on how affect, or mood and emotions, can influence cognitive processes and decision-making. The central idea of the AIM is that affect, whether it is a positive or negative mood, can "infuse" or influence various cognitive activities, including information processing and judgments. Key components and principles of the Affect Infusion Model include: Affect as Information: The AIM posits that individuals use their current mood or emotional state as a source of information when making judgments or decisions. In other words, people consider their emotional experiences as part of the decision-making process. Information Processing Strategies: The model suggests that affect can influence the strategies people use to process information. Positive affect might lead to a more heuristic or "top-down" processing style, whereas negative affect might lead to a more systematic, detail-oriented "bottom-up" processing style. Affect Congruence: The AIM suggests that when the affective state is congruent with the information being processed, it can enhance processing efficiency and lead to more favorable judgments. For example, a positive mood might lead to more positive evaluations of positive information. Affect Infusion: The concept of "affect infusion" refers to the idea that affect can "infuse" or bias cognitive processes, potentially leading to decision-making that is influenced by emotional factors. Moderating Factors: The model acknowledges that various factors, such as individual differences, task complexity, and the extent of attention paid to one's mood, can moderate the degree to which affect influences cognition. The Affect Infusion Model has been applied to a wide range of areas, including consumer behavior, social judgment, and interpersonal interactions. It emphasizes the idea that emotions and mood play a more significant role in cognitive processes and decision-making than traditionally thought. While it has been influential in understanding the interplay between affect and cognition, it is important to note that the AIM is just one of several models in the field of emotion and cognition that help explain the intricate relationship between emotions and thinking. Appraisal-Tendency Theory Source: The Appraisal-Tendency Theory, developed by Joseph P. Forgas, is a theory that focuses on how people have dispositional tendencies to appraise and interpret situations in specific ways, leading to consistent emotional reactions to particular types of situations. This theory suggests that certain individuals may have stable, habitual patterns of appraising and attributing emotional significance to events, and these tendencies can influence their emotional responses and judgments. Key features and concepts of the Appraisal-Tendency Theory include: Cognitive Appraisals: Appraisal tendencies refer to the habitual or characteristic ways that individuals appraise or evaluate situations. 
Appraisals involve cognitive judgments about the personal relevance, desirability, and significance of events or situations. Stable and Individual Differences: The theory posits that these appraisal tendencies are stable and relatively consistent across time. They are also seen as individual differences, meaning that people may differ in the specific appraisal tendencies they exhibit. Emotional Responses: Appraisal tendencies influence emotional responses to situations. For instance, individuals with a tendency to appraise situations as threatening may consistently experience fear or anxiety in response to a range of situations perceived as threats. Influence on Social Judgments: The theory extends beyond emotions to include the impact of appraisal tendencies on social judgments and evaluations. For example, individuals with a tendency to perceive events as unfair may make consistent social judgments related to fairness and justice. Context Dependence: Appraisal tendencies may interact with situational factors. In some situations, the tendency to appraise a situation as threatening, for instance, may lead to fear, while in different contexts, it may not produce the same emotional response. Appraisal-Tendency Theory suggests that these cognitive tendencies can shape an individual's overall emotional disposition, influencing their emotional reactions and social judgments. This theory has been applied in various contexts, including studies of personality, social psychology, and decision-making, to better understand how cognitive appraisal tendencies influence emotional and evaluative responses. Laws of Emotion Source: Nico Frijda was a prominent psychologist known for his work in the field of emotion and affective science. One of the key contributions of Frijda are his "Laws of Emotion", which outline a set of principles that help explain how emotions function and how they are experienced. Frijda's Laws of Emotion are as follows: The Law of Situational Meaning: This law posits that emotions are elicited by events or situations that have personal significance and meaning for the individual. Emotions are not random but are a response to the perceived meaning of the situation. The Law of Concern: Frijda suggests that emotions are fundamentally concerned with the individual's well-being and adaptation. Emotions serve as signals or reactions to situations that impact one's goals, needs, or values. The Law of Appraisal: This law acknowledges the role of cognitive appraisal processes in the emotional experience. Individuals appraise or evaluate a situation based on factors such as its relevance, congruence with goals, and coping potential, which in turn shapes the specific emotional response. The Law of Readiness: Frijda's theory suggests that emotions prepare individuals for action. Emotions are associated with physiological changes and action tendencies that ready the individual to respond to the situation. For example, fear may prepare someone to escape a threat. The Law of Concerned Expectancy: Emotions are influenced by both what is happening now and what is anticipated to occur in the future. Emotions can reflect an individual's expectations about the consequences of a situation. Frijda's theory emphasizes the adaptive function of emotions and the role of cognitive appraisal in shaping emotional experiences. It highlights that emotions are not simply reactions to external events but are intimately tied to the individual's goals, values, and perceptions of the situation's meaning. 
Frijda's work has had a significant influence on the study of emotions and has contributed to a more comprehensive understanding of how emotions operate. Emotion Attribution Theory Source: Jesse Prinz is a contemporary philosopher and cognitive scientist who has contributed to the field of emotion theory. One of his influential theories is the "Emotion Attribution Theory", which provides a perspective on how people recognize and understand emotions in themselves and others. Emotion Attribution Theory, proposed by Jesse Prinz, focuses on the role of emotion attributions in the experience and understanding of emotions. Key ideas and components of Prinz's theory include: Emotion Attribution: Prinz suggests that emotions are recognized through a process of attributing specific emotional states to oneself and others based on observed or perceived cues. These cues can include facial expressions, body language, vocal tone, and context. Basic Emotions: Prinz's theory is associated with the idea of basic emotions, which are a limited set of universal and biologically driven emotional states. He argues that attributions of basic emotions are part of human cognitive architecture and that these attributions are made automatically and rapidly. Social and Cultural Influence: While basic emotions are seen as universal, Prinz acknowledges the role of social and cultural factors in shaping how emotions are expressed and interpreted. Culture can influence the display rules for emotions and how emotions are perceived in various contexts. Emotion and Moral Evaluation: Prinz's theory also explores the connection between emotions and moral evaluation. He suggests that emotions are linked to our moral judgments and evaluations of actions and events. Emotion attributions are crucial in the moral assessment of others' behaviors. Overall, Prinz's Emotion Attribution Theory emphasizes the role of attributions in the recognition and understanding of emotions. It highlights the automatic and cognitive processes involved in identifying and interpreting emotional states in oneself and others. This theory has implications for fields such as psychology, philosophy, and cognitive science and contributes to our understanding of the social and cultural aspects of emotions. Affective Events Theory (AET) Source: The Affective Events Theory (AET) is a psychological theory that focuses on the role of workplace events in shaping employees' emotions, attitudes, and behaviors in the context of their job. This theory was developed by organizational psychologists Howard M. Weiss and Russell Cropanzano in the late 1990s. AET primarily concerns itself with how emotional experiences at work can impact job satisfaction, performance, and other outcomes. Key concepts and principles of the Affective Events Theory include: Affective Events: AET centers on "affective events", which are specific events or occurrences in the workplace that trigger emotional responses in employees. These events can be positive (e.g., receiving praise or a promotion) or negative (e.g., conflicts with coworkers or work-related stressors). Emotion Generation: The theory suggests that these affective events generate emotions in employees. These emotions can be either discrete (specific emotions like happiness, anger, or sadness) or general mood states (e.g., feeling generally positive or negative). Emotion-Driven Outcomes: AET posits that emotions generated by affective events at work have consequences for employee attitudes and behaviors. 
For example, positive emotions may lead to increased job satisfaction, improved performance, and greater commitment to the organization, while negative emotions might result in reduced job satisfaction and increased turnover intentions. Moderating Factors: AET recognizes that individual and situational factors can moderate the relationship between affective events and outcomes. Personal characteristics, job roles, and organizational culture can influence how employees respond to affective events. Feedback Loop: The theory also suggests that there can be a feedback loop where the emotional reactions of employees influence their perceptions of subsequent events. In other words, an employee's emotional state may color their perception of future events and experiences in the workplace. Time Lag: AET acknowledges that the effects of affective events may not be immediate and can manifest over time. The theory allows for the consideration of both short-term and long-term emotional influences on employees. AET has been influential in the field of organizational psychology and has helped shed light on how workplace events can have a significant impact on employee well-being and organizational outcomes. It highlights the importance of understanding and managing the emotional experiences of employees in the context of their work. Situated perspective on emotion A situated perspective on emotion, developed by Paul E. Griffiths and Andrea Scarantino, emphasizes the importance of external factors in the development and communication of emotion, drawing upon the situationism approach in psychology. This theory is markedly different from both cognitivist and neo-Jamesian theories of emotion, both of which see emotion as a purely internal process, with the environment only acting as a stimulus to the emotion. In contrast, a situationist perspective on emotion views emotion as the product of an organism investigating its environment, and observing the responses of other organisms. Emotion stimulates the evolution of social relationships, acting as a signal to mediate the behavior of other organisms. In some contexts, the expression of emotion (both voluntary and involuntary) could be seen as strategic moves in the transactions between different organisms. The situated perspective on emotion states that conceptual thought is not an inherent part of emotion, since emotion is an action-oriented form of skillful engagement with the world. Griffiths and Scarantino suggested that this perspective on emotion could be helpful in understanding phobias, as well as the emotions of infants and animals. Genetics Emotions can motivate social interactions and relationships and therefore are directly related with basic physiology, particularly with the stress systems. This is important because emotions are related to the anti-stress complex, with an oxytocin-attachment system, which plays a major role in bonding. Emotional phenotype temperaments affect social connectedness and fitness in complex social systems. These characteristics are shared with other species and taxa and are due to the effects of genes and their continuous transmission. Information that is encoded in the DNA sequences provides the blueprint for assembling proteins that make up our cells. 
Zygotes require genetic information from their parental germ cells, and at every speciation event, heritable traits that have enabled ancestors to survive and reproduce successfully are passed down, along with new traits that could potentially benefit the offspring. In the five million years since the lineages leading to modern humans and chimpanzees split, only about 1.2% of their genetic material has been modified. This suggests that everything that separates us from chimpanzees must be encoded in that very small amount of DNA, including our behaviors. Students of animal behavior have only identified intraspecific examples of gene-dependent behavioral phenotypes. In voles (Microtus spp.), minor genetic differences have been identified in a vasopressin receptor gene that correspond to major species differences in social organization and the mating system. Another potential example of gene-dependent behavioral differences is the FOXP2 gene, which is involved in the neural circuitry handling speech and language. Its present form in humans differs from that of chimpanzees by only a few mutations and has been present for about 200,000 years, coinciding with the beginning of modern humans. Speech, language, and social organization are all part of the basis for emotions. Formation Neurobiological explanation Based on discoveries made through neural mapping of the limbic system, the neurobiological explanation of human emotion is that emotion is a pleasant or unpleasant mental state organized in the limbic system of the mammalian brain. If distinguished from the reactive responses of reptiles, emotions would then be mammalian elaborations of general vertebrate arousal patterns, in which neurochemicals (for example, dopamine, noradrenaline, and serotonin) step up or step down the brain's activity level, as visible in body movements, gestures and postures. Emotions can likely be mediated by pheromones (see fear). For example, the emotion of love is proposed to be the expression of paleocircuits of the mammalian brain (specifically, modules of the cingulate cortex or gyrus) which facilitate the care, feeding, and grooming of offspring. Paleocircuits are neural platforms for bodily expression configured before the advent of cortical circuits for speech. They consist of pre-configured pathways or networks of nerve cells in the forebrain, brainstem and spinal cord. Other emotions like fear and anxiety, long thought to be generated exclusively by the most primitive parts of the brain (the brainstem) and more associated with the fight-or-flight responses of behavior, have also been interpreted as adaptive expressions of defensive behavior whenever a threat is encountered. Although defensive behaviors have been present in a wide variety of species, Blanchard et al. (2001) discovered a correlation between given stimuli and situations that resulted in a similar pattern of defensive behavior towards a threat in human and non-human mammals. Whenever potentially dangerous stimuli are presented, additional brain structures activate beyond those previously thought to be involved (the hippocampus, thalamus, etc.). This gives the amygdala an important role in coordinating the subsequent behavioral response based on the neurotransmitters released in response to threat stimuli. These biological functions of the amygdala are not limited to "fear-conditioning" and the "processing of aversive stimuli", but are also present in other components of the amygdala. 
Therefore, the amygdala can be regarded as a key structure for understanding potential behavioral responses to dangerous situations in human and non-human mammals. The motor centers of reptiles react to sensory cues of vision, sound, touch, chemical, gravity, and motion with pre-set body movements and programmed postures. With the arrival of night-active mammals, smell replaced vision as the dominant sense, and a different way of responding arose from the olfactory sense, which is proposed to have developed into mammalian emotion and emotional memory. The mammalian brain invested heavily in olfaction to succeed at night as reptiles slept – one explanation for why olfactory lobes in mammalian brains are proportionally larger than those of reptiles. These odor pathways gradually formed the neural blueprint for what was later to become our limbic brain. Emotions are thought to be related to certain activities in brain areas that direct our attention, motivate our behavior, and determine the significance of what is going on around us. Pioneering work by Paul Broca (1878), James Papez (1937), and Paul D. MacLean (1952) suggested that emotion is related to a group of structures in the center of the brain called the limbic system, which includes the hypothalamus, cingulate cortex, hippocampi, and other structures. More recent research has shown that some of these limbic structures are not as directly related to emotion as others are, while some non-limbic structures have been found to be of greater emotional relevance. Prefrontal cortex There is ample evidence that the left prefrontal cortex is activated by stimuli that cause positive approach. If attractive stimuli can selectively activate a region of the brain, then logically the converse should hold: selective activation of that region of the brain should cause a stimulus to be judged more positively. This was demonstrated for moderately attractive visual stimuli, and replicated and extended to include negative stimuli. Two neurobiological models of emotion in the prefrontal cortex made opposing predictions. The valence model predicted that anger, a negative emotion, would activate the right prefrontal cortex. The direction model predicted that anger, an approach emotion, would activate the left prefrontal cortex. The second model was supported. This still left open the question of whether the opposite of approach in the prefrontal cortex is better described as moving away (direction model), as unmoving but with strength and resistance (movement model), or as unmoving with passive yielding (action tendency model). Support for the action tendency model (passivity related to right prefrontal activity) comes from research on shyness and research on behavioral inhibition. Research that tested the competing hypotheses generated by all four models also supported the action tendency model. Homeostatic/primordial emotion Another neurological approach, proposed by Bud Craig in 2003, distinguishes two classes of emotion: "classical" emotions such as love, anger and fear that are evoked by environmental stimuli, and "homeostatic emotions", attention-demanding feelings evoked by body states, such as pain, hunger and fatigue, that motivate behavior (withdrawal, eating or resting in these examples) aimed at maintaining the body's internal milieu at its ideal state. 
Derek Denton calls the latter "primordial emotions" and defines them as "the subjective element of the instincts, which are the genetically programmed behavior patterns which contrive homeostasis. They include thirst, hunger for air, hunger for food, pain and hunger for specific minerals etc. There are two constituents of a primordial emotion – the specific sensation which when severe may be imperious, and the compelling intention for gratification by a consummatory act". Emergent explanation Emotions are seen by some researchers to be constructed (that is, to emerge) in the social and cognitive domains alone, without directly implying biologically inherited characteristics. Joseph LeDoux differentiates between the human defense system, which has evolved over time, and emotions such as fear and anxiety. He has said that the amygdala may release hormones due to a trigger (such as an innate reaction to seeing a snake), but "then we elaborate it through cognitive and conscious processes". Lisa Feldman Barrett highlights differences in emotions between different cultures, and says that emotions (such as anxiety) are socially constructed (see theory of constructed emotion). She says that they "are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment". She has termed this approach the theory of constructed emotion. Disciplinary approaches Many different disciplines have produced work on the emotions. Human sciences study the role of emotions in mental processes, disorders, and neural mechanisms. In psychiatry, emotions are examined as part of the discipline's study and treatment of mental disorders in humans. Nursing studies emotions as part of its approach to the provision of holistic health care to humans. Psychology examines emotions from a scientific perspective by treating them as mental processes and behavior, and explores the underlying physiological and neurological processes, for example in cognitive behavioral therapy. In neuroscience sub-fields such as social neuroscience and affective neuroscience, scientists study the neural mechanisms of emotion by combining neuroscience with the psychological study of personality, emotion, and mood. In linguistics, the expression of emotion may change the meaning of sounds. In education, the role of emotions in relation to learning is examined. Social sciences often examine emotion for the role that it plays in human culture and social interactions. In sociology, emotions are examined for the role they play in human society, social patterns and interactions, and culture. In anthropology, the study of humanity, scholars use ethnography to undertake contextual analyses and cross-cultural comparisons of a range of human activities. Some anthropology studies examine the role of emotions in human activities. In the field of communication studies, critical organizational scholars have examined the role of emotions in organizations, from the perspectives of managers, employees, and even customers. A focus on emotions in organizations can be credited to Arlie Russell Hochschild's concept of emotional labor. The University of Queensland hosts EmoNet, an e-mail distribution list representing a network of academics that facilitates scholarly discussion of all matters relating to the study of emotion in organizational settings. 
The list was established in January 1997 and has over 700 members from across the globe. In economics, the social science that studies the production, distribution, and consumption of goods and services, emotions are analyzed in some sub-fields of microeconomics, in order to assess the role of emotions on purchase decision-making and risk perception. In criminology, a social science approach to the study of crime, scholars often draw on behavioral sciences, sociology, and psychology; emotions are examined in criminology issues such as anomie theory and studies of "toughness", aggressive behavior, and hooliganism. In law, which underpins civil obedience, politics, economics and society, evidence about people's emotions is often raised in tort law claims for compensation and in criminal law prosecutions against alleged lawbreakers (as evidence of the defendant's state of mind during trials, sentencing, and parole hearings). In political science, emotions are examined in a number of sub-fields, such as the analysis of voter decision-making. In philosophy, emotions are studied in sub-fields such as ethics, the philosophy of art (for example, sensory–emotional values, and matters of taste and sentimentality), and the philosophy of music (see also music and emotion). In history, scholars examine documents and other sources to interpret and analyze past activities; speculation on the emotional state of the authors of historical documents is one of the tools of interpretation. In literature and film-making, the expression of emotion is the cornerstone of genres such as drama, melodrama, and romance. In communication studies, scholars study the role that emotion plays in the dissemination of ideas and messages. Emotion is also studied in non-human animals in ethology, a branch of zoology which focuses on the scientific study of animal behavior. Ethology is a combination of laboratory and field science, with strong ties to ecology and evolution. Ethologists often study one type of behavior (for example, aggression) in a number of unrelated animals. History of emotions The history of emotions has become an increasingly popular topic recently, with some scholars arguing that it is an essential category of analysis, not unlike class, race, or gender. Historians, like other social scientists, assume that emotions, feelings and their expressions are regulated in different ways by both different cultures and different historical times, and the constructivist school of history claims even that some sentiments and meta-emotions, for example schadenfreude, are learnt and not only regulated by culture. Historians of emotion trace and analyze the changing norms and rules of feeling, while examining emotional regimes, codes, and lexicons from social, cultural, or political history perspectives. Others focus on the history of medicine, science, or psychology. What somebody can and may feel (and show) in a given situation, towards certain people or things, depends on social norms and rules; thus historically variable and open to change. Several research centers have opened in the past few years in Germany, England, Spain, Sweden, and Australia. Furthermore, research in historical trauma suggests that some traumatic emotions can be passed on from parents to offspring to second and even third generation, presented as examples of transgenerational trauma. 
Sociology A common way in which emotions are conceptualized in sociology is in terms of their multidimensional characteristics, including cultural or emotional labels (for example, anger, pride, fear, happiness), physiological changes (for example, increased perspiration, changes in pulse rate), expressive facial and body movements (for example, smiling, frowning, baring teeth), and appraisals of situational cues. One comprehensive theory of emotional arousal in humans has been developed by Jonathan Turner (2007; 2009). Two of the key eliciting factors for the arousal of emotions within this theory are expectation states and sanctions. When people enter a situation or encounter with certain expectations for how the encounter should unfold, they will experience different emotions depending on the extent to which expectations for Self, other and situation are met or not met. People can also provide positive or negative sanctions directed at Self or other, which also trigger different emotional experiences in individuals. Turner analyzed a wide range of emotion theories across different fields of research including sociology, psychology, evolutionary science, and neuroscience. Based on this analysis, he identified four emotions that all researchers consider to be founded on human neurology: assertive-anger, aversion-fear, satisfaction-happiness, and disappointment-sadness. These four categories are called primary emotions, and there is some agreement amongst researchers that these primary emotions become combined to produce more elaborate and complex emotional experiences. These more elaborate emotions are called first-order elaborations in Turner's theory, and they include sentiments such as pride, triumph, and awe. Emotions can also be experienced at different levels of intensity, so that feelings of concern are a low-intensity variation of the primary emotion aversion-fear, whereas depression is a higher-intensity variant. Attempts are frequently made to regulate emotion according to the conventions of the society and the situation, based on many (sometimes conflicting) demands and expectations which originate from various entities. The expression of anger is in many cultures discouraged in girls and women to a greater extent than in boys and men (the notion being that an angry man has a valid complaint that needs to be rectified, while an angry woman is hysterical or oversensitive, and her anger is somehow invalid), while the expression of sadness or fear is discouraged in boys and men relative to girls and women (attitudes implicit in phrases like "man up" or "don't be a sissy"). Expectations attached to social roles, such as "acting as a man" and not as a woman, and the accompanying "feeling rules" contribute to the differences in expression of certain emotions. Some cultures encourage or discourage happiness, sadness, or jealousy, and the free expression of the emotion of disgust is considered socially unacceptable in most cultures. Some social institutions are seen as based on certain emotions, such as love in the case of the contemporary institution of marriage. In advertising, such as health campaigns and political messages, emotional appeals are commonly found. Recent examples include no-smoking health campaigns and political campaigns emphasizing the fear of terrorism. Sociological attention to emotion has varied over time. 
Émile Durkheim (1915/1965) wrote about the collective effervescence or emotional energy that was experienced by members of totemic rituals in Australian Aboriginal society. He explained how the heightened state of emotional energy achieved during totemic rituals transported individuals above themselves giving them the sense that they were in the presence of a higher power, a force, that was embedded in the sacred objects that were worshipped. These feelings of exaltation, he argued, ultimately lead people to believe that there were forces that governed sacred objects. In the 1990s, sociologists focused on different aspects of specific emotions and how these emotions were socially relevant. For Cooley (1992), pride and shame were the most important emotions that drive people to take various social actions. During every encounter, he proposed that we monitor ourselves through the "looking glass" that the gestures and reactions of others provide. Depending on these reactions, we either experience pride or shame and this results in particular paths of action. Retzinger (1991) conducted studies of married couples who experienced cycles of rage and shame. Drawing predominantly on Goffman and Cooley's work, Scheff (1990) developed a micro sociological theory of the social bond. The formation or disruption of social bonds is dependent on the emotions that people experience during interactions. Subsequent to these developments, Randall Collins (2004) formulated his interaction ritual theory by drawing on Durkheim's work on totemic rituals that was extended by Goffman (1964/2013; 1967) into everyday focused encounters. Based on interaction ritual theory, we experience different levels or intensities of emotional energy during face-to-face interactions. Emotional energy is considered to be a feeling of confidence to take action and a boldness that one experiences when they are charged up from the collective effervescence generated during group gatherings that reach high levels of intensity. There is a growing body of research applying the sociology of emotion to understanding the learning experiences of students during classroom interactions with teachers and other students (for example, Milne & Otieno, 2007; Olitsky, 2007; Tobin, et al., 2013; Zembylas, 2002). These studies show that learning subjects like science can be understood in terms of classroom interaction rituals that generate emotional energy and collective states of emotional arousal like emotional climate. Apart from interaction ritual traditions of the sociology of emotion, other approaches have been classed into one of six other categories: evolutionary/biological theories symbolic interactionist theories dramaturgical theories ritual theories power and status theories stratification theories exchange theories This list provides a general overview of different traditions in the sociology of emotion that sometimes conceptualize emotion in different ways and at other times in complementary ways. Many of these different approaches were synthesized by Turner (2007) in his sociological theory of human emotions in an attempt to produce one comprehensive sociological account that draws on developments from many of the above traditions. Psychotherapy and regulation Emotion regulation refers to the cognitive and behavioral strategies people use to influence their own emotional experience. 
One behavioral strategy, for example, is avoiding a situation in order to avoid unwanted emotions (trying not to think about the situation, doing distracting activities, etc.). Different schools of psychotherapy approach the regulation of emotion differently, depending on the particular school's general emphasis on the cognitive components of emotion, on the discharge of physical energy, or on the symbolic movement and facial expression components of emotion. Cognitively oriented schools approach emotions via their cognitive components, as in rational emotive behavior therapy. Yet others approach emotions via their symbolic movement and facial expression components (as in contemporary Gestalt therapy). Cross-cultural research Research on emotions reveals the strong presence of cross-cultural differences in emotional reactions, and that emotional reactions are likely to be culture-specific. In strategic settings, cross-cultural research on emotions is required for understanding the psychological situation of a given population or specific actors. This implies the need to comprehend the current emotional state, mental disposition or other behavioral motivation of a target audience located in a different culture, basically founded on its national, political, social, economic, and psychological peculiarities but also subject to the influence of circumstances and events. Computer science In the 2000s, research in computer science, engineering, psychology and neuroscience was aimed at developing devices that recognize human affect display and model emotions. In computer science, affective computing is a branch of the study and development of artificial intelligence that deals with the design of systems and devices that can recognize, interpret, and process human emotions. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as early philosophical enquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing. Detecting emotional information begins with passive sensors that capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. Another area within affective computing is the design of computational devices proposed to exhibit innate emotional capabilities or to be capable of convincingly simulating emotions. Emotional speech processing recognizes the user's emotional state by analyzing speech patterns. The detection and processing of facial expressions or body gestures is achieved through detectors and sensors. Effects on memory Emotion affects the way autobiographical memories are encoded and retrieved. Emotional memories are reactivated more, are remembered better, and have more attention devoted to them. Through remembering our past achievements and failures, autobiographical memories affect how we perceive and feel about ourselves. Notable theorists In the late 19th century, the most influential theorists were William James (1842–1910) and Carl Lange (1834–1900). James was an American psychologist and philosopher who wrote about educational psychology, psychology of religious experience/mysticism, and the philosophy of pragmatism. Lange was a Danish physician and psychologist. Working independently, they developed the James–Lange theory, a hypothesis on the origin and nature of emotions. 
The theory states that within human beings, as a response to experiences in the world, the autonomic nervous system creates physiological events such as muscular tension, a rise in heart rate, perspiration, and dryness of the mouth. Emotions, then, are feelings which come about as a result of these physiological changes, rather than being their cause. Silvan Tomkins (1911–1991) developed affect theory and script theory. Affect theory introduced the concept of basic emotions, and was based on the idea that the dominance of the emotion, which he called the affect system, was the motivating force in human life. Some of the most influential deceased theorists on emotion from the 20th century include Magda B. Arnold (1903–2002), an American psychologist who developed the appraisal theory of emotions; Richard Lazarus (1922–2002), an American psychologist who specialized in emotion and stress, especially in relation to cognition; Herbert A. Simon (1916–2001), who incorporated emotions into decision making and artificial intelligence; Robert Plutchik (1928–2006), an American psychologist who developed a psychoevolutionary theory of emotion; Robert Zajonc (1923–2008), a Polish–American social psychologist who specialized in social and cognitive processes such as social facilitation; Robert C. Solomon (1942–2007), an American philosopher who contributed to the theories on the philosophy of emotions with books such as What Is An Emotion?: Classic and Contemporary Readings (2003); Peter Goldie (1946–2011), a British philosopher who specialized in ethics, aesthetics, emotion, mood and character; Nico Frijda (1927–2015), a Dutch psychologist who advanced the theory that human emotions serve to promote a tendency to undertake actions that are appropriate in the circumstances, detailed in his book The Emotions (1986); Jaak Panksepp (1943–2017), an Estonian-born American psychologist, psychobiologist, neuroscientist and pioneer in affective neuroscience; John T. Cacioppo (1951–2018), one of the founding fathers of social neuroscience; and George Mandler (1924–2016), an American psychologist who wrote influential books on cognition and emotion. Influential theorists who are still active include the following psychologists, neurologists, philosophers, and sociologists: Michael Apter (born 1939) – British psychologist who developed reversal theory, a structural, phenomenological theory of personality, motivation, and emotion Lisa Feldman Barrett (born 1963) – neuroscientist and psychologist specializing in affective science and human emotion Randall Collins (born 1941) – American sociologist from the University of Pennsylvania who developed the interaction ritual theory, which includes the emotional entrainment model Antonio Damasio (born 1944) – Portuguese behavioral neurologist and neuroscientist who works in the US Richard Davidson (born 1951) – American psychologist and neuroscientist; pioneer in affective neuroscience Paul Ekman (born 1934) – psychologist specializing in the study of emotions and their relation to facial expressions Barbara Fredrickson – social psychologist who specializes in emotions and positive psychology Arlie Russell Hochschild (born 1940) – American sociologist whose central contribution was in forging a link between the subcutaneous flow of emotion in social life and the larger trends set loose by modern capitalism within organizations Joseph E. 
LeDoux (born 1949) – American neuroscientist who studies the biological underpinnings of memory and emotion, especially the mechanisms of fear Jesse Prinz – American philosopher who specializes in emotion, moral psychology, aesthetics and consciousness James A. Russell (born 1947) – American psychologist who developed or co-developed the PAD theory of environmental impact, circumplex model of affect, prototype theory of emotion concepts, a critique of the hypothesis of universal recognition of emotion from facial expression, concept of core affect, developmental theory of differentiation of emotion concepts, and, more recently, the theory of the psychological construction of emotion Klaus Scherer (born 1943) – Swiss psychologist and director of the Swiss Center for Affective Sciences in Geneva; he specializes in the psychology of emotion Ronald de Sousa (born 1940) – English–Canadian philosopher who specializes in the philosophy of emotions, philosophy of mind and philosophy of biology Jonathan H. Turner (born 1942) – American sociologist from the University of California, Riverside, who is a general sociological theorist with specialty areas including the sociology of emotions, ethnic relations, social institutions, social stratification, and bio-sociology Dominique Moïsi (born 1946) – authored a book titled The Geopolitics of Emotion focusing on emotions related to globalization See also Affect measures Affective forecasting Affective neuroscience Coping Emotion and memory Emotion Review Emotional intelligence Emotional isolation Emotionally focused therapy Emotions in virtual communication Facial feedback hypothesis Fuzzy-trace theory Group emotion Homeostatic feeling Moral emotions Social sharing of emotions Two-factor theory of emotion Kuleshov effect References Further reading Glinka, Lukasz Andrzej (2013) Theorizing Emotions: A Brief Study of Psychological, Philosophical, and Cultural Aspects of Human Emotions. Great Abington: Cambridge International Science Publishing. . Dana Sugu & Amita Chaterjee "Flashback: Reshuffling Emotions" , International Journal on Humanistic Ideology, Vol. 3 No. 1, Spring–Summer 2010. Cornelius, R. (1996). The science of emotion. New Jersey: Prentice Hall. González, Ana Marta (2012). The Emotions and Cultural Analysis. Burlington, VT: Ashgate. Ekman, P. (1999). "Basic Emotions". In: T. Dalgleish and M. Power (Eds.). Handbook of Cognition and Emotion. John Wiley & Sons Ltd, Sussex, UK:. Frijda, N.H. (1986). The Emotions. Maison des Sciences de l'Homme and Cambridge University Press Hogan, Patrick Colm. (2011). What Literature Teaches Us about Emotion Cambridge: Cambridge University Press. Hordern, Joshua. (2013). Political Affections: Civic Participation and Moral Theology. Oxford: Oxford University Press. LeDoux, J.E. (1986). "The neurobiology of emotion". Chap. 15 in J.E. LeDoux & W. Hirst (Eds.) Mind and Brain: dialogues in cognitive neuroscience. New York: Cambridge. Mandler, G. (1984). Mind and Body: Psychology of emotion and stress. New York: Norton. Wayback Machine Plutchik, R. (1980). "A general psychoevolutionary theory of emotion". In R. Plutchik & H. Kellerman (Eds.), Emotion: Theory, research, and experience: Vol. 1. Theories of emotion (pp.3–33). New York: Academic. Roberts, Robert. (2003). Emotions: An Essay in Aid of Moral Psychology. Cambridge: Cambridge University Press. Solomon, R. (1993). The Passions: Emotions and the Meaning of Life. Indianapolis: Hackett Publishing. 
Wikibook Cognitive psychology and cognitive neuroscience External links About Emotions W. B. Cannon (1915). Bodily changes in pain, hunger, fear, and rage: an account of recent researches into the function of emotional excitement. New York: D. Appleton and Company Limbic system Subjective experience Human behavior Psychology Mental states
Emotion
[ "Biology" ]
14,483
[ "Emotion", "Behavior", "Psychology", "Behavioural sciences", "Human behavior" ]
10,412
https://en.wikipedia.org/wiki/Elementary%20function
In mathematics, an elementary function is a function of a single variable (typically real or complex) that is defined as taking sums, products, roots and compositions of finitely many polynomial, rational, trigonometric, hyperbolic, and exponential functions, and their inverses (e.g., arcsin, log, or x^(1/n)). All elementary functions are continuous on their domains. Elementary functions were introduced by Joseph Liouville in a series of papers from 1833 to 1841. An algebraic treatment of elementary functions was started by Joseph Fels Ritt in the 1930s. Many textbooks and dictionaries do not give a precise definition of the elementary functions, and mathematicians differ on it. Examples Basic examples Elementary functions of a single variable x include: Constant functions: 2, π, e, etc. Rational powers of x: x, x^2, x^(2/3), √x, etc. Exponential functions: e^x, a^x Logarithms: ln x, log_a x Trigonometric functions: sin x, cos x, tan x, etc. Inverse trigonometric functions: arcsin x, arccos x, etc. Hyperbolic functions: sinh x, cosh x, etc. Inverse hyperbolic functions: arsinh x, arcosh x, etc. All functions obtained by adding, subtracting, multiplying or dividing a finite number of any of the previous functions All functions obtained by root extraction of a polynomial with coefficients in elementary functions All functions obtained by composing a finite number of any of the previously listed functions Certain elementary functions of a single complex variable z, such as √z and log z, may be multivalued. Additionally, certain classes of functions may be obtained from others using the final two rules. For example, the exponential function composed with addition, subtraction, and division provides the hyperbolic functions, while initial composition with ix instead provides the trigonometric functions. Composite examples Examples of elementary functions include: Addition, e.g. (x + 1) Multiplication, e.g. (2x) Polynomial functions Compositions of the above, such as -i ln(x + i√(1 - x^2)) The last function is equal to arccos x, the inverse cosine, in the entire complex plane. All monomials, polynomials, rational functions and algebraic functions are elementary. The absolute value function, for real x, is also elementary as it can be expressed as the composition of a power and root of x: |x| = √(x^2). Non-elementary functions Many mathematicians exclude non-analytic functions such as the absolute value function or discontinuous functions such as the step function, but others allow them. Some have proposed extending the set to include, for example, the Lambert W function. Some examples of functions that are not elementary: tetration the gamma function non-elementary Liouvillian functions, including the exponential integral (Ei), logarithmic integral (Li or li) and Fresnel integrals (S and C). the error function, erf(x) = (2/√π) ∫_0^x e^(-t^2) dt, a fact that may not be immediately obvious, but can be proven using the Risch algorithm. other nonelementary integrals, including the Dirichlet integral and elliptic integral. Closure It follows directly from the definition that the set of elementary functions is closed under arithmetic operations, root extraction and composition. The elementary functions are closed under differentiation. They are not closed under limits and infinite sums. Importantly, the elementary functions are not closed under integration, as shown by Liouville's theorem; see nonelementary integral. The Liouvillian functions are defined as the elementary functions and, recursively, the integrals of the Liouvillian functions. Differential algebra The mathematical definition of an elementary function, or a function in elementary form, is considered in the context of differential algebra. 
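Before turning to the differential-algebra formalism, the closure statements above can be observed concretely with a computer algebra system: differentiating an elementary function yields another elementary function, while integrating one may not. The following minimal sketch uses the SymPy library (an assumption of mine, not something the article relies on); the non-elementary error function erf appears in the antiderivative of e^(-x^2).

import sympy as sp

x = sp.symbols('x')

# Differentiation stays inside the elementary functions.
print(sp.diff(sp.exp(x) * sp.sin(x), x))   # exp(x)*sin(x) + exp(x)*cos(x)

# Integration may not: the antiderivative of exp(-x**2) is expressed
# with the non-elementary error function erf.
print(sp.integrate(sp.exp(-x**2), x))      # sqrt(pi)*erf(x)/2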
A differential algebra is an algebra with the extra operation of derivation (algebraic version of differentiation). Using the derivation operation, new equations can be written and their solutions used in extensions of the algebra. By starting with the field of rational functions, two special types of transcendental extensions (the logarithm and the exponential) can be added to the field, building a tower containing elementary functions. A differential field F is a field F0 (rational functions over the rationals Q for example) together with a derivation map u → ∂u. (Here ∂u is a new function. Sometimes the notation u′ is used.) The derivation captures the properties of differentiation, so that for any two elements u, v of the base field, the derivation is linear, ∂(u + v) = ∂u + ∂v, and satisfies the Leibniz product rule, ∂(uv) = u ∂v + v ∂u. An element h is a constant if ∂h = 0. If the base field is over the rationals, care must be taken when extending the field to add the needed transcendental constants. A function u of a differential extension F[u] of a differential field F is an elementary function over F if the function u is algebraic over F, or is an exponential, that is, ∂u = u ∂a for a ∈ F, or is a logarithm, that is, ∂u = ∂a / a for a ∈ F. (see also Liouville's theorem) See also Notes References Further reading External links Elementary functions at Encyclopaedia of Mathematics Differential algebra Computer algebra Types of functions
Elementary function
[ "Mathematics", "Technology" ]
994
[ "Differential algebra", "Functions and mappings", "Computer algebra", "Computational mathematics", "Mathematical objects", "Fields of abstract algebra", "Computer science", "Mathematical relations", "Types of functions", "Algebra" ]
10,474
https://en.wikipedia.org/wiki/Eight%20queens%20puzzle
The eight queens puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens threaten each other; thus, a solution requires that no two queens share the same row, column, or diagonal. There are 92 solutions. The problem was first posed in the mid-19th century. In the modern era, it is often used as an example problem for various computer programming techniques. The eight queens puzzle is a special case of the more general n queens problem of placing n non-attacking queens on an n×n chessboard. Solutions exist for all natural numbers n with the exception of n = 2 and n = 3. Although the exact number of solutions is only known for n ≤ 27, the asymptotic growth rate of the number of solutions is approximately (0.143n)^n. History Chess composer Max Bezzel published the eight queens puzzle in 1848. Franz Nauck published the first solutions in 1850. Nauck also extended the puzzle to the n queens problem, with n queens on a chessboard of n×n squares. Since then, many mathematicians, including Carl Friedrich Gauss, have worked on both the eight queens puzzle and its generalized n-queens version. In 1874, S. Günther proposed a method using determinants to find solutions. J.W.L. Glaisher refined Günther's approach. In 1972, Edsger Dijkstra used this problem to illustrate the power of what he called structured programming. He published a highly detailed description of a depth-first backtracking algorithm. Constructing and counting solutions when n = 8 The problem of finding all solutions to the 8-queens problem can be quite computationally expensive, as there are 4,426,165,368 possible arrangements of eight queens on an 8×8 board, but only 92 solutions. It is possible to use shortcuts that reduce computational requirements or rules of thumb that avoid brute-force computational techniques. For example, by applying a simple rule that chooses one queen from each column, it is possible to reduce the number of possibilities to 16,777,216 (that is, 8^8) possible combinations. Generating permutations further reduces the possibilities to just 40,320 (that is, 8!), which can then be checked for diagonal attacks. The eight queens puzzle has 92 distinct solutions. If solutions that differ only by the symmetry operations of rotation and reflection of the board are counted as one, the puzzle has 12 solutions. These are called fundamental solutions; representatives of each are shown below. A fundamental solution usually has eight variants (including its original form) obtained by rotating 90, 180, or 270° and then reflecting each of the four rotational variants in a mirror in a fixed position. However, one of the 12 fundamental solutions (solution 12 below) is identical to its own 180° rotation, so has only four variants (itself and its reflection, its 90° rotation and the reflection of that). Thus, the total number of distinct solutions is 11×8 + 1×4 = 92. All fundamental solutions are presented below: Solution 10 has the additional property that no three queens are in a straight line. Existence of solutions Brute-force algorithms to count the number of solutions are computationally manageable for n = 8, but would be intractable for problems of n ≥ 20, as 20! = 2.433 × 10^18. If the goal is to find a single solution, one can show solutions exist for all n ≥ 4 with no search whatsoever. These solutions exhibit stair-stepped patterns, as in the following examples for n = 8, 9 and 10: The examples above can be obtained with the following formulas. 
Let (i, j) be the square in column i and row j on the n × n chessboard. One approach is the following: If the remainder from dividing n by 6 is not 2 or 3 then the list is simply all even numbers followed by all odd numbers not greater than n. Otherwise, write separate lists of even and odd numbers (2, 4, 6, 8 – 1, 3, 5, 7). If the remainder is 2, swap 1 and 3 in odd list and move 5 to the end (3, 1, 7, 5). If the remainder is 3, move 2 to the end of even list and 1,3 to the end of odd list (4, 6, 8, 2 – 5, 7, 9, 1, 3). Append odd list to the even list and place queens in the rows given by these numbers, from left to right (a2, b4, c6, d8, e3, f1, g7, h5); a short code sketch of this construction appears further below. For n = 8 this results in fundamental solution 1 above. A few more examples follow. 14 queens (remainder 2): 2, 4, 6, 8, 10, 12, 14, 3, 1, 7, 9, 11, 13, 5. 15 queens (remainder 3): 4, 6, 8, 10, 12, 14, 2, 5, 7, 9, 11, 13, 15, 1, 3. 20 queens (remainder 2): 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 3, 1, 7, 9, 11, 13, 15, 17, 19, 5. Counting solutions for other sizes n Exact enumeration There is no known formula for the exact number of solutions for placing n queens on an n×n board, i.e. the number of independent sets of size n in an n×n queen's graph. The 27×27 board is the highest-order board that has been completely enumerated. The following tables give the number of solutions to the n queens problem, both fundamental and all, for all known cases. The number of placements in which furthermore no three queens lie on any straight line is known for small values of n. Asymptotic enumeration In 2021, Michael Simkin proved that for large numbers n, the number of solutions of the n queens problem is approximately (0.143n)^n. More precisely, the number of solutions has asymptotic growth ((1 ± o(1)) n e^(-α))^n, where α is a constant that lies between 1.939 and 1.945. (Here o(1) represents little o notation.) If one instead considers a toroidal chessboard (where diagonals "wrap around" from the top edge to the bottom and from the left edge to the right), it is only possible to place n queens on an n×n board if n is not divisible by 2 or 3. In this case, the asymptotic number of solutions is also known. Related problems Higher dimensions Find the number of non-attacking queens that can be placed in a d-dimensional chess space of size n. More than n queens can be placed in some higher dimensions (the smallest example is four non-attacking queens in a 3×3×3 chess space), and it is in fact known that for any k, there are higher dimensions where n^k queens do not suffice to attack all spaces. Using pieces other than queens On an 8×8 board one can place 32 knights, or 14 bishops, 16 kings or 8 rooks, so that no two pieces attack each other. In the case of knights, an easy solution is to place one on each square of a given color, since they move only to the opposite color. The solution is also easy for rooks and kings. Sixteen kings can be placed on the board by dividing it into 2-by-2 squares and placing the kings at equivalent points on each square. Placements of n rooks on an n×n board are in direct correspondence with order-n permutation matrices. Chess variations Related problems can be asked for chess variations such as shogi. For instance, the n+k dragon kings problem asks to place k shogi pawns and n+k mutually nonattacking dragon kings on an n×n shogi board. Nonstandard boards Pólya studied the n queens problem on a toroidal ("donut-shaped") board and showed that there is a solution on an n×n board if and only if n is not divisible by 2 or 3. 
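The even-and-odd list construction described under "Existence of solutions" above translates directly into code. The sketch below is illustrative only (the function names are mine, not the article's); it reproduces the n = 8, 14, 15 and 20 examples listed earlier and can be checked against the non-attacking condition for any n ≥ 4.

def stair_step_queens(n: int) -> list:
    # Rows (1-based) of the queens in columns 1..n, built from the list of
    # even numbers followed by the (possibly adjusted) list of odd numbers.
    evens = list(range(2, n + 1, 2))
    odds = list(range(1, n + 1, 2))
    r = n % 6
    if r == 2:
        # swap 1 and 3, then move 5 to the end of the odd list
        odds[0], odds[1] = odds[1], odds[0]
        odds.remove(5)
        odds.append(5)
    elif r == 3:
        # move 2 to the end of the even list, and 1, 3 to the end of the odd list
        evens = evens[1:] + [2]
        odds = odds[2:] + [1, 3]
    return evens + odds

def is_solution(rows: list) -> bool:
    # Columns are distinct by construction; check rows and both diagonal directions.
    n = len(rows)
    return (len(set(rows)) == n
            and len({rows[i] + i for i in range(n)}) == n
            and len({rows[i] - i for i in range(n)}) == n)

print(stair_step_queens(8))     # [2, 4, 6, 8, 3, 1, 7, 5]
print(stair_step_queens(15))    # [4, 6, 8, 10, 12, 14, 2, 5, 7, 9, 11, 13, 15, 1, 3]
print(all(is_solution(stair_step_queens(n)) for n in range(4, 200)))   # True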
Domination Given an n×n board, the domination number is the minimum number of queens (or other pieces) needed to attack or occupy every square. For n = 8 the queen's domination number is 5. Queens and other pieces Variants include mixing queens with other pieces; for example, placing m queens and m knights on an n×n board so that no piece attacks another or placing queens and pawns so that no two queens attack each other. Magic squares In 1992, Demirörs, Rafraf, and Tanik published a method for converting some magic squares into n-queens solutions, and vice versa. Latin squares In an n×n matrix, place each digit 1 through n in n locations in the matrix so that no two instances of the same digit are in the same row or column. Exact cover Consider a matrix with one primary column for each of the n ranks of the board, one primary column for each of the n files, and one secondary column for each of the 4n − 6 nontrivial diagonals of the board. The matrix has n^2 rows: one for each possible queen placement, and each row has a 1 in the columns corresponding to that square's rank, file, and diagonals and a 0 in all the other columns. Then the n queens problem is equivalent to choosing a subset of the rows of this matrix such that every primary column has a 1 in precisely one of the chosen rows and every secondary column has a 1 in at most one of the chosen rows; this is an example of a generalized exact cover problem, of which sudoku is another example. n-queens completion The completion problem asks whether, given an n×n chessboard on which some queens are already placed, it is possible to place a queen in every remaining row so that no two queens attack each other. This and related questions are NP-complete and #P-complete. Any placement of at most n/60 queens can be completed, while there are partial configurations of roughly n/4 queens that cannot be completed. Exercise in algorithm design Finding all solutions to the eight queens puzzle is a good example of a simple but nontrivial problem. For this reason, it is often used as an example problem for various programming techniques, including nontraditional approaches such as constraint programming, logic programming or genetic algorithms. Most often, it is used as an example of a problem that can be solved with a recursive algorithm, by phrasing the n queens problem inductively in terms of adding a single queen to any solution to the problem of placing n−1 queens on an n×n chessboard. The induction bottoms out with the solution to the 'problem' of placing 0 queens on the chessboard, which is the empty chessboard. This technique can be used in a way that is much more efficient than the naïve brute-force search algorithm, which considers all 64^8 = 2^48 = 281,474,976,710,656 possible blind placements of eight queens, and then filters these to remove all placements that place two queens either on the same square (leaving only 64!/56! = 178,462,987,637,760 possible placements) or in mutually attacking positions. This very poor algorithm will, among other things, produce the same results over and over again in all the different permutations of the assignments of the eight queens, as well as repeating the same computations over and over again for the different sub-sets of each solution. A better brute-force algorithm places a single queen on each row, leading to only 8^8 = 2^24 = 16,777,216 blind placements. It is possible to do much better than this. 
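As a concrete illustration of the permutation-based improvement described next, the sketch below (illustrative code of mine, not taken from the article) enumerates only the 8! = 40,320 placements with one queen per row and per column, rejects those with diagonal attacks, and finds the 92 solutions.

from itertools import permutations

def count_solutions(n: int) -> int:
    count = 0
    # Each permutation puts one queen per column in a distinct row,
    # so only diagonal attacks remain to be checked.
    for rows in permutations(range(n)):
        if (len({rows[i] + i for i in range(n)}) == n
                and len({rows[i] - i for i in range(n)}) == n):
            count += 1
    return count

print(count_solutions(8))   # 92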
One algorithm solves the eight rooks puzzle by generating the permutations of the numbers 1 through 8 (of which there are 8! = 40,320), and uses the elements of each permutation as indices to place a queen on each row. Then it rejects those boards with diagonal attacking positions. The backtracking depth-first search program, a slight improvement on the permutation method, constructs the search tree by considering one row of the board at a time, eliminating most nonsolution board positions at a very early stage in their construction. Because it rejects rook and diagonal attacks even on incomplete boards, it examines only 15,720 possible queen placements. A further improvement, which examines only 5,508 possible queen placements, is to combine the permutation based method with the early pruning method: the permutations are generated depth-first, and the search space is pruned if the partial permutation produces a diagonal attack. Constraint programming can also be very effective on this problem. An alternative to exhaustive search is an 'iterative repair' algorithm, which typically starts with all queens on the board, for example with one queen per column. It then counts the number of conflicts (attacks), and uses a heuristic to determine how to improve the placement of the queens. The 'minimum-conflicts' heuristic – moving the piece with the largest number of conflicts to the square in the same column where the number of conflicts is smallest – is particularly effective: it easily finds a solution to even the 1,000,000 queens problem. Unlike the backtracking search outlined above, iterative repair does not guarantee a solution: like all greedy procedures, it may get stuck on a local optimum. (In such a case, the algorithm may be restarted with a different initial configuration.) On the other hand, it can solve problem sizes that are several orders of magnitude beyond the scope of a depth-first search. As an alternative to backtracking, solutions can be counted by recursively enumerating valid partial solutions, one row at a time. Rather than constructing entire board positions, blocked diagonals and columns are tracked with bitwise operations. This does not allow the recovery of individual solutions. Sample program The following program is a translation of Niklaus Wirth's solution into the Python programming language, but does without the index arithmetic found in the original and instead uses lists to keep the program code as simple as possible. By using a coroutine in the form of a generator function, both versions of the original can be unified to compute either one or all of the solutions. Only 15,720 possible queen placements are examined.

def queens(n: int, i: int, a: list, b: list, c: list):
    if i < n:
        for j in range(n):
            if j not in a and i + j not in b and i - j not in c:
                yield from queens(n, i + 1, a + [j], b + [i + j], c + [i - j])
    else:
        yield a

for solution in queens(8, 0, [], [], []):
    print(solution)

The following program is an implementation of Donald Knuth's informal description of the solution on Page 31, Section 7.2.2 Backtrack Programming from The Art of Computer Programming, Volume 4B into the Python programming language. 
def property(perm: list):
    for k in range(0, len(perm)):
        for j in range(0, len(perm)):
            if j < k:
                if perm[k] == perm[j]:
                    return False
                elif abs(perm[k] - perm[j]) == k - j:
                    return False
    return True

def extend(perm: list, n: int):
    new_perm = []
    for p in perm:
        for i in range(0, n):
            new_perm.append(p + [i])
    return new_perm

def n_queens(n: int):
    domain = list(range(0, n))
    perm = [[]]
    for i in range(n):
        new_perm = list(filter(property, extend(perm, n)))
        perm = new_perm
    return len(perm)

In popular culture In the game The 7th Guest, the 8th Puzzle: "The Queen's Dilemma" in the game room of the Stauf mansion is the de facto eight queens puzzle. In the game Professor Layton and the Curious Village, the 130th puzzle: "Too Many Queens 5" is an eight queens puzzle. See also Mathematical game Mathematical puzzle No-three-in-line problem Rook polynomial Costas array Notes References Further reading On The Modular N-Queen Problem in Higher Dimensions, Ricardo Gomez, Juan Jose Montellano and Ricardo Strausz (2004), Instituto de Matematicas, Area de la Investigacion Cientifica, Circuito Exterior, Ciudad Universitaria, Mexico. External links Eight Queens Puzzle in Turbo Pascal for CP/M Eight Queens Puzzle one line solution in Python Solutions in more than 100 different programming languages (on Rosetta Code) Mathematical chess problems Enumerative combinatorics 1848 in chess Mathematical problems
Eight queens puzzle
[ "Mathematics" ]
3,522
[ "Mathematical chess problems", "Recreational mathematics", "Enumerative combinatorics", "Combinatorics", "Mathematical problems" ]
10,475
https://en.wikipedia.org/wiki/Enrico%20Bombieri
Enrico Bombieri (born 26 November 1940) is an Italian mathematician, known for his work in analytic number theory, Diophantine geometry, complex analysis, and group theory. Bombieri is currently professor emeritus in the School of Mathematics at the Institute for Advanced Study in Princeton, New Jersey. Bombieri won the Fields Medal in 1974 for his work on the large sieve and its application to the distribution of prime numbers. Career Bombieri published his first mathematical paper in 1957, when he was 16 years old. In 1963, at age 22, he earned his first degree (Laurea) in mathematics from the Università degli Studi di Milano under the supervision of Giovanni Ricci and then studied at Trinity College, Cambridge, with Harold Davenport. Bombieri was an assistant professor (1963–1965) and then a full professor (1965–1966) at the Università di Cagliari, at the Università di Pisa in 1966–1974, and then at the Scuola Normale Superiore di Pisa in 1974–1977. From Pisa, he emigrated in 1977 to the United States, where he became a professor at the School of Mathematics at the Institute for Advanced Study in Princeton, New Jersey. In 2011, he became professor emeritus. Bombieri is also known for his pro bono service on behalf of the mathematics profession, e.g. for serving on external review boards and for peer-reviewing extraordinarily complicated manuscripts (like the paper of Per Enflo on the invariant subspace problem). Research The Bombieri–Vinogradov theorem is one of the major applications of the large sieve method. It improves Dirichlet's theorem on prime numbers in arithmetic progressions, by showing that by averaging over the modulus over a range, the mean error is much less than can be proved in a given case. This result can sometimes substitute for the still-unproved generalized Riemann hypothesis. In 1969, Bombieri, De Giorgi, and Giusti solved Bernstein's problem on minimal surfaces in dimensions above eight. In 1976, Bombieri developed the technique known as the "asymptotic sieve". In 1980, he supplied the completion of the proof of the uniqueness of finite groups of Ree type in characteristic 3; at the time of its publication, it was one of the missing steps in the classification of finite simple groups. Awards Bombieri's research in number theory, algebraic geometry, and mathematical analysis has earned him many international prizes — a Fields Medal in 1974 and the Balzan Prize in 1980. He was a plenary speaker at the International Congress of Mathematicians, which took place in 1974 in Vancouver. He is a member, or foreign member, of several learned academies, including the Accademia Nazionale dei Lincei (elected 1976), the French Academy of Sciences (elected 1984), the Academia Europaea (elected 1995), and the United States National Academy of Sciences (elected 1996). In 2002 he was made Cavaliere di Gran Croce al Merito della Repubblica Italiana. In 2010, he received the King Faisal International Prize (jointly with Terence Tao). and in 2020 he was awarded the Crafoord Prize in Mathematics. Other interests Bombieri, accomplished also in the arts, explored for wild orchids and other plants as a hobby in the Alps when a young man. Selected publications Sole E. Bombieri, Le Grand Crible dans la Théorie Analytique des Nombres (Seconde Édition). Astérisque 18, Paris 1987. Joint B. Beauzamy, E. Bombieri, P. Enflo and H. L. Montgomery. "Product of polynomials in many variables", Journal of Number Theory, pages 219–245, 1990. 
See also Bombieri norm Bombieri–Vinogradov theorem Glossary of arithmetic and Diophantine geometry – Bombieri–Lang conjecture References Sources E. Bombieri, Le Grand Crible dans la Théorie Analytique des Nombres (Seconde Édition). Astérisque 18, Paris 1987. B. Beauzamy, E. Bombieri, P. Enflo and H. L. Montgomery. "Product of polynomials in many variables", Journal of Number Theory, pages 219–245, 1990. External links Enrico Bombieri, Institute for Advanced Study Lista delle pubblicazioni di Enrico Bombieri, University of Pisa 20th-century Italian mathematicians 21st-century Italian mathematicians 1940 births Fields Medalists Institute for Advanced Study faculty Living people Members of the French Academy of Sciences Members of the United States National Academy of Sciences Number theorists PDE theorists Scientists from Milan University of Milan alumni Alumni of Trinity College, Cambridge Members of the Royal Swedish Academy of Sciences Members of Academia Europaea
Enrico Bombieri
[ "Mathematics" ]
986
[ "Number theorists", "Number theory" ]
10,500
https://en.wikipedia.org/wiki/Earless%20seal
The earless seals, phocids, or true seals are one of the three main groups of mammals within the seal lineage, Pinnipedia. All true seals are members of the family Phocidae (). They are sometimes called crawling seals to distinguish them from the fur seals and sea lions of the family Otariidae. Seals live in the oceans of both hemispheres and, with the exception of the more tropical monk seals, are mostly confined to polar, subpolar, and temperate climates. The Baikal seal is the only species of exclusively freshwater seal. Taxonomy and evolution Evolution The earliest known fossil earless seal is Noriphoca gaudini from the late Oligocene or earliest Miocene (Aquitanian) of Italy. Other early fossil phocids date from the mid-Miocene, 15 million years ago in the north Atlantic. Until recently, many researchers believed that phocids evolved separately from otariids and odobenids; and that they evolved from otter-like animals, such as Potamotherium, which inhabited European freshwater lakes. Recent evidence strongly suggests a monophyletic origin for all pinnipeds from a single ancestor, possibly Enaliarctos, most closely related to the mustelids and bears. Monk seals and elephant seals were previously believed to have first entered the Pacific through the open straits between North and South America, with the Antarctic true seals either using the same route or travelled down the west coast of Africa. It is now thought that the monk seals, elephant seals, and Antarctic seals all evolved in the southern hemisphere, and likely dispersed to their current distributions from more southern latitudes. Taxonomy In the 1980s and 1990s, morphological phylogenetic analysis of the phocids led to new conclusions about the interrelatedness of the various genera. More recent molecular phylogenetic analyses have confirmed the monophyly of the two phocid subfamilies (Phocinae and Monachinae). The Monachinae (known as the "southern" seals), is composed of three tribes; the Lobodontini, Miroungini, and Monachini. The four Antarctic genera Hydrurga, Leptonychotes, Lobodon, and Ommatophoca are part of the tribe Lobodontini. Tribe Miroungini is composed of the elephant seals. The Monk seals (Monachus and Neomonachus) are all part of the tribe Monachini. Likewise, subfamily Phocinae (the "northern" seals) also includes three tribes; Erignathini (Erignathus), Cystophorini (Cystophora), and Phocini (all other phocines). More recently, five species have been split off from Phoca, forming three additional genera. Alternatively the three monachine tribes have been evaluated to familiar status, which elephant seals and the Antarctic seals are more closely related to the phocines. Extant genera Biology External anatomy Adult phocids vary from in length and in weight in the ringed seal to and in the southern elephant seal, which is the largest member of the order Carnivora. Phocids have fewer teeth than land-based members of the Carnivora, although they retain powerful canines. Some species lack molars altogether. The dental formula is: While otariids are known for speed and maneuverability, phocids are known for efficient, economical movement. This allows most phocids to forage far from land to exploit prey resources, while otariids are tied to rich upwelling zones close to breeding sites. Phocids swim by sideways movements of their bodies, using their hind flippers to fullest effect. 
Their fore flippers are used primarily for steering, while their hind flippers are bound to the pelvis in such a way that they cannot bring them under their bodies to walk on them. They are more streamlined than fur seals and sea lions, so they can swim more effectively over long distances. However, because they cannot turn their hind flippers downward, they are very clumsy on land, having to wriggle with their front flippers and abdominal muscles. Phocid respiratory and circulatory systems are adapted to allow diving to considerable depths, and they can spend a long time underwater between breaths. Air is forced from the lungs during a dive and into the upper respiratory passages, where gases cannot easily be absorbed into the bloodstream. This helps protect the seal from the bends. The middle ear is also lined with blood sinuses that inflate during diving, helping to maintain a constant pressure. Phocids are more specialized for aquatic life than otariids. They lack external ears and have sleek, streamlined bodies. Retractable nipples, internal testicles, and an internal penile sheath provide further streamlining. A smooth layer of blubber lies underneath the skin. Phocids are able to divert blood flow to this layer to help control their temperatures. Communication Unlike otariids, true seals do not communicate by 'barking'. Instead, they communicate by slapping the water and grunting. Reproduction Phocids spend most of their time at sea, although they return to land or pack ice to breed and give birth. Pregnant females spend long periods foraging at sea, building up fat reserves, and then return to the breeding site to use their stored energy to nurse pups. However, the common seal displays a reproductive strategy similar to that used by otariids, in which the mother makes short foraging trips between nursing bouts. Because a phocid mother's feeding grounds are often hundreds of kilometers from the breeding site, she must fast while lactating. This combination of fasting with lactation requires the mother to provide large amounts of energy to her pup at a time when she is not eating (and often, not drinking). Mothers must supply their own metabolic needs while nursing. This is a miniature version of the humpback whales' strategy, which involves fasting during their months-long migration from arctic feeding areas to tropical breeding/nursing areas and back. Phocids produce thick, fat-rich milk that allows them to provide their pups with large amounts of energy in a short period. This allows the mother to return to the sea in time to replenish her reserves. Lactation ranges from five to seven weeks in the monk seal to just three to five days in the hooded seal. The mother ends nursing by leaving her pup at the breeding site to search for food (pups continue to nurse if given the opportunity). "Milk stealers" that suckle from unrelated, sleeping females are not uncommon; this often results in the death of the mother's pup, since a female can only feed one pup. Growth and maturation The pup's diet is so high in calories that it builds up a fat store. Before the pup is ready to forage, the mother abandons it, and the pup consumes its own fat for weeks or even months while it matures. Seals, like all marine mammals, need time to develop the oxygen stores, swimming muscles, and neural pathways necessary for effective diving and foraging. Seal pups typically eat no food and drink no water during the period, although some polar species eat snow. 
The postweaning fast ranges from two weeks in the hooded seal to 9–12 weeks in the northern elephant seal. The physiological and behavioral adaptations that allow phocid pups to endure these remarkable fasts, which are among the longest for any mammal, remain an area of active study and research. Feeding strategy Phocids make use of at least four different feeding strategies: suction feeding, grip and tear feeding, filter feeding, and pierce feeding. Each of these feeding strategies is aided by a specialized skull, mandible, and tooth morphology. However, despite morphological specialization, most phocids are opportunistic and employ multiple strategies to capture and eat prey. For example, the leopard seal, Hydrurga leptonyx, uses grip and tear feeding to prey on penguins, suction feeding to consume small fish, and filter feeding to catch krill. See also Marine mammals as food References Extant Miocene first appearances Pinnipeds Taxa named by John Edward Gray Non-human celestial navigation
Earless seal
[ "Astronomy" ]
1,693
[ "Celestial navigation", "Non-human celestial navigation" ]
10,585
https://en.wikipedia.org/wiki/Flagellate
A flagellate is a cell or organism with one or more whip-like appendages called flagella. The word flagellate also describes a particular construction (or level of organization) characteristic of many prokaryotes and eukaryotes and their means of motion. The term presently does not imply any specific relationship or classification of the organisms that possess flagella. However, several derivations of the term "flagellate" (such as "dinoflagellate" and "choanoflagellata") are more formally characterized. Form and behavior Flagella in eukaryotes are supported by microtubules in a characteristic arrangement, with nine fused pairs surrounding two central singlets. These arise from a basal body. In some flagellates, flagella direct food into a cytostome or mouth, where food is ingested. Flagella role in classifying eukaryotes. Among protoctists and microscopic animals, a flagellate is an organism with one or more flagella. Some cells in other animals may be flagellate, for instance the spermatozoa of most animal phyla. Flowering plants do not produce flagellate cells, but ferns, mosses, green algae, and some gymnosperms and closely related plants do so. Likewise, most fungi do not produce cells with flagellae, but the primitive fungal chytrids do. Many protists take the form of single-celled flagellates. Flagella are generally used for propulsion. They may also be used to create a current that brings in food. In most such organisms, one or more flagella are located at or near the anterior of the cell (e.g., Euglena). Often there is one directed forwards and one trailing behind. Many parasites that affect human health or economy are flagellates in at least one stage of life cycle, such as Naegleria, Trichomonas and Plasmodium. Flagellates are the major consumers of primary and secondary production in aquatic ecosystems - consuming bacteria and other protists. Flagellates as specialized cells or life cycle stages An overview of the occurrence of flagellated cells in eukaryote groups, as specialized cells of multicellular organisms or as life cycle stages, is given below (see also the article flagellum): Archaeplastida: most green algae (zoospores and male gametes, except in Zygnematophyceae), bryophytes (male gametes), pteridophytes (male gametes), some gymnosperms (cycads and Ginkgo, as male gametes) Stramenopiles: centric diatoms (male gametes), brown algae (zoospores and gametes), oomycetes (assexual zoospores and gametes), hyphochytrids (zoospores), labyrinthulomycetes (zoospores), some chrysophytes, some xanthophytes, eustigmatophytes Alveolata: some apicomplexans (gametes) Rhizaria: some radiolarians (probably gametes), foraminiferans (as gametes) Cercozoa: plasmodiophoromycetes (zoospores and gametes), chlorarachniophytes (zoospores) Amoebozoa: myxogastrids Opisthokonta: most metazoans (male gametes, epithelia and choanocytes), chytrid fungi (zoospores and gametes) Excavata: some acrasids (Pocheina, as zoospores) Flagellates as organisms: the Flagellata In older classifications, flagellated protozoa were grouped in Flagellata (= Mastigophora), sometimes divided into Phytoflagellata (= Phytomastigina, mostly autotrophic) and Zooflagellata (= Zoomastigina, heterotrophic). They were sometimes grouped with Sarcodina (ameboids) in the group Sarcomastigophora. The autotrophic flagellates were grouped similarly to the botanical schemes used for the corresponding algae groups. 
The colourless flagellates were customarily grouped in three groups, highly artificial: Protomastigineae, in which absorption of food-particles in holozoic nutrition occurs at a localised point of the cell surface, often at a cytostome, although many groups were merely saprophytes; it included the majority of colourless flagellates, and even many "apochlorotic" algae; Pantostomatineae (or Rhizomastigineae), in which the absorption takes place at any point on the cell surface; roughly corresponds to "amoeboflagellates"; Distomatineae, a group of binucleate "double individuals" with symmetrically distributed flagella and, in many species, two symmetrical mouths; roughly corresponds to current Diplomonadida. Presently, these groups are known to be highly polyphyletic. In modern classifications of the protists, the principal flagellated taxa are placed in the following eukaryote groups, which include also non-flagellated forms (where "A", "F", "P" and "S" stands for autotrophic, free-living heterotrophic, parasitic and symbiotic, respectively): Archaeplastida: volvocids (A/F), prasinophytes (A), glaucophytes (A) Stramenopiles: bicosoecids (F), proteromonads (F), opalines (F), most chrysophytes (A/F), part of xanthophytes (A), raphidophytes/chloromonads (A), silicoflagellates (A), ciliophryids (F), pedinellids (A/F) Alveolata: dinoflagellates (A/F), Colpodella (F) Rhizaria Cercozoa: cercomonads (F), spongomonads (F), thaumatomonads (F), glissomonads (F), cryomonads (F), heliomonads/dimorphids (F), ebriids (F) Amoebozoa: Multicilia (F), phalansteriids (F), some archamoebae (F/S) Opisthokonta: choanoflagellates (F) Excavata Discoba: jakobids (F), kinetoplastids (bodonids, F/P, trypanosomatids, P), euglenids (F/A), some heteroloboseans (P/F/S) Metamonada: diplomonads (P/F), retortamonads (S), Preaxostyla/anaeromonads (oxymonads, S, Trimastix, F, Paratrimastix, F), parabasalids (trichomonads, P/S, hypermastigids, S) Eukaryota incertae sedis : haptophytes (F/A), cryptophytes (F/A), kathablepharids (F), Apusozoa (apusomondas, F, ancyromonads, F, spironemids/hemimastigids, F), collodictyonids/diphylleids (F), Phyllomonas (F), and about a hundred genera Although the taxonomic group Flagellata was abandoned, the term "flagellate" is still used as the description of a level of organization and also as an ecological functional group. Another term used is "monadoid", from monad. as in Monas, and Cryptomonas and in the groups as listed above. The amoeboflagellates (e.g., the rhizarian genus Cercomonas, some amoebozoan Archamoebae, some excavate Heterolobosea) have a peculiar type of flagellate/amoeboid organization, in which cells may present flagella and pseudopods, simultaneously or sequentially, while the helioflagellates (e.g., the cercozoan heliomonads/dimorphids, the stramenopile pedinellids and ciliophryids) have a flagellate/heliozoan organization. References External links Leadbeater, B.S.C. & Green, J.C., eds. (2000). The Flagellates. Unity, diversity and evolution. Taylor and Francis, London. Cell biology Microbiology
Flagellate
[ "Chemistry", "Biology" ]
1,927
[ "Cell biology", "Microbiology", "Microscopy" ]
10,603
https://en.wikipedia.org/wiki/Field%20%28mathematics%29
In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers. A field is thus a fundamental algebraic structure which is widely used in algebra, number theory, and many other areas of mathematics. The best known fields are the field of rational numbers, the field of real numbers and the field of complex numbers. Many other fields, such as fields of rational functions, algebraic function fields, algebraic number fields, and p-adic fields are commonly used and studied in mathematics, particularly in number theory and algebraic geometry. Most cryptographic protocols rely on finite fields, i.e., fields with finitely many elements. The theory of fields proves that angle trisection and squaring the circle cannot be done with a compass and straightedge. Galois theory, devoted to understanding the symmetries of field extensions, provides an elegant proof of the Abel–Ruffini theorem that general quintic equations cannot be solved in radicals. Fields serve as foundational notions in several mathematical domains. This includes different branches of mathematical analysis, which are based on fields with additional structure. Basic theorems in analysis hinge on the structural properties of the field of real numbers. Most importantly for algebraic purposes, any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Number fields, the siblings of the field of rational numbers, are studied in depth in number theory. Function fields can help describe properties of geometric objects. Definition Informally, a field is a set, along with two operations defined on that set: an addition operation written as a + b, and a multiplication operation written as a ⋅ b, both of which behave as the corresponding operations behave for rational numbers and real numbers, including the existence of an additive inverse -a for all elements a, and of a multiplicative inverse a^(-1) for every nonzero element a. This allows one to also consider the so-called inverse operations of subtraction, a - b, and division, a / b, by defining: a - b = a + (-b), a / b = a ⋅ b^(-1). Classic definition Formally, a field is a set F together with two binary operations on F called addition and multiplication. A binary operation on F is a mapping F × F → F, that is, a correspondence that associates with each ordered pair of elements of F a uniquely determined element of F. The result of the addition of a and b is called the sum of a and b, and is denoted a + b. Similarly, the result of the multiplication of a and b is called the product of a and b, and is denoted ab or a ⋅ b. These operations are required to satisfy the following properties, referred to as field axioms. These axioms are required to hold for all elements a, b, c of the field F: Associativity of addition and multiplication: a + (b + c) = (a + b) + c, and a ⋅ (b ⋅ c) = (a ⋅ b) ⋅ c. Commutativity of addition and multiplication: a + b = b + a, and a ⋅ b = b ⋅ a. Additive and multiplicative identity: there exist two distinct elements 0 and 1 in F such that a + 0 = a and a ⋅ 1 = a. Additive inverses: for every a in F, there exists an element in F, denoted -a, called the additive inverse of a, such that a + (-a) = 0. Multiplicative inverses: for every a ≠ 0 in F, there exists an element in F, denoted by a^(-1) or 1/a, called the multiplicative inverse of a, such that a ⋅ a^(-1) = 1. Distributivity of multiplication over addition: a ⋅ (b + c) = (a ⋅ b) + (a ⋅ c). 
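Anticipating the finite fields discussed later in the article, the integers modulo a prime p satisfy all of these axioms; the only axiom that is not immediate is the existence of multiplicative inverses, and it can be checked mechanically. The sketch below is illustrative only (the helper name and the choice p = 7 are mine), and it finds inverses by brute force rather than by the extended Euclidean algorithm.

p = 7  # any prime works; for a composite modulus the inverse search below fails

def inverse_mod(a: int, p: int) -> int:
    # Brute-force search for the multiplicative inverse of a modulo p.
    for b in range(1, p):
        if (a * b) % p == 1:
            return b
    raise ValueError(f"{a} has no inverse modulo {p}")

# Every nonzero element has a multiplicative inverse, so Z/7Z is a field.
print({a: inverse_mod(a, p) for a in range(1, p)})
# {1: 1, 2: 4, 3: 5, 4: 2, 5: 3, 6: 6}

# Distributivity, checked exhaustively over all triples:
print(all((a * (b + c)) % p == (a * b + a * c) % p
          for a in range(p) for b in range(p) for c in range(p)))   # True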
An equivalent, and more succinct, definition is: a field has two commutative operations, called addition and multiplication; it is a group under addition with as the additive identity; the nonzero elements form a group under multiplication with as the multiplicative identity; and multiplication distributes over addition. Even more succinctly: a field is a commutative ring where and all nonzero elements are invertible under multiplication. Alternative definition Fields can also be defined in different, but equivalent ways. One can alternatively define a field by four binary operations (addition, subtraction, multiplication, and division) and their required properties. Division by zero is, by definition, excluded. In order to avoid existential quantifiers, fields can be defined by two binary operations (addition and multiplication), two unary operations (yielding the additive and multiplicative inverses respectively), and two nullary operations (the constants and ). These operations are then subject to the conditions above. Avoiding existential quantifiers is important in constructive mathematics and computing. One may equivalently define a field by the same two binary operations, one unary operation (the multiplicative inverse), and two (not necessarily distinct) constants and , since and . Examples Rational numbers Rational numbers have been widely used a long time before the elaboration of the concept of field. They are numbers that can be written as fractions , where and are integers, and . The additive inverse of such a fraction is , and the multiplicative inverse (provided that ) is , which can be seen as follows: The abstractly required field axioms reduce to standard properties of rational numbers. For example, the law of distributivity can be proven as follows: Real and complex numbers The real numbers , with the usual operations of addition and multiplication, also form a field. The complex numbers consist of expressions with real, where is the imaginary unit, i.e., a (non-real) number satisfying . Addition and multiplication of real numbers are defined in such a way that expressions of this type satisfy all field axioms and thus hold for . For example, the distributive law enforces It is immediate that this is again an expression of the above type, and so the complex numbers form a field. Complex numbers can be geometrically represented as points in the plane, with Cartesian coordinates given by the real numbers of their describing expression, or as the arrows from the origin to these points, specified by their length and an angle enclosed with some distinct direction. Addition then corresponds to combining the arrows to the intuitive parallelogram (adding the Cartesian coordinates), and the multiplication is – less intuitively – combining rotating and scaling of the arrows (adding the angles and multiplying the lengths). The fields of real and complex numbers are used throughout mathematics, physics, engineering, statistics, and many other scientific disciplines. Constructible numbers In antiquity, several geometric problems concerned the (in)feasibility of constructing certain numbers with compass and straightedge. For example, it was unknown to the Greeks that it is, in general, impossible to trisect a given angle in this way. These problems can be settled using the field of constructible numbers. Real constructible numbers are, by definition, lengths of line segments that can be constructed from the points 0 and 1 in finitely many steps using only compass and straightedge. 
These numbers, endowed with the field operations of real numbers, restricted to the constructible numbers, form a field, which properly includes the field of rational numbers. The illustration shows the construction of square roots of constructible numbers, not necessarily contained within . Using the labeling in the illustration, construct the segments , , and a semicircle over (center at the midpoint ), which intersects the perpendicular line through in a point , at a distance of exactly from when has length one. Not all real numbers are constructible. It can be shown that is not a constructible number, which implies that it is impossible to construct with compass and straightedge the length of the side of a cube with volume 2, another problem posed by the ancient Greeks. A field with four elements In addition to familiar number systems such as the rationals, there are other, less immediate examples of fields. The following example is a field consisting of four elements called , , , and . The notation is chosen such that plays the role of the additive identity element (denoted 0 in the axioms above), and is the multiplicative identity (denoted in the axioms above). The field axioms can be verified by using some more field theory, or by direct computation. For example, , which equals , as required by the distributivity. This field is called a finite field or Galois field with four elements, and is denoted or . The subset consisting of and (highlighted in red in the tables at the right) is also a field, known as the binary field or . Elementary notions In this section, denotes an arbitrary field and and are arbitrary elements of . Consequences of the definition One has and . In particular, one may deduce the additive inverse of every element as soon as one knows . If then or must be , since, if , then . This means that every field is an integral domain. In addition, the following properties are true for any elements and : if Additive and multiplicative groups of a field The axioms of a field imply that it is an abelian group under addition. This group is called the additive group of the field, and is sometimes denoted by when denoting it simply as could be confusing. Similarly, the nonzero elements of form an abelian group under multiplication, called the multiplicative group, and denoted by or just , or . A field may thus be defined as set equipped with two operations denoted as an addition and a multiplication such that is an abelian group under addition, is an abelian group under multiplication (where 0 is the identity element of the addition), and multiplication is distributive over addition. Some elementary statements about fields can therefore be obtained by applying general facts of groups. For example, the additive and multiplicative inverses and are uniquely determined by . The requirement is imposed by convention to exclude the trivial ring, which consists of a single element; this guides any choice of the axioms that define fields. Every finite subgroup of the multiplicative group of a field is cyclic (see ). Characteristic In addition to the multiplication of two elements of , it is possible to define the product of an arbitrary element of by a positive integer to be the -fold sum (which is an element of .) If there is no positive integer such that , then is said to have characteristic . For example, the field of rational numbers has characteristic 0 since no positive integer is zero. 
Otherwise, if there is a positive integer satisfying this equation, the smallest such positive integer can be shown to be a prime number. It is usually denoted by and the field is said to have characteristic then. For example, the field has characteristic since (in the notation of the above addition table) . If has characteristic , then for all in . This implies that , since all other binomial coefficients appearing in the binomial formula are divisible by . Here, ( factors) is the th power, i.e., the -fold product of the element . Therefore, the Frobenius map is compatible with the addition in (and also with the multiplication), and is therefore a field homomorphism. The existence of this homomorphism makes fields in characteristic quite different from fields of characteristic . Subfields and prime fields A subfield of a field is a subset of that is a field with respect to the field operations of . Equivalently is a subset of that contains , and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element. This means that , that for all both and are in , and that for all in , both and are in . Field homomorphisms are maps between two fields such that , , and , where and are arbitrary elements of . All field homomorphisms are injective. If is also surjective, it is called an isomorphism (or the fields and are called isomorphic). A field is called a prime field if it has no proper (i.e., strictly smaller) subfields. Any field contains a prime field. If the characteristic of is (a prime number), the prime field is isomorphic to the finite field introduced below. Otherwise the prime field is isomorphic to . Finite fields Finite fields (also called Galois fields) are fields with finitely many elements, whose number is also referred to as the order of the field. The above introductory example is a field with four elements. Its subfield is the smallest field, because by definition a field has at least two distinct elements, and . The simplest finite fields, with prime order, are most directly accessible using modular arithmetic. For a fixed positive integer , arithmetic "modulo " means to work with the numbers The addition and multiplication on this set are done by performing the operation in question in the set of integers, dividing by and taking the remainder as result. This construction yields a field precisely if is a prime number. For example, taking the prime results in the above-mentioned field . For and more generally, for any composite number (i.e., any number which can be expressed as a product of two strictly smaller natural numbers), is not a field: the product of two non-zero elements is zero since in , which, as was explained above, prevents from being a field. The field with elements ( being prime) constructed in this way is usually denoted by . Every finite field has elements, where is prime and . This statement holds since may be viewed as a vector space over its prime field. The dimension of this vector space is necessarily finite, say , which implies the asserted statement. A field with elements can be constructed as the splitting field of the polynomial . Such a splitting field is an extension of in which the polynomial has zeros. This means has as many zeros as possible since the degree of is . For , it can be checked case by case using the above multiplication table that all four elements of satisfy the equation , so they are zeros of . 
By contrast, in , has only two zeros (namely and ), so does not split into linear factors in this smaller field. Elaborating further on basic field-theoretic notions, it can be shown that two finite fields with the same order are isomorphic. It is thus customary to speak of the finite field with elements, denoted by or . History Historically, three algebraic disciplines led to the concept of a field: the question of solving polynomial equations, algebraic number theory, and algebraic geometry. A first step towards the notion of a field was made in 1770 by Joseph-Louis Lagrange, who observed that permuting the zeros of a cubic polynomial in the expression (with being a third root of unity) only yields two values. This way, Lagrange conceptually explained the classical solution method of Scipione del Ferro and François Viète, which proceeds by reducing a cubic equation for an unknown to a quadratic equation for . Together with a similar observation for equations of degree 4, Lagrange thus linked what eventually became the concept of fields and the concept of groups. Vandermonde, also in 1770, and to a fuller extent, Carl Friedrich Gauss, in his Disquisitiones Arithmeticae (1801), studied the equation for a prime and, again using modern language, the resulting cyclic Galois group. Gauss deduced that a regular -gon can be constructed if . Building on Lagrange's work, Paolo Ruffini claimed (1799) that quintic equations (polynomial equations of degree ) cannot be solved algebraically; however, his arguments were flawed. These gaps were filled by Niels Henrik Abel in 1824. Évariste Galois, in 1832, devised necessary and sufficient criteria for a polynomial equation to be algebraically solvable, thus establishing in effect what is known as Galois theory today. Both Abel and Galois worked with what is today called an algebraic number field, but conceived neither an explicit notion of a field, nor of a group. In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word Körper, which means "body" or "corpus" (to suggest an organically closed entity). The English term "field" was introduced by . In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. Kronecker's notion did not cover the field of all algebraic numbers (which is a field in Dedekind's sense), but on the other hand was more abstract than Dedekind's in that it made no specific assumption on the nature of the elements of a field. Kronecker interpreted a field such as abstractly as the rational function field . Prior to this, examples of transcendental numbers were known since Joseph Liouville's work in 1844, until Charles Hermite (1873) and Ferdinand von Lindemann (1882) proved the transcendence of and , respectively. The first clear definition of an abstract field is due to . In particular, Heinrich Martin Weber's notion included the field . Giuseppe Veronese (1891) studied the field of formal power series, which led to introduce the field of -adic numbers. synthesized the knowledge of abstract field theory accumulated so far. He axiomatically studied the properties of fields and defined many important field-theoretic concepts. The majority of the theorems mentioned in the sections Galois theory, Constructing fields and Elementary notions can be found in Steinitz's work. linked the notion of orderings in a field, and thus the area of analysis, to purely algebraic properties. 
Emil Artin redeveloped Galois theory from 1928 through 1942, eliminating the dependency on the primitive element theorem. Constructing fields Constructing fields from rings A commutative ring is a set that is equipped with an addition and multiplication operation and satisfies all the axioms of a field, except for the existence of multiplicative inverses . For example, the integers form a commutative ring, but not a field: the reciprocal of an integer is not itself an integer, unless . In the hierarchy of algebraic structures fields can be characterized as the commutative rings in which every nonzero element is a unit (which means every element is invertible). Similarly, fields are the commutative rings with precisely two distinct ideals, and . Fields are also precisely the commutative rings in which is the only prime ideal. Given a commutative ring , there are two ways to construct a field related to , i.e., two ways of modifying such that all nonzero elements become invertible: forming the field of fractions, and forming residue fields. The field of fractions of is , the rationals, while the residue fields of are the finite fields . Field of fractions Given an integral domain , its field of fractions is built with the fractions of two elements of exactly as Q is constructed from the integers. More precisely, the elements of are the fractions where and are in , and . Two fractions and are equal if and only if . The operation on the fractions work exactly as for rational numbers. For example, It is straightforward to show that, if the ring is an integral domain, the set of the fractions form a field. The field of the rational fractions over a field (or an integral domain) is the field of fractions of the polynomial ring . The field of Laurent series over a field is the field of fractions of the ring of formal power series (in which ). Since any Laurent series is a fraction of a power series divided by a power of (as opposed to an arbitrary power series), the representation of fractions is less important in this situation, though. Residue fields In addition to the field of fractions, which embeds injectively into a field, a field can be obtained from a commutative ring by means of a surjective map onto a field . Any field obtained in this way is a quotient , where is a maximal ideal of . If has only one maximal ideal , this field is called the residue field of . The ideal generated by a single polynomial in the polynomial ring (over a field ) is maximal if and only if is irreducible in , i.e., if cannot be expressed as the product of two polynomials in of smaller degree. This yields a field This field contains an element (namely the residue class of ) which satisfies the equation . For example, is obtained from by adjoining the imaginary unit symbol , which satisfies , where . Moreover, is irreducible over , which implies that the map that sends a polynomial to yields an isomorphism Constructing fields within a bigger field Fields can be constructed inside a given bigger container field. Suppose given a field , and a field containing as a subfield. For any element of , there is a smallest subfield of containing and , called the subfield of F generated by and denoted . The passage from to is referred to by adjoining an element to . More generally, for a subset , there is a minimal subfield of containing and , denoted by . The compositum of two subfields and of some field is the smallest subfield of containing both and . 
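The quotient construction described above can be made concrete for a small example. Assuming the standard fact that X^2 + X + 1 is irreducible over F_2, the residue field F_2[X]/(X^2 + X + 1) has four elements; in the following Python sketch the pair (a, b) is an ad hoc representation of the residue class a + bX:

def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def mul(u, v):
    a, b = u
    c, d = v
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd X^2, and X^2 = X + 1 in the quotient,
    # because X^2 + X + 1 = 0 there (coefficients are taken modulo 2).
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

elements = [(0, 0), (1, 0), (0, 1), (1, 1)]
for u in elements[1:]:
    # every nonzero residue class has a multiplicative inverse
    assert any(mul(u, v) == (1, 0) for v in elements[1:])

Replacing X^2 + X + 1 by X^2 + 1 over the real numbers gives, in the same way, the construction of the complex numbers sketched above.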
The compositum can be used to construct the biggest subfield of satisfying a certain property, for example the biggest subfield of , which is, in the language introduced below, algebraic over . Field extensions The notion of a subfield can also be regarded from the opposite point of view, by referring to being a field extension (or just extension) of , denoted by , and read " over ". A basic datum of a field extension is its degree , i.e., the dimension of as an -vector space. It satisfies the formula . Extensions whose degree is finite are referred to as finite extensions. The extensions and are of degree , whereas is an infinite extension. Algebraic extensions A pivotal notion in the study of field extensions are algebraic elements. An element is algebraic over if it is a root of a polynomial with coefficients in , that is, if it satisfies a polynomial equation , with in , and . For example, the imaginary unit in is algebraic over , and even over , since it satisfies the equation . A field extension in which every element of is algebraic over is called an algebraic extension. Any finite extension is necessarily algebraic, as can be deduced from the above multiplicativity formula. The subfield generated by an element , as above, is an algebraic extension of if and only if is an algebraic element. That is to say, if is algebraic, all other elements of are necessarily algebraic as well. Moreover, the degree of the extension , i.e., the dimension of as an -vector space, equals the minimal degree such that there is a polynomial equation involving , as above. If this degree is , then the elements of have the form For example, the field of Gaussian rationals is the subfield of consisting of all numbers of the form where both and are rational numbers: summands of the form (and similarly for higher exponents) do not have to be considered here, since can be simplified to . Transcendence bases The above-mentioned field of rational fractions , where is an indeterminate, is not an algebraic extension of since there is no polynomial equation with coefficients in whose zero is . Elements, such as , which are not algebraic are called transcendental. Informally speaking, the indeterminate and its powers do not interact with elements of . A similar construction can be carried out with a set of indeterminates, instead of just one. Once again, the field extension discussed above is a key example: if is not algebraic (i.e., is not a root of a polynomial with coefficients in ), then is isomorphic to . This isomorphism is obtained by substituting to in rational fractions. A subset of a field is a transcendence basis if it is algebraically independent (do not satisfy any polynomial relations) over and if is an algebraic extension of . Any field extension has a transcendence basis. Thus, field extensions can be split into ones of the form (purely transcendental extensions) and algebraic extensions. Closure operations A field is algebraically closed if it does not have any strictly bigger algebraic extensions or, equivalently, if any polynomial equation , with coefficients , has a solution . By the fundamental theorem of algebra, is algebraically closed, i.e., any polynomial equation with complex coefficients has a complex solution. The rational and the real numbers are not algebraically closed since the equation does not have any rational or real solution. 
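The description of the Gaussian rationals given above can be mirrored in a few lines of Python. An element is stored as a pair (a, b) of rationals standing for a + bi (an ad hoc representation chosen only for this sketch), and the relation i^2 = −1 keeps every product, and every inverse, inside this two-dimensional space over Q:

from fractions import Fraction

def mul(u, v):
    a, b = u
    c, d = v
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i, using i^2 = -1
    return (a * c - b * d, a * d + b * c)

def inverse(u):
    a, b = u
    n = a * a + b * b          # nonzero whenever (a, b) != (0, 0)
    return (a / n, -b / n)

z = (Fraction(1, 2), Fraction(2, 3))        # the element 1/2 + (2/3)i
assert mul(z, inverse(z)) == (Fraction(1), Fraction(0))

That 1/z again has the form a + bi with rational a and b reflects the statement above that the Gaussian rationals form an algebraic extension of Q of degree 2.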
A field containing is called an algebraic closure of if it is algebraic over (roughly speaking, not too big compared to ) and is algebraically closed (big enough to contain solutions of all polynomial equations). By the above, is an algebraic closure of . The situation that the algebraic closure is a finite extension of the field is quite special: by the Artin–Schreier theorem, the degree of this extension is necessarily , and is elementarily equivalent to . Such fields are also known as real closed fields. Any field has an algebraic closure, which is moreover unique up to (non-unique) isomorphism. It is commonly referred to as the algebraic closure and denoted . For example, the algebraic closure of is called the field of algebraic numbers. The field is usually rather implicit since its construction requires the ultrafilter lemma, a set-theoretic axiom that is weaker than the axiom of choice. In this regard, the algebraic closure of , is exceptionally simple. It is the union of the finite fields containing (the ones of order ). For any algebraically closed field of characteristic , the algebraic closure of the field of Laurent series is the field of Puiseux series, obtained by adjoining roots of . Fields with additional structure Since fields are ubiquitous in mathematics and beyond, several refinements of the concept have been adapted to the needs of particular mathematical areas. Ordered fields A field F is called an ordered field if any two elements can be compared, so that and whenever and . For example, the real numbers form an ordered field, with the usual ordering . The Artin–Schreier theorem states that a field can be ordered if and only if it is a formally real field, which means that any quadratic equation only has the solution . The set of all possible orders on a fixed field is isomorphic to the set of ring homomorphisms from the Witt ring of quadratic forms over , to . An Archimedean field is an ordered field such that for each element there exists a finite expression whose value is greater than that element, that is, there are no infinite elements. Equivalently, the field contains no infinitesimals (elements smaller than all rational numbers); or, yet equivalent, the field is isomorphic to a subfield of . An ordered field is Dedekind-complete if all upper bounds, lower bounds (see Dedekind cut) and limits, which should exist, do exist. More formally, each bounded subset of is required to have a least upper bound. Any complete field is necessarily Archimedean, since in any non-Archimedean field there is neither a greatest infinitesimal nor a least positive rational, whence the sequence , every element of which is greater than every infinitesimal, has no limit. Since every proper subfield of the reals also contains such gaps, is the unique complete ordered field, up to isomorphism. Several foundational results in calculus follow directly from this characterization of the reals. The hyperreals form an ordered field that is not Archimedean. It is an extension of the reals obtained by including infinite and infinitesimal numbers. These are larger, respectively smaller than any real number. The hyperreals form the foundational basis of non-standard analysis. Topological fields Another refinement of the notion of a field is a topological field, in which the set is a topological space, such that all operations of the field (addition, multiplication, the maps and ) are continuous maps with respect to the topology of the space. 
The topology of all the fields discussed below is induced from a metric, i.e., a function that measures a distance between any two elements of . The completion of is another field in which, informally speaking, the "gaps" in the original field are filled, if there are any. For example, any irrational number , such as , is a "gap" in the rationals in the sense that it is a real number that can be approximated arbitrarily closely by rational numbers , in the sense that distance of and given by the absolute value is as small as desired. The following table lists some examples of this construction. The fourth column shows an example of a zero sequence, i.e., a sequence whose limit (for ) is zero. The field is used in number theory and -adic analysis. The algebraic closure carries a unique norm extending the one on , but is not complete. The completion of this algebraic closure, however, is algebraically closed. Because of its rough analogy to the complex numbers, it is sometimes called the field of complex p-adic numbers and is denoted by . Local fields The following topological fields are called local fields: finite extensions of (local fields of characteristic zero) finite extensions of , the field of Laurent series over (local fields of characteristic ). These two types of local fields share some fundamental similarities. In this relation, the elements and (referred to as uniformizer) correspond to each other. The first manifestation of this is at an elementary level: the elements of both fields can be expressed as power series in the uniformizer, with coefficients in . (However, since the addition in is done using carrying, which is not the case in , these fields are not isomorphic.) The following facts show that this superficial similarity goes much deeper: Any first-order statement that is true for almost all is also true for almost all . An application of this is the Ax–Kochen theorem describing zeros of homogeneous polynomials in . Tamely ramified extensions of both fields are in bijection to one another. Adjoining arbitrary -power roots of (in ), respectively of (in ), yields (infinite) extensions of these fields known as perfectoid fields. Strikingly, the Galois groups of these two fields are isomorphic, which is the first glimpse of a remarkable parallel between these two fields: Differential fields Differential fields are fields equipped with a derivation, i.e., allow to take derivatives of elements in the field. For example, the field , together with the standard derivative of polynomials forms a differential field. These fields are central to differential Galois theory, a variant of Galois theory dealing with linear differential equations. Galois theory Galois theory studies algebraic extensions of a field by studying the symmetry in the arithmetic operations of addition and multiplication. An important notion in this area is that of finite Galois extensions , which are, by definition, those that are separable and normal. The primitive element theorem shows that finite separable extensions are necessarily simple, i.e., of the form , where is an irreducible polynomial (as above). For such an extension, being normal and separable means that all zeros of are contained in and that has only simple zeros. The latter condition is always satisfied if has characteristic . For a finite Galois extension, the Galois group is the group of field automorphisms of that are trivial on (i.e., the bijections that preserve addition and multiplication and that send elements of to themselves). 
The importance of this group stems from the fundamental theorem of Galois theory, which constructs an explicit one-to-one correspondence between the set of subgroups of and the set of intermediate extensions of the extension . By means of this correspondence, group-theoretic properties translate into facts about fields. For example, if the Galois group of a Galois extension as above is not solvable (cannot be built from abelian groups), then the zeros of cannot be expressed in terms of addition, multiplication, and radicals, i.e., expressions involving . For example, the symmetric groups is not solvable for . Consequently, as can be shown, the zeros of the following polynomials are not expressible by sums, products, and radicals. For the latter polynomial, this fact is known as the Abel–Ruffini theorem: (and ), (where is regarded as a polynomial in , for some indeterminates , is any field, and ). The tensor product of fields is not usually a field. For example, a finite extension of degree is a Galois extension if and only if there is an isomorphism of -algebras . This fact is the beginning of Grothendieck's Galois theory, a far-reaching extension of Galois theory applicable to algebro-geometric objects. Invariants of fields Basic invariants of a field include the characteristic and the transcendence degree of over its prime field. The latter is defined as the maximal number of elements in that are algebraically independent over the prime field. Two algebraically closed fields and are isomorphic precisely if these two data agree. This implies that any two uncountable algebraically closed fields of the same cardinality and the same characteristic are isomorphic. For example, and are isomorphic (but not isomorphic as topological fields). Model theory of fields In model theory, a branch of mathematical logic, two fields and are called elementarily equivalent if every mathematical statement that is true for is also true for and conversely. The mathematical statements in question are required to be first-order sentences (involving , , the addition and multiplication). A typical example, for , an integer, is = "any polynomial of degree in has a zero in " The set of such formulas for all expresses that is algebraically closed. The Lefschetz principle states that is elementarily equivalent to any algebraically closed field of characteristic zero. Moreover, any fixed statement holds in if and only if it holds in any algebraically closed field of sufficiently high characteristic. If is an ultrafilter on a set , and is a field for every in , the ultraproduct of the with respect to is a field. It is denoted by , since it behaves in several ways as a limit of the fields : Łoś's theorem states that any first order statement that holds for all but finitely many , also holds for the ultraproduct. Applied to the above sentence , this shows that there is an isomorphism The Ax–Kochen theorem mentioned above also follows from this and an isomorphism of the ultraproducts (in both cases over all primes ) . In addition, model theory also studies the logical properties of various other types of fields, such as real closed fields or exponential fields (which are equipped with an exponential function ). Absolute Galois group For fields that are not algebraically closed (or not separably closed), the absolute Galois group is fundamentally important: extending the case of finite Galois extensions outlined above, this group governs all finite separable extensions of . 
By elementary means, the group can be shown to be the Prüfer group, the profinite completion of . This statement subsumes the fact that the only algebraic extensions of are the fields for , and that the Galois groups of these finite extensions are given by . A description in terms of generators and relations is also known for the Galois groups of -adic number fields (finite extensions of ). Representations of Galois groups and of related groups such as the Weil group are fundamental in many branches of arithmetic, such as the Langlands program. The cohomological study of such representations is done using Galois cohomology. For example, the Brauer group, which is classically defined as the group of central simple -algebras, can be reinterpreted as a Galois cohomology group, namely . K-theory Milnor K-theory is defined as The norm residue isomorphism theorem, proved around 2000 by Vladimir Voevodsky, relates this to Galois cohomology by means of an isomorphism Algebraic K-theory is related to the group of invertible matrices with coefficients the given field. For example, the process of taking the determinant of an invertible matrix leads to an isomorphism . Matsumoto's theorem shows that agrees with . In higher degrees, K-theory diverges from Milnor K-theory and remains hard to compute in general. Applications Linear algebra and commutative algebra If , then the equation has a unique solution in a field , namely This immediate consequence of the definition of a field is fundamental in linear algebra. For example, it is an essential ingredient of Gaussian elimination and of the proof that any vector space has a basis. The theory of modules (the analogue of vector spaces over rings instead of fields) is much more complicated, because the above equation may have several or no solutions. In particular systems of linear equations over a ring are much more difficult to solve than in the case of fields, even in the specially simple case of the ring of the integers. Finite fields: cryptography and coding theory A widely applied cryptographic routine uses the fact that discrete exponentiation, i.e., computing ( factors, for an integer ) in a (large) finite field can be performed much more efficiently than the discrete logarithm, which is the inverse operation, i.e., determining the solution to an equation . In elliptic curve cryptography, the multiplication in a finite field is replaced by the operation of adding points on an elliptic curve, i.e., the solutions of an equation of the form . Finite fields are also used in coding theory and combinatorics. Geometry: field of functions Functions on a suitable topological space into a field can be added and multiplied pointwise, e.g., the product of two functions is defined by the product of their values within the domain: . This makes these functions a -commutative algebra. For having a field of functions, one must consider algebras of functions that are integral domains. In this case the ratios of two functions, i.e., expressions of the form form a field, called field of functions. This occurs in two main cases. When is a complex manifold . In this case, one considers the algebra of holomorphic functions, i.e., complex differentiable functions. Their ratios form the field of meromorphic functions on . The function field of an algebraic variety (a geometric object defined as the common zeros of polynomial equations) consists of ratios of regular functions, i.e., ratios of polynomial functions on the variety. 
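The asymmetry behind the cryptographic use of finite fields mentioned above can be seen even at toy scale. In the following Python sketch the prime, the base and the exponent are illustrative values only (real systems use far larger parameters), and the brute-force loop stands in for the absence of any comparably fast method for the discrete logarithm:

p = 2_147_483_647          # the prime 2^31 - 1
g = 7                      # illustrative base
e = 123_456                # the secret exponent
h = pow(g, e, p)           # fast: built-in modular exponentiation by repeated squaring

def discrete_log(g, h, p):
    # brute-force search for an exponent k with g^k = h modulo p
    x, k = 1, 0
    while x != h:
        x = (x * g) % p
        k += 1
    return k

assert pow(g, discrete_log(g, h, p), p) == h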
The function field of the -dimensional space over a field is , i.e., the field consisting of ratios of polynomials in indeterminates. The function field of is the same as the one of any open dense subvariety. In other words, the function field is insensitive to replacing by a (slightly) smaller subvariety. The function field is invariant under isomorphism and birational equivalence of varieties. It is therefore an important tool for the study of abstract algebraic varieties and for the classification of algebraic varieties. For example, the dimension, which equals the transcendence degree of , is invariant under birational equivalence. For curves (i.e., the dimension is one), the function field is very close to : if is smooth and proper (the analogue of being compact), can be reconstructed, up to isomorphism, from its field of functions. In higher dimension the function field remembers less, but still decisive information about . The study of function fields and their geometric meaning in higher dimensions is referred to as birational geometry. The minimal model program attempts to identify the simplest (in a certain precise sense) algebraic varieties with a prescribed function field. Number theory: global fields Global fields are in the limelight in algebraic number theory and arithmetic geometry. They are, by definition, number fields (finite extensions of ) or function fields over (finite extensions of ). As for local fields, these two types of fields share several similar features, even though they are of characteristic and positive characteristic, respectively. This function field analogy can help to shape mathematical expectations, often first by understanding questions about function fields, and later treating the number field case. The latter is often more difficult. For example, the Riemann hypothesis concerning the zeros of the Riemann zeta function (open as of 2017) can be regarded as being parallel to the Weil conjectures (proven in 1974 by Pierre Deligne). Cyclotomic fields are among the most intensely studied number fields. They are of the form , where is a primitive th root of unity, i.e., a complex number that satisfies and for all . For being a regular prime, Kummer used cyclotomic fields to prove Fermat's Last Theorem, which asserts the non-existence of rational nonzero solutions to the equation . Local fields are completions of global fields. Ostrowski's theorem asserts that the only completions of , a global field, are the local fields and . Studying arithmetic questions in global fields may sometimes be done by looking at the corresponding questions locally. This technique is called the local–global principle. For example, the Hasse–Minkowski theorem reduces the problem of finding rational solutions of quadratic equations to solving these equations in and , whose solutions can easily be described. Unlike for local fields, the Galois groups of global fields are not known. Inverse Galois theory studies the (unsolved) problem whether any finite group is the Galois group for some number field . Class field theory describes the abelian extensions, i.e., ones with abelian Galois group, or equivalently the abelianized Galois groups of global fields. A classical statement, the Kronecker–Weber theorem, describes the maximal abelian extension of : it is the field obtained by adjoining all primitive th roots of unity. Kronecker's Jugendtraum asks for a similarly explicit description of of general number fields . 
For imaginary quadratic fields Q(√−d), d > 0, the theory of complex multiplication describes the maximal abelian extension using elliptic curves. For general number fields, no such explicit description is known. Related notions In addition to the additional structure that fields may enjoy, fields admit various other related notions. Since 0 ≠ 1 in any field, any field has at least two elements. Nonetheless, there is a concept of field with one element, which is suggested to be a limit of the finite fields F_p, as p tends to 1. In addition to division rings, there are various other weaker algebraic structures related to fields such as quasifields, near-fields and semifields. There are also proper classes with field structure, which are sometimes called Fields, with a capital 'F'. The surreal numbers form a Field containing the reals, and would be a field except for the fact that they are a proper class, not a set. The nimbers, a concept from game theory, form such a Field as well. Division rings Dropping one or several axioms in the definition of a field leads to other algebraic structures. As was mentioned above, commutative rings satisfy all field axioms except for the existence of multiplicative inverses. Dropping instead commutativity of multiplication leads to the concept of a division ring or skew field; sometimes associativity is weakened as well. The only division rings that are finite-dimensional R-vector spaces are R itself, C (which is a field), and the quaternions H (in which multiplication is non-commutative). This result is known as the Frobenius theorem. The octonions O, for which multiplication is neither commutative nor associative, form a normed alternative division algebra, but are not a division ring. This fact was proved using methods of algebraic topology in 1958 by Michel Kervaire, Raoul Bott, and John Milnor. Wedderburn's little theorem states that all finite division rings are fields. Notes Citations References External links Algebraic structures Abstract algebra
Field (mathematics)
[ "Mathematics" ]
9,020
[ "Mathematical structures", "Algebra", "Mathematical objects", "Algebraic structures", "Abstract algebra" ]
10,606
https://en.wikipedia.org/wiki/Factorial
In mathematics, the factorial of a non-negative denoted is the product of all positive integers less than or equal The factorial also equals the product of with the next smaller factorial: For example, The value of 0! is 1, according to the convention for an empty product. Factorials have been discovered in several ancient cultures, notably in Indian mathematics in the canonical works of Jain literature, and by Jewish mystics in the Talmudic book Sefer Yetzirah. The factorial operation is encountered in many areas of mathematics, notably in combinatorics, where its most basic use counts the possible distinct sequences – the permutations – of distinct objects: there In mathematical analysis, factorials are used in power series for the exponential function and other functions, and they also have applications in algebra, number theory, probability theory, and computer science. Much of the mathematics of the factorial function was developed beginning in the late 18th and early 19th centuries. Stirling's approximation provides an accurate approximation to the factorial of large numbers, showing that it grows more quickly than exponential growth. Legendre's formula describes the exponents of the prime numbers in a prime factorization of the factorials, and can be used to count the trailing zeros of the factorials. Daniel Bernoulli and Leonhard Euler interpolated the factorial function to a continuous function of complex numbers, except at the negative integers, the (offset) gamma function. Many other notable functions and number sequences are closely related to the factorials, including the binomial coefficients, double factorials, falling factorials, primorials, and subfactorials. Implementations of the factorial function are commonly used as an example of different computer programming styles, and are included in scientific calculators and scientific computing software libraries. Although directly computing large factorials using the product formula or recurrence is not efficient, faster algorithms are known, matching to within a constant factor the time for fast multiplication algorithms for numbers with the same number of digits. History The concept of factorials has arisen independently in many cultures: In Indian mathematics, one of the earliest known descriptions of factorials comes from the Anuyogadvāra-sūtra, one of the canonical works of Jain literature, which has been assigned dates varying from 300 BCE to 400 CE. It separates out the sorted and reversed order of a set of items from the other ("mixed") orders, evaluating the number of mixed orders by subtracting two from the usual product formula for the factorial. The product rule for permutations was also described by 6th-century CE Jain monk Jinabhadra. Hindu scholars have been using factorial formulas since at least 1150, when Bhāskara II mentioned factorials in his work Līlāvatī, in connection with a problem of how many ways Vishnu could hold his four characteristic objects (a conch shell, discus, mace, and lotus flower) in his four hands, and a similar problem for a ten-handed god. In the mathematics of the Middle East, the Hebrew mystic book of creation Sefer Yetzirah, from the Talmudic period (200 to 500 CE), lists factorials up to 7! as part of an investigation into the number of words that can be formed from the Hebrew alphabet. Factorials were also studied for similar reasons by 8th-century Arab grammarian Al-Khalil ibn Ahmad al-Farahidi. Arab mathematician Ibn al-Haytham (also known as Alhazen, c. 965 – c. 
1040) was the first to formulate Wilson's theorem connecting the factorials with the prime numbers. In Europe, although Greek mathematics included some combinatorics, and Plato famously used 5,040 (a factorial) as the population of an ideal community, in part because of its divisibility properties, there is no direct evidence of ancient Greek study of factorials. Instead, the first work on factorials in Europe was by Jewish scholars such as Shabbethai Donnolo, explicating the Sefer Yetzirah passage. In 1677, British author Fabian Stedman described the application of factorials to change ringing, a musical art involving the ringing of several tuned bells. From the late 15th century onward, factorials became the subject of study by Western mathematicians. In a 1494 treatise, Italian mathematician Luca Pacioli calculated factorials up to 11!, in connection with a problem of dining table arrangements. Christopher Clavius discussed factorials in a 1603 commentary on the work of Johannes de Sacrobosco, and in the 1640s, French polymath Marin Mersenne published large (but not entirely correct) tables of factorials, up to 64!, based on the work of Clavius. The power series for the exponential function, with the reciprocals of factorials for its coefficients, was first formulated in 1676 by Isaac Newton in a letter to Gottfried Wilhelm Leibniz. Other important works of early European mathematics on factorials include extensive coverage in a 1685 treatise by John Wallis, a study of their approximate values for large values of by Abraham de Moivre in 1721, a 1729 letter from James Stirling to de Moivre stating what became known as Stirling's approximation, and work at the same time by Daniel Bernoulli and Leonhard Euler formulating the continuous extension of the factorial function to the gamma function. Adrien-Marie Legendre included Legendre's formula, describing the exponents in the factorization of factorials into prime powers, in an 1808 text on number theory. The notation for factorials was introduced by the French mathematician Christian Kramp in 1808. Many other notations have also been used. Another later notation , in which the argument of the factorial was half-enclosed by the left and bottom sides of a box, was popular for some time in Britain and America but fell out of use, perhaps because it is difficult to typeset. The word "factorial" (originally French: factorielle) was first used in 1800 by Louis François Antoine Arbogast, in the first work on Faà di Bruno's formula, but referring to a more general concept of products of arithmetic progressions. The "factors" that this name refers to are the terms of the product formula for the factorial. Definition The factorial function of a positive integer is defined by the product of all positive integers not greater than This may be written more concisely in product notation as If this product formula is changed to keep all but the last term, it would define a product of the same form, for a smaller factorial. This leads to a recurrence relation, according to which each value of the factorial function can be obtained by multiplying the previous value For example, Factorial of zero The factorial or in symbols, There are several motivations for this definition: For the definition of as a product involves the product of no numbers at all, and so is an example of the broader convention that the empty product, a product of no factors, is equal to the multiplicative identity. 
There is exactly one permutation of zero objects: with nothing to permute, the only rearrangement is to do nothing. This convention makes many identities in combinatorics valid for all valid choices of their parameters. For instance, the number of ways to choose all elements from a set of is a binomial coefficient identity that would only be valid With the recurrence relation for the factorial remains valid Therefore, with this convention, a recursive computation of the factorial needs to have only the value for zero as a base case, simplifying the computation and avoiding the need for additional special cases. Setting allows for the compact expression of many formulae, such as the exponential function, as a power series: This choice matches the gamma function and the gamma function must have this value to be a continuous function. Applications The earliest uses of the factorial function involve counting permutations: there are different ways of arranging distinct objects into a sequence. Factorials appear more broadly in many formulas in combinatorics, to account for different orderings of objects. For instance the binomial coefficients count the combinations (subsets of from a set with and can be computed from factorials using the formula The Stirling numbers of the first kind sum to the factorials, and count the permutations grouped into subsets with the same numbers of cycles. Another combinatorial application is in counting derangements, permutations that do not leave any element in its original position; the number of derangements of items is the nearest integer In algebra, the factorials arise through the binomial theorem, which uses binomial coefficients to expand powers of sums. They also occur in the coefficients used to relate certain families of polynomials to each other, for instance in Newton's identities for symmetric polynomials. Their use in counting permutations can also be restated algebraically: the factorials are the orders of finite symmetric groups. In calculus, factorials occur in Faà di Bruno's formula for chaining higher derivatives. In mathematical analysis, factorials frequently appear in the denominators of power series, most notably in the series for the exponential function, and in the coefficients of other Taylor series (in particular those of the trigonometric and hyperbolic functions), where they cancel factors of coming from the This usage of factorials in power series connects back to analytic combinatorics through the exponential generating function, which for a combinatorial class with elements of is defined as the power series In number theory, the most salient property of factorials is the divisibility of by all positive integers up described more precisely for prime factors by Legendre's formula. It follows that arbitrarily large prime numbers can be found as the prime factors of the numbers , leading to a proof of Euclid's theorem that the number of primes is infinite. When is itself prime it is called a factorial prime; relatedly, Brocard's problem, also posed by Srinivasa Ramanujan, concerns the existence of square numbers of the form In contrast, the numbers must all be composite, proving the existence of arbitrarily large prime gaps. An elementary proof of Bertrand's postulate on the existence of a prime in any interval of the one of the first results of Paul Erdős, was based on the divisibility properties of factorials. The factorial number system is a mixed radix notation for numbers in which the place values of each digit are factorials. 
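Two of the counting formulas above are easy to verify numerically. The following Python sketch (the values n = 6 and k = 2 are arbitrary) computes a binomial coefficient from factorials and recovers the number of derangements of six items from the nearest-integer formula:

import math

n, k = 6, 2
binom = math.factorial(n) // (math.factorial(k) * math.factorial(n - k))
assert binom == math.comb(n, k) == 15

derangements = round(math.factorial(n) / math.e)   # nearest integer to n!/e
assert derangements == 265                          # the known count for n = 6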
Factorials are used extensively in probability theory, for instance in the Poisson distribution and in the probabilities of random permutations. In computer science, beyond appearing in the analysis of brute-force searches over permutations, factorials arise in the lower bound of on the number of comparisons needed to comparison sort a set of items, and in the analysis of chained hash tables, where the distribution of keys per cell can be accurately approximated by a Poisson distribution. Moreover, factorials naturally appear in formulae from quantum and statistical physics, where one often considers all the possible permutations of a set of particles. In statistical mechanics, calculations of entropy such as Boltzmann's entropy formula or the Sackur–Tetrode equation must correct the count of microstates by dividing by the factorials of the numbers of each type of indistinguishable particle to avoid the Gibbs paradox. Quantum physics provides the underlying reason for why these corrections are necessary. Properties Growth and approximation As a function the factorial has faster than exponential growth, but grows more slowly than a double exponential function. Its growth rate is similar but slower by an exponential factor. One way of approaching this result is by taking the natural logarithm of the factorial, which turns its product formula into a sum, and then estimating the sum by an integral: Exponentiating the result (and ignoring the negligible term) approximates as More carefully bounding the sum both above and below by an integral, using the trapezoid rule, shows that this estimate needs a correction factor proportional The constant of proportionality for this correction can be found from the Wallis product, which expresses as a limiting ratio of factorials and powers of two. The result of these corrections is Stirling's approximation: Here, the symbol means that, as goes to infinity, the ratio between the left and right sides approaches one in the limit. Stirling's formula provides the first term in an asymptotic series that becomes even more accurate when taken to greater numbers of terms: An alternative version uses only odd exponents in the correction terms: Many other variations of these formulas have also been developed, by Srinivasa Ramanujan, Bill Gosper, and others. The binary logarithm of the factorial, used to analyze comparison sorting, can be very accurately estimated using Stirling's approximation. In the formula below, the term invokes big O notation. Divisibility and digits The product formula for the factorial implies that is divisible by all prime numbers that are at and by no larger prime numbers. More precise information about its divisibility is given by Legendre's formula, which gives the exponent of each prime in the prime factorization of as Here denotes the sum of the digits and the exponent given by this formula can also be interpreted in advanced mathematics as the -adic valuation of the factorial. Applying Legendre's formula to the product formula for binomial coefficients produces Kummer's theorem, a similar result on the exponent of each prime in the factorization of a binomial coefficient. Grouping the prime factors of the factorial into prime powers in different ways produces the multiplicative partitions of factorials. The special case of Legendre's formula for gives the number of trailing zeros in the decimal representation of the factorials. 
According to this formula, the number of zeros can be obtained by subtracting the base-5 digits of from , and dividing the result by four. Legendre's formula implies that the exponent of the prime is always larger than the exponent for so each factor of five can be paired with a factor of two to produce one of these trailing zeros. The leading digits of the factorials are distributed according to Benford's law. Every sequence of digits, in any base, is the sequence of initial digits of some factorial number in that base. Another result on divisibility of factorials, Wilson's theorem, states that is divisible by if and only if is a prime number. For any given the Kempner function of is given by the smallest for which divides For almost all numbers (all but a subset of exceptions with asymptotic density zero), it coincides with the largest prime factor The product of two factorials, always evenly divides There are infinitely many factorials that equal the product of other factorials: if is itself any product of factorials, then equals that same product multiplied by one more factorial, The only known examples of factorials that are products of other factorials but are not of this "trivial" form are and It would follow from the conjecture that there are only finitely many nontrivial examples. The greatest common divisor of the values of a primitive polynomial of degree over the integers evenly divides Continuous interpolation and non-integer generalization There are infinitely many ways to extend the factorials to a continuous function. The most widely used of these uses the gamma function, which can be defined for positive real numbers as the integral The resulting function is related to the factorial of a non-negative integer by the equation which can be used as a definition of the factorial for non-integer arguments. At all values for which both and are defined, the gamma function obeys the functional equation generalizing the recurrence relation for the factorials. The same integral converges more generally for any complex number whose real part is positive. It can be extended to the non-integer points in the rest of the complex plane by solving for Euler's reflection formula However, this formula cannot be used at integers because, for them, the term would produce a division by zero. The result of this extension process is an analytic function, the analytic continuation of the integral formula for the gamma function. It has a nonzero value at all complex numbers, except for the non-positive integers where it has simple poles. Correspondingly, this provides a definition for the factorial at all complex numbers other than the negative integers. One property of the gamma function, distinguishing it from other continuous interpolations of the factorials, is given by the Bohr–Mollerup theorem, which states that the gamma function (offset by one) is the only log-convex function on the positive real numbers that interpolates the factorials and obeys the same functional equation. A related uniqueness theorem of Helmut Wielandt states that the complex gamma function and its scalar multiples are the only holomorphic functions on the positive complex half-plane that obey the functional equation and remain bounded for complex numbers with real part between 1 and 2. Other complex functions that interpolate the factorial values include Hadamard's gamma function, which is an entire function over all the complex numbers, including the non-positive integers. 
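Both Stirling's approximation and the trailing-zero count can be checked directly. In the following Python sketch the choice n = 50 is arbitrary, and the two helper functions are illustrative implementations of the special case of Legendre's formula described above, not library routines:

import math

n = 50
stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
print(math.factorial(n) / stirling)      # roughly 1.0017, close to 1 as expected

def trailing_zeros(n):
    # sum of floor(n / 5^i) over i >= 1
    total, power = 0, 5
    while power <= n:
        total += n // power
        power *= 5
    return total

def digit_sum(n, base):
    s = 0
    while n:
        s += n % base
        n //= base
    return s

assert trailing_zeros(50) == 12
assert (50 - digit_sum(50, 5)) // 4 == 12            # 50 is 200 in base 5, digit sum 2
assert str(math.factorial(50)).endswith("0" * 12)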
In the -adic numbers, it is not possible to continuously interpolate the factorial function directly, because the factorials of large integers (a dense subset of the -adics) converge to zero according to Legendre's formula, forcing any continuous function that is close to their values to be zero everywhere. Instead, the -adic gamma function provides a continuous interpolation of a modified form of the factorial, omitting the factors in the factorial that are divisible by . The digamma function is the logarithmic derivative of the gamma function. Just as the gamma function provides a continuous interpolation of the factorials, offset by one, the digamma function provides a continuous interpolation of the harmonic numbers, offset by the Euler–Mascheroni constant. Computation The factorial function is a common feature in scientific calculators. It is also included in scientific programming libraries such as the Python mathematical functions module and the Boost C++ library. If efficiency is not a concern, computing factorials is trivial: just successively multiply a variable initialized by the integers up The simplicity of this computation makes it a common example in the use of different computer programming styles and methods. The computation of can be expressed in pseudocode using iteration as define factorial(n): f := 1 for i := 1, 2, 3, ..., n: f := f * i return f or using recursion based on its recurrence relation as define factorial(n): if (n = 0) return 1 return n * factorial(n − 1) Other methods suitable for its computation include memoization, dynamic programming, and functional programming. The computational complexity of these algorithms may be analyzed using the unit-cost random-access machine model of computation, in which each arithmetic operation takes constant time and each number uses a constant amount of storage space. In this model, these methods can compute in time and the iterative version uses space Unless optimized for tail recursion, the recursive version takes linear space to store its call stack. However, this model of computation is only suitable when is small enough to allow to fit into a machine word. The values 12! and 20! are the largest factorials that can be stored in, respectively, the 32-bit and 64-bit integers. Floating point can represent larger factorials, but approximately rather than exactly, and will still overflow for factorials larger than The exact computation of larger factorials involves arbitrary-precision arithmetic, because of fast growth and integer overflow. Time of computation can be analyzed as a function of the number of digits or bits in the result. By Stirling's formula, has bits. The Schönhage–Strassen algorithm can produce a product in time and faster multiplication algorithms taking time are known. However, computing the factorial involves repeated products, rather than a single multiplication, so these time bounds do not apply directly. In this setting, computing by multiplying the numbers from 1 in sequence is inefficient, because it involves multiplications, a constant fraction of which take time each, giving total time A better approach is to perform the multiplications as a divide-and-conquer algorithm that multiplies a sequence of numbers by splitting it into two subsequences of numbers, multiplies each subsequence, and combines the results with one last multiplication. 
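A minimal sketch of that divide-and-conquer product, in Python, is given below; the function names are illustrative, and the comparison with math.factorial serves only as a correctness check:

import math

def product_range(lo, hi):
    # product of the integers lo, lo+1, ..., hi (an empty range gives 1)
    if lo > hi:
        return 1
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    return product_range(lo, mid) * product_range(mid + 1, hi)

def factorial(n):
    return product_range(1, n)

assert factorial(0) == 1
assert factorial(1000) == math.factorial(1000)

Because the two partial products at each level have comparable size, the expensive multiplications can take advantage of the fast multiplication algorithms discussed above.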
This approach to the factorial takes total time one logarithm comes from the number of bits in the factorial, a second comes from the multiplication algorithm, and a third comes from the divide and conquer. Even better efficiency is obtained by computing from its prime factorization, based on the principle that exponentiation by squaring is faster than expanding an exponent into a product. An algorithm for this by Arnold Schönhage begins by finding the list of the primes up for instance using the sieve of Eratosthenes, and uses Legendre's formula to compute the exponent for each prime. Then it computes the product of the prime powers with these exponents, using a recursive algorithm, as follows: Use divide and conquer to compute the product of the primes whose exponents are odd Divide all of the exponents by two (rounding down to an integer), recursively compute the product of the prime powers with these smaller exponents, and square the result Multiply together the results of the two previous steps The product of all primes up to is an -bit number, by the prime number theorem, so the time for the first step is , with one logarithm coming from the divide and conquer and another coming from the multiplication algorithm. In the recursive calls to the algorithm, the prime number theorem can again be invoked to prove that the numbers of bits in the corresponding products decrease by a constant factor at each level of recursion, so the total time for these steps at all levels of recursion adds in a geometric series The time for the squaring in the second step and the multiplication in the third step are again because each is a single multiplication of a number with bits. Again, at each level of recursion the numbers involved have a constant fraction as many bits (because otherwise repeatedly squaring them would produce too large a final result) so again the amounts of time for these steps in the recursive calls add in a geometric series Consequentially, the whole algorithm takes proportional to a single multiplication with the same number of bits in its result. Related sequences and functions Several other integer sequences are similar to or related to the factorials: Alternating factorial The alternating factorial is the absolute value of the alternating sum of the first factorials, These have mainly been studied in connection with their primality; only finitely many of them can be prime, but a complete list of primes of this form is not known. Bhargava factorial The Bhargava factorials are a family of integer sequences defined by Manjul Bhargava with similar number-theoretic properties to the factorials, including the factorials themselves as a special case. Double factorial The product of all the odd integers up to some odd positive is called the double factorial and denoted by That is, For example, . Double factorials are used in trigonometric integrals, in expressions for the gamma function at half-integers and the volumes of hyperspheres, and in counting binary trees and perfect matchings. Exponential factorial Just as triangular numbers sum the numbers from and factorials take their product, the exponential factorial exponentiates. The exponential factorial is defined recursively For example, the exponential factorial of 4 is These numbers grow much more quickly than regular factorials. 
Falling factorial The notations or are sometimes used to represent the product of the greatest integers counting up to and equal to This is also known as a falling factorial or backward factorial, and the notation is a Pochhammer symbol. Falling factorials count the number of different sequences of distinct items that can be drawn from a universe of items. They occur as coefficients in the higher derivatives of polynomials, and in the factorial moments of random variables. Hyperfactorials The hyperfactorial of is the product . These numbers form the discriminants of Hermite polynomials. They can be continuously interpolated by the K-function, and obey analogues to Stirling's formula and Wilson's theorem. Jordan–Pólya numbers The Jordan–Pólya numbers are the products of factorials, allowing repetitions. Every tree has a symmetry group whose number of symmetries is a Jordan–Pólya number, and every Jordan–Pólya number counts the symmetries of some tree. Primorial The primorial is the product of prime numbers less than or equal this construction gives them some similar divisibility properties to factorials, but unlike factorials they are squarefree. As with the factorial primes researchers have studied primorial primes Subfactorial The subfactorial yields the number of derangements of a set of objects. It is sometimes denoted , and equals the closest integer Superfactorial The superfactorial of is the product of the first factorials. The superfactorials are continuously interpolated by the Barnes G-function. References External links Combinatorics Gamma and related functions Factorial and binomial topics Unary operations
Factorial
[ "Mathematics" ]
5,323
[ "Functions and mappings", "Discrete mathematics", "Factorial and binomial topics", "Unary operations", "Mathematical objects", "Combinatorics", "Mathematical relations" ]
10,635
https://en.wikipedia.org/wiki/Free%20software
Free software, libre software, libreware sometimes known as freedom-respecting software is computer software distributed under terms that allow users to run the software for any purpose as well as to study, change, and distribute it and any adapted versions. Free software is a matter of liberty, not price; all users are legally free to do what they want with their copies of a free software (including profiting from them) regardless of how much is paid to obtain the program. Computer programs are deemed "free" if they give end-users (not just the developer) ultimate control over the software and, subsequently, over their devices. The right to study and modify a computer program entails that the source code—the preferred format for making changes—be made available to users of that program. While this is often called "access to source code" or "public availability", the Free Software Foundation (FSF) recommends against thinking in those terms, because it might give the impression that users have an obligation (as opposed to a right) to give non-users a copy of the program. Although the term "free software" had already been used loosely in the past and other permissive software like the Berkeley Software Distribution released in 1978 existed, Richard Stallman is credited with tying it to the sense under discussion and starting the free software movement in 1983, when he launched the GNU Project: a collaborative effort to create a freedom-respecting operating system, and to revive the spirit of cooperation once prevalent among hackers during the early days of computing. Context Free software differs from: proprietary software, such as Microsoft Office, Windows, Adobe Photoshop, Facebook or FaceTime. Users cannot study, change, and share their source code. freeware or gratis software, which is a category of proprietary software that does not require payment for basic use. For software under the purview of copyright to be free, it must carry a software license whereby the author grants users the aforementioned rights. Software that is not covered by copyright law, such as software in the public domain, is free as long as the source code is also in the public domain, or otherwise available without restrictions. Proprietary software uses restrictive software licences or EULAs and usually does not provide users with the source code. Users are thus legally or technically prevented from changing the software, and this results in reliance on the publisher to provide updates, help, and support. (See also vendor lock-in and abandonware). Users often may not reverse engineer, modify, or redistribute proprietary software. Beyond copyright law, contracts and a lack of source code, there can exist additional obstacles keeping users from exercising freedom over a piece of software, such as software patents and digital rights management (more specifically, tivoization). Free software can be a for-profit, commercial activity or not. Some free software is developed by volunteer computer programmers while other is developed by corporations; or even by both. Naming and differences with open source Although both definitions refer to almost equivalent corpora of programs, the Free Software Foundation recommends using the term "free software" rather than "open-source software" (an alternative, yet similar, concept coined in 1998), because the goals and messaging are quite dissimilar. 
According to the Free Software Foundation, "Open source" and its associated campaign mostly focus on the technicalities of the public development model and marketing free software to businesses, while taking the ethical issue of user rights very lightly or even antagonistically. Stallman has also stated that considering the practical advantages of free software is like considering the practical advantages of not being handcuffed, in that it is not necessary for an individual to consider practical reasons in order to realize that being handcuffed is undesirable in itself. The FSF also notes that "Open Source" has exactly one specific meaning in common English, namely that "you can look at the source code." It states that while the term "Free Software" can lead to two different interpretations, at least one of them is consistent with the intended meaning unlike the term "Open Source". The loan adjective "libre" is often used to avoid the ambiguity of the word "free" in the English language, and the ambiguity with the older usage of "free software" as public-domain software. (See Gratis versus libre.) Definition and the Four Essential Freedoms of Free Software The first formal definition of free software was published by FSF in February 1986. That definition, written by Richard Stallman, is still maintained today and states that software is free software if people who receive a copy of the software have the following four freedoms. The numbering begins with zero, not only as a spoof on the common usage of zero-based numbering in programming languages, but also because "Freedom 0" was not initially included in the list, but later added first in the list as it was considered very important. Freedom 0: The freedom to use the program for any purpose. Freedom 1: The freedom to study how the program works, and change it to make it do what you wish. Freedom 2: The freedom to redistribute and make copies so you can help your neighbor. Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits. Freedoms 1 and 3 require source code to be available because studying and modifying software without its source code can range from highly impractical to nearly impossible. Thus, free software means that computer users have the freedom to cooperate with whom they choose, and to control the software they use. To summarize this into a remark distinguishing libre (freedom) software from gratis (zero price) software, the Free Software Foundation says: "Free software is a matter of liberty, not price. To understand the concept, you should think of 'free' as in 'free speech', not as in 'free beer. (See Gratis versus libre.) In the late 1990s, other groups published their own definitions that describe an almost identical set of software. The most notable are Debian Free Software Guidelines published in 1997, and The Open Source Definition, published in 1998. The BSD-based operating systems, such as FreeBSD, OpenBSD, and NetBSD, do not have their own formal definitions of free software. Users of these systems generally find the same set of software to be acceptable, but sometimes see copyleft as restrictive. They generally advocate permissive free software licenses, which allow others to use the software as they wish, without being legally forced to provide the source code. Their view is that this permissive approach is more free. 
The Kerberos, X11, and Apache software licenses are substantially similar in intent and implementation. Examples There are thousands of free applications and many operating systems available on the Internet. Users can easily download and install those applications via a package manager that comes included with most Linux distributions. The Free Software Directory maintains a large database of free-software packages. Some of the best-known examples include Linux-libre, Linux-based operating systems, the GNU Compiler Collection and C library; the MySQL relational database; the Apache web server; and the Sendmail mail transport agent. Other influential examples include the Emacs text editor; the GIMP raster drawing and image editor; the X Window System graphical-display system; the LibreOffice office suite; and the TeX and LaTeX typesetting systems. History From the 1950s up until the early 1970s, it was normal for computer users to have the software freedoms associated with free software, which was typically public-domain software. Software was commonly shared by individuals who used computers and by hardware manufacturers who welcomed the fact that people were making software that made their hardware useful. Organizations of users and suppliers, for example, SHARE, were formed to facilitate exchange of software. As software was often written in an interpreted language such as BASIC, the source code was distributed to use these programs. Software was also shared and distributed as printed source code (Type-in program) in computer magazines (like Creative Computing, SoftSide, Compute!, Byte, etc.) and books, like the bestseller BASIC Computer Games. By the early 1970s, the picture changed: software costs were dramatically increasing, a growing software industry was competing with the hardware manufacturer's bundled software products (free in that the cost was included in the hardware cost), leased machines required software support while providing no revenue for software, and some customers able to better meet their own needs did not want the costs of "free" software bundled with hardware product costs. In United States vs. IBM, filed January 17, 1969, the government charged that bundled software was anti-competitive. While some software might always be free, there would henceforth be a growing amount of software produced primarily for sale. In the 1970s and early 1980s, the software industry began using technical measures (such as only distributing binary copies of computer programs) to prevent computer users from being able to study or adapt the software applications as they saw fit. In 1980, copyright law was extended to computer programs. In 1983, Richard Stallman, one of the original authors of the popular Emacs program and a longtime member of the hacker community at the MIT Artificial Intelligence Laboratory, announced the GNU Project, the purpose of which was to produce a completely non-proprietary Unix-compatible operating system, saying that he had become frustrated with the shift in climate surrounding the computer world and its users. In his initial declaration of the project and its purpose, he specifically cited as a motivation his opposition to being asked to agree to non-disclosure agreements and restrictive licenses which prohibited the free sharing of potentially profitable in-development software, a prohibition directly contrary to the traditional hacker ethic. 
Software development for the GNU operating system began in January 1984, and the Free Software Foundation (FSF) was founded in October 1985. He developed a free software definition and the concept of "copyleft", designed to ensure software freedom for all. Some non-software industries are beginning to use techniques similar to those used in free software development for their research and development process; scientists, for example, are looking towards more open development processes, and hardware such as microchips are beginning to be developed with specifications released under copyleft licenses (see the OpenCores project, for instance). Creative Commons and the free-culture movement have also been largely influenced by the free software movement. 1980s: Foundation of the GNU Project In 1983, Richard Stallman, longtime member of the hacker community at the MIT Artificial Intelligence Laboratory, announced the GNU Project, saying that he had become frustrated with the effects of the change in culture of the computer industry and its users. Software development for the GNU operating system began in January 1984, and the Free Software Foundation (FSF) was founded in October 1985. An article outlining the project and its goals was published in March 1985 titled the GNU Manifesto. The manifesto included significant explanation of the GNU philosophy, Free Software Definition and "copyleft" ideas. 1990s: Release of the Linux kernel The Linux kernel, started by Linus Torvalds, was released as freely modifiable source code in 1991. The first licence was a proprietary software licence. However, with version 0.12 in February 1992, he relicensed the project under the GNU General Public License. Much like Unix, Torvalds' kernel attracted the attention of volunteer programmers. FreeBSD and NetBSD (both derived from 386BSD) were released as free software when the USL v. BSDi lawsuit was settled out of court in 1993. OpenBSD forked from NetBSD in 1995. Also in 1995, The Apache HTTP Server, commonly referred to as Apache, was released under the Apache License 1.0. Licensing All free-software licenses must grant users all the freedoms discussed above. However, unless the applications' licenses are compatible, combining programs by mixing source code or directly linking binaries is problematic, because of license technicalities. Programs indirectly connected together may avoid this problem. The majority of free software falls under a small set of licenses. The most popular of these licenses are: The MIT License The GNU General Public License v2 (GPLv2) The Apache License The GNU General Public License v3 (GPLv3) The BSD License The GNU Lesser General Public License (LGPL) The Mozilla Public License (MPL) The Eclipse Public License The Free Software Foundation and the Open Source Initiative both publish lists of licenses that they find to comply with their own definitions of free software and open-source software respectively: List of FSF approved software licenses List of OSI approved software licenses The FSF list is not prescriptive: free-software licenses can exist that the FSF has not heard about, or considered important enough to write about. So it is possible for a license to be free and not in the FSF list. The OSI list only lists licenses that have been submitted, considered and approved. All open-source licenses must meet the Open Source Definition in order to be officially recognized as open source software. 
Free software, on the other hand, is a more informal classification that does not rely on official recognition. Nevertheless, software licensed under licenses that do not meet the Free Software Definition cannot rightly be considered free software. Apart from these two organizations, the Debian project is seen by some to provide useful advice on whether particular licenses comply with their Debian Free Software Guidelines. Debian does not publish a list of licenses, so its judgments have to be tracked by checking what software they have allowed into their software archives. That is summarized at the Debian web site. It is rare that a license announced as being in-compliance with the FSF guidelines does not also meet the Open Source Definition, although the reverse is not necessarily true (for example, the NASA Open Source Agreement is an OSI-approved license, but non-free according to FSF). There are different categories of free software. Public-domain software: the copyright has expired, the work was not copyrighted (released without copyright notice before 1988), or the author has released the software onto the public domain with a waiver statement (in countries where this is possible). Since public-domain software lacks copyright protection, it may be freely incorporated into any work, whether proprietary or free. The FSF recommends the CC0 public domain dedication for this purpose. Permissive licenses, also called BSD-style because they are applied to much of the software distributed with the BSD operating systems. The author retains copyright solely to disclaim warranty and require proper attribution of modified works, and permits redistribution and modification, even closed-source ones. Copyleft licenses, with the GNU General Public License being the most prominent: the author retains copyright and permits redistribution under the restriction that all such redistribution is licensed under the same license. Additions and modifications by others must also be licensed under the same "copyleft" license whenever they are distributed with part of the original licensed product. This is also known as a viral, protective, or reciprocal license. Proponents of permissive and copyleft licenses disagree on whether software freedom should be viewed as a negative or positive liberty. Due to their restrictions on distribution, not everyone considers copyleft licenses to be free. Conversely, a permissive license may provide an incentive to create non-free software by reducing the cost of developing restricted software. Since this is incompatible with the spirit of software freedom, many people consider permissive licenses to be less free than copyleft licenses. Security and reliability There is debate over the security of free software in comparison to proprietary software, with a major issue being security through obscurity. A popular quantitative test in computer security is to use relative counting of known unpatched security flaws. Generally, users of this method advise avoiding products that lack fixes for known security flaws, at least until a fix is available. Free software advocates strongly believe that this methodology is biased by counting more vulnerabilities for the free software systems, since their source code is accessible and their community is more forthcoming about what problems exist as a part of full disclosure, and proprietary software systems can have undisclosed societal drawbacks, such as disenfranchising less fortunate would-be users of free programs. 
As users can analyse and trace the source code, many more people with no commercial constraints can inspect the code and find bugs and loopholes than a corporation would find practicable. According to Richard Stallman, user access to the source code makes deploying free software with undesirable hidden spyware functionality far more difficult than for proprietary software. Some quantitative studies have been done on the subject. Binary blobs and other proprietary software In 2006, OpenBSD started the first campaign against the use of binary blobs in kernels. Blobs are usually freely distributable device drivers for hardware from vendors that do not reveal driver source code to users or developers. This restricts the users' freedom effectively to modify the software and distribute modified versions. Also, since the blobs are undocumented and may have bugs, they pose a security risk to any operating system whose kernel includes them. The proclaimed aim of the campaign against blobs is to collect hardware documentation that allows developers to write free software drivers for that hardware, ultimately enabling all free operating systems to become or remain blob-free. The issue of binary blobs in the Linux kernel and other device drivers motivated some developers in Ireland to launch gNewSense, a Linux-based distribution with all the binary blobs removed. The project received support from the Free Software Foundation and stimulated the creation, headed by the Free Software Foundation Latin America, of the Linux-libre kernel. , Trisquel is the most popular FSF endorsed Linux distribution ranked by Distrowatch (over 12 months). While Debian is not endorsed by the FSF and does not use Linux-libre, it is also a popular distribution available without kernel blobs by default since 2011. The Linux community uses the term "blob" to refer to all nonfree firmware in a kernel whereas OpenBSD uses the term to refer to device drivers. The FSF does not consider OpenBSD to be blob free under the Linux community's definition of blob. Business model Selling software under any free-software licence is permissible, as is commercial use. This is true for licenses with or without copyleft. Since free software may be freely redistributed, it is generally available at little or no fee. Free software business models are usually based on adding value such as customization, accompanying hardware, support, training, integration, or certification. Exceptions exist however, where the user is charged to obtain a copy of the free application itself. Fees are usually charged for distribution on compact discs and bootable USB drives, or for services of installing or maintaining the operation of free software. Development of large, commercially used free software is often funded by a combination of user donations, crowdfunding, corporate contributions, and tax money. The SELinux project at the United States National Security Agency is an example of a federally funded free-software project. Proprietary software, on the other hand, tends to use a different business model, where a customer of the proprietary application pays a fee for a license to legally access and use it. This license may grant the customer the ability to configure some or no parts of the software themselves. Often some level of support is included in the purchase of proprietary software, but additional support services (especially for enterprise applications) are usually available for an additional fee. 
Some proprietary software vendors will also customize software for a fee. The Free Software Foundation encourages selling free software. As the Foundation has written, "distributing free software is an opportunity to raise funds for development. Don't waste it!". For example, the FSF's own recommended license (the GNU GPL) states that "[you] may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee." Microsoft CEO Steve Ballmer stated in 2001 that "open source is not available to commercial companies. The way the license is written, if you use any open-source software, you have to make the rest of your software open source." This misunderstanding is based on a requirement of copyleft licenses (like the GPL) that if one distributes modified versions of software, they must release the source and use the same license. This requirement does not extend to other software from the same developer. The claim of incompatibility between commercial companies and free software is also a misunderstanding. There are several large companies, e.g. Red Hat and IBM (IBM acquired RedHat in 2019), which do substantial commercial business in the development of free software. Economic aspects and adoption Free software played a significant part in the development of the Internet, the World Wide Web and the infrastructure of dot-com companies. Free software allows users to cooperate in enhancing and refining the programs they use; free software is a pure public good rather than a private good. Companies that contribute to free software increase commercial innovation. The economic viability of free software has been recognized by large corporations such as IBM, Red Hat, and Sun Microsystems. Many companies whose core business is not in the IT sector choose free software for their Internet information and sales sites, due to the lower initial capital investment and ability to freely customize the application packages. Most companies in the software business include free software in their commercial products if the licenses allow that. Free software is generally available at no cost and can result in permanently lower TCO (total cost of ownership) compared to proprietary software. With free software, businesses can fit software to their specific needs by changing the software themselves or by hiring programmers to modify it for them. Free software often has no warranty, and more importantly, generally does not assign legal liability to anyone. However, warranties are permitted between any two parties upon the condition of the software and its usage. Such an agreement is made separately from the free software license. A report by Standish Group estimates that adoption of free software has caused a drop in revenue to the proprietary software industry by about $60 billion per year. Eric S. Raymond argued that the term free software is too ambiguous and intimidating for the business community. Raymond promoted the term open-source software as a friendlier alternative for the business and corporate world. See also Definition of Free Cultural Works Digital rights Free content List of formerly proprietary software List of free software project directories List of free software for Web 2.0 Services Open format Open standard Open-source hardware Outline of free software :Category:Free software lists and comparisons Appropriate Technology Sustainable Development Gratis versus libre Notes References Further reading Puckette, Miller. 
"Who Owns our Software?: A first-person case study." eContact (September 2009). Montréal: CEC Hancock, Terry. "The Jargon of Freedom: 60 Words and Phrases with Context". Free Software Magazine. 2010-20-24 External links Software licensing Applied ethics
Free software
[ "Biology" ]
4,738
[ "Behavior", "Human behavior", "Applied ethics" ]
10,779
https://en.wikipedia.org/wiki/Frequency
Frequency (symbol f), most often measured in hertz (symbol: Hz), is the number of occurrences of a repeating event per unit of time. It is also occasionally referred to as temporal frequency for clarity and to distinguish it from spatial frequency. Ordinary frequency is related to angular frequency (symbol ω, with SI unit radian per second) by a factor of 2π: ω = 2πf. The period (symbol T) is the interval of time between events, so the period is the reciprocal of the frequency: T = 1/f. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals (sound), radio waves, and light. For example, if a heart beats at a frequency of 120 times per minute (2 hertz), the period—the time interval between beats—is half a second (60 seconds divided by 120). Definitions and units For cyclical phenomena such as oscillations, waves, or for examples of simple harmonic motion, the term frequency is defined as the number of cycles or repetitions per unit of time. The conventional symbol for frequency is f; the Greek letter ν (nu) is also used. The period T is the time taken to complete one cycle of an oscillation or rotation. The frequency and the period are related by the equation f = 1/T. The term temporal frequency is used to emphasise that the frequency is characterised by the number of occurrences of a repeating event per unit time. The SI unit of frequency is the hertz (Hz), named after the German physicist Heinrich Hertz by the International Electrotechnical Commission in 1930. It was adopted by the CGPM (Conférence générale des poids et mesures) in 1960, officially replacing the previous name, cycle per second (cps). The SI unit for the period, as for all measurements of time, is the second. A traditional unit of frequency used with rotating mechanical devices, where it is termed rotational frequency, is revolution per minute, abbreviated r/min or rpm. Sixty rpm is equivalent to one hertz. Period versus frequency As a matter of convenience, longer and slower waves, such as ocean surface waves, are more typically described by wave period rather than frequency. Short and fast waves, like audio and radio, are usually described by their frequency. Related quantities Rotational frequency, usually denoted by the Greek letter ν (nu), is defined as the instantaneous rate of change of the number of rotations, N, with respect to time (ν = dN/dt): it is a type of frequency applied to rotational motion. Angular frequency, usually denoted by the Greek letter ω (omega), is defined as the rate of change of angular displacement (during rotation), θ (theta), or the rate of change of the phase of a sinusoidal waveform (notably in oscillations and waves), or as the rate of change of the argument of the sine function; in each case ω = dθ/dt. The unit of angular frequency is the radian per second (rad/s) but, for discrete-time signals, can also be expressed as radians per sampling interval, which is a dimensionless quantity. Angular frequency is ordinary frequency multiplied by 2π: ω = 2πf. Spatial frequency, denoted here by ξ (xi), is analogous to temporal frequency, but with a spatial measurement replacing the time measurement: ξ = 1/λ, the number of cycles per unit of distance. Spatial period or wavelength is the spatial analog to temporal period. In wave propagation For periodic waves in nondispersive media (that is, media in which the wave speed is independent of frequency), frequency has an inverse relationship to the wavelength, λ (lambda).
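Before moving on to wave propagation, the relations just defined (T = 1/f, ω = 2πf, and 60 rpm = 1 Hz) can be illustrated with a short sketch. The snippet below is purely illustrative; the function names are chosen for readability here and do not come from any particular library.

```python
import math

def period_from_frequency(f_hz: float) -> float:
    """Period T = 1/f, in seconds."""
    return 1.0 / f_hz

def angular_frequency(f_hz: float) -> float:
    """Angular frequency omega = 2*pi*f, in radians per second."""
    return 2.0 * math.pi * f_hz

def rpm_to_hertz(rpm: float) -> float:
    """Rotational frequency: 60 revolutions per minute correspond to 1 Hz."""
    return rpm / 60.0

# The heartbeat example from the text: 120 beats per minute is 2 Hz,
# so the period between beats is half a second.
f_heart = rpm_to_hertz(120)             # 2.0 Hz
print(period_from_frequency(f_heart))   # 0.5 s
print(angular_frequency(f_heart))       # ~12.57 rad/s
```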
Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v/λ. In the special case of electromagnetic waves in vacuum, v = c, where c is the speed of light in vacuum, and this expression becomes f = c/λ. When monochromatic waves travel from one medium to another, their frequency remains the same—only their wavelength and speed change. Measurement Measurement of frequency can be done in the following ways: Counting Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the period. For example, if 71 events occur within 15 seconds, the frequency is f = 71/(15 s) ≈ 4.73 Hz. If the number of counts is not very large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2Tm), or a fractional error of Δf/f = 1/(2fTm), where Tm is the timing interval and f is the measured frequency. This error decreases with frequency, so it is generally a problem at low frequencies where the number of counts N is small. Stroboscope An old method of measuring the frequency of rotating or vibrating objects is to use a stroboscope. This is an intense repetitively flashing light (strobe light) whose frequency can be adjusted with a calibrated timing circuit. The strobe light is pointed at the rotating object and the frequency adjusted up and down. When the frequency of the strobe equals the frequency of the rotating or vibrating object, the object completes one cycle of oscillation and returns to its original position between the flashes of light, so when illuminated by the strobe the object appears stationary. Then the frequency can be read from the calibrated readout on the stroboscope. A downside of this method is that an object rotating at an integer multiple of the strobing frequency will also appear stationary. Frequency counter Higher frequencies are usually measured with a frequency counter. This is an electronic instrument which measures the frequency of an applied repetitive electronic signal and displays the result in hertz on a digital display. It uses digital logic to count the number of cycles during a time interval established by a precision quartz time base. Cyclic processes that are not electrical, such as the rotation rate of a shaft, mechanical vibrations, or sound waves, can be converted to a repetitive electronic signal by transducers and the signal applied to a frequency counter. As of 2018, frequency counters can cover the range up to about 100 GHz. This represents the limit of direct counting methods; frequencies above this must be measured by indirect methods. Heterodyne methods Above the range of frequency counters, frequencies of electromagnetic signals are often measured indirectly utilizing heterodyning (frequency conversion). A reference signal of a known frequency near the unknown frequency is mixed with the unknown frequency in a nonlinear mixing device such as a diode. This creates a heterodyne or "beat" signal at the difference between the two frequencies. If the two signals are close together in frequency the heterodyne is low enough to be measured by a frequency counter.
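As a concrete illustration of the counting and heterodyne arithmetic above, here is a minimal Python sketch; it reuses the 71-events-in-15-seconds example, while the 10 GHz figures in the beat-frequency example are made up purely for illustration.

```python
def counted_frequency(n_events: int, gate_time_s: float) -> float:
    """Frequency estimate from counting: f = N / T."""
    return n_events / gate_time_s

def gating_error_fraction(f_hz: float, gate_time_s: float) -> float:
    """Average fractional gating error, 1 / (2 * f * Tm), from the half-count uncertainty."""
    return 1.0 / (2.0 * f_hz * gate_time_s)

f = counted_frequency(71, 15.0)             # ~4.73 Hz
print(f, gating_error_fraction(f, 15.0))    # error is roughly 0.7% of the reading

def beat_frequency(f_unknown_hz: float, f_reference_hz: float) -> float:
    """Heterodyne 'beat' frequency: the difference between the mixed signals."""
    return abs(f_unknown_hz - f_reference_hz)

# A 10.000 GHz reference mixed with a ~10.003 GHz unknown gives a 3 MHz beat,
# which is easily measured by a frequency counter.
print(beat_frequency(10.003e9, 10.000e9))   # 3e6 Hz
```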
The heterodyne process measures only the difference between the unknown frequency and the reference frequency. To convert higher frequencies, several stages of heterodyning can be used. Current research is extending this method to infrared and light frequencies (optical heterodyne detection). Examples Light Visible light is an electromagnetic wave, consisting of oscillating electric and magnetic fields traveling through space. The frequency of the wave determines its color: 400 THz (4 × 10¹⁴ Hz) is red light, 800 THz (8 × 10¹⁴ Hz) is violet light, and between these (in the range 400–800 THz) are all the other colors of the visible spectrum. An electromagnetic wave with a frequency less than 4 × 10¹⁴ Hz will be invisible to the human eye; such waves are called infrared (IR) radiation. At even lower frequency, the wave is called a microwave, and at still lower frequencies it is called a radio wave. Likewise, an electromagnetic wave with a frequency higher than 8 × 10¹⁴ Hz will also be invisible to the human eye; such waves are called ultraviolet (UV) radiation. Even higher-frequency waves are called X-rays, and higher still are gamma rays. All of these waves, from the lowest-frequency radio waves to the highest-frequency gamma rays, are fundamentally the same, and they are all called electromagnetic radiation. They all travel through vacuum at the same speed (the speed of light), giving them wavelengths inversely proportional to their frequencies: λ = c/f, where c is the speed of light (c in vacuum, or less in other media), f is the frequency and λ is the wavelength. In dispersive media, such as glass, the speed depends somewhat on frequency, so the wavelength is not quite inversely proportional to frequency. Sound Sound propagates as mechanical vibration waves of pressure and displacement, in air or other substances. In general, the frequency components of a sound determine its "color", its timbre. When speaking of the frequency (in the singular) of a sound, it refers to the property that most determines its pitch. The frequencies an ear can hear are limited to a specific range of frequencies. The audible frequency range for humans is typically given as being between about 20 Hz and 20,000 Hz (20 kHz), though the high frequency limit usually reduces with age. Other species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz. In many media, such as air, the speed of sound is approximately independent of frequency, so the wavelength of the sound waves (distance between repetitions) is approximately inversely proportional to frequency. Line current In Europe, Africa, Australia, southern South America, most of Asia, and Russia, the frequency of the alternating current in household electrical outlets is 50 Hz (close to the tone G), whereas in North America and northern South America, the frequency of the alternating current in household electrical outlets is 60 Hz (between the tones B♭ and B; that is, a minor third above the European frequency). The frequency of the 'hum' in an audio recording can show in which of these general regions the recording was made. Aperiodic frequency Aperiodic frequency is the rate of incidence or occurrence of non-cyclic phenomena, including random processes such as radioactive decay. It is expressed with the unit reciprocal second (s⁻¹) or, in the case of radioactivity, with the unit becquerel. It is defined as a rate, f = N/Δt, involving the number of entities counted or the number of events that happened (N) during a given time duration (Δt); it is a physical quantity of the type temporal rate.
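A short sketch of the wavelength arithmetic used in these examples follows. It assumes the standard value of the speed of light and, for sound, a speed of roughly 343 m/s in room-temperature air; the latter figure is an assumption, not a value stated in the text.

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
V_SOUND_AIR = 343.0        # approximate speed of sound in room-temperature air, m/s (assumed)

def wavelength(speed_m_s: float, frequency_hz: float) -> float:
    """Wavelength lambda = v / f for a wave of the given speed and frequency."""
    return speed_m_s / frequency_hz

# Visible-light band edges quoted above: 400 THz (red) and 800 THz (violet).
print(wavelength(C_VACUUM, 400e12))       # ~7.5e-7 m, i.e. about 750 nm
print(wavelength(C_VACUUM, 800e12))       # ~3.7e-7 m, i.e. about 375 nm

# Audible range quoted above: roughly 20 Hz to 20 kHz.
print(wavelength(V_SOUND_AIR, 20.0))      # ~17 m
print(wavelength(V_SOUND_AIR, 20_000.0))  # ~17 mm

def aperiodic_rate(n_events: float, duration_s: float) -> float:
    """Aperiodic frequency f = N / dt, e.g. decays per second (becquerel)."""
    return n_events / duration_s
```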
See also Audio frequency Bandwidth (signal processing) Chirp Cutoff frequency Downsampling Electronic filter Fourier analysis Frequency band Frequency converter Frequency domain Frequency distribution Frequency extender Frequency grid Frequency level Frequency modulation Frequency spectrum Interaction frequency Least-squares spectral analysis Natural frequency Negative frequency Periodicity (disambiguation) Pink noise Preselector Radar signal characteristics Radio frequency Signaling (telecommunications) Spread spectrum Spectral component Transverter Upsampling Orders of magnitude (frequency) Notes References Sources Further reading External links Keyboard frequencies = naming of notes – The English and American system versus the German system A frequency generator with sound, useful for hearing tests
Frequency
[ "Physics" ]
2,272
[ "Scalar physical quantities", "Wikipedia categories named after physical quantities", "Frequency", "Physical quantities" ]
10,821
https://en.wikipedia.org/wiki/Francium
Francium is a chemical element; it has symbol Fr and atomic number 87. It is extremely radioactive; its most stable isotope, francium-223 (originally called actinium K after the natural decay chain in which it appears), has a half-life of only 22 minutes. It is the second-most electropositive element, behind only caesium, and is the second rarest naturally occurring element (after astatine). Francium's isotopes decay quickly into astatine, radium, and radon. The electronic structure of a francium atom is [Rn] 7s¹; thus, the element is classed as an alkali metal. As a consequence of its extreme instability, bulk francium has never been seen. Because of the general appearance of the other elements in its periodic table column, it is presumed that francium would appear as a highly reactive metal if enough could be collected together to be viewed as a bulk solid or liquid. Obtaining such a sample is highly improbable since the extreme heat of decay resulting from its short half-life would immediately vaporize any viewable quantity of the element. Francium was discovered by Marguerite Perey in France (from which the element takes its name) on January 7, 1939. Before its discovery, francium was referred to as eka-caesium or ekacaesium because of its conjectured existence below caesium in the periodic table. It was the last element first discovered in nature, rather than by synthesis. Outside the laboratory, francium is extremely rare, with trace amounts found in uranium ores, where the isotope francium-223 (in the family of uranium-235) continually forms and decays. Only a minute quantity exists at any given time throughout the Earth's crust; aside from francium-223 and francium-221, its other isotopes are entirely synthetic. The largest amount produced in the laboratory was a cluster of more than 300,000 atoms. Characteristics Francium is one of the most unstable of the naturally occurring elements: its longest-lived isotope, francium-223, has a half-life of only 22 minutes. The only comparable element is astatine, whose most stable natural isotope, astatine-219 (the alpha daughter of francium-223), has a half-life of 56 seconds, although synthetic astatine-210 is much longer-lived with a half-life of 8.1 hours. All isotopes of francium decay into astatine, radium, or radon. Francium-223 also has a shorter half-life than the longest-lived isotope of each synthetic element up to and including element 105, dubnium. Francium is an alkali metal whose chemical properties mostly resemble those of caesium. A heavy element with a single valence electron, it has the highest equivalent weight of any element. Liquid francium—if created—should have a surface tension of 0.05092 N/m at its melting point. Francium's melting point has only been estimated, and several differing values are encountered in the literature; it is uncertain because of the element's extreme rarity and radioactivity, and extrapolations based on Dmitri Mendeleev's method and a calculation based on the melting temperatures of binary ionic crystals give somewhat different results. The estimated boiling point is likewise uncertain; several estimates, as well as an extrapolation from Mendeleev's method, have been suggested. The density of francium is expected to be around 2.48 g/cm³ (Mendeleev's method extrapolates 2.4 g/cm³).
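Given the 22-minute half-life quoted above for francium-223, standard exponential-decay arithmetic shows how quickly any sample vanishes. The sketch below is generic decay maths, not a francium-specific model.

```python
def fraction_remaining(elapsed_minutes: float, half_life_minutes: float = 22.0) -> float:
    """Fraction of a radioactive sample left after the elapsed time: N/N0 = 2**(-t / t_half)."""
    return 2.0 ** (-elapsed_minutes / half_life_minutes)

# After one hour only about 15% of a francium-223 sample remains;
# after a single day essentially none of it is left.
print(fraction_remaining(60.0))        # ~0.15
print(fraction_remaining(24 * 60.0))   # ~2e-20
```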
Linus Pauling estimated the electronegativity of francium at 0.7 on the Pauling scale, the same as caesium; the value for caesium has since been refined to 0.79, but there are no experimental data to allow a refinement of the value for francium. Francium has a slightly higher ionization energy than caesium, 392.811(4) kJ/mol as opposed to 375.7041(2) kJ/mol for caesium, as would be expected from relativistic effects, and this would imply that caesium is the less electronegative of the two. Francium should also have a higher electron affinity than caesium and the Fr− ion should be more polarizable than the Cs− ion. Compounds As a result of francium's instability, its salts are only known to a small extent. Francium coprecipitates with several caesium salts, such as caesium perchlorate, which results in small amounts of francium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and C. M. Nelson. It will additionally coprecipitate with many other caesium salts, including the iodate, the picrate, the tartrate (also rubidium tartrate), the chloroplatinate, and the silicotungstate. It also coprecipitates with silicotungstic acid, and with perchloric acid, without another alkali metal as a carrier, which leads to other methods of separation. Francium perchlorate Francium perchlorate is produced by the reaction of francium chloride and sodium perchlorate. The francium perchlorate coprecipitates with caesium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and C. M. Nelson. However, this method is unreliable in separating thallium, which also coprecipitates with caesium. Francium perchlorate's entropy is expected to be 42.7 e.u (178.7 J mol−1 K−1). Francium halides Francium halides are all soluble in water and are expected to be white solids. They are expected to be produced by the reaction of the corresponding halogens. For example, francium chloride would be produced by the reaction of francium and chlorine. Francium chloride has been studied as a pathway to separate francium from other elements, by using the high vapour pressure of the compound, although francium fluoride would have a higher vapour pressure. Other compounds Francium nitrate, sulfate, hydroxide, carbonate, acetate, and oxalate, are all soluble in water, while the iodate, picrate, tartrate, chloroplatinate, and silicotungstate are insoluble. The insolubility of these compounds are used to extract francium from other radioactive products, such as zirconium, niobium, molybdenum, tin, antimony, the method mentioned in the section above. Francium oxide is believed to disproportionate to the peroxide and francium metal. The CsFr molecule is predicted to have francium at the negative end of the dipole, unlike all known heterodiatomic alkali metal molecules. Francium superoxide (FrO2) is expected to have a more covalent character than its lighter congeners; this is attributed to the 6p electrons in francium being more involved in the francium–oxygen bonding. The relativistic destabilisation of the 6p3/2 spinor may make francium compounds in oxidation states higher than +1 possible, such as [FrVF6]−; but this has not been experimentally confirmed. Isotopes There are 37 known isotopes of francium ranging in atomic mass from 197 to 233. Francium has seven metastable nuclear isomers. 
Francium-223 and francium-221 are the only isotopes that occur in nature, with the former being far more common. Francium-223 is the most stable isotope, with a half-life of 21.8 minutes, and it is highly unlikely that an isotope of francium with a longer half-life will ever be discovered or synthesized. Francium-223 is the fifth product of the uranium-235 decay series as a daughter isotope of actinium-227; thorium-227 is the more common daughter. Francium-223 then decays into radium-223 by beta decay (1.149 MeV decay energy), with a minor (0.006%) alpha decay path to astatine-219 (5.4 MeV decay energy). Francium-221 has a half-life of 4.8 minutes. It is the ninth product of the neptunium decay series as a daughter isotope of actinium-225. Francium-221 then decays into astatine-217 by alpha decay (6.457 MeV decay energy). Although all primordial 237Np is extinct, the neptunium decay series continues to exist naturally in tiny traces due to (n,2n) knockout reactions in natural 238U. Francium-222, with a half-life of 14 minutes, may be produced as a result of the beta decay of natural radon-222; this process has nonetheless not yet been observed, and it is not known whether it is energetically possible. The least stable ground state isotope is francium-215, with a half-life of 90 ns: it undergoes a 9.54 MeV alpha decay to astatine-211. Applications Due to its instability and rarity, there are no commercial applications for francium. It has been used for research purposes in the fields of chemistry and of atomic structure. Its use as a potential diagnostic aid for various cancers has also been explored, but this application has been deemed impractical. Francium's ability to be synthesized, trapped, and cooled, along with its relatively simple atomic structure, has made it the subject of specialized spectroscopy experiments. These experiments have led to more specific information regarding energy levels and the coupling constants between subatomic particles. Studies on the light emitted by laser-trapped francium-210 ions have provided accurate data on transitions between atomic energy levels which are fairly similar to those predicted by quantum theory. Francium is a prospective candidate for searching for CP violation. History As early as 1870, chemists thought that there should be an alkali metal beyond caesium, with an atomic number of 87. It was then referred to by the provisional name eka-caesium. Erroneous and incomplete discoveries In 1914, Stefan Meyer, Viktor F. Hess, and Friedrich Paneth (working in Vienna) made measurements of alpha radiation from various substances, including 227Ac. They observed the possibility of a minor alpha branch of this nuclide, though follow-up work could not be done due to the outbreak of World War I. Their observations were not precise or certain enough for them to announce the discovery of element 87, though it is likely that they did indeed observe the decay of 227Ac to 223Fr. Soviet chemist Dmitry Dobroserdov was the first scientist to claim to have found eka-caesium, or francium. In 1925, he observed weak radioactivity in a sample of potassium, another alkali metal, and incorrectly concluded that eka-caesium was contaminating the sample (the radioactivity from the sample was from the naturally occurring potassium radioisotope, potassium-40). He then published a thesis on his predictions of the properties of eka-caesium, in which he named the element russium after his home country.
Shortly thereafter, Dobroserdov began to focus on his teaching career at the Polytechnic Institute of Odesa, and he did not pursue the element further. The following year, English chemists Gerald J. F. Druce and Frederick H. Loring analyzed X-ray photographs of manganese(II) sulfate. They observed spectral lines which they presumed to be of eka-caesium. They announced their discovery of element 87 and proposed the name alkalinium, as it would be the heaviest alkali metal. In 1930, Fred Allison of the Alabama Polytechnic Institute claimed to have discovered element 87 (in addition to 85) when analyzing pollucite and lepidolite using his magneto-optical machine. Allison requested that it be named virginium after his home state of Virginia, along with the symbols Vi and Vm. In 1934, H.G. MacPherson of UC Berkeley disproved the effectiveness of Allison's device and the validity of his discovery. In 1936, Romanian physicist Horia Hulubei and his French colleague Yvette Cauchois also analyzed pollucite, this time using their high-resolution X-ray apparatus. They observed several weak emission lines, which they presumed to be those of element 87. Hulubei and Cauchois reported their discovery and proposed the name moldavium, along with the symbol Ml, after Moldavia, the Romanian province where Hulubei was born. In 1937, Hulubei's work was criticized by American physicist F. H. Hirsh Jr., who rejected Hulubei's research methods. Hirsh was certain that eka-caesium would not be found in nature, and that Hulubei had instead observed mercury or bismuth X-ray lines. Hulubei insisted that his X-ray apparatus and methods were too accurate to make such a mistake. Because of this, Jean Baptiste Perrin, Nobel Prize winner and Hulubei's mentor, endorsed moldavium as the true eka-caesium over Marguerite Perey's recently discovered francium. Perey took pains to be accurate and detailed in her criticism of Hulubei's work, and finally she was credited as the sole discoverer of element 87. All other previous purported discoveries of element 87 were ruled out due to francium's very limited half-life. Perey's analysis Eka-caesium was discovered on January 7, 1939, by Marguerite Perey of the Curie Institute in Paris, when she purified a sample of actinium-227 which had been reported to have a decay energy of 220 keV. Perey noticed decay particles with an energy level below 80 keV. Perey thought this decay activity might have been caused by a previously unidentified decay product, one which was separated during purification, but emerged again out of the pure actinium-227. Various tests eliminated the possibility of the unknown element being thorium, radium, lead, bismuth, or thallium. The new product exhibited chemical properties of an alkali metal (such as coprecipitating with caesium salts), which led Perey to believe that it was element 87, produced by the alpha decay of actinium-227. Perey then attempted to determine the proportion of beta decay to alpha decay in actinium-227. Her first test put the alpha branching at 0.6%, a figure which she later revised to 1%. Perey named the new isotope actinium-K (it is now referred to as francium-223) and in 1946, she proposed the name catium (Cm) for her newly discovered element, as she believed it to be the most electropositive cation of the elements. Irène Joliot-Curie, one of Perey's supervisors, opposed the name due to its connotation of cat rather than cation; furthermore, the symbol coincided with that which had since been assigned to curium. 
Perey then suggested francium, after France. This name was officially adopted by the International Union of Pure and Applied Chemistry (IUPAC) in 1949, becoming the second element after gallium to be named after France. It was assigned the symbol Fa, but it was revised to the current Fr shortly thereafter. Francium was the last element discovered in nature, rather than synthesized, following hafnium and rhenium. Further research into francium's structure was carried out by, among others, Sylvain Lieberman and his team at CERN in the 1970s and 1980s. Occurrence 223Fr is the result of the alpha decay of 227Ac and can be found in trace amounts in uranium minerals. In a given sample of uranium, there is estimated to be only one francium atom for every 1 × 10¹⁸ uranium atoms. Only a minute total quantity of francium is present naturally in the Earth's crust. Production Francium can be synthesized by a fusion reaction when a gold-197 target is bombarded with a beam of oxygen-18 atoms from a linear accelerator in a process originally developed at the physics department of the State University of New York at Stony Brook in 1995. Depending on the energy of the oxygen beam, the reaction can yield francium isotopes with masses of 209, 210, and 211. 197Au + 18O → 209Fr + 6 n 197Au + 18O → 210Fr + 5 n 197Au + 18O → 211Fr + 4 n The francium atoms leave the gold target as ions, which are neutralized by collision with yttrium and then isolated in a magneto-optical trap (MOT) in a gaseous unconsolidated state. Although the atoms only remain in the trap for about 30 seconds before escaping or undergoing nuclear decay, the process supplies a continual stream of fresh atoms. The result is a steady state containing a fairly constant number of atoms for a much longer time. The original apparatus could trap up to a few thousand atoms, while a later improved design could trap over 300,000 at a time. Sensitive measurements of the light emitted and absorbed by the trapped atoms provided the first experimental results on various transitions between atomic energy levels in francium. Initial measurements show very good agreement between experimental values and calculations based on quantum theory. The research project using this production method relocated to TRIUMF in 2012, where over 10⁶ francium atoms have been held at a time, including large amounts of 209Fr in addition to 207Fr and 221Fr. Other synthesis methods include bombarding radium with neutrons, and bombarding thorium with protons, deuterons, or helium ions. 223Fr can also be isolated from samples of its parent 227Ac, the francium being milked via elution with NH4Cl–CrO3 from an actinium-containing cation exchanger and purified by passing the solution through a silicon dioxide compound loaded with barium sulfate. In 1996, the Stony Brook group trapped 3000 atoms in their MOT, which was enough for a video camera to capture the light given off by the atoms as they fluoresce. Francium has not been synthesized in amounts large enough to weigh. Notes References External links Francium at The Periodic Table of Videos (University of Nottingham) WebElements.com – Francium Stony Brook University Physics Dept. Scerri, Eric (2013). A Tale of Seven Elements, Oxford University Press, Oxford. Chemical elements Alkali metals Eponyms Science and technology in France Chemical elements with body-centered cubic structure Chemical elements predicted by Dmitri Mendeleev
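As an illustrative check of the production reactions and the natural-abundance figure given above, the following sketch does the mass-number bookkeeping for the gold–oxygen fusion route and converts the one-atom-per-10¹⁸ ratio into atoms per kilogram of uranium. The 238 g/mol molar mass and the Avogadro constant are standard values rather than figures from the text.

```python
# Mass-number bookkeeping: 197Au + 18O -> (215 - x)Fr + x n, so the francium
# mass number is 215 minus the number of evaporated neutrons.
AU, O = 197, 18
for neutrons_out in (6, 5, 4):
    print(f"197Au + 18O -> {AU + O - neutrons_out}Fr + {neutrons_out} n")
# -> 209Fr, 210Fr and 211Fr, matching the isotopes quoted above.

# Rarity estimate: at one francium atom per 1e18 uranium atoms, even a full
# kilogram of uranium contains only a few million francium atoms, i.e. on the
# order of a femtogram of the element.
AVOGADRO = 6.022e23
uranium_atoms_per_kg = 1000.0 / 238.0 * AVOGADRO   # ~2.5e24 atoms (238 g/mol)
print(uranium_atoms_per_kg / 1e18)                 # ~2.5e6 francium atoms per kg
```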
Francium
[ "Physics", "Chemistry" ]
4,062
[ "Periodic table", "Chemical elements", "Atoms", "Matter", "Chemical elements predicted by Dmitri Mendeleev" ]
10,822
https://en.wikipedia.org/wiki/Fermium
Fermium is a synthetic chemical element; it has symbol Fm and atomic number 100. It is an actinide and the heaviest element that can be formed by neutron bombardment of lighter elements, and hence the last element that can be prepared in macroscopic quantities, although pure fermium metal has not been prepared yet. A total of 20 isotopes are known, with 257Fm being the longest-lived with a half-life of 100.5 days. Fermium was discovered in the debris of the first hydrogen bomb explosion in 1952, and named after Enrico Fermi, one of the pioneers of nuclear physics. Its chemistry is typical for the late actinides, with a preponderance of the +3 oxidation state but also an accessible +2 oxidation state. Owing to the small amounts of produced fermium and all of its isotopes having relatively short half-lives, there are currently no uses for it outside basic scientific research. Discovery Fermium was first discovered in the fallout from the 'Ivy Mike' nuclear test (1 November 1952), the first successful test of a hydrogen bomb. Initial examination of the debris from the explosion had shown the production of a new isotope of plutonium, 244Pu: this could only have formed by the absorption of six neutrons by a uranium-238 nucleus followed by two β− decays. At the time, the absorption of neutrons by a heavy nucleus was thought to be a rare process, but the identification of 244Pu raised the possibility that still more neutrons could have been absorbed by the uranium nuclei, leading to new elements. Element 99 (einsteinium) was quickly discovered on filter papers which had been flown through clouds from the explosion (the same sampling technique that had been used to discover 244Pu). It was then identified in December 1952 by Albert Ghiorso and co-workers at the University of California at Berkeley. They discovered the isotope 253Es (half-life 20.5 days), made by the capture of 15 neutrons by uranium-238 nuclei to give 253U, which then underwent seven successive beta decays: 238U + 15 n → 253U → (seven β− decays) → 253Es. Some 238U atoms, however, could capture a larger number of neutrons (most likely 16 or 17). The discovery of fermium (Z = 100) required more material, as the yield was expected to be at least an order of magnitude lower than that of element 99, and so contaminated coral from the Enewetak atoll (where the test had taken place) was shipped to the University of California Radiation Laboratory in Berkeley, California, for processing and analysis. About two months after the test, a new component was isolated emitting high-energy α-particles (7.1 MeV) with a half-life of about a day. With such a short half-life, it could only arise from the β− decay of an isotope of einsteinium, and so had to be an isotope of the new element 100: it was quickly identified as 255Fm (t1/2 ≈ 20.1 hours). The discovery of the new elements, and the new data on neutron capture, were initially kept secret on the orders of the U.S. military until 1955 due to Cold War tensions. Nevertheless, the Berkeley team was able to prepare elements 99 and 100 by civilian means, through the neutron bombardment of plutonium-239, and published this work in 1954 with the disclaimer that these were not the first studies that had been carried out on the elements. The "Ivy Mike" studies were declassified and published in 1955. The Berkeley team had been worried that another group might discover lighter isotopes of element 100 through ion-bombardment techniques before they could publish their classified research, and this proved to be the case.
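The capture-then-decay bookkeeping described above can be sketched in a few lines; this is a toy (Z, A) tally for illustration, not a nuclear-physics model.

```python
def capture_and_beta_decay(z: int, a: int, neutrons_captured: int, beta_decays: int) -> tuple[int, int]:
    """Neutron capture raises the mass number A; each beta-minus decay raises Z by one."""
    return z + beta_decays, a + neutrons_captured

# Uranium-238 (Z = 92) capturing 6 neutrons and undergoing 2 beta decays gives 244Pu (Z = 94).
print(capture_and_beta_decay(92, 238, 6, 2))    # (94, 244)

# Capturing 15 neutrons followed by 7 beta decays gives 253Es (Z = 99).
print(capture_and_beta_decay(92, 238, 15, 7))   # (99, 253)

# Capturing 17 neutrons (within the "16 or 17" range mentioned in the text)
# followed by 8 beta decays reaches element 100 as 255Fm.
print(capture_and_beta_decay(92, 238, 17, 8))   # (100, 255)
```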
A group at the Nobel Institute for Physics in Stockholm independently discovered the element, producing an isotope later confirmed to be 250Fm (t1/2 = 30 minutes) by bombarding a uranium-238 target with oxygen-16 ions, and published their work in May 1954. Nevertheless, the priority of the Berkeley team was generally recognized, and with it the prerogative to name the new element in honour of Enrico Fermi, the developer of the first artificial self-sustained nuclear reactor. Fermi was still alive when the name was proposed, but had died by the time it became official. Isotopes There are 20 isotopes of fermium listed in NUBASE 2016, with atomic weights of 241 to 260, of which 257Fm is the longest-lived with a half-life of 100.5 days. 253Fm has a half-life of 3 days, while 251Fm has one of 5.3 h, 252Fm of 25.4 h, 254Fm of 3.2 h, 255Fm of 20.1 h, and 256Fm of 2.6 hours. All the remaining ones have half-lives ranging from 30 minutes to less than a millisecond. The neutron capture product of fermium-257, 258Fm, undergoes spontaneous fission with a half-life of just 370(14) microseconds; 259Fm and 260Fm also undergo spontaneous fission (t1/2 = 1.5(3) s and 4 ms respectively). This means that neutron capture cannot be used to create nuclides with a mass number greater than 257, unless carried out in a nuclear explosion. As 257Fm alpha decays to 253Cf, and no known fermium isotopes undergo beta minus decay to the next element, mendelevium, fermium is also the last element that can be synthesized by neutron-capture. Because of this impediment in forming heavier isotopes, these short-lived isotopes 258Fm–260Fm constitute the "fermium gap." Occurrence Production Fermium is produced by the bombardment of lighter actinides with neutrons in a nuclear reactor. Fermium-257 is the heaviest isotope that is obtained via neutron capture, and can only be produced in picogram quantities. The major source is the 85 MW High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, USA, which is dedicated to the production of transcurium (Z > 96) elements. Lower mass fermium isotopes are available in greater quantities, though these isotopes (254Fm and 255Fm) are comparatively short-lived. In a "typical processing campaign" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium and einsteinium, and picogram quantities of fermium. However, nanogram quantities of fermium can be prepared for specific experiments. The quantities of fermium produced in 20–200 kiloton thermonuclear explosions are believed to be of the order of milligrams, although it is mixed in with a huge quantity of debris; 4.0 picograms of 257Fm was recovered from 10 kilograms of debris from the "Hutch" test (16 July 1969). The Hutch experiment produced an estimated total of 250 micrograms of 257Fm. After production, the fermium must be separated from other actinides and from lanthanide fission products. This is usually achieved by ion-exchange chromatography, with the standard process using a cation exchanger such as Dowex 50 or TEVA eluted with a solution of ammonium α-hydroxyisobutyrate. Smaller cations form more stable complexes with the α-hydroxyisobutyrate anion, and so are preferentially eluted from the column. A rapid fractional crystallization method has also been described.
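To put the recovered quantities above in perspective, the following sketch converts the 4.0 picograms of 257Fm from the Hutch debris into a number of atoms, using the Avogadro constant and the mass number as an approximate molar mass; both of those are standard values rather than figures from the text.

```python
AVOGADRO = 6.022e23          # atoms per mole
MOLAR_MASS_FM257 = 257.0     # g/mol, taken as the mass number for this rough estimate

def atoms_from_mass(mass_grams: float, molar_mass: float = MOLAR_MASS_FM257) -> float:
    """Number of atoms in a sample: n = (m / M) * N_A."""
    return mass_grams / molar_mass * AVOGADRO

print(atoms_from_mass(4.0e-12))    # ~9.4e9 atoms in the 4.0 pg recovered from "Hutch"
print(atoms_from_mass(250.0e-6))   # ~5.9e17 atoms in the estimated 250 micrograms produced
```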
Although the most stable isotope of fermium is 257Fm, with a half-life of 100.5 days, most studies are conducted on 255Fm (t1/2 = 20.07(7) hours), since this isotope can be easily isolated as required as the decay product of 255Es (t1/2 = 39.8(12) days). Synthesis in nuclear explosions The analysis of the debris at the 10-megaton Ivy Mike nuclear test was a part of long-term project, one of the goals of which was studying the efficiency of production of transuranium elements in high-power nuclear explosions. The motivation for these experiments was as follows: synthesis of such elements from uranium requires multiple neutron capture. The probability of such events increases with the neutron flux, and nuclear explosions are the most powerful neutron sources, providing densities on the order 10 neutrons/cm within a microsecond, i.e. about 10 neutrons/(cm·s). For comparison, the flux of the HFIR reactor is 5 neutrons/(cm·s). A dedicated laboratory was set up right at Enewetak Atoll for preliminary analysis of debris, as some isotopes could have decayed by the time the debris samples reached the U.S. The laboratory was receiving samples for analysis, as soon as possible, from airplanes equipped with paper filters which flew over the atoll after the tests. Whereas it was hoped to discover new chemical elements heavier than fermium, those were not found after a series of megaton explosions conducted between 1954 and 1956 at the atoll. The atmospheric results were supplemented by the underground test data accumulated in the 1960s at the Nevada Test Site, as it was hoped that powerful explosions conducted in confined space might result in improved yields and heavier isotopes. Apart from traditional uranium charges, combinations of uranium with americium and thorium have been tried, as well as a mixed plutonium-neptunium charge. They were less successful in terms of yield, which was attributed to stronger losses of heavy isotopes due to enhanced fission rates in heavy-element charges. Isolation of the products was found to be rather problematic, as the explosions were spreading debris through melting and vaporizing rocks under the great depth of 300–600 meters, and drilling to such depth in order to extract the products was both slow and inefficient in terms of collected volumes. Among the nine underground tests, which were carried between 1962 and 1969 and codenamed Anacostia (5.2 kilotons, 1962), Kennebec (<5 kilotons, 1963), Par (38 kilotons, 1964), Barbel (<20 kilotons, 1964), Tweed (<20 kilotons, 1965), Cyclamen (13 kilotons, 1966), Kankakee (20-200 kilotons, 1966), Vulcan (25 kilotons, 1966) and Hutch (20-200 kilotons, 1969), the last one was most powerful and had the highest yield of transuranium elements. In the dependence on the atomic mass number, the yield showed a saw-tooth behavior with the lower values for odd isotopes, due to their higher fission rates. The major practical problem of the entire proposal, however, was collecting the radioactive debris dispersed by the powerful blast. Aircraft filters adsorbed only about 4 of the total amount and collection of tons of corals at Enewetak Atoll increased this fraction by only two orders of magnitude. Extraction of about 500 kilograms of underground rocks 60 days after the Hutch explosion recovered only about 10 of the total charge. The amount of transuranium elements in this 500-kg batch was only 30 times higher than in a 0.4 kg rock picked up 7 days after the test. 
This observation demonstrated the highly nonlinear dependence of the transuranium element yield on the amount of retrieved radioactive rock. In order to accelerate sample collection after the explosion, shafts were drilled at the site not after but before the test, so that the explosion would expel radioactive material from the epicenter, through the shafts, to collecting volumes near the surface. This method was tried in the Anacostia and Kennebec tests and instantly provided hundreds of kilograms of material, but with actinide concentrations 3 times lower than in samples obtained after drilling; whereas such a method could have been efficient in scientific studies of short-lived isotopes, it could not improve the overall collection efficiency of the produced actinides. Though no new elements (apart from einsteinium and fermium) could be detected in the nuclear test debris, and the total yields of transuranium elements were disappointingly low, these tests did provide significantly higher amounts of rare heavy isotopes than previously available in laboratories. For example, 6×10⁹ atoms of 257Fm could be recovered after the Hutch detonation. They were then used in studies of the thermal-neutron induced fission of 257Fm and in the discovery of a new fermium isotope, 258Fm. Also, the rare isotope 250Cm was synthesized in large quantities; it is very difficult to produce in nuclear reactors from its progenitor 249Cm, since the half-life of 249Cm (64 minutes) is much too short for months-long reactor irradiations, but is very "long" on the explosion timescale. Natural occurrence Because of the short half-life of all known isotopes of fermium, any primordial fermium, that is, fermium present on Earth during its formation, has decayed by now. Synthesis of fermium from naturally occurring uranium and thorium in the Earth's crust requires multiple neutron captures, which is extremely unlikely. Therefore, most fermium is produced on Earth in laboratories, high-power nuclear reactors, or in nuclear tests, and is present for only a few months afterward. The transuranic elements americium to fermium did occur naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Chemistry The chemistry of fermium has only been studied in solution using tracer techniques, and no solid compounds have been prepared. Under normal conditions, fermium exists in solution as the Fm3+ ion, which has a hydration number of 16.9 and an acid dissociation constant of 1.6×10⁻⁴ (pKa = 3.8). Fm3+ forms complexes with a wide variety of organic ligands with hard donor atoms such as oxygen, and these complexes are usually more stable than those of the preceding actinides. It also forms anionic complexes with ligands such as chloride or nitrate and, again, these complexes appear to be more stable than those formed by einsteinium or californium. It is believed that the bonding in the complexes of the later actinides is mostly ionic in character: the Fm3+ ion is expected to be smaller than the preceding An3+ ions because of the higher effective nuclear charge of fermium, and hence fermium would be expected to form shorter and stronger metal–ligand bonds. Fermium(III) can be fairly easily reduced to fermium(II), for example with samarium(II) chloride, with which fermium(II) coprecipitates. In the precipitate, the compound fermium(II) chloride (FmCl2) was produced, though it was not purified or studied in isolation. 
The electrode potential has been estimated to be similar to that of the ytterbium(III)/(II) couple, or about −1.15 V with respect to the standard hydrogen electrode, a value which agrees with theoretical calculations. The Fm2+/Fm0 couple has an electrode potential of −2.37(10) V based on polarographic measurements. Toxicity Though few people come in contact with fermium, the International Commission on Radiological Protection has set annual exposure limits for the two most stable isotopes. For fermium-253, the ingestion limit was set at 10⁷ becquerels (1 Bq equals one decay per second), and the inhalation limit at 10⁵ Bq; for fermium-257, at 10⁵ Bq and 4,000 Bq respectively. Notes and references Notes References Further reading Robert J. Silva: Fermium, Mendelevium, Nobelium, and Lawrencium, in: Lester R. Morss, Norman M. Edelstein, Jean Fuger (eds.): The Chemistry of the Actinide and Transactinide Elements, Springer, Dordrecht 2006, pp. 1621–1651. Seaborg, Glenn T. (ed.) (1978) Proceedings of the Symposium Commemorating the 25th Anniversary of Elements 99 and 100, 23 January 1978, Report LBL-7701 Gmelins Handbuch der anorganischen Chemie, System Nr. 71, Transurane: Teil A 1 II, pp. 19–20; Teil A 2, p. 47; Teil B 1, p. 84. External links Fermium at The Periodic Table of Videos (University of Nottingham) Chemical elements Chemical elements with face-centered cubic structure Actinides Synthetic elements
Fermium
[ "Physics", "Chemistry" ]
3,423
[ "Matter", "Chemical elements", "Synthetic materials", "Synthetic elements", "Atoms", "Radioactivity" ]
10,826
https://en.wikipedia.org/wiki/Fax
Fax (short for facsimile), sometimes called telecopying or telefax (short for telefacsimile), is the telephonic transmission of scanned printed material (both text and images), normally to a telephone number connected to a printer or other output device. The original document is scanned with a fax machine (or a telecopier), which processes the contents (text or images) as a single fixed graphic image, converting it into a bitmap, and then transmitting it through the telephone system in the form of audio-frequency tones. The receiving fax machine interprets the tones and reconstructs the image, printing a paper copy. Early systems used direct conversions of image darkness to audio tone in a continuous or analog manner. Since the 1980s, most machines transmit an audio-encoded digital representation of the page, using data compression to transmit areas that are all-white or all-black, more quickly. Initially a niche product, fax machines became ubiquitous in offices in the 1980s and 1990s. However, they have largely been rendered obsolete by Internet-based technologies such as email and the World Wide Web, but are still used in some medical administration and law enforcement settings. History Wire transmission Scottish inventor Alexander Bain worked on chemical-mechanical fax-type devices and in 1846 Bain was able to reproduce graphic signs in laboratory experiments. He received British patent 9745 on May 27, 1843, for his "Electric Printing Telegraph". Frederick Bakewell made several improvements on Bain's design and demonstrated a telefax machine. The Pantelegraph was invented by the Italian physicist Giovanni Caselli. He introduced the first commercial telefax service between Paris and Lyon in 1865, some 11 years before the invention of the telephone. In 1880, English inventor Shelford Bidwell constructed the scanning phototelegraph that was the first telefax machine to scan any two-dimensional original, not requiring manual plotting or drawing. An account of Henry Sutton's "telephane" was published in 1896. Around 1900, German physicist Arthur Korn invented the Bildtelegraph, widespread in continental Europe especially following a widely noticed transmission of a wanted-person photograph from Paris to London in 1908, used until the wider distribution of the radiofax. Its main competitors were the Bélinographe by Édouard Belin first, then since the 1930s the Hellschreiber, invented in 1929 by German inventor Rudolf Hell, a pioneer in mechanical image scanning and transmission. The 1888 invention of the telautograph by Elisha Gray marked a further development in fax technology, allowing users to send signatures over long distances, thus allowing the verification of identification or ownership over long distances. On May 19, 1924, scientists of the AT&T Corporation "by a new process of transmitting pictures by electricity" sent 15 photographs by telephone from Cleveland to New York City, such photos being suitable for newspaper reproduction. Previously, photographs had been sent over the radio using this process. The Western Union "Deskfax" fax machine, announced in 1948, was a compact machine that fit comfortably on a desktop, using special spark printer paper. Wireless transmission As a designer for the Radio Corporation of America (RCA), in 1924, Richard H. Ranger invented the wireless photoradiogram, or transoceanic radio facsimile, the forerunner of today's "fax" machines. 
A photograph of President Calvin Coolidge sent from New York to London on November 29, 1924, became the first photo picture reproduced by transoceanic radio facsimile. Commercial use of Ranger's product began two years later. Also in 1924, Herbert E. Ives of AT&T transmitted and reconstructed the first color facsimile, a natural-color photograph of silent film star Rudolph Valentino in period costume, using red, green and blue color separations. Beginning in the late 1930s, the Finch Facsimile system was used to transmit a "radio newspaper" to private homes via commercial AM radio stations and ordinary radio receivers equipped with Finch's printer, which used thermal paper. Sensing a new and potentially golden opportunity, competitors soon entered the field, but the printer and special paper were expensive luxuries, AM radio transmission was very slow and vulnerable to static, and the newspaper was too small. After more than ten years of repeated attempts by Finch and others to establish such a service as a viable business, the public, apparently quite content with its cheaper and much more substantial home-delivered daily newspapers, and with conventional spoken radio bulletins to provide any "hot" news, still showed only a passing curiosity about the new medium. By the late 1940s, radiofax receivers were sufficiently miniaturized to be fitted beneath the dashboard of Western Union's "Telecar" telegram delivery vehicles. In the 1960s, the United States Army transmitted the first photograph via satellite facsimile to Puerto Rico from the Deal Test Site using the Courier satellite. Radio fax is still in limited use today for transmitting weather charts and information to ships at sea. The closely related technology of slow-scan television is still used by amateur radio operators. Telephone transmission In 1964, Xerox Corporation introduced (and patented) what many consider to be the first commercialized version of the modern fax machine, under the name (LDX) or Long Distance Xerography. This model was superseded two years later with a unit that would set the standard for fax machines for years to come. Up until this point facsimile machines were very expensive and hard to operate. In 1966, Xerox released the Magnafax Telecopiers, a smaller, facsimile machine. This unit was far easier to operate and could be connected to any standard telephone line. This machine was capable of transmitting a letter-sized document in about six minutes. The first sub-minute, digital fax machine was developed by Dacom, which built on digital data compression technology originally developed at Lockheed for satellite communication. By the late 1970s, many companies around the world (especially Japanese firms) had entered the fax market. Very shortly after this, a new wave of more compact, faster and efficient fax machines would hit the market. Xerox continued to refine the fax machine for years after their ground-breaking first machine. In later years it would be combined with copier equipment to create the hybrid machines we have today that copy, scan and fax. Some of the lesser known capabilities of the Xerox fax technologies included their Ethernet enabled Fax Services on their 8000 workstations in the early 1980s. Prior to the introduction of the ubiquitous fax machine, one of the first being the Exxon Qwip in the mid-1970s, facsimile machines worked by optical scanning of a document or drawing spinning on a drum. 
The reflected light, varying in intensity according to the light and dark areas of the document, was focused on a photocell so that the current in a circuit varied with the amount of light. This current was used to control a tone generator (a modulator), the current determining the frequency of the tone produced. This audio tone was then transmitted using an acoustic coupler (a speaker, in this case) attached to the microphone of a common telephone handset. At the receiving end, a handset's speaker was attached to an acoustic coupler (a microphone), and a demodulator converted the varying tone into a variable current that controlled the mechanical movement of a pen or pencil to reproduce the image on a blank sheet of paper on an identical drum rotating at the same rate. Computer facsimile interface In 1985, Hank Magnuski, founder of GammaLink, produced the first computer fax board, called GammaFax. Such boards could provide voice telephony via Analog Expansion Bus. In the 21st century Although businesses usually maintain some kind of fax capability, the technology has faced increasing competition from Internet-based alternatives. In some countries, because electronic signatures on contracts are not yet recognized by law, while faxed contracts with copies of signatures are, fax machines enjoy continuing support in business. In Japan, faxes are still used extensively as of September 2020 for cultural and graphological reasons. They are available for sending to both domestic and international recipients from over 81% of all convenience stores nationwide. Convenience-store fax machines commonly print the slightly re-sized content of the sent fax in the electronic confirmation-slip, in A4 paper size. Use of fax machines for reporting cases during the COVID-19 pandemic has been criticised in Japan for introducing data errors and delays in reporting, slowing response efforts to contain the spread of infections and hindering the transition to remote work. In many corporate environments, freestanding fax machines have been replaced by fax servers and other computerized systems capable of receiving and storing incoming faxes electronically, and then routing them to users on paper or via an email (which may be secured). Such systems have the advantage of reducing costs by eliminating unnecessary printouts and reducing the number of inbound analog phone lines needed by an office. The once ubiquitous fax machine has also begun to disappear from the small office and home office environments. Remotely hosted fax-server services are widely available from VoIP and e-mail providers allowing users to send and receive faxes using their existing e-mail accounts without the need for any hardware or dedicated fax lines. Personal computers have also long been able to handle incoming and outgoing faxes using analog modems or ISDN, eliminating the need for a stand-alone fax machine. These solutions are often ideally suited for users who only very occasionally need to use fax services. In July 2017 the United Kingdom's National Health Service was said to be the world's largest purchaser of fax machines because the digital revolution had largely bypassed it. In June 2018 the Labour Party said that the NHS had at least 11,620 fax machines in operation and in December the Department of Health and Social Care said that no more fax machines could be bought from 2019 and that the existing ones must be replaced by secure email by March 31, 2020. 
Leeds Teaching Hospitals NHS Trust, generally viewed as digitally advanced in the NHS, was engaged in a process of removing its fax machines in early 2019. This involved quite a lot of e-fax solutions because of the need to communicate with pharmacies and nursing homes which may not have access to the NHS email system and may need something in their paper records. In 2018 two-thirds of Canadian doctors reported that they primarily used fax machines to communicate with other doctors. Faxes are still seen as safer and more secure and electronic systems are often unable to communicate with each other. Hospitals are the leading users for fax machines in the United States where some doctors prefer fax machines over emails, often due to concerns about accidentally violating HIPAA. Capabilities There are several indicators of fax capabilities: group, class, data transmission rate, and conformance with ITU-T (formerly CCITT) recommendations. Since the 1968 Carterfone decision, most fax machines have been designed to connect to standard PSTN lines and telephone numbers. Group Analog Group 1 and 2 faxes are sent in the same manner as a frame of analog television, with each scanned line transmitted as a continuous analog signal. Horizontal resolution depended upon the quality of the scanner, transmission line, and the printer. Analog fax machines are obsolete and no longer manufactured. ITU-T Recommendations T.2 and T.3 were withdrawn as obsolete in July 1996. Group 1 faxes conform to the ITU-T Recommendation T.2. Group 1 faxes take six minutes to transmit a single page, with a vertical resolution of 96 scan lines per inch. Group 1 fax machines are obsolete and no longer manufactured. Group 2 faxes conform to the ITU-T Recommendations T.3 and T.30. Group 2 faxes take three minutes to transmit a single page, with a vertical resolution of 96 scan lines per inch. Group 2 fax machines are almost obsolete, and are no longer manufactured. Group 2 fax machines can interoperate with Group 3 fax machines. Digital A major breakthrough in the development of the modern facsimile system was the result of digital technology, where the analog signal from scanners was digitized and then compressed, resulting in the ability to transmit high rates of data across standard phone lines. The first digital fax machine was the Dacom Rapidfax first sold in late 1960s, which incorporated digital data compression technology developed by Lockheed for transmission of images from satellites. Group 3 and 4 faxes are digital formats and take advantage of digital compression methods to greatly reduce transmission times. Group 3 faxes conform to the ITU-T Recommendations T.30 and T.4. Group 3 faxes take between 6 and 15 seconds to transmit a single page (not including the initial time for the fax machines to handshake and synchronize). The horizontal and vertical resolutions are allowed by the T.4 standard to vary among a set of fixed resolutions: Horizontal: 100 scan lines per inch Vertical: 100 scan lines per inch ("Basic") Horizontal: 200 or 204 scan lines per inch Vertical: 100 or 98 scan lines per inch ("Standard") Vertical: 200 or 196 scan lines per inch ("Fine") Vertical: 400 or 391 (note not 392) scan lines per inch ("Superfine") Horizontal: 300 scan lines per inch Vertical: 300 scan lines per inch Horizontal: 400 or 408 scan lines per inch Vertical: 400 or 391 scan lines per inch ("Ultrafine") Group 4 faxes are designed to operate over 64 kbit/s digital ISDN circuits. 
They conform to the ITU-T Recommendations T.563 (Terminal characteristics for Group 4 facsimile apparatus), T.503 (Document application profile for the interchange of Group 4 facsimile documents), T.521 (Communication application profile BT0 for document bulk transfer based on the session service), T.6 (Facsimile coding schemes and coding control functions for Group 4 facsimile apparatus) specifying resolutions, a superset of the resolutions from T.4, T.62 (Control procedures for teletex and Group 4 facsimile services), T.70 (Network-independent basic transport service for the telematic services), and T.411 to T.417 (concerned with aspects of the Open Document Architecture). Fax Over IP (FoIP) can transmit and receive pre-digitized documents at near-realtime speeds using ITU-T recommendation T.38 to send digitised images over an IP network using JPEG compression. T.38 is designed to work with VoIP services and often supported by analog telephone adapters used by legacy fax machines that need to connect through a VoIP service. Scanned documents are limited to the amount of time the user takes to load the document in a scanner and for the device to process a digital file. The resolution can vary from as little as 150 DPI to 9600 DPI or more. This type of faxing is not related to the e-mail–to–fax service that still uses fax modems at least one way. Class Computer modems are often designated by a particular fax class, which indicates how much processing is offloaded from the computer's CPU to the fax modem. Class 1 (also known as Class 1.0) fax devices do fax data transfer, while the T.4/T.6 data compression and T.30 session management are performed by software on a controlling computer. This is described in ITU-T recommendation T.31. What is commonly known as "Class 2" is an unofficial class of fax devices that perform T.30 session management themselves, but the T.4/T.6 data compression is performed by software on a controlling computer. Implementations of this "class" are based on draft versions of the standard that eventually significantly evolved to become Class 2.0. All implementations of "Class 2" are manufacturer-specific. Class 2.0 is the official ITU-T version of Class 2 and is commonly known as Class 2.0 to differentiate it from many manufacturer-specific implementations of what is commonly known as "Class 2". It uses a different but standardized command set than the various manufacturer-specific implementations of "Class 2". The relevant ITU-T recommendation is T.32. Class 2.1 is an improvement of Class 2.0 that implements faxing over V.34 (33.6 kbit/s), which boosts faxing speed from fax classes "2" and 2.0, which are limited to 14.4 kbit/s. The relevant ITU-T recommendation is T.32 Amendment 1. Class 2.1 fax devices are referred to as "super G3". Data transmission rate Several different telephone-line modulation techniques are used by fax machines. They are negotiated during the fax-modem handshake, and the fax devices will use the highest data rate that both fax devices support, usually a minimum of 14.4 kbit/s for Group 3 fax.
ITU standard | Release date | Data rates (bit/s) | Modulation method
V.27 | 1988 | 4800, 2400 | PSK
V.29 | 1988 | 9600, 7200, 4800 | QAM
V.17 | 1991 | 14400, 12000, 9600, 7200 | TCM
V.34 | 1994 | 28800 | QAM
V.34bis | 1998 | 33600 | QAM
ISDN | 1986 | 64000 | 4B3T / 2B1Q (line coding)
"Super Group 3" faxes use V.34bis modulation that allows a data rate of up to 33.6 kbit/s. 
Compression As well as specifying the resolution (and allowable physical size) of the image being faxed, the ITU-T T.4 recommendation specifies two compression methods for decreasing the amount of data that needs to be transmitted between the fax machines to transfer the image. The two methods defined in T.4 are: Modified Huffman (MH). Modified READ (MR) (Relative Element Address Designate), optional. An additional method is specified in T.6: Modified Modified READ (MMR). Later, other compression techniques were added as options to ITU-T recommendation T.30, such as the more efficient JBIG (T.82, T.85) for bi-level content, and JPEG (T.81), T.43, MRC (T.44), and T.45 for grayscale, palette, and colour content. Fax machines can negotiate at the start of the T.30 session to use the best technique implemented on both sides. Modified Huffman Modified Huffman (MH), specified in T.4 as the one-dimensional coding scheme, is a codebook-based run-length encoding scheme optimised to efficiently compress whitespace. As most faxes consist mostly of white space, this minimises the transmission time of most faxes. Each line scanned is compressed independently of its predecessor and successor. Modified READ Modified READ, specified as an optional two-dimensional coding scheme in T.4, encodes the first scanned line using MH. The next line is compared to the first, the differences determined, and then the differences are encoded and transmitted. This is effective, as most lines differ little from their predecessor. This is not continued to the end of the fax transmission, but only for a limited number of lines until the process is reset, and a new "first line" encoded with MH is produced. This limited number of lines is to prevent errors propagating throughout the whole fax, as the standard does not provide for error correction. This is an optional facility, and some fax machines do not use MR in order to minimise the amount of computation required by the machine. The limited number of lines is 2 for "Standard"-resolution faxes, and 4 for "Fine"-resolution faxes. Modified Modified READ The ITU-T T.6 recommendation adds a further compression type of Modified Modified READ (MMR), which simply allows a greater number of lines to be coded by MR than in T.4. This is because T.6 makes the assumption that the transmission is over a circuit with a low number of line errors, such as digital ISDN. In this case, the number of lines for which the differences are encoded is not limited. JBIG In 1999, ITU-T recommendation T.30 added JBIG (ITU-T T.82) as another lossless bi-level compression algorithm, or more precisely a "fax profile" subset of JBIG (ITU-T T.85). JBIG-compressed pages result in 20% to 50% faster transmission than MMR-compressed pages, and up to 30 times faster transmission if the page includes halftone images. JBIG performs adaptive compression, that is, both the encoder and decoder collect statistical information about the transmitted image from the pixels transmitted so far, in order to predict the probability for each next pixel being either black or white. For each new pixel, JBIG looks at ten nearby, previously transmitted pixels. It counts, how often in the past the next pixel has been black or white in the same neighborhood, and estimates from that the probability distribution of the next pixel. This is fed into an arithmetic coder, which adds only a small fraction of a bit to the output sequence if the more probable pixel is then encountered. 
The ITU-T T.85 "fax profile" constrains some optional features of the full JBIG standard, such that codecs do not have to keep data about more than the last three pixel rows of an image in memory at any time. This allows the streaming of "endless" images, where the height of the image may not be known until the last row is transmitted. ITU-T T.30 allows fax machines to negotiate one of two options of the T.85 "fax profile": In "basic mode", the JBIG encoder must split the image into horizontal stripes of 128 lines (parameter L0 = 128) and restart the arithmetic encoder for each stripe. In "option mode", there is no such constraint. Matsushita Whiteline Skip A proprietary compression scheme employed on Panasonic fax machines is Matsushita Whiteline Skip (MWS). It can be overlaid on the other compression schemes, but is operative only when two Panasonic machines are communicating with one another. This system detects the blank scanned areas between lines of text, and then compresses several blank scan lines into the data space of a single character. (JBIG implements a similar technique called "typical prediction", if header flag TPBON is set to 1.) Typical characteristics Group 3 fax machines transfer one or a few printed or handwritten pages per minute in black-and-white (bitonal) at a resolution of 204×98 (normal) or 204×196 (fine) dots per square inch. The transfer rate is 14.4 kbit/s or higher for modems and some fax machines, but fax machines support speeds beginning with 2400 bit/s and typically operate at 9600 bit/s. The transferred image formats are called ITU-T (formerly CCITT) fax group 3 or 4. Group 3 faxes have the suffix .g3 and the MIME type image/g3fax. The most basic fax mode transfers in black and white only. The original page is scanned in a resolution of 1728 pixels/line and 1145 lines/page (for A4). The resulting raw data is compressed using a modified Huffman code optimized for written text, achieving average compression factors of around 20. Typically a page needs 10 s for transmission, instead of about three minutes for the same uncompressed raw data of 1728×1145 bits at a speed of 9600 bit/s. The compression method uses a Huffman codebook for run lengths of black and white runs in a single scanned line, and it can also use the fact that two adjacent scanlines are usually quite similar, saving bandwidth by encoding only the differences. Fax classes denote the way fax programs interact with fax hardware. Available classes include Class 1, Class 2, Class 2.0 and 2.1, and Intel CAS. Many modems support at least class 1 and often either Class 2 or Class 2.0. Which is preferable to use depends on factors such as hardware, software, modem firmware, and expected use. Printing process Fax machines from the 1970s to the 1990s often used direct thermal printers with rolls of thermal paper as their printing technology, but since the mid-1990s there has been a transition towards plain-paper faxes: thermal transfer printers, inkjet printers and laser printers. One of the advantages of inkjet printing is that inkjets can affordably print in color; therefore, many of the inkjet-based fax machines claim to have color fax capability. There is a standard called ITU-T30e (formally ITU-T Recommendation T.30 Annex E ) for faxing in color; however, it is not widely supported, so many of the color fax machines can only fax in color to machines from the same manufacturer. 
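The timing and compression figures quoted in the Typical characteristics section above (1728×1145 one-bit pixels per page, about three minutes uncompressed at 9600 bit/s, roughly 10 s with an average compression factor of around 20) follow from simple arithmetic, and the run-length idea behind Modified Huffman coding of mostly-white pages can be shown in a few lines. The sketch below is only an illustration: it uses plain run-length counting rather than the actual MH codebook or bit-level output, and the example scan line is made up.

```python
def run_lengths(scanline):
    """Encode a bi-level scan line (0 = white, 1 = black) as alternating run
    lengths, starting with a white run as fax coders do (a line that begins
    with black therefore gets an initial white run of length 0)."""
    runs, colour, count = [], 0, 0
    for pixel in scanline:
        if pixel == colour:
            count += 1
        else:
            runs.append(count)
            colour, count = pixel, 1
    runs.append(count)
    return runs

# A mostly blank 1728-pixel line with one short black mark:
line = [0] * 800 + [1] * 12 + [0] * 916
print(run_lengths(line))            # [800, 12, 916] -- 3 runs instead of 1728 pixels

# Page transmission time, using the figures quoted in the text:
WIDTH, LINES, BITRATE, COMPRESSION = 1728, 1145, 9600, 20
raw_bits = WIDTH * LINES                                             # 1,978,560 bits per page
print(f"uncompressed: {raw_bits / BITRATE / 60:.1f} min")            # ~3.4 min
print(f"with ~20x compression: {raw_bits / BITRATE / COMPRESSION:.0f} s")  # ~10 s
```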
Stroke speed Stroke speed in facsimile systems is the rate at which a fixed line perpendicular to the direction of scanning is crossed in one direction by a scanning or recording spot. Stroke speed is usually expressed as a number of strokes per minute. When the fax system scans in both directions, the stroke speed is twice this number. In most conventional 20th century mechanical systems, the stroke speed is equivalent to drum speed. Fax paper As a precaution, thermal fax paper is typically not accepted in archives or as documentary evidence in some courts of law unless photocopied. This is because the image-forming coating is eradicable and brittle, and it tends to detach from the medium after a long time in storage. Fax tone A CNG tone is an 1100 Hz tone transmitted by a fax machine when it calls another fax machine. Fax tones can cause complications when implementing fax over IP. Internet fax One popular alternative is to subscribe to an Internet fax service, allowing users to send and receive faxes from their personal computers using an existing email account. No software, fax server or fax machine is needed. Faxes are received as attached TIFF or PDF files, or in proprietary formats that require the use of the service provider's software. Faxes can be sent or retrieved from anywhere at any time that a user can get Internet access. Some services offer secure faxing to comply with stringent HIPAA and Gramm–Leach–Bliley Act requirements to keep medical information and financial information private and secure. Utilizing a fax service provider does not require paper, a dedicated fax line, or consumable resources. Another alternative to a physical fax machine is to make use of computer software which allows people to send and receive faxes using their own computers, utilizing fax servers and unified messaging. A virtual (email) fax can be printed out and then signed and scanned back to computer before being emailed. Also the sender can attach a digital signature to the document file. With the surging popularity of mobile phones, virtual fax machines can now be downloaded as applications for Android and iOS. These applications make use of the phone's internal camera to scan fax documents for upload or they can import from various cloud services. Related standards T.4 is the umbrella specification for fax. It specifies the standard image sizes, two forms of image-data compression (encoding), the image-data format, and references, T.30 and the various modem standards. T.6 specifies a compression scheme that reduces the time required to transmit an image by roughly 50-percent. T.30 specifies the procedures that a sending and receiving terminal use to set up a fax call, determine the image size, encoding, and transfer speed, the demarcation between pages, and the termination of the call. T.30 also references the various modem standards. V.21, V.27ter, V.29, V.17, V.34: ITU modem standards used in facsimile. The first three were ratified prior to 1980, and were specified in the original T.4 and T.30 standards. V.34 was published for fax in 1994. T.37 The ITU standard for sending a fax-image file via e-mail to the intended recipient of a fax. T.38 The ITU standard for sending Fax over IP (FoIP). G.711 pass through - this is where the T.30 fax call is carried in a VoIP call encoded as audio. This is sensitive to network packet loss, jitter and clock synchronization. 
When using voice high-compression encoding techniques such as, but not limited to, G.729, some fax tonal signals may not be correctly transported across the packet network. image/t38 MIME-type SSL Fax An emerging standard that allows a telephone based fax session to negotiate a fax transfer over the internet, but only if both sides support the standard. The standard is partially based on T.30 and is being developed by Hylafax+ developers. See also 3D Fax Black fax Called subscriber identification (CSID) Error correction mode (ECM) Fax art Fax demodulator Fax modem Fax server Faxlore Fultograph Image scanner Internet fax Junk fax Radiofax—image transmission over HF radio Slow-scan television T.38 Fax-over-IP Telautograph Telex Teletex Transmitting Subscriber Identification (TSID) Wirephoto References Further reading Coopersmith, Jonathan, Faxed: The Rise and Fall of the Fax Machine (Johns Hopkins University Press, 2015) 308 pp. "Transmitting Photographs by Telegraph", Scientific American article, 12 May 1877, p. 297 External links Group 3 Facsimile Communication—A '97 essay with technical details on compression and error codes, and call establishment and release. ITU T.30 Recommendation American inventions Computer peripherals English inventions German inventions Italian inventions ITU-T recommendations Japanese inventions Office equipment Scottish inventions Telecommunications equipment
Fax
[ "Technology" ]
6,387
[ "Computer peripherals", "Components" ]
17,453,551
https://en.wikipedia.org/wiki/Sarcodon%20imbricatus
Sarcodon imbricatus, commonly known as the shingled hedgehog or scaly hedgehog, is a species of tooth fungus in the order Thelephorales. The mushroom is edible. Many sources report it has a bitter taste, but others have found it delicious and suspect that the bitter specimens may be similar related species. The mushroom has a large, brownish cap with large brown scales and may reach 30 cm (12 in) in diameter. On the underside it sports greyish, brittle teeth instead of gills, and has white flesh. Its spore print is brown. It is associated with spruce (Picea), appearing in autumn. It ranges throughout North America and Europe, although collections from the British Isles are now assigned to the similar species Sarcodon squamosus. Taxonomy The Swedish botanist Olof Celsius reported in 1732 that Sarcodon imbricatus occurred in the vicinity of Uppsala, and Carl Linnaeus wrote of it in his 1737 work Flora lapponica. It was one of the species initially described by Linnaeus, as Hydnum imbricatum, in the second volume of his Species Plantarum in 1753. The specific epithet is the Latin imbricatus meaning "tiled" or "with overlapping tiles". It was then placed in the genus Sarcodon by Finnish mycologist Petter Adolf Karsten in 1881. For many years, Sarcodon imbricatus was described associated with both spruce and pine, although the latter forms were smaller and noted to be more palatable by mushroom hunters in Norway. Furthermore, the mushroom has been used as a source of pigment and collectors noted that fresh specimens collected under pine yielded pigment, but only old ones collected under spruce. Molecular analysis of the DNA revealed the two forms to be distinct genetically, and thus populations of what had been described as S. imbricatus were now assigned to Sarcodon squamosus, which includes collections in the British Isles and the Netherlands. Description The mushrooms, or fruiting bodies, can be quite large in size. The brownish or buff cap measures up to 30 cm (12 in) in diameter and is covered with coarse darker brown scales, becoming darker and upturned with age. It is funnel-shaped. The underside bears soft, pale grey 'teeth' rather than gills. These are 0.5–1.5 cm long, grayish brown (darkening with age), and brittle. The pale grey or brown stipe may reach high and wide, may be narrower at the base, and is sometimes eccentric. The spores are brown. Similar species From above, it may be confused with the old man of the woods (Strobilomyces strobilaceus) as both have a similar shaggy cap. The bitter and inedible Sarcodon amarascens can be distinguished by its bluish-black stripe. S. scabrosus is also similar. Distribution and habitat The fruit bodies of Sarcodon imbricatus grow in association with firs (Abies), especially in hilly or mountainous areas, and can appear on sandy or chalk soils in fairy rings. The usual fruiting season in August to October. It ranges throughout North America and Europe, although collections from the British Isles are now assigned to another species, Sarcodon squamosus. Uses Old mushrooms of Sarcodon imbricatus and related species contain blue-green pigments, which are used for dyeing wool in Norway. Edibility The fungus can be bitter, although this is less apparent in younger specimens. Submerging the mushrooms in boiling water will remove this. It can be pickled or dried and used as flavouring. In Bulgaria it is collected, dried and finely ground to be used as an aromatic mushroom flour. 
It is reported by some sources in the United States as edible but of poor quality, and by others as deliciously edible. It may cause gastrointestinal upsets. In Korea, mushroom tea is made from it. The distinctive spicy aroma of fried younger specimens has made it an expensive delicacy on the Japanese food market. References External links Sarcodon imbricatus at Mushroom Expert imbricatus Edible fungi Fungi described in 1753 Fungi of Europe Fungi of North America Taxa named by Carl Linnaeus Fungus species Fungi used for fiber dyes
Sarcodon imbricatus
[ "Biology" ]
880
[ "Fungi", "Fungus species" ]
17,453,762
https://en.wikipedia.org/wiki/Leotia%20viscosa
Leotia viscosa, commonly known as chicken lips, as well as jelly baby and green jelly drops, is a species of mushroom in the Leotiaceae family. Its stipe is yellow, and the cap is green. The cap comes in a variety of shapes. The edibility of this mushroom is unknown. It grows under oak trees or on dead logs. References External links Leotia viscosa at Mushroom Expert Helotiales Fungi of North America Fungi described in 1822 Fungus species
Leotia viscosa
[ "Biology" ]
101
[ "Fungi", "Fungus species" ]
17,454,635
https://en.wikipedia.org/wiki/Sleep%20onset%20latency
In sleep science, sleep onset latency (SOL) is the length of time that it takes to accomplish the transition from full wakefulness to sleep, normally to the lightest of the non-REM sleep stages. Sleep latency studies Pioneering Stanford University sleep researcher William C. Dement reports early development of the concept, and of the first test for it, the Multiple Sleep Latency Test (MSLT), in his book The Promise of Sleep. Dement and colleagues including Mary Carskadon had been seeking an objective measure of daytime sleepiness to help assess the effects of sleep disorders. In the course of evaluating experimental results, they realized that the amount of time it took to fall asleep in bed was closely linked to the subjects' own self-evaluated level of sleepiness. "This may not seem like an earthshaking epiphany, but conceiving and developing an objective measure of sleepiness was perhaps one of the most important advances in sleep science," Dement and coauthor Christopher Vaughn write of the discovery. When they initially developed the MSLT, Dement and others put subjects in a quiet, dark room with a bed and asked them to lie down, close their eyes and relax. They noted the number of minutes, ranging from 0 to 20, that it took a subject to fall asleep. If a volunteer was still awake after 20 minutes, the experiment was ended and the subject given a maximal alertness/minimal sleepiness rating. When scientists deprived subjects of sleep, they found sleep latency levels could drop below 1, i.e., subjects could fall asleep in less than a minute. The amount of sleep loss was directly linked to changes in sleep latency scores. The studies eventually led Dement and Carskadon to conclude that "the brain keeps an exact accounting of how much sleep it is owed". Not getting enough sleep during any given period of time leads to a phenomenon called sleep debt, which lowers sleep latency scores and makes sleep-deprived individuals fall asleep more quickly. Home testing of sleep latency For home-testing for an unusually low sleep latency and potential sleep deprivation, the authors point to a technique developed by Nathaniel Kleitman, the "father of sleep research". The subject reclines in a quiet, darkened room and drapes a hand holding a spoon over the edge of the bed or chair, placing a plate on the floor beneath the spoon. After checking the time, the subject tries to relax and fall asleep. When sleep is attained, the spoon will fall and strike the plate, awakening the subject, who then checks to see how much time has passed. The number of minutes passed is the sleep onset latency at that particular hour on that particular day. Dement advises against doing these evaluations at night when sleep onset latency can naturally be lower, particularly in older people. Instead, he suggests testing sleep onset latency during the day, ideally at 10:00 a.m., 12:30 p.m. and 3:00 p.m. A sleep onset latency of 0 to 5 minutes indicates severe sleep deprivation, 5 to 10 minutes is "troublesome", 10 to 15 minutes indicates a mild but "manageable" degree of sleep debt, and 15 to 20 minutes is indicative of "little or no" sleep debt Biomarkers of sleepiness Contemporary sleep researchers, including Paul Shaw of Washington University School of Medicine in St. Louis, have been pursuing development of biological indicators, or biomarkers, of sleepiness. 
In December 2006, Shaw reported online in The Proceedings of the National Academy of Sciences that his lab had shown that levels of amylase increased in fruit fly saliva when the flies were sleep-deprived. He then showed that human amylase also increased as human subjects were deprived of sleep. See also Sleep onset References Sleep medicine Sleep disorders Sleep
Sleep onset latency
[ "Biology" ]
782
[ "Behavior", "Sleep", "Sleep disorders", "Sleep medicine" ]
17,455,004
https://en.wikipedia.org/wiki/Pharmacokinetics%20simulation
Pharmacokinetics simulation is a simulation method used in determining the safety levels of a drug during its development. Purpose Pharmacokinetics simulation gives insight into drug efficacy and safety before individuals are exposed to the new drug, which can help to improve the design of a clinical trial. Pharmacokinetic simulations also help in therapy planning, for example to keep dosing within the therapeutic range under various physiological and pathophysiological conditions, e.g., chronic kidney disease. Simulators Simcyp Simulator and GastroPlus (from Simulations Plus) are simulators that take account of individual variability. PharmaCalc v02 and PharmaCalcCL allow users to simulate individual plasma-concentration time curves based on (published) pharmacokinetic parameters such as half-life and volume of distribution. Simulation Computational chemistry
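A minimal sketch of the kind of calculation such simulators perform is a one-compartment model with first-order elimination, where the plasma concentration after an intravenous bolus is C(t) = (dose / Vd) · e^(−kt) with k = ln 2 / t½. This is only an illustrative toy model; the dose, volume of distribution and half-life below are made-up placeholder values, not parameters taken from any of the tools named above.

```python
import math

def concentration(dose_mg: float, vd_l: float, half_life_h: float, t_h: float) -> float:
    """Plasma concentration (mg/L) after an IV bolus in a one-compartment
    model with first-order elimination."""
    k = math.log(2) / half_life_h          # elimination rate constant, 1/h
    return (dose_mg / vd_l) * math.exp(-k * t_h)

# Hypothetical drug: 500 mg bolus, Vd = 40 L, half-life = 6 h
for t in range(0, 25, 6):
    print(f"t = {t:2d} h: C = {concentration(500, 40, 6, t):.2f} mg/L")
```

Real simulators layer absorption, distribution and clearance models (and population variability) on top of this basic relation, but the concentration-time curve they produce is built from the same kind of calculation.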
Pharmacokinetics simulation
[ "Chemistry" ]
177
[ "Theoretical chemistry stubs", "Theoretical chemistry", "Computational chemistry", "Computational chemistry stubs", "Physical chemistry stubs" ]
17,455,275
https://en.wikipedia.org/wiki/Display%20aspect%20ratio
The display aspect ratio (DAR) is the aspect ratio of a display device and so the proportional relationship between the physical width and the height of the display. It is expressed as two numbers separated by a colon (x:y), where x corresponds to the width and y to the height. Common aspect ratios for displays, past and present, include 5:4, 4:3, 16:10, and 16:9. To distinguish: The display aspect ratio (DAR) is calculated from the physical width and height of a display, each measured in inches or cm (Display size). The pixel aspect ratio (PAR) is calculated from the width and height of one pixel. The storage aspect ratio (SAR) is calculated from the numbers of pixels in width and height stated in the display resolution. Because the units cancel out, all aspect ratios are unitless. Diagonal and area The size of a television set or computer monitor is given as the diagonal measurement of its display area, usually in inches. Wider aspect ratios result in smaller overall area, given the same diagonal, as the short calculation further below illustrates. TVs Most televisions were built with an aspect ratio of 4:3 until the late 2000s, when widescreen TVs with 16:9 displays became the standard. This aspect ratio was chosen as the geometric mean between 4:3 and 2.35:1, an average of the various aspect ratios used in film. While 16:9 is well-suited for modern HDTV broadcasts, older 4:3 video has to be either padded with bars on the left and right side (pillarboxed), cropped or stretched, while movies shot with wider aspect ratios are usually letterboxed, with black bars at the top and bottom. Since the turn of the 21st century, many music videos have been shot in widescreen aspect ratios. Computer displays As of 2016, most computer monitors use widescreen displays with an aspect ratio of 16:9, although some portable PCs use narrower aspect ratios like 3:2 and 16:10 while some high-end desktop monitors have adopted ultrawide displays. The different aspect ratios that have been used in computer displays are summarised in the sections below. History 4:3, 5:4 and 16:10 Until about 2003, most computer monitors used an aspect ratio of 4:3, and in some cases 5:4. For cathode ray tubes (CRTs) 4:3 was most common even in resolutions where this meant the pixels would not be square (e.g. 320×200 or 1280×1024 on a 4:3 display). Between 2003 and 2006, monitors with 16:10 aspect ratio became commonly available, first in laptops and later also in standalone computer monitors. Reasons for this transition included productive uses for such monitors beyond widescreen movie viewing and computer game play, such as displaying two standard A4 or letter pages of a word processor side by side, and showing large-size CAD drawings and CAD application menus at the same time. 16:10 was the most commonly sold aspect ratio for widescreen computer monitors until 2008. 16:9 In 2008, the computer industry started to move from 4:3 and 16:10 to 16:9 as the standard aspect ratio for monitors and laptops. A 2008 report by DisplaySearch cited a number of reasons for this shift, including the ability for PC and monitor manufacturers to expand their product ranges by offering products with wider screens and higher resolutions, helping consumers to more easily adopt such products and "stimulating the growth of the notebook PC and LCD monitor market". By 2010, virtually all computer monitor and laptop manufacturers had also moved to the 16:9 aspect ratio, and the availability of the 16:10 aspect ratio in the mass market had become very limited. 
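The geometric relationships used above can be checked with a few lines of arithmetic: physical width and height follow from the diagonal and the aspect ratio, wider ratios give less area for the same diagonal, and 16:9 is close to the geometric mean of 4:3 and 2.35:1. The sketch below is purely illustrative; the 27-inch diagonal and the example resolution are arbitrary choices, not values from the article.

```python
import math

def width_height(diagonal_in: float, ratio_w: float, ratio_h: float):
    """Physical width and height (inches) of a display with the given
    diagonal and aspect ratio."""
    unit = diagonal_in / math.hypot(ratio_w, ratio_h)
    return ratio_w * unit, ratio_h * unit

# Same 27-inch diagonal, different aspect ratios: the wider the ratio, the smaller the area.
for name, (rw, rh) in {"4:3": (4, 3), "16:10": (16, 10), "16:9": (16, 9), "21:9": (21, 9)}.items():
    w, h = width_height(27, rw, rh)
    print(f"{name:>5}: {w:5.1f} x {h:4.1f} in, area {w * h:6.1f} sq in")

# DAR = SAR x PAR: a 1280x1024 image shown with square pixels has DAR 5:4.
print("DAR of 1280x1024 with square pixels:", 1280 / 1024)            # 1.25 = 5:4

# 16:9 (~1.78) is roughly the geometric mean of 4:3 and 2.35:1.
print("geometric mean of 4:3 and 2.35:", math.sqrt((4 / 3) * 2.35))   # ~1.77
```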
In 2011, non-widescreen displays with 4:3 aspect ratios still were being manufactured, but in small quantities. The reasons for this according to Bennie Budler, product manager of IT products at Samsung South Africa was that the "demand for the old 'Square monitors' has decreased rapidly over the last couple of years". He also predicted that "by the end of 2011, production on all 4:3 or similar panels will be halted due to a lack of demand." In 2012, 1920×1080 was the most commonly used resolution among Steam users. At the same time, the most common resolution globally was 1366×768, overtaking the previous leader 1024×768. In 2021, the 2K resolution of 1920×1080 was used by two thirds of the Steam users for the primary display with 1366×768 and 2560×1440 both at about eight percent taking the majority of the remaining resolutions. 3:2 3:2 displays first appeared in laptop computers in 2001 with the PowerBook G4 line, but did not enter the mainstream until the 2010s with the Chromebook Pixel and 2-in-1 PCs like Microsoft's Surface line. As of 2018, a number of manufacturers are either producing or planning to produce portable PCs with 3:2 displays. 21:9 Since 2014, a number of high-end desktop monitors have been released that use ultrawide displays with aspect ratios that roughly match the various anamorphic formats used in film, but are commonly marketed as 21:9. Resolutions for such displays include 2560×1080 (64:27), 3440×1440 (43:18) and 3840×1600 (12:5). 32:9 In 2017, Samsung released a curved gaming display with an aspect ratio of 32:9 and resolution of 3840×1080. 256:135 Since 2011, several monitors complying with the Digital Cinema Initiatives 4K standard have been produced; this standard specifies a resolution of 4096×2160, giving an aspect ratio of ≈1.896:1. 1:1 A 1:1 aspect ratio results in a square display. One of the available monitors for desktop use of this format is Eizo EV2730Q (27", 1920 × 1920 Pixels, from 2015), however such monitors are also often found in air traffic control displays (connected using standard computer cabling, like DVI or DisplayPort) and on aircraft as part of avionic equipment (often connected directly using LVDS, SPI interfaces or other specialized means). This 1920×1920 display can also be used as the centerpiece of a three-monitor array with one WUXGA set in vertical position on each side, resulting in 4320×1920 (a ratio of 9:4) - and no distortion with the Eizo 27" 1:1 if the side displays are 22". Suitability for software and content Games From 2005 to 2013, most video games were mainly made for the 16:9 aspect ratio and 16:9 computer displays therefore offer the best compatibility. 16:9 video games are letterboxed on a 16:10 or 4:3 display or have reduced field of view. As of 2013, many games are adopting support for 21:9 ultrawide resolutions, which can give a gameplay advantage due to increased field of view, although this is not always the case. 4:3 monitors have the best compatibility with older games released prior to 2005 when that aspect ratio was the mainstream standard for computer displays. Video As of 2017, the most common aspect ratio for TV broadcasts is 16:9, whereas movies are generally made in the wider 21:9 aspect ratio. Most modern TVs are 16:9, which causes letterboxing when viewing 21:9 content, and pillarboxing when viewing 4:3 content such as older films or TV broadcasts, unless the content is cropped or stretched to fill the entire display. 
Productivity applications For viewing documents in A4 paper size (which has a 1.41:1 aspect ratio), whether in portrait mode or two side-by-side in landscape mode, 4:3, 2:3 or 16:10 fit best. For photographs in the standard 135 film and print size (with a 3:2 aspect ratio), 2:3 or 16:10 fit best; for photographs taken with older consumer-level digital cameras, 4:3 fits perfectly. Smartphones Until 2010, smartphones used different aspect ratios, including 3:2 and 5:3. From 2010 to 2017 most smartphone manufacturers switched to using 16:9 widescreen displays, driven at least partly by the growing popularity of HD video using the same aspect ratio. Since 2017, a number of smartphones have been released using 18:9 or even wider aspect ratios (such as 19.5:9 or 20:9); such displays are expected to appear on increasingly more phones. Reasons for this trend include the ability for manufacturers to use a nominally larger display without increasing the width of the phone, being able to accommodate the on-screen navigation buttons without reducing usable app area, more area available for split-screen apps in portrait orientation, as well as the 18:9 ratio being well-suited for VR applications and the proposed Univisium film format. On the other hand, the disadvantages of taller 18:9 aspect ratio phones with some phones even going up to 20:9, 21:9, or even 22:9 in the case of Samsung's Z Flip series, is reduced one-handed reachability, being less convenient to carry around in the pocket as they stick out and reduced overall screen surface area. See also 14:9 aspect ratio Computer monitor Display resolution Field of view in video games Display resolution standards Ultrawide formats References Display technology Engineering ratios
Display aspect ratio
[ "Mathematics", "Engineering" ]
1,965
[ "Metrics", "Engineering ratios", "Quantity", "Electronic engineering", "Display technology" ]
17,455,588
https://en.wikipedia.org/wiki/Tomek%20Bartoszy%C5%84ski
Tomasz "Tomek" Bartoszyński (born 16 May 1957) is a Polish-American mathematician who works in set theory. Biography Bartoszyński studied mathematics at the University of Warsaw from 1976 to 1981, and worked there from 1981 to 1987. In 1984, he defended his Ph.D. thesis Combinatorial aspects of measure and category; his advisor was Wojciech Guzicki. In 2004, he received his habilitation from the Polish Academy of Sciences. From 1986 onward, he worked in the United States: he taught at the University of California in Berkeley and Davis. From 1990 to 2006, he was a professor (full professor from 1998-2006) at Boise State University. In 1990-1991, he visited the Hebrew University of Jerusalem as a fellow of the Lady Davis foundation, and in 1996/97 he visited the Free University of Berlin as a Humboldt fellow. Currently, he is one of the program directors at the National Science Foundation (NSF), responsible for combinatorics, foundations, and probability. Family His father, Robert Bartoszyński, is a statistician. His wife, Joanna Kania-Bartoszyńska, is the NSF program director for topology and geometric analysis. Scientific work Bartoszyński's work is mainly concerned with forcing, specifically with applications of forcing to the set theory of the real line. He has written about 50 papers in this field, as well as a monograph: Tomek Bartoszyński and Haim Judah: Set theory. On the structure of the real line. A K Peters, Ltd., Wellesley, MA, 1995; See also Cichoń's diagram Baire property References Polish set theorists American logicians Polish logicians Polish emigrants to the United States 1957 births Mathematical logicians 20th-century American mathematicians 21st-century American mathematicians Living people University of Warsaw alumni 21st-century Polish philosophers
Tomek Bartoszyński
[ "Mathematics" ]
388
[ "Mathematical logic", "Mathematical logicians" ]
17,455,827
https://en.wikipedia.org/wiki/North%E2%80%93South%20Pipeline
The North–South Pipeline, also known as the Sugarloaf Pipeline, is a water pipeline in Central Victoria, Australia, north-east of Melbourne that is part of Victoria's water system, acting as a link between Melbourne's water grid and the Murray-Goulburn water grid, supplying water via a series of existing and proposed pipelines. The 70-kilometre pipeline was connected to Melbourne in February 2010 to carry water from the Goulburn River to Melbourne's Sugarloaf Reservoir. It is the government's policy that it only be used in times of critical human need: when Melbourne's total water storages are less than 30% full on 30 November of any year. The pipeline can transfer a portion of Lake Eildon's water that is set aside for Melbourne, called the critical water reserve. This was 38,400 megalitres at 2 June 2014, and any changes are based on Goulburn-Murray Water's advice. The North–South Pipeline was presented through the late 2000s as being part of the Victorian government's "Our Water, Our Future", which included other major projects such as the Wonthaggi Desalination Plant, the Cardinia Pipeline and a proposed interconnector to Geelong. The pipeline runs between a location on the Goulburn River, near Yea and heads south towards the Sugarloaf Reservoir north-east of Melbourne, along the Melba Highway. The Goulburn River is a major tributary of the Murray-Darling river system and major agricultural region, whilst Sugarloaf Reservoir is a major storage reservoir for Melbourne's water supply. The pipeline cost $750 million and was delivered under an alliance model between Melbourne Water, John Holland, Sinclair Knight Merz and GHD. The pipeline is expected to add up to 75 billion litres annually to Melbourne's water supply, roughly one third of the 225 gigalitres proposed to be saved by irrigation and modernisation plans and projects in northern Victoria's Murray-Goulburn Irrigation District. The 225 gigalitres in savings is intended to be split 75 to Melbourne, 75 to irrigators and 75 to the watercourses themselves. Context In 2007, the Victorian Government announced the "Foodbowl Modernisation Plan" to save 225 gigalitres (GL) of water through a $1 billion investment in the Murray–Goulburn Gravity Irrigation Districts. This was later increased to $2 billion with another 200 GL of savings identified. The Goulburn is Victoria's largest and longest river, accounting for an average annual flow of 3,040 GL. Of this, about 700 GL is used within the Goulburn-Broken basin, and a further 850 GL transferred to irrigation areas outside the basin. After transmission losses of about 670 GL, a net outflow from the basin to the Murray of 350 GL remains. The population of the basin from this source is given as 100,000. Including Shepparton, Echuca and the tributary Broken catchment- it is estimated at 250,000. The notion of diverting water out of the Murray Darling Basin to provide urban water supply has stirred emotions among country and city residents. Diverting water between basins is not new however. The Snowy Mountains Scheme diverts Snowy River flows from its own catchment to the Murray Darling Basin in earlier decades. This source supplies 2,100 GL of water for generating power and providing irrigation water. This compares with the 75 GL contested water savings to be diverted out of the Basin for urban use. Although most people in Victoria live in Melbourne, the city only uses 8% of the regulated surface water, the major portion going to irrigation supply. 
As of September 2009, Melbourne's water storages were less than 30% full prior to the onset of the drier summer period. This situation had generally worsened over the course of the previous twelve years. Inflows into Melbourne's storages over that twelve-year period averaged almost 40% less than the previous long-term average. As of 2009, consumption in Melbourne was about 450 GL/year. With decreasing inflows to its water catchments and continuing population growth, a shortfall of supply of up to 200 GL/year is anticipated by 2055. A number of strategies have been proposed, including reducing individual consumer demand, recycling "grey" water and sewage, various means of conservation, and sourcing additional water from elsewhere, including the Goulburn River and the Kilcunda desalination plant. The strategy proposes to meet the projected shortfall of demand over existing supply with 42% from conservation and 53% from additional sources. South-eastern Australia experienced widespread drought at the beginning of the 21st century, which has been linked to human-induced climate change. This has impacted upon rainfall in the region. The amount of water in Melbourne's water storage dams decreased between 1998 and 2009, as a result of those droughts, despite frequent water-usage restrictions. In June 2007, the Victorian Government released its water plan, Our Water Our Future. As part of this plan, the government announced its intention to develop a seawater reverse osmosis desalination plant and construct a pipeline to augment Melbourne's water supply, as well as other regional supply systems. With the completion of the pipeline and desalination plant, it is anticipated that certain water restrictions will be removed. Route The pipeline traverses a distance of approximately 70 km from a location on the Goulburn River, near Yea, and heads in a south-southeast direction towards the Sugarloaf Reservoir northeast of Melbourne. The route travels roughly alongside the Melba Highway, southwards, until it branches off towards the reservoir. The pipeline's route cuts through parts of the Kinglake National Park and Toolangi State Forest. Criticism and opposition The North–South Pipeline has been criticised by environmental groups, irrigators and communities directly affected by the pipeline, some suggesting that the project is politically influenced and founded on an incorrect interpretation of available data. The Federal Opposition, Victorian Liberal Party, the National Party, and the Australian Greens opposed the pipeline, with these objections detailed in the minority (dissenting) report section for the Water Amendment Bill 2008 (Saving the Goulburn and Murray Rivers), which also highlighted the many objections to the pipeline in the majority of submissions to the Senate Inquiry. Groups such as Plug the Pipe, the Victorian Farmers Federation, and Watershed Victoria also opposed the pipeline. Greenhouse gas emissions over a 100-year life are estimated at 13.3 million tonnes of carbon dioxide equivalent, the vast majority of which is generated by pumping water over the Great Dividing Range. The pipeline operator, Melbourne Water, states that only renewable energy is used for pumping and the system generates its own energy as water from the pipeline enters Sugarloaf Reservoir. However, the hydro-generator installed at Sugarloaf Reservoir (as at 2024) has never been commissioned due to excessive vibration caused by design deficiencies. 
On 20 October 2006 the Victorian Premier released a report, "The Central Region Sustainable Water Strategy", in which the government claimed it was not viable to take water from the Goulburn Valley to Melbourne. The CSIRO has since released several reports of a similar opinion and has commented on the detrimental effect such a diversion may have on both the environment and agricultural production. The Victorian Premier, Mr Brumby, responded to criticisms, saying, "Our estimates on the Food Bowl are actually quite conservative … Even in the dry years you've got 690 gigalitres of water lost," he said. "I'm more convinced than ever it's the right project." On 4 January 2010, The Age newspaper published an analysis showing that "Melbourne may never need water from the controversial north–south pipeline, with a stocktake showing existing storages and minimal rainfall should easily supply the city beyond the start of the desalination plant next year." In the Victorian state election in November 2010, a new government led by the Liberal Party's Ted Baillieu was elected. Baillieu declared that his government would shut down the pipeline. Statistics withheld Statistics relating to water saved by irrigation upgrades in the Goulburn-Murray district for the year to June 2009 were expected to be released by the State Government by the end of the year (31 December 2009). However, by 1 January 2010, the "Our Water Our Future" website still displayed the message that the savings would be released "before the end of 2009". As The Age reported: "A spokesman for Water Minister Tim Holding said the audited savings would be released 'shortly', but neither he nor the Department of Sustainability and Environment could offer a specific reason for the delay." Environmental effects The Federal Government set conditions on construction of the pipeline under the Environment Protection and Biodiversity Conservation (EPBC) Act. In June, Friends of the Earth (FoE) released its report Out of sight, out of mind?. The group's assessment of the ecological impacts of the North–South Pipeline claimed that a 12 kilometre long, 30 metre wide corridor cleared through the Toolangi State Forest had affected at least four Special Protection Zones (SPZ). The Sugarloaf Pipeline Alliance prepared a compliance report on the pipeline which concluded that the known or possible habitats of a number of vulnerable species, including the matted flax-lily, the golden sun moth, the striped legless lizard, and the growling grass frog, had been compromised during the construction of the pipeline. Project timeline 2006 20 October – report released by the State Government, "The Central Region Sustainable Water Strategy", which suggested it was not viable to take water from the Goulburn Valley to Melbourne. 2007 The "Foodbowl Modernisation Plan" is announced by the State Government. June – the Victorian Government announces its intention to build the North–South Pipeline as part of the Our Water Our Future water plan. 2009 31 December – Water savings from the Goulburn-Murray district irrigation upgrades expected to be released are withheld without explanation. 2010 10 February – Pipeline is turned on at Sugarloaf Reservoir. 7 September – Due to heavy spring rains, flows through the pipeline were temporarily suspended. 
References External links https://web.archive.org/web/20091014181953/http://www.sugarloafpipeline.com.au/ https://web.archive.org/web/20080721045046/http://www.ourwater.vic.gov.au/programs/irrigation-renewal/nvirp https://web.archive.org/web/20090629061029/http://www.melbournewater.com.au/content/current_projects/water_supply/sugarloaf_pipeline_project/sugarloaf_pipeline_project.asp Freshwater pipelines Interbasin transfer Water resource management in Victoria (state)
North–South Pipeline
[ "Environmental_science" ]
2,241
[ "Hydrology", "Interbasin transfer" ]
17,456,004
https://en.wikipedia.org/wiki/Musicovery
Musicovery is an interactive and customised French webradio service. Listeners rate songs, resulting in a personalized programme. Reviewers have commented that unlike services that are governed by the user's choice of artist or genre, this method results in more discovery of artists to which the user might not otherwise have been exposed; The Washington Post's reviewer gave the example of "segueing from a West Coast R&B band to a folk–rock group from Algeria". Musicovery provides dance mix — with the ability to specify the desired dance tempo — and similar artist features, as well as the option of a low fidelity free service or a premium service with better sound. Users have the option to limit the selection to a specific year or range of years, as well as to deselect any genre, and genres are color-keyed to the graphic interface. At least one U.S. newspaper reported that major advertising agency JWT listed Musicovery on its 2007 list of "80 things to watch in 2008," a list of trends and new products and services; however, the article does not indicate whether the agency had any business relationship with companies on the list. The webradio service is accessible on mobile phones (3G/Symbian Nokia devices), iPhone and iPod Touch. Music files provided by the service are streamed, not downloaded, and the listener can buy all the songs played or tagged as favorite from major online music retailers iTunes, Amazon, and eBay. Announcement On January 2, 2017, Musicovery announced on Facebook that it had closed its webradio and was working only business-to-business, supplying playlists to other streaming services: "We are sad to announce to our users that Musicovery is closing its smart radio. Musicovery is now focusing on providing to other music services its recommendation and playlist engine. We thank all listeners for their support. We hope you enjoyed the trip!" History Musicovery was founded by French programmers Vincent Castaignet and Frédéric Vavrille. They combined their technology in January 2006 to create the internet site Musicovery.com. Castaignet created the "Mood pad", a proprietary technology enabling listeners to intuitively find music matching their mood (see Technology). Vavrille is the creator of Liveplasma.com, a graphic discovery engine created in 2004; Musicovery's original interface was based on Liveplasma technology. The Liveplasma technology creates a graphical map of possible tastes, with songs most likely to please the user appearing closer to the current song on the visual map than songs less similar. Musicovery webradio was beta-tested in June 2006 and launched in November 2006. After four years, a completely redesigned Musicovery website was launched in July 2010. The new version does not feature Liveplasma technology, and has many fewer features available in the free version of the site. An iPhone application was launched in September 2010. In January 2017 Musicovery closed its services to end-users and is now only supplying playlists to other streaming providers. The Musicovery team is based in Paris, France. Technology Musicovery relies on proprietary technology developed by its founders: the "mood pad" technology, a music description methodology that makes it possible to position any song on a two-dimensional continuous space (the "mood pad"); songs are described with 40 musical parameters. The technology is the result of three years of research on music description and human acoustic perception. 
Music is streamed at low quality (32 kbit/s) on the free platform, and at higher quality (128 kbit/s) on the paid platform. International service The music catalogs are specific to each country and the site is translated into local languages (English, Spanish, Portuguese, Italian, German, and French). The interface is adapted to local specifications where appropriate (for instance, country and folk genres are added for the US audience). The popularity scale of each song, a parameter influencing music programming, is specific to each country. While critics generally have praised the site's graphic interface, ability to make unusual but fitting song suggestions, and innovation in using mood as a grouping strategy, one otherwise favorable review complained of the "limited overall title base". Another reviewer noted that the technology used to classify each song by mood had its limitations, observing that the service "persists in, for example, believing that 'Crazy Frog' puts me in a good mood, whereas this techno tune instead gives me a death wish." Premium Features Skipping: banning a song skips it automatically; songs can also be skipped by removing them from the map. High quality audio. Play only favorite songs. Play any song on the map at any time. No ads. Access artist and song info. Notes Additional references Francesca Steele. "The Web Watcher: Hillary Clinton as Rocky Balboa; Musicovery," The Times (London), April 23, 2008. Retrieved 2008-07-01. 20 Minutes – http://www.20minutes.fr/article/217964/High-Tech-Pratique-Musicovery-est-une-webradio-interactive-gratuite-et-en-francais-Cette-application-est-utilisable-La-version-gratuite-est-financee.php Brad Kava. "Take your pick! Our critics pick the best bets for Dec. 17–23," San Jose Mercury News (CA), December 17, 2006, page 3C. Jenn Kistler. "From the editor: Discover more music," Las Cruces Sun-News (NM), December 19, 2007, Pulse section. Steve Woodward. "Personal Tech/Playlists: Log on and listen up: Song suggestions for you," The Oregonian (Portland, OR), February 23, 2007, Living section, page E1. External links Liveplasma Internet radio Online music and lyrics databases Defunct websites American music websites Internet properties established in 2006 Internet properties disestablished in 2017
Musicovery
[ "Technology" ]
1,244
[ "Multimedia", "Internet radio" ]
17,456,938
https://en.wikipedia.org/wiki/Valuation%20%28logic%29
In logic and model theory, a valuation can be: In propositional logic, an assignment of truth values to propositional variables, with a corresponding assignment of truth values to all propositional formulas with those variables. In first-order logic and higher-order logics, a structure (the interpretation) and the corresponding assignment of a truth value to each sentence in the language for that structure (the valuation proper). The interpretation must be a homomorphism, while a valuation is simply a function. Mathematical logic In mathematical logic (especially model theory), a valuation is an assignment of truth values to formal sentences that follows a truth schema. Valuations are also called truth assignments. In propositional logic, there are no quantifiers, and formulas are built from propositional variables using logical connectives. In this context, a valuation begins with an assignment of a truth value to each propositional variable. This assignment can be uniquely extended to an assignment of truth values to all propositional formulas. In first-order logic, a language consists of a collection of constant symbols, a collection of function symbols, and a collection of relation symbols. Formulas are built out of atomic formulas using logical connectives and quantifiers. A structure consists of a set (domain of discourse) that determines the range of the quantifiers, along with interpretations of the constant, function, and relation symbols in the language. Corresponding to each structure is a unique truth assignment for all sentences (formulas with no free variables) in the language. Notation If v is a valuation, that is, a mapping from the atoms to the set {t, f}, then the double-bracket notation is commonly used to denote the valuation; that is, [[φ]]v = v(φ) for a proposition φ. See also Algebraic semantics References , chapter 6 Algebra of formalized languages. Semantic units Model theory Interpretation (philosophy)
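The unique extension of a valuation from atoms to all propositional formulas can be made concrete in code. The following is a minimal sketch in Python (3.10+), not taken from the article; the formula representation, class names and the evaluate function are illustrative assumptions.

```python
# Sketch: extending a truth assignment on atoms to all propositional formulas.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Not:
    sub: "Formula"

@dataclass(frozen=True)
class And:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Or:
    left: "Formula"
    right: "Formula"

Formula = Atom | Not | And | Or

def evaluate(formula: Formula, valuation: dict[str, bool]) -> bool:
    """Unique extension of a valuation on atoms to all formulas."""
    match formula:
        case Atom(name):
            return valuation[name]  # base case: look up the atom's truth value
        case Not(sub):
            return not evaluate(sub, valuation)
        case And(left, right):
            return evaluate(left, valuation) and evaluate(right, valuation)
        case Or(left, right):
            return evaluate(left, valuation) or evaluate(right, valuation)
    raise TypeError("unknown connective")

# Example: v(p) = true, v(q) = false, so the value of p and (not q) is true.
v = {"p": True, "q": False}
print(evaluate(And(Atom("p"), Not(Atom("q"))), v))  # True
```

The base case looks up the atom's assigned truth value, and each connective clause mirrors its truth table, so the extension is fully determined by the assignment on atoms.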
Valuation (logic)
[ "Mathematics" ]
370
[ "Mathematical logic", "Model theory" ]
17,457,858
https://en.wikipedia.org/wiki/DSQI
DSQI (design structure quality index) is an architectural design metric used to evaluate a computer program's design structure and the efficiency of its modules. The metric was developed by the United States Air Force Systems Command. The result of DSQI calculations is a number between 0 and 1. The closer to 1, the higher the quality. It is best used on a comparison basis, i.e., with previous successful projects. References External links DSQI Calculator Software metrics
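The article does not give a formula, but DSQI is commonly presented in software engineering textbooks as a weighted sum of several normalized design measures, each lying between 0 and 1. The sketch below is a hedged illustration of that general form only; the number of measures, the equal default weights and the function name are assumptions, not a definitive specification of the Air Force metric.

```python
# Hedged sketch of a DSQI-style score: a weighted sum of normalized design measures.
def dsqi(d: list[float], weights: list[float] | None = None) -> float:
    """Return a design structure quality index from six normalized measures."""
    if weights is None:
        weights = [1 / 6] * 6  # equal weighting; the weights must sum to 1
    if len(d) != 6 or len(weights) != 6:
        raise ValueError("expected six design measures and six weights")
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * x for w, x in zip(weights, d))  # closer to 1 means higher quality

# Compare against a previous successful project, as the article suggests.
previous = dsqi([0.9, 0.85, 0.95, 0.8, 0.9, 0.88])
current = dsqi([0.7, 0.8, 0.9, 0.75, 0.85, 0.8])
print(f"previous={previous:.3f} current={current:.3f}")
```

As the article notes, the resulting value is most meaningful when compared against the score of a previous, successful project rather than read in isolation.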
DSQI
[ "Mathematics", "Engineering" ]
99
[ "Metrics", "Quantity", "Software engineering stubs", "Software metrics", "Software engineering" ]
17,458,039
https://en.wikipedia.org/wiki/PPT-Petoletka
PPT Petoletka is a Serbian manufacturer of hydraulics and pneumatics, headquartered in Trstenik, Serbia. History Foundation "Prva Petoletka"-Trstenik was founded on 23 March 1949, by decision of the Government of the Federal People's Republic of Yugoslavia, at the beginning of the first Five-year plan of development, after which it was named. 1980s The factory entered its "golden era" during the 1980s. During this time the factory had nearly 20,000 employees. The design offices of Prva Petoletka designed hydraulic systems for many of the hydroelectric power plants in Yugoslavia, the hydraulic drive on a class of massive roto-excavators for surface mines, drive and control systems for the sets of the National Theatre in Belgrade, the electro-hydraulic system for lifting the main dome of the Temple of Saint Sava in Belgrade, hydraulics for a modern second-generation tank (the M-84), an inclined marine railway installed in the shipyard in Kladovo, landing gear for Boeing, as well as many large centralized lubrication systems in Yugoslavia. Prva Petoletka was an important exporter, delivering its products to over 30 countries on several continents. At the height of production, PPT exported about 40% of its products. Petoletka cooperated with major world companies such as Boeing, Bugatti, Lucas, Bendix, Daimler, Martin Merkel, Pol-Mot, Orsta Hydraulik, Wabco Westinghouse, Linde Guldner, Ermeto, and Zahnradfabrik. Prva Petoletka was contracted to lift the 4000 ton central dome of the Cathedral of Saint Sava in 1989. The height of the dome was 37 metres prior to its retrofit, and together with the ten-metre cross it weighed 40,000 kN. The dome was lifted to a height of 43 m. Arch carriers and pendentives occupy the rest of the space between the dome and the supporting construction. The lifting process was very slow and took forty days to complete. 1990–2001 In the 1990s Yugoslavia collapsed and war started in Croatia and Bosnia. International sanctions were imposed by the United Nations, which led to political isolation and economic decline for Serbia. This resulted in a crisis for the factory and its workers when PPT lost its ability to export products. Many high-profile engineers left Petoletka during the 1990s and started their own private companies. The state did not have funds to invest in factory machines and equipment, and this resulted in most of the factory's equipment becoming obsolete. 2001–2016 Since the 2000s, PPT has sought privatization. The first tender for the sale of Prva Petoletka, held in June 2005, was unsuccessful. The second tender, opened in November 2007, also failed. The government decided to sell the factory in parts rather than as a whole because of the lack of interested parties to buy the entire company. In 2009, an agreement was reached with the Russian company "Bummash" from Izhevsk, but the deal failed once again. After that, restructuring of PPT started and some members of the group found strategic partners. During the 2010s, the company maintained production facilities in Trstenik, Vrnjacka Banja, Brus, Aleksandrovac, Novi Pazar, Leposavic, and Belgrade in Serbia and Bijelo Polje in Montenegro. Its headquarters and main production facilities were located in Trstenik. In January 2016, after two decades of insolvency, the company declared bankruptcy. 2016–present In January 2016, a new business entity under the name "PPT Petoletka doo" was registered and took on most of Prva Petoletka's former employees. As of 2017, its main contractor is the Russian truck manufacturer Kamaz. 
See also Hydraulics Pneumatics Mechanical engineering List of companies of the Socialist Federal Republic of Yugoslavia References External links 1949 establishments in Serbia Companies based in Trstenik D.o.o. companies in Serbia Government-owned companies of Serbia Hydraulics Manufacturing companies established in 1949 Manufacturing companies of Serbia Serbian brands
PPT-Petoletka
[ "Physics", "Chemistry" ]
855
[ "Physical systems", "Hydraulics", "Fluid dynamics" ]
17,458,495
https://en.wikipedia.org/wiki/Rank%20%28graph%20theory%29
In graph theory, a branch of mathematics, the rank of an undirected graph has two unrelated definitions. Let n equal the number of vertices of the graph. In the matrix theory of graphs the rank of an undirected graph is defined as the rank of its adjacency matrix. Analogously, the nullity of the graph is the nullity of its adjacency matrix, which equals n − r, where r is the rank of the adjacency matrix. In the matroid theory of graphs the rank of an undirected graph is defined as the number n − c, where c is the number of connected components of the graph. Equivalently, the rank of a graph is the rank of the oriented incidence matrix associated with the graph. Analogously, the nullity of the graph is the nullity of its oriented incidence matrix, given by the formula m − n + c, where n and c are as above and m is the number of edges in the graph. The nullity is equal to the first Betti number of the graph. The sum of the rank and the nullity is the number of edges. Examples In one example, a sample graph with four edges (e1–e4) has a matrix whose matrix theory rank is 4, because its column vectors are linearly independent. See also Circuit rank Cycle rank Nullity (graph theory) Notes References Hedetniemi, S. T., Jacobs, D. P., Laskar, R. (1989), Inequalities involving the rank of a graph. Journal of Combinatorial Mathematics and Combinatorial Computing, vol. 6, pp. 173–176. Bevis, Jean H., Blount, Kevin K., Davis, George J., Domke, Gayla S., Miller, Valerie A. (1997), The rank of a graph after vertex addition. Linear Algebra and its Applications, vol. 265, pp. 55–69. Algebraic graph theory Graph connectivity Graph invariants
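Both notions of rank can be checked numerically. The following Python sketch uses NumPy and NetworkX; the choice of the path graph on four vertices is an assumption made for the example, not the graph from the article's illustration.

```python
import networkx as nx
import numpy as np

G = nx.path_graph(4)                          # 4 vertices, 3 edges, 1 component
n = G.number_of_nodes()
c = nx.number_connected_components(G)

A = nx.to_numpy_array(G)                      # adjacency matrix
matrix_theory_rank = np.linalg.matrix_rank(A)
matrix_theory_nullity = n - matrix_theory_rank

matroid_theory_rank = n - c                   # also the rank of the oriented incidence matrix
M = nx.incidence_matrix(G, oriented=True).toarray()
assert np.linalg.matrix_rank(M) == matroid_theory_rank

print(matrix_theory_rank, matrix_theory_nullity)   # 4 0 for the path graph
print(matroid_theory_rank)                         # 3 = n - c
```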
Rank (graph theory)
[ "Mathematics" ]
400
[ "Graph connectivity", "Graph theory", "Graph invariants", "Mathematical relations", "Algebra", "Algebraic graph theory" ]
17,458,663
https://en.wikipedia.org/wiki/Nullity%20%28graph%20theory%29
The nullity of a graph in the mathematical subject of graph theory can mean either of two unrelated numbers. If the graph has n vertices and m edges, then: In the matrix theory of graphs, the nullity of the graph is the nullity of the adjacency matrix A of the graph. The nullity of A is given by n − r where r is the rank of the adjacency matrix. This nullity equals the multiplicity of the eigenvalue 0 in the spectrum of the adjacency matrix. See Cvetkovič and Gutman (1972), Cheng and Liu (2007), and Gutman and Borovićanin (2011). In the matroid theory of graphs the nullity of the graph is the nullity of the oriented incidence matrix M associated with the graph. The nullity of M is given by m − n + c, where c is the number of components of the graph and n − c is the rank of the oriented incidence matrix. This name is rarely used; the number is more commonly known as the cycle rank, cyclomatic number, or circuit rank of the graph. It is equal to the rank of the cographic matroid of the graph. It also equals the nullity of the edge form of the graph's Laplacian, the m × m matrix M^T M built from the oriented incidence matrix; by contrast, the nullity of the usual (vertex) Laplacian matrix L = D − A = M M^T, where D is the diagonal matrix of vertex degrees, equals c, the number of connected components, rather than the cycle rank. See also Rank (graph theory) References Bo Cheng and Bolian Liu (2007), On the nullity of graphs. Electronic Journal of Linear Algebra, vol. 16, article 5, pp. 60–67. Dragoš M. Cvetkovič and Ivan M. Gutman (1972), The algebraic multiplicity of the number zero in the spectrum of a bipartite graph. Matematički Vesnik (Beograd), vol. 9, pp. 141–150. Ivan Gutman and Bojana Borovićanin (2011), Nullity of graphs: an updated survey. Zbornik Radova (Beograd), vol. 14, no. 22 (Selected Topics on Applications of Graph Spectra), pp. 137–154. Graph theory
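A small numerical check of the two nullities, and of the Laplacian remark above, can be written with NumPy and NetworkX. The graph below (a triangle plus an isolated vertex) is an assumed example, chosen so that the cycle rank (1) and the number of connected components (2) differ and the distinction is visible.

```python
import networkx as nx
import numpy as np

G = nx.cycle_graph(3)          # a triangle ...
G.add_node(3)                  # ... plus an isolated vertex: n = 4, m = 3, c = 2
n, m = G.number_of_nodes(), G.number_of_edges()
c = nx.number_connected_components(G)

A = nx.to_numpy_array(G)
print(n - np.linalg.matrix_rank(A))         # 1: matrix-theory (adjacency) nullity

M = nx.incidence_matrix(G, oriented=True).toarray()
print(m - np.linalg.matrix_rank(M))         # 1 = m - n + c: the cycle rank

L = nx.laplacian_matrix(G).toarray()        # vertex Laplacian L = D - A = M M^T
print(n - np.linalg.matrix_rank(L))         # 2 = c: nullity of the vertex Laplacian
print(m - np.linalg.matrix_rank(M.T @ M))   # 1: nullity of the edge Laplacian M^T M
```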
Nullity (graph theory)
[ "Mathematics" ]
472
[ "Graph theory stubs", "Discrete mathematics", "Graph theory", "Combinatorics", "Mathematical relations" ]
17,458,989
https://en.wikipedia.org/wiki/MAX232
The MAX232 is an integrated circuit by Maxim Integrated Products, now a subsidiary of Analog Devices, that converts signals from a TIA-232 (RS-232) serial port to signals suitable for use in TTL-compatible digital logic circuits. The MAX232 is a dual transmitter / dual receiver that is typically used to convert the RX, TX, CTS and RTS signals. The drivers provide TIA-232 voltage level outputs (about ±7.5 volts) from a single 5-volt supply via on-chip charge pumps and external capacitors. This makes it useful for implementing TIA-232 in devices that otherwise do not need any other voltages. The receivers translate the TIA-232 input voltages (up to ±25 volts, though the MAX232 supports up to ±30 volts) down to standard 5-volt TTL levels. These receivers have a typical threshold of 1.3 volts and a typical hysteresis of 0.5 volts. The MAX232 replaced an older pair of chips, the MC1488 and MC1489, that performed similar RS-232 translation. The MC1488 quad transmitter chip required 12 volt and −12 volt power, and the MC1489 quad receiver chip required 5 volt power. The main disadvantages of this older solution were the ±12 volt power requirement, support for only 5 volt digital logic, and the need for two chips instead of one. History The MAX232 was proposed by Charlie Allen and designed by Dave Bingham. Maxim Integrated Products announced the MAX232 no later than 1986. Versions The later MAX232A is backward compatible with the original MAX232 but may operate at higher baud rates and can use smaller external capacitors, 0.1 μF in place of the 1.0 μF capacitors used with the original device. The newer MAX3232 and MAX3232E are also backwards compatible, but operate over a broader supply voltage range, from 3 to 5.5 V. Pin-to-pin compatible versions from other manufacturers are the ICL232, SP232, ST232, ADM232 and HIN232. Texas Instruments makes compatible chips, using MAX232 as the part number. Voltage levels The MAX232 translates a TTL logic 0 input to between +3 and +15 V, and changes a TTL logic 1 input to between −3 and −15 V, and vice versa for converting from TIA-232 to TTL. (The TIA-232 standard uses opposite voltages for data and control lines, see RS-232 voltage levels.) Applications The MAX232(A) has two receivers that convert from RS-232 to TTL voltage levels, and two drivers that convert from TTL logic to RS-232 voltage levels. As a result, only two out of all RS-232 signals can be converted in each direction. Typically, the first driver/receiver pair of the MAX232 is used for the TX and RX signals, and the second one for the CTS and RTS signals. There are not enough drivers/receivers in the MAX232 to also connect the DTR, DSR, and DCD signals. Usually, these signals can be omitted when, for example, communicating with a PC's serial interface, or when special cables render them unnecessary. If the DTE requires these signals, a second MAX232 or some other IC from the MAX232 family can be used. Derivatives The MAX232 family was subsequently extended by Maxim to versions with four transmitters (the MAX234) and a version with four receivers and four transmitters (the MAX248), as well as several other combinations of receivers and transmitters. A notable addition is the MAX316x, which can be electrically reconfigured between differential 5 V operation (RS-422 and RS-485) and single-ended RS-232, albeit at reduced voltage. References External links MAX232 & MAX232A, Maxim Investigating Fake MAX3232 TTL-to-RS-232 Chips Integrated circuits
MAX232
[ "Technology", "Engineering" ]
836
[ "Computer engineering", "Integrated circuits" ]
17,459,765
https://en.wikipedia.org/wiki/Honing%20%28metalworking%29
Honing is an abrasive machining process that produces a precision surface on a metal workpiece by scrubbing an abrasive grinding stone or grinding wheel against it along a controlled path. Honing is primarily used to improve the geometric form of a surface, but can also improve the surface finish. Typical applications are the finishing of cylinders for internal combustion engines, air bearing spindles and gears. There are many types of hones, but all consist of one or more abrasive stones that are held under pressure against the surface they are working on. Other similar processes are lapping and superfinishing. Honing machines A honing machine is a precision tool used in machining to improve the surface finish and dimensional accuracy of component. It operates by using abrasive honing tools, which rotate and reciprocate inside the components, typically a cylinder or bore. This process enhances the internal surface quality, achieving precise dimensions and smooth finishes. Honing machines come in various types, including cylindrical, vertical, and horizontal models. Cylindrical honing machines are designed for interior surfaces of cylindrical components, while vertical and horizontal models are suited for different orientations and sizes of workpieces. These machines are essential in manufacturing for achieving high precision and consistency in parts such as engine cylinders and hydraulic components. Advanced models, such as auto-gauging and expansion single-pass honing machines, feature automation and real-time measurement systems to further enhance efficiency and accuracy. Honing fixtures Honing fixtures are specialized tools used in the honing process of machining, designed to ensure precise alignment and stability of components during the honing operation. These fixtures are essential for achieving high accuracy and surface finish in cylindrical and other intricate components. Typically employed in manufacturing and maintenance applications, honing fixtures facilitate the effective removal of material to achieve desired tolerances and surface quality. Honing stones Honing uses a special tool, called a honing stone or a hone, to achieve a precision surface. The hone is composed of abrasive grains that are bound together with an adhesive. Generally, honing grains are irregularly shaped and about 10 to 50 micrometers in diameter (300 to 1500 mesh grit). Smaller grain sizes produce a smoother surface on the workpiece. A honing stone is similar to a grinding wheel in many ways, but honing stones are usually more friable, so that they conform to the shape of the workpiece as they wear in. To counteract their friability, honing stones may be treated with wax or sulfur to improve life; wax is usually preferred for environmental reasons. Any abrasive material may be used to create a honing stone, but the most commonly used are corundum, silicon carbide, cubic boron nitride, and diamond. The choice of abrasive material is usually driven by the characteristics of the workpiece material. In most cases, corundum or silicon carbide are acceptable, but extremely hard workpiece materials must be honed using superabrasives. The hone is usually turned in the bore while being moved in and out. Special cutting fluids are used to give a smooth cutting action and to remove the material that has been abraded. Machines can be portable, simple manual machines, or fully automatic with gauging depending on the application. 
Modern advances in abrasives have made it possible to remove much larger amount of material than was previously possible. This has displaced grinding in many applications where "through machining" is possible. External hones perform the same function on shafts. Process mechanics Since honing stones look similar to grinding wheels, it is tempting to think of honing as a form of low-stock removal grinding. Instead, it is better to think of it as a self-truing grinding process. In grinding, the wheel follows a simple path. For example, in plunge grinding a shaft, the wheel moves in towards the axis of the part, grinds it, and then moves back out. Since each slice of the wheel repeatedly contacts the same slice of the workpiece, any inaccuracies in the geometric shape of the grinding wheel will be transferred onto the part. Therefore, the accuracy of the finished workpiece geometry is limited to the accuracy of the truing dresser. The accuracy becomes even worse as the grind wheel wears, so truing must occur periodically to reshape it. The limitation on geometric accuracy is overcome in honing because the honing stone follows a complex path. In bore honing, for example, the stone moves along two paths simultaneously. The stones are pressed radially outward to enlarge the hole while they simultaneously oscillate axially. Due to the oscillation, each slice of the honing stones touch a large area of the workpiece. Therefore, imperfections in the honing stone's profile cannot transfer to the bore. Instead, both the bore and the honing stones conform to the average shape of the honing stones' motion, which in the case of bore honing is a cylinder. This averaging effect occurs in all honing processes; both the workpiece and stones erode until they conform to the average shape of the stones' cutting surface. Since the honing stones tend to erode towards a desired geometric shape, there is no need to true them. As a result of the averaging effect, the accuracy of a honed component often exceeds the accuracy of the machine tool that created it. The path of the stone is not the only difference between grinding and honing machines, they also differ in the stiffness of their construction. Honing machines are much more compliant than grinders. The purpose of grinding is to achieve a tight size tolerance. To do this, the grinding wheel must be moved to an exact position relative to the workpiece. Therefore, a grinding machine must be very stiff and its axes must move with very high precision. A honing machine is relatively inaccurate and imperfect. Instead of relying on the accuracy of the machine tool, it relies on the averaging effect between the stone and the workpiece. Compliance is a requirement of a honing machine that is necessary for the averaging effect to occur. This leads to an obvious difference between the two machines: in a grinder the stone is rigidly attached to a slide, while in honing the stone is actuated with pneumatic or hydraulic pressure. High-precision workpieces are usually ground and then honed. Grinding determines the size, and honing improves the shape. The difference between honing and grinding is not always the same. Some grinders have complex movements and are self-truing, and some honing machines are equipped with in-process gauging for size control. Many through-feed grinding operations rely on the same averaging effect as honing. 
Honing configurations Bore honing Flat honing OD honing / Super Finish / Fine Finish (taper and straight) Spherical honing Track/raceway honing Economics Since honing is a high-precision process, it is also relatively expensive. Therefore, it is only used in components that demand the highest level of precision. It is typically the last manufacturing operation before the part is shipped to a customer. The dimensional size of the object is established by preceding operations, the last of which is usually grinding. Then the part is honed to improve a form characteristic such as surface finish, roundness, flatness, cylindricity, or sphericity. Performance advantages of honed surfaces Since honing is a relatively expensive manufacturing process, it can only be economically justified for applications that require very good form accuracy. The improved shape after honing may result in a quieter running or higher-precision component. The flexible honing tool provides a relatively inexpensive honing process. This tool produces a controlled surface condition unobtainable by any other method. It involves finish, geometry and metallurgical structure. A high-percentage plateau free of cut, torn and folded metal is produced. The flexible hone is a resilient, flexible honing tool with a soft cutting action. The abrasive globules each have independent suspension that assures the tool to be self-centering, self-aligning to the bore, and self-compensating for wear. Cross-hatch finish A "cross-hatch" pattern is used to retain oil or grease to ensure proper lubrication and ring seal of pistons in cylinders. A smooth glazed cylinder wall can cause piston ring and cylinder scuffing. The "cross-hatch" pattern is used on brake rotors and flywheels. Plateau finish The plateau finish is one characterised by the removal of "peaks" in the metal while leaving the cross-hatch intact for oil retention. The plateaued finish increases the bearing area of the finish and does not require the piston or ring to "break in" the cylinder walls. Plateau honing specification: Rz (10pt Roughness Height)... 3–6 micrometres, Rpk (Reduced Peak Height).... ≤0.3 micrometres, Rk (Core Roughness Depth).... 0.3–1.5 micrometres, Rvk (Reduced Valley Depth)... 0.8–2.0 micrometres. A profilometer provides modern, defined descriptions of cylinder bore finish that include "RPK" (Reduced Peak Height), "RVK" (Reduced Valley Depth) and "RK" (Core Roughness Depth), which is based on both the "RPK" and "RVK" measurements. See also Single-pass bore finishing Notes Grinding and lapping Machine tools
Honing (metalworking)
[ "Engineering" ]
1,982
[ "Machine tools", "Industrial machinery" ]
17,459,933
https://en.wikipedia.org/wiki/Open%20Core%20Protocol
The Open Core Protocol (OCP) is a protocol for on-chip subsystem communications. It is an openly licensed, core-centric protocol and defines a bus-independent, configurable interface. OCP International Partnership (OCP-IP) produces OCP specifications. OCP data transfer models range from simple request-grant handshaking through pipelined request-response to complex out-of-order operations. Legacy IP cores can be adapted to OCP, while new implementations may take advantage of advanced features: designers select only those features and signals encompassing a core's specific data, control and test configuration. The Open Core Protocol (OCP) is one of several FPGA processor interconnects used to connect soft FPGA peripherals to FPGA CPUs—both soft microprocessor and hard-macro processor. Other such interconnects include Advanced eXtensible Interface (AXI), Avalon, and the Wishbone bus. FPGA vendor Altera joined the Open Core Protocol International Partnership in 2010. Advantages Eliminates the ongoing task of interface protocol (re)definition, verification, documentation and support Readily adapts to support new core capabilities Test bench portability simplifies (re)verification Limits test suite modifications for core enhancements Interfaces to any bus structure or on-chip network Delivers industry-standard flexibility and reuse Point-to-point protocol can directly interface two cores Disadvantages Neither Altera nor Xilinx, the two largest FPGA vendors, supports this protocol. References External links Computer peripherals
Open Core Protocol
[ "Technology" ]
323
[ "Computer peripherals", "Components" ]
1,022,111
https://en.wikipedia.org/wiki/Monoclonality
In biology, monoclonality refers to the state of a line of cells that have been derived from a single clonal origin. Thus, "monoclonal cells" can be said to form a single clone. The term monoclonal comes from the Greek monos ("alone, single") and klōn ("twig, shoot"), the root of the word "clone". The process of replication can occur in vivo, or may be stimulated in vitro for laboratory manipulations. The use of the term typically implies that there is some method to distinguish between the cells of the original population from which the single ancestral cell is derived, such as a random genetic alteration, which is inherited by the progeny. Common usages of this term include: Monoclonal antibody: a single hybridoma cell, which by chance includes the appropriate V(D)J recombination to produce the desired antibody, is cloned to produce a large population of identical cells. In informal laboratory jargon, the monoclonal antibodies isolated from cell culture supernatants of these hybridoma clones (hybridoma lines) are simply called monoclonals. Monoclonal neoplasm (tumor): A single aberrant cell which has undergone carcinogenesis reproduces itself into a cancerous mass. Monoclonal plasma cell (also called plasma cell dyscrasia): A single aberrant plasma cell which has undergone carcinogenesis reproduces itself, which in some cases is cancerous. References Biology terminology
Monoclonality
[ "Biology" ]
282
[ "nan" ]
1,022,185
https://en.wikipedia.org/wiki/Baldwin%20effect
In evolutionary biology, the Baldwin effect describes an effect of learned behaviour on evolution. James Mark Baldwin and others suggested that an organism's ability to learn new behaviours (e.g. to acclimatise to a new stressor) will affect its reproductive success and will therefore have an effect on the genetic makeup of its species through natural selection. It posits that subsequent selection might reinforce the originally learned behaviors, if adaptive, into more in-born, instinctive ones. Though this process appears similar to Lamarckism, that view proposes that living things inherited their parents' acquired characteristics. The Baldwin effect only posits that learning ability, which is genetically based, is another variable in / contributor to environmental adaptation. First proposed during the Eclipse of Darwinism in the late 19th century, this effect has been independently proposed several times, and today it is generally recognized as part of the modern synthesis. "A New Factor in Evolution" The effect, then unnamed, was put forward in 1896 in a paper "A New Factor in Evolution" by the American psychologist James Mark Baldwin, with a second paper in 1897. The paper proposed a mechanism for specific selection for general learning ability. As the historian of science Robert Richards explains: Selected offspring would tend to have an increased capacity for learning new skills rather than being confined to genetically coded, relatively fixed abilities. In effect, it places emphasis on the fact that the sustained behaviour of a species or group can shape the evolution of that species. The "Baldwin effect" is better understood in evolutionary developmental biology literature as a scenario in which a character or trait change occurring in an organism as a result of its interaction with its environment becomes gradually assimilated into its developmental genetic or epigenetic repertoire. In the words of the philosopher of science Daniel Dennett: An update to the Baldwin effect was developed by Jean Piaget, Paul Weiss, and Conrad Waddington in the 1960s–1970s. This new version included an explicit role for the social in shaping subsequent natural change in humans (both evolutionary and developmental), with reference to alterations of selection pressures. Subsequent research shows that Baldwin was not the first to identify the process; Douglas Spalding mentioned it in 1873. Controversy and acceptance Initially Baldwin's ideas were not incompatible with the prevailing, but uncertain, ideas about the mechanism of transmission of hereditary information and at least two other biologists put forward very similar ideas in 1896. In 1901, Maurice Maeterlinck referred to behavioural adaptations to prevailing climates in different species of bees as "what had merely been an idea, therefore, and opposed to instinct, has thus by slow degrees become an instinctive habit". The Baldwin effect theory subsequently became more controversial, with scholars divided between "Baldwin boosters" and "Baldwin skeptics". The theory was first called the "Baldwin effect" by George Gaylord Simpson in 1953. Simpson "admitted that the idea was theoretically consistent, that is, not inconsistent with the modern synthesis", but he doubted that the phenomenon occurred very often, or if so, could be proven to occur. 
In his discussion of the reception of the Baldwin-effect theory, Simpson points out that the theory appears to provide a reconciliation between a neo-Darwinian and a neo-Lamarckian approach and that "Mendelism and later genetic theory so conclusively ruled out the extreme neo-Lamarckian position that reconciliation came to seem unnecessary". In 1942, the evolutionary biologist Julian Huxley promoted the Baldwin effect as part of the modern synthesis, saying the concept had been unduly neglected by evolutionists. In the 1960s, the evolutionary biologist Ernst Mayr contended that the Baldwin effect theory was untenable because: (1) the argument is stated in terms of the individual genotype, whereas what is really exposed to the selection pressure is a phenotypically and genetically variable population; (2) it is not sufficiently emphasized that the degree of modification of the phenotype is in itself genetically controlled; and (3) it is assumed that phenotypic rigidity is selectively superior to phenotypic flexibility. In 1987 Geoffrey Hinton and Steven Nowlan demonstrated by computer simulation that learning can accelerate evolution, and they associated this with the Baldwin effect. Paul Griffiths suggests two reasons for the continuing interest in the Baldwin effect. The first is the role mind is understood to play in the effect. The second is the connection between development and evolution in the effect. Baldwin's account of how neurophysiological and conscious mental factors may contribute to the effect brings into focus the question of the possible survival value of consciousness. Still, David Depew observed in 2003, "it is striking that a rather diverse lot of contemporary evolutionary theorists, most of whom regard themselves as supporters of the Modern Synthesis, have of late become 'Baldwin boosters'". According to Dennett, also in 2003, recent work has rendered the Baldwin effect "no longer a controversial wrinkle in orthodox Darwinism". Potential genetic mechanisms underlying the Baldwin effect have been proposed for the evolution of natural (genetically determined) antibodies. In 2009, empirical evidence for the Baldwin effect was provided from the colonisation of North America by the house finch. The Baldwin effect has been incorporated into the extended evolutionary synthesis. Comparison with genetic assimilation The Baldwin effect has been confused with, and sometimes conflated with, a different evolutionary theory also based on phenotypic plasticity, C. H. Waddington's genetic assimilation. The Baldwin effect includes genetic accommodation, of which one type is genetic assimilation. Science historian Laurent Loison has written that "the Baldwin effect and genetic assimilation, even if they are quite close, should not be conflated". See also Notes References External links Baldwinian evolution Bibliography Extended evolutionary synthesis Evolutionary biology Selection
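Hinton and Nowlan's 1987 demonstration can be conveyed with a short simulation. The Python sketch below is loosely modelled on their setup but is not their code: the genome length, population size, number of learning trials and the fitness formula are simplified assumptions.

```python
import random

GENOME_LEN = 10          # the target phenotype is "all loci correct"
POP_SIZE = 200
LEARNING_TRIALS = 200
GENERATIONS = 30

def random_genome():
    # '1' = correct innate allele, '0' = incorrect innate allele, '?' = learnable
    return [random.choice("01?") for _ in range(GENOME_LEN)]

def fitness(genome):
    """Higher fitness the sooner random learning over the '?' loci hits the target."""
    if "0" in genome:
        return 1.0                      # a wrong innate allele cannot be learned away
    plastic = [i for i, allele in enumerate(genome) if allele == "?"]
    for trial in range(LEARNING_TRIALS):
        if all(random.random() < 0.5 for _ in plastic):
            return 1.0 + 9.0 * (LEARNING_TRIALS - trial) / LEARNING_TRIALS
    return 1.0

def next_generation(population):
    weights = [fitness(g) for g in population]      # fitness-proportional selection
    children = []
    for _ in range(len(population)):
        mum, dad = random.choices(population, weights=weights, k=2)
        cut = random.randrange(GENOME_LEN)
        children.append(mum[:cut] + dad[cut:])      # one-point crossover, no mutation
    return children

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = next_generation(population)

innate = sum(g.count("1") for g in population) / (POP_SIZE * GENOME_LEN)
print(f"fraction of innately correct alleles after {GENERATIONS} generations: {innate:.2f}")
```

The qualitative outcome, selection gradually replacing plastic '?' alleles with innately correct ones once learning makes partially correct genotypes fitter, is the Baldwin effect the article describes.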
Baldwin effect
[ "Biology" ]
1,161
[ "Evolutionary biology", "Evolutionary processes", "Selection" ]
1,022,200
https://en.wikipedia.org/wiki/Aerial%20work%20platform
An aerial work platform (AWP), also an aerial device, aerial lift, boom lift, bucket truck, cherry picker, elevating work platform (EWP), mobile elevating work platform (MEWP), or scissor lift, is a mechanical device used to provide temporary access for people or equipment to inaccessible areas, usually at height. There are various distinct types of mechanized access platforms. They are generally used for temporary, flexible access purposes such as maintenance and construction work or by firefighters for emergency access, which distinguishes them from permanent access equipment such as elevators. They are designed to lift limited weights — usually less than a ton, although some have a higher safe working load (SWL) — distinguishing them from most types of cranes. They are usually capable of being set up and operated by a single person. Regardless of the task they are used for, aerial work platforms may provide additional features beyond transport and access, including being equipped with electrical outlets or compressed air connectors for power tools. They may also be equipped with specialist equipment, such as carrying frames for window glass. Underbridge units are also available to lift operators down to a work area. As the name suggests, cherry pickers were initially developed to facilitate the picking of cherries. Jay Eitel invented the device in 1944 after a frustrating day spent picking cherries using a ladder. He went on to launch the Telsta Corporation, Sunnyvale, CA in 1953 to manufacture the device. Another early cherry picker manufacturer was Stemm Brothers, Leavenworth, WA. Other uses for cherry pickers quickly evolved. Lifting mechanisms There are several distinct types of aerial work platforms, which all have specific features which make them more or less desirable for different applications. The key difference is in the drive mechanism which propels the working platform to the desired location. Most are powered by either hydraulics or possibly pneumatics. The different techniques also reflect in the pricing and availability of each type. Aerial device Aerial devices were once exclusively operated by hydraulic pistons, powered by diesel or gasoline motors on the base unit. Lightweight electrically powered units are gaining popularity for window-cleaning or other maintenance operations, especially indoors and in isolated courtyards, where heavier hydraulic equipment cannot be used. Aerial devices are the closest in appearance to a crane – consisting of a number of jointed sections, which can be controlled to extend the lift in a number of different directions, which can often include "up and over" applications. The most common type of aerial device are known in the AWP industry as knuckle boom lifts or articulated boom lifts, due to their distinctive shape, providing easy access to awkward high reach positions. This type of AWP is the most likely of the types to be known as a "cherry picker", owing to its origins, where it was designed for use in orchards (though not just cherry orchards). It lets the picker standing in the transport basket pick fruit high in a tree with relative ease (with the jointed design ensuring minimum damage to the tree). The term "cherry picker" has become generic, and is commonly used to describe articulated lifts (and more rarely all AWPs). 
Another type of aerial device is a straight boom lift or telescopic boom lift, which as its name suggests has a boom that extends straight out for direct diagonal or vertical reach by the use of telescoping sections, letting you take full advantage of the boom length range. Some AWPS are classified as spider lifts due to the appearance of their legs as they unfold, extend and stabilise, providing a wide supportive base to operate safely. These legs can be manual or hydraulic (usually depending on size and price of the machine). AWPs are widely used for maintenance and construction of all types, including extensively in the power and telecommunications industries to service overhead lines, and in arboriculture to provide an independent work platform on difficult or dangerous trees. A specialist type of the articulated lift is the type of fire apparatus used by firefighters worldwide as a vehicle to provide high level or difficult access. These types of platforms often have additional features such as a piped water supply and water cannon to aid firefighters in their task. Scissor lift A scissor lift is a type of platform that can usually only move vertically. The mechanism to achieve this is the use of linked, folding supports in a criss-cross X pattern, known as a pantograph (or scissor mechanism). The upward motion is achieved by the application of pressure to the outside of the lowest set of supports, elongating the crossing pattern, and propelling the work platform vertically. The platform may also have an extending deck to allow closer access to the work area, because of the inherent limits of vertical-only movement. The contraction of the scissor action can be hydraulic, pneumatic or mechanical (via a leadscrew or rack and pinion system). Depending on the power system employed on the lift, it may require no power to descend, able to do so with a simple release of hydraulic or pneumatic pressure. This is the main reason that these methods of powering the lifts are preferred, as it allows a fail-safe option of returning the platform to the ground by release of a manual valve. Apart from the height and width variables, there are a few considerations required when choosing a scissor lift. Electric scissor lifts have smaller tyres and can be charged by a standard power point. These machines usually suit level ground surfaces and have zero or minimal fuel emissions. Diesel scissor lifts have larger rough terrain tyres with high ground clearance for uneven outdoor surface conditions. Many machines contain outriggers that can be deployed to stabilise the machine for operation. Hotel lift There are a number of smaller lifts that use mechanical devices to extend, such as rack and pinion or screw threads. These often have juxtaposed sections that move past each other in order to facilitate movement, usually in a vertical direction only. These lifts usually have limited capability in terms of weight and extension, and are most often used for internal maintenance tasks, such as changing light bulbs. Motive mechanisms AWPs, by their nature, are designed for temporary works and therefore frequently require transportation between sites, or simply around a single site (often as part of the same job). For this reason, they are almost all designed for easy movement and the ability to ride up and down truck ramps. Unpowered These usually smaller units have no motive drive and require external force to move them. 
Dependent on size and whether they are wheeled or otherwise supported, this may be possible by hand, or may require a vehicle for towing or transport. Small non-powered AWPs can be light enough to be transported in a pickup truck bed, and can usually be moved through a standard doorway. Self-propelled These units are able to drive themselves (on wheels or tracks) around a site (they usually require to be transported to a site, for reasons of safety and economy). In some instances, these units will be able to move whilst the job is in progress, although this is not possible on units which require secure outriggers, and therefore most common on the scissor lift types. The power can be almost any form of standard mechanical drive system, including electric or gasoline powered, or in some cases, a hybrid (especially where it may be used both inside and outside). Such person lifts are distinguished from telescopic handlers in that the latter are true cranes designed to deliver cargo loads such as pallets full of construction materials (rather than just a person with some tools). Vehicle-mounted Some units are mounted on a vehicle, usually a truck. They can also be mounted on a flat-back pick-up van known as a self drive, though other vehicles are possible, such as flatcars. This vehicle provides mobility, and may also help stabilize the unit – though outrigger stabilizers are still typical, especially as vehicle-mounted AWPs are amongst the largest of their kind. The vehicle may also increase functionality by serving as a mobile workshop or store. Control The power assisted drive (if fitted) and lift functions of an AWP are controlled by an operator, who can be situated either on the work platform itself, or at a control panel at the base of the unit. Some models are fitted with a panel at both locations or with a remote control, giving operator a choice of position. A control panel at the base can also function as a safety feature if for any reason the operator is at height and becomes unable to operate his controls. Even models not fitted with a control panel at the base are usually fitted with an emergency switch of some sort, which allows manual lowering of the lift (usually by the release of hydraulic or pneumatic pressure) in the event of an emergency or power failure. Controls vary by model, but are frequently either buttons or a joystick. The type and complexity of these will depend on the functions the platform is able to perform, such as: Vertical movement Lateral movement Rotational movement (cardinal direction) Platform / basket movement — normally, the system automatically levels the platform, regardless of boom position, but some allow overrides, tilting up to 90° for work in difficult locations. Ground movement (in self-propelled models) Safety The majority of manufacturers and operators have strict safety criteria for the operation of AWPs. In some countries, a licence and insurance is required to operate some types of AWP. Most protocols advocate training every operator, whether mandated or not. Most operators adopt a checklist of verifications to be completed before each use. Manufacturers recommend regular maintenance schedules. Work platforms are fitted with safety or guard rails around the platform itself to contain operators and passengers. This is supplemented in most models by a restraining point, designed to secure a safety harness or fall arrester. 
Some work platforms also have a lip around the floor of the platform itself to prevent tools or supplies being accidentally kicked off the platform. Some protocols require all equipment to be attached to the structure by individual lanyards. When using AWPs in the vicinity of overhead power lines, users may be electrocuted if the lift comes into contact with electrical wiring. Non-conductive materials, such as fiberglass, may be used to reduce this hazard. 'No Go Zones' may be designated near electrical hazards. AWPs often come equipped with a variety of load and tilt sensors. The most commonly activated is an overweight sensor that will not allow the platform to be raised if the maximum operating weight is exceeded. Sensors within the machine also detect when the weight on the platform is so far off-balance that raising it further would risk a tip-over. Another sensor prevents the platform from being extended if the machine is on a significant incline. Some models of AWPs additionally feature counterweights, which extend in order to offset the tipping hazard inherent in extending items like booms or bridges. As with most dangerous mechanical devices, all AWPs are fitted with an emergency stop button which may be activated by a user in the event of a malfunction or danger. Best practice dictates fitting emergency stop buttons on the platform and at the base as a minimum. Other safety features include automatic self-checking of the AWP's working parts, including a voltmeter that detects whether the lift has insufficient power to complete its tasks and prevents operation if the supply voltage is too low. Some AWPs provide manual lowering levers at the base of the machine, allowing operators to lower the platform to the ground in the event of a power or control failure, or unauthorized use of the machine. Rental equipment AWPs are often bought by equipment rental companies, who then rent them out to construction companies or individuals needing these specialised machines. The market for these machines is marked by especially strong boom-and-bust cycles; after great demand in the 1990s the market crashed in 2001, leading to a sharp contraction amongst the manufacturers. The industry began a strong growth period again in 2003 that resulted in peak shipments in 2007, prior to the economic crash in 2008. The 2008 crash caused a strong consolidation amongst rental companies, and the industry reached high unit shipment levels again in 2018. See also Early aerials Forklift truck Helix tower Lift table Long reach excavator References External links International Powered Access Federation 1944 introductions American inventions Articles containing video clips Vehicles by type Vertical transport devices
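The safety discussion above describes sensors that block raising or extending the platform when it is overloaded, off-balance, or on an incline, plus an emergency stop. The sketch below is a purely hypothetical illustration of that interlock logic; the sensor fields, thresholds and function names are inventions for the example and do not describe any real manufacturer's control system.

```python
from dataclasses import dataclass

@dataclass
class SensorReadings:
    platform_load_kg: float   # current load on the platform
    tilt_degrees: float       # chassis tilt from level
    emergency_stop: bool      # state of the e-stop button

# Hypothetical limits for illustration only; real machines use
# manufacturer-specific ratings and certified interlock hardware.
MAX_LOAD_KG = 230.0
MAX_TILT_DEGREES = 3.0

def raise_permitted(readings: SensorReadings) -> bool:
    """Mirror the interlocks described above: block raising if the platform
    is overloaded, the machine is on a significant incline, or the
    emergency stop has been pressed."""
    if readings.emergency_stop:
        return False
    if readings.platform_load_kg > MAX_LOAD_KG:
        return False
    if readings.tilt_degrees > MAX_TILT_DEGREES:
        return False
    return True

if __name__ == "__main__":
    print(raise_permitted(SensorReadings(180.0, 1.2, False)))  # True
    print(raise_permitted(SensorReadings(260.0, 1.2, False)))  # False (overloaded)
```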
Aerial work platform
[ "Technology" ]
2,544
[ "Vertical transport devices", "Transport systems" ]
1,022,286
https://en.wikipedia.org/wiki/Derivative%20algebra%20%28abstract%20algebra%29
In abstract algebra, a derivative algebra is an algebraic structure of the signature <A, ·, +, ', 0, 1, D> where <A, ·, +, ', 0, 1> is a Boolean algebra and D is a unary operator, the derivative operator, satisfying the identities: (1) 0D = 0; (2) xDD ≤ x + xD; (3) (x + y)D = xD + yD. xD is called the derivative of x. Derivative algebras provide an algebraic abstraction of the derived set operator in topology. They also play the same role for the modal logic wK4 = K + (p∧□p → □□p) that Boolean algebras play for ordinary propositional logic. References Esakia, L., Intuitionistic logic and modality via topology, Annals of Pure and Applied Logic, 127 (2004) 155–170. McKinsey, J.C.C. and Tarski, A., The Algebra of Topology, Annals of Mathematics, 45 (1944) 141–191. Algebras Boolean algebra Topology
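For readability, the three defining identities above can be restated in display form, writing xD as x^D. This is only a typeset restatement of the identities already listed, together with a reminder of how ≤ is defined in the underlying Boolean algebra; it adds no material beyond the entry itself.

```latex
% Defining identities of a derivative algebra <A, ·, +, ', 0, 1, D>,
% restated with x^D denoting the derivative of x.
\begin{align}
  0^{D} &= 0 \\
  x^{DD} &\leq x + x^{D} \\
  (x + y)^{D} &= x^{D} + y^{D}
\end{align}
% Here a \leq b abbreviates a \cdot b = a (equivalently a + b = b)
% in the underlying Boolean algebra <A, ·, +, ', 0, 1>.
```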
Derivative algebra (abstract algebra)
[ "Physics", "Mathematics" ]
216
[ "Boolean algebra", "Algebra stubs", "Mathematical structures", "Algebras", "Mathematical logic", "Fields of abstract algebra", "Topology", "Space", "Algebraic structures", "Geometry", "Spacetime", "Algebra" ]
1,022,321
https://en.wikipedia.org/wiki/Roe%20effect
The Roe effect is a hypothesis about the long-term effect of abortion on the political balance of the United States, which suggests that since supporters of the legalization of abortion cause the erosion of their own political base, the practice of abortion will eventually lead to the restriction or illegalization of abortion. It is named after Roe v. Wade, the U.S. Supreme Court case that effectively legalized abortion nationwide in the U.S. Its best-known proponent is James Taranto of The Wall Street Journal who coined the phrase "Roe effect" in Best of the Web Today, his OpinionJournal.com column. Put simply, this hypothesis holds that: Those who favor legal abortion are much more likely to have the procedure than those who oppose it. Children usually follow their parents' political leanings. Therefore, pro-abortion rights parents will have more abortions and, hence, fewer children. Therefore, the pro-abortion rights population gradually shrinks in proportion to the anti-abortion population. Therefore, support for legal abortions will decline over time. A similar argument suggests that political groups that oppose abortion will tend to have more supporters in the long run than those who support it. In 2005, the Wall Street Journal published a detailed explanation and statistical evidence that Taranto says supports his hypothesis. Taranto first discussed the concept in January 2003, and named it in December 2003. He later suggested that the Roe effect serves as an explanation for the fact that the fall in teen birth rates is "greatest in liberal states, where pregnant teenagers would be more likely to [have abortions] and thus less likely to carry their babies to term." The Journal has also published articles about this topic by Larry L. Eastland and Arthur C. Brooks. Eastland has argued that Democrats have had higher rates of abortion than Republicans following Roe, while Brooks points out liberals have a lower fertility rate than conservatives. According to American historian Elizabeth Fox-Genovese the existence of such an effect "cannot be doubted" but "its nature, causes, and consequences may be." Fox-Genovese said that "Taranto has advanced an arresting argument that deserves more extended treatment." Wellesley College Professor of Economics Phillip Levine, while acknowledging that Taranto's hypothesis cannot be dismissed out of hand, has said there are several flaws in Taranto's reasoning. He writes that the conditions laid out by Taranto make several incorrect assumptions, most notably that pregnancies are events that are completely out of the control of the women. He writes, "If people engage in sexual activity (or not), or choose to use birth control (or not), independent of outside influences, then [Taranto's and Eastland's] statistical statements would be valid." Levine concludes that the hypothesis passes the test of plausibility but that it "would be unwarranted to draw any definitive conclusions regarding the actual contribution of the Roe Effect in determining contemporary political outcomes." References See also Abortion in the United States Legalized abortion and crime effect William Bennett Abortion in the United States Hypotheses Politics of the United States Demography
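The numbered argument above is essentially an exercise in compound demographic arithmetic. The sketch below is a purely illustrative toy model of that arithmetic; the fertility rates, inheritance probability and starting split are hypothetical placeholders chosen only for the example, not data from the article or from any study, and the model says nothing about whether the hypothesis is empirically true.

```python
# Toy model of the Roe-effect arithmetic summarized above.
# All parameter values are hypothetical placeholders.

def project_share(gens, share_a=0.5, fert_a=1.8, fert_b=2.2, inherit=0.8):
    """Return group A's population share after `gens` generations.

    share_a : initial share of group A (e.g. supporters of legal abortion)
    fert_a, fert_b : children per person in groups A and B
    inherit : probability a child keeps the parent's group affiliation
    """
    a, b = share_a, 1.0 - share_a
    for _ in range(gens):
        children_a = a * fert_a
        children_b = b * fert_b
        # children who do not inherit their parents' leaning switch groups
        new_a = children_a * inherit + children_b * (1.0 - inherit)
        new_b = children_b * inherit + children_a * (1.0 - inherit)
        total = new_a + new_b
        a, b = new_a / total, new_b / total
    return a

if __name__ == "__main__":
    for g in (1, 2, 3, 5):
        print(f"after {g} generation(s): group A share = {project_share(g):.3f}")
```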
Roe effect
[ "Environmental_science" ]
635
[ "Demography", "Environmental social science" ]
1,022,443
https://en.wikipedia.org/wiki/Andy%20Hopper
Sir Andrew Hopper (born 9 May 1953) is a British-Polish computer technologist and entrepreneur. He is Chairman of lowRISC CIC, a Commissioner of the Royal Commission for the Exhibition of 1851, former Treasurer and Vice-President of the Royal Society, Professor Emeritus of Computer Technology at the University of Cambridge, an Honorary Fellow of Trinity Hall, Cambridge and Corpus Christi College, Cambridge. Education Hopper was educated at Quintin Kynaston School in London after which he went to study for a Bachelor of Science degree at Swansea University before going to the University of Cambridge Computer Laboratory and Trinity Hall, Cambridge in 1974 for postgraduate work. Hopper was awarded his PhD in 1978 for research into local area computer communications networks supervised by David Wheeler. Research and career Hopper's PhD, completed in 1977 was in the field of communications networks, and he worked with Maurice Wilkes on the creation of the Cambridge Ring and its successors. Hopper's interests include computer networks, multimedia systems, Virtual Network Computing, sentient computing and ubiquitous data. His most cited paper describes the indoor location system called the Active Badge. He has contributed to a discussion of the privacy challenges relating to surveillance. He is a proponent of Digital Commons industrial and societal infrastructure. After more than 20 years at Cambridge University Computer Laboratory, Hopper was elected Chair of Communications Engineering at Cambridge University Engineering Department in 1997. He returned to the Computer Laboratory as Professor of Computer Technology and Head of Department in 2004. Hopper's research under the title Computing for the Future of the Planet examines the uses of computers, data and AI for assuring the sustainability of the planet. Hopper has supervised approximately fifty PhD students. An annual PhD studentship has been named after him. Commercial activities In 1978, Hopper co-founded Orbis Ltd to develop networking technologies. He worked with Hermann Hauser and Chris Curry, founders of Acorn Computers Ltd. Orbis became a division of Acorn in 1979 and continued to work with the Cambridge Ring. While at Acorn, Hopper contributed to design some of the chips for the BBC Micro and helped conceive the project which led to the design of the ARM microprocessor. When Acorn was acquired by Olivetti in 1985, Hauser became vice-president for research at Olivetti, in which role he co-founded the Olivetti Research Laboratory in 1986 with Hopper; Hopper became its managing director. In 1985, after leaving Acorn, Hopper co-founded Qudos, a company producing CAD software and doing chip prototyping. He remained a director until 1989. In 1993, Hopper set up Advanced Telecommunication Modules Ltd with Hermann Hauser. This company went public on the NASDAQ as Virata in 1999. The company was acquired by Conexant Systems on 1 March 2004. In 1995, Hopper co-founded Telemedia Systems, now called IPV, and was its chairman until 2003. In 1997, Hopper co-founded Adaptive Broadband Ltd (ABL) to further develop the 'Wireless ATM' project started at ORL in the early 90s. ABL was bought by California Microwave, Inc in 1998. In January 2000, Hopper co-founded Cambridge Broadband which was to develop broadband fixed wireless equipment; he was non-executive chairman from 2000 – 2005. 
In 2002 Hopper was involved in the founding of Ubisense Ltd to further develop the location technologies and sentient computing concepts that grew out of the ORL Active Badge system. Hopper became a director in 2003 and was chairman between 2006 and 2015, during which time the company made its initial public offering (IPO) in June 2011. In 2002, Hopper co-founded RealVNC and served as chairman until the company was sold in 2021. In 2002, Hopper co-founded Level 5 Networks and was a director until 2008, just after it merged with Solarflare. From 2005 until 2009, Hopper was chairman of Adventiq, a joint venture between Adder and RealVNC, developing a VNC-based system-on-a-chip. In 2013 Hopper co-founded TxtEz, a company looking to commoditise B2C communication in Africa. Since 2019 he has been Chairman of lowRISC Community Interest Company, which develops industrial-strength open source hardware. Hopper was an advisor to Hauser's venture capital firm Amadeus Capital Partners from 2001 until 2005. He was also an advisor to the Cambridge Gateway Fund from 2001 until 2006. Awards and honours Hopper is a Fellow of the Institution of Engineering and Technology (FIET) and was a Trustee from 2003 until 2006, and again between 2009 and 2013. In 2004, Hopper was awarded the Mountbatten Medal of the IET (then IEE). He served as president of the IET between 2012 and 2013. Hopper was elected a Fellow of the Royal Academy of Engineering in 1996 and awarded their silver medal in 2003. He was a member of the Council of the Royal Academy of Engineering from 2007 to 2010. In 2013, he was part of the RealVNC team that received the MacRobert Award. In 1999, Hopper gave the Royal Society's Clifford Paterson Lecture on progress and research in the communications industry, published under the title Sentient Computing, and was thus awarded the society's bronze medal for achievement. In May 2006, he was elected a Fellow of the Royal Society. He was a member of the Council of the Royal Society between 2009 and 2011. In 2017, Hopper became treasurer and vice-president of the Royal Society and was awarded the Bakerian Lecture and Prize. In the 2007 New Year Honours, Hopper was made a CBE for services to the computer industry. In 2004, Hopper was awarded the Association for Computing Machinery's SIGMOBILE Outstanding Contribution Award and in 2016 the Test-of-Time Award for the Active Badge paper. In July 2005, Hopper was awarded an Honorary Fellowship of Swansea University. In 2010 Hopper was awarded an Honorary Degree from Queen's University Belfast. In 2011 Hopper was elected as a member of the Council and a Trustee of the University of Cambridge and a member of the Finance Committee. Hopper serves on several academic advisory boards. In 2005, he was appointed to the advisory board of the Institute of Electronics Communications and Information Technology at Queen's University Belfast. In 2008 he joined the advisory board of the Department of Computer Science, University of Oxford. In 2011 he was appointed a member of the advisory board of the School of Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne. Since 2018 he has been a Commissioner of the Royal Commission for the Exhibition of 1851. He was knighted in the 2021 Birthday Honours for services to computer technology. Personal life Hopper married Alison Gail Smith, Professor of Plant Biochemistry at the University of Cambridge, in 1988. They have two children, William and Merrill.
He is a qualified pilot with over 6,000 hours logged, including a round the world flight, and his house near Cambridge has an airstrip from which he flies his six-seater Cessna light aircraft. References 1953 births Living people Members of the University of Cambridge Computer Laboratory British computer scientists British technology company founders Acorn Computers Alumni of Swansea University Fellows of Corpus Christi College, Cambridge Fellows of the Royal Society Fellows of the Royal Academy of Engineering Commanders of the Order of the British Empire Fellows of the Institution of Engineering and Technology Polish emigrants to the United Kingdom Businesspeople from Warsaw British businesspeople Alumni of Trinity Hall, Cambridge People from Little Shelford Knights Bachelor
Andy Hopper
[ "Engineering" ]
1,494
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
1,022,471
https://en.wikipedia.org/wiki/Solarium%20%28myrmecology%29
In myrmecology, a solarium is an above-ground earthen structure constructed by some ant species for the purpose of nest thermoregulation and brood incubation. Solaria are usually dome-shaped and fashioned from a paper-thin layer of soil, connected to the main nest by way of subterranean runs. Some species, such as Formica candida, construct solaria using plant materials. Tapinoma erraticum is an example of a solaria-constructing species whose skill at so doing was noted by Horace Donisthorpe in the early 20th century in his book British Ants, their Life Histories and Classification. References Myrmecology Shelters built or used by animals
Solarium (myrmecology)
[ "Biology" ]
141
[ "Ethology", "Behavior", "Shelters built or used by animals" ]
1,022,803
https://en.wikipedia.org/wiki/Drive%20shaft
A drive shaft, driveshaft, driving shaft, tailshaft (Australian English), propeller shaft (prop shaft), or Cardan shaft (after Girolamo Cardano) is a component for transmitting mechanical power, torque, and rotation, usually used to connect other components of a drivetrain that cannot be connected directly because of distance or the need to allow for relative movement between them. As torque carriers, drive shafts are subject to torsion and shear stress, equivalent to the difference between the input torque and the load. They must therefore be strong enough to bear the stress, while avoiding too much additional weight as that would in turn increase their inertia. To allow for variations in the alignment and distance between the driving and driven components, drive shafts frequently incorporate one or more universal joints, jaw couplings, or rag joints, and sometimes a splined joint or prismatic joint. History The term driveshaft first appeared during the mid-19th century. In Stover's 1861 patent reissue for a planing and matching machine, the term is used to refer to the belt-driven shaft by which the machine is driven. The term is not used in his original patent. Another early use of the term occurs in the 1861 patent reissue for the Watkins and Bryson horse-drawn mowing machine. Here, the term refers to the shaft transmitting power from the machine's wheels to the gear train that works the cutting mechanism. In the 1890s, the term began to be used in a manner closer to the modern sense. In 1891, for example, Battles referred to the shaft between the transmission and driving trucks of his Climax locomotive as the drive shaft, and Stillman referred to the shaft linking the crankshaft to the rear axle of his shaft-driven bicycle as a drive shaft. In 1899, Bukey used the term to describe the shaft transmitting power from the wheel to the driven machinery by a universal joint in his Horse-Power. In the same year, Clark described his Marine Velocipede using the term to refer to the gear-driven shaft transmitting power through a universal joint to the propeller shaft. Crompton used the term to refer to the shaft between the transmission of his steam-powered Motor Vehicle of 1903 and the driven axle. The pioneering automobile industry company, Autocar, was the first to use a drive shaft in a gasoline-powered car. Built in 1901, today this vehicle is in the collection of the Smithsonian Institution. Automotive drive shaft Vehicles An automobile may use a longitudinal shaft to deliver power from an engine/transmission to the other end of the vehicle before it goes to the wheels. A pair of short drive shafts is commonly used to send power from a central differential, transmission, or transaxle to the wheels. Front-engine, rear-wheel drive In front-engined, rear-wheel drive vehicles, a longer drive shaft is also required to send power the length of the vehicle. Two forms dominate: The torque tube with a single universal joint and the more common Hotchkiss drive with two or more joints. This system became known as Système Panhard after the automobile company Panhard et Levassor which patented it. Most of these vehicles have a clutch and gearbox (or transmission) mounted directly on the engine, with a drive shaft leading to a final drive in the rear axle. When the vehicle is stationary, the drive shaft does not rotate. 
Some vehicles (generally sports cars, such as the Chevrolet Corvette C5/C6/C7, Alfa Romeo Alfetta and Porsche 924/944/928), seeking improved weight balance between front and rear, use a rear-mounted transaxle. In some non-Porsche models, this places the clutch and transmission at the rear of the car and the drive shaft between them and the engine. In this case the drive shaft rotates continuously with the engine, even when the car is stationary and out of gear. However, the Porsche 924/944/928 models have the clutch mounted to the back of the engine in a bell housing and the drive shaft from the clutch output, located inside of a hollow protective torque tube, transfers power to the rear mounted transaxle (transmission + differential). Thus the Porsche driveshaft only rotates when the rear wheels are turning as the engine-mounted clutch can decouple engine crankshaft rotation from the driveshaft. So for Porsche, when the driver is using the clutch while briskly shifting up or down (manual transmission), the engine can rev freely with the driver's accelerator pedal input, since with the clutch disengaged, the engine and flywheel inertia is relatively low and is not burdened with the added rotational inertia of the driveshaft. The Porsche torque tube is solidly fastened to both the engine's bell housing and to the transaxle case, fixing the length and alignment between the bell housing and the transaxle and greatly minimizing rear wheel drive reaction torque from twisting the transaxle in any plane. A drive shaft connecting a rear differential to a rear wheel may be called a half-shaft. The name derives from the fact that two such shafts are required to form one rear axle. Early automobiles often used chain drive or belt drive mechanisms rather than a drive shaft. Some used electrical generators and motors to transmit power to the wheels. Front-wheel drive In British English, the term drive shaft is restricted to a transverse shaft that transmits power to the wheels, especially the front wheels. The shaft connecting the gearbox to a rear differential is called a "propeller shaft", or "prop-shaft". A prop-shaft assembly consists of a propeller shaft, a slip joint and one or more universal joints. Where the engine and axles are separated from each other, as on four-wheel drive and rear-wheel drive vehicles, it is the propeller shaft that serves to transmit the drive force generated by the engine to the axles. Several different types of drive shaft are used in the automotive industry: One-piece drive shaft Two-piece drive shaft Slip-in-tube drive shaft The slip-in-tube drive shaft is a new type that improves crash safety. It can be compressed to absorb energy in the event of a crash, so is also known as a "collapsible drive shaft". Four wheel and all-wheel drive These evolved from the front-engine rear-wheel drive layout. A new form of transmission called the transfer case was placed between transmission and final drives in both axles. This split the drive to the two axles and may also have included reduction gears, a dog clutch or differential. At least two drive shafts were used, one from the transfer case to each axle. In some larger vehicles, the transfer box was centrally mounted and was itself driven by a short drive shaft. 
In vehicles the size of a Land Rover, the drive shaft to the front axle is noticeably shorter and more steeply articulated than the rear shaft, making a reliable drive shaft a more difficult engineering problem, one that may involve a more sophisticated form of universal joint. Modern light cars with all-wheel drive (notably Audi or the Fiat Panda) may use a system that more closely resembles a front-wheel drive layout. The transmission and final drive for the front axle are combined into one housing alongside the engine, and a single drive shaft runs the length of the car to the rear axle. This is a favoured design where the torque is biased to the front wheels to give car-like handling, or where the maker wishes to produce both four-wheel drive and front-wheel drive cars with many shared components. Research and development The automotive industry also uses drive shafts at testing plants. At an engine test stand, a drive shaft is used to transfer a certain speed or torque from the internal combustion engine to a dynamometer. A "shaft guard" is used at a shaft connection to protect against contact with the drive shaft and to detect a shaft failure. At a transmission test stand a drive shaft connects the prime mover with the transmission. Symptoms of a bad drive shaft An automotive drive shaft can typically last about 120,000 kilometres. However, if the vehicle shows any of the signs below, drivers should have it checked as soon as possible. Clicking or squeaking noise: The driver may hear clicking, squeaking, or grinding noises coming from underneath the vehicle when driving. Clunking sounds: These are heard especially when turning the vehicle, accelerating, or putting it into reverse. Vibration: An early and common symptom of a failing drive shaft is an intense vibration coming from underneath the vehicle. Worn-out couplings, U-joints, or bearings cause excessive drive shaft vibration. Turning problems: Problems with turning the vehicle, during both slow and high-speed driving, are another significant sign of a bad drive shaft. Cardan shaft park brakes A cardan shaft park brake works on the drive shaft rather than the wheels. These brakes are commonly used on small trucks. This type of brake is prone to failure and has led to incidents where the truck has run away on a slope, leading to safety alerts being issued. Heavy vehicles that have this type of park brake usually have a ratchet handle similar to a car's hand brake or parking brake, as opposed to an air brake button or lever. Risk factors for drivers include parking on a steep slope when heavily loaded, not applying the brake with enough force, changing the load or load balance while parked on a slope, or parking where one side of the vehicle is able to slip. Using chocks on the wheels is one way of preventing the vehicle from moving on a slope. Motorcycle drive shafts Drive shafts have been used on motorcycles since before World War I, examples including the Belgian FN motorcycle of 1903 and the Stuart Turner Stellar motorcycle of 1912. As an alternative to chain and belt drives, drive shafts offer long-lived, clean, and relatively maintenance-free operation. A disadvantage of shaft drive on a motorcycle is that helical gearing, spiral bevel gearing or similar is needed to turn the power 90° from the shaft to the rear wheel, losing some power in the process. BMW has produced shaft-drive motorcycles since 1923, and Moto Guzzi has built shaft-drive V-twins since the 1960s.
The British company Triumph and the major Japanese brands Honda, Suzuki, Kawasaki and Yamaha have produced shaft-drive motorcycles. Lambretta motor scooters from type A up to type LD are shaft-driven, as is the NSU Prima scooter. Motorcycle engines positioned such that the crankshaft is longitudinal and parallel to the frame are often used for shaft-driven motorcycles. This requires only one 90° turn in power transmission, rather than two. Bikes from Moto Guzzi and BMW, plus the Triumph Rocket III and Honda ST series, all use this engine layout. Motorcycles with shaft drive are subject to shaft effect, where the chassis climbs when power is applied. This effect, which is the opposite of that exhibited by chain-drive motorcycles, is counteracted with systems such as BMW's Paralever, Moto Guzzi's CARC and Kawasaki's Tetra Lever. Marine drive shafts On a power-driven ship, the drive shaft, or propeller shaft, usually connects the propeller outside the vessel to the driving machinery inside, passing through at least one shaft seal or stuffing box where it intersects the hull. The thrust, the axial force generated by the propeller, is transmitted to the vessel by the thrust block or thrust bearing, which, in all but the smallest of boats, is incorporated in the main engine or gearbox. Shafts can be made of stainless steel or composite materials, depending on the type of ship in which they are installed. The portion of the drive train which connects directly to the propeller is known as the tail shaft. Locomotive drive shafts The Shay, Climax and Heisler locomotives, all introduced in the late 19th century, used quill drives to couple power from a centrally mounted multi-cylinder engine to each of the trucks supporting the engine. On each of these geared steam locomotives, one end of each drive shaft was coupled to the driven truck through a universal joint while the other end was powered by the crankshaft, transmission or another truck through a second universal joint. A quill drive also has the ability to slide lengthways, effectively varying its length. This is required to allow the bogies to rotate when passing a curve. Cardan shafts are used in some diesel locomotives (mainly diesel-hydraulics, such as the British Rail Class 52) and some electric locomotives (e.g. the British Rail Class 91). They are also widely used in diesel multiple units. Drive shafts in bicycles The drive shaft has served as an alternative to a chain drive in bicycles for the past century without ever becoming very popular. A shaft-driven bicycle (or "Acatène", from an early maker) has several advantages and disadvantages: Advantages The drive system is less likely to become jammed. The rider cannot become dirtied by chain grease or injured by "chain bite" when clothing or a body part catches between an unguarded chain and a sprocket. Lower maintenance than a chain system when the drive shaft is enclosed in a tube. More consistent performance. Dynamic Bicycles claims that a drive shaft bicycle can deliver 94% efficiency, whereas a chain-driven bike can deliver anywhere from 75 to 97% efficiency depending on condition. Disadvantages A drive shaft system is usually heavier than a chain system. Many of the advantages claimed by the drive shaft's proponents can be achieved on a chain-driven bicycle, such as covering the chain and sprockets. Use of lightweight derailleur gears with a high number of ratios is impossible, although hub gears can be used.
Wheel removal can be complicated in some designs (as it is for some chain-driven bicycles with hub gears). PTO drive shafts Drive shafts are one method of transferring power from an engine and PTO to vehicle-mounted accessory equipment, such as an air compressor. Drive shafts are used when there is not enough space beside the engine for the additional accessory; the shaft bridges the gap between the engine PTO and the accessory, allowing the accessory to be mounted elsewhere on the vehicle. Drive shaft production Nowadays new possibilities exist for the production process of drive shafts. The filament winding production process is gaining popularity for the creation of composite drive shafts. Several companies in the automotive industry are looking to adopt this knowledge for their high volume production process. See also Giubo List of auto parts Quill drive Shaft alignment Shaft collar References External links Vehicle parts Mechanical power control Mechanical power transmission Shaft drives
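The article notes at the outset that drive shafts must withstand torsion and shear stress while keeping weight, and hence rotational inertia, low. As a minimal illustration of how that stress is estimated, the sketch below applies the standard solid-circular-shaft torsion formula τ = 16T/(πd³). The formula is textbook mechanics of materials rather than anything stated in the article, and the torque and diameter are arbitrary example values.

```python
import math

def max_shear_stress_solid_shaft(torque_nm: float, diameter_m: float) -> float:
    """Maximum shear stress (Pa) in a solid circular shaft under pure torsion.

    Uses tau = T*r/J with J = pi*d^4/32 for a solid round section,
    which simplifies to tau = 16*T / (pi * d^3).
    """
    return 16.0 * torque_nm / (math.pi * diameter_m ** 3)

if __name__ == "__main__":
    # Arbitrary example values: 300 N·m of torque through a 30 mm shaft.
    tau = max_shear_stress_solid_shaft(300.0, 0.030)
    print(f"peak shear stress ≈ {tau / 1e6:.1f} MPa")
```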
Drive shaft
[ "Physics", "Technology" ]
2,981
[ "Mechanics", "Vehicle parts", "Mechanical power transmission", "Mechanical power control", "Components" ]
1,022,815
https://en.wikipedia.org/wiki/Borthwood%20Copse
Borthwood Copse, near Sandown, Isle of Wight, England is a piece of woodland owned by the National Trust and is one of the numerous copses which make up part of the medieval forest which covered most of the eastern end of the Island. Borthwood Copse sits on the outskirts of Newchurch, and is close to the neighbouring hamlet of Apse Heath and the villages of Queen's Bower and Alverstone. Borthwood Copse was originally a royal hunting ground. It was bequeathed to the National Trust in 1926 by Frank Morey. He had purchased it a few years earlier to preserve it for wildlife. Subsequent additions have been added to the land and it now covers a total of . There are some ancient oaks, and a distinctive grove of beech trees which stand amongst glades of coppiced sweet chestnut and hazel. The woodland is one of the very few examples of working coppice with standards which can be seen on the Isle of Wight. A bridleway and many smaller paths lead through the woodland, which is open to the public. It is particularly popular with visitors in the autumn with its vivid colours and, in the springtime, when carpeted with bluebells. Borthwood Copse is one of the countless locations in the Eastern Isle of Wight that are home to large numbers of Red Squirrels. Owing to its position on the downs, much of Borthwood Copse is hilly, and in wet weather the soil often becomes waterlogged and marshy, making travel through the copse on foot difficult. Within the wood is a viewpoint looking east from where you can catch a glimpse of Culver Down and the sea. As the copse climbs a small hill, Bembridge Windmill can be seen in the distance through the downs on clear days. Wildlife includes dormice, red squirrels, a wide range of bats, and many invertebrates. The view point is called Kite Hill. References See also List of old growth forests Forests and woodlands of the Isle of Wight National Trust properties on the Isle of Wight Old-growth forests
Borthwood Copse
[ "Biology" ]
423
[ "Old-growth forests", "Ecosystems" ]
1,022,879
https://en.wikipedia.org/wiki/Rural%20delivery%20service
Rural delivery service refers to services for the delivery of mail to rural areas. In many countries, rural mail delivery follows different rules and practices from those in urban areas. For example, in some areas rural delivery may require homeowners to travel to a centralized mail delivery depot or a community mailbox rather than being directly served by a door-to-door mail carrier; and even if direct door-to-door delivery is offered, houses may not even have their own unique mailing addresses, an entire road instead being assigned a single common address, such as a rural route number. Examples include Rural Free Delivery in the United States, the rural route system in Canada, and the Rural Mail Box addressing system in Australia. Because of the differences in the handling and delivery of mail in rural areas, rural letter carriers often follow different regulatory standards from urban postal workers; for example, rural postal delivery workers may not be required to wear a uniform and may be allowed to use their own vehicles rather than driving a postal truck. In Canada, rural letter carriers were for many years not considered employees of Canada Post but private contractors. See also List of postal entities Timeline of postal history Rural Internet References Postal systems Postal services Delivery Service
Rural delivery service
[ "Technology" ]
251
[ "Transport systems", "Postal systems" ]
1,022,948
https://en.wikipedia.org/wiki/Detroit%E2%80%93Windsor%20tunnel
The Detroit–Windsor tunnel (), also known as the Detroit–Canada tunnel, is an international highway tunnel connecting the cities of Detroit, Michigan, United States and Windsor, Ontario, Canada. It is the second-busiest crossing between the United States and Canada, the first being the Ambassador Bridge, which also connects the two cities, which are situated on the Detroit River. Overview The tunnel is long (nearly a mile). At its lowest point, the two-lane roadway is below the river surface. There is a wide no-anchor zone enforced on river traffic around the tunnel. The tunnel has three main levels. The bottom level brings in fresh air under pressure, which is forced into the mid level, where the traffic lanes are located. The ventilation system forces vehicle exhaust into the third level, which is then vented at each end of the tunnel. History Construction Construction began on the tunnel in the summer of 1928. The Detroit–Windsor tunnel was built by the firm Parsons, Klapp, Brinckerhoff and Douglas (the same firm that built the Holland Tunnel). The executive engineer was Burnside A. Value, the engineer of design was Norwegian-American engineer Søren Anton Thoresen, while fellow Norwegian-American Ole Singstad consulted, and designed the ventilation. Three different methods were used to construct the tunnel. The approaches were constructed using the cut-and-cover method. Beyond the approaches, a tunneling shield method was used to construct hand-bored tunnels. Most of the river section used the immersed tube method in which steam-powered dredgers dug a trench in the river bottom and then covered over with of mud. The nine -long tubes measured in diameter. The Detroit–Windsor tunnel was completed in 1930 at a total cost of approximately $25 million (around $ in dollars). It was the third underwater vehicular tunnel constructed in the United States, following the Holland Tunnel, between Jersey City, New Jersey, and downtown Manhattan, New York, and the Posey Tube, between Oakland and Alameda, California. Its creation followed the opening of cross-border rail freight tunnels including the St. Clair Tunnel between Port Huron, Michigan, and Sarnia, Ontario, in 1891 and the Michigan Central Railway Tunnel between Detroit and Windsor in 1910. The cities of Detroit and Windsor hold the distinction of jointly creating both the second and third tunnels between two nations in the world. The Detroit–Windsor tunnel is the world's third tunnel between two nations, and the first international vehicle tunnel. The Michigan Central Railway Tunnel, also under the Detroit River, was the second tunnel between two nations. The St. Clair Tunnel, between Port Huron, Michigan, and Sarnia, Ontario, under the St. Clair River, was the first. Operations since 2007 In 2007, billionaire Manuel Moroun, owner of the nearby Ambassador Bridge, attempted to purchase the American side of the tunnel. In 2008, the City of Windsor controversially attempted to purchase the American side for $75 million, but the deal fell through after a scandal involving then-Detroit Mayor Kwame Kilpatrick. Soon afterward, the city's finances were badly hit in a recession and the tunnel's future was in question. Following Detroit's July 2013 bankruptcy filing, Windsor Mayor Eddie Francis said that his city would consider purchasing Detroit's half of the tunnel if it was offered for sale. 
On July 25, 2013, the lessor, manager and operator of the tunnel, Detroit Windsor Tunnel LLC, and its parent company, American Roads, LLC, voluntarily filed for chapter 11 bankruptcy protection in the United States Bankruptcy Court for the Southern District of New York. The American lease was eventually purchased by Syncora Guarantee, a Bermuda-based insurance company. Soon afterward, the lease with Detroit was extended to 2040. Both Syncora and Windsor retained the Windsor-Detroit Tunnel Corporation to manage the daily operations and upkeep of the tunnel. In May 2018, Syncora sold its interest in American Roads, LLC for $220 million to DIF Capital Partners, a Dutch-based investment fund management company specializing in infrastructure assets. A $21.6 million renovation of the tunnel began in October 2017 to replace the aging concrete ceiling, along with other improvements to the infrastructure. Completion of the project was initially scheduled for June 2018, but work is ongoing as of 2021. Usage The Detroit–Windsor tunnel crosses the Canada–United States border; an International Boundary Commission plaque marking the boundary in the tunnel is placed between flags of the two countries. The tunnel is the second-busiest crossing between the United States and Canada after the nearby Ambassador Bridge. A 2004 Border Transportation Partnership study showed that 150,000 jobs in the region and $13 billion (U.S.) in annual production depend on the Windsor-Detroit international border crossing. Between 2001 and 2005, profits from the tunnel peaked, with the cities receiving over $6 million annually. A steep decline in traffic eliminated profits from the tunnel from 2008 until 2012, with a modest recovery in the years since. Traffic About 13,000 vehicles a day use the tunnel, even though it has only one lane in each direction and does not allow large trucks. Historically, the tunnel carried a smaller amount of commercial traffic than other nearby crossings because of physical and cargo restraints, as well as limits on accessing roadways. Passenger automobile traffic on the tunnel increased from 1972 until it peaked in 1999 at just under 10 million vehicle crossings annually. After 1999, automobile crossings through the tunnel declined, dropping under 5 million for the first time in over three decades in 2007. Traffic on the tunnel recovered slightly in the following years as the economy began to improve after 2008. Tolls Tolls were last increased on the Canadian side in July 2021, by 37% for those paying in Canadian currency and 11% for those paying in American currency. Standard tolls for non-commercial Canada-bound vehicles are US$7.50 or C$7.50; United States-bound tolls are US$6.75 or C$6.75. For frequent crossers, the Nexpress Toll Card offers cheaper rates. Commercial vehicles and buses are charged higher rates. Motorcycles, scooters and bicycles are prohibited. Features Tunnel truck for disabled vehicles When the tunnel first opened in the 1930s, the operators had a unique rescue vehicle to tow out disabled vehicles without having to back in or turn around to perform this role. The vehicle had two drivers, one facing in the opposite direction of the other. The vehicle was driven in, the disabled vehicle was hooked up, then the driver facing the other way drove it out. This emergency vehicle also carried a length of water hose with power drive and chemical fire extinguishers.
CKLW, WJR and the tunnel In the late 1960s, Windsor radio station CKLW AM 800 engineered a wiring setup which has allowed the station's signal to be heard clearly by automobiles traveling through the tunnel. Currently Detroit radio station WJR AM 760 can be heard clearly in the tunnel. Ventilation The upper and lower levels of the tunnel are used as exhaust and intake air ducts. One hundred-foot ventilation towers on both ends of the tunnel enable air exchange once every 90 seconds. Photo gallery See also Ambassador Bridge Gordie Howe International Bridge, a second bridge crossing currently under construction Detroit International Riverfront Transportation in metropolitan Detroit Detroit–Windsor References External links Windsor Detroit Borderlink Limited (Windsor Plaza) Detroit Windsor Tunnel LLC (Detroit Plaza) Tunnel Bus Detroit News archives: The Building of the Detroit–Windsor Tunnel Transport in Windsor, Ontario Transportation buildings and structures in Detroit Tunnels in Michigan Road tunnels in Ontario Crossings of the Detroit River Toll tunnels in the United States Canada–United States border crossings Historic Civil Engineering Landmarks Tunnels completed in 1930 Buildings and structures in Windsor, Ontario Toll tunnels in Canada Articles containing video clips Road tunnels in the United States Immersed tube tunnels in Canada Immersed tube tunnels in the United States International tunnels 1930 establishments in Michigan 1930 establishments in Ontario
Detroit–Windsor tunnel
[ "Engineering" ]
1,599
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
1,023,011
https://en.wikipedia.org/wiki/Allen%27s%20rule
Allen's rule is an ecogeographical rule formulated by Joel Asaph Allen in 1877, broadly stating that animals adapted to cold climates have shorter and thicker limbs and bodily appendages than animals adapted to warm climates. More specifically, it states that the body surface-area-to-volume ratio for homeothermic animals varies with the average temperature of the habitat to which they are adapted (i.e. the ratio is low in cold climates and high in hot climates). Explanation Allen's rule predicts that endothermic animals with the same body volume should have different surface areas that will either aid or impede their heat dissipation. Because animals living in cold climates need to conserve as much heat as possible, Allen's rule predicts that they should have evolved comparatively low surface area-to-volume ratios to minimize the surface area by which they dissipate heat, allowing them to retain more heat. For animals living in warm climates, Allen's rule predicts the opposite: that they should have comparatively high ratios of surface area to volume. Because animals with low surface area-to-volume ratios would overheat quickly, animals in warm climates should, according to the rule, have high surface area-to-volume ratios to maximize the surface area through which they dissipate heat. In animals Though there are numerous exceptions, many animal populations appear to conform to the predictions of Allen's rule. The polar bear has stocky limbs and very short ears that are in accordance with the predictions of Allen's rule, so does the snow leopard. In 2007, R.L. Nudds and S.A. Oswald studied the exposed lengths of seabirds' legs and found that the exposed leg lengths were negatively correlated with Tmaxdiff (body temperature minus minimum ambient temperature), supporting the predictions of Allen's rule. J.S. Alho and colleagues argued that tibia and femur lengths are highest in populations of the common frog that are indigenous to the middle latitudes, consistent with the predictions of Allen's rule for ectothermic organisms. Populations of the same species from different latitudes may also follow Allen's rule. R.L. Nudds and S.A. Oswald argued in 2007 that there is poor empirical support for Allen's rule, even if it is an "established ecological tenet". They said that the support for Allen's rule mainly draws from studies of single species, since studies of multiple species are "confounded" by the scaling effects of Bergmann's rule and alternative adaptations that counter the predictions of Allen's rule. J.S. Alho and colleagues argued in 2011 that, although Allen's rule was originally formulated for endotherms, it can also be applied to ectotherms, which derive body temperature from the environment. In their view, ectotherms with lower surface area-to-volume ratios would heat up and cool down more slowly, and this resistance to temperature change might be adaptive in "thermally heterogeneous environments". Alho said that there has been a renewed interest in Allen's rule due to global warming and the "microevolutionary changes" that are predicted by the rule. In humans Marked differences in limb lengths have been observed when different portions of a given human population reside at different altitudes. Environments at higher altitudes generally experience lower ambient temperatures. 
In Peru, individuals who lived at higher elevations tended to have shorter limbs, whereas those from the same population who inhabited the more low-lying coastal areas generally had longer limbs and larger trunks. Katzmarzyk and Leonard similarly noted that human populations appear to follow the predictions of Allen's rule. There is a negative association between body mass index and mean annual temperature for indigenous human populations, meaning that people who originate from colder regions have a heavier build for their height and people who originate from warmer regions have a lighter build for their height. Relative sitting height is also negatively correlated with temperature for indigenous human populations, meaning that people who originate from colder regions have proportionally shorter legs for their height and people who originate from warmer regions have proportionally longer legs for their height. In 1968, A.T. Steegman investigated the assumption that Allen's rule caused the structural configuration of the face of human populations adapted to polar climate. Steegman did an experiment that involved the survival of rats in the cold. Steegman said that the rats with narrow nasal passages, broader faces, shorter tails and shorter legs survived the best in the cold. Steegman said that the experimental results had similarities with the Arctic Mongoloids, particularly the Eskimo and Aleut, because these have similar morphological features in accordance with Allen's rule: a narrow nasal passage, relatively large heads, long to round heads, large jaws, relatively large bodies, and short limbs. Allen's rule may have also resulted in wide noses and alveolar and/or maxillary prognathism being more common in human populations in warmer regions, and the opposite in colder regions. Mechanism A contributing factor to Allen's rule in vertebrates may be that the growth of cartilage is at least partly dependent on temperature. Temperature can directly affect the growth of cartilage, providing a proximate biological explanation for this rule. Experimenters raised mice either at 7 degrees, 21 degrees or 27 degrees Celsius and then measured their tails and ears. They found that the tails and ears were significantly shorter in the mice raised in the cold in comparison to the mice raised at warmer temperatures, even though their overall body weights were the same. They also found that the mice raised in the cold had less blood flow in their extremities. When they tried growing bone samples at different temperatures, the researchers found that the samples grown in warmer temperatures had significantly more growth of cartilage than those grown in colder temperatures. See also Bergmann's rule, which correlates latitude with body mass in animals Gloger's rule, which correlates humidity with pigmentation in animals References Physiology Ecogeographic rules
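The explanation section of this entry turns on surface-area-to-volume ratios. As a purely illustrative numerical check (the shapes and dimensions below are arbitrary choices, not taken from the article), the sketch compares a sphere with closed cylinders of identical volume and shows that the more elongated body exposes more surface per unit volume, which is the geometric fact the rule relies on.

```python
import math

def sphere_sa_over_v(volume: float) -> float:
    """Surface-area-to-volume ratio of a sphere of the given volume."""
    r = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    return (4.0 * math.pi * r * r) / volume

def cylinder_sa_over_v(volume: float, aspect: float) -> float:
    """Surface-area-to-volume ratio of a closed cylinder of the given volume.

    `aspect` is height divided by radius; larger values mean a more
    elongated (limb-like) shape.
    """
    r = (volume / (math.pi * aspect)) ** (1.0 / 3.0)
    h = aspect * r
    return (2.0 * math.pi * r * r + 2.0 * math.pi * r * h) / volume

if __name__ == "__main__":
    v = 1.0  # arbitrary unit volume
    print(f"sphere          SA/V = {sphere_sa_over_v(v):.3f}")        # lowest ratio
    print(f"stubby cylinder SA/V = {cylinder_sa_over_v(v, 2):.3f}")
    print(f"long cylinder   SA/V = {cylinder_sa_over_v(v, 20):.3f}")  # highest ratio
```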
Allen's rule
[ "Biology" ]
1,257
[ "Biological rules", "Ecogeographic rules", "Physiology" ]
1,023,023
https://en.wikipedia.org/wiki/Liniment
Liniment (from , meaning "to anoint"), also called embrocation and heat rub, is a medicated topical preparation for application to the skin. Some liniments have a viscosity similar to that of water; others are lotion or balm; still, others are in transdermal patches, soft solid sticks, and sprays. Liniment usually is rubbed into the skin, which the active ingredients penetrate. Liniments are typically sold to relieve pain and stiffness, such as from muscular aches and strains, and arthritis. These are typically formulated from alcohol, acetone, or similar quickly evaporating solvents and contain counterirritant aromatic chemical compounds, such as methyl salicylate, benzoin resin, menthol, and capsaicin. They produce a feeling of warmth within the muscle of the area they are applied to, typically acting as rubefacients via a counterirritant effect. Methyl salicylate, which is the analgesic ingredient in some heat rubs, can be toxic if used in excess. Heating pads are also not recommended for use with heat rubs, because the added warmth may cause overabsorption of the active ingredients. Notable liniments A.B.C. Liniment was used from approximately 1880 to 1935. It was named for its three primary ingredients: aconite, belladonna, and chloroform. There were numerous examples of poisoning from the mixture, resulting in at least one death. Amrutanjan is an analgesic balm manufactured by Amrutanjan Healthcare. It was founded in 1893 by journalist and freedom fighter, Kasinathuni Nageswara Rao. Bengay, spelled Ben-Gay before 1995, was developed in France by Dr. Jules Bengué, and brought to America in 1898. It was originally produced by Pfizer Consumer Healthcare, which was acquired by Johnson & Johnson. Dr. Cox's Barbed Wire Liniment and Antiseptic, made by Myers Laboratory. Marketed as treatment for minor wounds (contains iodine) for livestock and humans, such as barbed wire scratches. IcyHot is a line of liniments produced and marketed by Chattem, now a subsidiary of Sanofi. Mentholatum Ointment, branded Deep Heat outside of the US, was introduced in December 1894 and is still produced today with numerous variations. Minard's Liniment: Dr. Levi Minard of Nova Scotia, branded as "The King of Pain," created his well-known liniment from camphor, ammonia water, and medical turpentine. Nine oils: a 19th-century preparation used on both horses and humans. Although druggists' books sometimes specified recipes, street doctors often promoted any kind of oil as the "nine oils". Opodeldoc: invented by the Renaissance physician Paracelsus. RUB A535: introduced in 1919 and manufactured by Church & Dwight in Canada. Thermacare: Acquired in 2020 by Italy's Angelini when it was spun off following the merger of Pfizer with GlaxoSmithKline's consumer healthcare division. Tiger Balm was developed during the 1870s in Rangoon, Burma by herbalist Aw Chu Kin, and brought to market by his sons. It is composed of 16% menthol and 28% oil of wintergreen. Watkins Liniment: One of Watkins Incorporated's original products. Use on horses Liniments are commonly used on horses following exercise, applied either by rubbing on full-strength, especially on the legs; or applied in a diluted form, usually added to a bucket of water and sponged on the body. They are used in hot weather to help cool down a horse after working, the alcohol cooling through rapid evaporation, and counterirritant oils dilating capillaries in the skin, increasing the amount of blood releasing heat from the body. 
Many horse liniment formulas in diluted form have been used on humans, though products for horses which contain DMSO are not suitable for human use, as DMSO carries the topical product into the bloodstream. Horse liniment ingredients such as menthol, chloroxylenol, or iodine are also used in different formulas in products used by humans. Absorbine, a horse liniment product manufactured by W.F. Young, Inc., was reformulated for humans and marketed as Absorbine Jr. The company also acquired other liniment brands including Bigeloil and RefreshMint. The equine version of Absorbine is sometimes used by humans, though, anecdotally, its benefits in humans may be because the smell of menthol releases serotonin, or due to a placebo effect. Earl Sloan was a US entrepreneur who made his initial fortune selling his father's horse liniment formula beginning in the period following the Civil War. Sloan's liniment with capsicum as a key ingredient was also marketed for human use. He later sold his company to the predecessor of Warner–Lambert, which was purchased in 2000 by Pfizer. References External links Dosage forms Drug delivery devices Ointments
Liniment
[ "Chemistry" ]
1,078
[ "Pharmacology", "Drug delivery devices" ]
1,023,079
https://en.wikipedia.org/wiki/Malachite%20green
Malachite green is an organic compound that is used as a dyestuff and controversially as an antimicrobial in aquaculture. Malachite green is traditionally used as a dye for materials such as silk, leather, and paper. Despite its name, the dye is not prepared from the mineral malachite; the name simply comes from the similarity of color. Structures and properties Malachite green is classified in the dyestuff industry as a triarylmethane dye and is also used in the pigment industry. Formally, malachite green refers to the chloride salt, although the term malachite green is used loosely and often just refers to the colored cation. The oxalate salt is also marketed. The anions have no effect on the color. The intense green color of the cation results from a strong absorption band at 621 nm. Malachite green is prepared by the condensation of benzaldehyde and dimethylaniline to give leuco malachite green (LMG): C6H5CHO + 2 C6H5N(CH3)2 -> C6H5CH(C6H4N(CH3)2)2 + H2O Second, this colorless leuco compound, a relative of triphenylmethane, is oxidized to the cation that is MG; a typical oxidizing agent is manganese dioxide. Hydrolysis of MG gives an alcohol. This alcohol is important because it, not MG, traverses cell membranes. Once inside the cell, it is metabolized into LMG. Only the cation MG is deeply colored, whereas the leuco and alcohol derivatives are not. This difference arises because only the cationic form has extended pi-delocalization, which allows the molecule to absorb visible light. Preparation The leuco form of malachite green was first prepared by Hermann Fischer in 1877 by condensing benzaldehyde and dimethylaniline in the molecular ratio 1:2 in the presence of sulfuric acid. Uses Malachite green is traditionally used as a dye. Kilotonnes of MG and related triarylmethane dyes are produced annually for this purpose. MG is active against the oomycete Saprolegnia, which infects fish eggs in commercial aquaculture; it has been used to treat Saprolegnia infections and as an antibacterial. It is a very popular treatment against Ichthyophthirius multifiliis in freshwater aquaria. The principal metabolite, leuco-malachite green (LMG), is found in fish treated with malachite green, and this finding is the basis of controversy and government regulation. See also Antimicrobials in aquaculture. MG has frequently been used to catch thieves and pilferers. The bait, usually money, is sprinkled with the anhydrous powder. Anyone handling the contaminated money will find that, upon washing their hands, a green stain that lasts for several days appears on the skin. Niche uses Numerous niche applications exploit the intense color of MG. It is used as a biological stain for microscopic analysis of cell biology and tissue samples. In the Gimenez staining method, basic fuchsin stains bacteria red or magenta, and malachite green is used as a blue-green counterstain. Malachite green is also used in endospore staining, since it can directly stain endospores within bacterial cells; here a safranin counterstain is often used. Malachite green is a part of Alexander's pollen stain. Malachite green can also be used as a saturable absorber in dye lasers, or as a pH indicator between pH 0.2–1.8. However, this use is relatively rare. Leuco-malachite green (LMG) is used as a detection method for latent blood in forensic science. Hemoglobin catalyzes the reaction between LMG and hydrogen peroxide, converting the colorless LMG into malachite green.
Therefore, the appearance of a green color indicates the presence of blood. A set of malachite green derivatives is also a key component in a fluorescence microscopy tool called the fluorogen activating protein/fluorogen system. Malachite green is in a class of molecules called fluorophores. When malachite green's rotational freedom is restricted, it transforms from a non-fluorescent molecule to a highly fluorescent molecule. In the fluorogen activating protein tool, established by a group at Carnegie Mellon University, malachite green binds a specific fluorogen activating protein to become highly fluorescent. Expression of the fluorogen activating protein as fusions of targeting domains can impart subcellular localization. Its use is similar to that of GFP but has the added benefit of having a 'dark state' before the malachite green fluorophore is added. This is especially useful for FRET studies. Regulation In 1992, Canadian authorities determined that eating fish contaminated with malachite green posed a significant health risk. Malachite green was classified as a Class II Health Hazard. Due to its low manufacturing cost, malachite green is still used in certain countries with less restrictive laws for non-aquaculture purposes. In 2005, analysts in Hong Kong found traces of malachite green in eels and fish imported from China. In 2006, the United States Food and Drug Administration (FDA) detected malachite green in seafood from China, among others, where the substance is also banned for use in aquaculture. In June 2007, the FDA blocked the importation of several varieties of seafood due to continued malachite green contamination. Malachite green has been banned in the United States since 1983 in food-related applications. The substance is also banned in the United Kingdom, and its use in food is prohibited in Macao. Animals metabolize malachite green to its leuco form. Being lipophilic (the leuco form has a log P of 5.70), the metabolite is retained in catfish muscle longer (half-life of 10 days) than is the parent molecule (half-life of 2.8 days). Toxicity The median lethal dose (oral, mouse) is 80 mg/kg. Rats fed malachite green experience "a dose-related increase in liver DNA adducts" along with lung adenomas. Leucomalachite green causes an "increase in the number and severity of changes". As leucomalachite green is the primary metabolite of malachite green and is retained in fish muscle much longer, most human dietary intake of malachite green from eating fish would be in the leuco form. During the experiment, rats were fed up to 543 ppm of leucomalachite green, an extreme amount compared to the average 5 ppb discovered in fish. After a period of two years, an increase in lung adenomas in male rats was discovered, but no incidence of liver tumors. Therefore, it could be concluded that malachite green had carcinogenic effects, but a direct link between malachite green and liver tumors was not established. Detection Although malachite green has almost no fluorescence in aqueous solution (quantum yield 7.9×10−5), several research groups have developed technologies to detect malachite green. For example, Zhao et al. demonstrated the use of a malachite green aptamer in microcantilever-based sensors to detect low concentrations of malachite green. References Further reading Schoettger, 1970; Smith and Heath, 1979; Gluth and Hanke, 1983. Bills et al. (1977) External links U.S. National Institutes of Health U.S. Food and Drug Administration U.K. 
Department of Health Malachite green - endospore staining technique (video) Malachite Green Dyes Triarylmethane dyes Staining dyes PH indicators Antimicrobials Aromatic amines Fish medicine Dimethylamino compounds Carbocations
Malachite green
[ "Chemistry", "Materials_science", "Biology" ]
1,693
[ "Antimicrobials", "Titration", "PH indicators", "Chromism", "Chemical tests", "Equilibrium chemistry", "Biocides" ]
1,023,294
https://en.wikipedia.org/wiki/America%27s%20Space%20Prize
America's Space Prize was a US$50 million space competition in orbital spaceflight established and funded in 2004 by hotel entrepreneur Robert Bigelow. The prize would have been awarded to the first US-based privately funded team to design and build a reusable crewed capsule capable of flying 5 astronauts to a Bigelow Aerospace inflatable space module. The criteria also required that the capsule be recovered and flown again within 60 days. The prize expired January 10, 2010, without a winner or any test flights attempted. The teams were required to have been based in the United States. History The prize was announced by Bigelow on 17 December 2003, the 100th anniversary of the Wright brothers' first powered aircraft flight. Prize rules A set of ten criteria was established for a contestant to win the prize. The spacecraft must reach a minimum altitude of 400 kilometers (approximately 250 miles); The spacecraft must reach a minimum velocity sufficient to complete two (2) full orbits at altitude before returning to Earth; The spacecraft must carry no less than a crew of five (5) people; The spacecraft must dock or demonstrate its ability to dock with a Bigelow Aerospace inflatable space habitat, and be capable of remaining on station at least six (6) months; The spacecraft must perform two (2) consecutive, safe and successful orbital missions within a period of sixty (60) calendar days, subject to Government regulations; No more than twenty percent (20%) of the spacecraft may be composed of expendable hardware; The contestant must be domiciled in the United States of America; The contestant must have its principal place of business in the United States of America; The competitor must not accept or utilize government development funding related to this contest of any kind, nor shall there be any government ownership of the competitor. Use of government test facilities shall be permitted; and The spacecraft must complete its two (2) missions safely and successfully, with all five (5) crew members aboard for the second qualifying flight, before the competition's deadline of Jan. 10, 2010. Contestants Since the launch of the prize, 40 companies had expressed interest, but either did not have the money which would apparently be needed or, in the case of SpaceX, were ineligible due to having accepted government funding. Despite the lack of viable entrants, Bigelow did not revise the prize rules, planning instead to seek transportation to space by other means. A few of the contestants were: Interorbital Systems JP Aerospace SpaceDev SpaceX See also Orteig Prize Ansari X Prize Mprize N-prize List of space technology awards References External links America's Space Prize in Encyclopedia Astronautica Exclusive: Rules Set for $50 Million 'America’s Space Prize' SPACE.COM (November 8, 2004) Space organizations Space-related awards Challenge awards Private spaceflight 2004 establishments in the United States 2010 disestablishments in the United States
America's Space Prize
[ "Astronomy", "Technology" ]
593
[ "Science and technology awards", "Space-related awards", "Astronomy organizations", "Space organizations" ]
1,023,353
https://en.wikipedia.org/wiki/Burgers%27%20equation
Burgers' equation or Bateman–Burgers equation is a fundamental partial differential equation and convection–diffusion equation occurring in various areas of applied mathematics, such as fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow. The equation was first introduced by Harry Bateman in 1915 and later studied by Johannes Martinus Burgers in 1948. For a given field $u(x,t)$ and diffusion coefficient (or kinematic viscosity, as in the original fluid mechanical context) $\nu$, the general form of Burgers' equation (also known as viscous Burgers' equation) in one space dimension is the dissipative system $\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = \nu\frac{\partial^2 u}{\partial x^2}$. The convective term $u\,\partial u/\partial x$ can also be rewritten as $\frac{\partial}{\partial x}\left(\frac{u^2}{2}\right)$. When the diffusion term is absent (i.e. $\nu = 0$), Burgers' equation becomes the inviscid Burgers' equation $\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = 0$, which is a prototype for conservation equations that can develop discontinuities (shock waves). The reason for the formation of sharp gradients for small values of $\nu$ becomes intuitively clear when one examines the left-hand side of the equation. The term $\frac{\partial}{\partial t} + u\frac{\partial}{\partial x}$ is evidently a wave operator describing a wave propagating in the positive $x$-direction with a speed $u$. Since the wave speed is $u$, regions exhibiting large values of $u$ will be propagated rightwards more quickly than regions exhibiting smaller values of $u$; in other words, if $u$ is initially decreasing in the $x$-direction, then larger values of $u$ that lie behind will catch up with smaller values of $u$ on the front side. The role of the diffusive term on the right-hand side is essentially to stop the gradient from becoming infinite. Inviscid Burgers' equation The inviscid Burgers' equation is a conservation equation, more generally a first-order quasilinear hyperbolic equation. The solution of the equation, together with the initial condition $u(x,0) = f(x)$, can be constructed by the method of characteristics. Let $t$ be the parameter characterising any given characteristic in the $x$–$t$ plane; then the characteristic equations are given by $\frac{dx}{dt} = u$ and $\frac{du}{dt} = 0$. Integration of the second equation tells us that $u$ is constant along the characteristic, and integration of the first equation shows that the characteristics are straight lines, i.e., $x = ut + \xi$, where $\xi$ is the point (or parameter) on the $x$-axis ($t = 0$) of the $x$–$t$ plane from which the characteristic curve is drawn. Since $u$ at the $x$-axis is known from the initial condition, and since $u$ is unchanged as we move along the characteristic emanating from each point $x = \xi$, we write $u = f(\xi)$ on each characteristic. Therefore, the family of trajectories of characteristics parametrized by $\xi$ is $x = f(\xi)\,t + \xi$. Thus, the solution is given by $u(x,t) = f(x - ut)$. This is an implicit relation that determines the solution of the inviscid Burgers' equation provided characteristics don't intersect. If the characteristics do intersect, then a classical solution to the PDE does not exist and leads to the formation of a shock wave. Whether characteristics can intersect or not depends on the initial condition. In fact, the breaking time before a shock wave can be formed is given by $t_b = \frac{-1}{\min_x f'(x)}$. Complete integral of the inviscid Burgers' equation The implicit solution described above containing an arbitrary function $f$ is called the general integral. However, the inviscid Burgers' equation, being a first-order partial differential equation, also has a complete integral which contains two arbitrary constants (for the two independent variables). Subrahmanyan Chandrasekhar provided the complete integral in 1943, which is given by $u(x,t) = \frac{ax + b}{at + 1}$, where $a$ and $b$ are arbitrary constants. The complete integral satisfies a linear initial condition, i.e., $f(x) = ax + b$. One can also construct the general integral using the above complete integral. 
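As a concrete illustration of the characteristic construction above, the short sketch below evaluates the implicit relation u = f(x − ut) by fixed-point iteration for a smooth initial profile and estimates the breaking time from the steepest initial slope. The Gaussian initial condition, the grid, and the iteration count are illustrative assumptions, not part of the original article.

```python
import numpy as np

def inviscid_burgers_characteristics(f, x, t, n_iter=200):
    """Evaluate u(x, t) = f(x - u t) by fixed-point iteration.

    Valid only before characteristics cross (t below the breaking time)."""
    u = f(x)  # initial guess: the profile at t = 0
    for _ in range(n_iter):
        u = f(x - u * t)  # successive substitution into the implicit relation
    return u

# Illustrative smooth initial condition f(x) = exp(-x^2)
f = lambda x: np.exp(-x**2)

# Breaking time t_b = -1 / min f'(x), estimated numerically
x_fine = np.linspace(-5, 5, 2001)
fprime = np.gradient(f(x_fine), x_fine)
t_break = -1.0 / fprime.min()
print(f"estimated breaking time: {t_break:.3f}")

# Profile steepens as t approaches the breaking time
x = np.linspace(-3, 3, 9)
for t in (0.0, 0.5 * t_break, 0.9 * t_break):
    u = inviscid_burgers_characteristics(f, x, t)
    print(f"t = {t:.3f}:", np.round(u, 3))
```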
Viscous Burgers' equation The viscous Burgers' equation can be converted to a linear equation by the Cole–Hopf transformation $u = -2\nu\,\frac{\partial}{\partial x}\ln\varphi$, which turns it into an equation that can be integrated with respect to $x$ to obtain an equation for $\varphi$ containing an arbitrary function of time. Introducing a further rescaling of $\varphi$ that absorbs this function (and which does not affect the function $u$), the required equation reduces to that of the heat equation $\frac{\partial\varphi}{\partial t} = \nu\,\frac{\partial^2\varphi}{\partial x^2}$. The diffusion equation can be solved. That is, if $\varphi(x,0) = \varphi_0(x)$, then $\varphi(x,t) = \frac{1}{\sqrt{4\pi\nu t}}\int_{-\infty}^{\infty}\varphi_0(x')\exp\left[-\frac{(x-x')^2}{4\nu t}\right]dx'$. The initial function $\varphi_0$ is related to the initial function $u(x,0) = f(x)$ by $\varphi_0(x) = \exp\left[-\frac{1}{2\nu}\int^{x} f(x')\,dx'\right]$, where the lower limit of the integral is chosen arbitrarily. Inverting the Cole–Hopf transformation, we have $u(x,t) = -2\nu\,\frac{\partial}{\partial x}\ln\varphi(x,t)$, which simplifies, by getting rid of the time-dependent prefactor in the argument of the logarithm, to $u(x,t) = -2\nu\,\frac{\partial}{\partial x}\ln\left\{\int_{-\infty}^{\infty}\exp\left[-\frac{(x-x')^2}{4\nu t} - \frac{1}{2\nu}\int^{x'} f(x'')\,dx''\right]dx'\right\}$. This solution is derived from the solution of the heat equation for $\varphi$ that decays to zero at infinity; other solutions for $u$ can be obtained starting from solutions of $\varphi$ that satisfy different boundary conditions. Some explicit solutions of the viscous Burgers' equation Explicit expressions for the viscous Burgers' equation are available. Some of the physically relevant solutions are given below: Steadily propagating traveling wave If the initial condition is such that $u \to u_+$ as $x \to -\infty$ and $u \to u_-$ as $x \to +\infty$ with $u_+ > u_-$, then we have a traveling-wave solution (with a constant speed $c = (u_+ + u_-)/2$) given by $u(x,t) = c - \frac{u_+ - u_-}{2}\tanh\left[\frac{(u_+ - u_-)(x - ct)}{4\nu}\right]$. This solution, which was originally derived by Harry Bateman in 1915, is used to describe the variation of pressure across a weak shock wave. Delta function as an initial condition If the initial condition is a Dirac delta function whose strength is characterised by a constant $Re$ (say, the Reynolds number), then an explicit solution is available. In the limit $Re \to 0$, the limiting behaviour is a diffusional spreading of a source. On the other hand, in the limit $Re \to \infty$, the solution approaches that of the aforementioned Chandrasekhar shock-wave solution of the inviscid Burgers' equation. The shock wave location and its speed can also be written down explicitly. N-wave solution The N-wave solution comprises a compression wave followed by a rarefaction wave. An explicit solution of this type can be written in terms of an initial Reynolds number and a time-varying Reynolds number. Other forms Multi-dimensional Burgers' equation In two or more dimensions, the Burgers' equation becomes a vector equation; for the vector field $\mathbf{u}$ it reads $\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u} = \nu\,\nabla^2\mathbf{u}$. Generalized Burgers' equation The generalized Burgers' equation extends the quasilinear convective term to a more general form, i.e., $\frac{\partial u}{\partial t} + c(u)\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2}$, where $c(u)$ is any arbitrary function of $u$. The inviscid equation ($\nu = 0$) is still a quasilinear hyperbolic equation and its solution can be constructed using the method of characteristics as before. Stochastic Burgers' equation Adding space-time noise $\dot{W}(x,t)$, where $W$ is a Wiener process, forms a stochastic Burgers' equation. This stochastic PDE is the one-dimensional version of the Kardar–Parisi–Zhang equation for a field $h(x,t)$ upon substituting $u(x,t) = -\partial h/\partial x$. See also Chaplygin's equation Conservation equation Euler–Tricomi equation Fokker–Planck equation KdV-Burgers equation References External links Burgers' Equation at EqWorld: The World of Mathematical Equations. Burgers' Equation at NEQwiki, the nonlinear equations encyclopedia. Conservation equations Equations of fluid dynamics Fluid dynamics
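As a numerical complement to the viscous equation discussed above, the sketch below integrates u_t + u u_x = ν u_xx directly with a basic explicit scheme (conservative upwind convection, centered diffusion) rather than going through the Cole–Hopf transformation. The periodic sinusoidal initial condition, grid resolution, viscosity value, and time-step rule are illustrative assumptions only.

```python
import numpy as np

def step_viscous_burgers(u, dx, dt, nu):
    """One explicit time step of u_t + u u_x = nu u_xx on a periodic grid.

    Convection is discretized in conservative form d/dx(u^2/2) with a
    first-order upwind difference; diffusion with a centered difference."""
    flux = 0.5 * u**2
    # upwind flux difference based on the local sign of u
    dflux = np.where(u > 0,
                     flux - np.roll(flux, 1),     # backward difference
                     np.roll(flux, -1) - flux)    # forward difference
    diff = np.roll(u, -1) - 2 * u + np.roll(u, 1)
    return u - dt / dx * dflux + nu * dt / dx**2 * diff

# Illustrative setup: periodic domain [0, 2*pi), sinusoidal initial data
N, nu = 400, 0.05
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)
dt = 0.4 * min(dx / np.abs(u).max(), dx**2 / (2 * nu))  # crude stability bound

t, t_end = 0.0, 1.5
while t < t_end:
    u = step_viscous_burgers(u, dx, dt, nu)
    t += dt

print("max |u| after t = 1.5:", np.abs(u).max())  # amplitude decays due to viscosity
```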
Burgers' equation
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
1,372
[ "Equations of fluid dynamics", "Equations of physics", "Chemical engineering", "Conservation laws", "Mathematical objects", "Equations", "Piping", "Fluid dynamics", "Conservation equations", "Symmetry", "Physics theorems" ]
1,023,378
https://en.wikipedia.org/wiki/Albite
Albite is a plagioclase feldspar mineral. It is the sodium endmember of the plagioclase solid solution series. It represents a plagioclase with less than 10% anorthite content. The pure albite endmember has the formula NaAlSi3O8. It is a tectosilicate. Its color is usually pure white, hence its name, from the Latin albus ("white"). It is a common constituent in felsic rocks. Properties Albite crystallizes with triclinic pinacoidal forms. Its specific gravity is about 2.62 and it has a Mohs hardness of 6 to 6.5. Albite almost always exhibits crystal twinning, often as minute parallel striations on the crystal face. Albite often occurs as fine parallel segregations alternating with pink microcline in perthite, as a result of exsolution on cooling. There are two variants of albite, which are referred to as 'low albite' and 'high albite'; the latter is also known as 'analbite'. Although both variants are triclinic, they differ in the volume of their unit cell, which is slightly larger for the 'high' form. The 'high' form can be produced from the 'low' form by heating above the transition temperature. High albite can be found in meteor impact craters such as in Winslow, Arizona. Upon further heating, the crystal symmetry changes from triclinic to monoclinic; this variant is also known as 'monalbite'. Potassium can replace the sodium in albite in amounts of up to about 10%; when this is exceeded, the mineral is considered to be anorthoclase. Occurrence It occurs in granitic and pegmatite masses (often as the variety cleavelandite), in some hydrothermal vein deposits, and forms part of the typical greenschist metamorphic facies for rocks of originally basaltic composition. Minerals commonly associated with albite include biotite, hornblende, orthoclase, muscovite and quartz. Discovery Albite was first reported in 1815 for an occurrence in Finnbo, Falun, Dalarna, Sweden. Use Albite is used as a gemstone, albeit semiprecious. Albite is also important to geologists as a rock-forming mineral. There is some industrial use of the mineral, such as in the manufacture of glass and ceramics. One of the iridescent varieties of albite, discovered in 1925 near the White Sea coast by academician Alexander Fersman, became widely known under the trade name belomorite. References External links Mineral galleries Tectosilicates Feldspar Triclinic minerals Luminescent minerals Gemstones Minerals in space group 2
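Because albite is defined by its position in the plagioclase series (less than 10% anorthite content), the conventional anorthite-content boundaries can be written down explicitly. The following sketch encodes the usual 10/30/50/70/90 divisions for illustration; it is not drawn from the article itself.

```python
def classify_plagioclase(anorthite_percent: float) -> str:
    """Return the conventional plagioclase series member for a given
    anorthite (An) content, using the usual 10/30/50/70/90 boundaries."""
    if not 0 <= anorthite_percent <= 100:
        raise ValueError("anorthite content must be between 0 and 100%")
    boundaries = [
        (10, "albite"),
        (30, "oligoclase"),
        (50, "andesine"),
        (70, "labradorite"),
        (90, "bytownite"),
        (100, "anorthite"),
    ]
    for upper, name in boundaries:
        if anorthite_percent < upper or upper == 100:
            return name

print(classify_plagioclase(5))    # albite
print(classify_plagioclase(45))   # andesine
```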
Albite
[ "Physics", "Chemistry" ]
596
[ "Luminescence", "Luminescent minerals", "Materials", "Gemstones", "Matter" ]
1,023,388
https://en.wikipedia.org/wiki/Eridanus%20%28constellation%29
Eridanus is a constellation which stretches along the southern celestial hemisphere. It is represented as a river. One of the 48 constellations listed by the 2nd century AD astronomer Ptolemy, it remains one of the 88 modern constellations. It is the sixth largest of the modern constellations. The same name was later taken as a Latin name for the real Po River and also for the name of a minor river in Athens. Features Stars At its southern end is the magnitude 0.5 star Achernar, designated Alpha Eridani. It is a blue-white hued main sequence star 144 light-years from Earth, whose traditional name means "the river's end". Achernar is a very peculiar star because it is one of the flattest stars known. Observations indicate that its radius is about 50% larger at the equator than at the poles. This distortion occurs because the star is spinning extremely rapidly. There are several other noteworthy stars in Eridanus, including some double stars. Beta Eridani, traditionally called Cursa, is a blue-white star of magnitude 2.8, 89 light-years from Earth. Its place to the south of Orion's foot gives it its name, which means "the footstool". Theta Eridani, called Acamar, is a binary star with blue-white components, distinguishable in small amateur telescopes and 161 light-years from Earth. The primary is of magnitude 3.2 and the secondary is of magnitude 4.3. 32 Eridani is a binary star 290 light-years from Earth. The primary is a yellow-hued star of magnitude 4.8 and the secondary is a blue-green star of magnitude 6.1. 32 Eridani is visible in small amateur telescopes. 39 Eridani is a binary star also divisible in small amateur telescopes, 206 light-years from Earth. The primary is an orange-hued giant star of magnitude 4.9 and the secondary is of magnitude 8. 40 Eridani is a triple star system consisting of an orange main-sequence star, a white dwarf, and a red dwarf. The orange main-sequence star is the primary of magnitude 4.4, and the white secondary of magnitude 9.5 is the most easily visible white dwarf. The red dwarf, of magnitude 11, orbits the white dwarf every 250 years. The 40 Eridani system is 16 light-years from Earth. p Eridani is a binary star with two orange components, 27 light-years from Earth. The magnitude 5.8 primary and 5.9 secondary have an orbital period of 500 years. Epsilon Eridani (the proper name is Ran) is a star with one extrasolar planet similar to Jupiter. It is an orange-hued main-sequence star of magnitude 3.7, 10.5 light-years from Earth. Its one planet, with an approximate mass of one Jupiter mass, has a period of 7 years. Supervoid The Eridanus Supervoid is a large supervoid (an area of the universe devoid of galaxies) discovered . At a diameter of about one billion light years it is the second largest known void, superseded only by the Giant Void in Canes Venatici. It was discovered by linking a "cold spot" in the cosmic microwave background to an absence of radio galaxies in data of the United States National Radio Astronomy Observatory's Very Large Array Sky Survey. There is some speculation that the void may be due to quantum entanglement between our universe and another. Deep-sky objects NGC 1535 is a small blue-gray planetary nebula visible in small amateur telescopes, with a disk visible in large amateur instruments. 2000 light-years away, it is of the 9th magnitude. A portion of the Orion Molecular Cloud Complex can be found in the far northeastern section of Eridanus. 
IC 2118 is a faint reflection nebula believed to be an ancient supernova remnant or gas cloud illuminated by the nearby supergiant star Rigel in Orion. Eridanus contains the galaxies NGC 1232, NGC 1234, NGC 1291 and NGC 1300, a grand design barred spiral galaxy. NGC 1300 is a face-on barred spiral galaxy located 61 (plus or minus 8) million light-years away. The center of the bar shows an unusual structure: within the overall spiral structure there is a grand-design spiral about 3,300 light-years in diameter. Its spiral arms are tightly wound. Meteor showers The Nu Eridanids, a recently discovered meteor shower, radiate from the constellation between August 30 and September 12 every year; the shower's parent body is an unidentified Oort cloud object. Another meteor shower in Eridanus is the Omicron Eridanids, which peak between November 1 and 10. Visualizations Eridanus is depicted in ancient sky charts as a flowing river, starting from Orion and flowing in a meandering fashion past Cetus and Fornax and into the southern hemispheric stars. Johann Bayer's Uranometria likewise depicts the constellation as a flowing river. History and mythology According to one theory, the Greek constellation takes its name from the Babylonian constellation known as the Star of Eridu (MUL.NUN.KI). Eridu was an ancient city in the extreme south of Babylonia; situated in the marshy regions, it was held sacred to the god Enki-Ea who ruled the cosmic domain of the Abyss, a mythical conception of the fresh-water reservoir below the Earth's surface. Eridanus is connected to the myth of Phaethon, who took over the reins of his father Helios' sky chariot (i.e., the Sun), but did not have the strength to control it and so veered wildly in different directions, scorching both Earth and heaven. Zeus intervened by striking Phaethon dead with a thunderbolt and casting him to Earth. The constellation was supposed to be the path Phaethon drove along; in later times, it was considered a path of souls. Since Eridanos was also a Greek name for the Po (Latin Padus), in which the burning body of Phaethon is said by Ovid to have been extinguished, the mythic geography of the celestial and earthly Eridanus is complex. Another association with Eridanus is a series of rivers all around the world. First conflated with the Nile River in Egypt, the constellation was also identified with the Po River in Italy. The stars of the modern constellation Fornax were formerly a part of Eridanus. Equivalents The stars that correspond to Eridanus are also depicted as a river in Indian astronomy starting close to the head of Orion just below Auriga. Eridanus is called Srotaswini in Sanskrit, srótas meaning the course of a river or stream. Specifically, it is depicted as the Ganges on the head of Dakshinamoorthy or Nataraja, a Hindu incarnation of Shiva. Dakshinamoorthy himself is represented by the constellation Orion. The stars that correspond to Eridanus cannot be fully seen from China. In Chinese astronomy, the northern part is located within the White Tiger of the West (西方白虎, Xī Fāng Bái Hǔ). The unseen southern part was classified among the Southern Asterisms (近南極星區, Jìnnánjíxīngōu) by Xu Guangqi, based on knowledge of western star charts. Namesakes USS Eridanus (AK-92) was a United States Navy Crater-class cargo ship named after the constellation. was a French cargo liner named after the constellation. 
See also Eridanus (Chinese astronomy) List of brightest stars Citations References Star Names, Their Lore and Meaning, Richard Hinckley Allen, New York City, Dover, various dates External links The Deep Photographic Guide to the Constellations: Eridanus The clickable Eridanus Epsilon Eridani New 'Vulcan' Planet Tantalizes Astronomers Starry Night Photography - Eridanus Constellation Ian Ridpath's Star Tales – Eridanus Warburg Institute Iconographic Database (medieval and early modern images of Eridanus) Constellations Constellations listed by Ptolemy Equatorial constellations
Eridanus (constellation)
[ "Astronomy" ]
1,686
[ "Constellations listed by Ptolemy", "Constellations", "Sky regions", "Equatorial constellations", "Eridanus (constellation)" ]
1,023,390
https://en.wikipedia.org/wiki/Elastance
Electrical elastance is the reciprocal of capacitance. The SI unit of elastance is the inverse farad (F−1). The concept is not widely used by electrical and electronic engineers, as the value of capacitors is typically specified in units of capacitance rather than inverse capacitance. However, elastance is used in theoretical work in network analysis and has some niche applications, particularly at microwave frequencies. The term elastance was coined by Oliver Heaviside through the analogy of a capacitor to a spring. The term is also used for analogous quantities in other energy domains. In the mechanical domain, it corresponds to stiffness, and it is the inverse of compliance in the fluid flow domain, especially in physiology. It is also the name of the generalized quantity in bond-graph analysis and other schemes that analyze systems across multiple domains. Usage The definition of capacitance (C) is the charge (Q) stored per unit voltage (V). Elastance (S) is the reciprocal of capacitance, thus $S = \frac{V}{Q} = \frac{1}{C}$. Expressing the values of capacitors as elastance is not commonly done by practical electrical engineers, but can be convenient for capacitors in series since their total elastance is simply the sum of their individual elastances. However, elastance is sometimes used by network theorists in their analyses. One advantage of using elastance is that an increase in elastance results in an increase in impedance, aligning with the behavior of the other two basic passive elements, resistance and inductance. An example of the use of elastance can be found in the 1926 doctoral thesis of Wilhelm Cauer. On his path to founding network synthesis, he developed the loop matrix A, $\mathbf{A} = s\mathbf{L} + \mathbf{R} + \frac{1}{s}\mathbf{S} = \mathbf{Z}(s)$, where L, R, S, and Z are the network loop matrices of inductance, resistance, elastance, and impedance, respectively, and s is the complex frequency. This expression would be significantly more complicated if Cauer had used a matrix of capacitances instead of elastances. The use of elastance here is primarily for mathematical convenience, similar to how mathematicians use radians rather than more common units for angles. Elastance is also applied in microwave engineering. In this field, varactor diodes are used as voltage-variable capacitors in devices such as frequency multipliers, parametric amplifiers, and variable filters. These diodes store charge in their junction when reverse biased, which produces the capacitance effect. The slope of the voltage–stored charge curve in this context is referred to as differential elastance. Units The SI unit of elastance is the reciprocal of the farad (F−1). The term daraf is sometimes used for this unit, but it is not approved by the SI and its use is discouraged. The term daraf is formed by reversing the word farad, in much the same way as the unit mho (a unit of conductance, also not approved by the SI) is formed by writing ohm backwards. The term daraf was coined by Arthur E. Kennelly, who used it as early as 1920. History The terms elastance and elastivity were coined by Oliver Heaviside in 1886. Heaviside coined many of the terms used in circuit analysis today, such as impedance, inductance, admittance, and conductance. His terminology followed the model of resistance and resistivity, with the -ance ending used for extensive properties and the -ivity ending used for intensive properties. Extensive properties are used in circuit analysis (they represent the "values" of components), while intensive properties are used in field analysis. 
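The additive behaviour of series elastances noted above is easy to demonstrate numerically: the total elastance of capacitors in series is just the sum of the individual elastances, which is another way of writing the familiar reciprocal rule for series capacitance. The capacitor values in this sketch are arbitrary illustrative numbers.

```python
def series_elastance(capacitances_farads):
    """Total elastance (in 1/F) of capacitors connected in series.

    Elastance S = 1/C, and series elastances simply add."""
    return sum(1.0 / c for c in capacitances_farads)

def series_capacitance(capacitances_farads):
    """Equivalent series capacitance, computed via the total elastance."""
    return 1.0 / series_elastance(capacitances_farads)

# Three arbitrary example capacitors: 1 uF, 2.2 uF, 4.7 uF
caps = [1e-6, 2.2e-6, 4.7e-6]
S_total = series_elastance(caps)      # about 1.667e6 F^-1
C_total = series_capacitance(caps)    # about 0.60 uF
print(f"total elastance: {S_total:.3e} F^-1")
print(f"equivalent capacitance: {C_total * 1e6:.3f} uF")
```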
Heaviside's nomenclature was designed to emphasize the connection between corresponding quantities in fields and circuits. Elastivity is the intensive property of a material, corresponding to the bulk property of a component, elastance. It is the reciprocal of permittivity. As Heaviside stated, Here, permittance is Heaviside's term for capacitance. He rejected any terminology that implied a capacitor acted as a container for holding charge. He opposed the terms capacity (capacitance) and capacious (capacitive) along with their inverses, incapacity and incapacious. At the time, the capacitor was often referred to as a condenser (suggesting that the "electric fluid" could be condensed), or as a leyden, after the Leyden jar, an early capacitor, both implying storage. Heaviside preferred a mechanical analogy, viewing the capacitor as a compressed spring, which led to his preference for terms suggesting properties of a spring. Heaviside's views followed James Clerk Maxwell's perspective on electric current, or at least Heaviside's interpretation of it. According to this view, electric current is analogous to velocity, driven by the electromotive force, similar to a mechanical force. At a capacitor, current creates a "displacement" whose rate of change is equivalent to the current. This displacement was seen as an electric strain, like mechanical strain in a compressed spring. Heaviside denied the idea of physical charge flow and accumulation on capacitor plates, replacing it with the concept of the divergence of the displacement field at the plates, which was numerically equal to the charge collected in the flow view. In the late 19th and early 20th centuries, some authors adopted Heaviside's terms elastance and elastivity. Today, however, the reciprocal terms capacitance and permittivity are almost universally preferred by electrical engineers. Despite this, elastance still sees occasional use in theoretical work. One of Heaviside's motivations for choosing these terms was to distinguish them from mechanical terms. Thus, he selected elastivity rather than elasticity to avoid the need to clarify between electrical elasticity and mechanical elasticity. Heaviside carefully crafted his terminology to be unique to electromagnetism, specifically avoiding overlaps with mechanics. Ironically, many of his terms were later borrowed back into mechanics and other domains to describe analogous properties. For example, it is now necessary to differentiate electrical impedance from mechanical impedance in some contexts. Elastance has also been used by some authors in mechanics to describe the analogous quantity, though stiffness is often preferred. However, elastance is widely used for the analogous property in the domain of fluid dynamics, particularly in fields such as biomedicine and physiology. Mechanical analogy Mechanical–electrical analogies are established by comparing the mathematical descriptions of mechanical and electrical systems. Quantities that occupy corresponding positions in equations of the same form are referred to as analogues. There are two main reasons for creating such analogies. The first reason is to explain electrical phenomena in terms of more familiar mechanical systems. For example, the differential equations governing an electrical RLC circuit (inductor-capacitor-resistor circuit) are of the same form as those governing a mechanical mass-spring-damper system. In such cases, the electrical domain is translated into the mechanical domain for easier understanding. 
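As a rough illustration of the correspondence just described, the sketch below integrates the same second-order equation twice: once read as a series RLC circuit driven by a step voltage (with elastance S = 1/C playing the role of stiffness) and once read as a mass-spring-damper driven by a step force. The component values, step sizes, and the simple semi-implicit Euler integrator are illustrative assumptions, not anything prescribed by the article.

```python
def simulate_second_order(m, b, k, forcing, x0=0.0, v0=0.0, dt=1e-4, t_end=0.05):
    """Integrate m*x'' + b*x' + k*x = forcing(t) with semi-implicit Euler steps.

    Under the impedance analogy the same routine serves the electrical case
    L*q'' + R*q' + S*q = V(t), with elastance S = 1/C in the role of stiffness k."""
    steps = int(t_end / dt)
    x, v = x0, v0
    for n in range(steps):
        a = (forcing(n * dt) - b * v - k * x) / m
        v += a * dt
        x += v * dt
    return x

V_step = lambda t: 1.0  # unit step drive (volts in one reading, newtons in the other)

# Electrical reading: illustrative series RLC, elastance S = 1/C
L, R, C = 0.1, 5.0, 1e-3
S = 1.0 / C
q_final = simulate_second_order(L, R, S, V_step)

# Mechanical reading: mass-spring-damper with numerically identical coefficients
m, b, k = 0.1, 5.0, 1.0 / 1e-3
x_final = simulate_second_order(m, b, k, V_step)

print(q_final, x_final)  # identical trajectories, as the analogy predicts
```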
The second, and more significant, reason is to analyze systems containing both mechanical and electrical components as a unified whole. This approach is especially beneficial in fields like mechatronics and robotics, where integration of mechanical and electrical elements is common. In these cases, the mechanical domain is often converted into the electrical domain because network analysis in the electrical domain is more advanced and highly developed. The Maxwellian analogy In the analogy developed by Maxwell, now known as the impedance analogy, voltage is analogous to force. The term "electromotive force" used for the voltage of an electric power source reflects this analogy. In this framework, current is analogous to velocity. Since the time derivative of displacement (distance) is equal to velocity and the time derivative of momentum equals force, quantities in other energy domains with similar differential relationships are referred to as generalized displacement, generalized velocity, generalized momentum, and generalized force. In the electrical domain, the generalized displacement is charge, which explains the Maxwellians' use of the term displacement. Since elastance is defined as the ratio of voltage to charge, its analogue in other energy domains is the ratio of a generalized force to a generalized displacement. Therefore, elastance can be defined in any energy domain. The term elastance is used in the formal analysis of systems involving multiple energy domains, such as in bond graphs. Other analogies Maxwell's analogy is not the only method for constructing analogies between mechanical and electrical systems. There are multiple ways to create such analogies. One commonly used system is the mobility analogy. In this analogy, force is mapped to current rather than voltage. As a result, electrical impedance no longer corresponds directly to mechanical impedance, and similarly, electrical elastance no longer corresponds to mechanical elastance. See also Compliance (physiology) Elasticity (physics) References Bibliography Blake, F. C., "On electrostatic transformers and coupling coefficients", Journal of the American Institute of Electrical Engineers, vol.  40, no. 1, pp. 23–29, January 1921 Borutzky, Wolfgang, Bond Graph Methodology, Springer, 2009 . Busch-Vishniac, Ilene J., Electromechanical Sensors and Actuators, Springer Science & Business Media, 1999 . Camara, John A., Electrical and Electronics Reference Manual for the Electrical and Computer PE Exam, Professional Publications, 2010 . Cauer, E.; Mathis, W.; Pauli, R., "Life and Work of Wilhelm Cauer (1900 – 1945)", Proceedings of the Fourteenth International Symposium of Mathematical Theory of Networks and Systems (MTNS2000), Perpignan, June, 2000. Enderle, John; Bronzino, Joseph, Introduction to Biomedical Engineering, Academic Press, 2011 . Fuchs, Hans U., The Dynamics of Heat: A Unified Approach to Thermodynamics and Heat Transfer, Springer Science & Business Media, 2010 . Gupta, S. C., Thermodynamics, Pearson Education India, 2005 . Heaviside, Oliver, Electromagnetic Theory: Volume I, Cosimo, 2007 (first published 1893). Hillert, Mats, Phase Equilibria, Phase Diagrams and Phase Transformations, Cambridge University Press, 2007 . Horowitz, Isaac M., Synthesis of Feedback Systems, Elsevier, 2013 . Howe, G. W. O., "The nomenclature of the fundamental concepts of electrical engineering", Journal of the Institution of Electrical Engineers, vol.  70, no.  420, pp. 54–61, December 1931. Jerrard, H. 
G., A Dictionary of Scientific Units, Springer, 2013 . Kennelly, Arthur E.; Kurokawa, K., "Acoustic impedance and its measurement", Proceedings of the American Academy of Arts and Sciences, vol.  56, no.  1, pp. 3–42, 1921. Klein, H. Arthur, The Science of Measurement: A Historical Survey, Courier Corporation, 1974 . Miles, Robert; Harrison, P.; Lippens, D., Terahertz Sources and Systems, Springer, 2012 . Mills, Jeffrey P., Electro-magnetic Interference Reduction in Electronic Systems, PTR Prentice Hall, 1993 . Mitchell, John Howard, Writing for Professional and Technical Journals, Wiley, 1968 Peek, Frank William, Dielectric Phenomena in High Voltage Engineering, Watchmaker Publishing, 1915 (reprint) . Regtien, Paul P. L., Sensors for Mechatronics, Elsevier, 2012 . van der Tweel, L. H.; Verburg, J., "Physical concepts", in Reneman, Robert S.; Strackee, J., Data in Medicine: Collection, Processing and Presentation, Springer Science & Business Media, 2012 . Tschoegl, Nicholas W., The Phenomenological Theory of Linear Viscoelastic Behavior, Springer, 2012 . Vieil, Eric, Understanding Physics and Physical Chemistry Using Formal Graphs, CRC Press, 2012 Yavetz, Ido, From Obscurity to Enigma: The Work of Oliver Heaviside, 1872–1889, Springer, 2011 . Electrostatics Physical quantities Electromagnetism Capacitance Electromagnetic quantities ca:Elastància (electricitat)
Elastance
[ "Physics", "Mathematics" ]
2,553
[ "Physical phenomena", "Electromagnetism", "Electromagnetic quantities", "Physical quantities", "Quantity", "Fundamental interactions", "Capacitance", "Voltage", "Wikipedia categories named after physical quantities", "Physical properties" ]
1,023,396
https://en.wikipedia.org/wiki/Tremolite
Tremolite is a member of the amphibole group of silicate minerals with composition Ca2(Mg5.0-4.5Fe2+0.0-0.5)Si8O22(OH)2. Tremolite forms by metamorphism of sediments rich in dolomite and quartz, and occurs in two distinct forms, crystals and fibers. Tremolite forms a series with actinolite and ferro-actinolite. Pure magnesium tremolite is creamy white, but the color grades to dark green with increasing iron content. It has a hardness on Mohs scale of 5 to 6. Nephrite, one of the two minerals known as the gemstone jade, is a green crystalline variety of tremolite. The fibrous form of tremolite is one of the six recognised types of asbestos. This material is toxic, and inhaling the fibers can lead to asbestosis, lung cancer and both pleural and peritoneal mesothelioma. Fibrous tremolite is sometimes found as a contaminant in vermiculite, chrysotile (itself a type of asbestos) and talc. Occurrence Tremolite is an indicator of metamorphic grade since at high temperatures it converts to diopside. Tremolite occurs as a result of contact metamorphism of calcium and magnesium rich siliceous sedimentary rocks and in greenschist facies metamorphic rocks derived from ultramafic or magnesium carbonate bearing rocks. Associated minerals include calcite, dolomite, grossular, wollastonite, talc, diopside, forsterite, cummingtonite, riebeckite and winchite. Tremolite was first described in 1789 for an occurrence in Campolungo, Piumogna Valley, Leventina, Ticino (Tessin), Switzerland. Fibrous tremolite One of the six recognized types of asbestos, approximately 40,200 tons of tremolite asbestos is mined annually in India. It is otherwise found as a contaminant. See also Libby, Montana – location of asbestos-related ailments caused by tremolite References Mineral may unlock secrets of Venus's ancient oceans, New Scientist, 10 October 2007 Calcium minerals Magnesium minerals Asbestos Amphibole group Monoclinic minerals Minerals in space group 12 Luminescent minerals
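The tremolite–actinolite–ferro-actinolite series mentioned above is conventionally divided by the Mg/(Mg + Fe2+) ratio, with tremolite at 0.9 and above and ferro-actinolite below 0.5. The small sketch below encodes that convention for illustration; the boundary values follow standard amphibole nomenclature rather than anything stated in this article.

```python
def classify_calcic_amphibole(mg: float, fe2: float) -> str:
    """Classify along the tremolite-actinolite-ferro-actinolite series
    using the Mg/(Mg + Fe2+) ratio with the conventional 0.9 and 0.5 cutoffs."""
    if mg < 0 or fe2 < 0 or (mg + fe2) == 0:
        raise ValueError("site occupancies must be non-negative and not both zero")
    ratio = mg / (mg + fe2)
    if ratio >= 0.9:
        return "tremolite"
    if ratio >= 0.5:
        return "actinolite"
    return "ferro-actinolite"

# Example: composition Ca2(Mg4.8 Fe0.2)Si8O22(OH)2 gives a ratio of 0.96
print(classify_calcic_amphibole(4.8, 0.2))  # tremolite
```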
Tremolite
[ "Chemistry", "Environmental_science" ]
493
[ "Luminescence", "Toxicology", "Asbestos", "Luminescent minerals" ]